JIT options and visualization using Pandas

Author: Jørgen S. Dokken

In this chapter, we will explore how to optimize and inspect the integration kernels used in DOLFINx. As we have seen in the previous demos, DOLFINx uses the Unified Form Language (UFL) to describe variational problems.

These descriptions have to be translated into code that assembles the left- and right-hand sides of the discrete variational problem.

DOLFINx uses ffcx to generate efficient C code for assembling the element matrices. This C code is in turn compiled using CFFI, and we can specify a variety of compilation options.

We start by specifying the current directory as the location for the generated C files, obtaining the current directory with pathlib.

import matplotlib.pyplot as plt
import pandas as pd
import seaborn
import time
import ufl

from ufl import TestFunction, TrialFunction, dx, inner
from dolfinx.mesh import create_unit_cube
from dolfinx.fem.petsc import assemble_matrix
from dolfinx.fem import functionspace, form

from mpi4py import MPI
from pathlib import Path
from typing import Dict

cache_dir = f"{str(Path.cwd())}/.cache"
print(f"Directory to put C files in: {cache_dir}")
Directory to put C files in: /__w/dolfinx-tutorial/dolfinx-tutorial/chapter4/.cache

Next we define a general function that assembles the mass matrix for a unit cube. Note that we use dolfinx.fem.form to compile the variational form. For code using dolfinx.fem.petsc.LinearProblem, you can supply jit_options as a keyword argument (see the sketch after the function below).

def compile_form(space: str, degree: int, jit_options: Dict):
    N = 10
    mesh = create_unit_cube(MPI.COMM_WORLD, N, N, N)
    V = functionspace(mesh, (space, degree))
    u = TrialFunction(V)
    v = TestFunction(V)
    a = inner(u, v) * dx
    a_compiled = form(a, jit_options=jit_options)
    # Time only the assembly of the mass matrix (the form has already been compiled above)
    start = time.perf_counter()
    assemble_matrix(a_compiled)
    end = time.perf_counter()
    return end - start
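As mentioned above, if you solve your problem with dolfinx.fem.petsc.LinearProblem instead of compiling the forms by hand, the same options can be forwarded through the jit_options keyword argument. A minimal sketch, assuming a bilinear form a, a linear form L and a list of boundary conditions bcs are already defined:

from dolfinx.fem.petsc import LinearProblem

# Forward the JIT options to the forms compiled internally by LinearProblem
# (a, L and bcs are assumed to be defined elsewhere)
problem = LinearProblem(
    a, L, bcs=bcs,
    jit_options={"cffi_extra_compile_args": ["-Ofast", "-march=native"],
                 "cache_dir": cache_dir, "cffi_libraries": ["m"]})
uh = problem.solve()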

We start by considering the different optimization levels the C compiler can apply to the generated code. A list of optimization options and explanations can be found in the GCC documentation.

optimization_options = ["-O1", "-O2", "-O3", "-Ofast"]

The next choice is whether or not to compile the code with -march=native. This option enables instruction sets specific to the local machine, and can therefore give different results on different systems. More information can be found in the GCC documentation.

march_native = [True, False]

We choose a subset of finite element spaces and vary the polynomial degree to see how it affects the assembly time under the different compile options.

results = {"Space": [], "Degree": [], "Options": [], "Time": []}
for space in ["N1curl", "Lagrange", "RT"]:
    for degree in [1, 2, 3]:
        for native in march_native:
            for option in optimization_options:
                if native:
                    cffi_options = [option, "-march=native"]
                else:
                    cffi_options = [option]
                jit_options = {"cffi_extra_compile_args": cffi_options,
                               "cache_dir": cache_dir, "cffi_libraries": ["m"]}
                runtime = compile_form(space, degree, jit_options=jit_options)
                results["Space"].append(space)
                results["Degree"].append(str(degree))
                results["Options"].append("\n".join(cffi_options))
                results["Time"].append(runtime)

We have now stored all the results in a dictionary. To visualize them, we use pandas and its DataFrame class. We can inspect the data in a Jupyter notebook as follows

results_df = pd.DataFrame.from_dict(results)
results_df
     Space Degree                Options      Time
0   N1curl      1     -O1\n-march=native  0.016457
1   N1curl      1     -O2\n-march=native  0.013253
2   N1curl      1     -O3\n-march=native  0.013492
3   N1curl      1  -Ofast\n-march=native  0.012536
4   N1curl      1                    -O1  0.015016
..     ...    ...                    ...       ...
67      RT      3  -Ofast\n-march=native  0.281418
68      RT      3                    -O1  0.687937
69      RT      3                    -O2  0.508438
70      RT      3                    -O3  0.503097
71      RT      3                 -Ofast  0.375934

72 rows × 4 columns
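The raw table is long, so before plotting it can be convenient to let pandas summarize it. As an optional extra step (not part of the original timings), one could pivot the data so that each set of compile options becomes a column:

# Optional summary: one row per (Space, Degree), one column per set of compile options
summary = results_df.pivot_table(index=["Space", "Degree"], columns="Options", values="Time")
print(summary.round(4))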

We can now make a plot for each element type to see the variation across the different compile options. We create a new column, Element, combining the element type and degree.

seaborn.set(style="ticks")
seaborn.set(font_scale=1.2)
seaborn.set_style("darkgrid")
results_df["Element"] = results_df["Space"] + " " + results_df["Degree"]
elements = sorted(set(results_df["Element"]))
for element in elements:
    df_e = results_df[results_df["Element"] == element]
    g = seaborn.catplot(x="Options", y="Time", kind="bar", data=df_e, col="Element")
    g.fig.set_size_inches(16, 4)
(Figures: bar plots of assembly time for each set of compile options, one per element type and degree.)
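If you run this as a plain Python script rather than in a notebook, the figures are not shown inline. A small, optional addition at the end of the loop above saves each figure to disk instead (the file names are hypothetical):

    # Inside the loop: save each catplot to a PNG named after the element
    g.fig.savefig(f"{element}.png")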

We observe that the assembly time increases with the degree of the function space, and that we get the largest speed-up by using “-O3” or “-Ofast” combined with “-march=native”.
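One way to back this observation up numerically (an optional check, not part of the original notebook) is to pick out, for each element, the set of options with the smallest assembly time:

# For each element, select the row with the smallest assembly time
fastest = results_df.loc[results_df.groupby("Element")["Time"].idxmin()]
print(fastest[["Element", "Options", "Time"]])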