JIT options and visualization using Pandas#

Author: Jørgen S. Dokken

In this chapter, we will explore how to optimize and inspect the integration kernels used in DOLFINx. As we have seen in the previous demos, DOLFINx uses the Unified Form Language (UFL) to describe variational problems.

These descriptions have to be translated into code that assembles the right and left hand sides of the discrete variational problem.

DOLFINx uses ffcx to generate efficient C code that assembles the element matrices. This C code is in turn compiled using CFFI, and we can specify a variety of compile options.

We start by specifying the current directory as the location for the generated C files, which we obtain using pathlib

from pathlib import Path
cache_dir = f"{str(Path.cwd())}/.cache"
print(f"Directory to put C files in: {cache_dir}")
Directory to put C files in: /__w/dolfinx-tutorial/dolfinx-tutorial/chapter4/.cache
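As a side note, the cache directory can be inspected directly to see what ffcx and CFFI put there. A minimal pathlib sketch; the `libffcx_forms_*` glob pattern is an assumption about the generated file names:

```python
from pathlib import Path

cache_dir = Path.cwd() / ".cache"
# Create the cache directory if it does not exist yet
cache_dir.mkdir(parents=True, exist_ok=True)

# List any generated sources/libraries; the naming pattern is an assumption
generated = sorted(p.name for p in cache_dir.glob("libffcx_forms_*"))
print(f"{len(generated)} generated files in {cache_dir}")
```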

Next we define a general function that assembles the mass matrix for a unit cube. Note that we use dolfinx.fem.form to compile the variational form. For code using dolfinx.fem.petsc.LinearProblem, you can supply jit_options as a keyword argument.

from dolfinx.fem import FunctionSpace, form
from dolfinx.fem.petsc import assemble_matrix
from dolfinx.mesh import create_unit_cube
from ufl import TestFunction, TrialFunction, dx, inner
from mpi4py import MPI
from typing import Dict
import time
import ufl

def compile_form(space: str, degree: int, jit_options: Dict):
    N = 10
    mesh = create_unit_cube(MPI.COMM_WORLD, N, N, N)
    V = FunctionSpace(mesh, (space, degree))
    u = TrialFunction(V)
    v = TestFunction(V)
    a = inner(u, v) * dx
    a_compiled = form(a, jit_options=jit_options)
    # Time the assembly of the mass matrix with the compiled kernel
    start = time.perf_counter()
    A = assemble_matrix(a_compiled)
    A.assemble()
    end = time.perf_counter()
    return end - start

We start by considering the different levels of optimization the C compiler can apply to the generated code. A list of optimization options and explanations can be found here

optimization_options = ["-O1", "-O2", "-O3", "-Ofast"]

The next option we can choose is whether to compile the code with -march=native. This flag enables instructions tuned to the local machine, and can therefore give different results on different systems. More information can be found here

march_native = [True, False]
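Taken together, the two choices above give eight flag combinations. As a quick sanity check, the jit_options dictionaries can be enumerated with the standard library alone (a sketch; the cache_dir entry is omitted here for brevity):

```python
from itertools import product

optimization_options = ["-O1", "-O2", "-O3", "-Ofast"]
march_native = [True, False]

all_jit_options = []
for option, native in product(optimization_options, march_native):
    # Append -march=native only when requested
    flags = [option, "-march=native"] if native else [option]
    all_jit_options.append({"cffi_extra_compile_args": flags,
                            "cffi_libraries": ["m"]})
print(len(all_jit_options))  # → 8
```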

We choose a subset of finite element spaces and vary the polynomial degree to study the effect on assembly time under the different compile options.

results = {"Space": [], "Degree": [], "Options": [], "Time": []}
for space in ["N1curl", "CG", "RT"]:
    for degree in [1, 2, 3]:
        for native in march_native:
            for option in optimization_options:
                if native:
                    cffi_options = [option, "-march=native"]
                else:
                    cffi_options = [option]
                jit_options = {"cffi_extra_compile_args": cffi_options,
                               "cache_dir": cache_dir, "cffi_libraries": ["m"]}
                runtime = compile_form(space, degree, jit_options=jit_options)
                results["Space"].append(space)
                results["Degree"].append(degree)
                results["Options"].append("\n".join(cffi_options))
                results["Time"].append(runtime)

We have now stored all the results in a dictionary. To visualize them, we use pandas and its DataFrame class. We can inspect the data in a Jupyter notebook as follows

import pandas as pd
results_df = pd.DataFrame.from_dict(results)
results_df
Space Degree Options Time
0 N1curl 1 -O1\n-march=native 0.020212
1 N1curl 1 -O2\n-march=native 0.018007
2 N1curl 1 -O3\n-march=native 0.018245
3 N1curl 1 -Ofast\n-march=native 0.017504
4 N1curl 1 -O1 0.019459
... ... ... ... ...
67 RT 3 -Ofast\n-march=native 0.404823
68 RT 3 -O1 0.662172
69 RT 3 -O2 0.584248
70 RT 3 -O3 0.487661
71 RT 3 -Ofast 0.485339

72 rows × 4 columns
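Beyond inspecting the raw table, pandas can aggregate the timings directly. A sketch with made-up numbers standing in for the measured ones, averaging assembly time per set of compile flags:

```python
import pandas as pd

# Synthetic stand-in for the measured data (values are invented)
df = pd.DataFrame({
    "Space": ["N1curl", "N1curl", "RT", "RT"],
    "Degree": [1, 1, 3, 3],
    "Options": ["-O1", "-Ofast\n-march=native", "-O1", "-Ofast\n-march=native"],
    "Time": [0.019, 0.017, 0.66, 0.40],
})
# Mean assembly time per option set, fastest first
summary = df.groupby("Options")["Time"].mean().sort_values()
print(summary.index[0])
```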

We can now make a plot for each element type to see the variation given the different compile options. We create a new column combining each element type and degree.

import seaborn
import matplotlib.pyplot as plt
results_df["Element"] = results_df["Space"] + " " + results_df["Degree"].astype(str)
elements = sorted(set(results_df["Element"]))
for element in elements:
    df_e = results_df[results_df["Element"]==element]
    g = seaborn.catplot(x="Options", y="Time", kind="bar", data=df_e, col="Element")
[Nine bar plots, one per element type and degree, comparing the assembly time for each set of compile options]

We observe that the assembly time increases with the degree of the function space, and that we get the largest speedup by combining “-O3” or “-Ofast” with “-march=native”.
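To quantify the gain, we can compute the speedup ratio between two rows of the table; for example, using the RT degree-3 timings shown above:

```python
# RT degree-3 timings from the table: -O1 vs -Ofast with -march=native
t_o1 = 0.662172
t_ofast_native = 0.404823
print(f"speedup: {t_o1 / t_ofast_native:.2f}x")  # → speedup: 1.64x
```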