JIT options and visualization using Pandas


Author: Jørgen S. Dokken

In this chapter, we will explore how to optimize and inspect the integration kernels used in DOLFINx. As we have seen in the previous demos, DOLFINx uses the Unified Form Language (UFL) to describe variational problems.

These descriptions have to be translated into code that assembles the left- and right-hand sides of the discrete variational problem.

DOLFINx uses FFCx to generate efficient C code for assembling the element matrices. This C code is in turn compiled using CFFI, and we can specify a variety of compile options.

We start by specifying the current directory as the location for the generated C files; we obtain the current directory using pathlib.

import matplotlib.pyplot as plt
import pandas as pd
import seaborn
import time
import ufl

from ufl import TestFunction, TrialFunction, dx, inner
from dolfinx.mesh import create_unit_cube
from dolfinx.fem.petsc import assemble_matrix
from dolfinx.fem import functionspace, form

from mpi4py import MPI
from pathlib import Path
from typing import Dict

cache_dir = f"{str(Path.cwd())}/.cache"
print(f"Directory to put C files in: {cache_dir}")
Directory to put C files in: /__w/dolfinx-tutorial/dolfinx-tutorial/chapter4/.cache
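Once forms have been compiled, the cache directory can be inspected directly. The sketch below lists any generated C sources; note that the exact file naming is an FFCx-internal detail and may vary between versions:

```python
from pathlib import Path

# Create (or reuse) the cache directory used for generated code
cache_dir = Path.cwd() / ".cache"
cache_dir.mkdir(exist_ok=True)

# FFCx writes one C source (and a compiled extension) per form signature;
# before any form is compiled, this listing is simply empty.
c_sources = sorted(p.name for p in cache_dir.glob("*.c"))
print(f"{len(c_sources)} C source file(s) in {cache_dir}")
```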

Next we define a general function that assembles the mass matrix for a unit cube and times the assembly. Note that we use dolfinx.fem.form to compile the variational form. For code using dolfinx.fem.petsc.LinearProblem, you can supply jit_options as a keyword argument.

def compile_form(space: str, degree: int, jit_options: Dict):
    N = 10
    mesh = create_unit_cube(MPI.COMM_WORLD, N, N, N)
    V = functionspace(mesh, (space, degree))
    u = TrialFunction(V)
    v = TestFunction(V)
    a = inner(u, v) * dx
    a_compiled = form(a, jit_options=jit_options)
    # Time the assembly of the mass matrix with the compiled form
    start = time.perf_counter()
    A = assemble_matrix(a_compiled)
    A.assemble()
    end = time.perf_counter()
    return end - start
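The timing pattern inside compile_form can be illustrated on its own, here with a stand-in workload instead of matrix assembly (the time_call helper is hypothetical, not part of DOLFINx):

```python
import time

def time_call(fn, *args):
    # Wall-clock duration of a single call, same pattern as in compile_form
    start = time.perf_counter()
    fn(*args)
    end = time.perf_counter()
    return end - start

elapsed = time_call(sum, range(100_000))
```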

We start by considering the different levels of optimization the C compiler can apply to the generated code. A list of optimization options and explanations can be found in the GCC documentation.

optimization_options = ["-O1", "-O2", "-O3", "-Ofast"]

The next option we can choose is whether to compile the code with -march=native. This option enables instructions specific to the local machine, and can therefore give different results on different systems. More information can be found in the GCC documentation on x86 options.

march_native = [True, False]
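The two lists span eight compile-option combinations per function space and degree. A small sketch of the expansion that the nested loops below perform:

```python
from itertools import product

optimization_options = ["-O1", "-O2", "-O3", "-Ofast"]
march_native = [True, False]

# One list of extra CFFI compile args per (march_native, optimization) pair
all_option_sets = [
    [option] + (["-march=native"] if native else [])
    for native, option in product(march_native, optimization_options)
]
print(len(all_option_sets))
```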

We choose a subset of finite element spaces, varying the order of the space to look at the effects it has on the assembly time with different compile options.

results = {"Space": [], "Degree": [], "Options": [], "Time": []}
for space in ["N1curl", "Lagrange", "RT"]:
    for degree in [1, 2, 3]:
        for native in march_native:
            for option in optimization_options:
                if native:
                    cffi_options = [option, "-march=native"]
                else:
                    cffi_options = [option]
                jit_options = {"cffi_extra_compile_args": cffi_options,
                               "cache_dir": cache_dir, "cffi_libraries": ["m"]}
                runtime = compile_form(space, degree, jit_options=jit_options)
                results["Space"].append(space)
                results["Degree"].append(degree)
                results["Options"].append("\n".join(cffi_options))
                results["Time"].append(runtime)

We have now stored all the results in a dictionary. To visualize them, we use pandas and its DataFrame class. We can inspect the data in a Jupyter notebook as follows

results_df = pd.DataFrame.from_dict(results)
results_df

     Space  Degree                Options      Time
0   N1curl       1     -O1\n-march=native  0.014400
1   N1curl       1     -O2\n-march=native  0.012129
2   N1curl       1     -O3\n-march=native  0.011096
3   N1curl       1  -Ofast\n-march=native  0.010659
4   N1curl       1                    -O1  0.013169
..     ...     ...                    ...       ...
67      RT       3  -Ofast\n-march=native  0.276210
68      RT       3                    -O1  0.689000
69      RT       3                    -O2  0.668559
70      RT       3                    -O3  0.501555
71      RT       3                 -Ofast  0.358613

72 rows × 4 columns

We can now make a plot for each element type to see the variation given the different compile options. We create a new column combining the element type and degree.

results_df["Element"] = results_df["Space"] + " " + results_df["Degree"].astype(str)
elements = sorted(set(results_df["Element"]))
for element in elements:
    df_e = results_df[results_df["Element"] == element]
    g = seaborn.catplot(x="Options", y="Time", kind="bar", data=df_e, col="Element")
    g.fig.set_size_inches(16, 4)
(Bar plots of assembly time versus compile options, one per element type and degree.)

We observe that the assembly time increases with the degree of the function space, and that we get the most speedup by combining “-O3” or “-Ofast” with “-march=native”.
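To quantify that observation, one can compute the speedup of each option relative to the slowest one. A sketch using the RT degree-3 timings from the printed table above as mock data:

```python
import pandas as pd

# Mock timings for RT degree 3, taken from the table shown earlier
df = pd.DataFrame({
    "Options": ["-O1", "-O2", "-O3", "-Ofast", "-Ofast\n-march=native"],
    "Time": [0.689000, 0.668559, 0.501555, 0.358613, 0.276210],
})
# Speedup of each option relative to the slowest one
df["Speedup"] = df["Time"].max() / df["Time"]
best = df.loc[df["Speedup"].idxmax(), "Options"]
```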