The standard way of compiling code with DOLFINx#

In this section, we will focus on the approach most users take when interacting with UFL, FFCx and Basix. We start by creating the domain on which we want to solve a problem; in this case, a unit square

from mpi4py import MPI
import dolfinx
import ufl
import numpy as np
import scipy.optimize
import scipy.sparse.linalg

N = 10
mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, N, N)
tdim = mesh.topology.dim

Problem specification: Non-linear Poisson#

Next, let’s consider the problem

\[\begin{split} \begin{align} -\nabla \cdot p(u) \nabla u &= f \quad \text{in } \Omega, \\ u &= g \quad \text{on } \partial \Omega \end{align} \end{split}\]

where \(p(u)=1+u^2\). We choose to use a first order Lagrange space for the unknown.

V = dolfinx.fem.functionspace(mesh, ("Lagrange", 1))

We do as in the previous section, and define a manufactured solution

def u_exact(module, x):
    return module.sin(2 * module.pi * x[1]) + x[0] * x[1] ** 2


x = ufl.SpatialCoordinate(mesh)
u_ex = u_exact(ufl, x)

As the problem is non-linear, we need to prepare for solving it with a Newton solver.

uh = dolfinx.fem.Function(V)

We define the residual

def p(u):
    return 1 + u**2


f = -ufl.div(p(u_ex) * ufl.grad(u_ex))
v = ufl.TestFunction(V)
F = ufl.inner(p(uh) * ufl.grad(uh), ufl.grad(v)) * ufl.dx - ufl.inner(f, v) * ufl.dx

gh = dolfinx.fem.Function(V)
gh.interpolate(lambda x: u_exact(np, x))

mesh.topology.create_entities(tdim - 1)
mesh.topology.create_connectivity(tdim - 1, tdim)
boundary_facets = dolfinx.mesh.exterior_facet_indices(mesh.topology)
boundary_dofs = dolfinx.fem.locate_dofs_topological(V, tdim - 1, boundary_facets)
bc = dolfinx.fem.dirichletbc(gh, boundary_dofs)
bcs = [bc]

We compute the Jacobian of the system using UFL.

J = ufl.derivative(F, uh)

Now that we have associated the forms with the discrete problem, we compile them with dolfinx.fem.form and create the matrix and vector that will hold the assembled Jacobian and residual

residual = dolfinx.fem.form(F)
jacobian = dolfinx.fem.form(J)
A = dolfinx.fem.create_matrix(jacobian)
b = dolfinx.fem.create_vector(residual)

We are now ready to use these forms to solve the variational problem.

\[\begin{split} \begin{align} \frac{\partial F}{\partial u}[\delta u] &= F(u_k) \\ u_{k+1} &= u_k - \delta u \end{align} \end{split}\]

As we want \(u_{k+1}=g\) on the Dirichlet boundary, but do not know whether \(u_k=g\) there, we need to take this into account when assembling the right hand side of the Jacobian equation.

We will go through how to apply this boundary condition using lifting.

Lifting#

Let us split the degrees of freedom into two disjoint sets, \(u_d\) and \(u_{bc}\), and set up the corresponding linear system

(3)#\[\begin{split} \begin{align} \begin{pmatrix} A_{d,d} & A_{d, bc} \\ A_{bc,d} & A_{bc, bc} \end{pmatrix} \begin{pmatrix} u_d \\ u_{bc} \end{pmatrix} &= \begin{pmatrix} b_d \\ b_{bc} \end{pmatrix} \end{align} \end{split}\]

In the identity row approach, we set the rows corresponding to the Dirichlet conditions to the identity row and set the appropriate dofs on the right hand side to contain the Dirichlet values:

\[\begin{split} \begin{align} \begin{pmatrix} A_{d,d} & A_{d, bc} \\ 0 & I \end{pmatrix} \begin{pmatrix} u_d \\ u_{bc} \end{pmatrix} &= \begin{pmatrix} b_d \\ g \end{pmatrix} \end{align} \end{split}\]

where \(g\) is the vector satisfying the various Dirichlet conditions. We can now reduce this to a smaller system

\[ \begin{align} A_{d,d} u_d &= b_d - A_{d, bc}g \end{align} \]

which is symmetric whenever \(A\) is symmetric. However, we do not want to remove the degrees of freedom from the sparse matrix, as that makes the matrix less adaptable to varying boundary conditions, so we instead set up the system

\[\begin{split} \begin{align} \begin{pmatrix} A_{d,d} & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} u_d \\ u_{bc} \end{pmatrix} &= \begin{pmatrix} b_d - A_{d,bc}g \\ g \end{pmatrix} \end{align} \end{split}\]
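To make the bookkeeping concrete, here is a small NumPy sketch (a toy 3x3 system with hypothetical values, not DOLFINx code) checking that the lifted system reproduces the solution of the reduced system while keeping the constrained dof in the matrix:

import numpy as np

# Toy 3x3 system with one Dirichlet-constrained dof (the last one)
A = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 2.0]])
b = np.array([1.0, 2.0, 3.0])
g = 1.5  # Dirichlet value for the constrained dof

# Reduced system: A_{d,d} u_d = b_d - A_{d,bc} g
u_d = np.linalg.solve(A[:2, :2], b[:2] - A[:2, 2] * g)

# Lifted system: keep the full size, zero the coupling blocks, put 1 on the
# constrained diagonal and the lifted data in the right hand side
A_lifted = A.copy()
A_lifted[2, :] = 0.0
A_lifted[:, 2] = 0.0
A_lifted[2, 2] = 1.0
b_lifted = np.concatenate([b[:2] - A[:2, 2] * g, [g]])

u = np.linalg.solve(A_lifted, b_lifted)
assert np.allclose(u[:2], u_d) and np.isclose(u[2], g)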

We start by zeroing out existing entries in the right hand side.

b.array[:] = 0

Next we assemble the residual into b

dolfinx.fem.assemble_vector(b.array, residual)
array([ 0.03958686,  0.01188388,  0.01710644, -0.02737186,  0.03823091,
       -0.23051416, -0.03254705, -0.54959513,  0.03687615, -0.34640994,
       -0.03769314, -0.55369695, -0.62745118,  0.03552259, -0.09197282,
       -0.04281025, -0.55769458, -0.62269633, -0.09635011,  0.03417021,
        0.06913528, -0.04789851, -0.56158922, -0.61787444, -0.09218259,
        0.13624624,  0.03281901,  0.09353074, -0.05295803, -0.56538209,
       -0.61299092, -0.088331  ,  0.12526456,  0.21218638,  0.031469  ,
        0.2075149 , -0.05798895, -0.56907438, -0.60805118, -0.08481164,
        0.11337486,  0.20059909,  0.4420161 ,  0.03012017,  0.19965123,
       -0.06299139, -0.5726673 , -0.60306061, -0.08164085,  0.10053814,
        0.18813566,  0.45402806,  0.37525153,  0.02877253,  0.21879208,
       -0.06796546, -0.57616206, -0.59802463, -0.07883495,  0.08671539,
        0.17471618,  0.46704232,  0.37861077,  0.47975947,  0.01960355,
        0.3349097 , -0.06727417, -0.57955985, -0.59294864, -0.07641027,
        0.0718676 ,  0.16026071,  0.48091188,  0.38727936,  0.39495728,
        0.45596566, -0.32819296, -0.58783803, -0.07438312,  0.05595578,
        0.14468933,  0.49548973,  0.4010077 ,  0.32099272,  0.37650491,
       -0.25487848, -0.07276983,  0.03894091,  0.12792213,  0.51062887,
        0.41954619,  0.25746726,  0.30395982, -0.00428696,  0.020784  ,
        0.10987917,  0.52618229,  0.44264523,  0.20398238,  0.23806749,
       -0.00737359,  0.09048053,  0.54200299,  0.47005519,  0.16013955,
        0.17856505,  0.06979188,  0.55794397,  0.5015265 ,  0.12554024,
        0.12518961,  0.32131229,  0.53680953,  0.09978595,  0.07767828,
        0.24371794,  0.08247813,  0.0357682 ,  0.00620004, -0.00080353,
       -0.00535419])

We want to apply the condition

\[ \delta u = u_k - g \]

since \(u_{k+1} = u_k - \delta u\)

This means that we would like to compute

\[ b - J (u_k - g) = b + J (g - u_k) \]

In apply_lifting, this corresponds to \(\alpha=-1\), \(x_0=u_k\) and \(A=J\).

dolfinx.fem.apply_lifting(b.array, [jacobian], [bcs], x0=[uh.x.array], alpha=-1.0)
b.scatter_reverse(dolfinx.la.InsertMode.add)

For the last part of applying the Dirichlet conditions, we want to set the constrained dofs of the right hand side to \(u_k - g\). We can do this with the set method of dolfinx.fem.DirichletBC.

For the example at hand, we set \(\alpha=-1\) and \(x_0=u_k\).

[bc.set(b.array, x0=uh.x.array, alpha=-1.0) for bc in bcs]
[None]

Next, we assemble the Jacobian

A_scipy = A.to_scipy()
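# Note: A_scipy created above shares its data with A, so re-assembling A
# (as we do in the Newton loop below) is reflected in A_scipy without copying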
dolfinx.fem.assemble_matrix(A, jacobian, bcs)
<dolfinx.la.MatrixCSR at 0x7f6cb514eb40>

Hand-coded Newton solver#

# We are ready to do this iteratively
du = dolfinx.fem.Function(V)
correction_norm = None
for i in range(1, 20):
    # Assemble residual
    b.array[:] = 0
    dolfinx.fem.assemble_vector(b.array, residual)
    dolfinx.fem.apply_lifting(b.array, [jacobian], [bcs], x0=[uh.x.array], alpha=-1.0)
    b.scatter_reverse(dolfinx.la.InsertMode.add)
    [bc.set(b.array, x0=uh.x.array, alpha=-1.0) for bc in bcs]

    print(f"Iteration {i}, |F|={np.linalg.norm(b.array)}, |du|={correction_norm}")
    # Assemble jacobian
    A.data[:] = 0
    dolfinx.fem.assemble_matrix(A, jacobian, bcs)

    A_inv = scipy.sparse.linalg.splu(A_scipy)
    du.x.array[:] = A_inv.solve(b.array)
    uh.x.array[:] -= du.x.array
    if (correction_norm := np.linalg.norm(du.x.array)) < 1e-6:
        print(f"Iteration {i}, Converged: |du|={correction_norm}")
        break
Iteration 1, |F|=6.331970827486794, |du|=None
Iteration 2, |F|=3.085561917162284, |du|=7.823179873314087
Iteration 3, |F|=0.3332304295727603, |du|=1.0619315658452844
Iteration 4, |F|=0.0047913468963322, |du|=0.1119568588590222
Iteration 5, |F|=1.1478673543738253e-06, |du|=0.001526192852867471
Iteration 5, Converged: |du|=3.3242674414716e-07

As before, we compute the error between the exact and the approximate solution.

error = dolfinx.fem.form(ufl.inner(uh - u_ex, uh - u_ex) * ufl.dx)
local_error = dolfinx.fem.assemble_scalar(error)
global_error = np.sqrt(mesh.comm.allreduce(local_error, op=MPI.SUM))
print(f"L2-error: {global_error:.2e}")
L2-error: 2.58e-02

Using scipy’s Newton solver#

Of course, we do not want to write out this kind of loop for every problem. We can, for instance, use the newton_krylov solver from scipy, but instead of letting it approximate the Jacobian, we will use the exact Jacobian. We do this by overriding the Krylov solver used by scipy with a routine that only applies the pre-conditioner, where the pre-conditioning step is a direct solve with the assembled Jacobian. We can do this with the following code:

class NonLinearSolver(scipy.sparse.linalg.LinearOperator):
    def __init__(self, F, uh, bcs):
        """
        Solve the problem F(uh, v) = 0 for all v
        """
        jacobian = ufl.derivative(F, uh)
        self.J_compiled = dolfinx.fem.form(jacobian)
        self.F_compiled = dolfinx.fem.form(F)
        self.bcs = bcs
        self.A = dolfinx.fem.create_matrix(self.J_compiled)
        self.b = dolfinx.fem.Function(uh.function_space)
        self._A_scipy = self.A.to_scipy()
        self.uh = uh

        # Scipy specific parameters
        self.shape = (len(uh.x.array), len(uh.x.array))
        self.dtype = uh.x.array.dtype
        self.update(uh.x.array, None)

    def update(self, x, f):
        """Update and invert Jacobian"""
        self.A.data[:] = 0
        self.uh.x.array[:] = x
        dolfinx.fem.assemble_matrix(self.A, self.J_compiled, bcs=self.bcs)
        self._A_inv = scipy.sparse.linalg.splu(self._A_scipy)

    def _matvec(self, x):
        """Compute J^{-1}x"""
        return self._A_inv.solve(x)

    def _compute_residual(self, x):
        """
        Evaluate the residual F(x)
        Args:
            x: Input vector with current solution
        Returns:
            Residual array
        """
        self.uh.x.array[:] = x
        self.b.x.array[:] = 0
        dolfinx.fem.assemble_vector(self.b.x.array, self.F_compiled)
        dolfinx.fem.apply_lifting(self.b.x.array, [self.J_compiled], [self.bcs], x0=[self.uh.x.array], alpha=-1.0)
        self.b.x.scatter_reverse(dolfinx.la.InsertMode.add)
        [bc.set(self.b.x.array, x0=self.uh.x.array, alpha=-1.0) for bc in self.bcs]
        return self.b.x.array

    def linSolver(self, _A, x, **kwargs):
        """
        The linear solver method we will use.
        Simply return `J^-1 x` for an input x
        """
        return kwargs["M"]._matvec(x), 0

    def solve(self, maxiter: int = 100, verbose: bool = False):
        """Call Newton-Krylov solver with direct solving (pre-conditioning only)"""
        self.uh.x.array[:] = scipy.optimize.newton_krylov(
            self._compute_residual,
            self.uh.x.array,
            method=self.linSolver,
            verbose=verbose,
            line_search=None,
            maxiter=maxiter,
            inner_M=self,
        )
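As a minimal sketch of how the class is meant to be used, we could apply it to the problem we assembled earlier in this section (F, uh and bcs are the objects defined above; the hidden convergence study below does the same on a sequence of refined meshes):

solver = NonLinearSolver(F, uh, bcs)
solver.solve(verbose=True)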

Inspect the NonLinearSolver class above: does it follow what we have described earlier?

Expand the code below to see the solution.

Ns = np.array([4, 8, 16, 32])
errors = np.zeros_like(Ns, dtype=np.float64)
hs = np.zeros_like(Ns, dtype=np.float64)
for i, N in enumerate(Ns):
    mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, N, N)
    tdim = mesh.topology.dim
    x = ufl.SpatialCoordinate(mesh)
    u_ex = u_exact(ufl, x)

    V = dolfinx.fem.functionspace(mesh, ("Lagrange", 1))
    uh = dolfinx.fem.Function(V)

    f = -ufl.div(p(u_ex) * ufl.grad(u_ex))
    v = ufl.TestFunction(V)
    F = ufl.inner(p(uh) * ufl.grad(uh), ufl.grad(v)) * ufl.dx - ufl.inner(f, v) * ufl.dx

    gh = dolfinx.fem.Function(V)
    gh.interpolate(lambda x: u_exact(np, x))

    mesh.topology.create_entities(tdim - 1)
    mesh.topology.create_connectivity(tdim - 1, tdim)
    boundary_facets = dolfinx.mesh.exterior_facet_indices(mesh.topology)
    boundary_dofs = dolfinx.fem.locate_dofs_topological(V, tdim - 1, boundary_facets)
    bc = dolfinx.fem.dirichletbc(gh, boundary_dofs)
    bcs = [bc]
    solver = NonLinearSolver(F, uh, bcs)
    solver.solve(verbose=True)

    error = dolfinx.fem.form(ufl.inner(uh - u_ex, uh - u_ex) * ufl.dx)
    local_error = dolfinx.fem.assemble_scalar(error)
    global_error = np.sqrt(mesh.comm.allreduce(local_error, op=MPI.SUM))
    errors[i] = global_error
    hs[i] = 1.0 / N


def compute_convergence(hs, errors):
    return np.log(errors[1:] / errors[:-1]) / np.log(hs[1:] / hs[:-1])


print(f"{hs=}")
print(f"{errors=}")
print(f"{compute_convergence(hs, errors)=}")
0:  |F(x)| = 1.60998; step 1
1:  |F(x)| = 0.17587; step 1
2:  |F(x)| = 0.00271463; step 1
3:  |F(x)| = 5.78398e-07; step 1
0:  |F(x)| = 1.26328; step 1
1:  |F(x)| = 0.179859; step 1
2:  |F(x)| = 0.00400358; step 1
3:  |F(x)| = 1.70605e-06; step 1
0:  |F(x)| = 0.484913; step 1
1:  |F(x)| = 0.082938; step 1
2:  |F(x)| = 0.00269519; step 1
3:  |F(x)| = 1.80485e-06; step 1
0:  |F(x)| = 0.161563; step 1
1:  |F(x)| = 0.0244575; step 1
2:  |F(x)| = 0.000920647; step 1
3:  |F(x)| = 8.13442e-07; step 1
hs=array([0.25   , 0.125  , 0.0625 , 0.03125])
errors=array([0.15443872, 0.04018271, 0.01015208, 0.00254469])
compute_convergence(hs, errors)=array([1.9423876 , 1.98479959, 1.99621559])
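The computed rates approach 2, which is consistent with the standard a priori \(L^2\)-estimate for first order Lagrange elements (assuming a sufficiently smooth exact solution),

\[ \Vert u - u_h \Vert_{L^2(\Omega)} \leq C h^2. \]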