Mesh generation#
So far, we have covered how to generate code for the local element assembly. We can now look into how to solve specific problems. Let us start with generating the computational domain.
In DOLFINx, the mesh creation requires 4 inputs:
MPI communicator: This is used to decide how the partitioning is performed. It is usually MPI.COMM_WORLD or MPI.COMM_SELF.
Nodes: A set of coordinates in 1, 2 or 3D that represents all the points in the mesh.
Connectivity: A nested list, where each row corresponds to the node indices of a single cell
Coordinate element: A finite element used for pushing coordinates from the reference element to the physical element and its inverse.
As an example, let us consider a mesh consisting of two triangles with straight edges
import ipyparallel as ipp
import dolfinx
import numpy as np
import ufl
from mpi4py import MPI
import pyvista
pyvista.start_xvfb(1.0)
nodes = np.array([[1., 0.], [2., 0.], [3., 2.], [1, 3]], dtype=np.float64)
connectivity = np.array([[0, 1, 2], [0, 2, 3]], dtype=np.int64)
c_el = ufl.Mesh(ufl.VectorElement("Lagrange", ufl.triangle, 1))
domain = dolfinx.mesh.create_mesh(MPI.COMM_SELF, connectivity, nodes, c_el)
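As a quick sanity check (an addition to the original code), we can inspect the mesh we just created: the geometry stores the node coordinates, and the cell index map counts the cells owned by this process
# Sanity check (not part of the original tutorial code)
print(domain.geometry.x)  # node coordinates (stored padded to 3D)
print(f"Number of cells: {domain.topology.index_map(domain.topology.dim).size_local}")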
Next, we create a simple plotting function by interfacing with PyVista
def plot_mesh(mesh: dolfinx.mesh.Mesh):
    """
    Given a DOLFINx mesh, create a `pyvista.UnstructuredGrid`,
    and plot it and the mesh nodes
    """
    plotter = pyvista.Plotter()
    ugrid = pyvista.UnstructuredGrid(*dolfinx.plot.create_vtk_mesh(mesh))
    if mesh.geometry.cmap.degree > 1:
        # For a higher-order geometry, show the mesh nodes as points and
        # tessellate the grid so that the curved cells can be rendered
        plotter.add_mesh(ugrid, style="points", color="b", point_size=10)
        ugrid = ugrid.tessellate()
        show_edges = False
    else:
        show_edges = True
    plotter.add_mesh(ugrid, show_edges=show_edges)
    plotter.show_axes()
    plotter.view_xy()
    plotter.show()
# We can then easily visualize the domain
plot_mesh(domain)

Higher order meshes#
If we want to create a mesh with higher-order (curved) edges, we can supply the extra nodes to the mesh geometry and adjust the coordinate element accordingly
nodes = np.array([[1., 0.], [2., 0.], [3., 2.],
                  [2.9, 1.3], [1.5, 1.5], [1.5, -0.2]], dtype=np.float64)
connectivity = np.array([[0, 1, 2, 3, 4, 5]], dtype=np.int64)
c_el = ufl.Mesh(ufl.VectorElement("Lagrange", ufl.triangle, 2))
domain = dolfinx.mesh.create_mesh(MPI.COMM_SELF, connectivity, nodes, c_el)
plot_mesh(domain)
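The curved geometry is also taken into account when integrating over the cell. As a minimal sketch (an addition, assuming the second-order domain defined above), we can assemble the cell area with a UFL form; the result differs from the area of the straight-sided triangle spanned by the three vertices
# Sketch: assemble the area of the curved cell. The quadrature points are
# pushed forward through the second-order coordinate element.
area_form = dolfinx.fem.form(1 * ufl.dx(domain=domain))
print(f"Area of the curved cell: {dolfinx.fem.assemble_scalar(area_form)}")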

Interfacing with other software#
As DOLFINx works on numpy arrays, it is easy to convert any mesh format that can be converted into this structure.
DOLFINx has a gmsh interface that uses the GMSH Python API to read GMSH models or .msh files.
For more information about mesh-partitioning and mesh input, see: https://jsdokken.com/dolfinx_docs/meshes.html
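As a hedged sketch of how the gmsh interface can be used (the file name mesh.msh is hypothetical, and the triplet return value assumes the DOLFINx version used in this tutorial):
# Sketch: read a GMSH .msh file on rank 0 and distribute it over the communicator
from dolfinx.io import gmshio
msh, cell_tags, facet_tags = gmshio.read_from_msh("mesh.msh", MPI.COMM_WORLD, rank=0, gdim=2)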
Cell ownership#
For the remainder of this section we will consider a 3x3 unit square mesh:
domain = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 3, 3)
The mesh consists of cells, edges and vertices. A mesh is created by supplying the connectivity between the cells and the mesh nodes. As a higher-order mesh has more nodes than vertices, the connectivity between the cells and the mesh vertices is obtained through the mesh topology.
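To illustrate the distinction (a small addition using the domain defined above): the geometry holds the mesh nodes, while the index map of dimension 0 in the topology counts the vertices. For this affine mesh the two numbers coincide; for the second-order mesh in the previous section the geometry would hold more nodes than there are vertices
# Compare the number of geometry nodes with the number of topology vertices
print(f"Number of geometry nodes: {domain.geometry.x.shape[0]}")
print(f"Number of vertices: {domain.topology.index_map(0).size_local}")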
We define a small helper function for inspecting this information in serial and parallel
def inspect_mesh(shared_facet: bool = False):
    from mpi4py import MPI
    import dolfinx
    ghost_mode = dolfinx.mesh.GhostMode.shared_facet if shared_facet else dolfinx.mesh.GhostMode.none
    domain = dolfinx.mesh.create_unit_square(
        MPI.COMM_WORLD, 3, 3, ghost_mode=ghost_mode)
    topology = domain.topology
    tdim = topology.dim
    print(
        f"Number of cells in process: {topology.index_map(tdim).size_local}", flush=True)
    print(
        f"Number of shared cells: {topology.index_map(tdim).num_ghosts}", flush=True)
    print(f"Global range {topology.index_map(tdim).local_range}")
    cell_to_vertices = topology.connectivity(tdim, 0)
    print(cell_to_vertices)
First, we inspect the output on a single process
inspect_mesh()
Number of cells in process: 18
Number of shared cells: 0
Global range [0, 18]
<AdjacencyList> with 18 nodes
0: [0 1 2 ]
1: [0 3 2 ]
2: [4 0 3 ]
3: [3 2 5 ]
4: [4 6 3 ]
5: [3 7 5 ]
6: [8 4 6 ]
7: [6 3 7 ]
8: [7 5 9 ]
9: [8 10 6 ]
10: [6 11 7 ]
11: [7 12 9 ]
12: [10 6 11 ]
13: [11 7 12 ]
14: [10 13 11 ]
15: [11 14 12 ]
16: [13 11 14 ]
17: [13 15 14 ]
We observe that we have 18 cells in the mesh, connected to 16 vertices.
All indices are local to the process and start from 0. A local index can be mapped to its global index by calling domain.topology.index_map(tdim).local_to_global([idx])
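For example, a minimal sketch of this call on the serial mesh (in serial, local and global indices coincide):
# Map the first three local cell indices to their global indices
cell_map = domain.topology.index_map(domain.topology.dim)
print(cell_map.local_to_global(np.array([0, 1, 2], dtype=np.int32)))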
Parallel execution#
We use IPython Parallel to run DOLFINx on multiple processes inside our script.
cluster = ipp.Cluster(engines="mpi", n=2)
rc = cluster.start_and_connect_sync()
Starting 2 engines with <class 'ipyparallel.cluster.launcher.MPIEngineSetLauncher'>
This has started two processes that can execute code with an MPI communicator. We run the mesh code on two processes, instructing DOLFINx not to share any cells between the processes
query = rc[:].apply_async(inspect_mesh, False)
query.wait()
query.display_outputs()
[stdout:0]
Number of cells in process: 9
Number of shared cells: 0
Global range [0, 9]
<AdjacencyList> with 9 nodes
0: [0 7 8 ]
1: [0 1 8 ]
2: [1 8 2 ]
3: [1 3 2 ]
4: [8 2 4 ]
5: [3 2 5 ]
6: [2 4 9 ]
7: [3 6 5 ]
8: [2 5 9 ]
[stdout:1]
Number of cells in process: 9
Number of shared cells: 0
Global range [9, 18]
<AdjacencyList> with 9 nodes
0: [0 1 2 ]
1: [0 3 2 ]
2: [4 0 3 ]
3: [3 2 5 ]
4: [4 6 3 ]
5: [3 9 5 ]
6: [6 3 9 ]
7: [9 5 7 ]
8: [9 8 7 ]
Next, we instruct DOLFINx to share cells between the processes whenever they share a facet.
new_query = rc[:].apply_async(inspect_mesh, True)
new_query.wait()
new_query.display_outputs()
[stdout:0]
Number of cells in process: 9
Number of shared cells: 3
Global range [0, 9]
<AdjacencyList> with 12 nodes
0: [0 7 8 ]
1: [0 1 8 ]
2: [1 8 2 ]
3: [1 3 2 ]
4: [8 2 4 ]
5: [3 2 5 ]
6: [2 4 9 ]
7: [3 6 5 ]
8: [2 5 9 ]
9: [7 8 10 ]
10: [8 10 4 ]
11: [4 9 11 ]
[stdout:1]
Number of cells in process: 9
Number of shared cells: 3
Global range [9, 18]
<AdjacencyList> with 12 nodes
0: [0 1 2 ]
1: [0 3 2 ]
2: [4 0 3 ]
3: [3 2 5 ]
4: [4 6 3 ]
5: [3 9 5 ]
6: [6 3 9 ]
7: [9 5 7 ]
8: [9 8 7 ]
9: [10 4 6 ]
10: [6 11 9 ]
11: [11 9 8 ]
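The ghost cells can also be inspected directly through the index map. Below is a hedged sketch of such a helper (a hypothetical function, not part of the original tutorial), run on the engines in the same way as inspect_mesh:
def inspect_ghosts():
    """Print the global indices of the cells ghosted from neighbouring processes."""
    from mpi4py import MPI
    import dolfinx
    domain = dolfinx.mesh.create_unit_square(
        MPI.COMM_WORLD, 3, 3, ghost_mode=dolfinx.mesh.GhostMode.shared_facet)
    cell_map = domain.topology.index_map(domain.topology.dim)
    print(f"Ghost cells (global indices): {cell_map.ghosts}", flush=True)

ghost_query = rc[:].apply_async(inspect_ghosts)
ghost_query.wait()
ghost_query.display_outputs()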
cluster.stop_cluster_sync()
Stopping controller
Controller stopped: {'exit_code': 0, 'pid': 1400, 'identifier': 'ipcontroller-1699599736-6cho-1352'}
Stopping engine(s): 1699599737
mpiexec error output:
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 1434 RUNNING AT 7dbb71d950c9
= EXIT CODE: 15
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Terminated (signal 15)
engine set stopped 1699599737: {'exit_code': 15, 'pid': 1431, 'identifier': 'ipengine-1699599736-6cho-1699599737-1352'}