This tutorial demonstrates how to solve the Helmholtz equation (the eigenvalue problem for the Laplace operator) on a box mesh with opposite inlet and outlet faces. Specifically, it shows how to:

  • obtain the variational formulation of an eigenvalue problem
  • apply Dirichlet boundary conditions (tricky!)
  • use SLEPc and scipy to solve the generalized eigenvalue problem

Additionally, different SLEPc solvers are compared and a benchmark is performed against AVSP (our in-house tool for solving this kind of problem).

Note: some basic knowledge of FEniCS is assumed. If this is your first time dealing with FEniCS, our tutorial on solving the heat equation may be a good starting point.

Formulating the problem

Our goal is to find the eigenpairs $(\lambda, u_\lambda)$ such that

$$-\nabla^2 u_\lambda = \lambda\, u_\lambda \quad \text{in } \Omega,$$

with Dirichlet conditions on the inlet and outlet faces and Neumann conditions on the remaining boundary.

Following the usual approach (remember FEniCS needs the variational formulation of a problem), we multiply both sides by a test function $v$ and integrate by parts; the boundary term vanishes thanks to the boundary conditions. Our problem is then to find $u_\lambda \in V$ such that

$$\int_\Omega \nabla u_\lambda \cdot \nabla v \,\mathrm{d}x = \lambda \int_\Omega u_\lambda \, v \,\mathrm{d}x \quad \forall v \in V.$$

Notice our eigenvalue problem is now a generalized eigenvalue problem of the form

$$A\, U = \lambda\, M\, U,$$

where $A$ is the stiffness matrix and $M$ the mass matrix (as opposed to the standard form $A\, U = \lambda\, U$).

This is very different from the problems most commonly solved in FEniCS: since the unknown appears on both sides of the equation, the problem cannot be represented in the canonical notation for variational problems, find $u$ such that

$$a(u, v) = L(v) \quad \forall v \in V.$$

You can find more details about the formulation in these delightful notes from Patrick Farrell.

A simple solution

Creating the mesh

For this example we will create a simple box mesh.

from dolfin.cpp.generation import BoxMesh
from dolfin.cpp.geometry import Point

N = 60
lx = 0.2
ly = 0.1
lz = 1.0
mesh = BoxMesh(Point(0, 0, 0), Point(lx, ly, lz), int(lx*N), int(ly*N), int(lz*N))

This is how it looks:

Translate the problem formulation into code

The equations shown above are reflected in the code as follows:

from dolfin.function.functionspace import FunctionSpace
from dolfin.function.argument import TrialFunction
from dolfin.function.argument import TestFunction

from ufl import inner
from ufl import grad
from ufl import dx

V = FunctionSpace(mesh, 'Lagrange', 1)

u_h = TrialFunction(V)
v = TestFunction(V)

a = inner(grad(u_h), grad(v)) * dx
b = inner(u_h, v) * dx
dummy = inner(1., v) * dx

The trickiest part here is that we create a dummy linear form whose only purpose is to be present (and ignored) in the assembly step (it is just a workaround, and hopefully it does not confuse you).

Assemble the matrices and apply boundary conditions

Assembling the matrices and applying the boundary conditions is the most sensitive part of this tutorial. The main reason is that removing rows and columns of the matrices is not advised, as it is an inefficient operation. To circumvent that, several alternative ways to impose Dirichlet boundary conditions exist. Independently of the chosen strategy, the key point is to preserve symmetry in both matrices, as the available eigensolvers are better suited for symmetric problems (and it is a pity to lose such a nice property!).

Our strategy consists of zeroing out the rows and columns corresponding to the "Dirichlet degrees of freedom" and assigning alpha and 1 to the corresponding diagonal entries of A and M, respectively. alpha is the value that the meaningless (spurious) eigenvalues will take. If you want to keep them out of the way, just choose a value outside the spectrum of interest.
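
To see why, consider a single Dirichlet degree of freedom $i$ (a minimal sketch of the argument): after the modification, row and column $i$ of $A$ are zero except for $A_{ii} = \alpha$, and likewise $M_{ii} = 1$, so the unit vector $e_i$ satisfies

$$A\, e_i = \alpha\, e_i = \alpha\, M\, e_i,$$

i.e. $(\alpha, e_i)$ is an eigenpair that is completely decoupled from the physical ones.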

In terms of code, the assembly of the matrix M (called B below) is done as usual (there are other ways to do it):

from dolfin.cpp.la import PETScMatrix
from dolfin.fem.assembling import SystemAssembler

asm = SystemAssembler(b, dummy, bcs)
B = PETScMatrix()
asm.assemble(B)

Here, bcs is a list of Dirichlet boundary conditions (check this to remember how to create this type of boundary condition). We fix the two end faces in the z direction.
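
For reference, here is a minimal sketch of how bcs could be built for this geometry (the zero value and the string subdomains are illustrative assumptions, not necessarily what was used to produce the results below):

from dolfin.fem.dirichletbc import DirichletBC
from dolfin.function.constant import Constant

# fix u = 0 on the two faces normal to z (at z = 0 and z = lz)
bc_inlet = DirichletBC(V, Constant(0.), 'on_boundary && near(x[2], 0.)')
bc_outlet = DirichletBC(V, Constant(0.), 'on_boundary && near(x[2], {})'.format(lz))
bcs = [bc_inlet, bc_outlet]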

Notice also that we are using PETSc matrices, which are sparse. This data type is required to work with SLEPc later.

For A the process is slightly more complex:

from dolfin.cpp.la import PETScMatrix
from dolfin.fem.assembling import assemble

diag_value = 1e6

A = PETScMatrix()
assemble(a, tensor=A)
dummy_vec = assemble(dummy)

for bc in bcs:
    bc.zero(A)
    bc.zero_columns(A, dummy_vec, diag_value)

For each boundary condition we start by zeroing out the rows of the corresponding degrees of freedom (bc.zero(A)) and then we zero out the columns and assign diag_value to the diagonal (bc.zero_columns(A, dummy_vec, diag_value)). As you'll see later, this strategy yields the correct results. Unfortunately, bc.zero_columns does not work properly in parallel (let us know if you have a valid alternative to perform the same operation, as we are eager to parallelize this!).
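
A quick sanity check that symmetry was indeed preserved can be done at the PETSc level (a minimal sketch; isSymmetric belongs to the underlying petsc4py Mat object, not to the dolfin wrapper):

# both matrices should still be symmetric after applying the boundary conditions
tol = 1e-10
print(A.mat().isSymmetric(tol), B.mat().isSymmetric(tol))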

Note that Neumann boundary conditions are very simple to treat: do nothing!

What to do with these matrices?

FEniCS considers SLEPc the go-to library for solving eigenvalue problems. Nevertheless, after assembling the matrices, we can choose any solver we want (including our own!). A good starting point is the most commonly used scientific libraries, numpy and scipy. numpy provides machinery to solve standard eigenvalue problems, but it is not able to solve generalized ones. Besides, its lack of sparse data structures clearly limits scalability (you cannot go that far with dense matrices). scipy, on the other hand, provides solutions for both dense and sparse matrices.

Let’s solve it with scipy then:

from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

A_array = csr_matrix(A.mat().getValuesCSR()[::-1])  # A is a PETSc matrix
B_array = csr_matrix(B.mat().getValuesCSR()[::-1])  # B is a PETSc matrix

k = 20  # number of eigenvalues
which = 'SM'  # smallest magnitude
w, v = eigsh(A_array, k=k, M=B_array, which=which)

scipy.sparse.linalg.eigsh relies on ARPACK's implementation of the Implicitly Restarted Lanczos method.
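
With which='SM', ARPACK has to resolve the smallest eigenvalues directly, which tends to converge slowly; a common alternative is shift-invert mode (a sketch, reusing the matrices from the snippet above, and assuming the shift does not coincide with an eigenvalue):

# shift-invert around sigma = 0: the eigenvalues closest to zero converge fastest
w, v = eigsh(A_array, k=k, M=B_array, sigma=0, which='LM')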

The basic usage of SLEPc through FEniCS is also very simple:

import numpy as np
from dolfin.cpp.la import SLEPcEigenSolver

# instantiate solver
solver = SLEPcEigenSolver(A, B)

# update parameters
solver.parameters['solver'] = 'krylov-schur'
solver.parameters['spectrum'] = 'smallest magnitude'
solver.parameters['problem_type'] = 'gen_hermitian'
solver.parameters['tolerance'] = 1e-4

# solve
n_eig = 20
solver.solve(n_eig)

# collect eigenpairs
w, v = [], []
for i in range(solver.get_number_converged()):
    r, _, rv, _ = solver.get_eigenpair(i)
    w.append(r)
    v.append(rv.get_local())  # convert the PETScVector to a numpy array

w = np.array(w)
v = np.array(v).T

Our preliminary studies show scipy is not competitive with SLEPc for large problems.

This paper presents open-source libraries capable of solving eigenvalue problems (the focus is on generalized non-Hermitian eigenvalue problems), which may also be explored: LAPACK, ARPACK, Anasazi, SLEPc, FEAST, z-PARES. Its summary of each library (and of the algorithms) is worth reading.

What does SLEPc offer?

Above, we've used SLEPc without much thought. Yet, it offers several algorithms to solve eigenvalue problems, including: power iterations, subspace iterations, Arnoldi, Lanczos, Krylov-Schur (default), Jacobi-Davidson and Generalized Davidson.

The FEniCS documentation is outdated (it is also a good idea to peek into the code to know what is really available). These are the possible arguments for solver.parameters['solver']: power, subspace, arnoldi, lanczos, krylov-schur, lapack, arpack, jacobi-davidson, generalized-davidson. You can find further information about each option here.

We are simply relying on the methods FEniCS exposes from the SLEPc library. If we want to go further, we can also use slepc4py, which provides the Python bindings for SLEPc and much more flexibility.
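
For illustration, a minimal slepc4py sketch of the same solve could look like this (not used in the rest of the tutorial; the parameter choices simply mirror the FEniCS wrapper above):

from slepc4py import SLEPc

# build an eigenvalue problem solver (EPS) directly on the PETSc matrices
eps = SLEPc.EPS().create()
eps.setOperators(A.mat(), B.mat())
eps.setProblemType(SLEPc.EPS.ProblemType.GHEP)  # generalized Hermitian problem
eps.setWhichEigenpairs(SLEPc.EPS.Which.SMALLEST_MAGNITUDE)
eps.setDimensions(nev=20)
eps.setFromOptions()
eps.solve()

print('Converged eigenpairs:', eps.getConverged())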

The following picture shows how the algorithms compare in terms of performance for the problem described above (varying the mesh size):

[Figure algo_comparison: solve time of each SLEPc solver versus mesh size]

As you can see, Generalized Davidson outperforms all the others (the options not tested are not suitable for this problem). In fact, it solves a problem with approximately 100k degrees of freedom in about 13 seconds. Beyond that size, the bottleneck of the chosen strategy is no longer the solver but the assembly step.

It is also important to check the number of converged solutions (we ask for 20):

[Figure converged_comp: number of converged eigenvalues (out of the 20 requested) for each solver]

Modes

TODO: update

And an example of modes 1, 2, 6 and 7 visualized with ParaView:
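
To export the modes for ParaView, here is a minimal sketch (assuming w and v come from the SLEPc snippet above, with each column of v holding one mode; the file name modes.xdmf is arbitrary):

from dolfin.cpp.io import XDMFFile
from dolfin.function.function import Function

out_file = XDMFFile('modes.xdmf')
for i, mode in enumerate(v.T):
    u_mode = Function(V)
    u_mode.vector()[:] = mode  # fill the dolfin Function with the eigenvector values
    u_mode.rename('mode', 'mode')
    out_file.write(u_mode, float(i))  # one "time step" per mode
out_file.close()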

Benchmarking versus AVSP on a single processor

For a mesh with around 120k nodes on both FEniCS and AVSP:

Cores   FEniCS   AVSP
1       < 20 s   17 s
8       -        2 s
16      -        1 s
36      -        1 s

Antoine Dauptain is a research scientist focused on computer science and engineering topics for HPC.
Luís F. Pereira is an engineer who enjoys developing science- and engineering-related software.
