SStruct Interface Example Codes

Example 6

This is a two-processor example that solves the same problem as is solved with the struct interface in Example 2. (The grid boxes are exactly those in the example diagram in the struct interface chapter of the User's Manual. Processor 0 owns two boxes and processor 1 owns one box.) This is the simplest SStruct example: there is one part and one variable. The solver is PCG with an SMG preconditioner, i.e., we use a structured solver for this example.

We recommend comparing this example with Example 2.

Example 7

This example uses the SStruct interface to solve the same problem that was solved in Example 4 with the struct interface. Therefore, there is only one part and one variable.

This code solves the convection-reaction-diffusion problem div (-K grad u + B u) + C u = F in the unit square with boundary condition u = U0. The domain is split into an N x N processor grid; thus, the given number of processors should be a perfect square. Each processor has an n x n grid, with nodes connected by a 5-point stencil. We use cell-centered variables, and, therefore, the nodes are not shared.
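For K = 1, B = 0, C = 0 the operator reduces to the standard 5-point Laplacian. The following stdlib-only Python sketch of one stencil row is purely illustrative (the actual example assembles these entries through hypre's C interface, and supports variable coefficients and convection):

```python
# Sketch: one row of the 5-point stencil for -div(K grad u) + C u on an
# n x n grid, assuming constant K and C and no convection (B = 0).

def stencil_row(i, j, n, K=1.0, C=0.0, h=None):
    """Return {(i, j): coeff} for one grid row of the 5-point operator."""
    h = h or 1.0 / n
    row = {(i, j): 4.0 * K / h**2 + C}          # center coefficient
    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        ni, nj = i + di, j + dj
        if 0 <= ni < n and 0 <= nj < n:          # neighbors outside the grid
            row[(ni, nj)] = -K / h**2            # are handled via the rhs
    return row

row = stencil_row(2, 2, 5)
assert len(row) == 5                             # center + 4 neighbors
assert abs(sum(row.values())) < 1e-9             # pure Laplacian rows sum to 0
```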

To incorporate the boundary conditions, we do the following: Let x_i and x_b be the interior and boundary parts of the solution vector x. If we split the matrix A as

A = [A_ii A_ib; A_bi A_bb],

then we solve

[A_ii 0; 0 I] [x_i ; x_b] = [b_i - A_ib u_0; u_0].

Note that this differs from the previous example in that we are actually solving for the boundary conditions (so they may not be exact, as they are in Example 3, where we only solved for the interior). This approach is useful for more general types of boundary conditions.
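The elimination above can be checked on a tiny 1D model problem. The following stdlib-only Python sketch is illustrative (the actual example does this through the matrix interface in C): it replaces the boundary rows by the identity, moves A_ib u_0 to the right-hand side, and verifies that the exact solution satisfies the modified system.

```python
# 1D illustration of the boundary elimination: -u'' = 0 on [0,1],
# u(0) = 0, u(1) = 1, discretized with the 3-point stencil. The exact
# solution u(x) = x must satisfy the modified system exactly.

n = 5                                   # grid points, including both boundaries
h = 1.0 / (n - 1)
A = [[0.0] * n for _ in range(n)]
b = [0.0] * n
for i in range(n):
    if i in (0, n - 1):                 # boundary rows: identity, rhs = u0
        A[i][i] = 1.0
        b[i] = 0.0 if i == 0 else 1.0
    else:                               # interior rows (A_ii and A_ib parts)
        A[i][i - 1], A[i][i], A[i][i + 1] = -1 / h**2, 2 / h**2, -1 / h**2

# Eliminate boundary couplings: b_i <- b_i - A_ib * u0, then zero out A_ib.
for i in range(1, n - 1):
    for j in (0, n - 1):
        b[i] -= A[i][j] * b[j]
        A[i][j] = 0.0

u = [i * h for i in range(n)]           # exact solution of the continuous problem
residual = [sum(A[i][j] * u[j] for j in range(n)) - b[i] for i in range(n)]
assert max(abs(r) for r in residual) < 1e-9
```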

As in the previous example (Example 6), we use a structured solver. A number of structured solvers are available. More information can be found in the Solvers and Preconditioners chapter of the User's Manual.

We recommend viewing Example 6 before viewing this example.

Example 8

This is a two-processor example that solves a problem similar to the one in Example 2 and Example 6. (The grid boxes are exactly those in the example diagram in the struct interface chapter of the User's Manual.)

The difference from the previous examples is that we use three parts: two with a 5-point discretization stencil and one with a 9-point stencil. The solver is PCG with a split-SMG preconditioner.

We recommend comparing this example with Example 2 and Example 6.

Example 9

This code solves a system corresponding to a discretization of the biharmonic problem treated as a system of equations on the unit square. Specifically, instead of solving Delta^2(u) = f with zero boundary conditions for u and Delta(u), we solve the system A x = b, where

A = [ Delta -I ; 0 Delta ], x = [ u ; v ] and b = [ 0 ; f ].

The corresponding boundary conditions are u = 0 and v = 0.

The domain is split into an N x N processor grid. Thus, the given number of processors should be a perfect square. Each processor's piece of the grid has n x n cells with n x n nodes. We use cell-centered variables, and, therefore, the nodes are not shared. Note that we have two variables, u and v, and need only one part to describe the domain. We use the standard 5-point stencil to discretize the Laplace operators. The boundary conditions are incorporated as in Example 3.
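Because the system is block upper triangular, it can in principle be solved by back-substitution: first Delta v = f, then Delta u = v. The following stdlib-only Python sketch illustrates this in 1D, using the standard 3-point (negative-Laplacian) stencil; it is purely illustrative, since the example itself solves the coupled 2D system with a single solver.

```python
# Solve L v = f, then L u = v, where L is the 1D 3-point stencil
# (-1, 2, -1)/h^2 with zero Dirichlet values at both ends (the sign
# convention is immaterial, since the same L is applied twice).
# Then verify that applying L twice to u reproduces f.

def solve_tridiag(n, h, rhs):
    """Thomas algorithm for the (-1, 2, -1)/h^2 system on n interior points."""
    a, b, c = -1 / h**2, 2 / h**2, -1 / h**2
    cp, dp = [0.0] * n, [0.0] * n
    for i in range(n):
        m = b - (a * cp[i - 1] if i else 0.0)
        cp[i] = c / m
        dp[i] = (rhs[i] - (a * dp[i - 1] if i else 0.0)) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def apply_laplacian(x, h):
    n = len(x)
    return [(-(x[i-1] if i else 0) + 2*x[i] - (x[i+1] if i < n-1 else 0)) / h**2
            for i in range(n)]

n, h = 20, 1.0 / 21
f = [1.0] * n                       # unit load
v = solve_tridiag(n, h, f)          # first block row solved for v
u = solve_tridiag(n, h, v)          # then back-substitute for u
ffh = apply_laplacian(apply_laplacian(u, h), h)
assert max(abs(p - q) for p, q in zip(ffh, f)) < 1e-6
```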

We recommend viewing Examples 3, 6 and 7 before this example.

Example 12

The grid layout is the same as in Example 1, but with nodal unknowns. The solver is PCG preconditioned with either PFMG or BoomerAMG, selected on the command line.

We recommend viewing the Struct examples before viewing this and the other SStruct examples. This is one of the simplest SStruct examples, used primarily to demonstrate how to set up non-cell-centered problems, and to demonstrate how easy it is to switch between structured solvers (PFMG) and solvers designed for more general settings (AMG).

Example 13

This code solves the 2D Laplace equation using a bilinear finite element discretization on a mesh with an "enhanced connectivity" point. Specifically, we solve -Delta u = 1 with zero boundary conditions on a star-shaped domain consisting of identical rhombic parts, each meshed with a uniform n x n grid. Every part is assigned to a different processor, and all parts meet at the origin, equally subdividing the 2*pi angle there. The case of six processors (parts) looks as follows:

                                    +
                                   / \
                                  /   \
                                 /     \
                       +--------+   1   +---------+
                        \        \     /         /
                         \    2   \   /    0    /
                          \        \ /         /
                           +--------+---------+
                          /        / \         \
                         /    3   /   \    5    \
                        /        /     \         \
                       +--------+   4   +---------+
                                 \     /
                                  \   /
                                   \ /
                                    +

Note that in this problem we use nodal variables, which will be shared between the different parts, so the node at the origin, for example, will belong to all parts.

We recommend viewing the Struct examples before viewing this and the other SStruct examples. The primary role of this particular SStruct example is to demonstrate how to set up non-cell-centered problems, and specifically problems with an "enhanced connectivity" point.

Example 14

This is a version of Example 13 that uses the SStruct FEM input functions instead of stencils to describe a problem on a mesh with an "enhanced connectivity" point. This is the recommended way to set up a finite element problem in the SStruct interface.

Example 15

This code solves a 3D electromagnetic diffusion (definite curl-curl) problem using the lowest-order Nedelec, or "edge", finite element discretization on a uniform hexahedral meshing of the unit cube. The right-hand side corresponds to a unit force, and we use uniform zero Dirichlet boundary conditions. The overall problem reads: curl alpha curl E + beta E = 1, with E x n = 0 on the boundary, where alpha and beta are piecewise-constant material coefficients.

The linear system is split in parallel using the SStruct interface, with an n x n x n grid on each processor. Note that the number of processors should therefore be a perfect cube.

This code is mainly meant as an illustration of using the Auxiliary-space Maxwell Solver (AMS) through the SStruct interface. It uses two grids, one for the nodal and one for the edge variables, and we show how to construct the rectangular "discrete gradient" matrix that connects them. Finally, this is also an example of setting up a finite element discretization in the SStruct interface, and we recommend viewing Example 13 and Example 14 before viewing this example.
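On a uniform grid, the discrete gradient is just the node-to-edge incidence matrix: one row per edge, with +1 and -1 at the edge's two endpoint nodes. The stdlib-only Python sketch below shows only its structure and is purely illustrative; the actual example constructs it as an SStruct matrix in C.

```python
# Sketch of the "discrete gradient" (node-to-edge incidence) matrix G on
# an n x n x n grid of nodes. Each row is stored as {node_index: +/-1}.

def discrete_gradient(n):
    def idx(i, j, k):
        return (k * n + j) * n + i          # lexicographic node numbering
    rows = []
    for k in range(n):
        for j in range(n):
            for i in range(n):
                for d in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    ni, nj, nk = i + d[0], j + d[1], k + d[2]
                    if ni < n and nj < n and nk < n:
                        rows.append({idx(ni, nj, nk): 1.0,
                                     idx(i, j, k): -1.0})
    return rows

G = discrete_gradient(3)
assert len(G) == 54                 # 3 directions x (2 x 3 x 3) edges each
# Key property used by AMS: G annihilates constant nodal fields.
const = [1.0] * 27
assert all(abs(sum(v * const[c] for c, v in row.items())) < 1e-12 for row in G)
```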

Example 16

This code solves the 2D Laplace equation using a high order Q3 finite element discretization. Specifically, we solve -Delta u = 1 with zero boundary conditions on a unit square domain meshed with a uniform grid. The mesh is distributed across an N x N process grid, with each processor containing an n x n sub-mesh of data, so the global mesh is nN x nN.
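The rank-to-subgrid mapping implied by this decomposition can be sketched as follows. This stdlib-only Python fragment is illustrative only (the helper name `local_extents` is hypothetical; the actual example computes the extents with integer arithmetic on the MPI rank in C):

```python
# On an N x N process grid, the processor with rank r owns the n x n
# block of cells with lower-left index (pi*n, pj*n), where (pi, pj) are
# its process-grid coordinates.

def local_extents(rank, N, n):
    pj, pi = divmod(rank, N)            # process coordinates on the N x N grid
    ilower = (pi * n, pj * n)
    iupper = (pi * n + n - 1, pj * n + n - 1)
    return ilower, iupper

# With N = 2 and n = 10 the global mesh is 20 x 20:
assert local_extents(0, 2, 10) == ((0, 0), (9, 9))
assert local_extents(3, 2, 10) == ((10, 10), (19, 19))
```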