# Program Execution and Manipulation

**Objective:** At the end of this tutorial you will understand how to run, query, and change different components of the Laplace solver.

**Assumptions:** I assume that you have both PETSc and PetIGA configured and compiled.

## Compilation and Execution

We will use the Laplace code included in the `$PETIGA_DIR/demo/` directory to explain and highlight features of using PetIGA to solve partial differential equations. First enter this directory and build this application by typing:

```
make Laplace
```

This code has been written to solve the Laplace equation on the unit
domain in 1, 2, or 3 spatial dimensions. The boundary conditions are
Dirichlet on the left and free Neumann on the right, chosen such that
the exact solution is *u = 1* no matter what spatial dimension
is selected. To run this code, simply type:

```
./Laplace -print_error
```

The `-print_error` option is particular to this code and will print the *L2* norm of the exact error.

## Query Components

The natural next question is how to find out what the code did. What discretization was used? Which solver? All of this can be determined without opening the source file. PETSc/PetIGA is built on the philosophy that solver components should be changeable from the command line, without recompiling the code. This engenders a culture of experimentation in research. There are numerous options available. To see some of them, type:

```
./Laplace -help
```

A long list of options will print along with short descriptions of what they control. Perusing this list is educational as it gives you an idea of just how much can be controlled from the command line. To see information about the default discretization, type:

```
./Laplace -iga_view
```

which will produce:

```
IGA: dim=2 dof=1 order=2 geometry=0 rational=0 property=0
Axis 0: periodic=0 degree=2 quadrature=3 processors=1 nodes=18 elements=16
Axis 1: periodic=0 degree=2 quadrature=3 processors=1 nodes=18 elements=16
Partitioning - nodes: sum=324 min=324 max=324 max/min=1
Partitioning - elements: sum=256 min=256 max=256 max/min=1
```
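The per-axis node counts in this view can be cross-checked by hand. For a uniform knot vector, the dimension of a one-dimensional B-spline space with *n* elements, degree *p*, and interelement continuity *Ck* is *n(p-k) + k + 1*. A minimal sketch in plain Python (not part of PetIGA) reproducing the figures above:

```python
def bspline_nodes(elements, degree, continuity):
    """Basis functions (control points) along one axis for a uniform
    knot vector: elements*(degree - continuity) + continuity + 1."""
    return elements * (degree - continuity) + continuity + 1

# C1 quadratics on 16 elements, as reported per axis in the view above:
per_axis = bspline_nodes(16, 2, 1)   # 18 nodes per axis
total = per_axis ** 2                # 2D tensor product: 324 nodes in total
print(per_axis, total)
```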

This view command tells you information about the function space that was chosen. We used a 2D quadratic polynomial space with no mapped geometries. Although we use general knot vectors, you can see by the relationship of the nodes to elements that we used *C1* quadratics. If run in parallel, `-iga_view` will also provide information about how the problem was divided. For example, on 2 processors:

```
IGA: dim=2 dof=1 order=2 geometry=0 rational=0 property=0
Axis 0: periodic=0 degree=2 quadrature=3 processors=1 nodes=18 elements=16
Axis 1: periodic=0 degree=2 quadrature=3 processors=2 nodes=18 elements=16
Partitioning - nodes: sum=324 min=144 max=180 max/min=1.25
Partitioning - elements: sum=256 min=128 max=128 max/min=1
```
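The partitioning lines summarize the load balance: `max/min` is the ratio of the largest to the smallest per-process share. A quick check in plain Python, with the per-process node counts copied from the view output above rather than computed:

```python
# Per-process node counts for the 2-processor run shown above.
nodes = [144, 180]

total = sum(nodes)              # 324, matching "sum=324"
ratio = max(nodes) / min(nodes) # 1.25, matching "max/min=1.25"
print(total, ratio)
```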

The axis lines reveal that the second axis was divided across 2 processors. The partitioning information is included to show how balanced the partitioning was in terms of nodes and elements. We can also learn which solver was used by running the code with the following command:

```
./Laplace -ksp_view
```

which generates:

```
KSP Object: 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift to prevent zero pivot
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=5832, cols=5832
package used to perform factorization: petsc
total: nonzeros=592704, allocated nonzeros=592704
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=5832, cols=5832
total: nonzeros=592704, allocated nonzeros=592704
total number of mallocs used during MatSetValues calls =0
not using I-node routines
```

The acronym KSP represents the PETSc abstraction of Krylov subspace methods. In this case, the method used was GMRES with ILU(0) as the preconditioner. This is PETSc's default for linear systems that are not flagged as symmetric. We see that convergence is declared after a relative reduction of the residual by 5 orders of magnitude, with a cap of 10,000 iterations. All of these settings can be changed by command-line options.

We can also monitor the progress of the iterative solver by running with the `-ksp_monitor` option. This yields:

```
0 KSP Residual norm 9.857710770813e+00
1 KSP Residual norm 2.428479391655e+00
2 KSP Residual norm 1.345666842371e+00
3 KSP Residual norm 9.246122694394e-01
4 KSP Residual norm 2.975286675460e-01
5 KSP Residual norm 8.081827277076e-02
6 KSP Residual norm 1.690789970595e-02
7 KSP Residual norm 3.845013201980e-03
8 KSP Residual norm 5.596156980846e-04
9 KSP Residual norm 7.441946439099e-05
```
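The stopping test behind these numbers compares each preconditioned residual norm against the initial one. A quick check in plain Python, with the norms copied from the monitor output above:

```python
# Preconditioned residual norms printed by -ksp_monitor above.
norms = [9.857710770813e+00, 2.428479391655e+00, 1.345666842371e+00,
         9.246122694394e-01, 2.975286675460e-01, 8.081827277076e-02,
         1.690789970595e-02, 3.845013201980e-03, 5.596156980846e-04,
         7.441946439099e-05]

# CONVERGED_RTOL is declared once ||r_k|| / ||r_0|| drops below rtol (1e-5 by default).
reduction = norms[-1] / norms[0]
print(f"{reduction:.2e}")
```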

Here we see the iterations as well as the norm of the preconditioned residual. More concisely, we could just use `-ksp_converged_reason`:

```
Linear solve converged due to CONVERGED_RTOL iterations 10
```

These *view* and *monitor* options are available for most PETSc objects and are useful for debugging and for understanding how methods work and progress.

## Changing Components

The command-line options are not just for querying components. You can also use them to change the behavior of the solver. For example, if you run the Laplace code with `-help` again, locate the block of options for the IGA:

```
IGA options -------------------------------------------------
-iga_dim <-1>: Number of dimensions (IGASetDim)
-iga_dof <1>: Number of DOFs per node (IGASetDof)
-iga_collocation: <FALSE> Use collocation (IGASetUseCollocation)
-iga_processors <-1>: Processor grid (IGASetProcessors)
-iga_periodic <0>: Periodicity (IGAAxisSetPeriodic)
-iga_basis_type <BSPLINE> (choose one of) BSPLINE BERNSTEIN LAGRANGE HIERARCHICAL (IGABasisSetType)
-iga_geometry <>: Specify IGA geometry file (IGARead)
-iga_elements <16>: Elements (IGAAxisInitUniform)
-iga_degree <2>: Degree (IGAAxisSetDegree)
-iga_continuity <-1>: Continuity (IGAAxisInitUniform)
-iga_limits <0>: Limits (IGAAxisInitUniform)
-iga_order <2>: Order (IGASetOrder)
-iga_quadrature <3>: Quadrature points (IGARuleInit)
```

Here are a few of the discretization details that you can change from the command line. The numbers in brackets are the default values. Many of the -1 values are flags for internal defaults. For example, for `-iga_continuity`, the value -1 means that the continuity will be of order *p-1*. So, if we run:

```
./Laplace -iga_dim 3 -iga_elements 10 -iga_degree 3 -iga_continuity 0 -iga_view
```

we obtain a 3D, 10 by 10 by 10 element discretization of *C0* cubics:

```
IGA: dim=3 dof=1 order=3 geometry=0 rational=0 property=0
Axis 0: periodic=0 degree=3 quadrature=4 processors=1 nodes=31 elements=10
Axis 1: periodic=0 degree=3 quadrature=4 processors=1 nodes=31 elements=10
Axis 2: periodic=0 degree=3 quadrature=4 processors=1 nodes=31 elements=10
Partitioning - nodes: sum=29791 min=29791 max=29791 max/min=1
Partitioning - elements: sum=1000 min=1000 max=1000 max/min=1
```

If only one value is specified for options such as `-iga_elements` or `-iga_degree`, then that value will be used in all parametric directions. However, you can also specify different values, delimited by commas, to assign behavior to each direction separately. For example:

```
./Laplace -iga_dim 2 -iga_elements 10,20 -iga_continuity -1,0 -iga_degree 3,4 -iga_view
```

will generate a 2D space of 10 by 20 elements of *C2* cubics by *C0*
quartics:

```
IGA: dim=2 dof=1 order=3 geometry=0 rational=0 property=0
Axis 0: periodic=0 degree=3 quadrature=4 processors=1 nodes=13 elements=10
Axis 1: periodic=0 degree=4 quadrature=5 processors=1 nodes=81 elements=20
Partitioning - nodes: sum=1053 min=1053 max=1053 max/min=1
Partitioning - elements: sum=200 min=200 max=200 max/min=1
```
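This view can again be verified against the uniform-knot dimension count *n(p-k) + k + 1* applied per axis; note also that a degree-*p* axis is integrated with *p+1* quadrature points by default (degree 3 gives 4, degree 4 gives 5), and the global totals are tensor products of the per-axis counts. A sketch in plain Python:

```python
def axis_nodes(elements, degree, continuity):
    # Uniform knot vector: elements*(degree - continuity) + continuity + 1
    return elements * (degree - continuity) + continuity + 1

# Axis 0: C2 cubics on 10 elements; Axis 1: C0 quartics on 20 elements
n0 = axis_nodes(10, 3, 2)   # 13 nodes
n1 = axis_nodes(20, 4, 0)   # 81 nodes
print(n0, n1, n0 * n1, 10 * 20)   # per-axis nodes, total nodes, total elements
```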

Similarly, the solver components can be changed from the command line. For example, we can solve the system using CG with a Jacobi preconditioner:

```
./Laplace -ksp_type cg -pc_type jacobi -ksp_view
```

which produces:

```
KSP Object: 1 MPI processes
type: cg
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=5832, cols=5832
total: nonzeros=592704, allocated nonzeros=592704
total number of mallocs used during MatSetValues calls =0
not using I-node routines
```
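To see what `-ksp_type cg -pc_type jacobi` means algorithmically, here is a minimal preconditioned conjugate gradient iteration with a Jacobi (diagonal) preconditioner applied to a small SPD model problem. This is an illustrative sketch in plain Python/NumPy, not PETSc's implementation; the test matrix is a 1D Laplacian chosen only for the demonstration:

```python
import numpy as np

def cg_jacobi(A, b, rtol=1e-5, maxit=10000):
    """Preconditioned CG with M = diag(A), mirroring -ksp_type cg -pc_type jacobi."""
    Minv = 1.0 / np.diag(A)      # Jacobi preconditioner: invert the diagonal
    x = np.zeros_like(b)         # initial guess is zero, as in the KSP view
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    r0 = np.sqrt(rz)             # preconditioned residual norm at iteration 0
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = Minv * r
        rz_new = r @ z
        if np.sqrt(rz_new) <= rtol * r0:   # relative-tolerance convergence test
            return x, k + 1
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# SPD tridiagonal test system (1D finite-difference Laplacian)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, its = cg_jacobi(A, b)
print(its, np.linalg.norm(A @ x - b))
```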

Here the acronym PC stands for the preconditioner. You can identify the types of KSP and PC that are available by running the code with the `-help` option and finding the entries for `-ksp_type` and `-pc_type`. The available types will be listed there.

## Why Not the Exact Solution?

It is also possible to write your own application-specific options. These are located in their own block when you run the code with the `-help` option:

```
Laplace Options -------------------------------------------------
-collocation: <FALSE> Enable to use collocation (Laplace.c)
-print_error: <FALSE> Prints the L2 error of the solution (Laplace.c)
-check_error: <FALSE> Checks the L2 error of the solution (Laplace.c)
-save: <FALSE> Save the solution to file (Laplace.c)
-draw: <FALSE> If dim <= 2, then draw the solution to the screen (Laplace.c)
```

You will recall that when you ran the code before with `-print_error`, the *L2* norm of the error was printed. You can see now that this is an option we have added to this specific code and is not available to all codes. What may have been somewhat disconcerting is that the number was not closer to zero (you should have seen something on the order of `1e-6`). For the problem we have selected, all our spaces should reproduce the exact solution. The reason we have an error is that the iterative solver is inexact. The linear algebra error leads to discretization error, albeit very small. We can reduce this error by tightening the conditions for convergence:

```
./Laplace -ksp_rtol 1e-8 -print_error
```

which defines convergence as an eight-order-of-magnitude reduction of the norm of the preconditioned residual. Now the *L2* error should be on the order of `1e-10`. We can reduce this further by tightening the tolerances even more, or by using a direct solver:

```
./Laplace -ksp_type preonly -pc_type lu -print_error
```

The preonly KSP applies the PC once and nothing more. It is intended to be used with an LU factorization as the preconditioner, so that you are effectively using a direct solver. Now you will see that the *L2* error has indeed dropped to something on the order of `1e-15`.
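That residual `1e-15`-level error is simply floating-point roundoff in the factorization and solve. The effect is easy to reproduce on a tiny finite-difference analogue of the same boundary-value problem (plain Python/NumPy, not PetIGA): with a direct solve, the computed solution matches *u = 1* to near machine precision.

```python
import numpy as np

# 1D Laplace u'' = 0 on (0,1): Dirichlet u = 1 on the left, free Neumann on
# the right, so the exact solution is u = 1, as in the Laplace demo.
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n)
b[0] = 1.0        # Dirichlet value folded into the right-hand side
A[-1, -1] = 1.0   # one-sided difference enforcing u'(1) = 0

x = np.linalg.solve(A, b)        # direct (LU-based) solve
print(np.max(np.abs(x - 1.0)))   # error at roundoff level
```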

This tutorial highlights a feature of using PetIGA/PETSc to solve PDEs: you immediately have access to a wide variety of expert solvers and preconditioners. Furthermore, you have query tools to examine and study your problems when they fail.