Tests

First install the test suite with make install. Then you can start unit and/or functional tests.
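For reference, the install command is run from the top-level source directory:

$  make install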

Parallel tests assume you can start an MPI job with mpirun, so please check that mpirun works on your machine. On clusters like Neptune, you can connect to a node with

$  qsub -I -lselect=126

and start the Perl scripts directly. Another solution is to submit the Perl script(s) as a job. For example, with PBS, you can use the following file (named custom.batch):

#PBS -N pangolin
#PBS -l select=126
#PBS -o output_80lat.log
#PBS -l walltime=00:45:00
#PBS -j oe

# run from the directory the job was submitted from
cd $PBS_O_WORKDIR
perl t/functional_io_hdf5.t

and then submit it from the tests/ directory with

$  qsub custom.batch

The output of the tests will be in output_80lat.log, as defined in the batch script.

**Warning**

Keep in mind the number of MPI processes you can use on the machine. Some tests require a large number of processes and are best suited for cluster nodes.

Unit tests

Description

Sequential unit tests check that the partitioning is done properly. In particular, they check the cell neighbours, the subdomain neighbours and the subdomain ghost cells.

Parallel unit tests send messages between the different cores to check that communication works properly.

Running the tests

Unit tests are started with:

$  perl tests_run.pl --unit

The output should be:

Sequential unit tests .. ok
Parallel unit tests .... ok
All tests successful.
Files=2, Tests=408, 88 wallclock secs ( 0.44 usr  0.05 sys + 60.80 cusr  9.73
csys = 71.02 CPU)
Result: PASS

For more details, you can set the verbosity to 1 in tests_run.pl.
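If tests_run.pl builds its harness with Perl's TAP::Harness module (an assumption; check the script itself), the change looks like this:

use TAP::Harness;

# Assumption: tests_run.pl constructs its harness like this.
# A verbosity of 1 prints each individual test result.
my $harness = TAP::Harness->new({ verbosity => 1 });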

If you want to start only part of the tests, use either:

$  prove t/unit_sequential.t
$  prove t/unit_parallel.t

It is also possible to use perl directly:

$  perl t/unit_sequential.t
$  perl t/unit_parallel.t

The number of partitions can be changed by editing the relevant test file. Setting n_min and n_max gives the range of partitions (sequential) or processes (parallel) to test. Beware: the number of partitions must always be either 1 or a multiple of 3. By default, parallel tests are aimed at a PC, so the limit is set to 24 processes.
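For example, in t/unit_parallel.t you could set (the exact placement of these variables in the file may differ):

# Illustrative values; counts must be 1 or a multiple of 3.
my $n_min = 3;     # smallest number of processes tested
my $n_max = 24;    # default limit, suited to a PC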

Finally, you can disable parallel unit tests with the --no-parallel flag:

$  perl tests_run.pl --unit --no-parallel

**Warning**

If you print debugging information in the code, the testing suite
might not work anymore: the harness parses the program's standard
output, so extra prints can break it.

Functional tests

Description

Here, we run the complete model with different initial conditions. Pangolin is run in parallel in the so-called Hourdin and Lauritzen configurations (named after papers by these scientists). The output of the parallel version is compared to the output of the sequential version. The parallel version is validated if the difference is less than a given threshold (typically 1e-12). This is done for several numbers of cores, up to the limit fixed in the test.
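Conceptually, the validation amounts to the check sketched below (illustrative only; the real comparison is implemented in the test suite, and the arrays here stand in for tracer ratios read from both outputs):

#!/usr/bin/env perl
use strict;
use warnings;

# Sketch: validate a parallel run against the sequential reference.
my @sequential_ratio = (1.0, 0.5, 0.25);
my @parallel_ratio   = (1.0, 0.5, 0.25);

my $threshold = 1e-12;
my $max_diff  = 0;
for my $i (0 .. $#sequential_ratio) {
    my $diff = abs($sequential_ratio[$i] - $parallel_ratio[$i]);
    $max_diff = $diff if $diff > $max_diff;
}
print $max_diff < $threshold ? "parallel run validated\n"
                             : "difference above threshold\n";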

The I/O tests ensure that reading and writing HDF5 data is done properly. Pangolin is run with 0 iterations and we check that the output data is the same as the input data. This assumes you have enabled the writing of the ratio and both winds in the model. Otherwise, the tests will be skipped.

Running the tests

Functional tests can be started with:

$  perl tests_run.pl --func

**Warning**

If you are using Pangolin with parallel I/O (HDF5), be very careful
about the filesystem you read from and write to. On Neptune (CERFACS),
this means you must do your I/O on /scratch only, as /home does not
support it. At best the code will be slow; at worst it may crash.

**Warning**

Please note that parallel simulations are run only if the output
files do not exist. Otherwise, the existing files will be used in the
comparison to check the output.
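
To force fresh parallel runs, remove the existing output folders first. Assuming the default output locations named below:

$  rm -rf output_hourdin output_lauritzen output_hdf5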

A subset of the tests can be started manually with perl or prove as before:

$  perl t/functional_hourdin.t
$  perl t/functional_lauritzen.t
$  perl t/functional_io_hdf5.t

The first two tests will start a sequential and a parallel advection for comparison. Data is written in subfolders of output_hourdin, output_lauritzen or output_hdf5. While you may specify the location of the input and output folders (see below), you will need the files shown in the table below. We assume the HDF5 format; otherwise, the extension should be .dat.

Nb lat   I/O                       Hourdin                   Lauritzen
------   -----------------------   -----------------------   -----------------------
80       ratio_1_201301010000.h5   ratio_1_201301010000.h5   ratio_1_201301010000.h5
         u_201301010000.h5         u_201301010000.h5         u_201301010000.h5
         v_201301010000.h5         v_201301010000.h5         v_201301010000.h5
160      ratio_1_201301010000.h5   u_201301010000.h5         v_201301010000.h5
320      ratio_1_201301010000.h5   u_201301010000.h5         v_201301010000.h5

Table: Files needed for functional tests

**Note**

We have added the winds as a requirement for the Lauritzen test case. However, the current version of Pangolin still includes these winds internally.

Input and output folders are set by editing the relevant section of the different .t files. Here is an example setting the folders on the Neptune scratch filesystem:

my $folder = "/scratch/ae/praga/";

my $ratio_in = $folder."input/gaussianhills_80lat/";
my $winds_in = $folder."input/cv_winds/80lat/";
my $folder_out = $folder."tests/output_hdf5";

The number of MPI processes can be set manually in the relevant .t file with:

$test->set_nmin($n_min);
$test->set_nmax($n_max);

The number of tracers is 1 by default, but it can be changed with:

$test->set_nbtracers(1);

Do not forget to read the warning at the beginning of this section about the parallel requirements. Finally, you can get more information about the Perl module through its embedded documentation:

$  perldoc Functional.pm

Cleaning

Functional tests generate a lot of data (around 3 GB for all the I/O tests and 500 MB for the others) and a lot of log files, so do not forget to remove the output files when you have finished. Logs can be cleaned with:
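
A minimal approach, assuming the logs are the .log files left in the tests/ directory:

$  rm -f *.log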