Serpent
Overview
Serpent is a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, developed at VTT Technical Research Centre of Finland since 2004.
Versions have been compiled using MPI (small and larger multi-node jobs), OpenMP (single-node multithreaded) and mixed-mode, which combines MPI and OpenMP and may help with jobs that require large amounts of memory.
The software has been compiled from source using the Intel 2020.02 compiler.
Restrictions on use
Access to this software is restricted to a specific research group. Please contact its-ri-team@manchester.ac.uk to request access, indicating you have read and agree to the terms and conditions in the license, detailed below:
We will inform The University of Manchester NEA databank liaison officer of your request to use the software.
Before being permitted to use the software, all users must read and adhere to the license conditions. In particular:
- The code can be used free of charge by licensed organizations for non-commercial research and educational purposes.
- Use of the code to promote the development of weapons of mass destruction is strictly prohibited.
- The code cannot be used outside the Licensee Organization or distributed to a third party.
- VTT and the developers assume no liability for the use of the code or the validity of the results.
Set up procedure
We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.
To access the software you must first load the following modulefile (this gives access to the OpenMP and MPI versions):
module load serpent/2.1.31-iomkl-2020.02
Any other required modulefiles (e.g. MPI) will be loaded automatically by the above modulefile.
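If you want to confirm what has been loaded, the standard environment-modules command lists everything currently in your environment:
module list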
Cross Section Data
The Serpent cross section data supplied with version 1.1.7 is also available in this installation. An environment variable named $SERPENT_XSDATA is set by the above modulefile to give the directory containing the data. To see what is available, run the following on the login node after loading the modulefile:
ls $SERPENT_XSDATA
Your serpent input file may need to refer to these data libraries, in which case you should use their full path. First report what the full path is:
echo $SERPENT_XSDATA
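Based on the example paths shown later on this page, this typically prints the centrally installed data directory, for example:
/mnt/data-sets/serpent/xsdata
Whatever path is printed on your system is the one to use.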
Note that you should check the paths in your input files if you have previously used them on CSF3. The paths on CSF4 might be different – please see below for what to use on CSF4.
Then use that path in your serpent input file. For example it may contain the lines:
set acelib "/mnt/data-sets/serpent/xsdata/jef22/sss_jef22u.xsdata"
set declib "/mnt/data-sets/serpent/xsdata/jef22/sss_jef22.dec"
Alternatively you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named jef22 which points to the central jef22 directory, run the following on the login node or in your jobscript:
ln -s $SERPENT_XSDATA/jef22
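To check that the shortcut points at the central directory, a quick sanity check with a standard shell command is:
ls -l jef22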
Then in your serpent input file you can use the much shorter path:
set acelib "./jef22/sss_jef22u.xsdata"
set declib "./jef22/sss_jef22.dec"
To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut – it won’t touch the centrally installed data.
rm jef22
Photon Data
As of version 2.1.24 photon data can be read by Serpent. As with the cross section data above, once you have loaded the modulefile you can access the photon data using an environment variable, $SERPENT_PHOTON_DATA. For example:
ls $SERPENT_PHOTON_DATA
Your serpent input file may need to refer to these data libraries, in which case you should use their full path. First report what the full path is:
echo $SERPENT_PHOTON_DATA
Note that you should check the paths in your input files if you have previously used them on CSF3. The paths on CSF4 might be different – please see below for what to use on CSF4.
Then use that path in your serpent input file. The full path to the cohff.dat file, for example, is:
/mnt/data-sets/serpent/photon_data/cohff.dat
Alternatively you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named photon_data which points to the central photon_data directory, run the following on the login node or in your jobscript:
ln -s $SERPENT_PHOTON_DATA
Then in your serpent input file you can use the much shorter path:
./photon_data/cohff.dat
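If instead your input sets the photon data location as a directory, a minimal sketch would be the line below; the option name set pdatadir is assumed here from recent Serpent 2 releases, so verify it against the input syntax manual for your version:
set pdatadir "./photon_data"   % option name assumed - check the Serpent manual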
To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut – it won’t touch the centrally installed data.
rm photon_data
Optimizing Memory Usage in Burn-up Calculations
Some burn-up calculations can consume large amounts of memory. If your jobs are not completing due to memory limits being exceeded, there is an optimization setting that can be applied in the Serpent input deck. Please see the following section in the Serpent manual for advice: http://serpent.vtt.fi/mediawiki/index.php/Input_syntax_manual#set_opti.
Thanks to Jeremy Owston for this tip.
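As a minimal sketch of the relevant input-deck line (the mode value used here is only an illustration; see the manual page above for which optimization mode suits your calculation):
set opti 1   % lower optimization mode to reduce memory usage (value chosen for illustration)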
Running the application
Please do not run Serpent on the login node. Jobs should be submitted to the compute nodes via batch.
There are three executables:
- sss2 – MPI version for single-node and multi-node parallel jobs
- sss2-omp – OpenMP (multithreaded) version for single-node parallel jobs
- sss2-mixed – MPI+OpenMP (multithreaded) mixed-mode version for multi-node parallel jobs
Unless using the OpenMP version, all executables should be run as MPI applications (see below for example jobscripts).
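For quick reference, the launch lines used in the example jobscripts below are:
sss2-omp your_input_file                                                      # OpenMP version (set OMP_NUM_THREADS first)
mpirun sss2 your_input_file                                                   # MPI version (single- or multi-node)
mpirun --map-by ppr:1:socket:pe=$OMP_NUM_THREADS sss2-mixed your_input_file   # mixed-mode version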
Below are examples for the following types of jobs:
- Small (single-node) Parallel batch job submission (OpenMP version)
- Small (single-node) Parallel batch job submission (MPI version)
- Large (multi-node) Parallel batch job submission (MPI version)
- Small (single-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)
- Large (multi-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)
Small (single-node) Parallel batch job submission (OpenMP version)
The OpenMP version can only be used on a single compute node, but will use multiple cores.
Note that the serpent program name is sss2-omp for the OpenMP version.
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multicore      # (or --partition=multicore) Single-node parallel
#SBATCH -n 16             # (or --ntasks=16) Number of cores (2--40)

module purge              # Remove any modulefiles inherited from login node
module load serpent/2.1.31-iomkl-2020.02

### You MUST say how many OpenMP threads to use. $SLURM_NTASKS is automatically
### set to the number requested on the -n (--ntasks) line above.
export OMP_NUM_THREADS=$SLURM_NTASKS

sss2-omp your_input_file
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Small (single-node) Parallel batch job submission (MPI version)
This example uses the MPI version on multiple CPU cores within a single compute node (see below for larger multi-node MPI jobs).
Note that the serpent program name is sss2 for the MPI version (NOT sss2-mpi!).
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multicore      # (or --partition=multicore) Single-node parallel
#SBATCH -n 16             # (or --ntasks=16) Number of cores (2--40)

module purge              # Remove any modulefiles inherited from login node
module load serpent/2.1.31-iomkl-2020.02

# mpirun knows how many cores to use
mpirun sss2 your_input_file
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Note: some versions of serpent allow you to pass a -mpi flag on the serpent command-line rather than using mpirun. This will cause serpent to crash on the CSF. You must use the mpirun method of starting serpent as shown in the example above.
Large (multi-node) Parallel batch job submission (MPI version)
This example uses the MPI version on multiple CPU cores within multiple compute nodes.
Note that the serpent program name is sss2 for the MPI version (NOT sss2-mpi!).
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multinode      # (or --partition=multinode) Multi-node job
#SBATCH -n 80             # (or --ntasks=80) 80 or more cores in multiples of 40

module purge              # Remove any modulefiles inherited from login node
module load serpent/2.1.31-iomkl-2020.02

# mpirun knows how many cores to use
mpirun sss2 your_input_file
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Note: some versions of serpent allow you to pass a -mpi flag on the serpent command-line rather than using mpirun. This will cause serpent to crash on the CSF. You must use the mpirun method of starting serpent as shown in the example above.
Small (single-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)
The mixed-mode version of serpent uses a combination of MPI processes and OpenMP threads. Each MPI process runs multiple OpenMP threads to perform calculations using multi-core OpenMP methods. By using a small number of MPI processes, each with a larger number of OpenMP threads, the relatively slow communication between many MPI processes is reduced in favour of faster communication between the OpenMP threads. The number of MPI processes multiplied by the number of OpenMP threads per process should equal the total number of cores requested in your job.
This is intended to provide a happy medium between running large multi-node jobs and small single-node jobs. We do, however, recommend that you test the performance of this version with your input data. For small simulations, running the ordinary OpenMP version (see above) may well be faster.
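For example, the single-node jobscript below pairs the following two settings, giving 2 x 20 = 40 cores, i.e. one full 40-core node:
#SBATCH -n 2     # 2 MPI processes in total
#SBATCH -c 20    # 20 OpenMP threads per MPI process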
The following example will use the mixed-mode version on a single compute node. See later for a larger multi-node mixed-mode example job.
Note that the serpent program name is sss2-mixed for the MPI+OpenMP mixed-mode version.
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multicore      # We are using just one node in this small mixed-mode example
#SBATCH -N 1              # 1 compute node (a node has 40 cores available)
#SBATCH -n 2              # 2 MPI processes in total
#SBATCH -c 20             # 20 cores to be used by each MPI process

module purge              # Remove any modulefiles inherited from login node
module load serpent/2.1.31-iomkl-2020.02

# Inform each MPI process how many OpenMP threads to use (20 in this example)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Run the MPI processes (mpirun knows to use 1 compute node running 2 processes
# in this example). Each MPI process will use 20 cores.
#
# ppr means 'processes per resource', where the number of processes is 1 and the
# resource is 'socket' in this example. Hence each MPI process will run on its
# own socket. The node has two sockets.
mpirun --map-by ppr:1:socket:pe=$OMP_NUM_THREADS sss2-mixed your_input_file
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Large (multi-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)
As with the single-node case above, the mixed-mode version of serpent uses a combination of MPI processes and OpenMP threads, with each MPI process running multiple OpenMP threads. The number of MPI processes multiplied by the number of OpenMP threads per process should equal the total number of cores requested in your job.
Again, we recommend testing the performance of this version with your input data; for small simulations, running the ordinary OpenMP version (see earlier) may well be faster.
The following example will use the mixed-mode version across multiple compute nodes.
Note that the serpent program name is sss2-mixed for the MPI+OpenMP mixed-mode version.
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multinode      # We are using multiple nodes in this large mixed-mode example
#SBATCH -N 3              # 3 compute nodes (each node has 40 cores available)
#SBATCH -n 6              # 6 MPI processes in total (2 per compute node)
#SBATCH -c 20             # 20 cores to be used by each MPI process

module purge              # Remove any modulefiles inherited from login node
module load serpent/2.1.31-iomkl-2020.02

# Inform each MPI process how many OpenMP threads to use (20 in this example)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Run the MPI processes (mpirun knows to use 3 compute nodes, each running 2
# processes in this example). Each MPI process will use 20 cores.
#
# ppr means 'processes per resource', where the number of processes is 1 and the
# resource is 'socket' in this example. Hence each MPI process will run on its
# own socket. Each node has two sockets.
mpirun --map-by ppr:1:socket:pe=$OMP_NUM_THREADS sss2-mixed your_input_file
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Further info
- Serpent website, which provides the Serpent manual (PDF)
- Serpent forum.
Updates
None.