Serpent
Overview
Serpent is a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, developed at VTT Technical Research Centre of Finland since 2004.
Versions have been compiled using MPI (for small single-node and larger multi-node jobs), OpenMP (single-node multithreaded) and mixed-mode, which combines MPI and OpenMP and may help with jobs that require a large amount of memory.
The software has been compiled from source using the Intel 17.0.7 compiler.
Restrictions on use
Access to this software is restricted to a specific research group. Please contact its-ri-team@manchester.ac.uk to request access, indicating you have read and agree to the terms and conditions in the license, detailed below:
We will inform the University of Manchester NEA databank liaison officer of your request to use the software.
Before being permitted to use the software, all users must read and adhere to the license conditions. In particular:
- The code can be used free of charge by licensed organizations for non-commercial research and educational purposes.
- Use of the code to promote the development of weapons of mass destruction is strictly prohibited.
- The code cannot be used outside the Licensee Organization or distributed to a third party.
- VTT and the developers assume no liability for the use of the code or the validity of the results.
Set up procedure
We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.
To access the software you must first load one of the following modulefiles (these give access to the OpenMP and MPI versions):
# Load one of the following modulefiles - whichever version you need
module load apps/intel-17.0/serpent/2.1.31
module load apps/intel-17.0/serpent/2.1.30
module load apps/intel-17.0/serpent/2.1.29
module load apps/intel-17.0/serpent/2.1.27
Any other required modulefiles (e.g. MPI) will be loaded automatically by the above modulefile.
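As an optional quick check after loading a modulefile, you can confirm on the login node that the executables described later are on your path and that the data directory variable is set. This sketch assumes the 2.1.31 modulefile; use whichever version you loaded:
# Optional sanity check on the login node
module load apps/intel-17.0/serpent/2.1.31
which sss2 sss2-omp sss2-mixed      # should report the central install locations
echo $SERPENT_XSDATA                # should report the cross section data directory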
Cross Section Data
The Serpent cross section data supplied with version 1.1.7 is available in all of the above versions. An environment variable named $SERPENT_XSDATA is set by all of the above modulefiles to give the name of the directory containing the data. To see what is available, run the following on the login node after loading one of the above modulefiles:
ls $SERPENT_XSDATA
Your Serpent input file may need to refer to these data libraries, in which case you should use their full paths. For example, first report what the full path is:
echo $SERPENT_XSDATA
Then use that path in your serpent input file. For example it may contain the lines:
set acelib "/opt/apps/apps/intel-17.0/serpent/2.1.24/xsdata/jef22/sss_jef22u.xsdata" set declib "/opt/apps/apps/intel-17.0/serpent/2.1.24/xsdata/jef22/sss_jef22.dec"
Alternatively you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named jef22, which points to the central jef22 directory, run the following on the login node or in your jobscript:
ln -s $SERPENT_XSDATA/jef22
Then in your serpent input file you can use the much shorter path:
set acelib "./jef22/sss_jef22u.xsdata" set declib "./jef22/sss_jef22.dec"
To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut – it won’t touch the centrally installed data.
rm jef22
Photon Data
As of version 2.1.24, photon data can be read by Serpent. As with the cross section data above, once you have loaded the modulefile you can access the photon data using an environment variable, $SERPENT_PHOTON_DATA. For example:
ls $SERPENT_PHOTON_DATA
Your Serpent input file may need to refer to these data libraries, in which case you should use their full paths. For example, first report what the full path is:
echo $SERPENT_PHOTON_DATA
Then use that path in your Serpent input file. The full path to the cohff.dat file, for example, is:
/opt/apps/apps/intel-17.0/serpent/2.1.24/photon_data/cohff.dat
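How the photon data location appears in the input file depends on your Serpent version. One commonly used form is the set pdatadir option; it is shown below as an assumption rather than a confirmed recipe, so please check the Serpent manual for the exact syntax your version expects:
% Assumed input-file syntax - consult the Serpent manual for your version
set pdatadir "/opt/apps/apps/intel-17.0/serpent/2.1.24/photon_data"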
Alternatively you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named photon_data, which points to the central photon_data directory, run the following on the login node or in your jobscript:
ln -s $SERPENT_PHOTON_DATA
Then in your Serpent input file you can use the much shorter path:
./photon_data/cohff.dat
To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut – it won’t touch the centrally installed data.
rm photon_data
Running the application
Please do not run Serpent on the login node. Jobs should be submitted to the compute nodes via batch.
The executables are named as follows:
- sss2 – MPI version for single-node and multi-node parallel jobs
- sss2-omp – OpenMP (multithreaded) version for single-node parallel jobs
- sss2-mixed – MPI+OpenMP (multithreaded) mixed-mode version for single-node and multi-node parallel jobs
Unless you are using the OpenMP version, the executables should be started as MPI applications with mpirun (see below for example jobscripts).
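For reference, the three launch styles used in the example jobscripts below are summarised here; the variables $NSLOTS, $MPI_TOTAL and $OMP_PER_MPIPROC are set within those jobscripts:
# OpenMP version: threads only, no mpirun (set OMP_NUM_THREADS first)
sss2-omp your_input_file

# MPI version: started via mpirun
mpirun -np $NSLOTS sss2 your_input_file

# Mixed-mode version: mpirun plus -omp to give the threads per MPI process
mpirun -np $MPI_TOTAL --map-by node sss2-mixed -omp $OMP_PER_MPIPROC your_input_file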
Below are examples for the following types of jobs:
- Small (single-node) Parallel batch job submission (OpenMP version)
- Small (single-node) Parallel batch job submission (MPI version)
- Large (multi-node) Parallel batch job submission (MPI version)
- Small (single-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)
- Large (multi-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)
Small (single-node) Parallel batch job submission (OpenMP version)
The OpenMP version can be used only on a single compute node but will use multiple cores.
Note that the Serpent program name is sss2-omp for the OpenMP version.
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd              # Job will run from the current directory
#$ -pe smp.pe 4      # Max 32 cores allowed (a single node)

### We now load the modulefile in the jobscript, for example:
module load apps/intel-17.0/serpent/2.1.31

### You MUST say how many OpenMP threads to use. $NSLOTS is automatically
### set to the number requested on the -pe line above.
export OMP_NUM_THREADS=$NSLOTS

sss2-omp your_input_file
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
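Once submitted, you can monitor the job with the usual batch commands. This is a minimal sketch assuming the standard SGE tools that accompany qsub; the job id shown is just an example:
qstat                    # shows whether your job is queued (qw) or running (r)
cat scriptname.o12345    # terminal output is written to <jobscript>.o<jobid> in the job directory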
Small (single-node) Parallel batch job submission (MPI version)
This example uses the MPI version on multiple CPU cores within a single compute node (see below for larger multi-node MPI jobs).
Note that the Serpent program name is sss2 for the MPI version (NOT sss2-mpi!).
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd              # Job will run from the current directory
#$ -pe smp.pe 16     # 2--32 cores allowed in single compute-node jobs

### We now load the modulefile in the jobscript, for example:
module load apps/intel-17.0/serpent/2.1.31

### $NSLOTS is automatically set to number of cores requested above
mpirun -np $NSLOTS sss2 your_input_file
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Note: some versions of Serpent allow you to pass a -mpi flag on the Serpent command line rather than using mpirun. This will cause Serpent to crash on the CSF. You must use the mpirun method of starting Serpent as shown in the example above.
Large (multi-node) Parallel batch job submission (MPI version)
This example uses the MPI version on multiple CPU cores across multiple compute nodes.
Note that the Serpent program name is sss2 for the MPI version (NOT sss2-mpi!).
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd                  # Job will run from the current directory
#$ -pe mpi-24-ib.pe 48   # Must be at least 48 cores and in multiples of 24

### We now load the modulefile in the jobscript, for example:
module load apps/intel-17.0/serpent/2.1.31

### $NSLOTS is automatically set to number of cores requested above
mpirun -np $NSLOTS sss2 your_input_file
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Note: some versions of Serpent allow you to pass a -mpi flag on the Serpent command line rather than using mpirun. This will cause Serpent to crash on the CSF. You must use the mpirun method of starting Serpent as shown in the example above.
Small (single-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)
The mixed-mode version of Serpent uses a combination of MPI processes and OpenMP threads. Each MPI process uses multiple OpenMP threads to perform its calculations using multi-core OpenMP methods. By using a small number of MPI processes, each with a larger number of OpenMP threads, the relatively slow communication between many MPI processes is reduced in favour of faster communication between the OpenMP threads. The number of MPI processes multiplied by the number of OpenMP threads per process should equal the total number of cores requested in your job.
This is intended to provide a happy medium between running large multi-node jobs and small single-node jobs. We do, however, recommend that you test the performance of this version with your input data. For small simulations, running the ordinary OpenMP version (see above) may well be faster.
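As a quick sanity check of the core split, you can reproduce the arithmetic used in the jobscript below by hand. The values here are hypothetical, for a 32-core single-node request with 2 MPI processes per node; in a real job $NSLOTS and $NHOSTS are set automatically by the batch system:
# Hypothetical example values
NSLOTS=32                                 # total cores requested
NHOSTS=1                                  # number of compute nodes (1 for a single-node job)
MPIPROCS_PER_NODE=2                       # MPI processes per compute node
MPI_TOTAL=$((NHOSTS*MPIPROCS_PER_NODE))   # 2 MPI processes in total
OMP_PER_MPIPROC=$((NSLOTS/MPI_TOTAL))     # 16 OpenMP threads per MPI process
echo "$MPI_TOTAL MPI procs x $OMP_PER_MPIPROC threads = $((MPI_TOTAL*OMP_PER_MPIPROC)) cores"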
The following example will use the mixed-mode version on a single compute-node. See later for a larger multi-node mixed-mode example job.
Note that the Serpent program name is sss2-mixed for the MPI+OpenMP mixed-mode version.
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd              # Job will run from the current directory
#$ -pe smp.pe 32     # Max 32 cores allowed (a single compute node)

### We now load the modulefile in the jobscript, for example:
module load apps/intel-17.0/serpent/2.1.31

### $NSLOTS is automatically set to number of cores requested above.
### We want this many MPI processes PER COMPUTE-NODE (CHANGE AS REQUIRED).
### This is usually a small number so that we can use more cores for OpenMP.
MPIPROCS_PER_NODE=2

### Calc total MPI procs and number of OpenMP threads per MPI proc (calculated, DO NOT CHANGE)
MPI_TOTAL=$((NHOSTS*MPIPROCS_PER_NODE))
OMP_PER_MPIPROC=$((NSLOTS/MPI_TOTAL))

### Now start serpent using some extra flags for mixed-mode
mpirun -np $MPI_TOTAL --map-by node sss2-mixed -omp $OMP_PER_MPIPROC your_input_file
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Large (multi-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)
The mixed-mode version of Serpent uses a combination of MPI processes and OpenMP threads. Each MPI process uses multiple OpenMP threads to perform its calculations using multi-core OpenMP methods. By using a small number of MPI processes, each with a larger number of OpenMP threads, the relatively slow communication between many MPI processes is reduced in favour of faster communication between the OpenMP threads. The number of MPI processes multiplied by the number of OpenMP threads per process should equal the total number of cores requested in your job.
This is intended to provide a happy medium between running large multi-node jobs and small single-node jobs. We do, however, recommend that you test the performance of this version with your input data. For small simulations, running the ordinary OpenMP version (see earlier) may well be faster.
The following example will use the mixed-mode version across multiple compute-nodes.
Note that the Serpent program name is sss2-mixed for the MPI+OpenMP mixed-mode version.
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd                  # Job will run from the current directory
#$ -pe mpi-24-ib.pe 48   # Must be at least 48 cores and in multiples of 24

### We now load the modulefile in the jobscript, for example:
module load apps/intel-17.0/serpent/2.1.31

### $NSLOTS is automatically set to number of cores requested above.
### We want this many MPI processes PER COMPUTE-NODE (CHANGE AS REQUIRED).
### This is usually a small number so that we can use more cores for OpenMP.
### (in this example we request 48 cores in total which gives us
### two 24-core compute nodes. We'll run 2 MPI processes on each compute node)
MPIPROCS_PER_NODE=2

### Calc total MPI procs and number of OpenMP threads per MPI proc (calculated, DO NOT CHANGE)
MPI_TOTAL=$((NHOSTS*MPIPROCS_PER_NODE))
OMP_PER_MPIPROC=$((NSLOTS/MPI_TOTAL))

### Now start serpent using some extra flags for mixed-mode
mpirun -np $MPI_TOTAL --map-by node sss2-mixed -omp $OMP_PER_MPIPROC your_input_file
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Further info
- Serpent website, which provides the Serpent manual (PDF)
- Serpent forum.
- CSF Parallel Environments
Updates
None.