The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date; please read the CSF3 documentation instead.
Serpent
Overview
Serpent is a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, developed at VTT Technical Research Centre of Finland since 2004.
Versions 1.1.7, 1.1.19, 2.1.0, 2.1.21, 2.1.23, 2.1.24, 2.1.25, 2.1.26, 2.1.27, 2.1.28, 2.1.29 and 2.1.30 are installed on the CSF (version 2.1.22 did not compile correctly and so is not available). All versions support MPI-based parallelism. Versions 2.1.0, 2.1.21 and 2.1.23 through 2.1.30 are also available as OpenMP (multithreaded) parallel builds.
The software has been compiled from source using the Intel 12.0.5 compiler.
Restrictions on use
Access to this software is restricted to a specific research group. Please contact its-ri-team@manchester.ac.uk to request access, indicating you have read and agree to the terms and conditions in the license, detailed below:
We will inform the University of Manchester NEA databank liaison officer of your request to use the software.
Before being permitted to use the software, all users must read and adhere to the license conditions. In particular:
- The code can be used free of charge by licensed organizations for non-commercial research and educational purposes.
- Use of the code to promote the development of weapons of mass destruction is strictly prohibited.
- The code cannot be used outside the Licensee Organization or distributed to a third party.
- VTT and the developers assume no liability for the use of the code or the validity of the results.
Set up procedure
To access the software you must first load one of the following modulefiles:
# OpenMP (multi-thread), single compute-node only (no MPI)
module load apps/intel-12.0/serpent/2.1.30-omp                    # NB: possible fatal error when trying to track nuclide inventories
module load apps/intel-12.0/serpent/2.1.29-omp
module load apps/intel-12.0/serpent/2.1.28-fix-omp                # Fixes bugs in readinput.c and pretrans.c
module load apps/intel-12.0/serpent/2.1.27-omp
module load apps/intel-12.0/serpent/2.1.26-patch-pre-2.1.27-omp   # Fixes several bugs in 2.1.26
module load apps/intel-12.0/serpent/2.1.26-fix-omp                # Fixes bugs in rroutput.c
module load apps/intel-12.0/serpent/2.1.26-omp
module load apps/intel-12.0/serpent/2.1.25-fix-omp                # Fixes bugs in geometryplotter.c and burnmatcompositions.c
module load apps/intel-12.0/serpent/2.1.25-omp
module load apps/intel-12.0/serpent/2.1.24-omp
module load apps/intel-12.0/serpent/2.1.23-omp
module load apps/intel-12.0/serpent/2.1.21-fix-omp                # Fixes a bug in coldet.c
module load apps/intel-12.0/serpent/2.1.21-omp
module load apps/intel-12.0/serpent/2.1.0-omp

# MPI versions (for use with InfiniBand connected nodes - for multi-node MPI jobs)
module load apps/intel-12.0/serpent/2.1.30-ib                     # NB: possible fatal error when trying to track nuclide inventories
module load apps/intel-12.0/serpent/2.1.29-ib
module load apps/intel-12.0/serpent/2.1.28-fix-ib                 # Fixes bugs in readinput.c and pretrans.c
module load apps/intel-12.0/serpent/2.1.27-ib
module load apps/intel-12.0/serpent/2.1.26-patch-pre-2.1.27-ib    # Fixes several bugs in 2.1.26
module load apps/intel-12.0/serpent/2.1.26-fix-ib                 # Fixes bugs in rroutput.c
module load apps/intel-12.0/serpent/2.1.26-ib
module load apps/intel-12.0/serpent/2.1.25-fix-ib                 # Fixes bugs in geometryplotter.c and burnmatcompositions.c
module load apps/intel-12.0/serpent/2.1.25-ib
module load apps/intel-12.0/serpent/2.1.24-ib
module load apps/intel-12.0/serpent/2.1.23-ib
module load apps/intel-12.0/serpent/2.1.21-fix-ib                 # Fixes a bug in coldet.c
module load apps/intel-12.0/serpent/2.1.21-ib
module load apps/intel-12.0/serpent/2.1.0-ib
module load apps/intel-12.0/serpent/1.1.19-ib
module load apps/intel-12.0/serpent/1.1.7-ib

# MPI versions (slower than InfiniBand - for single node MPI jobs)
module load apps/intel-12.0/serpent/2.1.30                        # NB: possible fatal error when trying to track nuclide inventories
module load apps/intel-12.0/serpent/2.1.29
module load apps/intel-12.0/serpent/2.1.28-fix                    # Fixes bugs in readinput.c and pretrans.c
module load apps/intel-12.0/serpent/2.1.27
module load apps/intel-12.0/serpent/2.1.26-patch-pre-2.1.27       # Fixes several bugs in 2.1.26
module load apps/intel-12.0/serpent/2.1.26-fix                    # Fixes bugs in rroutput.c
module load apps/intel-12.0/serpent/2.1.26
module load apps/intel-12.0/serpent/2.1.25-fix                    # Fixes bugs in geometryplotter.c and burnmatcompositions.c
module load apps/intel-12.0/serpent/2.1.25
module load apps/intel-12.0/serpent/2.1.24
module load apps/intel-12.0/serpent/2.1.23
module load apps/intel-12.0/serpent/2.1.21-fix                    # Fixes a bug in coldet.c
module load apps/intel-12.0/serpent/2.1.21
module load apps/intel-12.0/serpent/2.1.0
module load apps/intel-12.0/serpent/1.1.19
module load apps/intel-12.0/serpent/1.1.7
Any other required modulefiles (e.g. MPI) will be loaded automatically by the above modulefiles.
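For example, to set up the most recent OpenMP build you could load its modulefile and then check what ended up in your environment (the version shown here is just an example; choose whichever modulefile suits your job and the type of parallelism you need):
module load apps/intel-12.0/serpent/2.1.30-omp
module list    # the serpent modulefile, plus any automatically loaded dependencies, should appear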
Cross Section Data
The Serpent cross section data supplied with version 1.1.7 is available in all of the above versions. An environment variable named $SERPENT_XSDATA is set by all of the above modulefiles to give the name of the directory containing the data. To see what is available, run the following on the login node after loading one of the above modulefiles:
ls $SERPENT_XSDATA
Your serpent input file may need to refer to these data libraries, in which case you should use their full path. For example, first report what the full path is:
echo $SERPENT_XSDATA
Then use that path in your serpent input file. For example it may contain the lines:
set acelib "/opt/gridware/apps/intel-12.0/serpent/2.1.24/xsdata/jef22/sss_jef22u.xsdata" set declib "/opt/gridware/apps/intel-12.0/serpent/2.1.24/xsdata/jef22/sss_jef22.dec"
Alternatively you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named jef22 which points to the central jef22 directory, run the following on the login node or in your jobscript:
ln -s $SERPENT_XSDATA/jef22
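If you want to confirm the link was created correctly, list it; it should point at the central jef22 directory:
ls -l jef22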
Then in your serpent input file you can use the much shorter path:
set acelib "./jef22/sss_jef22u.xsdata" set declib "./jef22/sss_jef22.dec"
To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut – it won’t touch the centrally installed data.
rm jef22
Photon Data
As of version 2.1.24, photon data can be read by Serpent. As with the cross section data above, once you have loaded the 2.1.24 (or later) modulefile you can access the photon data using an environment variable, $SERPENT_PHOTON_DATA. For example:
ls $SERPENT_PHOTON_DATA
Your serpent input file may need to refer to these data libraries, in which case you should use their full path. For example, first report what the full path is:
echo $SERPENT_PHOTON_DATA
Then use that path in your serpent input file. The full path to the cohff.dat file, for example, is:
/opt/gridware/apps/intel-12.0/serpent/2.1.24/photon_data/cohff.dat
Alternatively you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named photon_data which points to the central photon_data directory, run the following on the login node or in your jobscript:
ln -s $SERPENT_PHOTON_DATA
Then in your serpent input file you can use the much shorter path:
./photon_data/cohff.dat
To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut – it won’t touch the centrally installed data.
rm photon_data
Running the application
Please do not run Serpent on the login node. Jobs should be submitted to the compute nodes via batch.
The executable to run is named as follows:
- sss (if using version 1.x.x)
- sss2 (if using version 2.x.x)
- sss2-omp (if using version 2.x.x-omp)
Unless using the OpenMP version all executables should be run as MPI applications (see below for example jobscripts).
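For example, after loading a modulefile you can confirm which executable it has put on your path (sss2 is shown here; substitute sss or sss2-omp as appropriate for the version you loaded):
which sss2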
Parallel batch job submission (OpenMP version)
The OMP version can be used on only a single compute node but will use multiple cores.
Make sure you have the modulefile loaded (see above) then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd            # Job will run from the current directory
#$ -V              # Job will inherit current environment settings
#$ -pe smp.pe 4    # Max 24 cores allowed (a single node)

### You MUST say how many OpenMP threads to use. $NSLOTS is automatically
### set to the number requested on the -pe line above.
export OMP_NUM_THREADS=$NSLOTS

sss2-omp
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
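Once submitted, you can check the job's status from the login node with the standard batch system command (the job id is printed by qsub when you submit):
qstat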
Parallel batch job submission (MPI versions)
The MPI version can be used across multiple compute nodes and also on multiple cores of a single compute node.
Make sure you have the modulefile loaded (see above) then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd                    # Job will run from the current directory
#$ -V                      # Job will inherit current environment settings

### Choose ONE of the following lines for parallel running:
### (the number of cores is just an example)
#$ -pe smp.pe 4            # Max 24 cores allowed (a single compute node)
#$ -pe orte-24-ib.pe 48    # Minimum is 48 and must be multiples of 24 cores

### $NSLOTS is automatically set to number of cores requested above
mpirun -np $NSLOTS sss
# or
mpirun -np $NSLOTS sss2
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Note: some versions of serpent allow you to pass a -mpi flag on the serpent command line rather than using mpirun. This will cause serpent to crash on the CSF. You must use the mpirun method of starting serpent, as shown in the example above.
Further info
- Serpent website, which provides a Serpent manual (pdf)
- Serpent forum.
- CSF Parallel Environments
Updates
None.