{"id":512,"date":"2018-10-10T11:58:19","date_gmt":"2018-10-10T10:58:19","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=512"},"modified":"2026-03-20T14:46:38","modified_gmt":"2026-03-20T14:46:38","slug":"serpent","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/serpent\/","title":{"rendered":"Serpent"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"https:\/\/serpent.vtt.fi\/\">Serpent<\/a> is a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, developed at VTT Technical Research Centre of Finland since 2004.<\/p>\n<p>Version have been compiled using MPI (small and larger multi-node jobs), OpenMP (single-node multithreaded) and <em>Mixed-mode<\/em>, combining MPI and OpenMP which might help with jobs that require large memory.<\/p>\n<p>The software has been compiled from source.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>Access to this software is restricted to a specific research group. All users must apply for a licence through the NEA Databank. Once approved, they should attach the licence when submitting their access request. Please <a href=\"\/csf3\/overview\/help\/\">contact us<\/a> to request access, confirming that you have read and agree to the licence terms and the conditions outlined below.<\/p>\n<p>Before using the software, all users must read and comply with the licence conditions. 
In particular:<\/p>\n<ul>\n<li>The code may be used free of charge by licensed organisations for non\u2011commercial research and educational purposes.<\/li>\n<li>Any use that promotes the development of weapons of mass destruction is strictly prohibited.<\/li>\n<li>The code may not be used outside the Licensee Organisation or distributed to third parties.<\/li>\n<li>VTT and the developers assume no liability for the use of the code or the validity of the results.<\/li>\n<\/ul>\n<h2>Set up procedure<\/h2>\n<p>We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.<\/p>\n<p>To access the software you must first load one of the following modulefiles (each gives access to the OpenMP and MPI versions):<\/p>\n<pre class=slurm>\r\n# Load <em>one<\/em> of the following modulefiles - whichever version you need\r\nmodule load apps\/gcc\/serpent\/2.2.2\r\nmodule load apps\/gcc\/serpent\/2.2\r\nmodule load apps\/gcc\/serpent\/2.1.31\r\n<\/pre>\n<p>Any other required modulefiles (e.g. MPI) will be loaded automatically by the above modulefile.<\/p>\n<h2>Cross Section Data<\/h2>\n<p>The Serpent cross section data supplied with version 1.1.7 is available in all of the above versions. An environment variable named <code>$SERPENT_XSDATA<\/code> is set by all of the above modulefiles to give the directory name containing the data. To see what is available run the following on the login node after loading one of the above modulefiles:<\/p>\n<pre>\r\nls $SERPENT_XSDATA\r\n<\/pre>\n<p>Your serpent input file may need to refer to these data libraries, in which case you should use the full path to the data libraries. 
For example, first report what the full path is:<\/p>\n<pre>\r\necho $SERPENT_XSDATA\r\n<\/pre>\n<p>Then use that path in your serpent input file. For example, it may contain the lines:<\/p>\n<pre class=slurm>\r\nset acelib \"\/opt\/apps\/apps\/gcc\/serpent\/<em>2.2<\/em>\/xsdata\/jef22\/sss_jef22u.xsdata\"\r\nset declib \"\/opt\/apps\/apps\/gcc\/serpent\/<em>2.2<\/em>\/xsdata\/jef22\/sss_jef22.dec\"\r\n<\/pre>\n<p>Alternatively you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named <code>jef22<\/code> which points to the central <code>jef22<\/code> directory, run the following on the login node or in your jobscript:<\/p>\n<pre>\r\nln -s $SERPENT_XSDATA\/jef22\r\n<\/pre>\n<p>Then in your serpent input file you can use the much shorter path:<\/p>\n<pre>\r\nset acelib \".\/jef22\/sss_jef22u.xsdata\"\r\nset declib \".\/jef22\/sss_jef22.dec\"\r\n<\/pre>\n<p>To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut &#8211; it won&#8217;t touch the centrally installed data.<\/p>\n<pre>\r\nrm jef22\r\n<\/pre>\n<h2>Photon Data<\/h2>\n<p>As of version 2.1.24 photon data can be read by Serpent. As with the cross section data above, once you have loaded the modulefile you can access the photon data using an environment variable, <code>$SERPENT_PHOTON_DATA<\/code>. For example:<\/p>\n<pre>\r\nls $SERPENT_PHOTON_DATA\r\n<\/pre>\n<p>Your serpent input file may need to refer to these data libraries, in which case you should use the full path to the data libraries. For example, first report what the full path is:<\/p>\n<pre>\r\necho $SERPENT_PHOTON_DATA\r\n<\/pre>\n<p>Then use that path in your serpent input file. 
The full path to the <code>cohff.dat<\/code> file, for example, is:<\/p>\n<pre class=slurm>\r\n\/opt\/apps\/apps\/gcc\/serpent\/<em>2.2<\/em>\/photon_data\/cohff.dat\r\n<\/pre>\n<p>Alternatively you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named <code>photon_data<\/code> which points to the central <code>photon_data<\/code> directory, run the following on the login node or in your jobscript:<\/p>\n<pre>\r\nln -s $SERPENT_PHOTON_DATA\r\n<\/pre>\n<p>Then in your serpent input file you can use the much shorter path:<\/p>\n<pre>\r\n.\/photon_data\/cohff.dat\r\n<\/pre>\n<p>To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut &#8211; it won&#8217;t touch the centrally installed data.<\/p>\n<pre>\r\nrm photon_data\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run Serpent on the login node. 
Jobs should be submitted to the compute nodes via batch.<\/p>\n<p>The executables are named as follows:<\/p>\n<ul>\n<li><code>sss2<\/code>  &#8211; MPI version for single-node and multi-node parallel jobs<\/li>\n<li><code>sss2-omp<\/code>  &#8211; OpenMP (multithreaded) version for single-node parallel jobs<\/li>\n<li><code>sss2-mixed<\/code> &#8211; MPI+OpenMP (multithreaded) mixed-mode version for multi-node parallel jobs (use with caution)<\/li>\n<\/ul>\n<p>Unless using the OpenMP version, all executables should be run as MPI applications (see below for example jobscripts).<\/p>\n<p>Below are examples for the following types of jobs:<\/p>\n<ul>\n<li><a href=\"#smallomp\">Small (single-node) Parallel batch job submission (OpenMP version)<\/a><\/li>\n<li><a href=\"#smallmpi\">Small (single-node) Parallel batch job submission (MPI version)<\/a><\/li>\n<li><a href=\"#smallmixed\">Small (single-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)<\/a><\/li>\n<li><a href=\"#largempi\"><strong>Docs coming soon<\/strong> Large (multi-node) Parallel batch job submission (MPI version)<\/a><\/li>\n<li><a href=\"#largemixed\"><strong>Docs coming soon<\/strong> Large (multi-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)<\/a><\/li>\n<\/ul>\n<p><a name=\"smallomp\"><\/a><\/p>\n<h3>Small (single-node) Parallel batch job submission (OpenMP version)<\/h3>\n<p>The OpenMP version can be used only on a single compute node but will use multiple cores.<\/p>\n<p>Note that the serpent program name is <code>sss2-omp<\/code> for the OpenMP version.<\/p>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre class=slurm>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore    # Run on an AMD 168-core node.\r\n#SBATCH -n 64           # Number of cores. Max 168 cores allowed (a single node)\r\n#SBATCH -t 1-0          # Wallclock limit (days-hours). 
Required!\r\n                        # Max permitted is 7 days (7-0).\r\n\r\n### We now load the modulefile in the jobscript, for example:\r\nmodule purge\r\nmodule load apps\/gcc\/serpent\/2.2.2\r\n\r\n### You MUST say how many OpenMP threads to use. $SLURM_NTASKS is automatically\r\n### set to the number requested on the -n line above.\r\n\r\nexport OMP_NUM_THREADS=$SLURM_NTASKS\r\n<strong>sss2-omp<\/strong> <em>your_input_file<\/em>\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre class=slurm>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<p><a name=\"smallmpi\"><\/a><\/p>\n<h3>Small (single-node) Parallel batch job submission (MPI version)<\/h3>\n<p>This example uses the MPI version on multiple CPU cores within a single compute node (documentation for larger multi-node MPI jobs is coming soon).<\/p>\n<p>Note that the serpent program name is <code>sss2<\/code> for the MPI version (<strong>NOT<\/strong> <code>sss2-mpi<\/code>!).<\/p>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre class=slurm>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore    # Run on an AMD 168-core node.\r\n#SBATCH -n 64           # Number of cores. Max 168 cores allowed (a single node)\r\n#SBATCH -t 1-0          # Wallclock limit (days-hours). Required!\r\n                        # Max permitted is 7 days (7-0).\r\n\r\n### We now load the modulefile in the jobscript, for example:\r\nmodule purge\r\nmodule load apps\/gcc\/serpent\/2.2.2\r\n\r\nmpirun -np $SLURM_NTASKS <strong>sss2<\/strong> <em>your_input_file<\/em>\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre class=slurm>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<p><strong>Note:<\/strong> some versions of serpent allow you to pass a <code>-mpi<\/code> flag on the serpent command-line rather than using <code>mpirun<\/code>. 
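For example, starting serpent as follows (an illustrative sketch only &#8211; the flag placement simply mirrors the <code>-omp<\/code> usage shown later) would be <em>incorrect<\/em>:<\/p>\n<pre>\r\n# Do NOT start the MPI version this way on the CSF\r\nsss2 -mpi 64 <em>your_input_file<\/em>\r\n<\/pre>\n<p>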
This will cause serpent to crash on the CSF. You <em>must<\/em> use the <code>mpirun<\/code> method of starting serpent as shown in the example above.<\/p>\n<p><a name=\"smallmixed\"><\/a><\/p>\n<h3>Small (single-node) Parallel batch job submission (MPI+OpenMP Mixed-mode versions)<\/h3>\n<p>The mixed-mode version of serpent will use a combination of MPI processes and OpenMP threads. Each MPI process will use multiple OpenMP threads to perform its calculations. By using a small number of MPI processes, each using a larger number of OpenMP threads, the relatively slow communication between many MPI processes is reduced in favour of faster communication between the OpenMP threads. The number of MPI processes multiplied by the number of OpenMP threads per process should equal the total number of cores requested in your job.<\/p>\n<p>This aims to provide a happy medium between running large multi-node jobs and small single-node jobs. We do, however, recommend you test the performance of this version with your input data. For small simulations, running the ordinary OpenMP version (see above) may well be faster.<\/p>\n<p>The following example will use the mixed-mode version on a single compute-node. 
A larger multi-node mixed-mode example will be documented soon.<\/p>\n<p>Note that the serpent program name is <code>sss2-mixed<\/code> for the MPI+OpenMP mixed-mode version.<\/p>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre class=slurm>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore    # Run on an AMD 168-core node.\r\n#SBATCH -n 4            # Number of MPI processes.\r\n#SBATCH -c 16           # Number of cores per MPI process.\r\n                        # Total cores is n x c (4 x 16 = 64)\r\n#SBATCH -t 1-0          # Wallclock limit (days-hours). Required!\r\n                        # Max permitted is 7 days (7-0).\r\n\r\n### We now load the modulefile in the jobscript, for example:\r\nmodule purge\r\nmodule load apps\/gcc\/serpent\/2.2.2\r\n\r\n### Now start serpent using some extra flags for mixed-mode\r\nmpirun --map-by ppr:${SLURM_NTASKS}:node:pe=${SLURM_CPUS_PER_TASK} <strong>sss2-mixed<\/strong> -omp ${SLURM_CPUS_PER_TASK} <em>your_input_file<\/em>\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre class=slurm>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/montecarlo.vtt.fi\/\">Serpent website<\/a> which provides a <a href=\"http:\/\/montecarlo.vtt.fi\/download\/Serpent_manual.pdf\">Serpent manual<\/a> (pdf)<\/li>\n<li><a href=\"http:\/\/ttuki.vtt.fi\/serpent\">Serpent forum<\/a><\/li>\n<li><a href=\"\/csf3\/batch\/parallel-jobs\/\">CSF Parallel Environments<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>None.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview Serpent is a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, developed at VTT Technical Research Centre of Finland since 2004. 
Versions have been compiled using MPI (small and larger multi-node jobs), OpenMP (single-node multithreaded) and Mixed-mode, combining MPI and OpenMP which might help with jobs that require large memory. The software has been compiled from source. Restrictions on use Access to this software is restricted to a specific research group. All users.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/serpent\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-512","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/512","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=512"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/512\/revisions"}],"predecessor-version":[{"id":12174,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/512\/revisions\/12174"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=512"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}