{"id":2266,"date":"2015-02-26T11:54:57","date_gmt":"2015-02-26T11:54:57","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/?page_id=2266"},"modified":"2018-11-02T11:35:52","modified_gmt":"2018-11-02T11:35:52","slug":"serpent","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/serpent\/","title":{"rendered":"Serpent"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"http:\/\/montecarlo.vtt.fi\/\">Serpent<\/a> is a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, developed at VTT Technical Research Centre of Finland since 2004.<\/p>\n<p>Versions 1.1.7, 1.1.19, 2.1.0 and 2.1.21, 2.1.23, 2.1.24, 2.1.25, 2.1.26, 2.1.27, 2.1.28, 2.1.29, 2.1.30 are installed on the CSF (version 2.1.22 did not compile correctly and so is not available). All versions support MPI-based parallelism. v2.1.0, v2.1.21, and v2.1.23, v2.1.24, v2.1.25,  v2.1.26, v2.1.27, v2.1.28, v2.1.29, v2.1.30 are also available as an OpenMP (multithreaded) parallel versions.<\/p>\n<p>The software has been compiled from source using the Intel 12.0.5 compiler.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>Access to this software is restricted to a specific research group. 
Please contact <a href=\"&#109;&#x61;&#x69;l&#116;&#x6f;&#x3a;i&#116;&#x73;-&#114;&#x69;&#x2d;t&#101;&#x61;m&#64;&#x6d;&#x61;n&#99;&#x68;&#x65;&#115;&#x74;&#x65;r&#46;&#x61;&#x63;&#46;&#117;&#x6b;\">i&#116;&#x73;&#x2d;r&#105;&#x2d;&#x74;e&#97;&#109;&#x40;&#x6d;a&#110;&#x63;&#x68;e&#115;&#x74;&#x65;r&#46;&#97;&#x63;&#x2e;u&#107;<\/a> to request access, indicating you have read and agree to the terms and conditions in the license, detailed below: <\/p>\n<p>We will inform The University of Manchester NEA databank liaison officer of your request to use the software.<\/p>\n<p>Before being permitted to use the software, all users must read and adhere to the <a href=\"http:\/\/montecarlo.vtt.fi\/releasenotes.htm\">license conditions<\/a>. In particular:<\/p>\n<div class=\"hidden\">You must inform the serpent development team by registering with <a href=\"&#109;a&#x69;l&#x74;o&#x3a;&#115;&#x65;&#114;&#x70;&#101;n&#x74;&#64;&#x76;t&#x74;&#46;&#x66;&#105;\">&#115;&#x65;&#x72;&#112;&#x65;&#x6e;&#116;&#x40;&#x76;&#116;&#x74;&#x2e;&#102;&#x69;<\/a> (see <a href=\"http:\/\/montecarlo.vtt.fi\">http:\/\/montecarlo.vtt.fi\/<\/a> for more information).<\/div>\n<ol>\n<li>The code can be used free of charge by licensed organizations for non-commercial research and educational purposes.<\/li>\n<li>Usage for promoting the development of weapons of mass destruction is strictly prohibited.<\/li>\n<li>The code cannot be used outside the Licensee Organization or distributed to a third party.<\/li>\n<li>VTT and the developers assume no liability for the use of the code or the validity of the results.<\/li>\n<\/ol>\n<h2>Set up procedure<\/h2>\n<p>To access the software, you must first load <em>one<\/em> of the following modulefiles:<\/p>\n<pre>\r\n# OpenMP (multi-thread), single compute-node only (no MPI)\r\nmodule load apps\/intel-12.0\/serpent\/2.1.30-omp            # NB: possible fatal error when trying to track nuclide inventories\r\nmodule load 
apps\/intel-12.0\/serpent\/2.1.29-omp\r\nmodule load apps\/intel-12.0\/serpent\/2.1.28-fix-omp        # Fixes bugs in readinput.c and pretrans.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.27-omp\r\nmodule load apps\/intel-12.0\/serpent\/2.1.26-patch-pre-2.1.27-omp   # Fixes several bugs in 2.1.26\r\nmodule load apps\/intel-12.0\/serpent\/2.1.26-fix-omp        # Fixes bugs in rroutput.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.26-omp\r\nmodule load apps\/intel-12.0\/serpent\/2.1.25-fix-omp        # Fixes bugs in geometryplotter.c and burnmatcompositions.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.25-omp\r\nmodule load apps\/intel-12.0\/serpent\/2.1.24-omp\r\nmodule load apps\/intel-12.0\/serpent\/2.1.23-omp\r\nmodule load apps\/intel-12.0\/serpent\/2.1.21-fix-omp        # Fixes a bug in coldet.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.21-omp\r\nmodule load apps\/intel-12.0\/serpent\/2.1.0-omp\r\n\r\n# MPI versions (for use with InfiniBand connected nodes - for multi-node MPI jobs)\r\nmodule load apps\/intel-12.0\/serpent\/2.1.30-ib             # NB: possible fatal error when trying to track nuclide inventories\r\nmodule load apps\/intel-12.0\/serpent\/2.1.29-ib\r\nmodule load apps\/intel-12.0\/serpent\/2.1.28-fix-ib         # Fixes bugs in readinput.c and pretrans.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.27-ib\r\nmodule load apps\/intel-12.0\/serpent\/2.1.26-patch-pre-2.1.27-ib   # Fixes several bugs in 2.1.26\r\nmodule load apps\/intel-12.0\/serpent\/2.1.26-fix-ib         # Fixes bugs in rroutput.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.26-ib\r\nmodule load apps\/intel-12.0\/serpent\/2.1.25-fix-ib         # Fixes bugs in geometryplotter.c and burnmatcompositions.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.25-ib\r\nmodule load apps\/intel-12.0\/serpent\/2.1.24-ib\r\nmodule load apps\/intel-12.0\/serpent\/2.1.23-ib\r\nmodule load apps\/intel-12.0\/serpent\/2.1.21-fix-ib         # Fixes a bug in coldet.c\r\nmodule load 
apps\/intel-12.0\/serpent\/2.1.21-ib\r\nmodule load apps\/intel-12.0\/serpent\/2.1.0-ib\r\nmodule load apps\/intel-12.0\/serpent\/1.1.19-ib\r\nmodule load apps\/intel-12.0\/serpent\/1.1.7-ib\r\n\r\n# MPI versions (slower than InfiniBand - for single node MPI jobs)\r\nmodule load apps\/intel-12.0\/serpent\/2.1.30                # NB: possible fatal error when trying to track nuclide inventories\r\nmodule load apps\/intel-12.0\/serpent\/2.1.29\r\nmodule load apps\/intel-12.0\/serpent\/2.1.28-fix            # Fixes bugs in readinput.c and pretrans.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.27\r\nmodule load apps\/intel-12.0\/serpent\/2.1.26-patch-pre-2.1.27       # Fixes several bugs in 2.1.26\r\nmodule load apps\/intel-12.0\/serpent\/2.1.26-fix            # Fixes bugs in rroutput.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.26\r\nmodule load apps\/intel-12.0\/serpent\/2.1.25-fix            # Fixes bugs in geometryplotter.c and burnmatcompositions.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.25\r\nmodule load apps\/intel-12.0\/serpent\/2.1.24\r\nmodule load apps\/intel-12.0\/serpent\/2.1.23\r\nmodule load apps\/intel-12.0\/serpent\/2.1.21-fix            # Fixes a bug in coldet.c\r\nmodule load apps\/intel-12.0\/serpent\/2.1.21\r\nmodule load apps\/intel-12.0\/serpent\/2.1.0\r\nmodule load apps\/intel-12.0\/serpent\/1.1.19\r\nmodule load apps\/intel-12.0\/serpent\/1.1.7\r\n<\/pre>\n<p>Any other required modulefiles (e.g. MPI) will be loaded automatically by the above modulefiles.<\/p>\n<h2>Cross Section Data<\/h2>\n<p>The Serpent cross section data supplied with version 1.1.7 is available in all of the above versions. An environment variable named <code>$SERPENT_XSDATA<\/code> is set by all of the above modulefiles to give the directory name containing the data. 
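<\/p>\n<p>As a quick sanity check (a hypothetical login-node session; the 2.1.24 modulefile is used purely as an illustration), you can confirm that the modulefile has pulled in its dependencies and set the variable:<\/p>\n<pre>\r\nmodule load apps\/intel-12.0\/serpent\/2.1.24\r\nmodule list              # the automatically loaded MPI modulefile should also appear\r\necho $SERPENT_XSDATA     # prints the central cross section data directory\r\n<\/pre>\n<p>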
To see what is available, run the following on the login node after loading one of the above modulefiles:<\/p>\n<pre>\r\nls $SERPENT_XSDATA\r\n<\/pre>\n<p>Your serpent input file may need to refer to these data libraries, in which case you should use the full path to the data libraries. For example, first report what the full path is:<\/p>\n<pre>\r\necho $SERPENT_XSDATA\r\n<\/pre>\n<p>Then use that path in your serpent input file. For example, it may contain the lines:<\/p>\n<pre>\r\nset acelib \"\/opt\/gridware\/apps\/intel-12.0\/serpent\/2.1.24\/xsdata\/jef22\/sss_jef22u.xsdata\"\r\nset declib \"\/opt\/gridware\/apps\/intel-12.0\/serpent\/2.1.24\/xsdata\/jef22\/sss_jef22.dec\"\r\n<\/pre>\n<p>Alternatively, you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named <code>jef22<\/code> which points to the central <code>jef22<\/code> directory, run the following on the login node or in your jobscript:<\/p>\n<pre>\r\nln -s $SERPENT_XSDATA\/jef22\r\n<\/pre>\n<p>Then in your serpent input file you can use the much shorter path:<\/p>\n<pre>\r\nset acelib \".\/jef22\/sss_jef22u.xsdata\"\r\nset declib \".\/jef22\/sss_jef22.dec\"\r\n<\/pre>\n<p>To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut &#8211; it won&#8217;t touch the centrally installed data.<\/p>\n<pre>\r\nrm jef22\r\n<\/pre>\n<h2>Photon Data<\/h2>\n<p>As of version 2.1.24, photon data can be read by Serpent. As with the cross section data above, once you have loaded the 2.1.24 (or later) modulefile, you can access the photon data using the environment variable <code>$SERPENT_PHOTON_DATA<\/code>. For example:<\/p>\n<pre>\r\nls $SERPENT_PHOTON_DATA\r\n<\/pre>\n<p>Your serpent input file may need to refer to these data libraries, in which case you should use the full path to the data libraries. 
For example, first report what the full path is:<\/p>\n<pre>\r\necho $SERPENT_PHOTON_DATA\r\n<\/pre>\n<p>Then use that path in your serpent input file. The full path to the <code>cohff.dat<\/code> file, for example, is:<\/p>\n<pre>\r\n\/opt\/gridware\/apps\/intel-12.0\/serpent\/2.1.24\/photon_data\/cohff.dat\r\n<\/pre>\n<p>Alternatively, you could create symbolic links (shortcuts) in your job directory pointing to the centrally installed directories. For example, to create a shortcut in your current directory named <code>photon_data<\/code> which points to the central <code>photon_data<\/code> directory, run the following on the login node or in your jobscript:<\/p>\n<pre>\r\nln -s $SERPENT_PHOTON_DATA\r\n<\/pre>\n<p>Then in your serpent input file you can use the much shorter path:<\/p>\n<pre>\r\n.\/photon_data\/cohff.dat\r\n<\/pre>\n<p>To remove the shortcut, run the following from within the directory containing the shortcut (or in your jobscript). This will remove only the shortcut &#8211; it won&#8217;t touch the centrally installed data.<\/p>\n<pre>\r\nrm photon_data\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run Serpent on the login node. 
Jobs should be submitted to the compute nodes via batch.<\/p>\n<p>The executable to run is named as follows:<\/p>\n<ul>\n<li><code>sss<\/code> (if using version 1.x.x)<\/li>\n<li><code>sss2<\/code> (if using version 2.x.x)<\/li>\n<li><code>sss2-omp<\/code> (if using version 2.x.x-omp)<\/li>\n<\/ul>\n<p>Unless using the OpenMP version, all executables should be run as MPI applications (see below for example jobscripts).<\/p>\n<h3>Parallel batch job submission (OpenMP version)<\/h3>\n<p>The OpenMP version can be used only on a single compute node, but will use multiple cores.<\/p>\n<p>Make sure you have the modulefile loaded (see above), then create a batch submission script, for example:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd             # Job will run from the current directory\r\n#$ -V               # Job will inherit current environment settings\r\n#$ -pe smp.pe 4     # Max 24 cores allowed (a single node)\r\n\r\n### You MUST say how many OpenMP threads to use. $NSLOTS is automatically\r\n### set to the number requested on the -pe line above.\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nsss2-omp\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Parallel batch job submission (MPI versions)<\/h3>\n<p>The MPI version can be used across multiple compute nodes and also on multiple cores of a single compute node.<\/p>\n<p>Make sure you have the modulefile loaded (see above), then create a batch submission script, for example:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd             # Job will run from the current directory\r\n#$ -V               # Job will inherit current environment settings\r\n\r\n### Choose ONE of the following lines for parallel running:\r\n### (the number of cores is just an example)\r\n#$ -pe smp.pe        4     # Max 24 cores allowed (a single compute node)\r\n#$ -pe orte-24-ib.pe 48    # Minimum is 48 and must 
be a multiple of 24 cores\r\n\r\n### $NSLOTS is automatically set to the number of cores requested above\r\n\r\nmpirun -np $NSLOTS sss\r\n# or\r\nmpirun -np $NSLOTS sss2\r\n\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<p><strong>Note:<\/strong> Some versions of serpent allow you to pass a <code>-mpi<\/code> flag on the serpent command-line rather than using <code>mpirun<\/code>. This will cause serpent to crash on the CSF. You <em>must<\/em> use the <code>mpirun<\/code> method of starting serpent as shown in the example above.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/montecarlo.vtt.fi\/\">Serpent website<\/a> which provides a <a href=\"http:\/\/montecarlo.vtt.fi\/download\/Serpent_manual.pdf\">Serpent manual<\/a> (pdf)<\/li>\n<li><a href=\"http:\/\/ttuki.vtt.fi\/serpent\">Serpent forum<\/a>.<\/li>\n<li><a href=\"\/csf2\/csf-user-documentation\/parallel-jobs\/\">CSF Parallel Environments<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>None.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview Serpent is a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, developed at VTT Technical Research Centre of Finland since 2004. Versions 1.1.7, 1.1.19, 2.1.0 and 2.1.21, 2.1.23, 2.1.24, 2.1.25, 2.1.26, 2.1.27, 2.1.28, 2.1.29, 2.1.30 are installed on the CSF (version 2.1.22 did not compile correctly and so is not available). All versions support MPI-based parallelism. v2.1.0, v2.1.21, and v2.1.23, v2.1.24, v2.1.25, v2.1.26, v2.1.27, v2.1.28, v2.1.29, v2.1.30 are also available as an OpenMP.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/serpent\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":31,"menu_order":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-2266","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/2266","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/comments?post=2266"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/2266\/revisions"}],"predecessor-version":[{"id":4923,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/2266\/revisions\/4923"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/31"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/media?parent=2266"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}