{"id":765,"date":"2013-06-10T15:49:07","date_gmt":"2013-06-10T15:49:07","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/?page_id=765"},"modified":"2017-08-10T14:51:05","modified_gmt":"2017-08-10T14:51:05","slug":"telemac","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/telemac\/","title":{"rendered":"Telemac"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>Telemac (open TELEMAC-MASCARET) is an integrated suite of solvers for use in the field of free-surface flow.<\/p>\n<p>Version v7p2r1 (using python) is installed on the CSF. It was compiled with Intel V15 using <code>-axSSE4.2,AVX,CORE-AVX2<\/code> and so will work on all the CSF intel nodes and where applicable the underlying hardware instruction set. It can also be run AMD nodes. It is possible to run parallel (MPI) and scalar configurations. <\/p>\n<p>Version v6p2r1 (using perl) and v6p3r1 (using python) are installed on the CSF. They have both been compiled with the Intel v12 compiler using <code>-axAVX<\/code> so will take advantage of Sandybridge hardware if run on such nodes. It can also be run on AMD nodes. The installations provide parallel (MPI) and scalar configurations. MPI is the default configuration used (see below for how to instruct Telemac to use its scalar configuration).<\/p>\n<h2>Restrictions on use<\/h2>\n<p>Telemac is distributed under the GPL and LGPL licenses. 
Please see the <a href=\"http:\/\/www.opentelemac.org\/index.php?option=com_content&#038;view=article&#038;id=80&#038;Itemid=48&#038;lang=en\">Telemac licence<\/a> for full details.<\/p>\n<h2>Set up procedure<\/h2>\n<p>To access the software you must first load the appropriate OpenMPI modulefile (either InfiniBand or non-IB) and the Telemac modulefile:<\/p>\n<h3>InfiniBand MPI with Telemac 7.2.1 (inc. Sisyphe)<\/h3>\n<pre>\r\nmodule load mpi\/intel-15.0\/openmpi\/1.8.3-ib       # Intel only\r\nmodule load mpi\/intel-15.0\/openmpi\/1.8.3m-ib      # Intel and AMD\r\n\r\n# and then:\r\nmodule load apps\/intel-15.0\/telemac\/7.2.1\r\n<\/pre>\n<h3>non-InfiniBand MPI with Telemac 7.2.1 (inc. Sisyphe)<\/h3>\n<pre>\r\nmodule load mpi\/intel-15.0\/openmpi\/1.8.3          # Intel only\r\nmodule load mpi\/intel-15.0\/openmpi\/1.8.3m         # Intel and AMD\r\n\r\n# and then:\r\nmodule load apps\/intel-15.0\/telemac\/7.2.1\r\n<\/pre>\n<h3>InfiniBand MPI with Telemac 6.x.x<\/h3>\n<pre>\r\nmodule load mpi\/intel-12.0\/openmpi\/1.6-ib\r\n\r\n# and then <strong>one<\/strong> of the following\r\nmodule load apps\/intel-12.0\/telemac\/6.3.1\r\n# or\r\nmodule load apps\/intel-12.0\/telemac\/6.2.1\r\n<\/pre>\n<h3>non-InfiniBand MPI with Telemac 6.x.x<\/h3>\n<pre>\r\nmodule load mpi\/intel-12.0\/openmpi\/1.6\r\n\r\n# and then <strong>one<\/strong> of the following\r\nmodule load apps\/intel-12.0\/telemac\/6.3.1\r\n# or\r\nmodule load apps\/intel-12.0\/telemac\/6.2.1\r\n<\/pre>\n<h3>Optimized Version of 6.3.1<\/h3>\n<p>A version of Telemac v6p3r1 with internal parallel coupling between modules such as Telemac2D and Sisyphe has been compiled. 
It is available by loading the following modulefiles:<\/p>\n<pre>\r\n# Single compute-node (up to 24 Intel cores or 32 AMD Magny-Cours cores)\r\nmodule load mpi\/intel-12.0\/openmpi\/1.6\r\nmodule load apps\/intel-12.0\/telemac\/6.3.1_mpi\r\n\r\n# Multiple compute-nodes (48 Intel cores, 64 AMD Magny-Cours cores -- or more!)\r\nmodule load mpi\/intel-12.0\/openmpi\/1.6-ib\r\nmodule load apps\/intel-12.0\/telemac\/6.3.1_mpi\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run Telemac on the login node. Jobs should be submitted to the compute nodes via batch.<\/p>\n<h3>Serial batch job submission<\/h3>\n<p>Make sure you have the modulefile loaded, then create a batch submission script, for example:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n\r\n# Telemac 7.2.1 & 6.3.1 (python) should use:\r\nruncode.py telemac2d myinput.cas\r\n\r\n# Telemac 6.2.1 (perl) should use:\r\ntelemac2d myinput.cas\r\n\r\n   #\r\n   # In both cases you can replace telemac2d with another Telemac executable\r\n   #\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Forcing Serial Execution of a Parallel Executable<\/h3>\n<p>In some cases you may wish to run a Telemac tool serially even though it has been compiled for parallel execution. 
You can do this in Telemac using the following method:<\/p>\n<ol>\n<li>Edit your <code>.cas<\/code> file and set the following option:\n<pre>PARALLEL PROCESSORS = 0<\/pre>\n<\/li>\n<li>In your jobscript, request the scalar (serial) config (the default is always the parallel config):\n<pre>\r\nruncode.py --configname CSF.ifort15.scalar sisyphe myinput.cas ## 7.2.1\r\nruncode.py --configname CSF.ifort12.scalar sisyphe myinput.cas ## 6.3.1\r\n<\/pre>\n<p>In the above example we run the sisyphe executable.<\/p>\n<\/li>\n<\/ol>\n<h3>Parallel batch job submission<\/h3>\n<p>You must specify the number of cores to use in the Telemac input file (<code>myinput.cas<\/code> in the example below). Look for a line similar to:<\/p>\n<pre>\r\nPARALLEL PROCESSORS = 24\r\n   \/\r\n   \/ change 24 to the number of cores you'll request on the PE line in the jobscript\r\n<\/pre>\n<p>You must <strong>also<\/strong> specify the number of cores to use in the jobscript and add a couple of lines which generate the <code>mpi_telemac.conf<\/code> file required by Telemac, as per the examples below.<\/p>\n<h3>Single node<\/h3>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n### Specify the parallel environment (PE) and number of cores to use\r\n### In this example we use the Intel nodes\r\n### Please read the CSF docs for more info on available PE's.\r\n#$ -pe smp.pe 24   ## Uses Intel. 
Min 2, Max 24\r\n\r\n# $NSLOTS is automatically set to the number of cores requested above\r\n# We must now generate a temporary file required by parallel Telemac\r\nMPICONF=mpi_telemac.conf\r\necho $NSLOTS > $MPICONF\r\ncat $PE_HOSTFILE | awk '{print $1, $2}' >> $MPICONF\r\n\r\n# NOTE: Telemac will call mpirun - you should not call it in your jobscript\r\n\r\n# Telemac 7.2.1 & 6.3.1 (python) should use:\r\nruncode.py telemac2d myinput.cas\r\n\r\n# Telemac 6.2.1 (perl) should use:\r\ntelemac2d myinput.cas\r\n\r\n   #\r\n   # replace with your required Telemac executable\r\n   #\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<p>Note that in the above jobscript the MPI host file <em>must<\/em> be named <code>mpi_telemac.conf<\/code>. Hence you should only run one job in a directory at any one time, otherwise multiple jobs will stamp on each other&#8217;s host file.<\/p>\n<p>If you wish to use AMD then please replace the -pe line as below:<\/p>\n<pre>\r\n#$ -pe smp-32mc.pe 32   # Uses AMD Magny-Cours, Min 2, Max 32\r\n#$ -pe smp-64bd.pe 64   # Uses AMD Bulldozer, Min 2, Max 64\r\n<\/pre>\n<h3>Multi node<\/h3>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n### Specify the parallel environment (PE) and number of cores to use\r\n### In this example we use the Intel nodes\r\n### Please read the CSF docs for more info on available PE's.\r\n#$ -pe orte-24-ib.pe 48   ## Uses Intel. 
Min 48, must be a multiple of 24\r\n\r\n# $NSLOTS is automatically set to the number of cores requested above\r\n# We must now generate a temporary file required by parallel Telemac\r\nMPICONF=mpi_telemac.conf\r\necho $NSLOTS > $MPICONF\r\ncat $PE_HOSTFILE | awk '{print $1, $2}' >> $MPICONF\r\n\r\n# NOTE: Telemac will call mpirun - you should not call it in your jobscript\r\n\r\n# Telemac 7.2.1 & 6.3.1 (python) should use:\r\nruncode.py telemac2d myinput.cas\r\n\r\n# Telemac 6.2.1 (perl) should use:\r\ntelemac2d myinput.cas\r\n\r\n   #\r\n   # replace with your required Telemac executable\r\n   #\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<p>Note that in the above jobscript the MPI host file <em>must<\/em> be named <code>mpi_telemac.conf<\/code>. Hence you should only run one job in a directory at any one time, otherwise multiple jobs will stamp on each other&#8217;s host file.<\/p>\n<p>If you wish to use AMD then please replace the -pe line as below:<\/p>\n<pre>\r\n#$ -pe orte-32mc.pe 64    # Uses AMD Magny-Cours, Min 64, must be a multiple of 32\r\n#$ -pe orte-64bd.pe 128   # Uses AMD Bulldozer, Min 128, must be a multiple of 64\r\n<\/pre>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/www.openmascaret.org\/\">Telemac website<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>None.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview Telemac (open TELEMAC-MASCARET) is an integrated suite of solvers for use in the field of free-surface flow. Version v7p2r1 (using python) is installed on the CSF. It was compiled with Intel v15 using -axSSE4.2,AVX,CORE-AVX2 and so will work on all the CSF Intel nodes and where applicable the underlying hardware instruction set. It can also be run on AMD nodes. It is possible to run parallel (MPI) and scalar configurations. Version v6p2r1 (using perl) and.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/telemac\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":31,"menu_order":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-765","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/765","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/comments?post=765"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/765\/revisions"}],"predecessor-version":[{"id":4141,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/765\/revisions\/4141"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/31"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/media?parent=765"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}