{"id":2400,"date":"2019-02-21T11:53:56","date_gmt":"2019-02-21T11:53:56","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=2400"},"modified":"2019-04-23T12:15:40","modified_gmt":"2019-04-23T11:15:40","slug":"telemac","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/telemac\/","title":{"rendered":"Telemac"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"http:\/\/www.opentelemac.org\/\">Telemac<\/a> (open TELEMAC-MASCARET) is an integrated suite of solvers for use in the field of free-surface flow.<\/p>\n<p>Versions v7p2r1 and v6p3r1 (using python) are installed on the CSF. They have been compiled with the Intel v17.0 compiler to take advantage of all CSF3 compute node architectures. The installations provide parallel (MPI) and Scalar (serial) configurations. MPI is the default configuration used (see below for how to instruct Telemac to use its scalar configuration).<\/p>\n<h2>Restrictions on use<\/h2>\n<p>Telemac is distributed under the GPL and LGPL licenses. Please see the <a href=\"http:\/\/www.opentelemac.org\/index.php?option=com_content&#038;view=article&#038;id=80&#038;Itemid=48&#038;lang=en\">Telemac licence<\/a> for full details.<\/p>\n<h2>Set up procedure<\/h2>\n<p>We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. 
Alternatively, you may load modulefiles on the login node and let the job <abbr title=\"add '#$ -V' to your jobscript\">inherit these settings<\/abbr>.<\/p>\n<p>Load <em>one<\/em> of the following modulefiles:<\/p>\n<pre>\r\nmodule load apps\/intel-17.0\/telemac\/7.2.1             # Parallel MPI version\r\nmodule load apps\/intel-17.0\/telemac\/7.2.1_scalar      # Serial (1-core) version\r\n\r\nmodule load apps\/intel-17.0\/telemac\/6.3.1             # Parallel MPI version\r\nmodule load apps\/intel-17.0\/telemac\/6.3.1_scalar      # Serial (1-core) version\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run Telemac on the login node. Jobs should be submitted to the compute nodes via batch.<\/p>\n<h3>Serial batch job submission<\/h3>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n\r\n# Load the scalar version for serial (1-core jobs)\r\nmodule load apps\/intel-17.0\/telemac\/6.3.1_scalar\r\n\r\n# Run the app using the python helper script (v6.3.1 and later)\r\nruncode.py telemac2d myinput.cas\r\n              #\r\n              # You can replace telemac2d with another telemac executable\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Forcing Serial Execution of a Parallel Executable<\/h3>\n<p>In some cases you may wish to run a Telemac tool serially even though it has been compiled for parallel execution. 
You can do this in Telemac using the following method:<\/p>\n<ol>\n<li>Edit your <code>.cas<\/code> file and set the following option:\n<pre>PARALLEL PROCESSORS = 0<\/pre>\n<\/li>\n<li>In your jobscript, request the scalar (serial) config (the default is always the parallel config when the parallel version&#8217;s modulefile has been loaded):\n<pre>\r\nruncode.py --configname CSF.ifort17.scalar sisyphe myinput.cas\r\n<\/pre>\n<p>In the above example we run the <code>sisyphe<\/code> executable.<\/p>\n<\/li>\n<\/ol>\n<h3>Parallel batch job submission<\/h3>\n<p>You must specify the number of cores to use in the Telemac input file (<code>myinput.cas<\/code> in the example below). Look for a line similar to:<\/p>\n<pre>\r\nPARALLEL PROCESSORS = 8\r\n   \/\r\n   \/ change 8 to the number of cores you'll request on the PE line in the jobscript\r\n<\/pre>\n<p>You must <strong>also<\/strong> specify the number of cores to use in the jobscript and add a couple of lines which generate the <code>mpi_telemac.conf<\/code> file required by Telemac, as per the examples below.<\/p>\n<h3>Single node (2-32 core jobs)<\/h3>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 8      # Use 8 cores in this example. 
You can specify 2 -- 32 cores.\r\n\r\n# Load the modulefile\r\nmodule load apps\/intel-17.0\/telemac\/6.3.1\r\n\r\n# $NSLOTS is automatically set to the number of cores requested above\r\n# We must now generate a temporary file required by parallel telemac\r\nMPICONF=mpi_telemac.conf\r\necho $NSLOTS > $MPICONF\r\ncat $PE_HOSTFILE | awk '{print $1, $2}' >> $MPICONF\r\n\r\n# NOTE: telemac will call mpirun - you should not call it in your jobscript\r\n\r\n# Run the app using the python helper script (v6.3.1 and later)\r\nruncode.py telemac2d myinput.cas\r\n               #\r\n               # You can replace telemac2d with another telemac executable\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<p>Note that in the above jobscript the MPI host file <em>must<\/em> be named <code>mpi_telemac.conf<\/code>. Hence you should only run one job in a directory at any one time otherwise multiple jobs will stamp on each other&#8217;s host file.<\/p>\n<h3>Multi node (large parallel jobs)<\/h3>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe mpi-24-ib.pe 48   ## Minimum permitted is 48 cores, must be a multiple of 24\r\n\r\n# Load the modulefile\r\nmodule load apps\/intel-17.0\/telemac\/6.3.1\r\n\r\n# $NSLOTS is automatically set to the number of cores requested above\r\n# We must now generate a temporary file required by parallel telemac\r\nMPICONF=mpi_telemac.conf\r\necho $NSLOTS > $MPICONF\r\ncat $PE_HOSTFILE | awk '{print $1, $2}' >> $MPICONF\r\n\r\n# NOTE: telemac will call mpirun - you should not call it in your jobscript\r\n\r\n# Run the app using the python helper script (v6.3.1 and later)\r\nruncode.py telemac2d myinput.cas\r\n              #\r\n              # You can replace telemac2d with another telemac executable\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>qsub <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your 
jobscript.<\/p>\n<p>Note that in the above jobscript the MPI host file <em>must<\/em> be named <code>mpi_telemac.conf<\/code>. Hence you should only run one job in a directory at any one time, otherwise multiple jobs will stamp on each other&#8217;s host file.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/www.openmascaret.org\/\">Telemac website<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>None.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview Telemac (open TELEMAC-MASCARET) is an integrated suite of solvers for use in the field of free-surface flow. Versions v7p2r1 and v6p3r1 (using python) are installed on the CSF. They have been compiled with the Intel v17.0 compiler to take advantage of all CSF3 compute node architectures. The installations provide parallel (MPI) and Scalar (serial) configurations. MPI is the default configuration used (see below for how to instruct Telemac to use its scalar configuration). Restrictions.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/telemac\/\">Read more 
&raquo;<\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-2400","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/2400","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=2400"}],"version-history":[{"count":10,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/2400\/revisions"}],"predecessor-version":[{"id":3224,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/2400\/revisions\/3224"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=2400"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}