Telemac
Overview
Telemac (open TELEMAC-MASCARET) is an integrated suite of solvers for use in the field of free-surface flow.
Versions v7p2r1 and v6p3r1 (using Python) are installed on the CSF. They have been compiled with the Intel v17.0 compiler to take advantage of all CSF3 compute node architectures. The installations provide parallel (MPI) and scalar (serial) configurations. The MPI configuration is the default (see below for how to instruct Telemac to use its scalar configuration).
Restrictions on use
Telemac is distributed under the GPL and LGPL licences. Please see the Telemac licence for full details.
Set up procedure
We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.
Load one of the following modulefiles:
module load apps/intel-17.0/telemac/7.2.1          # Parallel MPI version
module load apps/intel-17.0/telemac/7.2.1_scalar   # Serial (1-core) version
module load apps/intel-17.0/telemac/6.3.1          # Parallel MPI version
module load apps/intel-17.0/telemac/6.3.1_scalar   # Serial (1-core) version
Running the application
Please do not run Telemac on the login node. Jobs should be submitted to the compute nodes via batch.
Serial batch job submission
Create a batch submission script which loads the modulefile, for example:
#!/bin/bash --login
#$ -cwd

# Load the scalar version for serial (1-core) jobs
module load apps/intel-17.0/telemac/6.3.1_scalar

# Run the app using the python helper script (v6.3.1 and later)
runcode.py telemac2d myinput.cas
#
# You can replace telemac2d with another telemac executable
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
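Once the job has been submitted you can check on it from the login node. This is a minimal sketch assuming the standard SGE commands available alongside qsub (replace jobid with the job number reported by qsub):

qstat             # list your queued and running jobs
qstat -j jobid    # show details for a specific job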
Forcing Serial Execution of a Parallel Executable
In some cases you may wish to run a Telemac tool serially even though it has been compiled for parallel execution. You can do this in Telemac using the following method:
- Edit your .cas file and set the following option: PARALLEL PROCESSORS = 0
- In your jobscript, request the scalar (serial) configuration explicitly (the parallel configuration is always the default when the parallel version’s modulefile has been loaded):
runcode.py --configname CSF.ifort17.scalar sisyphe myinput.cas
In the above example we run the sisyphe executable.
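Putting these two steps together, a complete jobscript might look like the following. This is only a sketch: it assumes version 6.3.1, the CSF.ifort17.scalar configuration name shown above, and that PARALLEL PROCESSORS = 0 has already been set in myinput.cas:

#!/bin/bash --login
#$ -cwd

# Load the parallel modulefile; the scalar config is selected on the command line below
module load apps/intel-17.0/telemac/6.3.1

# PARALLEL PROCESSORS = 0 must be set in myinput.cas (see above)
runcode.py --configname CSF.ifort17.scalar sisyphe myinput.cas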
Parallel batch job submission
You must specify the number of cores to use in the Telemac input file (myinput.cas in the example below). Look for a line similar to:
PARALLEL PROCESSORS = 8 / / change 8 to the number of cores you'll request on the PE line in the jobscript
You must also specify the number of cores to use in the jobscript and add a couple of lines which generate the mpi_telemac.conf file required by telemac, as per the examples below.
Single node (2-32 core jobs)
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 8        # Use 8 cores in this example. You can specify 2 -- 32 cores.

# Load the modulefile
module load apps/intel-17.0/telemac/6.3.1

# $NSLOTS is automatically set to the number of cores requested above
# We must now generate a temporary file required by parallel telemac
MPICONF=mpi_telemac.conf
echo $NSLOTS > $MPICONF
cat $PE_HOSTFILE | awk '{print $1, $2}' >> $MPICONF

# NOTE: telemac will call mpirun - you should not call it in your jobscript

# Run the app using the python helper script (v6.3.1 and later)
runcode.py telemac2d myinput.cas
#
# You can replace telemac2d with another telemac executable
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
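For reference, the two lines in the jobscript that write to mpi_telemac.conf put the total number of cores on the first line, followed by one hostname and slot-count pair per node taken from $PE_HOSTFILE. For the 8-core single-node example above, the generated file would therefore look something like this (the node name is purely illustrative):

8
node500 8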
Note that in the above jobscript the MPI host file must be named mpi_telemac.conf. Hence you should only run one job in a directory at any one time, otherwise multiple jobs will stamp on each other’s host file.
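One simple way to avoid this, sketched below with illustrative file and directory names, is to give each job its own directory containing a copy of its input files (remember to copy any geometry or boundary files referenced by your .cas file as well):

mkdir run1
cp myinput.cas geometry.slf boundary.cli run1/   # example input files - use your own names
cd run1
qsub ../scriptname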
Multi node (large parallel jobs)
#!/bin/bash --login
#$ -cwd
#$ -pe mpi-24-ib.pe 48    ## Minimum permitted is 48 cores, must be a multiple of 24

# Load the modulefile
module load apps/intel-17.0/telemac/6.3.1

# $NSLOTS is automatically set to the number of cores requested above
# We must now generate a temporary file required by parallel telemac
MPICONF=mpi_telemac.conf
echo $NSLOTS > $MPICONF
cat $PE_HOSTFILE | awk '{print $1, $2}' >> $MPICONF

# NOTE: telemac will call mpirun - you should not call it in your jobscript

# Run the app using the python helper script (v6.3.1 and later)
runcode.py telemac2d myinput.cas
#
# You can replace telemac2d with another telemac executable
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Note that in the above jobscript the MPI host file must be named mpi_telemac.conf. Hence you should only run one job in a directory at any one time, otherwise multiple jobs will stamp on each other’s host file.
Further info
Updates
None.