The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
Telemac
Overview
Telemac (open TELEMAC-MASCARET) is an integrated suite of solvers for use in the field of free-surface flow.
Version v7p2r1 (using python) is installed on the CSF. It was compiled with Intel v15 using -axSSE4.2,AVX,CORE-AVX2, so it will run on all of the CSF Intel nodes and, where applicable, take advantage of the underlying hardware instruction set. It can also be run on AMD nodes. Both parallel (MPI) and scalar configurations are available.
Versions v6p2r1 (using perl) and v6p3r1 (using python) are also installed on the CSF. Both have been compiled with the Intel v12 compiler using -axAVX and so will take advantage of Sandy Bridge hardware if run on such nodes. They can also be run on AMD nodes. The installations provide parallel (MPI) and scalar configurations; MPI is the default configuration (see below for how to instruct Telemac to use its scalar configuration).
Restrictions on use
Telemac is distributed under the GPL and LGPL licences. Please see the Telemac licence for full details.
Set up procedure
To access the software you must first load the appropriate OpenMPI modulefile (either InfiniBand or non-IB) and the Telemac modulefile:
InfiniBand MPI with Telemac 7.2.1 (inc.Sisyphe)
module load mpi/intel-15.0/openmpi/1.8.3-ib     # Intel only
module load mpi/intel-15.0/openmpi/1.8.3m-ib    # Intel and AMD
# and then:
module load apps/intel-15.0/telemac/7.2.1
non-InfiniBand MPI with Telemac 7.2.1 (inc.Sisyphe)
module load mpi/intel-15.0/openmpi/1.8.3        # Intel only
module load mpi/intel-15.0/openmpi/1.8.3m       # Intel and AMD
# and then
module load apps/intel-15.0/telemac/7.2.1
InfiniBand MPI with Telemac 6.x.x
module load mpi/intel-12.0/openmpi/1.6-ib
# and then one of the following
module load apps/intel-12.0/telemac/6.3.1
# or
module load apps/intel-12.0/telemac/6.2.1
non-InfiniBand MPI with Telemac 6.x.x
module load mpi/intel-12.0/openmpi/1.6
# and then one of the following
module load apps/intel-12.0/telemac/6.3.1
# or
module load apps/intel-12.0/telemac/6.2.1
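After loading the modulefiles it is worth confirming that the expected MPI and Telemac versions are active in your shell. A quick check using the standard environment-modules command:

# List the modulefiles currently loaded in this session
module list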
Optimized Version of 6.3.1
A version of Telemac v6p3r1 with internal parallel coupling between modules such as Telemac2D and Sisyphe has been compiled. It is available by loading the following modulefiles:
# Single compute-node (up to 24 Intel cores or 32 AMD Magny-Cours cores)
module load mpi/intel-12.0/openmpi/1.6
module load apps/intel-12.0/telemac/6.3.1_mpi

# Multiple compute-nodes (48 Intel cores, 64 AMD Magny-Cours cores -- or more!)
module load mpi/intel-12.0/openmpi/1.6-ib
module load apps/intel-12.0/telemac/6.3.1_mpi
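The internal coupling itself is switched on in the Telemac2D steering (.cas) file. The fragment below is an illustrative sketch only: the steering file name sis.cas is a placeholder and the exact keyword spellings should be checked against the Telemac2D manual for your version:

/ Fragment of a Telemac2D steering file (illustrative only)
/ Run Sisyphe coupled to Telemac2D within the same parallel job
COUPLING WITH : 'SISYPHE'
SISYPHE STEERING FILE : 'sis.cas'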
Running the application
Please do not run Telemac on the login node. Jobs should be submitted to the compute nodes via batch.
Serial batch job submission
Make sure you have the modulefiles loaded, then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V

# Telemac 7.2.1 & 6.3.1 (python) should use:
runcode.py telemac2d myinput.cas

# Telemac 6.2.1 (perl) should use:
telemac2d myinput.cas

#
# In both cases you can replace telemac2d with another telemac executable
#
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
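Once submitted you can monitor the job from the login node using the standard SGE commands (the job id shown below is just a placeholder; use the id reported by qsub):

# Show the status of all of your queued and running jobs
qstat

# Show full details of a specific job (replace 123456 with your job id)
qstat -j 123456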
Forcing Serial Execution of a Parallel Executable
In some cases you may wish to run a Telemac tool serially even though it has been compiled for parallel execution. You can do this in Telemac using the following method:
- Edit your .cas file and set the following option:
  PARALLEL PROCESSORS = 0
- In your jobscript, request the scalar (serial) config (the default is always the parallel config):
runcode.py --configname CSF.ifort15.scalar sisyphe myinput.cas    ## 7.2.1
runcode.py --configname CSF.ifort12.scalar sisyphe myinput.cas    ## 6.3.1
In the above example we run the sisyphe executable.
Parallel batch job submission
You must specify the number of cores to use in the Telemac input file (myinput.cas in the example below). Look for a line similar to:
PARALLEL PROCESSORS = 24    / change 24 to the number of cores you'll request on the PE line in the jobscript
You must also specify the number of cores to use in the jobscript and add a couple of lines which generate the mpi_telemac.conf file required by Telemac, as per the examples below.
Single node
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V

### Specify the parallel environment (PE) and number of cores to use
### In this example we use the Intel nodes
### Please read the CSF docs for more info on available PE's.
#$ -pe smp.pe 24        ## Uses Intel. Min 2, Max 24

# $NSLOTS is automatically set to the number of cores requested above
# We must now generate a temporary file required by parallel telemac
MPICONF=mpi_telemac.conf
echo $NSLOTS > $MPICONF
cat $PE_HOSTFILE | awk '{print $1, $2}' >> $MPICONF

# NOTE: telemac will call mpirun - you should not call it in your jobscript

# Telemac 7.2.1 & 6.3.1 (python) should use:
runcode.py telemac2d myinput.cas

# Telemac 6.2.1 (perl) should use:
telemac2d myinput.cas

#
# replace with your required telemac executable
#
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Note that in the above jobscript the MPI host file must be named mpi_telemac.conf. Hence you should only run one job in a directory at any one time, otherwise multiple jobs will stamp on each other's host file.
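A simple way to avoid this is to give each job its own working directory. A sketch (job1, myinput.cas and jobscript are placeholder names):

# Run each job from its own directory so the generated
# mpi_telemac.conf files cannot overwrite each other
mkdir job1
cp myinput.cas jobscript job1/    # plus any other files your .cas references
cd job1
qsub jobscript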
If you wish to use the AMD nodes then please replace the -pe line with one of the following:
#$ -pe smp-32mc.pe 32    # Uses AMD Magny-Cours. Min 2, Max 32
#$ -pe smp-64bd.pe 64    # Uses AMD Bulldozer. Min 2, Max 64
Multi node
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V

### Specify the parallel environment (PE) and number of cores to use
### In this example we use the Intel nodes
### Please read the CSF docs for more info on available PE's.
#$ -pe orte-24-ib.pe 48        ## Uses Intel. Min 48, must be a multiple of 24

# $NSLOTS is automatically set to the number of cores requested above
# We must now generate a temporary file required by parallel telemac
MPICONF=mpi_telemac.conf
echo $NSLOTS > $MPICONF
cat $PE_HOSTFILE | awk '{print $1, $2}' >> $MPICONF

# NOTE: telemac will call mpirun - you should not call it in your jobscript

# Telemac 7.2.1 & 6.3.1 (python) should use:
runcode.py telemac2d myinput.cas

# Telemac 6.2.1 (perl) should use:
telemac2d myinput.cas

#
# replace with your required telemac executable
#
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Note that in the above jobscript the MPI host file must be named mpi_telemac.conf. Hence you should only run one job in a directory at any one time, otherwise multiple jobs will stamp on each other's host file.
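For reference, the mpi_telemac.conf file generated by the jobscript simply contains the total number of cores followed by one hostname and slot-count pair per node, taken from $PE_HOSTFILE. For a 48-core, two-node job it would look something like this (the node names shown are hypothetical):

48
node001 24
node002 24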
If you wish to use the AMD nodes then please replace the -pe line with one of the following:
#$ -pe orte-32mc.pe 64     # Uses AMD Magny-Cours. Min 64, must be a multiple of 32
#$ -pe orte-64bd.pe 128    # Uses AMD Bulldozer. Min 128, must be a multiple of 64
Further info
Updates
None.