The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date; please read the CSF3 documentation instead.
OpenFOAM
Overview
OpenFOAM (Open source Field Operation And Manipulation) is a C++ toolbox for the development of customized numerical solvers, and pre-/post-processing utilities for the solution of continuum mechanics problems, including computational fluid dynamics (CFD).
Versions 2.3.0 (undergoing user testing), 2.2.2 and 2.2.1 are installed on the CSF. All were compiled using gcc 4.7.0 and openmpi 1.6.
Version 2.3.0 has also been compiled with the PGI 14.10 compiler and ACML (fma4), optimized for AMD Bulldozer nodes. This is an experimental compilation because PGI is not officially supported by OpenFOAM. However, it may give better performance on the AMD Bulldozer (64-core) nodes.
The embedded ParaView and PV3FoamReader modules are not installed.
swak4foam is installed as part of versions 2.2.2, 5.0, and 6.
Version 3.0.1 (undergoing user testing, June 2016) was compiled using gcc 4.8.2 and openmpi 1.6.
Version 4.1 was compiled using gcc 6.3.0 and openmpi 1.8.
Version 5.0 was compiled using gcc 6.3.0 and openmpi 1.8.
Version 6 was compiled using gcc 6.3.0 and openmpi 1.8.
Users are requested not to output data at every timestep of their simulation if it is not needed. This can create a huge number of files and directories in your scratch area (we have seen millions of files generated). Please ensure you modify your controlDict file to turn off writing at every timestep. For example, set purgeWrite 5 to keep just the 5 most recent timesteps' worth of output and set a suitable writeInterval. Please check the controlDict online documentation for more keywords and options.
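For example, the write controls in system/controlDict might look like the following minimal excerpt (the values and the writeControl mode are purely illustrative; choose settings appropriate to your simulation):

writeControl    timeStep;   // write based on the number of timesteps
writeInterval   100;        // write every 100 timesteps rather than every step
purgeWrite      5;          // keep only the 5 most recent time directories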
Restrictions on use
OpenFOAM is distributed by the OpenFOAM Foundation and is freely available and open source, licensed under the GNU General Public Licence as detailed on the OpenFOAM website. All CSF users may use this software.
Set up procedure
Unfortunately, this is a little complicated and different to most other CSF applications. You must load a modulefile then run a command on the command-line to complete the setup, as described below:
Step 1: Load the Modulefile
For standard GNU compiler (gcc/g++) builds, which can run on any CSF node but are not optimised for AMD hardware:
- For single-core or multicore single-node jobs load one of the following modulefiles:
module load apps/gcc/openfoam/6
module load apps/gcc/openfoam/5.0
module load apps/gcc/openfoam/4.1
module load apps/gcc/openfoam/3.0.1
module load apps/gcc/openfoam/2.3.0
module load apps/gcc/openfoam/2.2.2
module load apps/gcc/openfoam/2.2.1
- For jobs using more than one node (InfiniBand connected) load one of the following modulefiles:
module load apps/gcc/openfoam/6-ib
module load apps/gcc/openfoam/5.0-ib
module load apps/gcc/openfoam/4.1-ib
module load apps/gcc/openfoam/3.0.1-ib
module load apps/gcc/openfoam/2.3.0-ib
module load apps/gcc/openfoam/2.2.2-ib
module load apps/gcc/openfoam/2.2.1-ib
For the non-standard PGI compiler (pgcc/pgc++) build, optimized to run on AMD Bulldozer nodes only (it will not run anywhere else):
- For single-core, single-node or multi-node AMD Bulldozer jobs load the modulefile:
module load apps/pgi-14.10-acml-fma4/openfoam/2.3.0
To understand whether your job will be using a single node or multiple nodes, check the limits sections below.
Step 2: Source the dot-file
The above modulefiles will instruct you to complete the set up by running, either on the login node or in your jobscript:
source $foamDotFile
You must run this command or OpenFOAM will not work!
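For example, to set up the single-node GNU build of version 6 on the login node, with a couple of optional sanity checks (WM_PROJECT_VERSION is a standard OpenFOAM environment variable and icoFoam a standard solver, used here purely for illustration):

module load apps/gcc/openfoam/6
source $foamDotFile
echo $WM_PROJECT_VERSION   # should report the loaded version
which icoFoam              # should report a path inside the OpenFOAM installation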
Step 3: Set up a directory in scratch
It is highly recommended that you run jobs in scratch and then copy important files you need to keep back to your home area. OpenFOAM expects the variable FOAM_RUN to be set for your job and to contain the relevant files and directories. To use scratch:
mkdir /scratch/$USER/OpenFoam
export FOAM_RUN=/scratch/$USER/OpenFoam
where $USER is your username and is automatically set when you login. Then
cd $FOAM_RUN
and set up your job/case directories (0, constant, system etc).
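If you want something to test with, one option is to copy a bundled tutorial case into your run directory. The path below is the icoFoam cavity tutorial as laid out in recent versions (older versions arrange the tutorials slightly differently, so adjust the path if necessary):

cd $FOAM_RUN
# $FOAM_TUTORIALS is set when you source $foamDotFile
cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity/cavity .
ls cavity    # should list 0, constant and system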
Running the application
Note that OpenFOAM can only run across more than one compute node if you have loaded one of the -ib (InfiniBand) modulefiles listed above. If you attempt a multi-node run without them, your job will hang but keep using CPU resources until the time limit is reached.
Serial batch job submission
- Ensure you have followed the Set Up Procedure.
- Now, in the top directory ($FOAM_RUN) where you have set up your job/case (the one containing 0, constant and system), create a batch submission script, called for example sge.openfoam, containing:
#!/bin/bash
#$ -V
#$ -cwd
interFoam
replacing interFoam with the OpenFOAM executable appropriate to your job/case.
- Submit the job:
qsub sge.openfoam
- A log of the job will go to the SGE output file, e.g. sge.openfoam.o12345
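If you would rather have the solver output in its own file than mixed into the SGE .o file, a common (optional) OpenFOAM convention is to redirect it on the solver line of the jobscript, e.g.:

interFoam > log.interFoam 2>&1    # log.interFoam is just a conventional name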
Parallel batch job submission – single node
- Ensure you have followed the Set Up Procedure.
- You will need to decompose your case before you can run it. Ensure that you have a file called decomposeParDict in the system directory of your job/case ($FOAM_RUN/system) specifying the number of cores you wish to use with numberOfSubdomains and a suitable decomposition method, e.g. simple, and related settings (see the Further info section below for links to documentation that will help you with this; a minimal decomposeParDict sketch is also given after this list).
- Now run this command from the top-level directory of your job/case (the one containing 0, constant and system, i.e. $FOAM_RUN):
decomposePar
- Next, still in the top directory, create a batch submission script, called for example sge.openfoam.par, containing:
#!/bin/bash
#$ -V
#$ -pe smp.pe 4
#$ -cwd
mpirun -np $NSLOTS interFoam -parallel
replacing interFoam with the OpenFOAM executable appropriate to your job/case. The number after smp.pe must match the numberOfSubdomains setting you made earlier; if it doesn't, your job will fail.
- Submit the job:
qsub sge.openfoam.par
- A log of the job will go to the SGE output file, e.g. sge.openfoam.par.o12345
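For reference, a minimal decomposeParDict sketch for the 4-core example above, using the simple method, is shown below. The standard FoamFile header is omitted, the coefficient values are purely illustrative, and the product of the n entries must equal numberOfSubdomains:

numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);   // 2 x 2 x 1 = 4 subdomains
    delta       0.001;
}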
Single node limits
- The minimum number of cores for any parallel job is 2.
- The maximum number of cores in smp.pe is 16. Jobs run on an Intel node.
- You can run on up to 32 cores if you replace smp.pe with smp-32mc.pe. The job will use an AMD Magny-Cours node.
- Jobs of up to 64 cores can be run using smp-64bd.pe. The job will use an AMD Bulldozer node.
Parallel batch job submission – multi-node (2 or more nodes)
- Ensure you have followed the Set Up Procedure.
- You will need to decompose your case before you can run it. Ensure that you have a file called decomposeParDict in the system directory of your job/case ($FOAM_RUN/system) specifying the number of cores you wish to use with numberOfSubdomains and a suitable decomposition method, e.g. simple, and related settings (see the Further info section below for links to documentation that will help you with this, and the decomposeParDict sketch above).
- Now run this command from the top-level directory of your job/case (the one containing 0, constant and system, i.e. $FOAM_RUN):
decomposePar
- Next, still in the top directory, create a batch submission script, called for example sge.openfoam.par, containing:
#!/bin/bash
#$ -V
#$ -pe orte-32-ib.pe 96
#$ -cwd
mpirun -np $NSLOTS interFoam -parallel
replacing interFoam with the OpenFOAM executable appropriate to your job/case. The number after orte-32-ib.pe must match the numberOfSubdomains setting you made earlier; if it doesn't, your job will fail.
- Submit the job:
qsub sge.openfoam.par
- A log of the job will go to the SGE output file, e.g. sge.openfoam.par.o12345.
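Once a parallel job has finished, the results are left split across processor0, processor1, ... directories. If you want them recombined into ordinary time directories for post-processing, the standard reconstructPar utility can be run from the top-level case directory (put it in a jobscript if it takes more than a few minutes to run):

cd $FOAM_RUN
reconstructPar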
Multi node limits
- The following parallel environments may be used:
  orte-32-ib.pe – jobs must be 64 cores or more and a multiple of 32. Uses AMD Magny-Cours nodes.
  orte-64bd-ib.pe – jobs must be 128 cores or more and a multiple of 64. Uses AMD Bulldozer nodes.
  orte-24-ib.pe – jobs must be 48 cores or more and a multiple of 24. Uses Intel nodes.
- All of the above parallel environments use nodes connected with InfiniBand.
Additional advice
- When changing the number of cores you will need to adjust your input files appropriately and ensure decomposePar is re-run.
- If the decomposePar command takes more than a few minutes to run or uses significant resources on the login node, then please include it in your job submission script instead, on the line before the mpirun, so that it executes as part of the batch job on the compute node (see the sketch after this list).
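For example, a single-node jobscript that performs the decomposition as part of the job might look like this sketch, based on the 4-core example above (adjust the core count and solver to your case):

#!/bin/bash
#$ -V
#$ -cwd
#$ -pe smp.pe 4

# Decompose the case on the compute node, then run the solver in parallel
decomposePar
mpirun -np $NSLOTS interFoam -parallel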
Further info
- The OpenFOAM tutorials are very good and ideal for testing and getting used to setting up a job on the CSF before you proceed to production runs of your own work.
- OpenFOAM documentation.
- swak4foam website.
- Useful swak4foam tutorials on the NTNU HPC Wiki.