OpenFOAM
Overview
OpenFOAM (Open source Field Operation And Manipulation) is a C++ toolbox for the development of customized numerical solvers, and pre-/post-processing utilities for the solution of continuum mechanics problems, including computational fluid dynamics (CFD).
Some versions have been installed from openfoam.org and some from openfoam.com (see modulefile information below for further notes).
See the modulefile list below for available versions.
- Version 5.20171030 was compiled using gcc 6.4.0 and openmpi 4.1.0
- Version 6 was compiled using gcc 6.3.0 and openmpi 3.1.1
- Version 7 was compiled using gcc 8.2.0 and openmpi 3.1.4
- Version 8 was compiled using gcc 8.2.0 and openmpi 4.0.1
- Version 9 was compiled using gcc 9.3.0 and openmpi 4.1.0
- v1812 was compiled using gcc 6.4.0 and openmpi 4.0.1
- v1906, v1912 and v2012 were compiled using gcc 8.2.0 and openmpi 4.0.1
- v2106 was compiled using gcc 8.2.0 and openmpi 4.1.0
- v2212 was compiled using gcc 9.3.0 and openmpi 4.1.0
- v2306 was compiled using gcc 12.2.0 and openmpi 4.1.2
- v2312 was compiled using gcc 12.2.0 and openmpi 4.1.2
Restrictions on use
OpenFOAM is distributed by the OpenFOAM Foundation and is free, open-source software released under the GNU General Public License. All CSF users may use this software.
Set up procedure
The set up procedure is slightly different to most other CSF applications.
OpenFOAM expects the environment variable FOAM_RUN to be set for your job and to point to a directory containing the relevant files and directories. It is recommended to use scratch and then copy back any needed results to your home directory.
In your jobscript you can use one of the following module load commands for the different versions on CSF3.
OpenFOAM.org versions
```
module load apps/gcc/openfoam/12          # Includes OpenFOAM.org 12 # New Nov 2024
module load apps/gcc/openfoam/11          # Includes OpenFOAM.org 11
module load apps/gcc/openfoam/10          # Includes OpenFOAM.org 10
module load apps/gcc/openfoam/9           # Includes OpenFOAM.org 9, rheotools
module load apps/gcc/openfoam/8           # Includes OpenFOAM.org 8, swak4foam
module load apps/gcc/openfoam/7-addons    # Includes OpenFOAM.org 7, swak4foam and rheotools
module load apps/gcc/openfoam/7           # Previously accidentally labelled as v1906
module load apps/gcc/openfoam/6
module load apps/gcc/openfoam/5.20171030
module load apps/gcc/openfoam/4.1
module load apps/gcc/openfoam/2.4.0-mnf   # Includes MicroNanoFlows tools, e.g. dsmcfoam+
module load apps/gcc/openfoam/2.4.0
module load apps/gcc/openfoam/2.3.1
module load apps/gcc/openfoam/2.30
```
Version 6 includes rheoTool version 3 and Swak4Foam version 0.4.2. We do not advise using versions lower than 5 as they may be removed in the future.
OpenFOAM.com versions
These versions contain, in addition to the main OpenFOAM tools, “customer sponsored developments and contributions from the community, including the OpenFOAM Foundation. This Official OpenFOAM release contains several man years of client-sponsored developments of which much has been transferred to, but not released in the OpenFOAM Foundation branch”.
```
module load apps/gcc/openfoam/v2312    # Includes METIS decomp library
module load apps/gcc/openfoam/v2306    # Includes METIS decomp library
module load apps/gcc/openfoam/v2212    # Includes METIS decomp library and swak4foam
module load apps/gcc/openfoam/v2106
module load apps/gcc/openfoam/v2012    # Includes METIS decomp library and swak4foam
module load apps/gcc/openfoam/v2006    # Includes METIS decomp library and swak4foam
module load apps/gcc/openfoam/v1912    # Includes METIS decomp library and swak4foam
module load apps/gcc/openfoam/v1906
module load apps/gcc/openfoam/v1812
```
Running the application
Users are requested not to output data at every timestep of their simulation if it is not needed. This can create a huge number of files and directories in your scratch area (we have seen millions of files generated). Please ensure you modify your controlDict file to turn off writing at every timestep. For example, set purgeWrite 5 to keep just the most recent 5 timesteps' worth of output, and set a suitable writeInterval. Please check the controlDict online documentation for more keywords and options.
To check your scratch usage (space consumed and number of files), run the following command on the login node: scrusage
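For illustration, the relevant entries in system/controlDict might look like the following (the interval and purge values shown are examples only - choose values appropriate to your simulation):

```
// Example write-control entries for system/controlDict (values illustrative)
writeControl    timeStep;   // write based on a timestep count
writeInterval   100;        // write every 100 timesteps rather than every one
purgeWrite      5;          // keep only the 5 most recent write directories
```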
Serial batch job submission
- Ensure you have followed the Set up Procedure (loading the modulefile and sourcing the dot file).
- Now in the top directory ($FOAM_RUN) where you have set up your job/case (the one containing 0, constant and system) create a batch submission script, called for example sge.openfoam, containing:

```
#!/bin/bash --login
#$ -cwd
module load apps/gcc/openfoam/6
mkdir -p /scratch/$USER/OpenFoam
export FOAM_RUN=/scratch/$USER/OpenFoam
source $foamDotFile
cd $FOAM_RUN
interFoam
```
Replace interFoam with the OpenFOAM executable appropriate to your job/case.
- Submit the job: qsub sge.openfoam
- A log of the job will go to the SGE output file, e.g. sge.openfoam.o12345
Parallel batch job submission – single node multi-core
- Ensure you have followed the Set up Procedure (loading the modulefile and sourcing the dot file).
- You will need to decompose your case before you can run it. Ensure that you have a file called decomposeParDict in your job/case system directory, specifying the number of cores you wish to use with numberOfSubdomains and a suitable decomposition method, e.g. simple, and related settings (see Further Information below for links to documentation that will help you with this).
- Now run this command from the top level directory of your job/case (the directory containing the 0, constant and system directories):

```
# All OpenFOAM versions have this serial executable
decomposePar
```
Note that the decomposePar app can use a lot of memory for dense or complicated meshes. If your job doesn't complete successfully, you can achieve the same result with a parallel job using:

```
# Only "OpenFOAM.com" versions (v1812, ..., v2012 and so on) have this executable
mpirun -n $NSLOTS redistributePar -decompose -parallel
```
- Next, still in the top directory, create a batch submission script, called for example sge.openfoam.par, containing:

```
#!/bin/bash --login
#$ -pe smp.pe 4
#$ -cwd
module load apps/gcc/openfoam/6
mkdir -p /scratch/$USER/OpenFoam
export FOAM_RUN=/scratch/$USER/OpenFoam
source $foamDotFile
cd $FOAM_RUN
mpirun -np $NSLOTS interFoam -parallel
```
Replace interFoam with the OpenFOAM executable appropriate to your job/case. The number after smp.pe must match the numberOfSubdomains setting you made earlier; if it doesn't, your job will fail.
- Submit the job: qsub sge.openfoam.par
- A log of the job will go to the SGE output file, e.g. sge.openfoam.par.o12345
- After the simulation is finished you may need to recombine the distributed mesh (and results) into one mesh. The following can be run in a serial (1-core) job to do that:

```
# All OpenFOAM versions have this serial executable
reconstructPar
```

Alternatively, to perform this step in a parallel job (e.g. at the end of your main simulation job) you can use:

```
# Only "OpenFOAM.com" versions (v1812, ..., v2012 and so on) have this executable
mpirun -n $NSLOTS redistributePar -reconstruct -parallel
```

This will generate a directory for each time-step in your simulation, or just the last time-step if that is the only one you kept.
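As a sketch, a minimal system/decomposeParDict for the 4-core example above using the simple method might look like this (the subdomain split shown is illustrative - the product of the three values must equal numberOfSubdomains):

```
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  4;          // must match the core count in your jobscript

method              simple;

simpleCoeffs
{
    n               (2 2 1);    // 2 x 2 x 1 = 4 subdomains
}
```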
Single node limits
- The minimum number of cores for any parallel job is 2.
- The maximum number of cores in smp.pe is 32. Jobs run on an Intel node.
Parallel batch job submission – multi-node (2 or more compute nodes)
- Ensure you have followed the Set up Procedure (loading the modulefile and sourcing the dot file).
- You will need to decompose your case before you can run it. Ensure that you have a file called decomposeParDict in your job/case system directory, specifying the number of cores you wish to use with numberOfSubdomains and a suitable decomposition method, e.g. simple, and related settings (see Further Information below for links to documentation that will help you with this).
- Now run this command from the top level directory of your job/case (the directory containing the 0, constant and system directories):

```
# All OpenFOAM versions have this serial executable
decomposePar
```
Note that the decomposePar app can use a lot of memory for dense or complicated meshes. If your job doesn't complete successfully, you can achieve the same result with a parallel job using:

```
# Only "OpenFOAM.com" versions (v1812, ..., v2012 and so on) have this executable
mpirun -n $NSLOTS redistributePar -decompose -parallel
```
- Next, still in the top directory, create a batch submission script, called for example sge.openfoam.par, containing:

```
#!/bin/bash --login
#$ -pe mpi-24-ib.pe 48     # Must be 48 or more cores in multiples of 24
#$ -cwd
module load apps/gcc/openfoam/6
mkdir -p /scratch/$USER/OpenFoam
export FOAM_RUN=/scratch/$USER/OpenFoam
source $foamDotFile
cd $FOAM_RUN
mpirun -np $NSLOTS interFoam -parallel
```
Replace interFoam with the OpenFOAM executable appropriate to your job/case. The number after mpi-24-ib.pe must match the numberOfSubdomains setting you made earlier; if it doesn't, your job will fail.
- Submit the job: qsub sge.openfoam.par
- A log of the job will go to the SGE output file, e.g. sge.openfoam.par.o12345.
- After the simulation is finished you may need to recombine the distributed mesh (and results) into one mesh. The following can be run in a serial (1-core) job to do that:

```
# All OpenFOAM versions have this serial executable
reconstructPar
```

Alternatively, to perform this step in a parallel job (e.g. at the end of your main simulation job) you can use:

```
# Only "OpenFOAM.com" versions (v1812, ..., v2012 and so on) have this executable
mpirun -n $NSLOTS redistributePar -reconstruct -parallel
```

This will generate a directory for each time-step in your simulation, or just the last time-step if that is the only one you kept.
Multi node limits
- The following parallel environment may also be used: mpi-24-ib.pe – jobs must be 48 or more cores, in multiples of 24, up to a maximum of 120. Uses Intel nodes.
- All of the above parallel environments use nodes connected with Infiniband.
Reducing disk space
OpenFOAM can generate a lot of output files – especially if the results of every time-step are written to disk (we strongly discourage this!). Once you've post-processed your time-step files, do you need to keep them? If not, you could simply delete the files:

```
# Caution - this will delete a lot of files - scratch is NOT backed up!
cd ~/scratch/my-openfoam-sim
rm -rf processor*
```
If you ran the reconstructPar app to recombine the results from each CPU, you will still have a file for every time-step in the postProcessing directory. Do you need these? For example, if you have generated a movie file of the results, you might not want the individual time-step files:

```
# Caution - this will delete a lot of files - scratch is NOT backed up!
cd ~/scratch/my-openfoam-sim
rm -rf postProcessing
```
If you do want to keep the files, archiving them into a single compressed file will save a lot of space. While time-step files might be individually small, the fact that the Lustre filesystem has a minimum block size means very small files actually consume more space than is really needed for them.
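You can see the block-size effect with a quick experiment (tiny.txt is just a throw-away example file; the allocated size reported depends on the filesystem):

```shell
# Create a 10-byte file and compare its logical size with its on-disk allocation
printf '0123456789' > tiny.txt
du --apparent-size -B1 tiny.txt   # logical size: 10 bytes
du -B1 tiny.txt                   # allocated size: rounded up to a whole block
rm tiny.txt
```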
The following jobscript will archive all of your time-step files into a single compressed tar archive:

```
#!/bin/bash
#$ -cwd
#$ -pe smp.pe 4
#$ -l short
#$ -j y
module load tools/gcc/pigz/2.4
tar cf - processor* postProcessing | pigz -p $NSLOTS > my-openfoam-sim.tar.gz
```
Submit the job from the directory where your OpenFOAM files are located, using qsub jobscript, where jobscript is the name of your file.
Once the job has finished you can remove the individual directories as shown above.
You should also copy the my-openfoam-sim.tar.gz file to your home directory:

```
cp my-openfoam-sim.tar.gz ~
```
or to your Research Data Storage area.
If you ever need to extract the files from the archive, simply run:

```
tar xzf my-openfoam-sim.tar.gz
```

It will recreate the above directories and files in your current directory.
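Before deleting the original directories it is worth confirming the archive actually lists what you are about to remove. The following self-contained sketch uses throw-away dummy data (the demo directory and file names are illustrative, not part of a real case):

```shell
# Demonstration with throw-away data: archive, verify the listing, then delete
mkdir -p demo/processor0 demo/postProcessing
echo "0 0 0" > demo/processor0/U
echo "t p"   > demo/postProcessing/probes.dat
cd demo

# Archive (as the jobscript above does, but with plain gzip here)
tar cf - processor* postProcessing | gzip > sim.tar.gz

# Verify the archive lists every file you are about to delete
tar tzf sim.tar.gz

# Only delete once the listing looks right
rm -rf processor* postProcessing
```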
Additional advice
- When changing the number of cores you will need to adjust your input files appropriately and ensure decomposePar is re-run.
- If the decomposePar command takes more than a few minutes to run or uses significant resources on the login node, please include it in your job submission script instead, on the line before the mpirun, so that it executes as part of the batch job on the compute node.
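As a sketch based on the single-node example above, the decomposition can simply precede the solver in the jobscript:

```
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 4
module load apps/gcc/openfoam/6
mkdir -p /scratch/$USER/OpenFoam
export FOAM_RUN=/scratch/$USER/OpenFoam
source $foamDotFile
cd $FOAM_RUN
# Decompose on the compute node, then run the solver on the same cores
decomposePar
mpirun -np $NSLOTS interFoam -parallel
```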
Further info
- The OpenFOAM tutorials are very good and ideal for testing and getting used to setting up a job on the CSF before you proceed to production runs of your own work. A tutorial on how to run OpenFOAM v9 on CSF3 is available in:
/opt/apps/apps/gcc/openfoam/9/openfoam9_training_csf3_rans.tar.gz
- OpenFOAM documentation.
- swak4foam website.
- Useful swak4foam tutorials on the NTNU HPC Wiki.