OpenFOAM, RheoTool and swak4Foam
Overview
OpenFOAM (Open source Field Operation And Manipulation) is a C++ toolbox for the development of customized numerical solvers, and pre-/post-processing utilities for the solution of continuum mechanics problems, including computational fluid dynamics (CFD).
Some versions have been installed from openfoam.org and some from openfoam.com (see modulefile information below for further notes).
For the versions currently available, see the modulefiles listed below.
Please do not write output at every timestep of your simulation unless you need it – this can create a huge number of files and directories in your scratch area. Modify your controlDict file to turn off writing at every timestep. For example, set purgeWrite 5 to keep just the most recent 5 timesteps' worth of data and set a suitable writeInterval. Please check the controlDict online documentation for more keywords and options.

If you no longer need the individual processorNNN directories after reconstructing your results, you can delete them inside your jobscript using: rm -rf processor*

To check your scratch usage (space consumed and number of files) run the following command on the login node: scrusage
Restrictions on use
OpenFOAM is distributed by the OpenFOAM Foundation and is free and open-source software, released under the GNU General Public License. All CSF users may use this software.
Set up procedure
This is slightly different to most other CSF applications. You must first load a modulefile and then follow the instruction it displays to source a further file:
source $FOAM_BASH
The $FOAM_BASH variable is set by the modulefile.

OpenFOAM expects the variable FOAM_RUN to be set for your job and to point to the directory containing the relevant files and directories. It is recommended to use your scratch area and then copy back any needed results to your home directory, as shown in the sketch below.
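For example, a minimal interactive set-up might look like the following (the version and the scratch directory name are illustrative – pick a modulefile from the lists below and any convenient directory in your scratch area):

module load openfoam/v2006-foss-2020a      # pick a version from the lists below

mkdir -p /scratch/$USER/OpenFoam           # create a run directory on scratch
export FOAM_RUN=/scratch/$USER/OpenFoam    # tell OpenFOAM where your case lives

source $FOAM_BASH                          # set up the OpenFOAM environment (path set by the modulefile)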
In your jobscript you can use one of the following module load commands for the different versions on CSF4.
OpenFOAM.org versions
module load openfoam/10-foss-2021a
module load openfoam/9-foss-2021a
module load openfoam/8-foss-2020a
module load openfoam/7-foss-2019b-20200508
module load openfoam/6-foss-2019b
module load openfoam/5.0-foss-2019b-20180606
In addition, RheoTool can be loaded with some of these versions. This is a separate modulefile which should be loaded after the openfoam modulefile. For example:
# For OF9
module load openfoam/9-foss-2021a
module load rheotool/6.0-foss-2021a

# For OF6
module load openfoam/6-foss-2019b
module load rheotool/3.0-foss-2019b
In addition, swak4Foam can be loaded with some of these versions. This is a separate modulefile which should be loaded after the openfoam modulefile. For example:
# For OF9
module load openfoam/9-foss-2021a
module load swak4foam/2021.05-foss-2021a

# For OF7
module load openfoam/7-foss-2019b-20200508
module load swak4foam/2021.05-foss-2019b

# For OF6
module load openfoam/6-foss-2019b
module load swak4foam/2021.05-foss-2019b

# You can load everything on one line using the default swak4foam version (2021.05-foss-2019b):
module load openfoam/6-foss-2019b swak4foam
OpenFOAM.com versions
This version contains, in addition to the main OpenFOAM tools, “customer sponsored developments and contributions from the community, including the OpenFOAM Foundation. This Official OpenFOAM release contains several man years of client-sponsored developments of which much has been transferred to, but not released in the OpenFOAM Foundation branch”
openfoam/v2306-foss-2021a       # Untested - please report any problems to its-ri-team
openfoam/v2212-foss-2021a
openfoam/v2206-foss-2021a
openfoam/v2106-foss-2021a
openfoam/v2012-foss-2020a
openfoam/v2006-foss-2020a
openfoam/v1912-foss-2020a-220610
openfoam/v1906-foss-2019b
openfoam/v1812-foss-2019b
In addition, swak4Foam can be loaded with some of these versions. This is a separate modulefile which should be loaded after the openfoam modulefile. For example:
# For v2006
module load openfoam/v2006-foss-2020a
module load swak4foam/2021.05-foss-2020a

# If you require swak4Foam for other OF versions, please contact us.
Running the application
Users are requested not to output data at every timestep of their simulation if it is not needed. This can create a huge number of files and directories in your scratch area (we have seen millions of files generated). Please ensure you modify your controlDict file to turn off writing at every timestep. For example, set purgeWrite 5 to keep just the most recent 5 timesteps' worth of data and set a suitable writeInterval. Please check the controlDict online documentation for more keywords and options. An example of the relevant controlDict settings is given below.

To check your scratch usage (space consumed and number of files) run the following command on the login node: scrusage
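For example, the write settings in your system/controlDict might look something like this (a minimal sketch – the values are illustrative and should be chosen to suit your own case):

// Write settings in system/controlDict (fragment)
writeControl    timeStep;   // write based on the timestep counter
writeInterval   100;        // only write results every 100 timesteps
purgeWrite      5;          // keep only the 5 most recent time directories
writeFormat     binary;     // binary output is smaller than ascii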
Serial batch job submission
- Ensure you have followed the Set Up Procedure by making sure it sets FOAM_RUN.
- Now in the top directory ($FOAM_RUN) where you have set up your job/case (the one containing 0, constant and system) create a batch submission script, called for example openfoam.slurm, containing:

  #!/bin/bash --login
  # Job runs in current dir by default

  # Load the required version
  module load openfoam/v2006-foss-2020a

  mkdir -p /scratch/$USER/OpenFoam
  export FOAM_RUN=/scratch/$USER/OpenFoam
  source $FOAM_BASH       # Note: Different on CSF3
  cd $FOAM_RUN

  interFoam

  Replace interFoam with the OpenFOAM executable appropriate to your job/case.
- Submit the job: sbatch openfoam.slurm
- A log of the job will go to the SLURM output file, e.g. slurm-12345.out.
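Once submitted, you can check on the job with the usual SLURM commands, for example (the job ID shown is illustrative):

squeue -u $USER          # is the job still queued or running?
tail slurm-12345.out     # inspect the solver output so far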
Parallel batch job submission – single node
- Ensure you have followed the Set Up Procedure by making sure it sets FOAM_RUN.
- You will need to decompose your case before you can run it. Ensure that you have a file called decomposeParDict in your job/case system directory, specifying the number of cores you wish to use with numberOfSubdomains and a suitable decomposition method (e.g. simple) and related settings (see Further info below for links to documentation that will help you with this, and the example decomposeParDict after this list).
- Now run this command from the top level directory of your job/case (the one containing 0, constant and system):

  # This will run an interactive single-core job
  module load openfoam/your-required-version
  srun --pty decomposePar

  # Alternatively submit a single-core batch job
  module load openfoam/your-required-version
  sbatch -J decomposePar --wrap="decomposePar"

- Once your decompose job has finished, still in the top directory, create a batch submission script, called for example openfoam-par.slurm, containing:

  #!/bin/bash --login
  # Job runs in the current directory by default

  #SBATCH -p multicore    # Parallel single-node job
  #SBATCH -n 4            # 4 cores

  # Load the required version
  module purge
  module load openfoam/v2006-foss-2020a

  mkdir -p /scratch/$USER/OpenFoam
  export FOAM_RUN=/scratch/$USER/OpenFoam
  source $FOAM_BASH       # Note: Different on CSF3
  cd $FOAM_RUN

  # mpirun knows how many MPI processes to start
  mpirun interFoam -parallel

  Replace interFoam with the OpenFOAM executable appropriate to your job/case. The number after #SBATCH -n must match the numberOfSubdomains setting you made earlier; if it doesn't, your job will fail.
- Submit the job: sbatch openfoam-par.slurm
- A log of the job will go to the SLURM output file, e.g. slurm-12345.out.
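As referenced above, the core decomposition settings in system/decomposeParDict for a 4-core job might look something like this (a minimal sketch – the simpleCoeffs values are illustrative and the file also needs the usual FoamFile header):

// system/decomposeParDict (fragment)
numberOfSubdomains  4;        // must match the core count requested with #SBATCH -n

method              simple;   // simple geometric decomposition

simpleCoeffs
{
    n       (2 2 1);          // 2 x 2 x 1 = 4 subdomains
    delta   0.001;
}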
Single node limits
- The minimum number of cores for any parallel job is 2.
- The maximum number of cores in the multicore partition is 40.
Parallel batch job submission – multi-node (2 or more nodes)
- Ensure you have followed the Set Up Procedure by making sure it sets FOAM_RUN.
- You will need to decompose your case before you can run it. Ensure that you have a file called decomposeParDict in the system directory of your job/case (under $FOAM_RUN), specifying the number of cores you wish to use with numberOfSubdomains and a suitable decomposition method (e.g. simple) and related settings (see Further info below for links to documentation that will help you with this).
- Now run this command from the top level directory of your job/case (the one containing 0, constant and system, i.e. $FOAM_RUN):

  decomposePar

- Next, still in the top directory, create a batch submission script, called for example openfoam-par.slurm, containing:

  #!/bin/bash --login
  # Job runs in the current directory by default

  #SBATCH -p multinode    # Parallel multi-node job
  #SBATCH -N 2            # Number of 40-core compute nodes (2 or more)

  ###### Alternatively, you can specify the total number of cores
  # #SBATCH -n 80         # 80 cores = 2 x 40-core compute nodes
  ######

  # Load the required version
  module purge
  module load openfoam/v2006-foss-2020a

  mkdir -p /scratch/$USER/OpenFoam
  export FOAM_RUN=/scratch/$USER/OpenFoam
  source $FOAM_BASH       # Note: Different on CSF3
  cd $FOAM_RUN

  # mpirun knows how many MPI processes to start
  mpirun interFoam -parallel

  Replace interFoam with the OpenFOAM executable appropriate to your job/case. The number after #SBATCH -n (or the number of compute nodes multiplied by 40) must match the numberOfSubdomains setting you made earlier; if it doesn't, your job will fail.
- Submit the job: sbatch openfoam-par.slurm
- A log of the job will go to the SLURM output file, e.g. slurm-12345.out.
Multinode limits
- multinode – Jobs must be 80 or more cores, in multiples of 40.
Reducing disk space
OpenFOAM can generate a lot of output files – especially if the results of every time-step are written to disk (we strongly discourage this!). Once you've post-processed your time-step files, do you need to keep them? If not, you could simply delete the files:
# Caution - this will delete a lot of files - scratch is NOT backed up!
cd ~/scratch/my-openfoam-sim
rm -rf processor*
If you ran the reconstructPar app to recombine the results from each CPU, you will still have a file for every time-step in the postProcessing directory. Do you still need these? For example, if you have generated a movie file of the results, you might not want to keep the individual time-step files:
# Caution - this will delete a lot of files - scratch is NOT backed up!
cd ~/scratch/my-openfoam-sim
rm -rf postProcessing
If you do want to keep the files, archiving them into a single compressed file will save a lot of space. While time-step files might be individually small, the fact that the GPFS filesystem has a minimum block size means very small files actually consume more space than is really needed for them.

The following jobscript will archive all of your time-step files into a single compressed tar archive:
#!/bin/bash --login
#SBATCH -p multicore    # Parallel single-node job
#SBATCH -n 4            # 4 cores for parallel compression

module load pigz/2.4-gcccore-9.3.0

# Compress using all of the cores allocated to the job
tar cf - processor* postProcessing | pigz -p $SLURM_NTASKS > my-openfoam-sim.tar.gz
Submit the job from the directory where your OpenFOAM files are located, using sbatch jobscript, where jobscript is the name of your file.
Once the job has finished you can remove the individual directories as shown above.
You should also copy the my-openfoam-sim.tar.gz
file to your home directory:
cp my-openfoam-sim.tar.gz ~
or to your Research Data Storage area.
If you ever need to extract the files from the archive, simply run:
tar xzf my-openfoam-sim.tar.gz
It will recreate the above directories and files in your current directory.
Additional advice
- When changing the number of cores you will need to adjust your input files appropriately and ensure decomposePar is re-run.
- If the decomposePar command takes more than a few minutes to run or uses significant resources on the login node, then please include it in your job submission script instead, on the line before the mpirun command, so that it executes as part of the batch job on the compute node (see the sketch below).
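For example, the relevant part of a parallel jobscript would then look something like this (a sketch based on the jobscripts above):

cd $FOAM_RUN

# Decompose the case on the compute node as part of the batch job...
decomposePar

# ...then run the solver in parallel
mpirun interFoam -parallel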
Further info
- The OpenFOAM tutorials are very good and ideal for testing and getting used to setting up a job on the CSF before you proceed to production runs of your own work (see the sketch after this list).
- OpenFOAM documentation.
- swak4foam website.
- Useful swak4foam tutorials on the NTNU HPC Wiki.
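As an example of using the tutorials mentioned above, a quick test run might look something like the following. This is a minimal sketch using the icoFoam cavity case; the exact tutorial paths vary between OpenFOAM versions (in some versions the case sits one level deeper, e.g. cavity/cavity), so check $FOAM_TUTORIALS for your version first.

# After loading a modulefile, setting FOAM_RUN and sourcing $FOAM_BASH as above
mkdir -p $FOAM_RUN
cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity $FOAM_RUN/
cd $FOAM_RUN/cavity

blockMesh      # generate the mesh
icoFoam        # run the solver in serial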