{"id":287,"date":"2020-08-14T14:50:26","date_gmt":"2020-08-14T13:50:26","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf4\/?page_id=287"},"modified":"2026-03-31T17:17:22","modified_gmt":"2026-03-31T16:17:22","slug":"openfoam","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/openfoam\/","title":{"rendered":"OpenFOAM, RheoTool and swak4Foam"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>OpenFOAM (Open source Field Operation And Manipulation) is a C++ toolbox for the development of customized numerical solvers, and pre-\/post-processing utilities for the solution of continuum mechanics problems, including computational fluid dynamics (CFD).<\/p>\n<p>Some versions have been installed from <a href=\"https:\/\/www.openfoam.org\/\">openfoam.org<\/a> and some from <a href=\"https:\/\/www.openfoam.com\/\">openfoam.com<\/a> (see modulefile information below for further notes).<\/p>\n<p>Versions currently available are: see modulefiles below.<\/p>\n<div class=\"warning\">\nUser are requested <em>not<\/em> to output data every timestep of their simulation if not needed. This can create a huge number of files and directories in your scratch area (we have seen millions of files generated). Please ensure you modify your <code>controlDict<\/code> file to turn off writing at every timestep. For example, set <code>purgeWrite 5<\/code> to keep just 5 timesteps worth and set a suitable <code>writeInterval<\/code>. 
Please check the <a href=\"http:\/\/www.openfoam.com\/documentation\/user-guide\/controlDict.php\">controlDict online documentation<\/a> for more keywords and options.<\/p>\n<p>If you no longer need the individual <code>processorNNN<\/code> directories after reconstructing your case, you can delete them inside your jobscript using: <code>rm -rf processor*<\/code><\/p>\n<p>To check your scratch usage (space consumed and number of files) run the following command on the login node: <code>scrusage<\/code><\/p>\n<\/div>\n<h2>Restrictions on use<\/h2>\n<p>OpenFOAM is distributed by the OpenFOAM Foundation and is freely available under open source licenses. All CSF users may use this software.<\/p>\n<h2>Set up procedure<\/h2>\n<p>This is slightly different from most other CSF applications. You must first load a modulefile and then follow the instructions it displays to source a further file:<\/p>\n<pre>\r\nsource $FOAM_BASH\r\n<\/pre>\n<p>The <code>$FOAM_BASH<\/code> variable is set by the modulefile.<\/p>\n<p>OpenFOAM expects the variable <code>FOAM_RUN<\/code> to be set for your job and to contain the relevant files and directories. It is recommended to use scratch and then copy back any needed results to your home directory.<\/p>\n<p>In your jobscript you can use one of the following module load commands for the different versions on CSF4.<\/p>\n<h3>OpenFOAM.org versions<\/h3>\n<pre>\r\nmodule load openfoam\/13-foss-2023a\r\nmodule load openfoam\/12-foss-2023a\r\nmodule load openfoam\/10-foss-2021a\r\nmodule load openfoam\/9-foss-2021a\r\nmodule load openfoam\/8-foss-2020a\r\nmodule load openfoam\/7-foss-2019b-20200508\r\nmodule load openfoam\/6-foss-2019b\r\nmodule load openfoam\/5.0-foss-2019b-20180606\r\n<\/pre>\n<p>In addition, <strong>RheoTool<\/strong> can be loaded with some of these versions. This is a separate modulefile which should be loaded <em>after<\/em> the openfoam modulefile. 
For example:<\/p>\n<pre>\r\n# For OF9\r\nmodule load openfoam\/9-foss-2021a\r\nmodule load rheotool\/6.0-foss-2021a\r\n\r\n# For OF6\r\nmodule load openfoam\/6-foss-2019b \r\nmodule load rheotool\/3.0-foss-2019b\r\n<\/pre>\n<p>In addition, <strong>swak4Foam<\/strong> can be loaded with some of these versions. This is a separate modulefile which should be loaded <em>after<\/em> the openfoam modulefile. For example:<\/p>\n<pre>\r\n# For OF9\r\nmodule load openfoam\/9-foss-2021a\r\nmodule load swak4foam\/2021.05-foss-2021a\r\n\r\n# For OF7\r\nmodule load openfoam\/7-foss-2019b-20200508\r\nmodule load swak4foam\/2021.05-foss-2019b\r\n\r\n# For OF6\r\nmodule load openfoam\/6-foss-2019b \r\nmodule load swak4foam\/2021.05-foss-2019b\r\n\r\n# You can load everything on one line using the default swak4foam version (2021.05-foss-2019b):\r\nmodule load openfoam\/6-foss-2019b swak4foam\r\n<\/pre>\n<h3>OpenFOAM.com versions<\/h3>\n<p>These versions contain, in addition to the main OpenFOAM tools, &#8220;customer sponsored developments and contributions from the community, including the OpenFOAM Foundation. This Official OpenFOAM release contains several man years of client-sponsored developments of which much has been transferred to, but not released in the OpenFOAM Foundation branch&#8221;.<\/p>\n<pre>\r\nmodule load apps\/gcc\/openfoam\/v2506\r\n\r\nmodule load openfoam\/v2306-foss-2021a      # Untested - please report any problems to its-ri-team\r\nmodule load openfoam\/v2212-foss-2021a\r\nmodule load openfoam\/v2206-foss-2021a\r\nmodule load openfoam\/v2106-foss-2021a\r\nmodule load openfoam\/v2012-foss-2020a\r\nmodule load openfoam\/v2006-foss-2020a\r\nmodule load openfoam\/v1912-foss-2020a-220610\r\nmodule load openfoam\/v1906-foss-2019b\r\nmodule load openfoam\/v1812-foss-2019b\r\n<\/pre>\n<p>In addition, <strong>swak4Foam<\/strong> can be loaded with some of these versions. This is a separate modulefile which should be loaded <em>after<\/em> the openfoam modulefile. 
For example:<\/p>\n<pre>\r\n# For v2006\r\nmodule load openfoam\/v2006-foss-2020a\r\nmodule load swak4foam\/2021.05-foss-2020a\r\n\r\n# If you require swak4Foam for other OF versions, please contact us.\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Users are requested <em>not<\/em> to output data at every timestep of their simulation unless it is needed. Doing so can create a huge number of files and directories in your scratch area (we have seen millions of files generated). Please ensure you modify your <code>controlDict<\/code> file to turn off writing at every timestep. For example, set <code>purgeWrite 5<\/code> to keep just the most recent 5 timesteps&#8217; worth of data and set a suitable <code>writeInterval<\/code>. Please check the <a href=\"http:\/\/www.openfoam.com\/documentation\/user-guide\/controlDict.php\">controlDict online documentation<\/a> for more keywords and options.<\/p>\n<p>To check your scratch usage (space consumed and number of files) run the following command on the login node: <code>scrusage<\/code><\/p>\n<h3>Serial batch job submission<\/h3>\n<ol>\n<li>Ensure you have followed the Set Up Procedure above and that <code>FOAM_RUN<\/code> is set.<\/li>\n<li>Now in the top directory (<code>$FOAM_RUN<\/code>) where you have set up your job\/case (the one containing <code>0<\/code>, <code>constant<\/code> and <code>system<\/code>), create a batch submission script, called for example <code>openfoam.slurm<\/code>, containing:\n<pre>\r\n#!\/bin\/bash --login\r\n# Job runs in current dir by default\r\n\r\n# Load the required version\r\nmodule load openfoam\/v2006-foss-2020a\r\n\r\nmkdir -p \/scratch\/$USER\/OpenFoam\r\nexport FOAM_RUN=\/scratch\/$USER\/OpenFoam\r\nsource $FOAM_BASH                          # Note: Different on CSF3\r\ncd $FOAM_RUN\r\ninterFoam\r\n<\/pre>\n<p>replacing <code>interFoam<\/code> with the OpenFOAM executable appropriate to your job\/case.<\/p><\/li>\n<li>Submit the job: <code>sbatch openfoam.slurm<\/code><\/li>\n<li>A log of the job will go to 
the SLURM output file, e.g. <code>slurm-12345.out<\/code><\/li>\n<\/ol>\n<h3>Parallel batch job submission &#8211; single node<\/h3>\n<ol class=\"gaplist\">\n<li>Ensure you have followed the Set Up Procedure above and that <code>FOAM_RUN<\/code> is set.<\/li>\n<li>You will need to decompose your case before you can run it. Ensure that you have a file called <code>decomposeParDict<\/code> in your job\/case <code>system<\/code> directory, specifying the number of cores you wish to use with <code>numberOfSubdomains<\/code> and a suitable decomposition method (e.g. <code>simple<\/code>) and related settings (see Further Information below for links to documentation that will help you with this).<\/li>\n<li>Now run this command from the top level directory of your job\/case (the one containing <code>0<\/code>, <code>constant<\/code> and <code>system<\/code>):\n<pre>\r\n# This will run an interactive single-core job\r\nmodule load openfoam\/<em>your-required-version<\/em>\r\nsrun --pty decomposePar\r\n\r\n# Alternatively submit a single-core batch job\r\nmodule load openfoam\/<em>your-required-version<\/em>\r\nsbatch -J decomposePar --wrap=\"decomposePar\"\r\n<\/pre>\n<\/li>\n<li>Once your decompose job has finished, still in the top directory, create a batch submission script, called for example <code>openfoam-par.slurm<\/code>, containing:\n<pre>\r\n#!\/bin\/bash --login\r\n# Job runs in the current directory by default\r\n\r\n#SBATCH -p multicore     # Parallel single-node job\r\n#SBATCH -n 4             # 4 cores\r\n\r\n# Load the required version\r\nmodule purge\r\nmodule load openfoam\/v2006-foss-2020a\r\n\r\nmkdir -p \/scratch\/$USER\/OpenFoam\r\nexport FOAM_RUN=\/scratch\/$USER\/OpenFoam\r\nsource $FOAM_BASH                          # Note: Different on CSF3\r\ncd $FOAM_RUN\r\n\r\n# mpirun knows how many MPI processes to start\r\nmpirun interFoam -parallel\r\n<\/pre>\n<p>replacing <code>interFoam<\/code> with the OpenFOAM executable appropriate to your 
job\/case.<\/p>\n<p>The number after <code>#SBATCH -n <\/code> must match the <code>numberOfSubdomains<\/code> setting you made earlier; if it doesn&#8217;t, your job will fail.<\/p><\/li>\n<li>Submit the job: <code>sbatch openfoam-par.slurm<\/code><\/li>\n<li>A log of the job will go to the SLURM output file, e.g. <code>slurm-12345.out<\/code><\/li>\n<\/ol>\n<h3>Single node limits<\/h3>\n<ul>\n<li>The minimum number of cores for any parallel job is 2.<\/li>\n<li>The maximum number of cores in the <code>multicore<\/code> partition is 40.<\/li>\n<\/ul>\n<h3>Parallel batch job submission &#8211; multi-node (2 or more nodes)<\/h3>\n<ol>\n<li>Ensure you have followed the Set Up Procedure above and that <code>FOAM_RUN<\/code> is set.<\/li>\n<li>You will need to decompose your case before you can run it. Ensure that you have a file called <code>decomposeParDict<\/code> in your job\/case <code>system<\/code> directory (<code>$FOAM_RUN<\/code>) specifying the number of cores you wish to use with <code>numberOfSubdomains<\/code> and a suitable decomposition method, e.g. 
<code>simple<\/code>, and related settings (see Further Information below for links to documentation that will help you with this).<\/li>\n<li>Now run this command from the top level directory of your job\/case (the one containing <code>0<\/code>, <code>constant<\/code> and <code>system<\/code>, <code>$FOAM_RUN<\/code>):\n<pre>decomposePar<\/pre>\n<\/li>\n<li>Next, still in the top directory, create a batch submission script, called for example <code>openfoam-par.slurm<\/code>, containing:\n<pre>\r\n#!\/bin\/bash --login\r\n# Job runs in the current directory by default\r\n\r\n#SBATCH -p multinode     # Parallel multi-node job\r\n#SBATCH -N 2             # Number of 40-core compute nodes (2 or more)\r\n\r\n###### Alternatively, you can specify the total number of cores\r\n#  #SBATCH -n 80            # 80 cores = 2 x 40-core compute nodes\r\n######\r\n\r\n# Load the required version\r\nmodule purge\r\nmodule load openfoam\/v2006-foss-2020a\r\n\r\nmkdir -p \/scratch\/$USER\/OpenFoam\r\nexport FOAM_RUN=\/scratch\/$USER\/OpenFoam\r\nsource $FOAM_BASH                          # Note: Different on CSF3\r\ncd $FOAM_RUN\r\n\r\n# mpirun knows how many MPI processes to start\r\nmpirun interFoam -parallel\r\n<\/pre>\n<p>replacing <code>interFoam<\/code> with the OpenFOAM executable appropriate to your job\/case.<\/p>\n<p>The number after <code>#SBATCH -n <\/code>, or the number of compute nodes multiplied by 40, must match the <code>numberOfSubdomains<\/code> setting you made earlier; if it doesn&#8217;t, your job will fail.<\/p><\/li>\n<li>Submit the job: <code>sbatch openfoam-par.slurm<\/code><\/li>\n<li>A log of the job will go to the SLURM output file, e.g. <code>slurm-12345.out<\/code>.<\/li>\n<\/ol>\n<h3>Multinode limits<\/h3>\n<ul>\n<li><code>multinode<\/code> &#8211; Jobs must be 80 or more cores, in multiples of 40.<\/li>\n<\/ul>\n<h2>Reducing disk space<\/h2>\n<p>OpenFOAM can generate a lot of output files &#8211; especially if the results of every time-step are written to disk (we strongly discourage this!). 
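To see how many files a case has already generated, the CSF-specific <code>scrusage<\/code> command is easiest; a portable alternative is a standard <code>find<\/code> pipeline (a sketch &#8211; run it from the top of your case directory; the <code>processor*<\/code> pattern matches the per-rank directories created by <code>decomposePar<\/code>):

```shell
# Count the files under the processor* directories created by decomposePar;
# a large count suggests it is time to archive or prune your results.
find . -path './processor*' -type f | wc -l
```
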
Once you&#8217;ve post-processed your time-step files, do you need to keep them? If not, you can simply delete the files:<\/p>\n<pre>\r\n# Caution - this will delete a lot of files - scratch is NOT backed up!\r\ncd ~\/scratch\/my-openfoam-sim\r\nrm -rf processor*\r\n<\/pre>\n<p>If you ran the <code>reconstructPar<\/code> app to recombine the results from each CPU, you will still have a file for every time-step in the <code>postProcessing<\/code> directory. Do you need these? For example, if you have generated a movie file of the results, you might not want the individual time-step files:<\/p>\n<pre>\r\n# Caution - this will delete a lot of files - scratch is NOT backed up!\r\ncd ~\/scratch\/my-openfoam-sim\r\nrm -rf postProcessing\r\n<\/pre>\n<p>If you do want to keep the files, archiving them into a single compressed file will save a lot of space. While time-step files might be individually small, the fact that the GPFS filesystem has a minimum block size means very small files actually consume more space than is really needed for them.<\/p>\n<p>The following jobscript will archive all of your time-step files into a single compressed tar archive:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#SBATCH -p multicore\r\n#SBATCH -n 4\r\n\r\nmodule purge\r\nmodule load pigz\/2.4-gcccore-9.3.0\r\n\r\n# Name of the archive file we want to write\r\nARCHIVE=my-openfoam-sim.tar.gz\r\n\r\n# Write a gzip compressed 'tar' archive containing all of the processor* directories and files\r\ntar cf - processor* postProcessing |  pigz -p $SLURM_NTASKS > $ARCHIVE\r\n\r\n# Now remove the processor* directories (and everything in them) if a non-empty archive exists\r\n[ -s $ARCHIVE ] && rm -rf processor*\r\n<\/pre>\n<p>Submit the job from the directory where your OpenFOAM files are located, using <code>sbatch jobscript<\/code> where jobscript is the name of your file.<\/p>\n<p>Once the job has finished, the <code>processor*<\/code> directories will have been removed automatically (provided a non-empty archive was created); you can <strong>remove the <code>postProcessing<\/code> directory as shown above<\/strong> if it is no longer needed.<\/p>\n<p>You should 
also copy the <code>my-openfoam-sim.tar.gz<\/code> file to your home directory:<\/p>\n<pre>\r\ncp my-openfoam-sim.tar.gz ~\r\n<\/pre>\n<p>or to your <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/filesystems\/home-scratch-rds\/#Additional_Research_Data_Storage_RDS\">Research Data Storage area<\/a>.<\/p>\n<p>If you ever need to extract the files from the archive, simply run:<\/p>\n<pre>\r\ntar xzf my-openfoam-sim.tar.gz\r\n<\/pre>\n<p>It will recreate the above directories and files in your current directory.<\/p>\n<h2>Additional advice<\/h2>\n<ul>\n<li>When changing the number of cores you will need to adjust your input files appropriately and ensure <code>decomposePar<\/code> is re-run.<\/li>\n<li>If the <code>decomposePar<\/code> command takes more than a few minutes to run or uses significant resource on the login node then please include it in your job submission script instead by including it on the line before the <code>mpirun<\/code> so that it executes as part of the batch job on the compute node.<\/li>\n<\/ul>\n<h2>Further info<\/h2>\n<ul>\n<li>The <a href=\"http:\/\/www.openfoam.org\/docs\/user\/tutorials.php\">OpenFOAM tutorials<\/a> are very good and ideal for testing and getting used to setting up a job on the CSF before you proceed to production runs of your own work.<\/li>\n<li><a href=\"http:\/\/www.openfoam.org\/docs\/\">OpenFOAM documentation<\/a>.<\/li>\n<li><a href=\"http:\/\/openfoamwiki.net\/index.php\/Contrib\/swak4Foam\">swak4foam website<\/a>.<\/li>\n<li><a href=\"https:\/\/www.hpc.ntnu.no\/display\/hpc\/OpenFOAM+Training+Tutorial\">Useful swak4foam tutorials on the NTNU HPC Wiki<\/a>.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Overview OpenFOAM (Open source Field Operation And Manipulation) is a C++ toolbox for the development of customized numerical solvers, and pre-\/post-processing utilities for the solution of continuum mechanics problems, including computational fluid dynamics (CFD). 
Some versions have been installed from openfoam.org and some from openfoam.com (see modulefile information below for further notes). Versions currently available are: see modulefiles below. User are requested not to output data every timestep of their simulation if not needed. This.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/openfoam\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"parent":49,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-287","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/287","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/comments?post=287"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/287\/revisions"}],"predecessor-version":[{"id":1523,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/287\/revisions\/1523"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/49"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/media?parent=287"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}