{"id":1061,"date":"2013-09-18T13:34:57","date_gmt":"2013-09-18T13:34:57","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/?page_id=1061"},"modified":"2018-08-09T07:58:32","modified_gmt":"2018-08-09T07:58:32","slug":"openfoam","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/openfoam\/","title":{"rendered":"OpenFOAM"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>OpenFOAM (Open source Field Operation And Manipulation) is a C++ toolbox for the development of customized numerical solvers, and pre-\/post-processing utilities for the solution of continuum mechanics problems, including computational fluid dynamics (CFD).<\/p>\n<p>Versions 2.3.0 (under going user testing), 2.2.2 and 2.2.1 are installed on the CSF. All were compiled using gcc 4.7.0 and openmpi 1.6.<\/p>\n<p>Version 2.3.0 has also been compiled with the PGI 14.10 with ACML(fma4) compiler optimized for AMD Bulldozer nodes. This is an experimental compilation because PGI is not officially supported by OpenFOAM. However, this may give better performance on the AMD Bulldozer (64-core) nodes.<\/p>\n<p>The embedded Paraview and PV3FoamReader module are not installed.<\/p>\n<p><a href=\"http:\/\/openfoamwiki.net\/index.php\/Contrib\/swak4Foam\">swak4foam<\/a> is installed as part of version 2.2.2, 5.0, and 6.<\/p>\n<p>Version 3.0.1 (undergoing user testing, June 2016) was compiled using gcc 4.8.2 and openmpi 1.6.<\/p>\n<p>Version 4.1 was compiled using gcc\u00a06.3.0 and openmpi 1.8.<\/p>\n<p>Version 5.0 was compiled using gcc\u00a06.3.0 and openmpi 1.8.<\/p>\n<p>Version 6 was compiled using gcc 6.3.0 and openmpi 1.8.<\/p>\n<table class=\"warning-wide\">\n<tbody>\n<tr>\n<td>User are requested not to output data every timestep of their simulation if not needed. This can create a huge number of files and directories in your scratch area (we have seen millions of files generated). 
Please ensure you modify your <code>controlDict<\/code> file to turn off writing at every timestep. For example, set <code>purgeWrite 5<\/code> to keep just 5 timesteps&#8217; worth of output and set a suitable <code>writeInterval<\/code>. Please check the <a href=\"http:\/\/www.openfoam.com\/documentation\/user-guide\/controlDict.php\">controlDict online documentation<\/a> for more keywords and options.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Restrictions on use<\/h2>\n<p>OpenFOAM is distributed by the OpenFOAM Foundation and is freely available and open source, licensed under the GNU General Public Licence as detailed on the <a href=\"http:\/\/www.openfoam.org\/licence.php\">OpenFOAM website<\/a>. All CSF users may use this software.<\/p>\n<h2>Set up procedure<\/h2>\n<p>Unfortunately, this is a little complicated and different to most other CSF applications. You must load a modulefile then run a command on the command-line to complete the setup, as described below:<\/p>\n<h3>Step 1: Load the Modulefile<\/h3>\n<p>For standard GNU Compiler (gcc\/g++) builds, which can run on any CSF node but are not optimised for AMD hardware:<\/p>\n<ul>\n<li>For single-core or multicore single-node jobs load <strong>one<\/strong> of the following modulefiles:\n<pre>module load apps\/gcc\/openfoam\/6\r\nmodule load apps\/gcc\/openfoam\/5.0\r\nmodule load apps\/gcc\/openfoam\/4.1\r\nmodule load apps\/gcc\/openfoam\/3.0.1\r\nmodule load apps\/gcc\/openfoam\/2.3.0\r\nmodule load apps\/gcc\/openfoam\/2.2.2\r\nmodule load apps\/gcc\/openfoam\/2.2.1<\/pre>\n<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<ul>\n<li>For more than one node (InfiniBand connected) load <strong>one<\/strong> of the following modulefiles:\n<pre>module load apps\/gcc\/openfoam\/6-ib\r\nmodule load apps\/gcc\/openfoam\/5.0-ib\r\nmodule load apps\/gcc\/openfoam\/4.1-ib\r\nmodule load apps\/gcc\/openfoam\/3.0.1-ib\r\nmodule load apps\/gcc\/openfoam\/2.3.0-ib\r\nmodule load apps\/gcc\/openfoam\/2.2.2-ib\r\nmodule load apps\/gcc\/openfoam\/2.2.1-ib<\/pre>\n<\/li>\n<\/ul>\n<p>For <em>non-standard<\/em> PGI compiler (pgcc\/pgc++) builds, optimized to run on AMD Bulldozer nodes <strong>only<\/strong> (these builds will not run anywhere else):<\/p>\n<ul>\n<li>For single-core, single-node or multi-node AMD Bulldozer jobs load the modulefile:\n<pre>module load apps\/pgi-14.10-acml-fma4\/openfoam\/2.3.0<\/pre>\n<\/li>\n<\/ul>\n<p>To understand whether your job will be using a single node or multiple nodes, check the limits section below.<\/p>\n<h3>Step 2: Source the dot-file<\/h3>\n<p>The above modulefiles will instruct you to complete the setup by running, either on the login node or in your jobscript:<\/p>\n<pre>source $foamDotFile<\/pre>\n<p>You must run this command for OpenFOAM to run!<\/p>\n<h3>Step 3: Set up a directory in scratch<\/h3>\n<p>It is highly recommended that you run jobs in <em>scratch<\/em> and then copy important files you need to keep back to your home area. OpenFOAM expects the variable <code>FOAM_RUN<\/code> to be set for your job and to contain the relevant files and directories. To use scratch:<\/p>\n<pre>mkdir \/scratch\/$USER\/OpenFoam\r\nexport FOAM_RUN=\/scratch\/$USER\/OpenFoam<\/pre>\n<p>where <code>$USER<\/code> is your username and is automatically set when you login. Then<\/p>\n<pre>cd $FOAM_RUN<\/pre>\n<p>and set up your job\/case directories (0, constant, system etc).<\/p>\n<h2>Running the application<\/h2>\n<p>It is not possible to run OpenFOAM on more than one compute node unless you have loaded one of the <em>-ib<\/em> (InfiniBand) modulefiles described above. 
If you try to, your job will hang but keep using CPU resources until the time limit is reached.<\/p>\n<h3>Serial batch job submission<\/h3>\n<ol>\n<li>Ensure you have followed the Set Up Procedure.<\/li>\n<li>Now in the top directory (<code>$FOAM_RUN<\/code>) where you have set up your job\/case (the one containing 0, constant and system) create a batch submission script, called for example <code>sge.openfoam<\/code>, containing:\n<pre>#!\/bin\/bash\r\n#$ -V\r\n#$ -cwd\r\n\r\ninterFoam<\/pre>\n<p>replacing <code>interFoam<\/code> with the OpenFOAM executable appropriate to your job\/case.<\/p><\/li>\n<li>Submit the job: <code>qsub sge.openfoam<\/code><\/li>\n<li>A log of the job will go to the SGE output file, e.g. <code>sge.openfoam.o12345<\/code><\/li>\n<\/ol>\n<h3>Parallel batch job submission &#8211; single node<\/h3>\n<ol>\n<li>Ensure you have followed the Set Up Procedure.<\/li>\n<li>You will need to decompose your case before you can run it. Ensure that you have a file called <code>decomposeParDict<\/code> in your job\/case <code>system<\/code> directory (<code>$FOAM_RUN<\/code>) specifying the number of cores you wish to use with <code>numberOfSubdomains<\/code> and a suitable decomposition method, e.g. 
simple, and related settings (see Further Information below for links to documentation that will help you with this).<\/li>\n<li>Now run this command from the top level directory of your job\/case (the one containing 0, constant and system, <code>$FOAM_RUN<\/code>):\n<pre>decomposePar<\/pre>\n<\/li>\n<li>Next, still in the top directory, create a batch submission script, called for example <code>sge.openfoam.par<\/code>, containing:\n<pre>#!\/bin\/bash\r\n#$ -V\r\n#$ -pe smp.pe 4 \r\n#$ -cwd\r\n\r\nmpirun -np $NSLOTS interFoam -parallel<\/pre>\n<p>replacing <code>interFoam<\/code> with the OpenFOAM executable appropriate to your job\/case.<\/p>\n<p>The number after <code>smp.pe<\/code> must match the <code>numberOfSubdomains<\/code> setting you made earlier; if it doesn&#8217;t, your job will fail.<\/p><\/li>\n<li>Submit the job: <code>qsub sge.openfoam.par<\/code><\/li>\n<li>A log of the job will go to the SGE output file, e.g. <code>sge.openfoam.par.o12345<\/code><\/li>\n<\/ol>\n<h3>Single node limits<\/h3>\n<ul>\n<li>The minimum number of cores for any parallel job is 2.<\/li>\n<li>The maximum number of cores in <code>smp.pe<\/code> is 16. Jobs run on an Intel node.<\/li>\n<li>You can run on up to 32 cores if you replace <code>smp.pe<\/code> with <code>smp-32mc.pe<\/code>. The job will use an AMD Magny-Cours node.<\/li>\n<li>Jobs of up to 64 cores can be run using <code>smp-64bd.pe<\/code>. The job will use an AMD Bulldozer node.<\/li>\n<\/ul>\n<h3>Parallel batch job submission &#8211; multi-node (2 or more nodes)<\/h3>\n<ol>\n<li>Ensure you have followed the Set Up Procedure.<\/li>\n<li>You will need to decompose your case before you can run it. Ensure that you have a file called <code>decomposeParDict<\/code> in your job\/case <code>system<\/code> directory (<code>$FOAM_RUN<\/code>) specifying the number of cores you wish to use with <code>numberOfSubdomains<\/code> and a suitable decomposition method, e.g. 
simple, and related settings (see Further Information below for links to documentation that will help you with this).<\/li>\n<li>Now run this command from the top level directory of your job\/case (the one containing 0, constant and system, <code>$FOAM_RUN<\/code>):\n<pre>decomposePar<\/pre>\n<\/li>\n<li>Next, still in the top directory, create a batch submission script, called for example <code>sge.openfoam.par<\/code>, containing:\n<pre>#!\/bin\/bash\r\n#$ -V\r\n#$ -pe orte-32-ib.pe 96\r\n#$ -cwd\r\n\r\nmpirun -np $NSLOTS interFoam -parallel<\/pre>\n<p>replacing <code>interFoam<\/code> with the OpenFOAM executable appropriate to your job\/case.<\/p>\n<p>The number after <code>orte-32-ib.pe<\/code> must match the <code>numberOfSubdomains<\/code> setting you made earlier; if it doesn&#8217;t, your job will fail.<\/p><\/li>\n<li>Submit the job: <code>qsub sge.openfoam.par<\/code><\/li>\n<li>A log of the job will go to the SGE output file, e.g. <code>sge.openfoam.par.o12345<\/code>.<\/li>\n<\/ol>\n<h3>Multi-node limits<\/h3>\n<ul>\n<li>The following parallel environments may be used:\n<ul>\n<li><code>orte-32-ib.pe<\/code> &#8211; Jobs must be 64 cores or more and a multiple of 32. Uses AMD Magny-Cours nodes.<\/li>\n<li><code>orte-64bd-ib.pe<\/code> &#8211; Jobs must be 128 cores or more and a multiple of 64. Uses AMD Bulldozer nodes.<\/li>\n<li><code>orte-24-ib.pe<\/code> &#8211; Jobs must be 48 cores or more and a multiple of 24. 
Uses Intel nodes.<\/li>\n<\/ul>\n<\/li>\n<li>All the above parallel environments use nodes connected with InfiniBand.<\/li>\n<\/ul>\n<h2>Additional advice<\/h2>\n<ul>\n<li>When changing the number of cores you will need to adjust your input files appropriately and ensure <code>decomposePar<\/code> is re-run.<\/li>\n<li>If the <code>decomposePar<\/code> command takes more than a few minutes to run or uses significant resource on the login node then please add it to your job submission script instead, on the line before the <code>mpirun<\/code> command, so that it executes as part of the batch job on the compute node.<\/li>\n<\/ul>\n<h2>Further info<\/h2>\n<ul>\n<li>The <a href=\"http:\/\/www.openfoam.org\/docs\/user\/tutorials.php\">OpenFOAM tutorials<\/a> are very good and ideal for testing and getting used to setting up a job on the CSF before you proceed to production runs of your own work.<\/li>\n<li><a href=\"http:\/\/www.openfoam.org\/docs\/\">OpenFOAM documentation<\/a>.<\/li>\n<li><a href=\"http:\/\/openfoamwiki.net\/index.php\/Contrib\/swak4Foam\">swak4foam website<\/a>.<\/li>\n<li><a href=\"https:\/\/www.hpc.ntnu.no\/display\/hpc\/OpenFOAM+Training+Tutorial\">Useful swak4foam tutorials on the NTNU HPC Wiki<\/a>.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Overview OpenFOAM (Open source Field Operation And Manipulation) is a C++ toolbox for the development of customized numerical solvers, and pre-\/post-processing utilities for the solution of continuum mechanics problems, including computational fluid dynamics (CFD). Versions 2.3.0 (undergoing user testing), 2.2.2 and 2.2.1 are installed on the CSF. All were compiled using gcc 4.7.0 and openmpi 1.6. 
Version 2.3.0 has also been compiled with the PGI 14.10 compiler with ACML (fma4), optimized for AMD Bulldozer nodes&#8230; <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/openfoam\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":31,"menu_order":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1061","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/1061","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/comments?post=1061"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/1061\/revisions"}],"predecessor-version":[{"id":4841,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/1061\/revisions\/4841"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/31"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/media?parent=1061"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}