{"id":3130,"date":"2019-04-13T10:56:27","date_gmt":"2019-04-13T09:56:27","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=3130"},"modified":"2026-01-19T15:23:40","modified_gmt":"2026-01-19T15:23:40","slug":"comsol","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/comsol\/","title":{"rendered":"COMSOL"},"content":{"rendered":"<p><!--\n\n\n<table class=\"warning\"><span data-mce-type=\"bookmark\" style=\"display: inline-block; width: 0px; overflow: hidden; line-height: 0;\" class=\"mce_SELRES_start\"><\/span>\n\n\n<tr>\n\n\n<td>This software is NOT available for general use. It was installed for user-evaluation. The documentation is left here for reference. No access will be given to this installation unless you can provide details of a COMSOL Floating License owned by you or your group (a floating license is required by COMSOL for use on a cluster such as the CSF). If your group only has <em>node-locked<\/em> licenses for use on your group's desktops, the license will not work on the CSF. There are NO campus or faculty-wide licenses available for this software.<\/td>\n\n\n<\/tr>\n\n\n<\/table>\n\n\n--><\/p>\n<h2>Overview<\/h2>\n<p><a href=\"http:\/\/www.comsol.com\/products\/multiphysics\/\">COMSOL Multiphysics<\/a> engineering simulation software is a complete simulation environment allowing geometry specification, meshing, specifying physics, solving and visualisation. On the CSF we are concentrating on the solving stage where a simulation can be run in batch.<\/p>\n<p>COMSOL can be run in parallel using two methods: Shared-memory (OpenMP) parallelism and distributed-memory (MPI) parallelism. The shared-memory method is for single compute-node multi-core jobs (similar to how you run COMSOL on a multi-core workstation). 
The distributed-memory method is for much larger jobs where multiple CSF compute nodes (and all of the cores in those nodes) are used by COMSOL, giving access to many more cores and much more memory. See below for how to run both types of jobs.<\/p>\n<p><strong>Versions 6.1, 6.2, 6.3, 6.4 are installed<\/strong>.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>The Faculty of Science and Engineering has negotiated a Batch Academic Term License (BATL) for COMSOL Multiphysics and a wide selection of add-on modules. These licences are now available for use by researchers within the Faculty of Science and Engineering.<\/p>\n<p>Access to the research COMSOL floating network licence and add-ons is managed via PPMS, <strong>NOT Research IT<\/strong>.<\/p>\n<p>Further instructions on obtaining a licence and other relevant information can be found by following this link:<\/p>\n<p><a href=\"https:\/\/wiki.cs.manchester.ac.uk\/tech\/index.php\/COMSOL\">https:\/\/wiki.cs.manchester.ac.uk\/tech\/index.php\/COMSOL<\/a><\/p>\n<p><strong>Unfortunately Research IT cannot provide support for licence-related queries.<\/strong><\/p>\n<p>Once you have obtained a licence and have received confirmation that your University username has been added to the licence server, please contact Research IT via the <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/hpc-help\">Connect Portal HPC Help form<\/a>, requesting to be added to the group that provides access to COMSOL. Once added to the group you should be able to access and run COMSOL in batch mode on the CSF.<\/p>\n<h2>Set up procedure<\/h2>\n<p>We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. 
Alternatively, you may load modulefiles on the login node and let the job <abbr title=\"sbatch exports your login environment to the job by default\">inherit these settings<\/abbr>.<\/p>\n<p>To add the COMSOL installation to your environment, run <strong>one<\/strong> of the following, depending on the version you require:<\/p>\n<pre>\r\nmodule load apps\/binapps\/comsol\/6.4\r\nmodule load apps\/binapps\/comsol\/6.3\r\nmodule load apps\/binapps\/comsol\/6.2\r\nmodule load apps\/binapps\/comsol\/6.1<\/pre>\n<h2>Running the application<\/h2>\n<p>The <em>batch<\/em> product should be used to run COMSOL. We do not currently support the <em>client\/server<\/em> mode. You will require an input <code>.mph<\/code> file.<\/p>\n<h3>Serial batch job submission<\/h3>\n<p>COMSOL is not expected to be run serially; please use one of the parallel methods below.<\/p>\n<h3>Parallel Single-node multi-core batch job submission<\/h3>\n<p>This method will run COMSOL on a <strong>single<\/strong> CSF compute node and use the specified number of cores in that node (up to 168).<\/p>\n<p>Example: Create a text file (e.g., using gedit) named <code>comsol-smp-job.sh<\/code> containing the following:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n\r\n#SBATCH -p multicore\r\n#SBATCH -n 16      # Number of cores - can be 2 -- 168\r\n#SBATCH -t 4-0     # 4-day wallclock time\r\n                   # Max 7 days (7-0)\r\n\r\n# Load the modulefile in the jobscript\r\nmodule purge\r\nmodule load apps\/binapps\/comsol\/6.2\r\n\r\n# $SLURM_NTASKS is automatically set to the number of cores requested above\r\n\r\ncomsol -np $SLURM_NTASKS batch -usebatchlic -inputfile <em>myinfile<\/em>.mph -outputfile <em>myoutputfile<\/em>.mph -batchlog comsol.$SLURM_JOB_ID.log\r\n<\/pre>\n<p><!--\n## MW - 2026-01-19 - commented out SGE info\n\n\n<pre class=\"sge\">#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 16            # Number of cores - can be 2 -- 168 (gives up to 6GB per core)\r\n\r\n# Load the modulefile in the jobscript\r\nmodule load apps\/binapps\/comsol\/6.2\r\n\r\n# $NSLOTS is automatically set to the number of cores requested 
above\r\n\r\ncomsol -np $NSLOTS batch -usebatchlic -inputfile <em>myinfile<\/em>.mph -outputfile <em>myoutputfile<\/em>.mph -batchlog comsol.$JOB_ID.log\r\n<\/pre>\n\n\n--><\/p>\n<p>To submit the job to the queue:<\/p>\n<pre class=\"slurm\">sbatch comsol-smp-job.sh\r\n<\/pre>\n<p><!--\n## MW - 2026-01-19 - commented out SGE info\n\n\n<pre class=\"sge\">qsub comsol-smp-job.sh\r\n<\/pre>\n\n\n--><\/p>\n<p>The following flags may also be useful on the comsol command line (add to the jobscript above):<\/p>\n<pre>-tmpdir \/scratch\/$USER        # Use scratch for temp files\r\n<\/pre>\n<p><!-- \n##### CG - have commented out the below as there is no multi-node (other than HPC Pool) currently in service ##### \n\n\n<h2>Large Multi-node Parallel Jobs<\/h2>\n\n\nEach CSF compute-node available for large multi-node batch jobs contains 24 cores and 128GB RAM. You should estimate how many such nodes you need to solve your simulation. Adding more compute nodes will give you access to more memory (128GB per compute node). But you may wait longer in the queue for your job to run if you ask for a high number of compute nodes - the current max limit is 120 cores (5 x 24-core compute nodes).\n\nCOMSOL is very flexible in how parallel processes can be run. You may run multiple MPI processes which each use multiple OpenMP threads, or you may run an all-MPI job (no OpenMP threads). You should try different types of jobs with different numbers of cores to see which is most efficient for your simulation.\n\nCOMSOL requires the following flags to describe the parallel processes:\n\n\n<pre>-nn <em>X<\/em>        # X = <em>total number of MPI processes<\/em>\r\n-nnhost <em>Y<\/em>    # Y = <em>number of MPI processes per CSF compute node<\/em>\r\n-np <em>Z<\/em>        # Z = <em>number of OpenMP threads per MPI process<\/em>\r\n<\/pre>\n\n\nTo simplify writing your jobscript we have written a helper script to generate the flags. 
You run the script within the COMSOL command-line inside the jobscript:\n\n\n<pre>comsol $(csf-comsol-procs 2) ...other comsol flags...\r\n                          #\r\n                          # Number of MPI processes <em>per CSF compute node<\/em>\r\n                          # 2 is recommended (gives 12 OpenMP threads per MPI process). Test!\r\n                          # 24 would give you an all-MPI (no OpenMP threads) job\r\n<\/pre>\n\n\nThe jobscripts below show complete examples of how to run different types of COMSOL parallel jobs.\n\nNote: COMSOL uses Intel MPI (supplied with COMSOL). This will correctly determine the fastest network to use (InfiniBand).\n\n\n<h3>Parallel Hybrid MPI+OpenMP batch job submission<\/h3>\n\n\nThis method will run COMSOL on <strong>multiple<\/strong> CSF compute nodes, using all of the cores in each node (24 cores per node).\n\nYou will instruct COMSOL to run a specified number of MPI processes (COMSOL compute processes) on each CSF compute node. Those MPI processes can then use a specified number of CPU cores in a shared-memory style (OpenMP threads). This <em>hybrid<\/em> parallel approach is often very efficient.\n\nFor example, if we run a job on 3 CSF compute nodes we will have 3 x 24-cores = 72 cores available. Each compute node contains 24 cores, composed of two 12-core Intel CPUs (aka <em>sockets<\/em>). Some tests have shown that running <em>one MPI process per socket<\/em> (i.e. two MPI processes on each CSF Compute node) is most efficient. 
The remaining cores on each compute node are used by OpenMP threads started by <em>each<\/em> MPI process:\n\n\n<pre>3 node (72 core) job\r\n   |\r\n   |     +====================+\r\n   +-----|24-core compute node|\r\n   |     |   12-core socket   |  &lt;--- Run 1 MPI process with 12 OpenMP threads on socket\r\n   |     |   12-core socket   |  &lt;--- Run 1 MPI process with 12 OpenMP threads on socket\r\n   |     +====================+\r\n   |\r\n   |     +====================+\r\n   +-----|24-core compute node|\r\n   |     |   12-core socket   |  &lt;--- Run 1 MPI process with 12 OpenMP threads on socket\r\n   |     |   12-core socket   |  &lt;--- Run 1 MPI process with 12 OpenMP threads on socket\r\n   |     +====================+\r\n   |\r\n   |     +====================+\r\n   +-----|24-core compute node|\r\n   |     |   12-core socket   |  &lt;--- Run 1 MPI process with 12 OpenMP threads on socket\r\n   |     |   12-core socket   |  &lt;--- Run 1 MPI process with 12 OpenMP threads on socket\r\n   |     +====================+\r\n   |\r\n<\/pre>\n\n\nThe CSF comsol helper script described earlier will calculate the following flags to describe this job:\n\n\n<pre>-nn 6 -nnhost 2 -np 12\r\n<\/pre>\n\n\nThis means 6 total MPI processes, 2 MPI processes per CSF compute node and 12 cores (OpenMP threads) per MPI process.\n\nHere is the complete jobscript for the above job. 
Create a text file (e.g., using gedit) named <code>comsol-hybrid-job.sh<\/code> containing the following:\n\n\n<pre>#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe mpi-24-ib.pe 72        # Number of cores in multiples of 24 and a minimum of 48\r\n                              # 72 will give us 3 x 24-core compute nodes in the job\r\n\r\n# Load the modulefile within the jobscript\r\nmodule load apps\/binapps\/comsol\/6.2\r\n\r\n# Supply the number of MPI procs per CSF compute node (2 is recommended for our 2-socket hardware)\r\n\r\ncomsol $(csf-comsol-procs 2) batch -usebatchlic -inputfile <em>myinfile<\/em>.mph -outputfile <em>myoutputfile<\/em>.mph -batchlog comsol.$JOB_ID.log\r\n\r\n<\/pre>\n\n\nTo submit the job to the queue:\n\n\n<pre>qsub comsol-hybrid-job.sh\r\n<\/pre>\n\n\nThe following flags may also be useful on the comsol command line (add to the jobscript above):\n\n\n<pre>-tmpdir \/scratch\/$USER        # Use scratch for temp files\r\n<\/pre>\n\n\n--><\/p>\n<h2>Further info<\/h2>\n<p>Product documentation (PDFs and HTML) is available on the CSF in:<\/p>\n<pre>$COMSOL_HOME\/doc\/<\/pre>\n<p>The hybrid use of MPI and OpenMP parallelism allows for a variety of parallel process layouts. The COMSOL Blog article on <a href=\"https:\/\/uk.comsol.com\/blogs\/hybrid-computing-advantages-shared-distributed-memory-combined\/\">the advantages of hybrid parallelism<\/a> describes this in more detail.<\/p>\n<p>See also <a href=\"http:\/\/www.uk.comsol.com\/\">http:\/\/www.uk.comsol.com\/<\/a><\/p>\n<h2>Updates<\/h2>\n","protected":false},"excerpt":{"rendered":"<p>Overview COMSOL Multiphysics engineering simulation software is a complete simulation environment allowing geometry specification, meshing, specifying physics, solving and visualisation. On the CSF we are concentrating on the solving stage where a simulation can be run in batch. COMSOL can be run in parallel using two methods: Shared-memory (OpenMP) parallelism and distributed-memory (MPI) parallelism. 
The shared-memory method is for single compute-node multi-core jobs (similar to how you run COMSOL on a multi-core workstation). The distributed-memory.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/comsol\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-3130","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/3130","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=3130"}],"version-history":[{"count":21,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/3130\/revisions"}],"predecessor-version":[{"id":11712,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/3130\/revisions\/11712"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=3130"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}