COMSOL

Overview

COMSOL Multiphysics engineering simulation software is a complete simulation environment covering geometry specification, meshing, physics specification, solving and visualisation. On the CSF we concentrate on the solving stage, where a simulation can be run in batch.

COMSOL can be run in parallel using two methods: shared-memory (OpenMP) parallelism and distributed-memory (MPI) parallelism. The shared-memory method is for single compute-node multi-core jobs (similar to how you run COMSOL on a multi-core workstation). The distributed-memory method is for much larger jobs, where COMSOL uses multiple CSF compute nodes (and all of the cores in those nodes) to access many more cores and much more memory. See below for how to run both types of jobs.

Versions 6.1 and 6.2 are installed.

Restrictions on use

The Faculty of Science and Engineering has negotiated a Batch Academic Term License (BATL) for COMSOL Multiphysics and a wide selection of add-on modules. These licenses are now available for use by researchers within the Faculty of Science and Engineering.

Access to the research COMSOL floating network licence and add-ons is managed via PPMS, NOT by Research IT.

Further instructions on obtaining a licence and other relevant information can be found at:

https://wiki.cs.manchester.ac.uk/tech/index.php/COMSOL

Unfortunately Research IT cannot provide support for licence-related queries.

Once you have obtained a licence and have received confirmation that your University username has been added to the licence server, please email its-ri-team@manchester.ac.uk requesting to be added to the group that provides access to COMSOL. Once added to the group you should be able to access and run COMSOL in batch mode on the CSF.

Set up procedure

We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.

To add the COMSOL installation to your environment, run one of the following (choose the version you require):

module load apps/binapps/comsol/6.1
module load apps/binapps/comsol/6.2
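
For example, to check that the modulefile has set up your environment on the login node (the path reported will depend on the installation location):

module load apps/binapps/comsol/6.2
which comsol        # should report the comsol command provided by the modulefile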

Running the application

The batch product should be used to run COMSOL. We do not currently support the client/server mode. You will require an input .mph file.

Serial batch job submission

This software is not expected to be run in serial; see the parallel job types below.

Parallel Single-node multi-core batch job submission

This method will run COMSOL on a single CSF compute node and use the specified number of cores in that node (up to 32).

Example: Create a text file (e.g., using gedit) named comsol-smp-job.sh containing the following:

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 16            # Number of cores - can be 2 to 32 (gives up to 6GB per core)

# Load the modulefile in the jobscript
module load apps/binapps/comsol/6.2

# $NSLOTS is automatically set to the number of cores requested above

comsol -np $NSLOTS batch -usebatchlic -inputfile myinfile.mph -outputfile myoutputfile.mph -batchlog comsol.$JOB_ID.log

To submit the job to the queue:

qsub comsol-smp-job.sh

The following flags may also be useful on the comsol command line (add to jobscript above):

-tmpdir /scratch/$USER        # Use scratch for temp files
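
For example, appending this flag to the comsol command shown in the jobscript above (all other flags unchanged):

comsol -np $NSLOTS batch -usebatchlic -inputfile myinfile.mph -outputfile myoutputfile.mph -batchlog comsol.$JOB_ID.log -tmpdir /scratch/$USER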

Short test jobs

For short test runs (e.g., to check you have valid input files etc) use the short resource by adding the following to the jobscript above. Note, however, that these resources use older processors and only have 12 cores per compute node.

#$ -pe smp.pe 12    # The 'short' resource has a max of 12 cores
#$ -l short
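
Putting this together, a complete short-test jobscript (using the same input and output file names as in the example above) might look like:

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 12    # The 'short' resource has a max of 12 cores
#$ -l short

# Load the modulefile in the jobscript
module load apps/binapps/comsol/6.2

comsol -np $NSLOTS batch -usebatchlic -inputfile myinfile.mph -outputfile myoutputfile.mph -batchlog comsol.$JOB_ID.log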

High-memory jobs

If you need more than the 192GB of RAM offered by a 32-core single-node job, you can run on the higher-memory nodes using up to 16 cores as follows:

#$ -pe smp.pe 16   # Higher memory nodes have up to 16 cores
#$ -l mem512       # mem512 nodes have 32GB RAM per core (see also mem1500 and mem2000)

Note that higher-memory nodes cannot be used for multi-node jobs described below.
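
For example, a complete high-memory jobscript (again using the input and output file names from the example above) might look like:

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 16   # Higher memory nodes have up to 16 cores
#$ -l mem512       # mem512 nodes have 32GB RAM per core (see also mem1500 and mem2000)

# Load the modulefile in the jobscript
module load apps/binapps/comsol/6.2

comsol -np $NSLOTS batch -usebatchlic -inputfile myinfile.mph -outputfile myoutputfile.mph -batchlog comsol.$JOB_ID.log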

Large Multi-node Parallel Jobs

Each CSF compute node available for large multi-node batch jobs contains 24 cores and 128GB of RAM. You should estimate how many such nodes you need to solve your simulation. Adding more compute nodes gives you access to more memory (128GB per compute node), but you may wait longer in the queue for your job to run if you ask for a large number of compute nodes – the current maximum is 120 cores (5 x 24-core compute nodes).

COMSOL is very flexible in how parallel processes can be run. You may run multiple MPI processes, each of which uses multiple OpenMP threads, or you may run an all-MPI job with no OpenMP threads. You should try different types of jobs with different numbers of cores to see which is most efficient for your simulation.

COMSOL requires the following flags to describe the parallel processes:

-nn X        # X = total number of MPI processes
-nnhost Y    # Y = number of MPI processes per CSF compute node
-np Z        # Z = number of OpenMP threads per MPI process

To simplify writing your jobscript we have written a helper script to generate these flags. Run the helper script on the comsol command line inside the jobscript:

comsol $(csf-comsol-procs 2) ...other comsol flags...
                          #
                          # Number of MPI processes per CSF compute node
                          # 2 is recommended (gives 12 OpenMP threads per MPI process). Test!
                          # 24 would give you an all-MPI (no OpenMP threads) job
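
For reference, the flags generated by the helper correspond to the following arithmetic (a minimal sketch assuming whole 24-core nodes from mpi-24-ib.pe; the real csf-comsol-procs script is provided on the CSF and may differ in detail):

# Hypothetical illustration of the flag arithmetic - not the actual CSF helper script
NNHOST=2                               # MPI processes per compute node (the argument you supply)
CORES_PER_NODE=24                      # mpi-24-ib.pe allocates whole 24-core nodes
NNODES=$(( NSLOTS / CORES_PER_NODE ))  # number of compute nodes in the job
NN=$(( NNODES * NNHOST ))              # total number of MPI processes (-nn)
NP=$(( CORES_PER_NODE / NNHOST ))      # OpenMP threads per MPI process (-np)
echo "-nn $NN -nnhost $NNHOST -np $NP" # e.g. "-nn 6 -nnhost 2 -np 12" for a 72-core job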

The jobscripts below show complete examples of how to run different types of COMSOL parallel jobs.

Note: COMSOL uses Intel MPI (supplied with COMSOL). This will correctly determine the fastest network to use (InfiniBand).

Parallel Hybrid MPI+OpenMP batch job submission

This method will run COMSOL on multiple CSF compute nodes, using all of the cores in each node (24 cores per node).

You will instruct COMSOL to run a specified number of MPI processes (COMSOL compute processes) on each CSF compute node. Those MPI processes can then use a specified number of CPU cores in a shared-memory style (OpenMP threads). This hybrid parallel approach is often very efficient.

For example, if we run a job on 3 CSF compute nodes we will have 3 x 24 cores = 72 cores available. Each compute node contains 24 cores, composed of two 12-core Intel CPUs (aka sockets). Some tests have shown that running one MPI process per socket (i.e., two MPI processes on each CSF compute node) is most efficient. The remaining cores on each compute node are used by OpenMP threads started by each MPI process:

3 node (72 core) job
   |
   |     +====================+
   +-----|24-core compute node|
   |     |   12-core socket   |  <--- Run 1 MPI process with 12 OpenMP threads on socket
   |     |   12-core socket   |  <--- Run 1 MPI process with 12 OpenMP threads on socket
   |     +====================+
   |
   |     +====================+
   +-----|24-core compute node|
   |     |   12-core socket   |  <--- Run 1 MPI process with 12 OpenMP threads on socket
   |     |   12-core socket   |  <--- Run 1 MPI process with 12 OpenMP threads on socket
   |     +====================+
   |
   |     +====================+
   +-----|24-core compute node|
   |     |   12-core socket   |  <--- Run 1 MPI process with 12 OpenMP threads on socket
   |     |   12-core socket   |  <--- Run 1 MPI process with 12 OpenMP threads on socket
   |     +====================+
   |

The CSF comsol helper script described earlier will calculate the following flags to describe this job:

-nn 6 -nnhost 2 -np 12

This means 6 total MPI processes, 2 MPI processes per CSF compute node and 12 cores (OpenMP threads) per MPI process.

Here is the complete jobscript for the above job. Create a text file (e.g., using gedit) named comsol-hybrid-job.sh containing the following:

#!/bin/bash --login
#$ -cwd
#$ -pe mpi-24-ib.pe 72        # Number of cores in multiples of 24 and a minimum of 48
                              # 72 will give us 3 x 24-core compute nodes in the job

# Load the modulefile within the jobscript
module load apps/binapps/comsol/6.2

# Supply the number of MPI procs per CSF compute node (2 is recommended for our 2-socket hardware)

comsol $(csf-comsol-procs 2) batch -usebatchlic -inputfile myinfile.mph -outputfile myoutputfile.mph -batchlog comsol.$JOB_ID.log

To submit the job to the queue:

qsub comsol-hybrid-job.sh

The following flags may also be useful on the comsol command line (add to jobscript above):

-tmpdir /scratch/$USER        # Use scratch for temp files

Parallel MPI batch job submission

This method will run COMSOL on multiple CSF compute nodes, using all of the cores in each node (24 cores per node).

Modifying the above job, it is possible to run an entirely MPI-parallel job, where you run an MPI process on every core available to the batch job without any additional OpenMP threads. For example, a 72-core job can be run with 72 MPI processes.

The CSF comsol helper script will calculate the flags needed by comsol. Give it the number of MPI processes per CSF compute node (24):

comsol $(csf-comsol-procs 24) ...args...

It will generate the following flags:

-nn 72 -nnhost 24 -np 1

This means 72 total MPI processes, 24 MPI processes per CSF compute node and 1 core (thread) per MPI process.

The following jobscript would achieve that. Create a text file (e.g., using gedit) named comsol-mpi-job.sh containing the following:

#!/bin/bash --login
#$ -cwd
#$ -pe mpi-24-ib.pe 72        # 72 cores (3 x 24-core CSF compute nodes)

# Load the modulefile in the jobscript
module load apps/binapps/comsol/6.2

# The number of MPI processes per node is 24 (we have 24-core compute nodes)
comsol $(csf-comsol-procs 24) batch -usebatchlic -inputfile myinfile.mph -outputfile myoutputfile.mph -batchlog comsol.$JOB_ID.log

To submit the job to the queue:

qsub comsol-mpi-job.sh

Further info

Product documentation (PDFs and HTML) is available on the CSF in:

$COMSOL_HOME/doc/
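
For example, assuming the modulefile sets $COMSOL_HOME (as the path above suggests), you can browse the documentation with:

module load apps/binapps/comsol/6.2
ls $COMSOL_HOME/doc/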

The hybrid use of MPI and OpenMP parallelism allows for a variety of parallel process layouts. The COMSOL Blog article on the advantages of hybrid parallelism describes this in more detail.

See also http://www.uk.comsol.com/
