The CSF2 has been replaced by the CSF3; please use that system. This documentation may be out of date. Please read the CSF3 documentation instead.
GROMACS v5.0.4
Overview
GROMACS is a package for computing molecular dynamics, simulating Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers).
Please do not add the -v flag to your mdrun command. It will write to a log file every second for the duration of your job and can severely overload the file servers.
This version is v5.0.4. The following flavours are available:
5.0.4 for all Intel node types
Note: ability to run on all Intel nodes implies lower optimization.
- Single and double precision multi-threaded (OpenMP) versions: mdrun and mdrun_d
- Single and double precision MPI (not threaded) versions: mdrun_mpi and mdrun_d_mpi
- Compiled with the Intel 14.0.3 compiler, with the associated Intel MKL providing the FFT functions.
- ngmx has been included.
5.0.4 for Sandybridge and Ivybridge (and Haswell, Broadwell nodes) only
Note: ability to run on only Sandybridge, Ivybridge (and Haswell) nodes implies higher optimization. Note that an even higher level of optimization, and an MPI version, is available for Haswell nodes (see below).
- Single and double precision multi-threaded (OpenMP) versions: mdrun and mdrun_d
- Compiled with the Intel 14.0.3 compiler, with the associated Intel MKL providing the FFT functions, and with AVX_256 (an instruction set specific to these nodes), so it WILL NOT work on Westmere nodes and NONE of the commands can be run on the login nodes.
- We have no Sandybridge or Ivybridge nodes connected by InfiniBand, which means ONLY smp.pe (single-node, multicore) jobs for these node types.
- There are no MPI versions of 5.0.4 for Sandybridge and Ivybridge nodes available on the CSF.
- This version will not run on highmem, twoday or short nodes (they are all Westmere).
- ngmx has been included.
5.0.4 for Haswell and Broadwell nodes only
Note: ability to run on only Haswell and Broadwell nodes implies higher optimization.
- Single and double precision single-node, multi-threaded (OpenMP) versions: mdrun and mdrun_d
- Single and double precision multi-node (MPI) versions: mdrun_mpi and mdrun_mpi_d
- Compiled with the Intel 14.0.3 compiler, with the associated Intel MKL providing the FFT functions, and with AVX2_256 (an instruction set specific to these nodes which provides further optimization beyond AVX_256), so it WILL NOT work on Westmere, Sandybridge or Ivybridge nodes.
- Single-node multi-core smp.pe jobs can use these nodes.
- Multi-node MPI orte-24-ib.pe jobs can use these nodes (the Haswell and Broadwell nodes have InfiniBand networking).
- This version will not run on highmem, twoday or short nodes (they are all Westmere).
- ngmx has been included.
Bugfix for g_hbond
Version 5.0.4 has the g_hbond fix included by default and so no separate build has been made for this version. See the GROMACS v4.5.4 CSF documentation for a description of that issue.
Restrictions on use
GROMACS is free software, available under the GNU General Public License.
Set up procedure
You must load the appropriate modulefile:
module load modulefile
replacing modulefile with one of the modules listed in the table below.
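For example, a typical setup for a single-precision multi-threaded job might look like this (the choice of modulefile here is illustrative; pick the one from the table that matches your job type). This is an environment-specific fragment that only works on the CSF:

```shell
# Load the single-precision multi-threaded (non-MPI) build of GROMACS 5.0.4
module load apps/intel-14.0/gromacs/5.0.4/single

# Confirm which modules are now loaded
module list
```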
Version | Modulefile | Notes | Typical Executable name |
---|---|---|---|
Single precision multi-threaded (single-node) | apps/intel-14.0/gromacs/5.0.4/single | non-MPI | mdrun |
Double precision multi-threaded (single-node) | apps/intel-14.0/gromacs/5.0.4/double | non-MPI | mdrun_d |
Single precision MPI (single-node) | apps/intel-14.0/gromacs/5.0.4/single-mpi | For MPI on Intel nodes using gigabit ethernet | mdrun_mpi |
Single precision MPI (multi-node, Infiniband) | apps/intel-14.0/gromacs/5.0.4/single-mpi-ib | For MPI on Intel or AMD nodes using infiniband | mdrun_mpi |
Double precision MPI (single-node) | apps/intel-14.0/gromacs/5.0.4/double-mpi | For MPI on Intel nodes using gigabit ethernet | mdrun_mpi_d |
Double precision MPI (multi-node, Infiniband) | apps/intel-14.0/gromacs/5.0.4/double-mpi-ib | For MPI on Intel or AMD nodes using Infiniband | mdrun_mpi_d |
AVX optimized builds for Sandybridge and Ivybridge nodes | |||
Single precision multi-threaded for AVX (single-node) | apps/intel-14.0/gromacs/5.0.4/single-avx | non-MPI, Sandybridge and Ivybridge only | mdrun |
Double precision multi-threaded for AVX (single-node) | apps/intel-14.0/gromacs/5.0.4/double-avx | non-MPI, Sandybridge and Ivybridge only | mdrun_d |
AVX2 optimized builds for Haswell nodes (new April 2016) | |||
Single precision multi-threaded for AVX2 (single-node) | apps/intel-14.0/gromacs/5.0.4/single-avx2 | non-MPI, Haswell only | mdrun |
Double precision multi-threaded for AVX2 (single-node) | apps/intel-14.0/gromacs/5.0.4/double-avx2 | non-MPI, Haswell only | mdrun_d |
Single precision MPI (single/multi-node, Infiniband) for AVX2 | apps/intel-14.0/gromacs/5.0.4/single-avx2-mpi-ib | For MPI on Intel Haswell nodes using infiniband | mdrun_mpi |
Double precision MPI (single/multi-node, Infiniband) for AVX2 | apps/intel-14.0/gromacs/5.0.4/double-avx2-mpi-ib | For MPI on Intel Haswell nodes using infiniband | mdrun_mpi_d |
Interactive/Non-batch work/Job preparation
In order to prepare your jobs or post-process them you may need to use commands such as grompp. These will not work on the CSF login node because the software was compiled with AVX_256, which is not compatible with the login nodes. We have therefore allocated ONE Sandybridge node to allow you to run these commands via qrsh. To do so type:

qrsh -l inter -l short -l sandybridge

which will give access to the Sandybridge compute node. Then run your commands. When you have finished, close the connection to the compute node with exit (failure to do this may result in the compute node being unavailable to other users who need it). Then submit your computation/simulation to batch as per the examples below.
DO NOT run mdrun on this compute node – all computational work MUST be submitted to batch.
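A complete preparation session might look like the following sketch. The input and output file names are hypothetical; grompp and its -f/-c/-p/-o options are standard GROMACS usage, and this fragment only runs on the CSF:

```shell
# From the CSF login node, connect to the interactive Sandybridge node
qrsh -l inter -l short -l sandybridge

# On the compute node: load a modulefile, then pre-process the inputs
module load apps/intel-14.0/gromacs/5.0.4/single
grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr

# Close the connection so the node is free for other users
exit
```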
Running the application in batch
First load the required module (see above) and create a directory containing the required input data files.
Please NOTE the following, which is important for running jobs correctly and efficiently:
Ensure you inform GROMACS how many cores it can use. This is done using either

mpiexec -n $NSLOTS mdrun_mpi      # Multi-node MPI job

or

export OMP_NUM_THREADS=$NSLOTS    # Single-node multi-threaded job
mdrun

in your jobscript (see below for which to use).
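The $NSLOTS variable is set by the batch system to the number of cores your job was granted, so the same jobscript works for any core count. A minimal illustration, with a hypothetical value standing in for what the scheduler would set:

```shell
# Outside the batch system, simulate the scheduler granting 8 cores
NSLOTS=8
export OMP_NUM_THREADS=$NSLOTS
echo "GROMACS will use $OMP_NUM_THREADS threads"
```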
The examples below can be used for single precision or double precision GROMACS. Simply run mdrun (single precision) or mdrun_d (double precision).
Please do not add the -v flag to your mdrun command. It will write to a log file every second for the duration of your job and can severely overload the file servers.
Multi-threaded single-precision on Intel nodes, 2 to 24 cores
Note that GROMACS v5.0.4 (unlike v4.5.4) does not support the -nt flag to set the number of threads when using the multi-threaded OpenMP (non-MPI) version. Instead, set the OMP_NUM_THREADS environment variable as shown below.
An example batch submission script to run the single-precision mdrun executable with 12 threads:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12     # Can specify 2 to 24 cores in smp.pe
                     # 2-12 includes Westmere, Sandybridge, Ivybridge, Haswell
                     # 13-16 forces use of Ivybridge
                     # 17-24 forces use of Haswell
                     # Can force use of a particular architecture (see below)
export OMP_NUM_THREADS=$NSLOTS
mdrun
Submit with the command: qsub scriptname
The system will run your job on a Westmere, Sandybridge or Ivybridge node, depending on what is available (this option targets the biggest pool of nodes). For a more optimized run on Sandybridge or Ivybridge, use a modulefile with 'avx' in the name and follow the instructions below.
Multi-threaded double-precision, AVX on Sandybridge nodes, 2 to 12 cores
An example batch submission script to run the double-precision mdrun_d
executable with 8 threads:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 8
#$ -l sandybridge    # Force use of Sandybridge nodes
export OMP_NUM_THREADS=$NSLOTS
mdrun_d
Submit with the command: qsub scriptname
Multi-threaded single-precision, AVX on Ivybridge nodes, 2 to 16 cores
Note that GROMACS v5.0.4 (unlike v4.5.4) does not support the -nt flag to set the number of threads when using the multi-threaded OpenMP (non-MPI) version. Instead, set the OMP_NUM_THREADS environment variable as shown below.
An example batch submission script to run the single-precision mdrun executable with 16 threads:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 16
#$ -l ivybridge      # Force use of Ivybridge nodes
export OMP_NUM_THREADS=$NSLOTS
mdrun
Submit with the command: qsub scriptname
Multi-threaded single-precision, AVX2 on Haswell nodes, 2 to 24 cores
Note that GROMACS v5.0.4 (unlike v4.5.4) does not support the -nt flag to set the number of threads when using the multi-threaded OpenMP (non-MPI) version. Instead, set the OMP_NUM_THREADS environment variable as shown below.
An example batch submission script to run the single-precision mdrun executable with 24 threads:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 24
#$ -l haswell        # Force use of Haswell nodes
export OMP_NUM_THREADS=$NSLOTS
mdrun
Submit with the command: qsub scriptname
Single precision MPI (single-node), 2 to 24 cores
An example batch submission script to run the single-precision mdrun_mpi executable on 8 cores using MPI:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 8
mpiexec -n $NSLOTS mdrun_mpi
Submit with the command: qsub scriptname
Double precision MPI (single-node), 2 to 24 cores
An example batch submission script to run the double-precision mdrun_mpi_d executable on 8 cores using MPI:

#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 8
mpiexec -n $NSLOTS mdrun_mpi_d
Submit with the command: qsub scriptname
Single-precision AVX2, MPI with Infiniband, 48 cores or more in multiples of 24
An example batch submission script to run the single-precision mdrun_mpi executable with 48 MPI processes (48 cores on two 24-core nodes) using the orte-24-ib.pe parallel environment (Intel Haswell nodes with InfiniBand):
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-24-ib.pe 48    # E.g. two 24-core Intel Haswell nodes
mpiexec -n $NSLOTS mdrun_mpi
Submit with the command: qsub scriptname
Illegal instruction
If a batch job running GROMACS fails with the following error:

Illegal instruction

this is because you have an AVX- or AVX2-only modulefile loaded that is not compatible with the compute nodes on which your job is running. Ensure your jobscript requests the correct type of compute node.
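For example, if one of the avx2 modulefiles is loaded, the jobscript must also restrict the job to nodes that support AVX2. A minimal sketch (the core count here is illustrative); this is a scheduler jobscript fragment, not a generally runnable script:

```shell
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 4
#$ -l haswell        # avx2 builds require Haswell (or Broadwell) nodes
export OMP_NUM_THREADS=$NSLOTS
mdrun
```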
Error about OpenMP and cut-off scheme
If you encounter the following error:
OpenMP threads have been requested with cut-off scheme Group, but these are only supported with cut-off scheme Verlet
then please try using the MPI version of the software. Note that it is possible to run MPI versions on a single node (see the example above).
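Alternatively, if it suits your simulation settings, you can switch to the Verlet cut-off scheme, which does support OpenMP, in your .mdp file (check the GROMACS manual for whether this is appropriate for your force field before doing so):

```
; Fragment of a .mdp file: the Verlet cut-off scheme supports OpenMP threading
cutoff-scheme = Verlet
```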
Further info
- You can see a list of all the installed GROMACS utilities with the command: ls $GMXDIR/bin
- GROMACS web page
- GROMACS manuals
- GROMACS user mailing list
Updates
Apr 2015 – 5.0.4 installed with AVX support (GPU support with Intel compiler not possible)
Dec 2014 – 4.6.7 installed with AVX support (specific user request for this) and documentation written.
Nov 2013 – Documentation for 4.5.4 and 4.6.1 split into two pages.
May 2013 – Gromacs 4.6.1 and Plumed 1.3 installed.