The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date; please read the CSF3 documentation instead.
GROMACS v4.5.4
Overview
GROMACS is a package for computing molecular dynamics, simulating Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers).
Please do not add the -v flag to your mdrun command. It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.
The installed version is 4.5.4 (available with and without the g_hbond bugfix). The following flavours are available:
- Single and double precision multi-threaded (OpenMP) versions: mdrun and mdrun_d
- Single and double precision MPI (not threaded) versions: mdrun_mpi and mdrun_d_mpi
- Compiled with the Intel 11.1 compiler, with Intel MKL 10.2u7 providing the FFT functions.
Bugfix for g_hbond in 4.5.4
An issue has been identified by the GROMACS developers that causes “inconsistencies between older (<= 4.0.7?) and newer versions of g_hbond”. This can be fixed by editing the file src/tools/gmx_hbond.c as described in this link.
To use the fixed version of GROMACS use the 4.5.4_ghbondfix version of the modules described below.
GROMACS tools for analyzing membrane trajectories
Luca Monticelli’s tools have been added to the multi-threaded single and double precision versions of 4.5.4 (both with and without the g_hbond bugfix).
- Please ensure that any usage of these tools is correctly cited.
- The MPI versions do not contain these tools.
- Membrane undulation spectrum is not currently available.
- For documentation provided as part of the download tarball see the text files located in /opt/gridware/apps/intel-11.1/gromacs/membrane_trajectories_tools/docs.
- If using the double precision version add _d to the executable name, e.g. g_mydensity becomes g_mydensity_d (see the example below this list).
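For example, once one of the double precision modulefiles described below is loaded, you can check that a tool is available by asking it for its help text. A minimal sketch, using the example tool name above:

g_mydensity_d -h

GROMACS-style tools print their command-line options and exit when given -h.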
Restrictions on use
GROMACS is free software, available under the GNU General Public License.
Set up procedure
You must load the appropriate modulefile:
module load modulefile
replacing modulefile with one of the modulefiles listed in the tables below.
For the version of GROMACS installed with the unmodified source code extracted from gromacs-4.5.4.tar.gz, use the following modulefiles:
Version | Modulefile | Notes |
Single precision multi-threaded | apps/intel-11.1/gromacs/4.5.4/single | non-MPI |
Double precision multi-threaded | apps/intel-11.1/gromacs/4.5.4/double | non-MPI |
Single precision MPI | apps/intel-11.1/gromacs/4.5.4/single-mpi | For MPI on Intel nodes using Gigabit Ethernet |
Single precision MPI – Infiniband | apps/intel-11.1/gromacs/4.5.4/single-mpi-ib | For MPI on Intel or AMD nodes using Infiniband |
Double precision MPI | apps/intel-11.1/gromacs/4.5.4/double-mpi | For MPI on Intel nodes using Gigabit Ethernet |
Double precision MPI – Infiniband | apps/intel-11.1/gromacs/4.5.4/double-mpi-ib | For MPI on Intel or AMD nodes using Infiniband |
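For example, to set up the single precision multi-threaded build from the table above, run:

module load apps/intel-11.1/gromacs/4.5.4/single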
For the version of GROMACS installed with the source code modified as described in the Bugfix for g_hbond in 4.5.4 section above, use the following modulefiles:
Version | Modulefile | Notes |
Single precision multi-threaded | apps/intel-11.1/gromacs/4.5.4_ghbondfix/single | non-MPI |
Double precision multi-threaded | apps/intel-11.1/gromacs/4.5.4_ghbondfix/double | non-MPI |
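Similarly, to set up the double precision multi-threaded build that includes the g_hbond fix:

module load apps/intel-11.1/gromacs/4.5.4_ghbondfix/double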
Running the application
First load the required modulefile (see above) and create a directory containing your input data files.
You MUST ensure that, as well as requesting a number of cores from a suitable parallel environment, you also tell GROMACS how many cores it may use. These two numbers must be the same; this is ensured through correct use of certain variables and/or flags, depending on the version of GROMACS being used. Failure to set this information causes GROMACS to run incorrectly, overload compute nodes and potentially trample on jobs belonging to other users. All of the examples below ensure that jobs use only the cores requested.
Please do not add the -v flag to your mdrun command. It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.
Multi-threaded single-precision, 2 to 16 cores
An example batch submission script to run the single-precision mdrun executable with 12 threads:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12

mdrun -nt $NSLOTS
Submit with the command: qsub scriptname
Note: smp.pe is the only PE suitable for multi-threaded GROMACS. 16 cores is the maximum job size in this PE.
Multi-threaded double-precision, 2 to 16 cores
An example batch submission script to run the double-precision mdrun_d executable with 12 threads:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12

mdrun_d -nt $NSLOTS
Submit with the command: qsub scriptname
Note: smp.pe is the only PE suitable for multi-threaded GROMACS. 16 cores is the maximum job size in this PE.
Single-precision MPI with Infiniband, 48 cores or more and a multiple of 24
An example batch submission script to run the single precision mdrun_mpi executable with 48 MPI processes on 48 cores with the orte-24-ib.pe parallel environment (Intel nodes using Infiniband):
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-24-ib.pe 48

mpiexec -n $NSLOTS mdrun_mpi
Submit with the command: qsub scriptname
Notes
- orte-24-ib.pe – you must request multiples of 24 and a minimum of 48 cores.
- If you wish to use the AMD nodes please use orte-32-ib.pe – you must request multiples of 32.
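As a sketch of the AMD case, assuming you request 64 cores (a multiple of 32) and have loaded the single precision Infiniband modulefile (single-mpi-ib in the table above), the script would become:

#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-32-ib.pe 64

mpiexec -n $NSLOTS mdrun_mpi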
Double precision MPI with Infiniband, 48 cores or more and a multiple of 24
An example batch submission script to run the double precision mdrun_d_mpi executable with 48 MPI processes on 48 cores with the orte-24-ib.pe parallel environment (Intel nodes using Infiniband):
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-24-ib.pe 48

mpiexec -n $NSLOTS mdrun_d_mpi
Submit with the command: qsub scriptname
Notes
- orte-24-ib.pe – you must request multiples of 24 and a minimum of 48 cores.
- If you wish to use the AMD nodes please use orte-32-ib.pe – you must request multiples of 32.
I want to use a number of cores not recommended above, what are the options?
Please contact its-ri-team@manchester.ac.uk for advice.
Using restart on the CSF
Restarting of jobs for version 4.5.4 of GROMACS is known to be incompatible with Lustre file systems. This is the file system used for scratch storage on the CSF, and restarting will cause the error “Failed to lock: md.log. Function not implemented” when you run jobs from a directory within /scratch.
This incompatibility can be fixed by moving the md.log file to a location in your HOME directory and using a symbolic link to it from the directory within /scratch. For example, you can create a new directory in HOME called GromacsRestart_1, move the md.log file to this new directory, and create the symbolic link from the scratch directory by entering the following commands:
mkdir $HOME/GromacsRestart_1
mv md.log $HOME/GromacsRestart_1
ln -s $HOME/GromacsRestart_1/md.log md.log
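To confirm the link has been created correctly you can run:

ls -l md.log

The output should show md.log pointing at the file now held in your HOME directory.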
You should then be able to restart your GROMACS job. Note: you will need a different location in HOME for each md.log file if restarting more than one job.
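For the restart itself, a minimal sketch assuming the original run wrote the default checkpoint file state.cpt and you are using the multi-threaded single precision build:

mdrun -nt $NSLOTS -cpi state.cpt

The -cpi option tells mdrun to continue from the checkpoint; output is appended to the existing files by default, which is why mdrun needs to lock md.log.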
Further info
- You can see all of the installed GROMACS utilities with the command:
ls $GROMACSHOME/bin
- GROMACS web page
- GROMACS manuals
- GROMACS user mailing list
Updates
Nov 2013 – Documentation for 4.5.4 and 4.6.1 split into two pages.
May 2013 – Gromacs 4.6.1 and Plumed 1.3 installed.