The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
GROMACS v4.6.1
Overview
GROMACS is a package for computing molecular dynamics, simulating Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers).
Please do not add the -v flag to your mdrun command. It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.
This version is v4.6.1 (with and without the Plumed 1.3 plugin). PLUMED is an open source plugin for free energy calculations in molecular systems. The following flavours are available:
- Single and double precision multi-threaded (OpenMP) versions: mdrun and mdrun_d
- Single and double precision MPI (not threaded) versions: mdrun_mpi and mdrun_d_mpi
- Single and double precision MPI (not threaded) with the Plumed 1.3 plugin patch applied to the source code: mdrun_mpi and mdrun_d_mpi
- Compiled with the Intel 12.0.5 compiler, with Intel MKL 10.3u5 providing the FFT functions.
Bugfix for g_hbond
Version 4.6.1 has the g_hbond fix included by default and so no separate build has been made for this version. See the GROMACS v4.5.4 CSF documentation for a description of that issue.
Restrictions on use
GROMACS is free software, available under the GNU General Public License.
Set up procedure
You must load the appropriate modulefile:
module load modulefile
replacing modulefile with one of the modules listed in the table below.
For the version of GROMACS installed with the unmodified source code extracted from gromacs-4.6.1.tar.gz use:
Version | Modulefile | Notes |
Single precision multi-threaded | apps/intel-12.0/gromacs/4.6.1/single | non-MPI |
Double precision multi-threaded | apps/intel-12.0/gromacs/4.6.1/double | non-MPI |
Single precision MPI | apps/intel-12.0/gromacs/4.6.1/single-mpi | For MPI on Intel nodes using gigabit ethernet |
Single precision MPI – Infiniband | apps/intel-12.0/gromacs/4.6.1/single-mpi-ib | For MPI on Intel or AMD nodes using Infiniband |
Double precision MPI | apps/intel-12.0/gromacs/4.6.1/double-mpi | For MPI on Intel nodes using gigabit ethernet |
Double precision MPI – Infiniband | apps/intel-12.0/gromacs/4.6.1/double-mpi-ib | For MPI on Intel or AMD nodes using Infiniband |
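For example, to select the single precision multi-threaded build listed above:
module load apps/intel-12.0/gromacs/4.6.1/single
Once a modulefile is loaded, running which mdrun (or which mdrun_mpi for the MPI builds) is a quick way to confirm that the corresponding executable is on your PATH.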
For the version of GROMACS installed with the Plumed 1.3 patched source code extracted from gromacs-4.6.1.tar.gz use:
Version | Modulefile | Notes |
Single precision MPI | apps/intel-12.0/gromacs/4.6.1_plumed/single-mpi | For MPI on Intel nodes using gigabit ethernet |
Single precision MPI – Infiniband | apps/intel-12.0/gromacs/4.6.1_plumed/single-mpi-ib | For MPI on Intel or AMD nodes using Infiniband |
Double precision MPI | apps/intel-12.0/gromacs/4.6.1_plumed/double-mpi | For MPI on Intel nodes using gigabit ethernet |
Double precision MPI – Infiniband | apps/intel-12.0/gromacs/4.6.1_plumed/double-mpi-ib | For MPI on Intel or AMD nodes using Infiniband |
Plumed 1.3 Notes
- Plumed does not work with multi-threaded Gromacs. Hence only MPI versions are available. If you require single-threaded non-MPI versions of Gromacs for use with Plumed please contact its-ri-team@manchester.ac.uk.
- The Plumed utilities are available in your PATH once one of the above plumed modulefiles has been loaded. These include:
- bias-exchange.sh
- driver
- exchange-tool.x
- plumed_standalone
- reweight
- sum_hills.x
- sum_hills_mpi.x
- For the Plumed manual run:
evince $PLUMEDHOME/manual/manual.pdf
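After loading one of the plumed modulefiles, a quick way to confirm that these tools are on your PATH is, for example:
which driver sum_hills.x
which should print the full path of each utility.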
Running the application
First load the required module (see above) and create a directory containing the required input data files.
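For example, a typical preparation step is to build the binary run input (.tpr) file with the standard GROMACS grompp tool before submitting the job, assuming grompp is among the installed utilities (check with ls $GMXDIR/bin); the directory and file names below (my_md_run, md.mdp, conf.gro, topol.top) are only placeholders for your own files:
mkdir my_md_run
cd my_md_run
# copy your md.mdp, conf.gro and topol.top files here, then build the run input file:
grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr
mdrun reads topol.tpr by default, so the example job scripts below need no additional file arguments.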
You MUST ensure that, as well as requesting a number of cores from a suitable parallel environment, you also tell GROMACS how many cores it may use. These two numbers must be the same, which is ensured through correct use of certain environment variables and/or flags depending on the version of GROMACS being used. Failure to set this information causes GROMACS to run incorrectly, overload compute nodes and potentially trample on jobs belonging to other users. All of the examples below ensure that jobs use the cores requested.
Please do not add the -v flag to your mdrun command. It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.
Multi-threaded single-precision, 2 to 24 cores
Note that GROMACS v4.6.1 (unlike v4.5.4) does not support the -nt flag to set the number of threads when using the multi-threaded OpenMP (non-MPI) version. Instead, set the OMP_NUM_THREADS environment variable as shown below.
An example batch submission script to run the single-precision mdrun executable with 12 threads:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12
export OMP_NUM_THREADS=$NSLOTS
mdrun
Submit with the command: qsub scriptname
Note: smp.pe is the only PE suitable for multi-threaded GROMACS. 24 cores is the maximum job size in that PE.
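By default mdrun reads a run input file named topol.tpr from the current directory. If your .tpr file has a different name, pass it explicitly with the -s option (the file name here is only an example):
mdrun -s my_run.tpr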
Multi-threaded double-precision, 2 to 24 cores
An example batch submission script to run the double-precision mdrun_d executable with 12 threads:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12
export OMP_NUM_THREADS=$NSLOTS
mdrun_d
Submit with the command: qsub scriptname
Note: smp.pe is the only PE suitable for multi-threaded GROMACS. 24 cores is the maximum job size in that PE.
Single-precision MPI with Infiniband, 48 cores or more in multiples of 24
An example batch submission script to run the single precision mdrun_mpi executable with 48 MPI processes on 48 cores with the orte-24-ib.pe parallel environment (Intel nodes using InfiniBand):
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-24-ib.pe 48
mpiexec -n $NSLOTS mdrun_mpi
Submit with the command: qsub scriptname
Notes
- orte-24-ib.pe – you must request multiples of 24 and a minimum of 48.
- If you wish to use the AMD nodes please use orte-32-ib.pe – you must request multiples of 32 (see the example script below).
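A minimal sketch of the corresponding script for the AMD nodes, assuming 64 cores (a multiple of 32) and the single-mpi-ib modulefile; the only changes from the script above are the parallel environment name and the core count:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-32-ib.pe 64
mpiexec -n $NSLOTS mdrun_mpi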
Double precision MPI with Infiniband, 48 cores or more in multiples of 24
An example batch submission script to run the double precision mdrun_d_mpi executable with 48 MPI processes on 48 cores with the orte-24-ib.pe parallel environment (Intel nodes using InfiniBand):
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-24-ib.pe 48
mpiexec -n $NSLOTS mdrun_d_mpi
Submit with the command: qsub scriptname
Notes
- orte-24-ib.pe – you must request multiples of 24 and a minimum of 48.
- If you wish to use the AMD nodes please use orte-32-ib.pe – you must request multiples of 32.
I want to use a number of cores not recommended above. What are the options?
Please contact its-ri-team@manchester.ac.uk for advice.
Further info
- You can see a list of all the installed GROMACS utilities with the command:
ls $GMXDIR/bin
- GROMACS web page
- GROMACS manuals
- GROMACS user mailing list
Updates
Nov 2013 – Documentation for 4.5.4 and 4.6.1 split into two pages.
May 2013 – Gromacs 4.6.1 and Plumed 1.3 installed.