GROMACS
Overview
GROMACS is a molecular dynamics package that simulates the Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers).
Versions 2018.4 and 2020.1 (single and double precision, multi-core and MPI parallel) are installed.
Please do not add the -v flag to your mdrun command. It will write to the log file every second for the duration of your job and can lead to severe overloading of the file servers.
Significant Change in this Version
Within GROMACS 2018 and 2020, the different GROMACS commands (e.g., mdrun, grompp, g_hbond) should now be run using the command:
gmx command
where command is the name of the command you wish to run (without any g_ prefix), for example:
gmx mdrun
The gmx command changes its name to reflect the GROMACS flavour being used, but the command itself does not change. For example, if using the mdrun command:
# New 20XY method            # Previous 5.0.4 method (not available on CSF4)
# ===============            # =====================
gmx mdrun                    mdrun
gmx_d mdrun                  mdrun_d
mpirun gmx_mpi mdrun         mpirun -n $NSLOTS mdrun_mpi
mpirun gmx_mpi_d mdrun       mpirun -n $NSLOTS mdrun_mpi_d
The complete list of command names can be found by running the following on the login node:
gmx help commands
# The following commands are available:
anadock       gangle            rdf
anaeig        genconf           rms
analyze       genion            rmsdist
angle         genrestr          rmsf
awh           grompp            rotacf
bar           gyrate            rotmat
bundle        h2order           saltbr
check         hbond             sans
chi           helix             sasa
cluster       helixorient       saxs
clustsize     help              select
confrms       hydorder          sham
convert-tpr   insert-molecules  sigeps
covar         lie               solvate
current       make_edi          sorient
density       make_ndx          spatial
densmap       mdmat             spol
densorder     mdrun             tcaf
dielectric    mindist           traj
dipoles       mk_angndx         trajectory
disre         morph             trjcat
distance      msd               trjconv
do_dssp       nmeig             trjorder
dos           nmens             tune_pme
dump          nmtraj            vanhove
dyecoupl      order             velacc
dyndom        pairdist          view
editconf      pdb2gmx           wham
eneconv       pme_error         wheel
enemat        polystat          x2top
energy        potential         xpm2ps
filter        principal
freevolume    rama
Notice that the command names do NOT start with g_ and do NOT reference the flavour being run (e.g., _mpi_d). Only the main gmx command changes its name to reflect the flavour (see the list of modulefiles below for the full set of flavours available).
To obtain more help about a particular command run:
gmx help command
For example:
gmx help mdrun
Available Flavours
For versions 2018.4 and 2020.1 we have compiled multiple versions of Gromacs, for CPU jobs only (there are no GPUs in CSF4). You can use single or double precision executables for parallel multi-core (threads) or larger multi-node (MPI) jobs.
Restrictions on use
GROMACS is free software, available under the GNU General Public License.
Set up procedure
You must load one of the following modulefiles:
module load gromacs/2020.1-iomkl-2020.02-python-3.8.2
module load gromacs/2018.4-iomkl-2020.02
The following executables are available for use in your jobscripts:
gmx         # Single precision multicore (single compute node job)
gmx_d       # Double precision multicore (single compute node job)
gmx_mpi     # Single precision MPI (multi-node job)
gmx_mpi_d   # Double precision MPI (multi-node job)
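As a quick check (our suggestion, not part of the original instructions), the standard --version option reports how each executable was built, including its precision; run it on the login node, e.g.:
gmx --version       # prints build information, including the precision (single here)
gmx_d --version     # the double-precision build reports double precision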
Remember, you will need to add the command to be run by GROMACS to the gmx command line in your jobscript. For example:
# Double-precision multicore (single compute node)
gmx_d mdrun args...
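As a fuller illustration, a typical run might preprocess the input files with grompp before calling mdrun. This is only a sketch: the file names (md.mdp, system.gro, topol.top, topol.tpr) are placeholders for your own inputs.
# Hypothetical double-precision workflow: preprocess the inputs, then run
gmx_d grompp -f md.mdp -c system.gro -p topol.top -o topol.tpr
gmx_d mdrun -s topol.tpr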
Running the application
Please do not run GROMACS on the login node.
Important notes regarding running jobs in batch
We now recommend loading the modulefile as part of your batch script.
It is not necessary to tell mpirun how many cores to use if using the MPI executables; SLURM knows this automatically.
# Multi-core (single-node) or large multi-node MPI job. SLURM knows how many cores to use.
mpirun gmx_mpi mdrun      # New method (v5.1.4 and later)
mpirun gmx_mpi_d mdrun    # New method (v5.1.4 and later)
However, if using the multicore (single compute node) executables, you must inform GROMACS how many cores to use via the $SLURM_NTASKS variable:
# Single-node multi-threaded job
export OMP_NUM_THREADS=$SLURM_NTASKS   # Inform GROMACS how many cores to use
gmx mdrun      # New method (v5.1.4 and later)
gmx_d mdrun    # New method (v5.1.4 and later)
The examples below can be used for single-precision or double-precision GROMACS. Simply use the single-precision executables (gmx, gmx_mpi) or the double-precision executables (gmx_d, gmx_mpi_d).
Please do not add the -v flag to your mdrun command. It will write to the log file every second for the duration of your job and can lead to severe overloading of the file servers.
Multi-threaded single-precision, 2 to 40 cores
Note that GROMACS 2020.1 (unlike v4.5.4) does not support the -nt flag to set the number of threads when using the multithreaded OpenMP (non-MPI) version. Instead, set the OMP_NUM_THREADS environment variable as shown below.
An example batch submission script to run the single-precision mdrun executable with 16 threads:
#!/bin/bash --login
#SBATCH -p multicore   # (--partition=multicore)
#SBATCH -n 16          # Can specify 2 to 40 cores in the multicore partition
module load gromacs/2020.1-iomkl-2020.02-python-3.8.2
export OMP_NUM_THREADS=$SLURM_NTASKS
gmx mdrun
Submit with the command: sbatch scriptname, where scriptname is the name of your jobscript.
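For example, assuming your jobscript is saved as gromacs-job.sh (a placeholder name), you could submit it and then monitor it with the standard SLURM commands:
sbatch gromacs-job.sh   # submit the jobscript to the batch system
squeue -u $USER         # check the state of your pending/running jobs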
Multi-threaded double-precision, 2 to 40 cores
An example batch submission script to run the double-precision mdrun executable with 16 threads:
#!/bin/bash --login
#SBATCH -p multicore   # (--partition=multicore)
#SBATCH -n 16          # Can specify 2 to 40 cores in the multicore partition
module load gromacs/2020.1-iomkl-2020.02-python-3.8.2
export OMP_NUM_THREADS=$SLURM_NTASKS
gmx_d mdrun
Submit with the command: sbatch scriptname, where scriptname is the name of your jobscript.
Single precision MPI (single-node), 2 to 40 cores
An example batch submission script to run the single-precision mdrun executable on 16 cores using MPI:
#!/bin/bash --login
#SBATCH -p multicore   # (--partition=multicore)
#SBATCH -n 16          # Can specify 2 to 40 cores in the multicore partition
module load gromacs/2020.1-iomkl-2020.02-python-3.8.2
mpirun gmx_mpi mdrun
Submit with the command: sbatch scriptname, where scriptname is the name of your jobscript.
Double precision MPI (single-node), 2 to 40 cores
An example batch submission script to run the double-precision mdrun executable on 16 cores using MPI:
#!/bin/bash --login
#SBATCH -p multicore   # (--partition=multicore)
#SBATCH -n 16          # Can specify 2 to 40 cores in the multicore partition
module load gromacs/2020.1-iomkl-2020.02-python-3.8.2
mpirun gmx_mpi_d mdrun
Submit with the command: sbatch scriptname, where scriptname is the name of your jobscript.
Single-precision, MPI, 80 cores or more in multiples of 40
An example batch submission script to run the single-precision mdrun executable on 80 cores (2 x 40-core compute nodes) using MPI:
#!/bin/bash --login
#SBATCH -p multinode   # (--partition=multinode)
#SBATCH -n 80          # 80 cores is 2 x 40-core compute nodes. Must be a multiple of 40.
module load gromacs/2020.1-iomkl-2020.02-python-3.8.2
mpirun gmx_mpi mdrun
Submit with the command: sbatch scriptname, where scriptname is the name of your jobscript.
Double-precision, MPI, 80 cores or more in multiples of 40
An example batch submission script to run the double-precision mdrun executable on 80 cores (2 x 40-core compute nodes) using MPI:
#!/bin/bash --login
#SBATCH -p multinode   # (--partition=multinode)
#SBATCH -n 80          # 80 cores is 2 x 40-core compute nodes. Must be a multiple of 40.
module load gromacs/2020.1-iomkl-2020.02-python-3.8.2
mpirun gmx_mpi_d mdrun
Submit with the command: sbatch scriptname, where scriptname is the name of your jobscript.
Error about OpenMP and cut-off scheme
If you encounter the following error:
OpenMP threads have been requested with cut-off scheme Group, but these are only supported with cut-off scheme Verlet
then please try using the MPI version of the software. Note that it is possible to run the MPI versions on a single node (see the examples above).
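If your .mdp file explicitly requests the Group cut-off scheme, another option (our suggestion, not part of the original guidance) is to switch it to the Verlet scheme, which does support OpenMP threads. A minimal sketch of the relevant .mdp line (check that your other settings remain compatible):
; in your .mdp file
cutoff-scheme = Verlet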
Further info
- You can see a list of all the installed GROMACS utilities with the command:
ls $GMXDIR/bin
- GROMACS web page
- GROMACS manuals
- GROMACS user mailing list
Updates
Oct 2020 – First version