Gromacs 2023.3 (CPU & GPU, with and without Plumed)
Overview
GROMACS is a versatile, community-driven package for performing molecular dynamics, i.e. simulating the Newtonian equations of motion for systems with hundreds to millions of particles.
It is primarily designed for biochemical molecules such as proteins, lipids and nucleic acids, which have many complicated bonded interactions, but since GROMACS is extremely fast at calculating the non-bonded interactions (which usually dominate simulations) many groups also use it for research on non-biological systems, e.g. polymers and fluid dynamics.
Please do not add the -v flag to your mdrun command. It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.
Please do not run source GMXRC; it is not required. Loading the module does everything.
Please note that the conventions and syntax used for this installation follow the official GROMACS 2023.3 documentation. Old legacy (5.x and earlier) command options are no longer applicable for this installation. The new command syntax is demonstrated in the example jobscripts below.
Restrictions on use
GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL), version 2.1.
Available Builds/Modules
The following single-precision modules are available for version 2023.3:
apps/gcc/gromacs/2023.3/single
apps/gcc/gromacs/2023.3/single_avx512
apps/gcc/gromacs/2023.3/single_gpu
apps/gcc/gromacs/2023.3/single_mpi
apps/gcc/gromacs/2023.3/single_mpi_avx512
apps/gcc/gromacs/2023.3/single_mpi-plumed
apps/gcc/gromacs/2023.3/single_mpi_avx512-plumed
The following double-precision modules are available for version 2023.3:
apps/gcc/gromacs/2023.3/double
apps/gcc/gromacs/2023.3/double_mpi
apps/gcc/gromacs/2023.3/double_mpi-plumed
apps/gcc/gromacs/2023.3/double_mpi_avx512
apps/gcc/gromacs/2023.3/double_mpi_avx512-plumed
NOTE:
- The avx512 builds are suitable only for Skylake, Cascade Lake and Genoa (newer AMD) processor based nodes and the HPC Pool
- The MPI builds can be run on a single node as well as on multiple (InfiniBand-connected) nodes
- The PLUMED version used for the PLUMED-enabled builds is PLUMED v2.9.1
- A double-precision build of GROMACS with GPU support is not possible
Set up procedure
You must load the appropriate modulefile:
module load modulefile
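For example, to load the plain single-precision build (and to check which builds are installed), something like the following can be run on the login node; the module names are taken from the lists above:

module avail apps/gcc/gromacs/2023.3        # list the installed 2023.3 builds
module load apps/gcc/gromacs/2023.3/single  # load the plain single-precision build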
Syntax change in newer version
The following is the new syntax:
Single Precision with and without GPU | Double Precision | MPI with and without GPU | Double Precision MPI | PLUMED with and without GPU |
---|---|---|---|---|
gmx command | gmx_d command | gmx_mpi command | gmx_mpi_d command | gmx_mpi command |
Examples:
Single Precision with and without GPU | Double Precision | MPI with and without GPU | Double Precision MPI | PLUMED with and without GPU |
---|---|---|---|---|
gmx mdrun | gmx_d mdrun | gmx_mpi mdrun | gmx_mpi_d mdrun | gmx_mpi mdrun |
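As a brief illustration of the change, here are legacy standalone tool names (no longer available in this installation, shown only for comparison) next to the new gmx wrapper syntax:

# Legacy (5.x and earlier) standalone tool names, for comparison only:
#   mdrun -deffnm step1
#   mdrun_d -deffnm step1
# New syntax used by this installation, everything goes through the gmx wrappers:
gmx mdrun -deffnm step1
gmx_d mdrun -deffnm step1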
The complete list of command names can be found by first loading the desired module and then running the following on the login node:
# For single precision
gmx help commands
# For double precision
gmx_d help commands
To obtain more help about a particular command run:
gmx help command    # or: gmx_d help command
For example
gmx help mdrun
Running the application
Please do not run GROMACS on the login node.
Please do not add the -v flag to your mdrun command. It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.
Please do not forget to pass $NSLOTS to GROMACS, which tells it how many cores/threads to run on.
This value is automatically obtained from the number of cores requested in the jobscript using: #$ -pe smp.pe N
Please note that the options used for the multi-threaded and MPI builds are different and jobs will fail if they are not set correctly, as shown in the sketch below:
- For the multi-threaded builds use: -nt $NSLOTS
- For the MPI builds use: -np $NSLOTS
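A minimal sketch of the two forms, using the same step1 input as the full jobscripts below:

# Multi-threaded (non-MPI) builds: mdrun itself starts the threads
gmx mdrun -nt $NSLOTS -deffnm step1

# MPI builds: mpirun starts the processes, one per slot
mpirun -np $NSLOTS gmx_mpi mdrun -deffnm step1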
Multi-threaded single-precision on 2 to 32 cores
An example batch submission script to run the single-precision gmx mdrun
command with 12 threads:
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 12    # Can specify 2 to 32 cores in smp.pe

module load apps/gcc/gromacs/2023.3/single

gmx mdrun -nt $NSLOTS -deffnm step1
Submit with the command: qsub scriptname
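The jobscripts on this page assume the run input file (step1.tpr in these examples) has already been prepared. If you still need to create it, a preprocessing step along the following lines can be run first; the input file names here are only illustrative:

gmx grompp -f step1.mdp -c step1.gro -p topol.top -o step1.tpr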
Multi-threaded double-precision on 2 to 32 cores
An example batch submission script to run the double-precision gmx_d mdrun
command with 16 threads:
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 16

module load apps/gcc/gromacs/2023.3/double

gmx_d mdrun -nt $NSLOTS -deffnm step1
Submit with the command: qsub scriptname
Single precision MPI (single-node), 2 to 32 cores
If you want to use OpenMPI instead of the internal multi-threading of GROMACS, you can use the single-precision MPI module.
An example batch submission script to run the single-precision gmx_mpi mdrun command on 16 cores using MPI:
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 16

export OMP_NUM_THREADS=1   # Setting this as 1 is important when running the MPI build,
                           # failing which each MPI process will start 4 OpenMP threads.
                           # Alternately -ntomp 1 can be set as an mdrun option
                           # to the same effect (refer below).

module load apps/gcc/gromacs/2023.3/single_mpi

mpirun -np $NSLOTS gmx_mpi mdrun -deffnm step1
#mpirun -np $NSLOTS gmx_mpi mdrun -ntomp 1 -deffnm step1
Submit with the command: qsub scriptname
Double precision MPI (single-node), 2 to 32 cores
If you want to use OpenMPI instead of the internal multi-threading of GROMACS, you can use the double-precision MPI module.
An example batch submission script to run the double-precision gmx_mpi_d mdrun command on 16 cores using MPI:
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 16

export OMP_NUM_THREADS=1   # Setting this as 1 is important when running the MPI build,
                           # failing which each MPI process will start 4 OpenMP threads.
                           # Alternately -ntomp 1 can be set as an mdrun option
                           # to the same effect (refer below).

module load apps/gcc/gromacs/2023.3/double_mpi

mpirun -np $NSLOTS gmx_mpi_d mdrun -deffnm step1
#mpirun -np $NSLOTS gmx_mpi_d mdrun -ntomp 1 -deffnm step1
Submit with the command: qsub scriptname
Single-precision, MPI Multinode, 48 cores or more in multiples of 24
An example batch submission script to run the single precision gmx_mpi mdrun
command with 48 MPI processes (48 cores on two 24-core nodes) with the mpi-24-ib.pe
parallel environment (Intel Haswell nodes using infiniband):
#!/bin/bash --login
#$ -cwd
#$ -pe mpi-24-ib.pe 48     # EG: Two 24-core Intel Haswell nodes

export OMP_NUM_THREADS=1   # Setting this as 1 is important when running the MPI build,
                           # failing which each MPI process will start 4 OpenMP threads.
                           # Alternately -ntomp 1 can be set as an mdrun option
                           # to the same effect (refer below).

module load apps/gcc/gromacs/2023.3/single_mpi

mpirun -np $NSLOTS gmx_mpi mdrun -deffnm step1
#mpirun -np $NSLOTS gmx_mpi mdrun -ntomp 1 -deffnm step1
Submit with the command: qsub scriptname
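To scale up further, the core count in the parallel environment request is simply increased in multiples of 24, for example:

#$ -pe mpi-24-ib.pe 72     # three 24-core nodes
#$ -pe mpi-24-ib.pe 96     # four 24-core nodes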
Double-precision, MPI, 48 cores or more in multiples of 24
An example batch submission script to run the double precision gmx_mpi_d mdrun
command with 48 MPI processes (48 cores on two 24-core nodes) with the mpi-24-ib.pe
parallel environment (Intel Haswell nodes using infiniband):
#!/bin/bash --login
#$ -cwd
#$ -pe mpi-24-ib.pe 48     # EG: Two 24-core Intel Haswell nodes

export OMP_NUM_THREADS=1   # Setting this as 1 is important when running the MPI build,
                           # failing which each MPI process will start 4 OpenMP threads.
                           # Alternately -ntomp 1 can be set as an mdrun option
                           # to the same effect (refer below).

module load apps/gcc/gromacs/2023.3/double_mpi

mpirun -np $NSLOTS gmx_mpi_d mdrun -deffnm step1
#mpirun -np $NSLOTS gmx_mpi_d mdrun -ntomp 1 -deffnm step1
Submit with the command: qsub scriptname
Multi-threaded single-precision on a single node with one GPU
You need to request being added to the relevant group to access GPUs before you can run GROMACS on them.
Please note that if you have ‘free at the point of use’ access to the GPUs then the maximum number of GPUs you can request is 2.
The maximum number of CPU cores that anyone can request is 8 per GPU.
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 8            # Specify the number of CPUs, maximum of 8 per GPU.
#$ -l v100                 # This requests a single GPU.

module load apps/gcc/gromacs/2023.3/single_gpu

gmx mdrun -nt $NSLOTS -deffnm md_0_1 -nb gpu
Submit with the command: qsub scriptname
Multi-threaded single-precision on a single node with multiple GPUs
You need to request being added to the relevant group to access GPUs before you can run GROMACS on them.
Please note that if you have ‘free at the point of use’ access to the GPUs then the maximum number of GPUs you can request is 2 (please therefore follow the previous example).
The maximum number of CPU cores that anyone can request is 8 per GPU requested e.g. 1 GPU and 8 cores, 2 GPUs and 16 cores.
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 16           # Specify the number of CPUs, maximum of 8 per GPU.
#$ -l v100=2               # Specify we want a GPU (nvidia_v100) node with two GPUs, maximum is 4.

module load apps/gcc/gromacs/2023.3/single_gpu

export OMP_NUM_THREADS=$((NSLOTS/NGPUS))

gmx mdrun -ntmpi ${NGPUS} -ntomp ${OMP_NUM_THREADS} -deffnm md_0_1 -nb gpu
Submit with the command: qsub scriptname
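As a worked example of the thread arithmetic in the script above (assuming the batch system sets NGPUS to the number of GPUs requested, 2 in this case):

# With 16 cores and 2 GPUs requested:
#   OMP_NUM_THREADS = 16 / 2 = 8
#   gmx mdrun -ntmpi 2 -ntomp 8 ...
# i.e. one thread-MPI rank per GPU, each running 8 OpenMP threads.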
Advanced options – Setting the number of MPI ranks and threads
Instead of using the option -nt $NSLOTS, which specifies only the number of threads (as shown in the example jobscripts above), there are other mdrun options that can be used to set the number of MPI ranks and threads per rank for your job according to your requirements:
Option: -ntmpi sets the number of thread-MPI ranks to be started
Option: -ntomp sets the number of OpenMP threads per rank to be started
For example, when you have requested 16 CPU cores (#$ -pe smp.pe 16), the possible combinations include:
-ntmpi 2 -ntomp 8   # 2 thread-MPI ranks and 8 threads per rank
-ntmpi 4 -ntomp 4   # 4 thread-MPI ranks and 4 threads per rank
-ntmpi 8 -ntomp 2   # 8 thread-MPI ranks and 2 threads per rank
Explanation:
-ntmpi 2 -ntomp 8
This means 2 thread-MPI ranks of GROMACS will be started, each of which will spawn 8 OpenMP threads.
Example:
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 16           # Specify the number of CPUs, maximum of 8 per GPU.
#$ -l v100=2               # Specify we want a GPU (nvidia_v100) node with two GPUs, maximum is 4.

module load apps/gcc/gromacs/2023.3/single_gpu

gmx mdrun -ntmpi 2 -ntomp 8 -deffnm md_0_1 -nb gpu
If you want to experiment with these and other available mdrun options, you can consult the official GROMACS 2023 documentation and try the different combinations to see which gives you the best performance.
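A minimal sketch of such an experiment, using the mdrun options -nsteps (limit the run length) and -resethway (reset the performance counters halfway through) to make short benchmark runs; the md_0_1 input name is taken from the GPU examples above:

gmx mdrun -ntmpi 2 -ntomp 8 -nsteps 10000 -resethway -deffnm md_0_1 -nb gpu
gmx mdrun -ntmpi 4 -ntomp 4 -nsteps 10000 -resethway -deffnm md_0_1 -nb gpu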
Error about OpenMP and cut-off scheme
If you encounter the following error:
OpenMP threads have been requested with cut-off scheme Group, but these are only supported with cut-off scheme Verlet
then please try using the MPI version of the software. Note that it is possible to run the MPI versions on a single node (see the examples above).
Further info
- You can see a list of all the installed GROMACS utilities with the command:
ls $GMXDIR/bin
- GROMACS website
- GROMACS 2023 manual/documentation
- GROMACS 2023.3 User Guide
- GROMACS forum
Updates
- No Updates