{"id":8092,"date":"2024-09-25T15:20:35","date_gmt":"2024-09-25T14:20:35","guid":{"rendered":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=8092"},"modified":"2026-03-19T11:46:13","modified_gmt":"2026-03-19T11:46:13","slug":"gromacs-2023-3-cpu-gpu-with-and-without-plumed","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/gromacs\/gromacs-2023-3-cpu-gpu-with-and-without-plumed\/","title":{"rendered":"Gromacs 2023.3 (CPU &#038; GPU, with and without Plumed)"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><strong>GROMACS<\/strong> is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles and is a community-driven project.<\/p>\n<p>It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers and fluid dynamics.<\/p>\n<div class=\"warning\">\n<em>Please do <strong>not<\/strong> add the <code>-v<\/code> flag to your <code>mdrun<\/code> command.<\/em><\/p>\n<p>It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.<\/p>\n<p>Please do not run <code><strong>source GMXRC<\/strong><\/code>, it is not required. Loading the module does everything.<\/p>\n<p>Please note that the convention and syntax used for this installation are as per official Gromacs 2023.3 documentation. Old legacy (5.x and earlier) command options are not applicable any more for this installation. 
The new command syntax is demonstrated in the example jobscripts below.\n<\/p><\/div>\n<h2>Restrictions on use<\/h2>\n<p>GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL), version 2.1.<\/p>\n<h2>Available Builds\/Modules<\/h2>\n<p>The following <strong>Single Precision<\/strong> modules are available for version 2023.3:<\/p>\n<pre>\r\n  apps\/gcc\/gromacs\/2023.3\/single\r\n  apps\/gcc\/gromacs\/2023.3\/single_avx512\r\n  apps\/gcc\/gromacs\/2023.3\/single_gpu\r\n  apps\/gcc\/gromacs\/2023.3\/single_mpi\r\n  apps\/gcc\/gromacs\/2023.3\/single_mpi_avx512 <!---  apps\/gcc\/gromacs\/2023.3\/single_mpi_gpu --->\r\n  apps\/gcc\/gromacs\/2023.3\/single_mpi-plumed\r\n  apps\/gcc\/gromacs\/2023.3\/single_mpi_avx512-plumed <!---  apps\/gcc\/gromacs\/2023.3\/single_mpi_gpu-plumed --->\r\n<\/pre>\n<p>The following <strong>Double Precision<\/strong> modules are available for version 2023.3:<\/p>\n<pre>\r\n  apps\/gcc\/gromacs\/2023.3\/double\r\n  apps\/gcc\/gromacs\/2023.3\/double_mpi\r\n  apps\/gcc\/gromacs\/2023.3\/double_mpi-plumed\r\n  apps\/gcc\/gromacs\/2023.3\/double_mpi_avx512\r\n  apps\/gcc\/gromacs\/2023.3\/double_mpi_avx512-plumed\r\n<\/pre>\n<p><strong>NOTE:<\/strong><\/p>\n<ul>\n<li>The <strong>avx512<\/strong> builds are suitable only for Skylake, Cascade Lake and Genoa (new AMD) processor-based nodes and the HPC-Pool<\/li>\n<li>The <strong>MPI<\/strong> builds can be run on a single node as well as across multiple nodes (over InfiniBand)<\/li>\n<li>The PLUMED version used for the Plumed-supported builds is <strong><a href=\"https:\/\/www.plumed.org\/doc-v2.9\/user-doc\/html\/index.html\" target=\"_blank\" rel=\"noopener\">PLUMED v2.9.1<\/a><\/strong><\/li>\n<li>A Double Precision build of Gromacs with GPU support is not possible<\/li>\n<\/ul>\n<p><!---\n<#h2>Description<#\/h2>\n<strong>apps\/gcc\/gromacs\/2023.3\/single<\/strong>\nSingle precision with AXV2 optimization.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/single_avx512<\/strong>\nSingle precision with AXV_512 
optimization. Suitable only for Skylake and Genoa (new AMD) nodes and HPC-Pool.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/single_gpu<\/strong>\nSingle precision with AXV2 optimization and GPU support.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/single_mpi<\/strong>\nSingle precision with AXV2 optimization and MPI support. Suitable for Multinode jobs.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/single_mpi_avx512<\/strong>\n*Single precision with AXV_512 optimization and MPI support. Suitable for HPC-Pool Multinode jobs.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/single_mpi_gpu<\/strong>\nSingle precision with AXV_512 optimization, MPI and GPU support. Suitable for Multi GPU jobs.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/single_mpi-plumed<\/strong>\n*Single precision with AXV2 optimization, MPI and Plumed support.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/single_mpi_gpu-plumed<\/strong>\n*Single precision with AXV2 optimization, MPI, GPU and Plumed support.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/single_mpi_avx512-plumed<\/strong>\n*Single precision with AXV_512 optimization, MPI, GPU and Plumed support. Suitable only for Skylake and Genoa (new AMD) nodes and HPC-Pool.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/double<\/strong>\nDouble precision with AXV2 optimization.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/double_mpi<\/strong>\nDouble precision with AXV2 optimization and MPI support. Suitable for Multinode jobs.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/double_mpi-plumed<\/strong>\n*Double precision with AXV2 optimization, MPI and Plumed support.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/double_mpi_avx512<\/strong>\nDouble precision with AXV_512 optimization and MPI support. Suitable only for Skylake and Genoa (new AMD) nodes and HPC-Pool.\n\n<strong>apps\/gcc\/gromacs\/2023.3\/double_mpi_avx512-plumed<\/strong>\nDouble precision with AXV_512 optimization, MPI and Plumed support. 
Suitable only for Skylake and Genoa (new AMD) nodes and HPC-Pool.\n\n<strong>Please note that Double Precision with GPU support is not possible.<\/strong>\n---><\/p>\n<h2>Set up procedure<\/h2>\n<p>You must load the appropriate modulefile:<\/p>\n<pre>module load <em>modulefile<\/em>\r\n<\/pre>\n<h2>Syntax change in newer version<\/h2>\n<p>The new syntax is as follows:<\/p>\n<table>\n<tr>\n<th width=\"125\">Single Precision with and without GPU<\/th>\n<th width=\"125\">Double Precision<\/th>\n<th width=\"125\">MPI with and without GPU<\/th>\n<th width=\"125\">Double Precision MPI<\/th>\n<th width=\"125\">PLUMED with and without GPU<\/th>\n<\/tr>\n<tr>\n<td>gmx command<\/td>\n<td>gmx_d command<\/td>\n<td>gmx_mpi command<\/td>\n<td>gmx_mpi_d command<\/td>\n<td>gmx_mpi command<\/td>\n<\/tr>\n<\/table>\n<p>Examples:<\/p>\n<table>\n<tr>\n<th width=\"125\">Single Precision with and without GPU<\/th>\n<th width=\"125\">Double Precision<\/th>\n<th width=\"125\">MPI with and without GPU<\/th>\n<th width=\"125\">Double Precision MPI<\/th>\n<th width=\"125\">PLUMED with and without GPU<\/th>\n<\/tr>\n<tr>\n<td>gmx mdrun<\/td>\n<td>gmx_d mdrun<\/td>\n<td>gmx_mpi mdrun<\/td>\n<td>gmx_mpi_d mdrun<\/td>\n<td>gmx_mpi mdrun<\/td>\n<\/tr>\n<\/table>\n<p>The complete list of <code><em>command<\/em><\/code> names can be found by first loading the desired module and then running the following on the login node:<\/p>\n<pre>\r\n#For single precision\r\ngmx help command<strong>s<\/strong>\r\n\r\n#For double precision\r\ngmx_d help command<strong>s<\/strong>\r\n<\/pre>\n<p>To obtain more help about a particular command run:<\/p>\n<pre>\r\ngmx help <em>command<\/em>\r\n#or\r\ngmx_d help <em>command<\/em>\r\n<\/pre>\n<p>For example:<\/p>\n<pre>\r\ngmx help mdrun\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do <strong>not<\/strong> run GROMACS on the login node.<\/p>\n<div class=\"warning\">\n<p><em>Please do <strong>not<\/strong> add the <code>-v<\/code> flag to your 
<code>mdrun<\/code> command.<\/em><\/p>\n<p>It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.<\/p>\n<p>Please do not forget to add the option which tells Gromacs how many cores\/threads to run on, using <strong>$SLURM_NTASKS<\/strong> (Slurm) or <strong>$NSLOTS<\/strong> (SGE).<br \/>\nThis value is automatically obtained from the number of cores requested in the jobscript using <strong>#SBATCH -n N<\/strong> (Slurm) or <strong>#$ -pe smp.pe N<\/strong> (SGE).<\/p>\n<p>Please note that the options used for <strong>Multi-threaded<\/strong> and <strong>MPI<\/strong> runs are different, and jobs will fail if they are not set correctly:<br \/>\nFor <strong>Multi-threaded<\/strong> we use the option <code>-n<strong>t<\/strong> $SLURM_NTASKS<\/code><br \/>\nFor <strong>MPI<\/strong> we use the option <code>-n<strong>p<\/strong> $SLURM_NTASKS<\/code><\/p>\n<\/div>\n<h3>Multi-threaded single-precision on 2 to 168 cores<\/h3>\n<p>An example batch submission script to run the <strong>single-precision<\/strong> <code>gmx mdrun<\/code> command with 12 threads:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore      # AMD Genoa 168-core nodes\r\n#SBATCH -n 12             # (or --ntasks=) Number of cores (can be 2--168)\r\n#SBATCH -t 2-0            # Wallclock time limit (2-0 is 2 days, max permitted is 7-0)\r\n\r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single\r\n\r\ngmx grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md_0_1.tpr\r\ngmx mdrun -nt $SLURM_NTASKS -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>sbatch scriptname<\/code><\/p>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 12            # Can specify 2 to 32 cores in smp.pe\r\n\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single\r\n\r\ngmx mdrun -nt $NSLOTS -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Multi-threaded double-precision 
on 2 to 168 cores<\/h3>\n<p>An example batch submission script to run the <strong>double-precision<\/strong> <code>gmx_d mdrun<\/code> command with 16 threads:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore       # AMD Genoa 168-core nodes\r\n#SBATCH -n 16              # (or --ntasks=) Number of cores (can be 2--168)\r\n#SBATCH -t 2-0             # Wallclock time limit (2-0 is 2 days, max permitted is 7-0)\r\n\r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/double\r\n\r\ngmx_d grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md_0_1.tpr\r\ngmx_d mdrun -nt $SLURM_NTASKS -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>sbatch scriptname<\/code><\/p>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 16\r\nmodule load apps\/gcc\/gromacs\/2023.3\/double\r\n\r\ngmx_d mdrun -nt $NSLOTS -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Single precision MPI (single-node), 2 to 168 cores<\/h3>\n<p>If you want to use OpenMPI instead of the internal multi-threading of Gromacs, you can use the single-precision MPI module.<br \/>\nAn example batch submission script to run the single-precision <code>gmx_mpi mdrun<\/code> command on 16 cores using MPI:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore       # AMD Genoa 168-core nodes\r\n#SBATCH -n 16              # (or --ntasks=) Number of cores (can be 2--168)\r\n#SBATCH -t 2-0             # Wallclock time limit (2-0 is 2 days, max permitted is 7-0)\r\n\r\nexport OMP_NUM_THREADS=1       # Setting this as 1 is important when running MPI build, \r\n                               # failing which each MPI process will start 4 OpenMP threads.\r\n                               # Alternately -ntomp 1 can be set as mdrun option \r\n                               # to the same effect <a href=\"#Advanced_options_8211_Setting_number_of_MPI_Rank_and_Thread\">(refer below)<\/a>\r\n        
 \r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_mpi \r\n\r\nmpirun -np 1 gmx_mpi grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md_0_1.tpr\r\nmpirun -np $SLURM_NTASKS gmx_mpi mdrun -deffnm step1\r\n#mpirun -np $SLURM_NTASKS gmx_mpi mdrun -ntomp 1 -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>sbatch scriptname<\/code><\/p>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 16  \r\nexport OMP_NUM_THREADS=1       # Setting this as 1 is important when running MPI build, \r\n                               # failing which each MPI process will start 4 OpenMP threads.\r\n                               # Alternately -ntomp 1 can be set as mdrun option \r\n                               # to the same effect <a href=\"#Advanced_options_8211_Setting_number_of_MPI_Rank_and_Thread\">(refer below)<\/a>\r\n         \r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_mpi\r\n\r\nmpirun -np $NSLOTS gmx_mpi mdrun -deffnm step1\r\n#mpirun -np $NSLOTS gmx_mpi mdrun -ntomp 1 -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Double precision MPI (single-node), 2 to 168 cores<\/h3>\n<p>If you want to use OpenMPI instead of the internal multi-threading of Gromacs, you can use the double-precision MPI module.<br \/>\nAn example batch submission script to run the <strong>double-precision<\/strong> <code>gmx_mpi_d mdrun<\/code> command on 16 cores using MPI:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore       # AMD Genoa 168-core nodes\r\n#SBATCH -n 16              # (or --ntasks=) Number of cores (can be 2--168)\r\n#SBATCH -t 2-0             # Wallclock time limit (2-0 is 2 days, max permitted is 7-0)\r\n\r\nexport OMP_NUM_THREADS=1       # Setting this as 1 is important when running MPI build,  \r\n                               # failing which each MPI process will 
start 4 OpenMP threads.\r\n                               # Alternately -ntomp 1 can be set as mdrun option \r\n                               # to the same effect <a href=\"#Advanced_options_8211_Setting_number_of_MPI_Rank_and_Thread\">(refer below)<\/a>\r\n\r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/double_mpi\r\n\r\nmpirun -np 1 gmx_mpi_d grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md_0_1.tpr                                           \r\nmpirun -np $SLURM_NTASKS gmx_mpi_d mdrun -deffnm step1\r\n#mpirun -np $SLURM_NTASKS gmx_mpi_d mdrun -ntomp 1 -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>sbatch scriptname<\/code><\/p>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 16\r\nexport OMP_NUM_THREADS=1       # Setting this as 1 is important when running MPI build,  \r\n                               # failing which each MPI process will start 4 OpenMP threads.\r\n                               # Alternately -ntomp 1 can be set as mdrun option \r\n                               # to the same effect <a href=\"#Advanced_options_8211_Setting_number_of_MPI_Rank_and_Thread\">(refer below)<\/a>\r\n\r\nmodule load apps\/gcc\/gromacs\/2023.3\/double_mpi                                           \r\nmpirun -np $NSLOTS gmx_mpi_d mdrun -deffnm step1\r\n#mpirun -np $NSLOTS gmx_mpi_d mdrun -ntomp 1 -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Single-precision, MPI Multinode (NOTE: Not for CSF3, for <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/hpc-pool\/\" target=\"_blank\" rel=\"noopener\">HPC-POOL<\/a> only)<\/h3>\n<p>An example batch submission script to run the <strong>single precision<\/strong> <code>gmx_mpi mdrun<\/code> command with 128 MPI processes (128 cores on four 32-core nodes) in the <code>HPC Pool<\/code> using infiniband:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n### All of the following flags are required!\r\n#SBATCH 
-p hpcpool             # The \"partition\" - named hpcpool\r\n#SBATCH -N 4                   # (or --nodes=) Minimum is 4, Max is 32. Job uses 32 cores on each node.\r\n#SBATCH -n 128                 # (or --ntasks=) TOTAL number of tasks. Max is 1024.\r\n#SBATCH -t 1-0                 # Wallclock limit. 1-0 is 1 day. Maximum permitted is 4-0 (4-days).\r\n#SBATCH -A hpc-proj-name       # Use your HPC project code\r\n\r\nexport OMP_NUM_THREADS=1       # Setting this as 1 is important when running MPI build, \r\n                               # failing which each MPI process will start 4 OpenMP threads.\r\n                               # Alternately -ntomp 1 can be set as mdrun option \r\n                               # to the same effect <a href=\"#Advanced_options_8211_Setting_number_of_MPI_Rank_and_Thread\">(refer below)<\/a>\r\n\r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_mpi\r\nmpirun -np $SLURM_NTASKS gmx_mpi mdrun -deffnm step1\r\n#mpirun -np $SLURM_NTASKS gmx_mpi mdrun -ntomp 1 -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>sbatch scriptname<\/code><\/p>\n<h3>Double-precision, MPI Multinode (NOTE: Not for CSF3, for <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/hpc-pool\/\" target=\"_blank\" rel=\"noopener\">HPC-POOL<\/a> only)<\/h3>\n<p>An example batch submission script to run the <strong>double precision<\/strong> <code>gmx_mpi_d mdrun<\/code> command with 128 MPI processes (128 cores on four 32-core nodes) in the <code>HPC Pool<\/code> using infiniband:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n### All of the following flags are required!\r\n#SBATCH -p hpcpool             # The \"partition\" - named hpcpool\r\n#SBATCH -N 4                   # (or --nodes=) Minimum is 4, Max is 32. Job uses 32 cores on each node.\r\n#SBATCH -n 128                 # (or --ntasks=) TOTAL number of tasks. Max is 1024.\r\n#SBATCH -t 1-0                 # Wallclock limit. 1-0 is 1 day. 
Maximum permitted is 4-0 (4-days).\r\n#SBATCH -A hpc-proj-name       # Use your HPC project code\r\n\r\nexport OMP_NUM_THREADS=1       # Setting this as 1 is important when running MPI build, \r\n                               # failing which each MPI process will start 4 OpenMP threads.\r\n                               # Alternately -ntomp 1 can be set as mdrun option \r\n                               # to the same effect <a href=\"#Advanced_options_8211_Setting_number_of_MPI_Rank_and_Thread\">(refer below)<\/a>\r\n\r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/double_mpi\r\nmpirun -np $SLURM_NTASKS gmx_mpi_d mdrun -deffnm step1\r\n#mpirun -np $SLURM_NTASKS gmx_mpi_d mdrun -ntomp 1 -deffnm step1\r\n\r\n<\/pre>\n<p>Submit with the command: <code>sbatch scriptname<\/code><\/p>\n<h3>Multi-threaded single-precision on a single node with one GPU<\/h3>\n<p><strong>You need to request being added to the relevant group to access <a href=\"\/csf3\/batch\/gpu-jobs\/\" target=\"_blank\" rel=\"noopener\">GPUs<\/a> before you can run GROMACS on them.<\/strong><\/p>\n<p>Please note that if you have <em>&#8216;free at the point of use&#8217;<\/em> access to the GPUs then the maximum number of GPUs you can request is 2.<\/p>\n<p>The maximum number of CPU cores that anyone can request is 8 per GPU for V100 and 12 per GPU for A100 and L40S.<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p gpuX              # Select the type of GPU (where X = V, A, L or A40GB)\r\n#SBATCH -G 1                 # 1 GPU\r\n#SBATCH -n 8                 # Select the no. of CPU cores. 8 for V100 & 12 for rest if using 1 GPU\r\n#SBATCH -t 2-0               # Job \"wallclock\" is required. 
Max permitted is 4 days (4-0)\r\n\r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_gpu\r\n\r\ngmx mdrun -nt $SLURM_NTASKS -deffnm md_0_1 -nb gpu\r\n\r\n<\/pre>\n<p>Submit with the command: <code>sbatch scriptname<\/code><\/p>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 8            #Specify the number of CPUs, maximum of 8 per GPU.\r\n#$ -l v100                 #This requests a single GPU.\r\n\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_gpu\r\n\r\ngmx mdrun -nt $NSLOTS -deffnm md_0_1 -nb gpu\r\n\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Multi-threaded single-precision on a single node with multiple GPUs<\/h3>\n<p><strong>You need to request being added to the relevant group to access <a href=\"\/csf3\/batch\/gpu-jobs\/\" target=\"_blank\" rel=\"noopener\">GPUs<\/a> before you can run GROMACS on them.<\/strong><\/p>\n<p>Please note that if you have <em>&#8216;free at the point of use&#8217;<\/em> access to the GPUs then the maximum number of GPUs you can request is 2 (please therefore follow the previous example).<\/p>\n<p>The maximum number of CPU cores that anyone can request is 8 or 12 per GPU requested, e.g.:<\/p>\n<ul>\n<li>1 GPU and 8 cores, 2 GPUs and 16 cores for V100<\/li>\n<li>1 GPU and 12 cores, 2 GPUs and 24 cores for other GPU types<\/li>\n<\/ul>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p gpuX              # Select the type of GPU (where X = V, A, L or A40GB)\r\n#SBATCH -G 2                 # 2 GPUs\r\n#SBATCH -n 16                # Select the no. of CPU cores. 16 for V100 & 24 for rest if using 2 GPUs\r\n#SBATCH -t 2-0               # Job \"wallclock\" is required. 
Max permitted is 4 days (4-0)\r\n\r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_gpu\r\n\r\nexport OMP_NUM_THREADS=$((SLURM_NTASKS\/SLURM_GPUS))\r\n\r\ngmx mdrun -ntmpi ${SLURM_GPUS} -ntomp ${OMP_NUM_THREADS} -deffnm md_0_1 -nb gpu\r\n\r\n<\/pre>\n<p>Submit with the command: <code>sbatch scriptname<\/code><\/p>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 16          #Specify the number of CPUs, maximum of 8 per GPU.\r\n#$ -l v100=2              #Specify we want a GPU (nvidia_v100) node with two GPUs, maximum is 4.\r\n\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_gpu\r\n\r\nexport OMP_NUM_THREADS=$((NSLOTS\/NGPUS))\r\n\r\ngmx mdrun -ntmpi ${NGPUS} -ntomp ${OMP_NUM_THREADS} -deffnm md_0_1 -nb gpu\r\n\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h2>Advanced options &#8211; Setting number of MPI Rank and Thread<\/h2>\n<p>Instead of using the option <code>-nt $SLURM_NTASKS<\/code>, which specifies only the number of threads, as shown in the example jobscripts above, there are other <strong>mdrun<\/strong> options which can be used to set the number of MPI ranks and threads for your job according to your requirements:<\/p>\n<p>Option: <code>-ntmpi<\/code> sets the number of thread-MPI ranks to be started<br \/>\nOption: <code>-ntomp<\/code> sets the number of threads per rank to be started<\/p>\n<p>For example, when you have requested 16 CPU cores <strong>(#SBATCH -n 16)<\/strong>, the possible combinations include:<\/p>\n<pre>\r\n-ntmpi 2 -ntomp 8         # 2 thread-MPI ranks and 8 threads per rank\r\n-ntmpi 4 -ntomp 4         # 4 thread-MPI ranks and 4 threads per rank\r\n-ntmpi 8 -ntomp 2         # 8 thread-MPI ranks and 2 threads per rank\r\n<\/pre>\n<p><strong>Explanation:<\/strong><\/p>\n<p><code>-ntmpi 2 -ntomp 8<\/code><\/p>\n<p>This means 2 thread-MPI ranks of Gromacs will be started, each of which will fork 8 OpenMP 
threads.<\/p>\n<p><strong>Example:<\/strong><\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p gpuV              # Select the type of GPU (where X = V, A, L or A40GB)\r\n#SBATCH -G 2                 # 2 GPUs\r\n#SBATCH -n 16                # Select the no. of CPU cores. 16 for V100 & 24 for rest if using 2 GPUs\r\n#SBATCH -t 2-0               # Job \"wallclock\" is required. Max permitted is 4 days (4-0)\r\n\r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_gpu\r\ngmx mdrun -ntmpi 2 -ntomp 8 -deffnm md_0_1 -nb gpu\r\n\r\n<\/pre>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 16          #Specify the number of CPUs, maximum of 8 per GPU.\r\n#$ -l v100=2              #Specify we want a GPU (nvidia_v100) node with two GPUs, maximum is 4.\r\n\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_gpu\r\ngmx mdrun -ntmpi 2 -ntomp 8 -deffnm md_0_1 -nb gpu\r\n\r\n<\/pre>\n<p>If you want to experiment with these and other available mdrun options, you can go through the <a href=\"https:\/\/manual.gromacs.org\/2023-current\/onlinehelp\/gmx-mdrun.html\" target=\"_blank\" rel=\"noopener\">official Gromacs 2023 documentation<\/a> and try each combination to see which one gives you the best performance.<\/p>\n<h2>Error about OpenMP and cut-off scheme<\/h2>\n<p>If you encounter the following error:<\/p>\n<pre>OpenMP threads have been requested with cut-off scheme Group, but these \r\nare only supported with cut-off scheme Verlet\r\n<\/pre>\n<p>then please try using the MPI version of the software. 
Note that it is possible to run MPI versions on a single node <a href=\"#Single_precision_MPI_single-node_2_to_168_cores\">(example above)<\/a>.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li>You can see a list of all the installed GROMACS utilities with the command: <code>ls $GMXDIR\/bin<\/code><\/li>\n<li><a href=\"https:\/\/www.gromacs.org\/\" target=\"_blank\" rel=\"noopener\">GROMACS website<\/a><\/li>\n<li><a href=\"https:\/\/manual.gromacs.org\/2023-current\/index.html\" target=\"_blank\" rel=\"noopener\">GROMACS 2023 manual\/documentation<\/a><\/li>\n<li><a href=\"https:\/\/manual.gromacs.org\/2023-current\/user-guide\/index.html\" target=\"_blank\" rel=\"noopener\">GROMACS 2023.3 User Guide<\/a><\/li>\n<li><a href=\"https:\/\/gromacs.bioexcel.eu\/\" target=\"_blank\" rel=\"noopener\">GROMACS forum<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<ul>\n<li>No Updates<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Overview GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles and is a community-driven project. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/gromacs\/gromacs-2023-3-cpu-gpu-with-and-without-plumed\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":21,"featured_media":0,"parent":647,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-8092","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/8092","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/21"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=8092"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/8092\/revisions"}],"predecessor-version":[{"id":11375,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/8092\/revisions\/11375"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/647"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=8092"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}