{"id":442,"date":"2020-10-19T18:38:58","date_gmt":"2020-10-19T17:38:58","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf4\/?page_id=442"},"modified":"2025-10-06T15:22:13","modified_gmt":"2025-10-06T14:22:13","slug":"gromacs","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/gromacs\/","title":{"rendered":"GROMACS"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>GROMACS is a package for computing molecular dynamics, simulating Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers).<\/p>\n<p>Versions 2018.4, 2020.1 and 2023.3 (single and double precision, multi-core and MPI parallel) are installed.<\/p>\n<table class=\"warning\">\n<tbody>\n<tr>\n<td><em>Please do <strong>not<\/strong> add the <code>-v<\/code> flag to your <code>mdrun<\/code> command.<\/em><br \/>\nIt will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Significant Change in Version 2018 or Later<\/h2>\n<p>Within Gromacs 2018, 2020 and 2023, the different gromacs commands (e.g., <code>mdrun<\/code>, <code>grompp<\/code>, <code>g_hbond<\/code>) should now be run using the command:<\/p>\n<pre>gmx <em>command<\/em>\r\n<\/pre>\n<p>where <code><em>command<\/em><\/code> is the name of the command you wish to run (without any <code>g_<\/code> prefix), for example:<\/p>\n<pre>gmx mdrun\r\n<\/pre>\n<p>The <code>gmx<\/code> command changes its name to reflect the gromacs flavour being used but the <code><em>command<\/em><\/code> does not change. 
For example, if using the <code>mdrun<\/code> command:<\/p>\n<pre>\r\n# New 20XY method              # Previous 5.0.4 method (not available on CSF4)\r\n# ===============              # =====================\r\ngmx   mdrun                    mdrun\r\ngmx_d mdrun                    mdrun_d\r\nmpirun gmx_mpi   mdrun         mpirun -n $NSLOTS mdrun_mpi\r\nmpirun gmx_mpi_d mdrun         mpirun -n $NSLOTS mdrun_mpi_d\r\n<\/pre>\n<p>The complete list of <code><em>command<\/em><\/code> names can be found by running the following on the login node:<\/p>\n<pre>gmx help commands<\/pre>\n<pre># The following commands are available:\r\nanadock\t\t\tgangle\t\t\trdf\r\nanaeig\t\t\tgenconf\t\t\trms\r\nanalyze\t\t\tgenion\t\t\trmsdist\r\nangle\t\t\tgenrestr\t\trmsf\r\nawh\t\t\tgrompp\t\t\trotacf\r\nbar\t\t\tgyrate\t\t\trotmat\r\nbundle\t\t\th2order\t\t\tsaltbr\r\ncheck\t\t\thbond\t\t\tsans\r\nchi\t\t\thelix\t\t\tsasa\r\ncluster\t\t\thelixorient\t\tsaxs\r\nclustsize\t\thelp\t\t\tselect\r\nconfrms\t\t\thydorder\t\tsham\r\nconvert-tpr\t\tinsert-molecules\tsigeps\r\ncovar\t\t\tlie\t\t\tsolvate\r\ncurrent\t\t\tmake_edi\t\tsorient\r\ndensity\t\t\tmake_ndx\t\tspatial\r\ndensmap\t\t\tmdmat\t\t\tspol\r\ndensorder\t\tmdrun\t\t\ttcaf\r\ndielectric\t\tmindist\t\t\ttraj\r\ndipoles\t\t\tmk_angndx\t\ttrajectory\r\ndisre\t\t\tmorph\t\t\ttrjcat\r\ndistance\t\tmsd\t\t\ttrjconv\r\ndo_dssp\t\t\tnmeig\t\t\ttrjorder\r\ndos\t\t\tnmens\t\t\ttune_pme\r\ndump\t\t\tnmtraj\t\t\tvanhove\r\ndyecoupl\t\torder\t\t\tvelacc\r\ndyndom\t\t\tpairdist\t\tview\r\neditconf\t\tpdb2gmx\t\t\twham\r\neneconv\t\t\tpme_error\t\twheel\r\nenemat\t\t\tpolystat\t\tx2top\r\nenergy\t\t\tpotential\t\txpm2ps\r\nfilter\t\t\tprincipal\r\nfreevolume\t\trama\r\n<\/pre>\n<p>Notice that the command names do NOT start with <code>g_<\/code> and do NOT reference the flavour being run (e.g., <code>_mpi_d<\/code>). 
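<\/p>\n<p>As a quick check (assuming one of the GROMACS modulefiles has been loaded), you can usually confirm which flavour and version is on your path with:<\/p>\n<pre>gmx --version     # or gmx_d, gmx_mpi, gmx_mpi_d, depending on the flavour loaded\r\n<\/pre>\n<p>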
Only the main <code>gmx<\/code> command changes its name to reflect the flavour (see the list of modulefiles below for the full set of flavours available).<\/p>\n<p>To obtain more help about a particular command run:<\/p>\n<pre>gmx help <em>command<\/em>\r\n<\/pre>\n<p>For example:<\/p>\n<pre>gmx help mdrun\r\n<\/pre>\n<h2>Available Flavours<\/h2>\n<p>For versions 2018.4, 2020.1 and 2023.3 we have compiled multiple flavours of GROMACS, for CPU jobs only (there are no GPUs in CSF4). You can use single or double precision executables for parallel multi-core (threads) or larger multi-node (MPI) jobs.<\/p>\n<p>All versions are compiled with AVX512 SIMD instructions enabled by default, as these are supported on all CSF4 nodes and provide optimised performance.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>GROMACS is free software, available under the GNU General Public License.<\/p>\n<h2>Set up procedure<\/h2>\n<p>You must load <em>one<\/em> of the following modulefiles:<\/p>\n<pre>\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_avx512          # executable is gmx\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_mpi_avx512      # executable is gmx_mpi\r\nmodule load apps\/gcc\/gromacs\/2023.3\/double_avx512          # executable is gmx_d\r\nmodule load apps\/gcc\/gromacs\/2023.3\/double_mpi_avx512      # executable is gmx_mpi_d\r\nmodule load gromacs\/2020.1-iomkl-2020.02-python-3.8.2\r\nmodule load gromacs\/2018.4-iomkl-2020.02\r\n<\/pre>\n<p>The following executables are available for use in your jobscripts:<\/p>\n<pre>\r\ngmx          # Single precision multicore (single compute node job)\r\ngmx_d        # Double precision multicore (single compute node job)\r\n\r\ngmx_mpi      # Single precision MPI (multi-node job)\r\ngmx_mpi_d    # Double precision MPI (multi-node job)\r\n<\/pre>\n<p>The modulefiles for versions 2020.1 and 2018.4 include all 4 flavours of the above executables.<br \/>\nFor version 2023.3 there are 4 different modulefiles; each one includes the 
executable for the named flavour only.<\/p>\n<p>Remember you will need to add the GROMACS command you wish to run to the <code>gmx<\/code> command line in your jobscript. For example:<\/p>\n<pre>\r\n# Double-precision multicore (single compute node)\r\ngmx_d mdrun <em>args...<\/em>\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do <strong>not<\/strong> run GROMACS on the login node.<\/p>\n<h3>Important notes regarding running jobs in batch<\/h3>\n<p>We now recommend that the modulefile is loaded as part of your batch script.<\/p>\n<p>It is not necessary to tell <code>mpirun<\/code> how many cores to use if using the MPI executables, because SLURM supplies this automatically.<\/p>\n<pre>\r\n# Multi-core (single-node) or large multi-node MPI job. SLURM knows how many cores to use.\r\nmpirun gmx_mpi mdrun     # New method (v5.1.4 and later)\r\nmpirun gmx_mpi_d mdrun   # New method (v5.1.4 and later)\r\n<\/pre>\n<p>However, if using the multicore (single compute node) executables, you must inform GROMACS how many cores to use via the <code>$SLURM_NTASKS<\/code> variable:<\/p>\n<pre>\r\n# Single-node multi-threaded job\r\n<strong>export OMP_NUM_THREADS=$SLURM_NTASKS<\/strong>      # Inform GROMACS how many cores to use\r\ngmx mdrun                                 # New method (v5.1.4 and later)\r\ngmx_d mdrun                               # New method (v5.1.4 and later)\r\n<\/pre>\n<p>The examples below can be used for single precision or double precision GROMACS. 
Simply run <code>gmx mdrun<\/code> (single precision) or <code>gmx_d mdrun<\/code> (double precision).<\/p>\n<table class=\"warning\">\n<tbody>\n<tr>\n<td><em>Please do <strong>not<\/strong> add the <code>-v<\/code> flag to your <code>mdrun<\/code> command.<\/em><br \/>\nIt will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Multi-threaded single-precision, 2 to 40 cores<\/h3>\n<p>Note that GROMACS 2020.1 (unlike v4.5.4) does <strong>not<\/strong> support the <code>-nt<\/code> flag to set the number of threads when using the multithreaded OpenMP (non-MPI) version. Instead, set the <code>OMP_NUM_THREADS<\/code> environment variable as shown below.<\/p>\n<p>An example batch submission script to run the <strong>single-precision<\/strong> <code>mdrun<\/code> executable with 16 threads:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore     # (--partition=multicore)\r\n#SBATCH -n 16            # Can specify 2 to 40 cores in the multicore partition\r\n\r\nmodule load gromacs\/2020.1-iomkl-2020.02-python-3.8.2\r\nexport OMP_NUM_THREADS=$SLURM_NTASKS\r\ngmx mdrun\r\n<\/pre>\n<p>Submit with the command: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the name of your jobscript.<\/p>\n<h3>Multi-threaded double-precision, 2 to 40 cores<\/h3>\n<p>An example batch submission script to run the <strong>double-precision<\/strong> <code>mdrun<\/code> executable with 16 threads:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore     # (--partition=multicore)\r\n#SBATCH -n 16            # Can specify 2 to 40 cores in the multicore partition\r\n\r\nmodule load gromacs\/2020.1-iomkl-2020.02-python-3.8.2\r\nexport OMP_NUM_THREADS=$SLURM_NTASKS\r\ngmx_d mdrun\r\n<\/pre>\n<p>Submit with the command: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the 
name of your jobscript.<\/p>\n<h3>Single precision MPI (single-node), 2 to 40 cores<\/h3>\n<p>An example batch submission script to run the <strong>single-precision<\/strong> <code>mdrun<\/code> executable on 16 cores using MPI:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore     # (--partition=multicore)\r\n#SBATCH -n 16            # Can specify 2 to 40 cores in the multicore partition\r\n\r\nmodule load gromacs\/2020.1-iomkl-2020.02-python-3.8.2\r\nmpirun gmx_mpi mdrun\r\n<\/pre>\n<p>Submit with the command: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the name of your jobscript.<\/p>\n<h3>Double precision MPI (single-node), 2 to 40 cores<\/h3>\n<p>An example batch submission script to run the <strong>double-precision<\/strong> <code>mdrun<\/code> executable on 16 cores using MPI:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore     # (--partition=multicore)\r\n#SBATCH -n 16            # Can specify 2 to 40 cores in the multicore partition\r\n\r\nmodule load gromacs\/2020.1-iomkl-2020.02-python-3.8.2\r\nmpirun gmx_mpi_d mdrun\r\n<\/pre>\n<p>Submit with the command: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the name of your jobscript.<\/p>\n<h3>Single-precision, MPI, 80 cores or more in multiples of 40<\/h3>\n<p>An example batch submission script to run the <strong>single-precision<\/strong> <code>mdrun<\/code> executable on 80 cores (2 x 40-core compute nodes) using MPI:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p <strong>multinode<\/strong>     # (--partition=multinode)\r\n#SBATCH -N 2             # 2 compute nodes\r\n#SBATCH -n 80            # 80 cores is 2 x 40-core compute nodes. 
Must be a multiple of 40.\r\n\r\nmodule load gromacs\/2020.1-iomkl-2020.02-python-3.8.2\r\nmpirun gmx_mpi mdrun\r\n<\/pre>\n<p>Submit with the command: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the name of your jobscript.<\/p>\n<h3>Double-precision, MPI, 80 cores or more in multiples of 40<\/h3>\n<p>An example batch submission script to run the <strong>double-precision<\/strong> <code>mdrun<\/code> executable on 80 cores (2 x 40-core compute nodes) using MPI:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p <strong>multinode<\/strong>     # (--partition=multinode)\r\n#SBATCH -N 2             # 2 compute nodes\r\n#SBATCH -n 80            # 80 cores is 2 x 40-core compute nodes. Must be a multiple of 40.\r\n\r\nmodule load gromacs\/2020.1-iomkl-2020.02-python-3.8.2\r\nmpirun gmx_mpi_d mdrun\r\n<\/pre>\n<p>Submit with the command: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the name of your jobscript.<\/p>\n<h3>Single-precision, MPI+OpenMP mixed-mode, 80 cores or more in multiples of 40<\/h3>\n<p>An example batch submission script to run the <strong>single-precision<\/strong> <code>mdrun<\/code> executable on 80 cores (2 x 40-core compute nodes) using MPI+OpenMP mixed-mode:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multinode    # (or --partition=multinode)\r\n#SBATCH -N 2            # (or --nodes=2)  Use all cores on this many compute nodes (2 or more).\r\n#SBATCH -n 4            # (or --ntasks=4)  Number of MPI processes to run in total. 
They will be\r\n                        #                  spread across the requested number of nodes.\r\n#SBATCH -c 20           # (or --cpus-per-task=20) Number of cores to use for OpenMP in each MPI process.\r\n\r\nmodule purge\r\nmodule load apps\/gcc\/gromacs\/2023.3\/single_mpi_avx512\r\n\r\nexport OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK\r\n\r\nmpirun --map-by ppr:1:socket:pe=$OMP_NUM_THREADS gmx_mpi mdrun\r\n                   #        #\r\n                   #        # pe=$OMP_NUM_THREADS gives each MPI process access to\r\n                   #        # the cores needed for its OpenMP threads (see below).\r\n                   #\r\n                   # ppr means 'processes per resource'. It specifies how many MPI processes\r\n                   # and OpenMP threads should be placed on the nodes. Here 1 MPI process\r\n                   # will be placed on each socket (each node has 2 sockets).\r\n<\/pre>\n<p>Submit with the command: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the name of your jobscript.<\/p>\n<h2>Error about OpenMP and cut-off scheme<\/h2>\n<p>If you encounter the following error:<\/p>\n<pre>\r\nOpenMP threads have been requested with cut-off scheme Group, but these\r\nare only supported with cut-off scheme Verlet\r\n<\/pre>\n<p>then please try using the MPI version of the software. 
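<\/p>\n<p>Alternatively, where appropriate for your simulation, the error can be avoided by selecting the Verlet cut-off scheme in your <code>.mdp<\/code> file before running <code>grompp<\/code> (a suggestion only; check that the Verlet scheme is suitable for your force field and settings):<\/p>\n<pre>\r\n; In your .mdp file\r\ncutoff-scheme = Verlet\r\n<\/pre>\n<p>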
Note that it <em>is possible<\/em> to run MPI versions on a single node (see example above).<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/www.gromacs.org\/About_Gromacs\">GROMACS web page<\/a><\/li>\n<li><a href=\"http:\/\/www.gromacs.org\/Documentation\/Manual\">GROMACS manuals<\/a><\/li>\n<li><a href=\"http:\/\/www.gromacs.org\/Support\/Mailing_Lists\">GROMACS user mailing list<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>Oct 2020 &#8211; First version<br \/>\nOct 2025 &#8211; Updated for GROMACS v2023, added multinode MPI\/openMP Hybrid example<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview GROMACS is a package for computing molecular dynamics, simulating Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers). Versions 2018.4, 2020.1 and 2023.3 (single and double precision, multi-core and MPI parallel) are installed. Please do not add the -v flag to your mdrun command. It will.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/gromacs\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"parent":49,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-442","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/442","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/comments?post=442"}],"version-history":[{"count":18,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/442\/revisions"}],"predecessor-version":[{"id":1452,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/442\/revisions\/1452"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/49"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/media?parent=442"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}