{"id":4212,"date":"2017-09-11T10:13:15","date_gmt":"2017-09-11T10:13:15","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/?page_id=4212"},"modified":"2017-09-12T09:36:01","modified_gmt":"2017-09-12T09:36:01","slug":"v514","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/gromacs\/v514\/","title":{"rendered":"GROMACS v5.1.4"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>GROMACS is a package for computing molecular dynamics, simulating Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers).<\/p>\n<table class=\"warning\">\n<tr>\n<td><em>Please do <strong>not<\/strong> add the <code>-v<\/code> flag to your <code>mdrun<\/code> command.<\/em><br \/>It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.<\/td>\n<\/tr>\n<\/table>\n<h2>Significant Change in this Version<\/h2>\n<p>As of Gromacs v5.1.x, the different gromacs commands (e.g., <code>mdrun<\/code>, <code>grompp<\/code>, <code>g_hbond<\/code>) should now be run using the command:<\/p>\n<pre>\r\ngmx <em>command<\/em>\r\n<\/pre>\n<p>where <code><em>command<\/em><\/code> is the name of the command you wish to run (without any <code>g_<\/code> prefix), for example:<\/p>\n<pre>\r\ngmx mdrun\r\n<\/pre>\n<p>The <code>gmx<\/code> command changes its name to reflect the gromacs flavour being used but the <code><em>command<\/em><\/code> does not change. 
For example, if using the <code>mdrun<\/code> command:<\/p>\n<pre>\r\n# New 5.1.4 method                     # Previous 5.0.4 method\r\ngmx   mdrun                            mdrun\r\ngmx_d mdrun                            mdrun_d\r\nmpirun -n $NSLOTS gmx_mpi   mdrun      mpirun -n $NSLOTS mdrun_mpi\r\nmpirun -n $NSLOTS gmx_mpi_d mdrun      mpirun -n $NSLOTS mdrun_mpi_d\r\n<\/pre>\n<p>The complete list of <code><em>command<\/em><\/code> names can be found by running the following on the login node:<\/p>\n<pre>\r\nmodule load apps\/intel-14.0\/gromacs\/5.1.4\/single\r\ngmx help commands\r\n\r\n# The following commands are available:\r\nanadock\t\t\tdyecoupl\t\tmdmat\t\tsans\r\nanaeig\t\t\tdyndom\t\t\tmdrun\t\tsasa\r\nanalyze\t\t\teditconf\t\tmindist\t\tsaxs\r\nangle\t\t\teneconv\t\t\tmk_angndx\tselect\r\nbar\t\t\tenemat\t\t\tmorph\t\tsham\r\nbundle\t\t\tenergy\t\t\tmsd\t\tsigeps\r\ncheck\t\t\tfilter\t\t\tnmeig\t\tsolvate\r\nchi\t\t\tfreevolume\t\tnmens\t\tsorient\r\ncluster\t\t\tgangle\t\t\tnmtraj\t\tspatial\r\nclustsize\t\tgenconf\t\t\torder\t\tspol\r\nconfrms\t\t\tgenion\t\t\tpairdist\ttcaf\r\nconvert-tpr\t\tgenrestr\t\tpdb2gmx\t\ttraj\r\ncovar\t\t\tgrompp\t\t\tpme_error\ttrjcat\r\ncurrent\t\t\tgyrate\t\t\tpolystat\ttrjconv\r\ndensity\t\t\th2order\t\t\tpotential\ttrjorder\r\ndensmap\t\t\thbond\t\t\tprincipal\ttune_pme\r\ndensorder\t\thelix\t\t\trama\t\tvanhove\r\ndielectric\t\thelixorient\t\trdf\t\tvelacc\r\ndipoles\t\t\thelp\t\t\trms\t\tview\r\ndisre\t\t\thydorder\t\trmsdist\t\twham\r\ndistance\t\tinsert-molecules\trmsf\t\twheel\r\ndo_dssp\t\t\tlie\t\t\trotacf\t\tx2top\r\ndos\t\t\tmake_edi\t\trotmat\t\txpm2ps\r\ndump\t\t\tmake_ndx\t\tsaltbr\r\n<\/pre>\n<p>Notice that the command names do NOT start with <code>g_<\/code> and do NOT reference the flavour being run (e.g., <code>_mpi_d<\/code>). 
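The old-to-new renaming rule shown above is mechanical: the flavour suffix moves from the command name onto the <code>gmx<\/code> binary. A minimal bash sketch of that rule for the <code>mdrun<\/code> family (an illustrative helper written for this page, not part of GROMACS or the CSF modulefiles):

```shell
# Illustrative only: translate an old-style (v5.0.4) mdrun-family command
# name such as "mdrun_mpi_d" into the new (v5.1.4) "gmx_mpi_d mdrun" form.
old_to_new() {
    local old="$1"
    local cmd="${old%%_*}"         # part before the first "_", e.g. "mdrun"
    local flavour="${old#"$cmd"}"  # remainder, e.g. "_mpi_d" (empty for plain "mdrun")
    echo "gmx${flavour} ${cmd}"
}

old_to_new mdrun          # -> gmx mdrun
old_to_new mdrun_d        # -> gmx_d mdrun
old_to_new mdrun_mpi_d    # -> gmx_mpi_d mdrun
```

Note this sketch only covers commands like <code>mdrun<\/code> whose old names carried a flavour suffix; old analysis commands with a <code>g_<\/code> prefix (e.g. <code>g_hbond<\/code>) simply drop the prefix and become <code>gmx hbond<\/code>.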
Only the main <code>gmx<\/code> command changes its name to reflect the flavour (see the table of modulefiles below for the full list of flavours available).<\/p>\n<p>To obtain more help about a particular command run:<\/p>\n<pre>\r\ngmx help <em>command<\/em>\r\n<\/pre>\n<p>For example:<\/p>\n<pre>\r\ngmx help mdrun\r\n<\/pre>\n<h3>Helper scripts<\/h3>\n<p>To assist with moving to the new command calling method, we have recreated some of the individual commands that you may have used in your jobscript. For example, you can continue to use <code>mdrun<\/code> (or <code>mdrun_d<\/code>) instead of the new <code>gmx mdrun<\/code> (or <code>gmx_d mdrun<\/code>) in this release. These extra commands are automatically included in your environment when you load the GROMACS modulefiles. This old method uses the flavour of GROMACS in the command name (see above for a comparison of new and old commands).<\/p>\n<p>However, please note that the following commands are new to v5.1.4 and so can only be run using the new method (<code>gmx <em>command<\/em><\/code>):<\/p>\n<pre>\r\n# New commands that can only be run using: gmx <em>command<\/em>\r\n\r\ncheck\t\t\thelp\r\nconvert-tpr\t\tinsert-molecules\r\ndistance\t\tpairdist\r\ndump\t\t\tsasa\r\nfreevolume\t\tsolvate\r\ngangle\t\t\tview\r\n<\/pre>\n<h2>Available Flavours<\/h2>\n<p>This version is v5.1.4. 
The following flavours are available:<\/p>\n<h3>5.1.4 for all Intel node types<\/h3>\n<p>Note: ability to run on <em>all<\/em> Intel nodes implies lower optimization.<\/p>\n<ul>\n<li>Single and double precision multi-threaded (OpenMP) versions: <code>mdrun<\/code> and <code>mdrun_d<\/code><\/li>\n<li>Single and double precision MPI (not threaded) versions: <code>mdrun_mpi<\/code> and <code>mdrun_mpi_d<\/code><\/li>\n<li>Compiled with the Intel 14.0.3 compiler, with the associated Intel MKL providing the FFT functions.<\/li>\n<li><code>ngmx<\/code> has been included.<\/li>\n<\/ul>\n<h3>5.1.4 for Sandybridge and Ivybridge (and Haswell, Broadwell) nodes only<\/h3>\n<p>Note: ability to run on <em>only<\/em> Sandybridge, Ivybridge (and Haswell, Broadwell) nodes implies higher optimization. Note that an even higher level of optimization, and an MPI version, is available for Haswell and Broadwell nodes (see below).<\/p>\n<ul>\n<li>Single and double precision multi-threaded (OpenMP) versions: <code>mdrun<\/code> and <code>mdrun_d<\/code><\/li>\n<li>Compiled with the Intel 14.0.3 compiler, with the associated Intel MKL providing the FFT functions and with AVX_256 (an instruction set specific to these nodes), so WILL NOT work on Westmere nodes and NONE of the commands can be run on the login nodes.<\/li>\n<li>We have no Sandybridge or Ivybridge nodes connected by Infiniband, which means ONLY <code>smp.pe<\/code> (single-node, multicore) jobs for these node types.<\/li>\n<li>There are no MPI versions of 5.1.4 for Sandybridge and Ivybridge nodes available on the CSF.<\/li>\n<li>This version will not run on highmem, twoday or short nodes (they are all Westmere).<\/li>\n<li><code>ngmx<\/code> has been included.<\/li>\n<\/ul>\n<h3>5.1.4 for Haswell and Broadwell nodes only<\/h3>\n<p>Note: ability to run on <em>only<\/em> Haswell and Broadwell nodes implies higher optimization.<\/p>\n<ul>\n<li>Single and double precision single-node, multi-threaded (OpenMP) versions: <code>mdrun<\/code> and 
<code>mdrun_d<\/code><\/li>\n<li>Single and double precision multi-node (MPI) versions: <code>mdrun_mpi<\/code> and <code>mdrun_mpi_d<\/code><\/li>\n<li>Compiled with the Intel 14.0.3 compiler, with the associated Intel MKL providing the FFT functions and with AVX2_256 (an instruction set specific to these nodes which provides greater optimization than AVX_256), so WILL NOT work on Westmere, Sandybridge or Ivybridge nodes.<\/li>\n<li>Single-node multi-core <code>smp.pe<\/code> jobs can use these nodes.<\/li>\n<li>Multi-node MPI <code>orte-24-ib.pe<\/code> jobs can use these nodes &#8211; the Haswell and Broadwell nodes have InfiniBand networking.<\/li>\n<li>This version will not run on highmem, twoday or short nodes (they are all Westmere).<\/li>\n<li><code>ngmx<\/code> has been included.<\/li>\n<\/ul>\n<h3>Bugfix for g_hbond<\/h3>\n<p>Version 5.1.4 has the <em>g_hbond<\/em> fix included by default and so no separate build has been made for this version. See the <a href=\"..\/v454\">GROMACS v4.5.4 CSF documentation<\/a> for a description of that issue.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>GROMACS is free software, available under the GNU General Public License.<\/p>\n<h2>Set up procedure<\/h2>\n<p>You must load the appropriate modulefile:<\/p>\n<pre>\r\nmodule load <em>modulefile<\/em>\r\n<\/pre>\n<p>replacing <em>modulefile<\/em> with one of the modules listed in the table below.<\/p>\n<table class=\"striped\">\n<tr>\n<th style=\"width: 20%\">Version<\/th>\n<th style=\"width: 45%\">Modulefile<\/th>\n<th style=\"width: 20%\">Notes<\/th>\n<th style=\"width: 15%\">Typical Executable name<\/th>\n<\/tr>\n<tr>\n<td>Single precision multi-threaded (single-node)<\/td>\n<td>apps\/intel-14.0\/gromacs\/5.1.4\/single<\/td>\n<td>non-MPI<\/td>\n<td><code>mdrun<\/code> or <code>gmx mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<td>Double precision multi-threaded (single-node)<\/td>\n<td>apps\/intel-14.0\/gromacs\/5.1.4\/double<\/td>\n<td>non-MPI<\/td>\n<td><code>mdrun_d<\/code> or 
<code>gmx_d mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<td>Single precision MPI (single-node)<\/td>\n<td> apps\/intel-14.0\/gromacs\/5.1.4\/single-mpi <\/td>\n<td>For MPI on Intel nodes using gigabit ethernet<\/td>\n<td><code>mdrun_mpi<\/code> or <code>gmx_mpi mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<td>Single precision MPI (multi-node, Infiniband)<\/td>\n<td>apps\/intel-14.0\/gromacs\/5.1.4\/single-mpi-ib<\/td>\n<td>For MPI on Intel or AMD nodes using infiniband<\/td>\n<td><code>mdrun_mpi<\/code> or <code>gmx_mpi mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<td>Double precision MPI (single-node)<\/td>\n<td> apps\/intel-14.0\/gromacs\/5.1.4\/double-mpi <\/td>\n<td>For MPI on Intel nodes using gigabit ethernet<\/td>\n<td><code>mdrun_mpi_d<\/code> or <code>gmx_mpi_d mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<td>Double precision MPI (multi-node, Infiniband)<\/td>\n<td> apps\/intel-14.0\/gromacs\/5.1.4\/double-mpi-ib<\/td>\n<td>For MPI on Intel or AMD nodes using Infiniband<\/td>\n<td><code>mdrun_mpi_d<\/code> or <code>gmx_mpi_d mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<th colspan=\"4\">AVX optimized builds for Sandybridge and Ivybridge nodes<\/th>\n<\/tr>\n<tr>\n<td>Single precision multi-threaded for AVX (single-node)<\/td>\n<td>apps\/intel-14.0\/gromacs\/5.1.4\/single-avx<\/td>\n<td>non-MPI, Sandybridge and Ivybridge only<\/td>\n<td><code>mdrun<\/code> or <code>gmx mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<td>Double precision multi-threaded for AVX (single-node)<\/td>\n<td>apps\/intel-14.0\/gromacs\/5.1.4\/double-avx<\/td>\n<td>non-MPI, Sandybridge and Ivybridge only<\/td>\n<td><code>mdrun_d<\/code> or <code>gmx_d mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<th colspan=\"4\">AVX2 optimized builds for Haswell and Broadwell 24-core nodes (new April 2016)<\/th>\n<\/tr>\n<tr>\n<td>Single precision multi-threaded for AVX2 (single-node)<\/td>\n<td>apps\/intel-14.0\/gromacs\/5.1.4\/single-avx2<\/td>\n<td>non-MPI, Haswell only<\/td>\n<td><code>mdrun<\/code> or <code>gmx mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<td>Double 
precision multi-threaded for AVX2 (single-node)<\/td>\n<td>apps\/intel-14.0\/gromacs\/5.1.4\/double-avx2<\/td>\n<td>non-MPI, Haswell or Broadwell only<\/td>\n<td><code>mdrun_d<\/code> or <code>gmx_d mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<td>Single precision MPI (single\/multi-node, Infiniband) for AVX2<\/td>\n<td>apps\/intel-14.0\/gromacs\/5.1.4\/single-avx2-mpi-ib<\/td>\n<td>For MPI on Intel Haswell nodes using infiniband<\/td>\n<td><code>mdrun_mpi<\/code> or <code>gmx_mpi mdrun<\/code><\/td>\n<\/tr>\n<tr>\n<td>Double precision MPI (single\/multi-node, Infiniband) for AVX2<\/td>\n<td>apps\/intel-14.0\/gromacs\/5.1.4\/double-avx2-mpi-ib<\/td>\n<td>For MPI on Intel Haswell or Broadwell nodes using infiniband<\/td>\n<td><code>mdrun_mpi_d<\/code> or <code>gmx_mpi_d mdrun<\/code><\/td>\n<\/tr>\n<\/table>\n<div style=\"display: none;\">\n<h2>Interactive\/Non-batch work\/Job preparation<\/h2>\n<p>In order to prepare your jobs or post process them you may need to make use of commands such as <code>grompp<\/code>. These will not work on the CSF login node because the software was compiled with AVX_256 which is not compatible with the login nodes. We have therefore allocated ONE sandybridge node to allow you to run these commands via qrsh. To do so type:<\/p>\n<pre>\r\nqrsh -l inter -l short -l sandybridge\r\n<\/pre>\n<p>which will give access to the sandybridge compute node. Then run your commands. When you have finished <strong>close the connection to the compute node<\/strong> with <code>exit<\/code> (failure to do this may result in the compute node being unavailable to other users who need it). Then submit your computation\/simulation to batch as per the above examples. 
<\/p>\n<p>DO NOT run mdrun on this compute node &#8211; all computational work MUST be submitted to batch.\n<\/p><\/div>\n<h2>Running the application in batch<\/h2>\n<p>First load the required module (see above) and create a directory containing the required input data files.<\/p>\n<p>Please NOTE the following, which is important for running jobs correctly and efficiently:<\/p>\n<p>Ensure you inform GROMACS how many cores it can use. This is done using the <code>$NSLOTS<\/code> variable, which is automatically set for you in the jobscript to be the number of cores you request in the jobscript header (see later for complete examples). You can use either of the following methods depending on whether you want a multi-core job (running on a single compute node) or a larger job running across multiple compute nodes:<\/p>\n<pre>\r\n# Multi-core (single-node) or Multi-node MPI jobs\r\n\r\nmpirun -n $NSLOTS mdrun_mpi         # Old method (v5.0.4 and earlier)\r\nmpirun -n $NSLOTS mdrun_mpi_d       # Old method (v5.0.4 and earlier)\r\n\r\nmpirun -n $NSLOTS gmx_mpi mdrun     # New method (v5.1.4 and later)\r\nmpirun -n $NSLOTS gmx_mpi_d mdrun   # New method (v5.1.4 and later)<\/pre>\n<p>or<\/p>\n<pre>\r\n# Single-node multi-threaded job\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS      # Do this for all versions\r\nmdrun                               # Old method (v5.0.4 and earlier)\r\nmdrun_d                             # Old method (v5.0.4 and earlier)\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS      # Do this for all versions\r\ngmx mdrun                           # New method (v5.1.4 and later)\r\ngmx_d mdrun                         # New method (v5.1.4 and later)\r\n<\/pre>\n<p>The examples below can be used for single precision or double precision GROMACS. 
Simply run <code>mdrun<\/code> (single precision) or <code>mdrun_d<\/code> (double precision).<\/p>\n<table class=\"warning\">\n<tr>\n<td><em>Please do not add the <code>-v<\/code> flag to your <code>mdrun<\/code> command.<\/em><br \/>It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.<\/td>\n<\/tr>\n<\/table>\n<h3>Multi-threaded single-precision on Intel nodes, 2 to 24 cores<\/h3>\n<p>Note that GROMACS v5.1.4 (unlike v4.5.4) does <strong>not<\/strong> support the <code>-nt<\/code> flag to set the number of threads when using the multithreaded OpenMP (non-MPI) version. Instead set the <code>OMP_NUM_THREADS<\/code> environment variable as shown below.<\/p>\n<p>An example batch submission script to run the <strong>single-precision<\/strong> mdrun executable with 12 threads:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe smp.pe 12            # Can specify 2 to 24 cores in smp.pe\r\n                            # 2-12 includes Westmere, Sandybridge, Ivybridge, Haswell, Broadwell\r\n                            # 13-16 forces use of Ivybridge\r\n                            # 17-24 forces use of Haswell or Broadwell\r\n                            # Can force use of a particular architecture (see below)\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nmdrun\r\n  #\r\n  # This is the old naming convention (it will still work in this release)\r\n  # The new gromacs convention is to run: gmx mdrun\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<p>The system will run your job on a Westmere, a Sandybridge or an Ivybridge node depending on what is available. This option goes to the biggest pool of nodes. 
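The core-count ranges in the jobscript comments above determine which node pool an <code>smp.pe<\/code> job can land on. A small bash function encoding that mapping (a hypothetical helper written for this page to make the policy concrete, not a CSF-provided tool):

```shell
# Illustrative only: which node types an smp.pe job requesting a given
# number of cores can be scheduled on, per the ranges documented above.
eligible_nodes() {
    local cores="$1"
    if   [ "$cores" -ge 2  ] && [ "$cores" -le 12 ]; then
        echo "Westmere Sandybridge Ivybridge Haswell Broadwell"
    elif [ "$cores" -ge 13 ] && [ "$cores" -le 16 ]; then
        echo "Ivybridge"
    elif [ "$cores" -ge 17 ] && [ "$cores" -le 24 ]; then
        echo "Haswell Broadwell"
    else
        echo "outside the 2-24 core range of smp.pe" >&2
        return 1
    fi
}

eligible_nodes 12   # -> Westmere Sandybridge Ivybridge Haswell Broadwell
eligible_nodes 16   # -> Ivybridge
```

A `-l sandybridge`, `-l ivybridge` or `-l haswell` resource request (as in the examples below) narrows the pool further regardless of core count.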
To get a more optimised run on Sandybridge or Ivybridge you should be using a modulefile with &#8216;avx&#8217; in the name and using the instructions below.<\/p>\n<h3>Multi-threaded double-precision, AVX on Sandybridge nodes, 2 to 12 cores<\/h3>\n<p>An example batch submission script to run the <strong>double-precision<\/strong> <code>mdrun_d<\/code> executable with 8 threads:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe smp.pe 8\r\n#$ -l sandybridge               # Force use of sandybridge nodes\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nmdrun_d\r\n  #\r\n  # This is the old naming convention (it will still work in this release)\r\n  # The new gromacs convention is to run: gmx_d mdrun<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Multi-threaded single-precision, AVX on Ivybridge nodes, 2 to 16 cores<\/h3>\n<p>Note that GROMACS v5.1.4 (unlike v4.5.4) does <strong>not<\/strong> support the <code>-nt<\/code> flag to set the number of threads when using the multithreaded OpenMP (non-MPI) version. Instead set the <code>OMP_NUM_THREADS<\/code> environment variable as shown below.<\/p>\n<p>An example batch submission script to run the single-precision mdrun executable with 16 threads:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe smp.pe 16\r\n#$ -l ivybridge                 # Force use of Ivybridge nodes\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nmdrun\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Multi-threaded single-precision, AVX2 on Haswell nodes, 2 to 24 cores<\/h3>\n<p>Note that GROMACS v5.1.4 (unlike v4.5.4) does <strong>not<\/strong> support the <code>-nt<\/code> flag to set the number of threads when using the multithreaded OpenMP (non-MPI) version. 
Instead set the <code>OMP_NUM_THREADS<\/code> environment variable as shown below.<\/p>\n<p>An example batch submission script to run the single-precision mdrun executable with 24 threads:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe smp.pe 24\r\n#$ -l haswell                  # Force use of Haswell nodes\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nmdrun\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Single precision MPI (single-node), 2 to 24 cores<\/h3>\n<p>An example batch submission script to run the <strong>single-precision<\/strong> <code>mdrun_mpi<\/code> executable on 8 cores using MPI:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe smp.pe 8\r\n\r\nmpirun -n $NSLOTS mdrun_mpi\r\n  #\r\n  # This is the old naming convention (it will still work in this release)\r\n  # The new GROMACS convention is to run: mpirun -n $NSLOTS gmx_mpi mdrun\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Double precision MPI (single-node), 2 to 24 cores<\/h3>\n<p>An example batch submission script to run the <strong>double-precision<\/strong> <code>mdrun_mpi_d<\/code> executable on 8 cores using MPI:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe smp.pe 8\r\n\r\nmpirun -n $NSLOTS mdrun_mpi_d\r\n  #\r\n  # This is the old naming convention (it will still work in this release)\r\n  # The new GROMACS convention is to run: mpirun -n $NSLOTS gmx_mpi_d mdrun\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Single-precision AVX2, MPI with Infiniband, 48 cores or more in multiples of 24<\/h3>\n<p>An example batch submission script to run the <strong>single precision<\/strong> <code>mdrun_mpi<\/code> executable with 48 MPI processes (48 cores on two 24-core nodes) with the <code>orte-24-ib.pe<\/code> parallel environment (Intel Haswell nodes using Infiniband):<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe 
orte-24-ib.pe 48           # E.g. two 24-core Intel Haswell nodes\r\n\r\nmpirun -n $NSLOTS mdrun_mpi\r\n  #\r\n  # This is the old naming convention (it will still work in this release)\r\n  # The new GROMACS convention is to run: mpirun -n $NSLOTS gmx_mpi mdrun\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h2>Illegal instruction<\/h2>\n<p>If during a batch job you try to run GROMACS and get the following error:<\/p>\n<pre>\r\nIllegal instruction\r\n<\/pre>\n<p>this is because you have an AVX- or AVX2-only version of the modulefile loaded, which is not compatible with the compute nodes on which your job is running. Ensure your jobscript requests the correct type of compute node.<\/p>\n<h2>Error about OpenMP and cut-off scheme<\/h2>\n<p>If you encounter the following error:<\/p>\n<pre>\r\nOpenMP threads have been requested with cut-off scheme Group, but these \r\nare only supported with cut-off scheme Verlet\r\n<\/pre>\n<p>then please try using the MPI version of the software. 
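For the Illegal instruction error described above, one way to tell which build a given node can run is to inspect the CPU flags in <code>\/proc\/cpuinfo<\/code>. The following bash sketch is illustrative only (the function name is hypothetical, and it is not a CSF-provided tool); on a node you would feed it the real flags line as shown in the comment:

```shell
# Illustrative only: decide which GROMACS build variant a CPU-flags string
# supports. On a node, call it as:
#   needed_build "$(grep -m1 '^flags' /proc/cpuinfo)"
needed_build() {
    local flags="$1"
    if echo "$flags" | grep -q -w avx2; then
        echo "avx2"      # AVX2 builds (Haswell/Broadwell) will run here
    elif echo "$flags" | grep -q -w avx; then
        echo "avx"       # AVX builds (Sandybridge/Ivybridge) will run here
    else
        echo "generic"   # only the all-Intel-node-types build will run here
    fi
}

needed_build "fpu sse sse2 avx"         # -> avx
needed_build "fpu sse sse2 avx avx2"    # -> avx2
```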
Note that it is possible to run the MPI versions on a single node (example above).<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li>You can see a list of all the installed GROMACS utilities with the command: <code>ls $GMXDIR\/bin<\/code><\/li>\n<li><a href=\"http:\/\/www.gromacs.org\/About_Gromacs\">GROMACS web page<\/a><\/li>\n<li><a href=\"http:\/\/www.gromacs.org\/Documentation\/Manual\">GROMACS manuals<\/a><\/li>\n<li><a href=\"http:\/\/www.gromacs.org\/Support\/Mailing_Lists\">GROMACS user mailing list<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>Sep 2017 &#8211; 5.1.4 installed with AVX support (GPU support with Intel compiler not possible)<br \/>\nApr 2015 &#8211; 5.0.4 installed with AVX support (GPU support with Intel compiler not possible)<br \/>\nDec 2014 &#8211; 4.6.7 installed with AVX support (specific user request for this) and documentation written.<br \/>\nNov 2013 &#8211; Documentation for 4.5.4 and 4.6.1 split into two pages.<br \/>\nMay 2013 &#8211; GROMACS 4.6.1 and Plumed 1.3 installed.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview GROMACS is a package for computing molecular dynamics, simulating Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers). Please do not add the -v flag to your mdrun command.It will write to a log file every second for the duration of your job and can lead.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/gromacs\/v514\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":15,"featured_media":0,"parent":194,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-4212","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/4212","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/users\/15"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/comments?post=4212"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/4212\/revisions"}],"predecessor-version":[{"id":4237,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/4212\/revisions\/4237"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/194"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/media?parent=4212"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}