{"id":2120,"date":"2014-12-18T11:02:42","date_gmt":"2014-12-18T11:02:42","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/?page_id=2120"},"modified":"2017-11-08T17:05:18","modified_gmt":"2017-11-08T17:05:18","slug":"v467","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/gromacs\/v467\/","title":{"rendered":"GROMACS v4.6.7"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>GROMACS is a package for computing molecular dynamics, simulating Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers).<\/p>\n<table style=\"text-align: center; width:66%; margin-left:22%; margin-right:22%;\">\n<tr>\n<td><em>Please do <strong>not<\/strong> add the <code>-v<\/code> flag to your <code>mdrun<\/code> command.<\/em><br \/>It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.<\/td>\n<\/tr>\n<\/table>\n<p>This version is v4.6.7. 
The following flavours are available:<\/p>\n<h3>4.6.7 for all Intel node types<\/h3>\n<ul>\n<li>Single and double precision multi-threaded (OpenMP) versions: <code>mdrun<\/code> and <code>mdrun_d<\/code><\/li>\n<li>Single and double precision MPI (not threaded) versions: <code>mdrun_mpi<\/code> and <code>mdrun_mpi_d<\/code><\/li>\n<li>Compiled with the Intel 14.0.3 compiler, with the associated Intel MKL providing the FFT functions.<\/li>\n<li><code>ngmx<\/code> has been included.<\/li>\n<\/ul>\n<h3>4.6.7 for Sandybridge and Ivybridge nodes only<\/h3>\n<ul>\n<li>Single and double precision multi-threaded (OpenMP) versions: <code>mdrun<\/code> and <code>mdrun_d<\/code><\/li>\n<li>Compiled with the Intel 14.0.3 compiler, with the associated Intel MKL providing the FFT functions, and with <code>AVX_256<\/code> (an instruction set specific to these nodes), so it WILL NOT work on Westmere nodes and NONE of the commands can be run on the login nodes.<\/li>\n<li>We have no Sandybridge or Ivybridge nodes connected by Infiniband, which means ONLY <code>smp.pe<\/code> jobs are possible for this install.<\/li>\n<li>There are no MPI versions of 4.6.7 for Sandybridge and Ivybridge nodes available on the CSF.<\/li>\n<li>This version will not run on highmem, twoday or short nodes (they are all Westmere).<\/li>\n<li><code>ngmx<\/code> has been included.<\/li>\n<\/ul>\n<h3>Bugfix for g_hbond<\/h3>\n<p>Version 4.6.7 has the <em>g_hbond<\/em> fix included by default and so no separate build has been made for this version. 
See the <a href=\"..\/v454\">GROMACS v4.5.4 CSF documentation<\/a> for a description of that issue.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>GROMACS is free software, available under the GNU General Public License.<\/p>\n<h2>Set up procedure<\/h2>\n<p>You must load the appropriate modulefile:<\/p>\n<pre>\r\nmodule load <em>modulefile<\/em>\r\n<\/pre>\n<p>replacing <em>modulefile<\/em> with one of the modules listed in the table below.<\/p>\n<table>\n<tr>\n<th style=\"width: 20%\">Version<\/th>\n<th style=\"width: 45%\">Modulefile<\/th>\n<th style=\"width: 20%\">Notes<\/th>\n<th style=\"width: 15%\">Typical executable name<\/th>\n<\/tr>\n<tr>\n<td>Single precision multi-threaded<\/td>\n<td>apps\/intel-14.0\/gromacs\/4.6.7\/single<\/td>\n<td>non-MPI<\/td>\n<td>mdrun<\/td>\n<\/tr>\n<tr>\n<td>Double precision multi-threaded<\/td>\n<td>apps\/intel-14.0\/gromacs\/4.6.7\/double<\/td>\n<td>non-MPI<\/td>\n<td>mdrun_d<\/td>\n<\/tr>\n<tr>\n<td>Single precision MPI<\/td>\n<td>apps\/intel-14.0\/gromacs\/4.6.7\/single-mpi<\/td>\n<td>For MPI on Intel nodes using gigabit ethernet<\/td>\n<td>mdrun_mpi<\/td>\n<\/tr>\n<tr>\n<td>Single precision MPI &#8211; Infiniband<\/td>\n<td>apps\/intel-14.0\/gromacs\/4.6.7\/single-mpi-ib<\/td>\n<td>For MPI on Intel or AMD nodes using Infiniband<\/td>\n<td>mdrun_mpi<\/td>\n<\/tr>\n<tr>\n<td>Double precision MPI<\/td>\n<td>apps\/intel-14.0\/gromacs\/4.6.7\/double-mpi<\/td>\n<td>For MPI on Intel nodes using gigabit ethernet<\/td>\n<td>mdrun_mpi_d<\/td>\n<\/tr>\n<tr>\n<td>Double precision MPI &#8211; Infiniband<\/td>\n<td>apps\/intel-14.0\/gromacs\/4.6.7\/double-mpi-ib<\/td>\n<td>For MPI on Intel or AMD nodes using Infiniband<\/td>\n<td>mdrun_mpi_d<\/td>\n<\/tr>\n<tr>\n<td>Single precision multi-threaded for AVX<\/td>\n<td>apps\/intel-14.0\/gromacs\/4.6.7\/single-avx<\/td>\n<td>non-MPI, Sandybridge and Ivybridge only<\/td>\n<td>mdrun<\/td>\n<\/tr>\n<tr>\n<td>Double precision multi-threaded for 
AVX<\/td>\n<td>apps\/intel-14.0\/gromacs\/4.6.7\/double-avx<\/td>\n<td>non-MPI, Sandybridge and Ivybridge only<\/td>\n<td>mdrun_d<\/td>\n<\/tr>\n<\/table>\n<div style=\"display: none;\">\n<h2>Interactive\/Non-batch work\/Job preparation<\/h2>\n<p>In order to prepare your jobs or post-process them you may need to make use of commands such as <code>grompp<\/code>. These will not work on the CSF login node because the software was compiled with <code>AVX_256<\/code>, which is not compatible with the login nodes. We have therefore allocated ONE sandybridge node to allow you to run these commands via <code>qrsh<\/code>. To do so type:<\/p>\n<pre>\r\nqrsh -l inter -l short -l sandybridge\r\n<\/pre>\n<p>which will give access to the sandybridge compute node. Then run your commands. When you have finished, <strong>close the connection to the compute node<\/strong> with <code>exit<\/code> (failure to do this may result in the compute node being unavailable to other users who need it). Then submit your computation\/simulation to batch as per the examples below. <\/p>\n<p>DO NOT run mdrun on this compute node &#8211; all computational work MUST be submitted to batch.\n<\/p><\/div>\n<h2>Running the application in batch<\/h2>\n<p>First load the required module (see above) and create a directory containing the required input data files.<\/p>\n<p>You MUST ensure that, as well as requesting a number of cores from a suitable parallel environment, you also tell GROMACS how many cores it may use. These two numbers must be the same, which can be ensured through correct use of certain variables and\/or flags depending on the version of GROMACS being used. Failure to set this information causes GROMACS to run incorrectly, overload compute nodes and potentially trample on jobs belonging to other users. All of the examples below ensure that jobs use the cores requested.<\/p>\n<p>If you are running a different version to those used in the examples below, ensure you replace the executable, e.g. 
<code>mdrun<\/code> with the executable appropriate to the version based on the table above.<\/p>\n<table style=\"text-align: center; width:66%; margin-left:22%; margin-right:22%;\">\n<tr>\n<td><em>Please do not add the <code>-v<\/code> flag to your <code>mdrun<\/code> command.<\/em><br \/>It will write to a log file every second for the duration of your job and can lead to severe overloading of the file servers.<\/td>\n<\/tr>\n<\/table>\n<h3>Multi-threaded single-precision on Intel nodes, 2 to 16 cores<\/h3>\n<p>Note that GROMACS v4.6.7 (unlike v4.5.4) does <strong>not<\/strong> support the <code>-nt<\/code> flag to set the number of threads when using the multithreaded OpenMP (non-MPI) version. Instead set the <code>OMP_NUM_THREADS<\/code> environment variable as shown below.<\/p>\n<p>An example batch submission script to run the single-precision <code>mdrun<\/code> executable with 12 threads:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe smp.pe 12\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nmdrun\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<p>The system will run your job on a Westmere, a Sandybridge or an Ivybridge node depending on what is available. This option goes to the biggest pool of nodes. 
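The example above runs a bare <code>mdrun</code>, which assumes a run-input (<code>.tpr</code>) file with the default name already exists in the job directory. As a sketch only (the file names <code>md.mdp</code>, <code>conf.gro</code>, <code>topol.top</code> and <code>md</code> are placeholder names for your own input files, not files supplied by the CSF), a jobscript that names its input and output files explicitly might look like:

```shell
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12

# The thread count MUST match the cores granted by SGE;
# using $NSLOTS guarantees this -- never hard-code a different number.
export OMP_NUM_THREADS=$NSLOTS

# md.tpr is a placeholder run-input file prepared beforehand (during job
# preparation, NOT inside this batch script) with a command such as:
#   grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr
# -deffnm md names all output files md.* (md.log, md.edr, md.trr, ...).
# Remember: do not add the -v flag.
mdrun -s md.tpr -deffnm md
```

Submit with <code>qsub scriptname</code> as before.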
To get a more optimised run on Sandybridge or Ivybridge nodes, use a modulefile with &#8216;avx&#8217; in the name and follow the instructions below.<\/p>\n<h3>Multi-threaded single-precision AVX on Sandybridge nodes, 2 to 12 cores<\/h3>\n<p>An example batch submission script to run the single-precision <code>mdrun<\/code> executable with 8 threads:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe smp.pe 8\r\n#$ -l sandybridge\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nmdrun\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Multi-threaded single-precision AVX on Ivybridge nodes, 2 to 16 cores<\/h3>\n<p>Note that GROMACS v4.6.7 (unlike v4.5.4) does <strong>not<\/strong> support the <code>-nt<\/code> flag to set the number of threads when using the multithreaded OpenMP (non-MPI) version. Instead set the <code>OMP_NUM_THREADS<\/code> environment variable as shown below.<\/p>\n<p>An example batch submission script to run the single-precision <code>mdrun<\/code> executable with 12 threads:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe smp.pe 12\r\n#$ -l ivybridge\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nmdrun\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h3>Single-precision MPI with Infiniband, 48 cores or more in multiples of 24<\/h3>\n<p>An example batch submission script to run the single-precision <code>mdrun_mpi<\/code> executable with 48 MPI processes on 48 cores with the <code>orte-24-ib.pe<\/code> parallel environment (Intel nodes using Infiniband):<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#$ -S \/bin\/bash\r\n#$ -cwd\r\n#$ -V\r\n#$ -pe orte-24-ib.pe 48\r\n\r\nmpiexec -n $NSLOTS mdrun_mpi\r\n\r\n<\/pre>\n<p>Submit with the command: <code>qsub scriptname<\/code><\/p>\n<h2>Illegal instruction<\/h2>\n<p>If during job setup\/post-processing you try to run GROMACS commands on the login nodes and get the following error:<\/p>\n<pre>\r\nIllegal 
instruction\r\n<\/pre>\n<p>this is because you have an AVX-only modulefile loaded, which is not compatible with the login nodes. Load the non-AVX modulefile for setup and post-processing, then unload it and load the AVX modulefile when you submit your job.<\/p>\n<p>If you get this error in a batch job, it is because you did not specify whether to run on a sandybridge or ivybridge node in your job script.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li>You can see a list of all the installed GROMACS utilities with the command: <code>ls $GMXDIR\/bin<\/code><\/li>\n<li><a href=\"http:\/\/www.gromacs.org\/About_Gromacs\">GROMACS web page<\/a><\/li>\n<li><a href=\"http:\/\/www.gromacs.org\/Documentation\/Manual\">GROMACS manuals<\/a><\/li>\n<li><a href=\"http:\/\/www.gromacs.org\/Support\/Mailing_Lists\">GROMACS user mailing list<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>Dec 2014 &#8211; 4.6.7 installed with AVX support (specific user request for this) and documentation written.<br \/>\nNov 2013 &#8211; Documentation for 4.5.4 and 4.6.1 split into two pages.<br \/>\nMay 2013 &#8211; Gromacs 4.6.1 and Plumed 1.3 installed.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview GROMACS is a package for computing molecular dynamics, simulating Newtonian equations of motion for systems with hundreds to millions of particles. GROMACS is designed for biochemical molecules with complicated bonded interactions (e.g. proteins, lipids, nucleic acids) but can also be used for non-biological systems (e.g. polymers). Please do not add the -v flag to your mdrun command.It will write to a log file every second for the duration of your job and can lead.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/gromacs\/v467\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":194,"menu_order":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-2120","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/2120","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/comments?post=2120"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/2120\/revisions"}],"predecessor-version":[{"id":4346,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/2120\/revisions\/4346"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/194"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/media?parent=2120"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}