{"id":4844,"date":"2020-11-23T17:42:56","date_gmt":"2020-11-23T17:42:56","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=4844"},"modified":"2025-08-07T14:00:44","modified_gmt":"2025-08-07T13:00:44","slug":"gaussian16","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/gaussian16\/","title":{"rendered":"Gaussian 16"},"content":{"rendered":"<p><a href=\"https:\/\/www.gaussian.com\">Gaussian<\/a> is a general purpose suite of electronic structure programs.<\/p>\n<p>Versions g16c01 (and g16a03) are installed on the CSF. They are available as binaries only. The source code is not available on the CSF.<\/p>\n<p>Gaussian 09 is also <a href=\"..\/gaussian\">available on the CSF<\/a>.<br \/>\nGaussian 16 Linda (multi-node) <a href=\"..\/gaussian16-linda\">available on the CSF<\/a>.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>The University of Manchester site license allows access for all staff and students of the university, however strict licensing restrictions are in place. Access to this software is not automatic.<\/p>\n<p>Please contact us via our <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/help\/\">help form<\/a> to request access to Gaussian 16.<\/p>\n<h2>Set up procedure<\/h2>\n<p>G16 has been been installed on CSF with optimized versions for the different Intel com pute node architectures. In general a less optimized version will run on more compute nodes. But a more optimized version requires newer architectures and so will not run on older compute nodes.<\/p>\n<p>The <code>detectcpu<\/code> modulefile will use the best version for the compute node your job is running on. This modulefile must be loaded inside your jobscript, not on the login node. If you prefer to use exactly the same version for all of your job runs, there are also modulefiles to select a specific version.<\/p>\n<p>After being added to the relevant unix group, you will be able to access the executables by loading the modulefile<\/p>\n<h3>For G16 C01<\/h3>\n<pre># <strong>This can ONLY be loaded inside your jobscript. It won't load on the login node.<\/strong>\r\nmodule load apps\/binapps\/gaussian\/g16c01_em64t_detectcpu    # Detects the CPU type and uses the\r\n                                                            # fastest version for that CPU\r\n\r\n# <strong>These can be loaded on the login node and inherited by your job or loaded in the jobscript.<\/strong>\r\n# Least optimized to most optimized:\r\nmodule load apps\/binapps\/gaussian\/g16c01_em64t              # Nehalem\/Westmere (SSE4.2) any node\r\n\r\nmodule load apps\/binapps\/gaussian\/g16c01_em64t_nehalem      # (as above)\r\n\r\nmodule load apps\/binapps\/gaussian\/g16c01_em64t_sandybridge  # For Sandybridge (AVX), Ivybridge,\r\n                                                            # Haswell, Broadwell, Skylake nodes\r\n\r\nmodule load apps\/binapps\/gaussian\/g16c01_em64t_haswell      # For Haswell (AVX2), Broadwell and\r\n                                                            # Skylake nodes\r\n<\/pre>\n<h3>For G16 C01 with Dipole Moments Output<\/h3>\n<p>This version uses a modified <code>l914.exe<\/code> to allow it to output Dipole Moments, which are not normally output.<\/p>\n<pre># <strong>This can ONLY be loaded inside your jobscript. 
<h3>For G16 C01 with Dipole Moments Output</h3>
<p>This version uses a modified <code>l914.exe</code> so that it can output Dipole Moments, which are not normally written to the output.</p>
<pre># <strong>This can ONLY be loaded inside your jobscript. It won't load on the login node.</strong>
module load apps/binapps/gaussian/g16c01_em64t_<strong>dm</strong>_detectcpu    # Detects the CPU type and uses the
                                                               # fastest version for that CPU

# <strong>These can be loaded on the login node and inherited by your job or loaded in the jobscript.</strong>
# Least optimized to most optimized:
module load apps/binapps/gaussian/g16c01_em64t_<strong>dm</strong>              # Nehalem/Westmere (SSE4.2) any node

module load apps/binapps/gaussian/g16c01_em64t_<strong>dm</strong>_nehalem      # (as above)

module load apps/binapps/gaussian/g16c01_em64t_<strong>dm</strong>_sandybridge  # For Sandybridge (AVX), Ivybridge,
                                                               # Haswell, Broadwell, Skylake nodes

module load apps/binapps/gaussian/g16c01_em64t_<strong>dm</strong>_haswell      # For Haswell (AVX2), Broadwell and
                                                               # Skylake nodes
</pre>
<h3>G16 A03</h3>
<p>Note: we <strong>strongly</strong> recommend using the more recent versions above.</p>
<pre># Replace the c01 part of the modulefile name with a03 in the above modulefiles. For example:
module load apps/binapps/gaussian/g16a03_em64t_detectcpu
</pre>
<p>We recommend loading the modulefile in your jobscript (see the examples below), not on the command line before job submission as was done on the previous CSF2.</p>
<p>Gaussian <strong>MUST ONLY</strong> be run in batch. Please <strong>DO NOT</strong> run <code>g16</code> on the login nodes. Computational work found to be running on the login nodes will be killed <strong>WITHOUT WARNING</strong>.</p>
<h2>Gaussian Scratch</h2>
<p>Gaussian uses an environment variable, <code>$GAUSS_SCRDIR</code>, to specify the directory in which to write <em>scratch</em> (temporary) files: two-electron integral files, integral derivative files and a <em>read-write</em> file for temporary workings.</p>
<p>It is set to your <em>scratch</em> directory (<code>~/scratch</code>) when you load the modulefile. This is a Lustre filesystem which provides good I/O performance. Do not be tempted to use your home directory for Gaussian scratch files &#8211; the files can be huge and can easily push the home area over quota. The modulefile therefore gives you a sensible default location for temporary files, but we recommend the better setting described below.</p>
<h3>Using a Scratch Directory per Job</h3>
<p><strong>We recommend using a different scratch directory for each job</strong>. This improves file access times if you run many jobs &#8211; writing thousands of scratch files to a single directory can slow down your jobs. It is much better to create a directory <em>for each</em> job within your <code>scratch</code> area. It is also then easy to delete the entire directory if Gaussian has left unwanted scratch files behind.</p>
<pre>
# In the jobscript:
export GAUSS_SCRDIR=/scratch/$USER/gau_temp_$SLURM_JOB_ID
mkdir -p $GAUSS_SCRDIR
</pre>
<p>See below for <a href="#scrdir">example jobscripts</a> that do this.</p>
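<p>If you are confident that a particular run will not need to be restarted from its scratch files, you could also remove the per-job scratch directory at the end of the jobscript. The following BASH sketch is an optional addition, not part of the standard jobscripts below; it only deletes the directory when <code>g16</code> exits successfully, so failed jobs keep their scratch files for a possible restart:</p>
<pre># Optional tidy-up (illustrative): delete the per-job scratch directory,
# but only if g16 finished without error.
if $g16root/g16/g16 &lt; file.inp &gt; file.out; then
    rm -rf "$GAUSS_SCRDIR"
fi
</pre>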
<h3>Using a node-local directory</h3>
<p>A faster, but smaller, local <code>$TMPDIR</code> on each compute node is also available should you prefer to use that. It can be more efficient if you need to create lots of small files, but space is limited. The AMD 168-core Genoa nodes in the <code>multicore</code> partition all have a 1.6TB local disk. <strong>Use this with caution &#8211; if you fill the local disk, your job will crash!</strong></p>
<pre>
# Slurm creates the temp $TMPDIR. <strong>FOR SMALL FILES ONLY.</strong>
# It also deletes the folder (and all files) at the end of the job.
# You should take a copy of any files from this area <em>within your jobscript</em>.
# You will NOT have access to this temp area after the job has finished.
export GAUSS_SCRDIR=$TMPDIR
</pre>
<p>If your job writes a huge <code>.rwf</code> file, say, then it will likely run out of space in this area and your job will crash.</p>
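<p>Because <code>$TMPDIR</code> is removed automatically when the job ends, copy anything you need to keep back to a permanent location <em>before the end of the jobscript</em>. A minimal BASH sketch is shown below; it assumes your job writes files you care about (for example a checkpoint named by a <code>%chk</code> line) into the scratch directory, so adjust the filenames to match your own job:</p>
<pre># Illustrative sketch: use the node-local scratch, then copy anything
# worth keeping back to the directory the job was submitted from,
# before Slurm deletes $TMPDIR at the end of the job.
export GAUSS_SCRDIR=$TMPDIR

$g16root/g16/g16 &lt; file.inp &gt; file.out

# Example only - adjust to the files your job actually produces here
cp $GAUSS_SCRDIR/*.chk $SLURM_SUBMIT_DIR/ 2&gt;/dev/null || true
</pre>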
<h3>Cleaning up scratch files</h3>
<p>Gaussian <em>should</em> delete scratch files automatically when a job completes successfully or dies cleanly. However, it often fails to do this. Scratch files are also <em>not</em> deleted when a job is killed externally or terminates abnormally, so that you can use the scratch files to restart the job (if possible). Consequently, leftover files may accumulate in the scratch directory, and it is your responsibility to delete them. Please <strong>check periodically</strong> whether you have a lot of temporary Gaussian files that can be deleted.</p>
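<p>If you use the per-job <code>gau_temp_<em>jobid</em></code> naming recommended above, a quick way to spot leftover scratch directories and see how much space they are using is, for example:</p>
<pre># List per-job Gaussian scratch directories still present in your scratch area
find ~/scratch -maxdepth 1 -type d -name 'gau_temp_*'

# Show how much space each one is using
du -sh ~/scratch/gau_temp_* 2&gt;/dev/null

# Remove one you no longer need (make sure the job has finished first)
rm -rf ~/scratch/gau_temp_456789
</pre>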
<h3>Very large Gaussian scratch files</h3>
<p>Occasionally some jobs create <code>.rwf</code> files which are very large (several TB). The batch system will not permit a job to create files bigger than 4TB. If your Gaussian job fails and the <code>.rwf</code> file is 4TB then it may be that this limit has prevented your job from completing. You should re-run the job and, in your <strong>input</strong> file, request that the <code>.rwf</code> file be split into multiple files. For example, to split the file into two 3TB files add the following Link 0 line at the top of your input file:</p>
<pre>%rwf=/scratch/myusername/myjob/one.rwf,3000GB,/scratch/myusername/myjob/two.rwf,3000GB
</pre>
<h2>Serial batch job</h2>
<p>The examples below give jobscripts using the BASH shell (the default used by most CSF users) and also the C shell, which is popular amongst computational chemists.</p>
<h3 id="scrdir">Example job submission</h3>
<p>It is recommended you run from within your <code>scratch</code> area and use one directory per job:</p>
<pre>cd ~/scratch
mkdir job1
cd job1
</pre>
<p>Create a job script, for example:</p>
<ul>
<li>BASH shell version:
<pre class="slurm">
#!/bin/bash --login
#SBATCH -p serial    # Intel nodes, dedicated to serial jobs.
#SBATCH -t 4-0       # This requests a 4-day time limit. Max permitted is 7 days.

# Load g16 for the CPU type our job is running on
module purge
module load apps/binapps/gaussian/g16c01_em64t_detectcpu

## Set up scratch dir (please do this!)
export GAUSS_SCRDIR=/scratch/$USER/gau_temp_$SLURM_JOB_ID
mkdir -p $GAUSS_SCRDIR

## Say how much memory to use (4GB per core on "serial" nodes)
export GAUSS_MDEF=$((SLURM_NTASKS*4))GB

$g16root/g16/g16 &lt; file.inp &gt; file.out
</pre>
</li>
<li>C shell version:
<pre class="slurm">
#!/bin/csh           # No -f so that 'module' commands work
#SBATCH -p serial    # Intel nodes, dedicated to serial jobs.
#SBATCH -t 4-0       # This requests a 4-day time limit. Max permitted is 7 days.

# Load g16 for the CPU type our job is running on
module purge
module load apps/binapps/gaussian/g16c01_em64t_detectcpu

# Set up scratch dir (please do this!)
setenv GAUSS_SCRDIR /scratch/$USER/gau_temp_$SLURM_JOB_ID
mkdir -p $GAUSS_SCRDIR

## Say how much memory to use (4GB per core on "serial" nodes)
@ mem = ( $SLURM_NTASKS * 4 )
setenv GAUSS_MDEF ${mem}GB

$g16root/g16/g16 &lt; file.inp &gt; file.out
</pre>
</li>
<li>Submit with the command:
<pre>sbatch <em>jobscript</em></pre>
<p>where <em>jobscript</em> is the name of your job script.</p></li>
</ul>
<p>When the job has finished, check whether Gaussian has left behind unwanted scratch files (you&#8217;ll need to know the job id). For example, assuming your job id was 456789:</p>
<pre>cd ~/scratch/gau_temp_456789
ls
Gau-21738.inp  Gau-21738.chk  Gau-21738.d2e  Gau-21738.int  Gau-21738.scr

# Example: Remove a specific scratch file
rm Gau-21738.scr

# Example: Remove all files in the directory (use with caution)
rm Gau*

# Example: go up and remove the empty directory
cd ..
rmdir gau_temp_456789
</pre>
<h2>Parallel batch job</h2>
<p>On the CSF Gaussian is a multi-threaded (shared memory) application only, so a job will <strong>not</strong> run across multiple compute nodes (see the <a href="../gaussian16-linda">Linda version</a> for multi-node jobs). Hence you are limited to the cores available on a single node &#8211; up to 168 cores on the AMD Genoa nodes in the <code>multicore</code> partition. This means that you <em>must</em> use a single-node partition such as <code>multicore</code> to confine your job to a single node.</p>
<p>Follow the steps below to submit a parallel Gaussian job.</p>
<h3>Important Information About Requesting Cores</h3>
<div class="hint">You MUST declare the number of cores for your job twice &#8211; via the <code>-n</code> request in your jobscript <em>and</em> using a Gaussian-specific environment variable, also set in the jobscript. See below for further details and examples.</div>
<p><strong>Old method</strong>: We <em>used</em> to advise setting the number of cores for a job in the Gaussian input file using <code>%NProcShared</code> or <code>%nprocs</code>. But this can easily lead to mistakes &#8211; if you change the number of cores in the jobscript but forget to also change it in the Gaussian input file, you will either use too few cores (some of the cores your job requested sit idle) or too many cores (your job tries to use cores it shouldn&#8217;t, possibly trampling on another user&#8217;s job).</p>
<p><strong>New method</strong>: We now recommend setting the <code>GAUSS_PDEF</code> environment variable in your jobscript (set it to <code>$SLURM_NTASKS</code>) so that it always tells Gaussian the correct number of cores to use. This also means you don&#8217;t have to keep editing your Gaussian input file each time you want to run the input deck with a different number of cores.</p>
<p>For example, depending on which <em>shell</em> you use (look at the first line of your jobscript to find out):</p>
<pre class="slurm"># If using BASH (the default shell used by most CSF users):
export GAUSS_PDEF=$SLURM_NTASKS

# If using CSH (the 'traditional' shell used by chemistry users):
setenv GAUSS_PDEF $SLURM_NTASKS
</pre>
<p>Remember that <code>$SLURM_NTASKS</code> is automatically set by the batch system to the number of cores you requested on the <code>#SBATCH -n <em>NUM</em></code> line in the jobscript. Hence there is only one number to change if you want to run the job with a different number of cores.</p>
<p>Note: <code>%NProcShared</code> in the input file takes precedence over <code>GAUSS_PDEF</code>, so setting it in the input file would override the jobscript value. If you are using our recommended method of setting <code>GAUSS_PDEF</code> in the jobscript, please remove any <code>%NProcShared</code> line from your Gaussian input files.</p>
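<p>A quick way to check whether any of your existing input files still contain such a line (assuming they use the <code>.inp</code> extension, as in the examples on this page) is:</p>
<pre># List input files in the current directory that still set %NProcShared
grep -il '%nprocshared' *.inp
</pre>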
<h3>Example job submission</h3>
<div class="hint">You MUST declare the number of cores for your job twice &#8211; via the <code>-n</code> request in your jobscript and using a Gaussian-specific variable, also set in the jobscript. See the above explanation for further details.</div>
<p>It is recommended you run from within your <code>scratch</code> area and use one directory per job:</p>
<pre>cd ~/scratch
mkdir job1
cd job1
</pre>
<p>Create a job script, for example:</p>
<ul>
<li>BASH shell version:
<pre class="slurm">
#!/bin/bash --login
#SBATCH -p multicore  # Run on AMD Genoa nodes
#SBATCH -n 8          # Number of cores (maximum is 168 cores)
#SBATCH -t 4-0        # This requests a 4-day time limit. Max permitted is 7 days.

# Load g16 for the CPU type our job is running on
module purge
module load apps/binapps/gaussian/g16c01_em64t_detectcpu

## Set up scratch dir (please do this!)
export GAUSS_SCRDIR=/scratch/$USER/gau_temp_$SLURM_JOB_ID
mkdir -p $GAUSS_SCRDIR

## Say how much memory to use (8GB per core on AMD nodes)
export GAUSS_MDEF=$((SLURM_NTASKS*8))GB

## Inform Gaussian how many cores to use
export GAUSS_PDEF=$SLURM_NTASKS

$g16root/g16/g16 &lt; file.inp &gt; file.out
</pre>
</li>
<li>C shell version:
<pre class="slurm">
#!/bin/csh
#SBATCH -p multicore  # Run on AMD Genoa nodes
#SBATCH -n 8          # Number of cores (maximum is 168 cores)
#SBATCH -t 4-0        # This requests a 4-day time limit. Max permitted is 7 days.

# Load g16 for the CPU type our job is running on
module purge
module load apps/binapps/gaussian/g16c01_em64t_detectcpu

# Set up scratch dir (please do this!)
setenv GAUSS_SCRDIR /scratch/$USER/gau_temp_$SLURM_JOB_ID
mkdir -p $GAUSS_SCRDIR

## Say how much memory to use (8GB per core on AMD nodes)
@ mem = ( $SLURM_NTASKS * 8 )
setenv GAUSS_MDEF ${mem}GB

## Inform Gaussian how many cores to use
setenv GAUSS_PDEF $SLURM_NTASKS

$g16root/g16/g16 &lt; file.inp &gt; file.out
</pre>
</li>
<li>Submit with the command:
<pre>sbatch <em>jobscript</em></pre>
<p>where <em>jobscript</em> is the name of your job script.</p></li>
</ul>
<h3>GAUSS_PDEF vs GAUSS_CDEF</h3>
<p>Gaussian has two environment variables that can be used to say how many cores to use. We saw the <code>GAUSS_PDEF</code> variable above. Alternatively, the <code>GAUSS_CDEF</code> variable can be set, but this must <em>only</em> be used when you are using <em>all</em> of the cores on a compute node. If you are unsure whether your job does this, please use the <code>GAUSS_PDEF</code> variable as shown above.</p>
<p>The <code>GAUSS_CDEF</code> variable <em>may</em> give increased performance because it pins <code>g16</code> <em>threads</em> (used to do the parallel processing in Gaussian) to specific CPU cores. Without pinning, Linux is free to move the threads between cores, although it tries not to do this. When a thread is moved it invalidates the low-level memory caches, which may reduce performance.</p>
<p>The <code>GAUSS_CDEF</code> variable uses a slightly different format to the <code>GAUSS_PDEF</code> variable, as shown below:</p>
<pre class="slurm">
... jobscript options ...

# Say which cores to use, e.g., 0-31 (BASH shell):
export GAUSS_CDEF=0-$((SLURM_NTASKS-1))

# Say which cores to use, e.g., 0-31 (C shell):
@ maxcore = ( $SLURM_NTASKS - 1 )
setenv GAUSS_CDEF 0-$maxcore
</pre>
<p>Reminder: the <code>GAUSS_CDEF</code> variable should only be used when you are using <em>all</em> cores on a compute node. Jobs found to be using this variable incorrectly will be killed without warning because you will be slowing down other users&#8217; jobs.</p>
<h2>Gaussview</h2>
<p>GaussView is available in all versions of Gaussian 16; there is no separately optimised version of GaussView.</p>
<p>You will need to log in to the CSF with <a href="https://ri.itservices.manchester.ac.uk/csf3/getting-started/connecting/gui-apps/">remote X11 enabled</a>.</p>
<p>Please <strong>do not</strong> run GaussView on the login node. An interactive session on a compute node can be used as follows.</p>
<p>On the CSF3 login node:</p>
<pre class="slurm">srun-x11
</pre>
<p>Wait until you are logged in to a compute node, then:</p>
<pre>
module purge
module load apps/binapps/gaussian/g16c01_em64t
gv

OR
module purge
module load apps/binapps/gaussian/g16a03_em64t
gv
</pre>
<p>If you get errors about rendering or opening windows, try:</p>
<pre>gv -mesagl
</pre>
<h2>Further info</h2>
<ul>
<li>The <a href="http://www.applications.itservices.manchester.ac.uk/show_product.php?id=22">IT Services Gaussian webpage</a> contains important information applicable to all users of the software.</li>
<li>Gaussian Inc. <a href="http://www.gaussian.com/man">g16 Users Reference pages</a>.</li>
</ul>
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/gaussian16\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-4844","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/4844","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=4844"}],"version-history":[{"count":21,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/4844\/revisions"}],"predecessor-version":[{"id":10783,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/4844\/revisions\/10783"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=4844"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}