Gaussian 16

Gaussian is a general purpose suite of electronic structure programs.

Versions g16c01 and g16a03 are installed on the CSF as binaries only; the source code is not available on the CSF.

Gaussian 09 is also available on the CSF.

Restrictions on use

The University of Manchester site license allows access for all staff and students of the university; however, strict licensing restrictions are in place and access to this software is not automatic.

Please contact us via its-ri-team@manchester.ac.uk to request access to Gaussian 16.

Set up procedure

G16 has been installed on the CSF with versions optimized for the different Intel compute node architectures. In general, a less optimized version will run on more compute nodes, whereas a more optimized version requires a newer architecture and so will not run on older compute nodes.

The detectcpu modulefile will use the best version for the compute node your job is running on. This modulefile must be loaded inside your jobscript, not on the login node. If you prefer to use exactly the same version for all of your job runs, there are also modulefiles to select a specific version.

After being added to the relevant unix group, you will be able to access the executables by loading one of the modulefiles below.

For G16 C01

# This can ONLY be loaded inside your jobscript. It won't load on the login node.
module load apps/binapps/gaussian/g16c01_em64t_detectcpu    # Detects the CPU type and uses the
                                                            # fastest version for that CPU

# These can be loaded on the login node and inherited by your job or loaded
# in the jobscript (recommended). Less optimized to most optimized:
module load apps/binapps/gaussian/g16c01_em64t              # Nehalem/Westmere (SSE4.2) any node

module load apps/binapps/gaussian/g16c01_em64t_nehalem      # (as above)

module load apps/binapps/gaussian/g16c01_em64t_sandybridge  # For Sandybridge (AVX), Ivybridge,
                                                            # Haswell, Broadwell, Skylake nodes

module load apps/binapps/gaussian/g16c01_em64t_haswell      # For Haswell (AVX2), Broadwell and
                                                            # Skylake nodes

For G16 C01 with Dipole Moments Output

This version uses a modified l914.exe so that it can output Dipole Moments, which are not normally written to the output.

# This can ONLY be loaded inside your jobscript. It won't load on the login node.
module load apps/binapps/gaussian/g16c01_em64t_dm_detectcpu    # Detects the CPU type and uses the
                                                               # fastest version for that CPU

# These can be loaded on the login node and inherited by your job or loaded
# in the jobscript (recommended). Less optimized to most optimized:
module load apps/binapps/gaussian/g16c01_em64t_dm              # Nehalem/Westmere (SSE4.2) any node

module load apps/binapps/gaussian/g16c01_em64t_dm_nehalem      # (as above)

module load apps/binapps/gaussian/g16c01_em64t_dm_sandybridge  # For Sandybridge (AVX), Ivybridge,
                                                               # Haswell, Broadwell, Skylake nodes

module load apps/binapps/gaussian/g16c01_em64t_dm_haswell      # For Haswell (AVX2), Broadwell and
                                                               # Skylake nodes

G16 A03

Note: we strongly recommend using the more recent versions above.

# Replace the c01 part of the modulefile name with a03 in the above modulefiles. For example:
module load apps/binapps/gaussian/g16a03_em64t_detectcpu

We recommend loading the modulefile in your jobscript – see the examples below – rather than on the command line before job submission (as was done on the previous CSF2).

Gaussian MUST ONLY be run in batch. Please DO NOT run g16 on the login nodes. Computational work found to be running on the login nodes will be killed WITHOUT WARNING.

Gaussian Scratch

Gaussian uses the environment variable $GAUSS_SCRDIR to specify the directory where it writes scratch (temporary) files (two-electron integral files, integral derivative files and a read-write file for temporary workings). It is set to your scratch directory (~/scratch) when you load the modulefile. This is a Lustre filesystem which provides good I/O performance. Do not be tempted to use your home directory for Gaussian scratch files – the files can be huge, putting your home area at risk of exceeding its quota. We also recommend using a directory per job in your scratch area. See below for how to do this.

A faster, but smaller, local /tmp on each compute node is also available should you prefer to use it. It can be more efficient if you need to create lots of small files, but space is limited: the smallest /tmp on the Intel compute nodes is 800GB and the largest is 3.5TB.
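
If you want to try this, the following is a minimal sketch (BASH shell). It assumes the batch system provides a per-job temporary directory on the node's local disk via $TMPDIR – please check this before relying on it:

# Sketch only: use the node-local disk for Gaussian scratch files.
# Assumes $TMPDIR points at a per-job directory on the node's local /tmp.
export GAUSS_SCRDIR=$TMPDIR

# Files on the local disk are not visible from the login node, so copy
# anything you need from $GAUSS_SCRDIR back to your scratch area before
# the job finishes.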

Gaussian should delete scratch files automatically when a job completes successfully or dies cleanly. However, it often fails to do this. Scratch files are also not deleted when a job is killed externally or terminates abnormally so that you can use the scratch files to restart the job (if possible). Consequently, leftover files may accumulate in the scratch directory, and it is your responsibility to delete these files. Please check periodically whether you have a lot of temporary Gaussian files that can be deleted.
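
For example, the following commands (a suggestion only – adjust the paths and names to match your own jobs) show the size of any per-job Gaussian scratch directories and list old leftover scratch files in your scratch area:

# Show the size of any per-job Gaussian scratch directories, largest first
du -sh ~/scratch/gau_temp_* 2>/dev/null | sort -rh | head

# List any leftover Gaussian scratch files more than 30 days old
find ~/scratch -name 'Gau-*' -mtime +30 -ls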

Using a Scratch Directory per Job

We now recommend using a different scratch directory for each job. This improves file access times if you run many jobs – writing 1000s of scratch files to a single directory can slow down your jobs. It is much better to create a directory for each job within your scratch area. It is also then easy to delete the entire directory if Gaussian has left unwanted scratch files behind.

The example jobscripts below show how to use this method (it is simple – just two extra lines in your jobscript).

Very large Gaussian scratch files

Occasionally some jobs create .rwf files which are very large (several TB). The batch system will not permit a job to create files bigger than 4TB. If your Gaussian job fails and the .rwf file is 4TB, it may be that this limit has prevented your job from completing. You should re-run the job and, in your input file, request that the .rwf file be split into multiple files. For example, to split the file into two 3TB files:

%rwf=/scratch/$USER/myjob/one.rwf,3000GB,/scratch/$USER/myjob/two.rwf,3000GB
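
You can check how large the scratch files from a job actually are with ls, for example (using the illustrative paths from the line above):

# List the read-write files and their sizes in human-readable form
ls -lh /scratch/$USER/myjob/*.rwf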

Serial batch job

In the examples below we give jobscripts for the BASH shell (the default used by most CSF users) and also for the C shell, which is popular amongst computational chemists.

Example job submission

It is recommended you run from within your scratch area and use one directory per job:

cd ~/scratch
mkdir job1
cd job1

Create a job script, for example:

  • BASH shell version:
    #!/bin/bash --login
    #$ -cwd                       # Run job in directory you submitted from
    
    # Load g16 for the CPU type our job is running on
    module load apps/binapps/gaussian/g16c01_em64t_detectcpu
    
    ## Set up scratch dir (please do this!)
    export GAUSS_SCRDIR=/scratch/$USER/gau_temp_$JOB_ID
    mkdir -p $GAUSS_SCRDIR
    
    ## Say how much memory to use (4GB per core)
    export GAUSS_MDEF=$((NSLOTS*4))GB
    
    $g16root/g16/g16 < file.inp > file.out
    
  • C shell version:
    #!/bin/csh         # No -f so that 'module' commands work
    #$ -cwd
    
    # Load g16 for the CPU type our job is running on
    module load apps/binapps/gaussian/g16c01_em64t_detectcpu
    
    # Set up scratch dir (please do this!)
    setenv GAUSS_SCRDIR /scratch/$USER/gau_temp_$JOB_ID
    mkdir -p $GAUSS_SCRDIR
    
    ## Say how much memory to use (4GB per core)
    @ mem = ( $NSLOTS * 4 )
    setenv GAUSS_MDEF ${mem}GB
    
    $g16root/g16/g16 < file.inp > file.out
    

Submit with the command:

qsub scriptname

where scriptname is the name of your job script.

When the job has finished, check whether Gaussian has left behind unwanted scratch files (you’ll need to know the job id). For example, assuming your job id was 456789:

cd ~/scratch/gau_temp_456789
ls
Gau-21738.inp  Gau-21738.chk  Gau-21738.d2e  Gau-21738.int  Gau-21738.scr

# Example: Remove a specific scratch file
rm Gau-21738.scr

# Example: Remove all files in the directory (use with caution)
rm Gau*

# Example: go up and remove the empty directory
cd ..
rmdir gau_temp_456789

Parallel batch job

On the CSF, Gaussian is a multi-threaded (shared memory) application only, so a job cannot run across multiple compute nodes. Hence you are limited to a maximum of 32 cores, and you must use the smp.pe parallel environment to confine your job to a single node.

Follow the steps below to submit a parallel Gaussian job.

Important Information About Requesting cores

You MUST declare the number of cores for your job twice – via the #$ -pe request in your jobscript and using a Gaussian specific environment variable, also set in the jobscript. See below for further details and examples.

Old method: We used to advise setting the number of cores for a job in the Gaussian input file using %NProcShared or %nprocs. But this can easily lead to mistakes – if you change the number of cores in the jobscript but forget to also change it in the Gaussian input file, you will either use too few cores (some of the cores your job requested sit idle) or too many cores (your job tries to use cores it shouldn’t, possibly trampling on another user’s job).

New method: We now recommend setting the GAUSS_PDEF environment variable in your jobscript (set it to $NSLOTS) so that it always tells Gaussian the correct number of cores to use. This also means you don’t have to keep editing your Gaussian input file each time you want to run the input deck with a different number of cores.

For example, depending which shell you use (look at the first line of your jobscript to find out):

# If using BASH (the default shell used by most CSF users):
export GAUSS_PDEF=$NSLOTS

# If using CSH (the 'traditional' shell used by chemistry users):
setenv GAUSS_PDEF $NSLOTS

Remember that $NSLOTS is automatically set by the batch system to the number of cores you requested on the #$ -pe smp.pe line in the jobscript. Hence there is only one number-of-cores to change if you want to run the job with a different number of cores.

Note: %NProcShared in the input file takes precedence over GAUSS_PDEF, so a value set in the input file will override the jobscript setting. If you are using our recommended method of setting GAUSS_PDEF in the jobscript, please remove any %NProcShared line from your Gaussian input files.
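
A quick way to check is to search your input files for the directive, for example (assuming your input files end in .inp or .com):

# List any Gaussian input files in the current directory that still set %NProcShared
grep -il '%nprocshared' *.inp *.com 2>/dev/null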

Example job submission

You MUST declare the number of cores for your job twice – via the #$ -pe request in your jobscript and using a Gaussian specific variable, also set in the jobscript. See the above explanation for further details.

It is recommended you run from within your scratch area and use one directory per job:

cd ~/scratch
mkdir job1
cd job1

Create a job script, for example:

  • BASH shell version:
    #!/bin/bash --login
    #$ -cwd                 # Run job in directory you submitted from
    #$ -pe smp.pe 8         # Number of cores (2--32) on single compute node
    
    # Load g16 for the CPU type our job is running on
    module load apps/binapps/gaussian/g16c01_em64t_detectcpu
    
    ## Set up scratch dir (please do this!)
    export GAUSS_SCRDIR=/scratch/$USER/gau_temp_$JOB_ID
    mkdir -p $GAUSS_SCRDIR
    
    ## Say how much memory to use (4GB per core)
    export GAUSS_MDEF=$((NSLOTS*4))GB
    
    ## Inform Gaussian how many cores to use
    export GAUSS_PDEF=$NSLOTS
    
    $g16root/g16/g16 < file.inp > file.out
    
  • C shell version:
    #!/bin/csh         
    #$ -cwd            # Run job in directory you submitted from
    #$ -pe smp.pe 8    # Number of cores (2--32) on single compute node
    
    # Load g16 for the CPU type our job is running on
    module load apps/binapps/gaussian/g16c01_em64t_detectcpu
    
    # Set up scratch dir (please do this!)
    setenv GAUSS_SCRDIR /scratch/$USER/gau_temp_$JOB_ID
    mkdir -p $GAUSS_SCRDIR
    
    ## Say how much memory to use (4GB per core)
    @ mem = ( $NSLOTS * 4 )
    setenv GAUSS_MDEF ${mem}GB
    
    ## Inform Gaussian how many cores to use
    setenv GAUSS_PDEF $NSLOTS
    
    $g16root/g16/g16 < file.inp > file.out
    

Submit with the command:

qsub scriptname

where scriptname is the name of your job script.

GAUSS_PDEF vs GAUSS_CDEF

Gaussian has two environment variables that can be used to say how many cores to use. We saw the GAUSS_PDEF variable above. Alternatively, the GAUSS_CDEF variable can be set, but it must only be used when your job is using all of the cores on a compute node. If you are unsure whether your job does this, please use the GAUSS_PDEF variable as shown above.

The GAUSS_CDEF variable may give increased performance because it pins g16 threads (used to do the parallel processing in Gaussian) to specific CPU cores. Without pinning, Linux is free to move the threads between cores, although it tries not to do this. When a thread is moved it invalidates the low-level memory caches which may reduce performance.

The GAUSS_CDEF variable uses a slightly different format to the GAUSS_PDEF variable, as shown below:

#$ -pe smp.pe 32     # Use all 32 cores on a skylake node

# Say which cores to use, e.g., 0-31 (BASH shell):
export GAUSS_CDEF=0-$((NSLOTS-1))

# Say which cores to use, e.g., 0-31 (C shell):
@ maxcore = ( $NSLOTS - 1 )
setenv GAUSS_CDEF 0-$maxcore

Reminder: the GAUSS_CDEF variable should only be used when you are using all cores on a compute node. Jobs found to be using this variable incorrectly will be killed without warning because you will be slowing down other users’ jobs.

GaussView

GaussView is available in all versions of Gaussian 16; there is no optimized version of GaussView.

You will need to log in to the CSF with remote X11 enabled.
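
For example, from a Linux or macOS terminal you can enable X11 forwarding with the -X flag (replace the placeholders with your username and the CSF login address you normally use):

# Log in with X11 forwarding so GaussView windows display on your local machine
ssh -X <username>@<csf-login-address>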

Please do not run GaussView on the login node. An interactive session on a compute node can be used as follows:

On the CSF3 login node:

qrsh -l short

wait until you are logged in to a compute node, then:

module load apps/binapps/gaussian/g16c01_em64t
gv

OR

module load apps/binapps/gaussian/g16a03_em64t
gv

If you get an error about rendering or opening windows, try

gv -mesagl
