Gaussian16

Gaussian is a general-purpose suite of electronic structure programs. Version g16c01 is installed. It is available as binaries only; the source code is not available on the CSF.

Gaussian 09 is not currently installed on CSF4. Please contact us via its-ri-team@manchester.ac.uk if you require access to this version (it is installed on CSF3).

Restrictions on use

The University of Manchester site license allows access for all staff and students of the university; however, strict licensing restrictions are in place. Access to this software is not automatic.

Please contact us via its-ri-team@manchester.ac.uk to request access to Gaussian 16.

Set up procedure

G16 has been installed on the CSF with versions optimized for the different Intel compute node architectures. In general, a less optimized version will run on more compute nodes, whereas a more optimized version requires newer architectures and so will not run on older compute nodes.

The detectcpu modulefile will use the best version for the compute node your job is running on. This modulefile must be loaded inside your jobscript, not on the login node. If you prefer to use exactly the same version for all of your job runs, there are also modulefiles to select a specific version.

After being added to the relevant unix group, you will be able to access the executables by loading the modulefile.

For G16 C01:

# NOTE: CSF4 ONLY CONTAINS CASCADE LAKE NODES. WE RECOMMEND USING THE _detectcpu
#       OR THE _haswell MODULEFILES FOR BEST PERFORMANCE. The other modulefiles
#       match those available on CSF3 and will use the same level of optimization
#       as used on CSF3.

# This can ONLY be loaded inside your jobscript. It won't load on the login node.
module load gaussian/g16c01_em64t_detectcpu    # Detects the CPU type and uses the
                                               # fastest version for that CPU

# These can be loaded on the login node and inherited by your job or loaded
# in the jobscript (recommended). Less optimized to most optimized:
module load gaussian/g16c01_em64t              # Nehalem/Westmere (SSE4.2) any node

module load gaussian/g16c01_em64t_nehalem      # (as above)

module load gaussian/g16c01_em64t_sandybridge  # For Sandybridge (AVX), Ivybridge, Haswell,
                                               # Broadwell, Skylake, Cascade Lake nodes

module load gaussian/g16c01_em64t_haswell      # For Haswell (AVX2), Broadwell, Skylake
                                               # and Cascade Lake nodes

We recommend loading the modulefile in your jobscript (see the examples below), not on the command line before job submission as was the practice on the previous CSF2.

Gaussian MUST ONLY be run in batch. Please DO NOT run g16 on the login nodes. Computational work found to be running on the login nodes will be killed WITHOUT WARNING.

Gaussian Scratch

Gaussian uses an environment variable $GAUSS_SCRDIR to specify a directory path for where to write scratch (temporary) files (two-electron integral files, integral derivative files and a read-write file for temporary workings).

It is set to your scratch directory (~/scratch) when you load the modulefile (scratch is a fast filesystem with good I/O performance).

Please either leave $GAUSS_SCRDIR at the default setting or, if you wish to set it to a job-specific directory, ensure you use a directory inside your scratch area (see below for how to do this). DO NOT use your CSF home directory, or any additional Research Data Storage that you may have access to, for $GAUSS_SCRDIR.

A faster, but smaller, local /tmp on each compute node is available should users prefer to use that. It can be more efficient if you have a need to create lots of small files, but space is limited. The size of /tmp on all compute nodes is 800GB.
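If you do choose to use the node-local /tmp, the extra jobscript lines might look like the following sketch (a hedged example, not official CSF guidance; it assumes $SLURM_JOB_ID is set by the batch system, and /tmp is not cleaned up for you, so remove the directory yourself at the end of the job):

```shell
# Hypothetical sketch: point Gaussian's scratch at node-local /tmp instead
# of the default ~/scratch. $SLURM_JOB_ID keeps the directory job-specific.
export GAUSS_SCRDIR=/tmp/$USER/gau_temp_$SLURM_JOB_ID
mkdir -p "$GAUSS_SCRDIR"

# ... run g16 here ...

# /tmp is local to the compute node and not reachable after the job ends,
# so clean up at the end of the jobscript:
rm -rf "$GAUSS_SCRDIR"
```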

Gaussian should delete scratch files automatically when a job completes successfully or dies cleanly. However, it often fails to do this. Scratch files are also not deleted when a job is killed externally or terminates abnormally so that you can use the scratch files to restart the job (if possible). Consequently, leftover files may accumulate in the scratch directory, and it is your responsibility to delete these files. Please check periodically whether you have a lot of temporary Gaussian files that can be deleted.
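A find command along the following lines can help with the periodic check (a hedged example; adjust the path and age threshold to suit how you organise your scratch area):

```shell
# List leftover Gaussian scratch files under ~/scratch older than 30 days
find ~/scratch -name 'Gau-*' -mtime +30 -ls

# Once you are sure they are no longer needed, delete them:
# find ~/scratch -name 'Gau-*' -mtime +30 -delete
```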

Using a Scratch Directory per Job

We now recommend using a different scratch directory for each job. This improves file access times if you run many jobs – writing 1000s of scratch files to a single directory can slow down your jobs. It is much better to create a directory for each job within your scratch area. It is also then easy to delete the entire directory if Gaussian has left unwanted scratch files behind.

The example jobscripts below show how to use this method (it is simple – just two extra lines in your jobscript). The two lines are:

export GAUSS_SCRDIR=/scratch/$USER/gau_temp_$SLURM_JOB_ID
mkdir -p $GAUSS_SCRDIR

Very large Gaussian scratch files

Occasionally some jobs create .rwf files which are very large (several TB). The batch system will not permit a job to create files bigger than 4TB. If your Gaussian job fails and the .rwf file is 4TB, it may be that this limit has prevented your job from completing. You should re-run the job and, in your input file, request that the .rwf file be split into multiple files. For example, to split the file into two 3TB files:

%rwf=/scratch/$USER/myjob/one.rwf,3000GB,/scratch/$USER/myjob/two.rwf,3000GB

Serial batch job

In the examples below we give example jobscripts using the BASH shell (the default used by most CSF users) and also the C shell, which is popular amongst computational chemists.

Example job submission

It is recommended you run from within your scratch area and use one directory per job:

cd ~/scratch
mkdir job1
cd job1

Create a job script, for example:

  • BASH shell version:
    #!/bin/bash --login
    #SBATCH -p serial      # (or --partition) Single-core job
    #SBATCH -n 1           # (or --ntasks=) Number of cores (always 1)
    
    # Load g16 for the CPU type our job is running on
    module load gaussian/g16c01_em64t_detectcpu
    
    ## Set up scratch dir (please do this!)
    export GAUSS_SCRDIR=/scratch/$USER/gau_temp_$SLURM_JOB_ID
    mkdir -p $GAUSS_SCRDIR
    
    ## Say how much memory to use (max on CSF4 is 4GB per core)
    export GAUSS_MDEF=$((SLURM_NTASKS*4))GB
    
    $g16root/g16/g16 < file.inp > file.out
    
  • C shell version:
    #!/bin/csh
    #SBATCH -p serial    # (or --partition) Single-core job
    #SBATCH -n 1         # (or --ntasks=) Number of cores (always 1)
    
    # Load g16 for the CPU type our job is running on
    module load gaussian/g16c01_em64t_detectcpu
    
    # Set up scratch dir (please do this!)
    setenv GAUSS_SCRDIR /scratch/$USER/gau_temp_$SLURM_JOB_ID
    mkdir -p $GAUSS_SCRDIR
    
    ## Say how much memory to use (max on CSF4 is 4GB per core)
    @ mem = ( $SLURM_NTASKS * 4 )
    setenv GAUSS_MDEF ${mem}GB
    
    $g16root/g16/g16 < file.inp > file.out
    

Submit with the command:

sbatch scriptname

where scriptname is the name of your job script.

When the job has finished check whether Gaussian has left behind unwanted scratch files (you’ll need to know the job id). For example, assuming your job id was 456789:

cd ~/scratch/gau_temp_456789
ls
Gau-21738.inp  Gau-21738.chk  Gau-21738.d2e  Gau-21738.int  Gau-21738.scr

# Example: Remove a specific scratch file
rm Gau-21738.scr

# Example: Remove all files in the directory (use with caution)
rm Gau*

# Example: go up and remove the empty directory
cd ..
rmdir gau_temp_456789

Parallel batch job

On the CSF, Gaussian is a multi-threaded (shared-memory) application only, so a job cannot run across multiple compute nodes. Hence you are limited to a maximum of 40 cores, and you must run in the multicore partition to confine your job to a single node.

Follow the steps below to submit a parallel Gaussian job.

Important Information About Requesting cores

You MUST declare the number of cores for your job twice – via the #SBATCH -n request in your jobscript and using a Gaussian specific environment variable, also set in the jobscript. See below for further details and examples.

Old method: We used to advise setting the number of cores for a job in the Gaussian input file using %NProcShared or %nprocs. But this can easily lead to mistakes: if you change the number of cores in the jobscript but forget to also change it in the Gaussian input file, you will either use too few cores (some of the cores your job requested sit idle) or too many cores (your job tries to use cores it shouldn't, possibly trampling on another user's job).

New method: We now recommend setting the GAUSS_PDEF environment variable in your jobscript (set it to $SLURM_NTASKS) so that it always tells Gaussian the correct number of cores to use. This also means you don’t have to keep editing your Gaussian input file each time you want to run the input deck with a different number of cores.

For example, depending which shell you use (look at the first line of your jobscript to find out):

# If using BASH (the default shell used by most CSF users):
export GAUSS_PDEF=$SLURM_NTASKS

# If using CSH (the 'traditional' shell used by chemistry users):
setenv GAUSS_PDEF $SLURM_NTASKS

Remember that $SLURM_NTASKS is automatically set by the batch system to the number of cores you requested on the #SBATCH -n line in the jobscript. Hence there is only one number-of-cores to change if you want to run the job with a different number of cores.
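As a quick illustration of the "one number to change" point, the following sketch simulates what happens inside a job (here SLURM_NTASKS is set by hand; in a real job the batch system sets it from the #SBATCH -n line):

```shell
# Simulate the batch system setting SLURM_NTASKS from '#SBATCH -n 20'
SLURM_NTASKS=20

# Both Gaussian settings follow that single number automatically
export GAUSS_PDEF=$SLURM_NTASKS
export GAUSS_MDEF=$((SLURM_NTASKS*4))GB

echo "$GAUSS_PDEF $GAUSS_MDEF"    # prints: 20 80GB
```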

Note: %NProcShared in the input file takes precedence over GAUSS_PDEF, so one could override the latter by setting %NProcShared in the input file. If you are using our recommended method of setting GAUSS_PDEF in the jobscript, please remove any %NProcShared line from your Gaussian input files.
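A quick way to check your input files for such a line is a case-insensitive grep (a hedged example; it assumes your Gaussian input files use the .inp extension):

```shell
# List any Gaussian input files that still contain a %NProc... directive
grep -il '%nproc' *.inp
```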

Example job submission

You MUST declare the number of cores for your job twice – via the #SBATCH -n request in your jobscript and using a Gaussian specific variable, also set in the jobscript. See the above explanation for further details.

It is recommended you run from within your scratch area and use one directory per job:

cd ~/scratch
mkdir job1
cd job1

Create a job script, for example:

  • BASH shell version:
    #!/bin/bash --login
    #SBATCH -p multicore      # (or --partition) Single-node multicore
    #SBATCH -n 20             # (or --ntasks=) Number of cores (2--40)
    
    # Load g16 for the CPU type our job is running on
    module load gaussian/g16c01_em64t_detectcpu
    
    ## Set up scratch dir (please do this!)
    export GAUSS_SCRDIR=/scratch/$USER/gau_temp_$SLURM_JOB_ID
    mkdir -p $GAUSS_SCRDIR
    
    ## Say how much memory to use (4GB per core)
    export GAUSS_MDEF=$((SLURM_NTASKS*4))GB
    
    ## Inform Gaussian how many cores to use
    export GAUSS_PDEF=$SLURM_NTASKS
    
    $g16root/g16/g16 < file.inp > file.out
    
  • C shell version:
    #!/bin/csh         
    #SBATCH -p multicore      # (or --partition) Single-node multicore
    #SBATCH -n 20             # (or --ntasks=) Number of cores (2--40)
    
    # Load g16 for the CPU type our job is running on
    module load gaussian/g16c01_em64t_detectcpu
    
    # Set up scratch dir (please do this!)
    setenv GAUSS_SCRDIR /scratch/$USER/gau_temp_$SLURM_JOB_ID
    mkdir -p $GAUSS_SCRDIR
    
    ## Say how much memory to use (4GB per core)
    @ mem = ( $SLURM_NTASKS * 4 )
    setenv GAUSS_MDEF ${mem}GB
    
    ## Inform Gaussian how many cores to use
    setenv GAUSS_PDEF $SLURM_NTASKS
    
    $g16root/g16/g16 < file.inp > file.out
    

Submit with the command:

sbatch scriptname

where scriptname is the name of your job script.

GAUSS_PDEF vs GAUSS_CDEF

Gaussian has two environment variables that can be used to say how many cores to use. We saw the GAUSS_PDEF variable above. Alternatively the GAUSS_CDEF variable can be set but this must only be used when you are using all of the cores on a compute node. If you are unsure whether your job does this, please use the GAUSS_PDEF variable as shown above.

The GAUSS_CDEF variable may give increased performance because it pins g16 threads (used to do the parallel processing in Gaussian) to specific CPU cores. Without pinning Linux is free to move the threads between cores, although it tries not to do this. When a thread is moved it invalidates the low-level memory caches which may reduce performance.

The GAUSS_CDEF variable uses a slightly different format to the GAUSS_PDEF variable, as shown below:

#SBATCH -p multicore      # (or --partition) Single-node multicore
#SBATCH -n 40             # (or --ntasks=) Use all 40 cores in a Cascade Lake node

# Say which cores to use, e.g., 0-39 (BASH shell):
export GAUSS_CDEF=0-$((SLURM_NTASKS-1))

# Say which cores to use, e.g., 0-39 (C shell):
@ maxcore = ( $SLURM_NTASKS - 1 )
setenv GAUSS_CDEF 0-$maxcore

Reminder: the GAUSS_CDEF variable should only be used when you are using all cores on a compute node. Jobs found to be using this variable incorrectly will be killed without warning because you will be slowing down other users’ jobs.

Further info

Last modified on August 29, 2024 at 2:30 pm by George Leaver