NWChem

If you are a Windows user, please ensure you create your jobscript directly on the CSF using gedit. This will prevent your job going into error (Eqw). Text files created on Windows contain hidden end-of-line characters that Linux cannot read. For further information please see the guide to using the system from Windows, in particular the section about text & batch submission script files.
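
If you have already written a jobscript on Windows, you can usually repair it on the CSF with the dos2unix utility (a minimal sketch; "scriptname" is a placeholder for your own file and this assumes dos2unix is available on the system):

dos2unix scriptname   # strip the Windows carriage-return characters from the jobscript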

Overview

NWChem is a suite of highly scalable tools for atomistic computational chemistry, offering a wide range of quantum, classical and combined methods. Molecular electronic structure calculations with ab initio, DFT and higher-order methods use Gaussian orbitals, while plane-wave + pseudopotential methods are available for periodic systems. Most of these methods can also be used in conjunction with the QM/MM and MD modules.

Set up procedure

NWChem can be accessed by loading the module file as follows:

module load apps/intel-17.0/nwchem/6.8.1

This will also automatically load the appropriate MPI modulefile into your environment and set some environment variables telling NWChem where to look for basis set libraries, etc.
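
If you want to see exactly which MPI modulefile and environment variables the NWChem modulefile sets, you can inspect it (informational only; the output will vary between versions):

module show apps/intel-17.0/nwchem/6.8.1   # list the modules loaded and environment variables set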

Running the application

Please do not run NWChem on the login nodes. Jobs should instead be submitted to the compute nodes via the batch system, SGE. Some example jobs are given below.

Serial batch job submission

Two NWChem executables are available: nwchem for MPI jobs and nwchem.serial for serial jobs. An example jobscript for a serial job is shown below:

#!/bin/bash --login
#$ -cwd               # Application will run from current working directory
#$ -N h2o             # Name given to batch job (optional)

module load apps/intel-17.0/nwchem/6.8.1

nwchem.serial h2o.nw  # Use the serial nwchem executable

Submit the jobscript using:

qsub scriptname
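
The jobscript above assumes an NWChem input file named h2o.nw in the working directory. If you do not already have one, a minimal illustrative input (the geometry, basis set and task shown are examples only, not CSF defaults) can be created directly on the CSF, for example:

cat > h2o.nw << 'EOF'
start h2o
title "Water single-point SCF energy (illustrative example)"
geometry units angstroms
  O   0.000   0.000   0.000
  H   0.757   0.586   0.000
  H  -0.757   0.586   0.000
end
basis
  * library 6-31G*
end
task scf energy
EOF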

Single-node parallel batch job submission (2-32 cores)

An example jobscript for a single-node MPI job is given below. In this example, stdout and stderr are also redirected to a single file, which is often useful. SGE will still create stdout and stderr files, but these will now be largely empty:

#!/bin/bash --login
#$ -cwd               # Application will run from current working directory
#$ -N uo2_sodft       # Name given to batch job (optional)
#$ -pe smp.pe 32      # Request 32 cores using SMP parallel environment

module load apps/intel-17.0/nwchem/6.8.1

mpirun -np $NSLOTS nwchem uo2.nw > uo2.nwo 2>&1  # Use MPI version, redirect stdout/err to file

Submit the jobscript using:

qsub scriptname
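
While the job is running you can check its state and follow the combined output file named in the redirection above (a minimal sketch; the uo2.nwo filename simply matches the example jobscript):

qstat             # show the state of your jobs (qw = queued, r = running)
tail -f uo2.nwo   # follow the NWChem output as it is written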

Multi-node parallel batch job submission (48-120 cores in multiples of 24)

The following jobscript requests 120 cores (five 24-core nodes connected by the InfiniBand network):

#!/bin/bash --login
#$ -cwd                  # Application will run from current working directory
#$ -N p2ta_eomccsd       # Name given to batch job (optional)
#$ -pe mpi-24-ib.pe 120  # Request 120 cores using IB parallel environment

module load apps/intel-17.0/nwchem/6.8.1

mpirun -np $NSLOTS nwchem p2ta.nw > p2ta.nwo 2>&1

Submit the jobscript using:

qsub scriptname
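
To confirm how the 120 slots have been distributed across the nodes, you can print the hostfile that SGE provides to parallel environment jobs by adding a line to the jobscript before the mpirun command (a minimal sketch; the exact layout of the file may vary):

cat $PE_HOSTFILE   # one line per node: hostname, slots allocated, queue, processor range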
