ONETEP

Overview

ONETEP (Order-N Electronic Total Energy Package) is a linear-scaling code for quantum-mechanical calculations based on density-functional theory.

Version 6.1.1.6 is installed on the CSF. It was compiled with Intel Fortran 17.

Restrictions on use

This software is only available to users from Prof. Kaltsoyannis’ group. All users who ask for access must be approved by Prof. Kaltsoyannis before access can be granted.

All users being added to the group must confirm that they will abide by the terms of the license. The information below outlines the main terms, but is not a substitute for the license:

  • You must be a member of staff or student at the University of Manchester.
  • This software is only for academic use and your research needs.
  • Use of the Software, or any code which is a modification of, enhancement to, derived from or based upon the Software, for industrially funded research or for providing services to a non-academic third party is expressly prohibited, except in the case of a member of the Group carrying out research that is funded by a CASE studentship.
  • All published work, including journal and conference papers and theses, produced in part using the Software must cite the paper “Introducing ONETEP: Linear-scaling density functional simulations on parallel computers”, C.-K. Skylaris, P. D. Haynes, A. A. Mostofi and M. C. Payne, J. Chem. Phys. 122, 084119 (2005), along with any other relevant ONETEP references. The USER must be listed as the author or a co-author on this work.
  • The license has some provision for the license holder and their research group to produce additional functionality or code development. You should not undertake such work without first seeking guidance from the University of Manchester about intellectual property.

There is no access to the source code on the CSF.

Set up procedure

We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.

Load one of the following modulefiles:

module load apps/intel-17.0/onetep/6.1.1.6

Running the application

Please do not run ONETEP on the login node. Jobs should be submitted to the compute nodes via the batch system.

There are 3 ‘flavours’ available on the CSF3 – one which uses only OpenMP parallelism (onetep-omp), one which uses only MPI parallelism (onetep-mpi), and one which uses both (onetep-mpi-omp). All three are available via the same modulefile, but you must use the correct executable name.
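
After loading the modulefile on the login node you can confirm that all three executables are available (this assumes, as described above, that the modulefile places them on your PATH under these names):

module load apps/intel-17.0/onetep/6.1.1.6
which onetep-omp onetep-mpi onetep-mpi-omp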

Parallel batch job submission using OpenMP on 2 to 32 cores

Create a batch submission script (which will load the modulefile in the jobscript), for example:

#!/bin/bash --login
#$ -cwd             # Job will run from the current directory
                    # NO -V line - we load modulefiles in the jobscript
#$ -pe smp.pe 4     # Choose a number of cores between 2 and 32

export OMP_NUM_THREADS=$NSLOTS          # $NSLOTS is automatically set to the number of cores on the -pe line
module load apps/intel-17.0/onetep/6.1.1.6

onetep-omp input
            #
            # Where input is your input file. 
            # You may also need other files in the working dir e.g. recpot files
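
By default the program's standard output is captured in the job's .o file. If you would prefer to capture it in a file of your own choosing, a standard shell redirect works; the output filename below is just an illustration:

onetep-omp input > input.out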

Submit the jobscript using:

qsub scriptname

where scriptname is the name of your jobscript.
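
For example, if the jobscript above were saved as onetep-omp.sh (the filename is just an illustration), you would submit it and can then check its progress with:

qsub onetep-omp.sh
qstat              # lists your jobs that are queued or running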

Parallel batch job submission using MPI on 2 to 32 cores

Create a batch submission script (which will load the modulefile in the jobscript), for example:

#!/bin/bash --login
#$ -cwd             # Job will run from the current directory
                    # NO -V line - we load modulefiles in the jobscript
#$ -pe smp.pe 4

module load apps/intel-17.0/onetep/6.1.1.6

mpirun -n $NSLOTS onetep-mpi input
   # Where input is your input file.
   # You may also need other files in the working dir e.g. recpot files
   # $NSLOTS is automatically set to the number of cores on the -pe line

Submit the jobscript using:

qsub scriptname

where scriptname is the name of your jobscript.

Parallel batch job submission using MPI on 48 to 120 cores

Create a batch submission script (which will load the modulefile in the jobscript), for example:

#!/bin/bash --login
#$ -cwd             # Job will run from the current directory
                    # NO -V line - we load modulefiles in the jobscript
#$ -pe mpi-24-ib.pe 48

module load apps/intel-17.0/onetep/6.1.1.6

mpirun -n $NSLOTS onetep-mpi input
   # Where input is your input file.
   # You may also need other files in the working dir e.g. recpot files
   # $NSLOTS is automatically set to the number of cores on the -pe line

Submit the jobscript using:

qsub scriptname

where scriptname is the name of your jobscript.

Parallel batch job submission using OpenMP and MPI on 48 to 120 cores (Mixed Mode)

This method should only be used for multi-node (e.g. mpi-24-ib.pe or HPC Pool) jobs.

Create a batch submission script (which will load the modulefile in the jobscript), for example:

#!/bin/bash --login
#$ -cwd
#$ -pe mpi-24-ib.pe 48           # This gives the job 2 x 24-core compute nodes

module load apps/intel-17.0/onetep/6.1.1.6

# Instruct each MPI process to use 12 OpenMP threads
export OMP_NUM_THREADS=12

# Run 4 MPI processes in total, one per socket. The --map-by flag describes this distribution.
mpirun -n 4 --map-by ppr:1:socket:pe=$OMP_NUM_THREADS onetep-mpi-omp input
      # No. of MPI processes x No. of OpenMP threads = total cores requested
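
In this example the arithmetic is: 4 MPI processes x 12 OpenMP threads per process = 48 cores, matching the 48 cores requested on the -pe line. If you prefer not to hard-code the MPI process count, a minimal sketch (assuming OMP_NUM_THREADS divides $NSLOTS exactly, and using a helper variable NPROCS introduced here purely for illustration) is:

export OMP_NUM_THREADS=12
NPROCS=$(( NSLOTS / OMP_NUM_THREADS ))       # e.g. 48 / 12 = 4 MPI processes
mpirun -n $NPROCS --map-by ppr:1:socket:pe=$OMP_NUM_THREADS onetep-mpi-omp input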

Submit the jobscript using:

qsub scriptname

where scriptname is the name of your jobscript.

If you would like to know more about running mixed-mode jobs, please see our HPC Pool documentation.

Further info

Updates

None.
