ONETEP

Overview

ONETEP (Order-N Electronic Total Energy Package) is a linear-scaling code for quantum-mechanical calculations based on density-functional theory.

Version 7.2 is installed on the CSF. It was compiled with the GNU compiler suite 14.2.0 and also with the Intel OneAPI 2024.2.0 compiler suite (for the HPC Pool nodes).

An older version from the CSF prior to the RHEL9/Slurm upgrade is available as version 6.1.1.6. It was compiled with Intel Fortran 17.

Restrictions on use

This software is only available to users from Prof. Kaltsoyannis’ group. All users who ask for access must be approved by Prof. Kaltsoyannis before access can be granted.

All users being added to the group must confirm that they will abide by the terms of the license. The information below outlines the main terms, but is not a substitute for the license:

  • You must be a member of staff or student at the University of Manchester.
  • This software is only for academic use and your research needs.
  • Use of the Software, or any code which is a modification of, enhancement to, derived from or based upon the Software, for industrially funded research or for providing services to a non-academic third party is expressly prohibited, except in the case of a member of the Group carrying out research that is funded by a CASE studentship.
  • All published work (including journal and conference papers and theses) produced in part using the Software must cite the paper “Introducing ONETEP: Linear-scaling density functional simulations on parallel computers”, C.-K. Skylaris, P. D. Haynes, A. A. Mostofi and M. C. Payne, J. Chem. Phys. 122, 084119 (2005), along with any other relevant ONETEP references. The USER must be listed as the author or a co-author on this work.
  • The license has some provision for the license holder and their research group to produce additional functionality or code development. You should not undertake such work without first seeking guidance from the University of Manchester about intellectual property.

There is no access to the source code on the CSF.

Set up procedure

We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.

Load one of the following modulefiles:

# OpenMP, OpenMPI, OpenMPI+OpenMP versions (FFTW3) [AMD Nodes]
module load apps/gcc/onetep/7.2

# OpenMP, OpenMPI, OpenMPI+OpenMP versions (MKL, ScaLAPACK) [Intel Nodes]
module load apps/intel-oneapi-2024.2.0/onetep/7.2

# OpenMP, OpenMPI, OpenMPI+OpenMP versions (MKL, ScaLAPACK)
module load apps/intel-17.0/onetep/6.1.1.6
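
If you want to check what a modulefile provides before using it in a jobscript, the standard environment-modules commands can be run on the login node. A minimal sketch (the exact output depends on the modules system on the CSF):

# List the ONETEP modulefiles that are available
module avail onetep

# Show what a modulefile sets (paths, environment variables) without loading it
module show apps/gcc/onetep/7.2

# Load it and confirm it appears in your environment
module load apps/gcc/onetep/7.2
module list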

Running the application

Please do not run ONETEP on the login node. Jobs should be submitted to the compute nodes via batch.

There are 3 ‘flavours’ available on the CSF3 – one which uses only OpenMP parallelism (onetep-omp), one which uses only MPI parallelism (onetep-mpi), and one which uses both (onetep-mpi-omp). All three are available via the same modulefile, but you must use the correct executable name.
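
If you are unsure which executables a modulefile has placed on your PATH, you can check from the login node after loading it (checking only; do not run ONETEP itself there). A minimal sketch using the executable names listed above:

module load apps/gcc/onetep/7.2

# Print the full path of each flavour to confirm it is on your PATH
command -v onetep-omp onetep-mpi onetep-mpi-omp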

Parallel batch job submission using OpenMP on AMD Nodes (2 to 168 cores)

Create a batch submission script (which will load the modulefile in the jobscript), for example:

#!/bin/bash --login
#SBATCH -p multicore     # AMD 168-core nodes
#SBATCH -n 1             # (or --ntasks=) Only one non-MPI task
#SBATCH -c 8             # (or --cpus-per-task) Number of OpenMP threads
#SBATCH -t 4-0           # Maximum wallclock (4-0 is 4 days, max permitted is 7)

module purge
module load apps/gcc/onetep/7.2

# This will use the number of cores provided on the -c line above
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

onetep-omp input.dat
            #
            # Where input.dat is your input file. 
            # You may also need other files in the working dir e.g. recopt files

Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
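
When the job is accepted, sbatch prints its job ID. A minimal sketch of monitoring the job with standard Slurm commands (the output filename assumes Slurm’s default slurm-<jobid>.out, which applies unless your jobscript sets -o):

# Show the state of your queued and running jobs
squeue -u $USER

# Follow the job output once it starts; replace <jobid> with the number printed by sbatch
tail -f slurm-<jobid>.out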

Parallel batch job submission using MPI on AMD Nodes (2 to 168 cores)

Create a batch submission script (which will load the modulefile in the jobscript), for example:

#!/bin/bash --login
#SBATCH -p multicore     # AMD 168-core nodes
#SBATCH -n 8             # (or --ntasks=) Number of cores for MPI tasks
#SBATCH -t 4-0           # Maximum wallclock (4-0 is 4 days, max permitted is 7)

module purge
module load apps/gcc/onetep/7.2

# mpirun will start $SLURM_NTASKS processes (the number supplied by -n above)
mpirun onetep-mpi input.dat
            #
            # Where input.dat is your input file. 
            # You may also need other files in the working dir e.g. recopt files

Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.

Parallel batch job submission using MPI on the HPC Pool (multinode, 128-1024 cores)

Create a batch submission script (which will load the modulefile in the jobscript), for example:

#!/bin/bash --login
#SBATCH -p hpcpool       # Intel 32-core nodes
#SBATCH -n 128           # (or --ntasks=) Number of cores for MPI tasks
#SBATCH -t 4-0           # Maximum wallclock (4-0 is 4 days, max permitted is 4)
#SBATCH -A hpc-projcode  # All HPC-Pool projects require an account code

module purge
module load apps/gcc/onetep/7.2

# mpirun will start $SLURM_NTASKS processes (the number supplied by -n above)
mpirun onetep-mpi input.dat
            #
            # Where input.dat is your input file. 
            # You may also need other files in the working dir e.g. recopt files

Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
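
If you do not know which account code to supply with -A, the Slurm accounting tools can list the accounts associated with your username, assuming they are exposed to users on the CSF (a hedged sketch, not CSF-specific advice):

# List the Slurm accounts (project codes) your username may submit against
sacctmgr show associations user=$USER format=Account,Partition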

Parallel batch job submission using OpenMP and MPI (mixed-mode) on the HPC Pool (multinode, 128-1024 cores)

This method should be used only for multi-node (HPC Pool) jobs.

Create a batch submission script (which will load the modulefile in the jobscript), for example:

#!/bin/bash --login
#SBATCH -p hpcpool       # Intel 32-core nodes
#SBATCH -N 4             # (or --nodes=) Minimum is 4, Max is 32. Job uses 32 cores on each node.
#SBATCH -n 4             # (or --ntasks=) TOTAL number of tasks - the MPI processes.
#SBATCH -c 32            # (or --cpus-per-task=) Number of cores per MPI process.
#SBATCH -t 4-0           # Maximum wallclock (4-0 is 4 days, max permitted is 4)
#SBATCH -A hpc-projcode  # All HPC-Pool projects require an account code

module purge
module load apps/gcc/onetep/7.2

# SLURM_NTASKS will be set to 4 (-n above)
# SLURM_CPUS_PER_TASK will be set to 32 (-c above)

# Instruct each MPI process to use 32 OpenMP threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Run 4 MPI processes in total, one per compute node. The --map-by flag describes this distribution.
mpirun -n $SLURM_NTASKS --map-by ppr:1:node:PE=$OMP_NUM_THREADS onetep-mpi-omp input.dat

Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
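
To scale the mixed-mode layout, keep one MPI process per node (so -n equals -N) and 32 OpenMP threads per process; the total core count is then the number of nodes multiplied by 32. For example, a hypothetical 8-node (256-core) variant would change only the resource request lines of the jobscript above:

#SBATCH -p hpcpool       # Intel 32-core nodes
#SBATCH -N 8             # 8 nodes x 32 cores per node = 256 cores in total
#SBATCH -n 8             # One MPI process per node
#SBATCH -c 32            # 32 OpenMP threads per MPI process
#SBATCH -t 4-0           # Maximum wallclock (max permitted is 4 days)
#SBATCH -A hpc-projcode  # All HPC-Pool projects require an account code

The module load, OMP_NUM_THREADS export and mpirun lines remain exactly as in the example above.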

If you would like to know more about running mixed mode jobs please see our HPC Pool documentation.

Further info

Updates

None.

Last modified on January 23, 2026 at 4:20 pm by George Leaver