ONETEP
Overview
ONETEP (Order-N Electronic Total Energy Package) is a linear-scaling code for quantum-mechanical calculations based on density-functional theory.
Version 6.1.1.6 is installed on CSF4. It was compiled with Intel 2020.02.
Restrictions on use
This software is only available to users from Prof. Kaltsoyannis’ group. All users who ask for access must be approved by Prof. Kaltsoyannis before access can be granted.
All users being added to the group must confirm that they will abide by the terms of the license. The information below outlines the main terms, but is not a substitute for the license:
- You must be a member of staff or student at the University of Manchester.
- This software is only for academic use and your research needs.
- Use of the Software, or any code which is a modification of, enhancement to, derived from or based upon the Software, for industrially funded research or for providing services to a non-academic third party is expressly prohibited, except in the case of a member of the Group carrying out research that is funded by a CASE studentship.
- All published work, including journal and conference papers and theses, produced in part using the Software must cite the paper "Introducing ONETEP: Linear-scaling density functional simulations on parallel computers", C.-K. Skylaris, P. D. Haynes, A. A. Mostofi and M. C. Payne, J. Chem. Phys. 122, 084119 (2005), along with any other relevant ONETEP references. The USER must be listed as the author or a co-author on this work.
- The license has some provision for the license holder and their research group to produce additional functionality or code development. You should not undertake such work without first seeking guidance from the University of Manchester about intellectual property.
There is no access to the source code on the CSF.
Set up procedure
Load one of the following modulefiles:
module load onetep/6.1.1.6-iomkl-2020.02
Running the application
Please do not run ONETEP on the login node. Jobs should be submitted to the compute nodes via batch.
There are three ‘flavours’ available on the CSF4 – one which uses only OpenMP parallelism (onetep-omp), one which uses only MPI parallelism (onetep-mpi), and one which uses both (onetep-mpi-omp). All three are available via the same modulefile, but you must use the correct executable name.
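The mapping from parallelism mode to executable name can be summarised in a small helper. This is an illustrative sketch, not part of the installed software — only the three executable names come from the documentation above; the function name is hypothetical:

```shell
#!/bin/bash
# pick_onetep_exe: print the ONETEP executable name for a given
# parallelism mode (hypothetical helper; executable names are the
# three flavours listed above).
pick_onetep_exe() {
  case "$1" in
    omp)   echo onetep-omp ;;      # OpenMP only (single node)
    mpi)   echo onetep-mpi ;;      # MPI only
    mixed) echo onetep-mpi-omp ;;  # MPI + OpenMP (mixed mode)
    *)     echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

pick_onetep_exe mixed   # prints onetep-mpi-omp
```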
Parallel batch job submission using OpenMP on 2 to 40 cores
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multicore     # (or --partition=) Single-node multi-core job
#SBATCH -n 20            # (or --ntasks=) Number of cores (2--40)

# Load the version you require
module load onetep/6.1.1.6-iomkl-2020.02

# Inform ONETEP how many OpenMP threads it can use.
# Will use the number specified above.
export OMP_NUM_THREADS=$SLURM_NTASKS

onetep-omp input
  #
  # Where input is your input file.
  # You may also need other files in the
  # working directory, e.g. recopt files.
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
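In the jobscript above, OMP_NUM_THREADS is taken from $SLURM_NTASKS, which SLURM sets inside a batch job. Outside a job that variable is unset, so if you ever run the same line interactively a fallback is useful. This is a suggested pattern, not part of the official jobscript; the fallback value of 1 is an assumption:

```shell
#!/bin/bash
# Outside a SLURM job $SLURM_NTASKS is unset; default to 1 thread
# (assumed safe fallback) rather than leaving OMP_NUM_THREADS empty.
unset SLURM_NTASKS
export OMP_NUM_THREADS=${SLURM_NTASKS:-1}
echo "$OMP_NUM_THREADS"   # prints 1
```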
Small parallel batch job submission using MPI on 2 to 40 cores
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multicore     # (or --partition=) Single-node multi-core job
#SBATCH -n 20            # (or --ntasks=) Number of cores (2--40)

# Load the version you require
module load onetep/6.1.1.6-iomkl-2020.02

# mpirun knows how many cores to use
mpirun onetep-mpi input
  #
  # Where input is your input file.
  # You may also need other files in the
  # working directory, e.g. recopt files.
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Large parallel batch job submission using MPI on 80 or more cores
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multinode     # (or --partition=) Multi-node job
#SBATCH -n 80            # (or --ntasks=) 80 or more cores in multiples of 40

# Load the version you require
module load onetep/6.1.1.6-iomkl-2020.02

# mpirun knows how many cores to use
mpirun onetep-mpi input
  #
  # Where input is your input file.
  # You may also need other files in the
  # working directory, e.g. recopt files.
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
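The multinode partition expects whole nodes, so the core count must be at least 80 and a multiple of 40 (the cores per CSF4 compute node). An illustrative guard you could run before submitting — not taken from the official docs, the function name is hypothetical:

```shell
#!/bin/bash
# check_multinode_cores: verify a requested core count fits the
# multinode partition rules stated above (>= 80, multiple of 40).
check_multinode_cores() {
  local n=$1
  if (( n >= 80 && n % 40 == 0 )); then
    echo "ok: $n cores = $(( n / 40 )) full nodes"
  else
    echo "bad: -n must be >= 80 and a multiple of 40" >&2
    return 1
  fi
}

check_multinode_cores 80    # prints "ok: 80 cores = 2 full nodes"
```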
Large Parallel batch job submission using OpenMP and MPI on 80 or more cores (Mixed Mode)
This method should be used only for multi-node jobs where a small number of MPI tasks will be run and each MPI task will use multiple OpenMP threads. The example below uses three compute nodes (each compute node has 40 cores), with two MPI tasks per compute node (hence 6 in total), and each MPI task will use 20 cores (hence 120 cores in total).
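The arithmetic behind that geometry can be checked with plain shell arithmetic. A minimal sketch, using the numbers from the example above (3 nodes, 6 tasks, 20 threads per task; 40 cores per CSF4 node):

```shell
#!/bin/bash
# Sanity-check the mixed-mode geometry: (n/N)*c should equal 40,
# the number of cores on a CSF4 compute node.
nodes=3; ntasks=6; cpus_per_task=20
tasks_per_node=$(( ntasks / nodes ))                  # 6/3 = 2 MPI tasks per node
cores_per_node=$(( tasks_per_node * cpus_per_task ))  # 2*20 = 40 (fills a node)
total_cores=$(( ntasks * cpus_per_task ))             # 6*20 = 120 cores overall
echo "$cores_per_node $total_cores"                   # prints "40 120"
```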
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multinode     # (or --partition=) Multi-node job
#SBATCH -N 3             # (or --nodes=) Number of compute nodes (can be 2 or more)
#SBATCH -n 6             # (or --ntasks=) Number of MPI tasks to run in total. They will be
                         # spread across the nodes specified above.
#SBATCH -c 20            # (or --cpus-per-task=) Number of OpenMP threads to be used by each
                         # MPI task. You should ensure (n/N)*c = 40.

# Load the version you require
module load onetep/6.1.1.6-iomkl-2020.02

# Inform each MPI task how many OpenMP cores to use
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# For an MPI+OpenMP app SLURM knows to run numtasks MPI procs across numnodes nodes
mpirun onetep-mpi-omp input
  #
  # Where input is your input file.
  # You may also need other files in the
  # working directory, e.g. recopt files.
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Further info
- The ONETEP website has documentation, tutorials, keyword information and an FAQ.
Updates
None.