CP2K

Overview

CP2K is a program for performing atomistic and molecular simulations of solid-state, liquid, molecular, and biological systems. It provides a general framework for a range of methods, such as density functional theory (DFT) using a mixed Gaussian and plane waves (GPW) approach, and classical pair and many-body potentials.

Version 6.1.0 is installed on the CSF. The serial and SMP versions were compiled by the developers (a “binary” install). The MPI parallel (popt) version, intended for larger multi-node jobs, was compiled by the RI Team using the Intel Compilers 19.1.2 and MKL.

Restrictions on use

The software is open source under the GNU General Public License.

Set up procedure

We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.

To access the software you must first load the appropriate modulefile.
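
If you are unsure which CP2K versions are installed, the standard environment-modules commands can be used to check before loading anything:

# List the CP2K modulefiles available on the system
module avail cp2k

# Confirm what is currently loaded in your session
module list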

The 6.1.0 modulefile sets up the pre-compiled version built by the CP2K developers. It has more features enabled but may be less optimized for CSF hardware.

# Features: libint fftw3 libxc xsmm libderiv_max_am1=6 libint_max_am=7 max_contr=4
module load cp2k/6.1.0                 # Versions: cp2k.sopt (serial)
                                       #           cp2k.ssmp (single node multi-core OpenMP parallel)

The 6.1-iomkl-2020.02 modulefile sets up the version built by the Research Infrastructure team.

# Features: libint fftw3 libxc xsmm parallel mpi3 scalapack mkl libderiv_max_am1=5 libint_max_am=6 plumed
module load cp2k/6.1-iomkl-2020.02     # Versions: cp2k.popt (single- and multi-node MPI parallel)
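
A quick way to confirm which executable a modulefile provides is to check your PATH after loading; this does not run the program, so it is safe on the login node:

module load cp2k/6.1-iomkl-2020.02
which cp2k.popt                        # prints the full path of the MPI executable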

Running the application

Please do not run CP2K on the login node. Jobs should be submitted to the compute nodes via the batch system.

Serial batch job submission

Ensure you run the cp2k.sopt executable after loading the 6.1.0 modulefile.

#!/bin/bash --login

# Load the required modulefile
module load cp2k/6.1.0

cp2k.sopt -i mysim.inp

Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
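
All of the jobscripts on this page assume a CP2K input file named mysim.inp in the working directory. As a purely illustrative sketch (the H2 test system, basis set, and pseudopotential below are placeholders; consult the CP2K manual for the full input reference), a minimal single-point energy input could be created as follows:

cat > mysim.inp << 'EOF'
&GLOBAL
  PROJECT mysim              ! Prefix used for output files
  RUN_TYPE ENERGY            ! Single-point energy calculation
  PRINT_LEVEL LOW
&END GLOBAL
&FORCE_EVAL
  METHOD QUICKSTEP           ! DFT using the mixed Gaussian and plane waves (GPW) scheme
  &DFT
    BASIS_SET_FILE_NAME BASIS_MOLOPT
    POTENTIAL_FILE_NAME GTH_POTENTIALS
    &XC
      &XC_FUNCTIONAL PBE
      &END XC_FUNCTIONAL
    &END XC
  &END DFT
  &SUBSYS
    &CELL
      ABC 8.0 8.0 8.0        ! Cell edge lengths in Angstrom
    &END CELL
    &COORD
      H 0.0 0.0 0.0
      H 0.0 0.0 0.74
    &END COORD
    &KIND H
      BASIS_SET DZVP-MOLOPT-GTH
      POTENTIAL GTH-PBE-q1
    &END KIND
  &END SUBSYS
&END FORCE_EVAL
EOF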

Small OpenMP parallel batch job submission – 2 to 40 cores

Ensure you run the cp2k.ssmp executable after loading the 6.1.0 modulefile.
This version runs on a single node only, using a minimum of 2 and a maximum of 40 cores.

#!/bin/bash --login
#SBATCH -p multicore               # (or --partition=) One compute node will be used
#SBATCH -n 16                      # (or --ntasks=) Number of cores, max 40.

# Load the required version
module load cp2k/6.1.0

# Inform cp2k how many cores it can use. $SLURM_NTASKS is automatically set to the number above.
export OMP_NUM_THREADS=$SLURM_NTASKS
cp2k.ssmp -i mysim.inp

Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
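
Once submitted with sbatch, the job can be followed with standard Slurm commands (the jobscript name below is illustrative):

sbatch cp2k-job.sh            # prints: Submitted batch job <jobid>
squeue -u $USER               # shows whether your job is pending or running

# Unless redirected, the job's screen output is written to slurm-<jobid>.out
# in the directory from which the job was submitted.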

Small MPI parallel batch job submission – 40 cores or fewer

Ensure you run the cp2k.popt executable after loading the 6.1-iomkl-2020.02 modulefile.

This jobscript will run on one compute node, i.e., 40 cores or fewer.

#!/bin/bash --login
#SBATCH -p multicore               # (or --partition=) One compute node will be used
#SBATCH -n 16                      # (or --ntasks=) Number of cores, max 40.

# Load the required version
module load cp2k/6.1-iomkl-2020.02

# mpirun knows how many cores to use
mpirun cp2k.popt -i mysim.inp

Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
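
As the comment in the jobscript notes, this mpirun obtains the process count from Slurm automatically. If you ever need to state it explicitly (for example, when experimenting), the standard -np flag can be used:

# Equivalent, with the number of MPI processes given explicitly
mpirun -np $SLURM_NTASKS cp2k.popt -i mysim.inp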

Large parallel batch job submission – 80 cores or more

Ensure you run the cp2k.popt executable after loading the 6.1-iomkl-2020.02 modulefile.

This jobscript will run on two or more compute nodes, i.e., 80 cores or more in multiples of 40.

#!/bin/bash --login
#SBATCH -p multinode               # (or --partition=) Two or more compute nodes will be used
#SBATCH -n 80                      # (or --ntasks=) Number of cores: 80 or more in multiples of 40

# Load the required version
module load cp2k/6.1-iomkl-2020.02

# mpirun knows how many cores to use
mpirun cp2k.popt -i mysim.inp

Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
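
For long multi-node runs it can be worth adding a couple of optional, standard Slurm directives for a job name and wallclock limit; a sketch with placeholder values:

#!/bin/bash --login
#SBATCH -p multinode
#SBATCH -n 80
#SBATCH -J cp2k-mysim              # (or --job-name=) A recognisable name in the queue
#SBATCH -t 1-00:00:00              # (or --time=) Wallclock limit as days-hours:minutes:seconds

# Load the required version
module load cp2k/6.1-iomkl-2020.02

# mpirun knows how many cores to use
mpirun cp2k.popt -i mysim.inp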

Further info
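
CP2K website and documentation: https://www.cp2k.org/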

Updates

None.
