Quantum Espresso

If you are a Windows user, please create your jobscript directly ON THE CSF using gedit. This will prevent your job going into error (Eqw). Text files created on Windows contain hidden carriage-return characters that Linux tools cannot process. For further information please see the guide to using the system from Windows, in particular the section about text & batch submission script files.
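If a script was already created on Windows, the stray carriage returns can be detected and stripped on the CSF itself. A minimal sketch using standard tools (dos2unix, where installed, does the same job):

```shell
# Simulate a file saved on Windows: each line ends in \r\n (CRLF)
printf '#!/bin/bash --login\r\n#SBATCH -p serial\r\n' > jobscript.sh

file jobscript.sh        # reports "... with CRLF line terminators"

# Strip the carriage returns in place; the file is then safe to submit
sed -i 's/\r$//' jobscript.sh

file jobscript.sh        # now reported as plain text
```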

Overview

Quantum Espresso is a suite of applications for performing electronic structure calculations using Density Functional Theory (DFT). At its core are two principal applications, PWscf and CP, for performing plane-wave self-consistent field calculations and Car-Parrinello molecular dynamics respectively. There are also applications for studying properties such as phonons and excitation spectra, as well as chemical reaction pathways.

Set up procedure

Load one of the following modulefiles (choose one; the versions were built with different Intel compilers):

module load apps/intel-17.0/quantum-espresso/6.4
module load apps/intel-18.0/quantum-espresso/6.7

Running the application

Serial batch job submission

The following batch script will launch a serial job and run the PWscf application, pw.x:

#!/bin/bash --login
#SBATCH -p serial
#SBATCH -t 0-1

module load apps/intel-18.0/quantum-espresso/6.7

pw.x < myqejob.in > myqejob.out

Submit the job to the queue by running sbatch jobscript.sh, where jobscript.sh is the name of your batch script.
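The script assumes an input file, myqejob.in, exists in the submission directory. As an illustration only, a minimal PWscf input resembles the classic bulk-silicon example shipped with Quantum Espresso; the pseudopotential filename below is an assumption and must match a .UPF file actually present in pseudo_dir:

```
&control
   calculation = 'scf'
   prefix = 'silicon'
   pseudo_dir = './'
   outdir = './'
/
&system
   ibrav = 2, celldm(1) = 10.2,
   nat = 2, ntyp = 1,
   ecutwfc = 18.0
/
&electrons
/
ATOMIC_SPECIES
 Si  28.086  Si.pz-vbc.UPF
ATOMIC_POSITIONS alat
 Si 0.00 0.00 0.00
 Si 0.25 0.25 0.25
K_POINTS automatic
 4 4 4 1 1 1
```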

Single node parallel batch job submission

The following batch script will request 8 cores and again run pw.x. To use multiple processes we need to launch the application with mpirun:

#!/bin/bash --login
#SBATCH -p multicore  # (or --partition=) Job will use the compute nodes reserved for parallel jobs.
#SBATCH -n 8          # (or --ntasks=) Number of cores to use.
#SBATCH -t 0-1        # This is the wallclock time limit. 0-1 is 1 hour. Job will be terminated if
                      # still running after 1 hour.

module load apps/intel-18.0/quantum-espresso/6.7

mpirun pw.x -i myqejob.in > myqejob.out # No need to specify -np, mpirun will detect this.
                                        # The -i flag is more reliable than stdin redirection under MPI.

Again submit the job to the queue by running sbatch jobscript.sh, where jobscript.sh is the name of your batch script.

Multi-node parallel batch job submission

You need an hpcpool project account code to do this. If you do not know what this is, start with the single-node parallel batch job submission above.

Large DFT jobs can be very demanding, requiring the aggregate CPU resources and RAM of multiple nodes. The application is launched in exactly the same manner as before, but this time we request multiple nodes:

#!/bin/bash --login
#SBATCH -p hpcpool        # The "partition" - named hpcpool
#SBATCH -N 4              # (or --nodes=) Minimum is 4, Max is 32. Job uses 32 cores on each node.
#SBATCH -n 128            # (or --ntasks=) TOTAL number of tasks. Max is 1024.
#SBATCH -t 1-0            # Wallclock limit. 1-0 is 1 day. Maximum permitted is 4-0 (4-days).
#SBATCH -A hpc-proj-name  # Use your HPC project code

module load apps/intel-18.0/quantum-espresso/6.7

mpirun pw.x -i myqejob.in > myqejob.out # No need to specify -np, mpirun will detect this.
                                        # The -i flag is more reliable than stdin redirection under MPI.

Once again submit the job to the queue by running sbatch jobscript.sh, where jobscript.sh is the name of your batch script.
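A quick sanity check before submitting: on these nodes the value of -n must equal -N multiplied by the cores used per node (32 here). A simple shell calculation confirms the numbers in the script above:

```shell
nodes=4              # value passed to #SBATCH -N
cores_per_node=32    # cores used on each hpcpool node
ntasks=$(( nodes * cores_per_node ))
echo "$ntasks"       # prints 128, the value for #SBATCH -n
```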

Further info

Updates

None.

Last modified on May 5, 2026 at 2:55 pm by Martin Wolstencroft