Molpro

Overview

Molpro is a comprehensive system of ab initio programs for advanced molecular electronic structure calculations.

The following versions are installed on CSF3: 2021.2.1 (MPI builds), and 2021.1.0, 2019.2.0, 2018.2.0 and 2015.1.27 (OpenMP builds). Multi-node parallelism using MPI is supported, though there are currently no multi-node resources in CSF3.

Restrictions on use

This software is restricted to a specific research group. Please cite the software when used in your research.

Set up procedure

To access the software you must load one of the following modulefiles in your jobscript:

module load apps/binapps/molpro/2021.2.1_mpipr
module load apps/binapps/molpro/2021.2.1_sockets
module load apps/binapps/molpro/2021.1.0_omp
module load apps/binapps/molpro/2019.2.0_omp
module load apps/binapps/molpro/2018.2.0_omp
module load apps/binapps/molpro/2015.1.27_omp
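
The _omp modulefiles provide the OpenMP (multi-threaded) builds and the _mpipr and _sockets modulefiles provide the MPI builds. For example, to load the 2018.2.0 OpenMP build and confirm it is active (this check is safe to run on the login node):

module load apps/binapps/molpro/2018.2.0_omp
module list                       # the molpro modulefile should appear in the list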

Running the application

Please do not run Molpro on the login node; jobs should be submitted to the compute nodes via the batch system. Molpro is supplied with a wrapper script named molpro, which runs the actual Molpro binary, molpro.exe, with the requested number of cores. To see the available options, run

molpro -h

on the login node. Listing the options in this way is fine, but please do NOT run simulations on the login node.
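
Molpro reads its calculation from a plain-text input file, described in detail in the Molpro manual. As an illustrative sketch only (the molecule, geometry parameters, basis set and method below are arbitrary examples, not CSF3 requirements), a minimal input might look like:

***,h2o test                          ! title line
r=1.85,theta=104                      ! geometry parameters (bohr, degrees)
geometry={O;                          ! Z-matrix geometry input
          H1,O,r;
          H2,O,r,H1,theta}
basis=cc-pVDZ                         ! basis set
hf                                    ! closed-shell Hartree-Fock calculation

If this were saved as, for example, h2o_test.inp, that filename would take the place of args in the jobscripts below.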

Serial batch job submission

Make sure you have your input file in the current directory and then create a jobscript in that directory. For example:

#!/bin/bash --login
#$ -S /bin/bash
#$ -cwd             # Job will run from the current directory

module load apps/binapps/molpro/2018.2.0_omp

# Replace 'args' with the name of your Molpro input file (plus any options)
molpro args

Submit the jobscript using:

qsub scriptname

where scriptname is the name of your jobscript.
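
You can monitor the job with the usual batch commands. Molpro normally writes its main output to a file named after the input file in the same directory, so, assuming an input file called h2o_test.inp as in the sketch above:

qstat                             # shows your job as queued (qw) or running (r)
less h2o_test.out                 # Molpro's main output file once the job has run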

Single-node Parallel batch job submission

Single-node parallel jobs can be run using MPI (multiple molpro processes are started) or with OpenMP (a single molpro process is started that runs multiple threads). The results and efficiency of these two methods may differ.

Make sure you have your input file in the current directory and then create a jobscript in that directory. For example:

#!/bin/bash --login
#$ -S /bin/bash
#$ -cwd             # Job will run from the current directory
#$ -pe smp.pe 8     # Number of cores (max 32)

module load apps/binapps/molpro/2018.2.0_omp

# Run molpro with multiple MPI processes on a single node
# $NSLOTS is automatically set to the number of cores set above
molpro -n $NSLOTS args

### OR - use only ONE of these two run commands in your jobscript ###

# Run molpro.exe with multiple threads (using OpenMP) on a single node.
# Note that running the molpro helper script always tries to start MPI
# processes.
molpro.exe -t $NSLOTS args

Submit the jobscript using:

qsub scriptname

Multi-node Parallel batch job submission – not available on CSF3 at the moment

A multi-node parallel job must use the MPI method of starting molpro.

Make sure you have the modulefile loaded, then create a batch submission script, for example:

#!/bin/bash --login
#$ -S /bin/bash
#$ -cwd                   # Job will run from the current directory
#$ -pe ??????.pe ??   # Minimum of 48 cores, must be a multiple of 24

molpro -n $NSLOTS args

Submit the jobscript using:

qsub scriptname

Experimental Multi-node Parallel Job – not applicable to CSF3 at the moment

It is also possible to start molpro.exe directly with mpirun, as we do with other MPI applications. In this case you must also load an MPI modulefile. For example:

# This is suitable for fully-populated nodes (where you are using all cores on the node)
module load mpi/intel-14.0/openmpi/1.8.3m-ib
module load apps/binapps/molpro/2015.1.0

Then submit a batch job containing:

#!/bin/bash --login
#$ -S /bin/bash
#$ -cwd                   # Job will run from the current directory
#### See previous example for other PEs
#$ -pe ?????.pe ???   # Minimum of 48 cores, must be a multiple of 24

# We start the molpro.exe with mpirun:

mpirun -n $NSLOTS molpro.exe args

Further info

Updates

None.
