The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
Molpro
Overview
Molpro is a comprehensive system of ab initio programs for advanced molecular electronic structure calculations.
Versions 2015.1.0, 2015.1.5 (OpenMP) and 2015.1.27 (OpenMP) are installed on the CSF. All versions support multi-node parallelism using MPI.
Restrictions on use
This software is restricted to a specific research group. Please cite the software when used in your research.
Set up procedure
To access the software you must first load one of the following modulefiles:
module load apps/binapps/molpro/2015.1.0
module load apps/binapps/molpro/2015.1.5_omp
module load apps/binapps/molpro/2015.1.27_omp
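To confirm which version you have loaded and locate the wrapper script, you can run, for example:
module list
which molpro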
Running the application
Please do not run molpro on the login node; jobs should be submitted to the compute nodes via the batch system. Molpro is supplied with a wrapper script named molpro which runs the actual molpro binary molpro.exe with the requested number of cores. To see the available options run
molpro -h
on the login node (checking the options there is fine, but please do NOT run simulations on the login node).
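In the examples below, args stands for the Molpro command-line arguments, typically the name of an input file. As an illustrative sketch only (the filename h2o.inp and the molecule are hypothetical, not taken from this documentation), a minimal Molpro input file might look like:
***,H2O single-point energy    ! title line
geometry={                     ! Z-matrix geometry
 O;
 H,1,0.96;
 H,1,0.96,2,104.5
}
basis=vdz                      ! cc-pVDZ basis set
hf                             ! run a Hartree-Fock calculation
which would then be run in a jobscript as molpro h2o.inp.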
Serial batch job submission
Make sure you have the modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings

molpro args
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
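You can then monitor the job with, for example:
qstat
By default the job's output is written to files named scriptname.oJOBID (standard output) and scriptname.eJOBID (standard error) in the directory the job runs from.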
Single-node parallel batch job submission
Single-node parallel jobs can be run using MPI (multiple molpro processes are started) or with OpenMP (a single molpro process is started that runs multiple threads). The results and efficiency of the two methods may differ. OpenMP parallelism requires one of the OpenMP builds to be loaded, for example the 2015.1.5_omp modulefile.
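That is, before submitting an OpenMP job run:
module load apps/binapps/molpro/2015.1.5_omp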
Make sure you have the modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings
#$ -pe smp.pe 8      # Number of cores (max 24)

# $NSLOTS is automatically set to the number of cores set above

# Run molpro with multiple MPI processes on a single node
molpro -n $NSLOTS args

### OR ###

# Run molpro.exe with multiple threads (using OpenMP) on a single node.
# Note, running the molpro helper script always tries to start MPI
# processes.
molpro.exe -t $NSLOTS args
Submit the jobscript using:
qsub scriptname
Multi-node parallel batch job submission
A multi-node parallel job must use the MPI method of starting molpro.
Make sure you have the modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd                  # Job will run from the current directory
#$ -V                    # Job will inherit current environment settings
#$ -pe orte-24-ib.pe 48  # Minimum of 48 cores, must be a multiple of 24

molpro -n $NSLOTS args
Submit the jobscript using:
qsub scriptname
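To use more nodes, increase the core count in multiples of 24. For example, to request three complete 24-core nodes, change the -pe line to:
#$ -pe orte-24-ib.pe 72  # 72 cores = 3 x 24-core nodes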
Experimental multi-node parallel job
It is also possible to start molpro.exe directly with mpirun, as we do with other MPI applications. In this case you must load an MPI modulefile. For example:
# This is suitable for fully-populated nodes (where you are using all
# cores on the node)
module load mpi/intel-14.0/openmpi/1.8.3m-ib
module load apps/binapps/molpro/2015.1.0
Then submit a batch job containing:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd                  # Job will run from the current directory
#$ -V                    # Job will inherit current environment settings

#### See previous example for other PEs
#$ -pe orte-24-ib.pe 48  # Minimum of 48 cores, must be a multiple of 24

# We start the molpro.exe with mpirun:
mpirun -n $NSLOTS molpro.exe args
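Submit the jobscript as before using:
qsub scriptname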
Further info
Updates
None.