OpenMolcas
Overview
OpenMolcas is an open source version of MOLCAS.
Versions 18.09 and 20.10 are installed on CSF4, together with a 20.10 build with DMRG enabled. The standard builds use MPI for parallelism; the DMRG version is provided as both an MPI build and an OpenMP (single-node) build (see below).
Restrictions on use
OpenMolcas is open source software released under the Lesser General Public License (LGPL). It is free to use by all members of the University.
Set up procedure
To access the software you must first load one of the following modulefiles:
module load openmolcas/20.10-iomkl-2020.02-python-3.8.2
module load openmolcas/18.09-iomkl-2020.02-python-3.8.2
The Density Matrix Renormalization Group (DMRG) version has been compiled and is available via the following modulefiles:
# Uses commit 71e2b130 26/11/2020 from the qcmaquis-release with patch from the Chilton group.

# MPI parallel for multi-node jobs
module load openmolcas-dmrg/20.10-iomkl-2020.02-python-3.8.2

# OpenMP parallel for single-node (multicore) jobs
module load openmolcas-dmrg/20.10-iimkl-2020.02-python-3.8.2
We now recommend that for batch jobs you load the modulefile in the jobscript rather than loading it on the command line prior to submission. See below for examples.
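If you are unsure which versions are available, or whether a modulefile has loaded correctly, a quick check from the command line is shown below (a minimal sketch; the exact listing depends on what is installed on CSF4):

# List the OpenMolcas modulefiles installed on the system
module avail openmolcas

# Load a version and confirm that the pymolcas driver is on your PATH
module load openmolcas/20.10-iomkl-2020.02-python-3.8.2
which pymolcas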
Running the application
Please do not run OpenMolcas on the login node. Jobs should be submitted to the compute nodes via the batch system. NOTE: we now recommend loading the modulefile within your batch script (see the examples below).
Serial batch job submission
Create a batch submission script, for example:
#!/bin/bash --login

module load openmolcas/20.10-iomkl-2020.02-python-3.8.2

pymolcas mymol.input
#
# Add the command:
#   pymolcas -clean mymol.input
# to have the temporary scratch directory deleted at the end of the job (see below)
Submit the jobscript using sbatch scriptname
where scriptname is the name of your jobscript.
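For example, if you saved the script above as a file called myjob.sbatch (a hypothetical name), you could submit it and then check its progress with:

sbatch myjob.sbatch
squeue -u $USER        # show the state of your pending and running jobs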
Single Node Parallel batch job submission
Parallel jobs on a single compute node are possible (the -np flag below runs multiple MPI processes within the node). Multi-node MPI jobs are covered in the next section.
Create a batch submission script, for example:
#!/bin/bash --login
#SBATCH -p multicore      # (or --partition=) Single-node multi-core job
#SBATCH -n 16             # (or --ntasks=) Number of cores (2--40)

# Load the version you require
module load openmolcas/20.10-iomkl-2020.02-python-3.8.2

pymolcas -np $SLURM_NTASKS mymol.input
#
# Add the command:
#   pymolcas -clean mymol.input
# to have the temporary scratch directory deleted at the end of the job (see below)
Submit the jobscript using sbatch scriptname
where scriptname is the name of your jobscript.
Multi Node Parallel batch job submission
Parallel jobs on multiple compute nodes using MPI are possible. However, not all OpenMolcas program modules benefit from this parallelisation; please check the OpenMolcas documentation.
Create a batch submission script, for example:
#!/bin/bash --login
#SBATCH -p multinode      # (or --partition=) Multi-node job
#SBATCH -n 80             # (or --ntasks=) 80 or more cores in multiples of 40

# Load the version you require
module load openmolcas/20.10-iomkl-2020.02-python-3.8.2

pymolcas -np $SLURM_NTASKS mymol.input
Submit the jobscript using sbatch scriptname
where scriptname is the name of your jobscript.
OpenMolcas DMRG
The Density Matrix Renormalization Group (DMRG) version has been compiled from the qcmaquis-release gitlab branch. A patch file, supplied by Dr. Nick Chilton, has also been applied to the source tree to make various modifications.
MPI parallel for single or multi-node jobs. For example:
#!/bin/bash
#SBATCH -p multinode
#SBATCH -n 80             # Can be 80 or more cores in multiples of 40

module load openmolcas-dmrg/20.10-iomkl-2020.02-python-3.8.2

# OpenMolcas itself will use MPI parallelism
pymolcas -np $SLURM_NTASKS input
OpenMP parallel for single-node (multicore) jobs:
#!/bin/bash
#SBATCH -p multicore
#SBATCH -n 16             # Can be 2--40 cores

module load openmolcas-dmrg/20.10-iimkl-2020.02-python-3.8.2

# The maths libraries may use multiple threads to speed up execution
export OMP_NUM_THREADS=$SLURM_NTASKS

# OpenMolcas itself will not use MPI parallelism so do not add the -np flag
pymolcas input
OpenMolcas Scratch (temp) files
It is possible to modify how OpenMolcas uses your scratch directory for temporary files. Please read the following section so that you are aware of what OpenMolcas is doing with your scratch directory (you may create a lot of temporary junk files you do not need to keep).
The modulefiles above set the following environment variable:
MOLCAS_WORKDIR=/scratch/username
where username is your CSF username. This instructs OpenMolcas to create a directory in your scratch area named after your input file. For example, if your input file is called test000.input then OpenMolcas will create a directory named /scratch/username/test000 in which to store temporary files used during the computation. This directory will not be deleted at the end of the job. Hence you may end up with a lot of these temporary directories if you run many jobs!
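To see how much leftover temporary data has built up, a minimal check (assuming one of the modulefiles above has been loaded so that MOLCAS_WORKDIR is set) is:

# Show where OpenMolcas writes its temporary directories
echo $MOLCAS_WORKDIR

# List the contents of your scratch area and report sizes
# (OpenMolcas temporary directories are named after your input files)
ls $MOLCAS_WORKDIR
du -sh $MOLCAS_WORKDIR/*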
To instruct OpenMolcas to delete this directory at the end of the job, add the -clean flag to the pymolcas command in your jobscript. For example:
# Automatically delete the temporary scratch directory at the end of the job (RECOMMENDED)
pymolcas -clean test000.input
If you wish to keep the temporary directories and use a different temporary directory name each time you run (and rerun) the same input file (e.g., if you run the test000.input input with a different number of CPU cores to do some timing tests), you should instruct OpenMolcas to add a random number to the directory name by adding the following to your jobscript:
# OpenMolcas will add a random number to the temporary directory name
export MOLCAS_PROJECT=NAMEPID
Removing the -clean flag from the pymolcas command in your jobscript will prevent OpenMolcas from deleting the temporary directory.
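Putting this together, a sketch of a jobscript for such timing tests, which keeps a uniquely named temporary directory for each run (the input filename and core count are illustrative), might look like:

#!/bin/bash --login
#SBATCH -p multicore
#SBATCH -n 16             # Number of cores used for this timing run

module load openmolcas/20.10-iomkl-2020.02-python-3.8.2

# Add a unique number to the temporary directory name so repeated runs of the
# same input file do not reuse the same scratch directory
export MOLCAS_PROJECT=NAMEPID

# No -clean flag, so the temporary directory is kept at the end of the job
pymolcas -np $SLURM_NTASKS test000.input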
Using a Job Array
If running OpenMolcas in a job array you may need to create a directory per task, otherwise the temporary directories and files created by OpenMolcas will overwrite each other when several job array tasks run at the same time. Remember that OpenMolcas uses the name of your input file when creating its temporary directory, so if each task in the job array uses the same OpenMolcas input filename this will cause a problem. To fix this, please add the following to your jobscript before the line that runs OpenMolcas:
export MOLCAS_WORKDIR=/scratch/$USER/molcas_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}
mkdir -p $MOLCAS_WORKDIR
Each task in the job array will then have its own directory. Within it there will be a directory named after the input file (see above).
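For example, a sketch of a job array jobscript, assuming each task runs the same input filename from its own sub-directory (the run_${SLURM_ARRAY_TASK_ID} layout and core count are illustrative):

#!/bin/bash --login
#SBATCH -p multicore
#SBATCH -n 4              # Cores per task (illustrative)
#SBATCH -a 1-10           # (or --array=) Run tasks 1 to 10

module load openmolcas/20.10-iomkl-2020.02-python-3.8.2

# Give each job array task its own scratch area so that tasks do not
# overwrite each other's temporary files
export MOLCAS_WORKDIR=/scratch/$USER/molcas_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}
mkdir -p $MOLCAS_WORKDIR

# Assumed directory layout: run_1, run_2, ... each containing test000.input
cd run_${SLURM_ARRAY_TASK_ID}
pymolcas -np $SLURM_NTASKS test000.input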
Further info
Updates
None.