The CSF2 has been replaced by the CSF3 – please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
MOLCAS and MOLCAS@UU
Overview
MOLCAS is an ab-initio quantum chemistry software package with primary focus on multiconfigurational methods with applications typically connected to the treatment of highly degenerate states. This version supports serial and parallel computation.
MOLCAS@UU is a serial-only pre-compiled version of MOLCAS, provided by Uppsala University (UU). This version is free of charge.
Version 8.0 SP1 of both packages is installed on the CSF.
The GV graphics viewer application is also available (see below for how to run this correctly on the CSF).
Restrictions on use
The full MOLCAS version is restricted to a specific research group. You will NOT be given access to this version unless you are a member of that research group.
The free MOLCAS@UU version still requires registration with the software provider via the form linked to at the MOLCAS@UU site. Please send confirmation you have done so to its-ri-team@manchester.ac.uk and we will discuss installation with you.
Set up procedure
To access the software you must first load one of the following modulefiles:
- Full paid-for MOLCAS version:
# For serial jobs and small (single-node) parallel jobs
module load apps/binapps/molcas/8.0sp1

# For large multi-node parallel jobs using faster InfiniBand networking
module load apps/binapps/molcas/8.0sp1-ib
- Free MOLCAS@UU version:
# Can only run serial jobs
module load apps/binapps/molcasuu/8.0sp1
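As a quick sanity check (using standard module and shell commands, nothing MOLCAS-specific), you can confirm the modulefile has loaded and the molcas command is now on your path:

# List the modulefiles currently loaded in your session
module list

# Confirm the molcas command is now on your path
which molcas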
Running the application
Please do not run MOLCAS on the login node. Jobs should be submitted to the compute nodes via the batch system.
Serial batch job submission
Make sure you have the modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd             # Job will run from the current directory
#$ -V               # Job will inherit current environment settings

molcas mymol.input

### Use: molcas -clean mymol.input
### to have the temporary scratch directory deleted at the end of the job (see below)
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
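You can then monitor the job with the usual batch system commands; for example (the job ID below is illustrative):

# Show the status of your queued and running jobs
qstat

# Delete a job that is no longer required (replace 123456 with your job ID)
qdel 123456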
Parallel batch job submission
MOLCAS manages its own parallel start-up for a given number of cores, so you do not use the mpirun command required by most MPI applications. Instead you must set the environment variable MOLCAS_NCPUS to the number of cores you request in the jobscript. The easiest way to do this is to set MOLCAS_NCPUS=$NSLOTS, which ensures MOLCAS always uses only the number of cores reserved for your job by the batch system – see below for examples.
Single-node Parallel Job
Make sure you have the modulefile loaded (not the -ib version – see above) then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd             # Job will run from the current directory
#$ -V               # Job will inherit current environment settings
#$ -pe smp.pe 8     # Number of cores; max is 24

# $NSLOTS is automatically set to the number specified on the -pe line above
export MOLCAS_NCPUS=$NSLOTS
molcas mymol.input
Multi-node Parallel Job
Make sure you have the -ib modulefile loaded (see above) then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd                    # Job will run from the current directory
#$ -V                      # Job will inherit current environment settings
#$ -pe orte-24-ib.pe 48    # Minimum is 48, must be a multiple of 24

# $NSLOTS is automatically set to the number specified on the -pe line above
export MOLCAS_NCPUS=$NSLOTS
molcas mymol.input
MOLCAS Scratch (temp) files
It is possible to modify how MOLCAS uses your scratch directory for temporary files. Please read the following section so that you are aware of what MOLCAS is doing with your scratch directory (you may create a lot of temporary junk files you do not need to keep).
The modulefiles above set the following environment variable:
MOLCAS_WORKDIR=/scratch/username
where username is your CSF username. This instructs MOLCAS to create a directory in your scratch area named after your input file. For example, if your input file is called test000.input then MOLCAS will create the directory
/scratch/username/test000
in which to store temporary files used during the computation. This directory will not be deleted at the end of the job, so you may end up with a lot of these temporary directories if you run many jobs!
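Should unwanted temporary directories accumulate, you can remove them manually from the login node. A minimal sketch, assuming a leftover directory from the test000.input example above (always double-check a path before deleting it):

# List the temporary directories MOLCAS has left in your scratch area
ls /scratch/$USER

# Remove one that is no longer needed (example name - check before deleting!)
rm -rf /scratch/$USER/test000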
To instruct MOLCAS to delete this directory at the end of the job add the -clean flag to the molcas command in your jobscript. For example:
# Automatically delete the temporary scratch directory at the end of the job (RECOMMENDED)
molcas -clean test000.input
If you wish to keep temporary directories and use a different temporary directory name each time you run (and rerun) the same input file (e.g., if you run the test000.input input with a different number of CPU cores to do some timing tests) you should instruct MOLCAS to add a random number to the directory name by adding the following to your jobscript:
# MOLCAS will add a random number to the temporary directory name
export MOLCAS_PROJECT=NAMEPID
Omitting the -clean flag from the molcas command in your jobscript will prevent MOLCAS from deleting these directories at the end of the job.
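Putting this together, a sketch of a single-node parallel jobscript that keeps a uniquely-named scratch directory for each run, as you might use for the timing tests mentioned above (the core count and input filename are illustrative):

#!/bin/bash
#$ -S /bin/bash
#$ -cwd             # Job will run from the current directory
#$ -V               # Job will inherit current environment settings
#$ -pe smp.pe 4     # Vary the core count between runs for timing tests

# Add a random number to the temporary directory name so reruns do not clash
export MOLCAS_PROJECT=NAMEPID

export MOLCAS_NCPUS=$NSLOTS

# No -clean flag, so the temporary scratch directory is kept
molcas test000.input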
Using a Job Array
If running MOLCAS in a job array you may need to create a directory per task, otherwise the temporary directories and files created by MOLCAS will overwrite each other when several tasks run at the same time. Remember that MOLCAS uses the name of your input file when creating its temporary directory, so if each task in the job array uses the same MOLCAS input filename this will cause a problem when several tasks run concurrently. To fix this, add the following to your jobscript before the line that runs MOLCAS:
export MOLCAS_WORKDIR=/scratch/$USER/molcas_${JOB_ID}_${SGE_TASK_ID}
mkdir -p $MOLCAS_WORKDIR
Each task in the job array will then have its own directory; within it, MOLCAS will create a directory named after the input file (see above). A complete example is sketched below.
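For example, a complete serial job array jobscript sketch (the task range and input filename are illustrative):

#!/bin/bash
#$ -S /bin/bash
#$ -cwd             # Job will run from the current directory
#$ -V               # Job will inherit current environment settings
#$ -t 1-10          # 10 job array tasks (illustrative range)

# Give each task its own scratch directory so concurrent tasks do not clash
export MOLCAS_WORKDIR=/scratch/$USER/molcas_${JOB_ID}_${SGE_TASK_ID}
mkdir -p $MOLCAS_WORKDIR

molcas -clean mymol.input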
GV – Graphics Viewer
The MOLCAS developers provide GV, a graphical interface that can be used both to create input for the MOLCAS program and to analyze the results graphically by visualizing molecular orbitals, density plots, and other output properties.
The GV program should be run interactively on the CSF: you request an interactive session from the batch system and, if resources are free, the GV program will run on a backend node. Do not run GV on the login node except for the simplest of geometries.
To run the GV program interactively first load the MOLCAS modulefile then run the following command on the login node:
qrsh -l inter -l short -V -cwd molcas gv filename.xyz
Other input files can then be loaded from the GV user interface.
Further details on the GV program are available in the online manual.
Further info
Updates
None.