{"id":468,"date":"2020-10-29T18:04:23","date_gmt":"2020-10-29T18:04:23","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf4\/?page_id=468"},"modified":"2024-11-04T16:47:29","modified_gmt":"2024-11-04T16:47:29","slug":"openmolcas","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/openmolcas\/","title":{"rendered":"OpenMolcas"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"https:\/\/gitlab.com\/Molcas\/OpenMolcas\">OpenMolcas<\/a> is an open source version of MOLCAS.<\/p>\n<p>Versions 18.09, 20.10, and 20.10 with DMRG enabled are installed on CSF4. All versions use MPI for parallelism.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>OpenMolcas is open source software released under the Lesser General Public License (LGPL).\u00a0 It is free to use by all members of the University.<\/p>\n<h2>Set up procedure<\/h2>\n<p>To access the software you must first load the modulefiles<\/p>\n<pre>\r\n<!-- module load apps\/intel-2020.02\/openmolcas\/24.10 -->\r\nmodule load openmolcas\/20.10-iomkl-2020.02-python-3.8.2\r\nmodule load openmolcas\/18.09-iomkl-2020.02-python-3.8.2\r\n<\/pre>\n<p>The Density Matrix Renormalization Group (DMRG version) has been compiled and is available via the following modulefiles:<\/p>\n<pre>\r\n# Uses commit 71e2b130 26\/11\/2020 from the qcmaquis-release with patch from the Chilton group.\r\n\r\n# MPI parallel for multi-node jobs\r\nmodule load openmolcas-dmrg\/20.10-iomkl-2020.02-python-3.8.2\r\n\r\n# OpenMP parallel for single-node (multicore) jobs\r\nmodule load openmolcas-dmrg\/20.10-iimkl-2020.02-python-3.8.2\r\n<\/pre>\n<p>We now recommend that for batch jobs you load the modulefile in the jobscript rather than loading it on the command line prior to submission. See below for examples.<\/p>\n<h2>Running the application<\/h2>\n<p>Please do not run OpenMolcas on the login node. Jobs should be submitted to the compute nodes via batch. NOTE we now recommend loading modules within your batch scripts.<\/p>\n<h3>Serial batch job submission<\/h3>\n<p>Create a batch submission script, for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n\r\nmodule load openmolcas\/20.10-iomkl-2020.02-python-3.8.2\r\npymolcas <em>mymol<\/em>.input \r\n  #\r\n  # Add the command:\r\n  #    pymolcas -clean <em>mymol<\/em>.input\r\n  # to have the temporary <em>scratch<\/em> directory deleted at the end of the job (see below)\r\n<\/pre>\n<p>Submit the jobscript using <code>sbatch <em>scriptname<\/em><\/code> where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Single Node Parallel batch job submission<\/h3>\n<p>Parallel jobs on a single node using OpenMP are currently possible. Multi-node calculations using MPI are not currently supported.<\/p>\n<p>Create a batch submission script, for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore   # (or --partition=) Single-node multi-core job\r\n#SBATCH -n 16          # (or --ntasks=) Number of cores (2--40)\r\n\r\n# Load the version you require\r\nmodule load openmolcas\/20.10-iomkl-2020.02-python-3.8.2\r\n\r\npymolcas -np $SLURM_NTASKS <em>mymol<\/em>.input \r\n  #\r\n  # Add the command:\r\n  #    pymolcas -clean <em>mymol<\/em>.input\r\n  # to have the temporary <em>scratch<\/em> directory deleted at the end of the job (see below)\r\n<\/pre>\n<p>Submit the jobscript using <code>sbatch <em>scriptname<\/em><\/code> where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Multi Node Parallel batch job submission<\/h3>\n<p>Parallel jobs on multiple compute nodes using MPI are possible. However not all openmolcas modules benefit from this parallelisation. Please check the Molcas documentation.<\/p>\n<p>Create a batch submission script, for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multinode   # (or --partition=) Multi-node job\r\n#SBATCH -n 80          # (or --ntasks=) 80 or more cores in multiples of 40\r\n\r\n# Load the version you require\r\nmodule load openmolcas\/20.10-iomkl-2020.02-python-3.8.2\r\n\r\npymolcas -np $SLURM_NTASKS <em>mymol<\/em>.input \r\n<\/pre>\n<p>Submit the jobscript using <code>sbatch <em>scriptname<\/em><\/code> where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>OpenMolcas DMRG<\/h3>\n<p>The Density Matrix Renormalization Group (DMRG) version has been compiled from the qcmaquis-release gitlab branch. A patch file has also been applied to the source tree to make various modifications. This was supplied by Dr. Nick Chilton.<\/p>\n<p>MPI parallel for single or multi-node jobs. For example:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#SBATCH -p multi<strong>node<\/strong>\r\n#SBATCH -n 80          # Can be 80 or more cores in multiples of 40\r\n\r\nmodule load openmolcas-dmrg\/20.10-<strong>iomkl<\/strong>-2020.02-python-3.8.2\r\n\r\n# OpenMolcas itself will use MPI parallelism\r\npymolcas -np $SLURM_NTASKS <em>input<\/em>\r\n<\/pre>\n<p>OpenMP parallel for single-node (multicore) jobs:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n#SBATCH -p multi<strong>core<\/strong>\r\n#SBATCH -n 16          # Can be 2--40 cores\r\n\r\nmodule load openmolcas-dmrg\/20.10-<strong>iimkl<\/strong>-2020.02-python-3.8.2\r\n\r\n# The maths libraries may use multiple threads to speed up execution\r\nexport OMP_NUM_THREADS=$SLURM_NTASKS\r\n\r\n# OpenMolcas itself will not use MPI parallelism so do not add the -np flag\r\npymolcas <em>input<\/em>\r\n<\/pre>\n<h3>OpenMolcas Scratch (temp) files<\/h3>\n<p>It is possible to modify how OpenMolcas uses your scratch directory for temporary files. Please read the following section so that you are aware of what OpenMolcas is doing with your scratch directory (you may create a lot of temporary junk files you do not need to keep).<\/p>\n<p>The modulefiles above set the following environment variable:<\/p>\n<pre>MOLCAS_WORKDIR=\/scratch\/<em>username<\/em>\r\n<\/pre>\n<p>where <code><em>username<\/em><\/code> is your CSF username. This instructs OpenMolcas to create a directory in your <em>scratch<\/em> area named after your input file. For example if your input file is called <code>test000.input<\/code> then OpenMolcas will create a directory named<\/p>\n<pre>\/scratch\/<em>username<\/em>\/test000\r\n<\/pre>\n<p>in which to store temporary files used during the computation. This directory will <strong>not<\/strong> be deleted at the end of the job. Hence you may end up with a lot of these temporary directories if you run many jobs!<\/p>\n<p>To instruct OpenMolcas to delete this directory at the end of the job add the flag <code>-clean<\/code> to the <code>pymolcas<\/code> command in your jobscript. For example:<\/p>\n<pre>\r\n# Automatically delete the temporary scratch directory at the end of the job (RECOMMENDED)\r\npymolcas -clean <em>test000<\/em>.input\r\n<\/pre>\n<p>If you wish to keep temporary directories and use a different temporary directory name each time you run (and rerun) the same input file (e.g., if you run the <code>test000.input<\/code> input with a different number of CPU cores to do some timing tests) you should instruct OpenMolcas to add a random number to the directory name by adding the following to your jobscript:<\/p>\n<pre>\r\n# OpenMolcas will add a random number to the temporary directory name\r\nexport MOLCAS_PROJECT=NAMEPID\r\n<\/pre>\n<p>Removing the <code>-clean<\/code> flag from the <code>pymolcas<\/code> command in your jobscript will prevent OpenMolcas from deleting it.<\/p>\n<h3>Using a Job Array<\/h3>\n<p>If running OpenMolcas in a <a href=\"\/csf4\/batch\/job-arrays\/\">job array<\/a> you may need to create a directory per task otherwise the temporary directories and files created by OpenMolcas will overwrite each other when a job array task runs. Remember that OpenMolcas will use the name of your input file when creating its temporary directory. If each task in the job array uses the same OpenMolcas input filename then this will cause a problem when several job array tasks run at the same time. To fix this, please add the following to your jobscript before the lines that runs OpenMolcas:<\/p>\n<pre>export MOLCAS_WORKDIR=\/scratch\/$USER\/molcas_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}\r\nmkdir -p $MOLCAS_WORKDIR\r\n<\/pre>\n<p>Each task in the job array will have its own directory. Within there will be a directory named after the input file (see above).<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"https:\/\/gitlab.com\/Molcas\/OpenMolcas\/wikis\/home\">OpenMolcas website<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>None.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview OpenMolcas is an open source version of MOLCAS. Versions 18.09, 20.10, and 20.10 with DMRG enabled are installed on CSF4. All versions use MPI for parallelism. Restrictions on use OpenMolcas is open source software released under the Lesser General Public License (LGPL).\u00a0 It is free to use by all members of the University. Set up procedure To access the software you must first load the modulefiles module load openmolcas\/20.10-iomkl-2020.02-python-3.8.2 module load openmolcas\/18.09-iomkl-2020.02-python-3.8.2 The Density.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/openmolcas\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"parent":49,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-468","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/468","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/comments?post=468"}],"version-history":[{"count":15,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/468\/revisions"}],"predecessor-version":[{"id":1369,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/468\/revisions\/1369"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/49"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/media?parent=468"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}