<h2>Overview</h2>
<p>CP2K is a program for performing atomistic and molecular simulations of solid-state, liquid, molecular, and biological systems. It provides a general framework for different methods, e.g. density functional theory (DFT) using a mixed Gaussian and plane waves (GPW) approach, and classical pair and many-body potentials.</p>
<p>Version 6.1.0 is installed on the CSF. The serial and SMP versions were compiled by the developers (a &#8220;binary&#8221; install). The MPI parallel (popt) version for larger multi-node jobs was compiled by the RI Team using Intel Compilers 19.1.2 &#038; MKL.</p>
<h2>Restrictions on use</h2>
<p>The software is open source under the GNU General Public License.</p>
<h2>Set up procedure</h2>
<p>We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscripts below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.</p>
<p>To access the software you must first load the appropriate modulefile.</p>
<p>The <code>6.1.0</code> modulefile sets up the pre-compiled version built by the CP2K developers. It has more features enabled but may be less optimized for CSF hardware.</p>
<pre>
# <strong>Features</strong>: libint fftw3 libxc xsmm libderiv_max_am1=6 libint_max_am=7 max_contr=4
module load cp2k/6.1.0                 # Versions: cp2k.sopt (serial)
                                       #           cp2k.ssmp (single-node multi-core OpenMP parallel)
</pre>
<p>The <code>6.1-iomkl-2020.02</code> modulefile sets up the version built by the Research Infrastructure team.</p>
<pre>
# <strong>Features</strong>: libint fftw3 libxc xsmm parallel mpi3 scalapack mkl libderiv_max_am1=5 libint_max_am=6 plumed
module load cp2k/6.1-iomkl-2020.02     # Versions: cp2k.popt (single- and multi-node MPI parallel)
</pre>
<h2>Running the application</h2>
<p>Please do not run cp2k on the login node. Jobs should be submitted to the compute nodes via the batch system.</p>
<h3>Serial batch job submission</h3>
<p>Ensure you run the <code>cp2k.sopt</code> executable after loading the <code>cp2k/6.1.0</code> modulefile.</p>
<pre>
#!/bin/bash --login

# Load the required modulefile
module load cp2k/6.1.0

cp2k.sopt -i <em>mysim.inp</em>
</pre>
<p>Submit the jobscript using:</p>
<pre>sbatch <em>scriptname</em></pre>
<p>where <em>scriptname</em> is the name of your jobscript.</p>
<h3>Small OpenMP parallel batch job submission &#8211; 2 to 40 cores</h3>
<p>Ensure you run the <code>cp2k.ssmp</code> executable after loading the <code>cp2k/6.1.0</code> modulefile.<br />
This version will only run on 2 or more cores, up to a maximum of 40.</p>
<pre>
#!/bin/bash --login
#SBATCH -p multicore               # (or --partition=) One compute node will be used
#SBATCH -n 16                      # (or --ntasks=) Number of cores, max 40.

# Load the required version
module load cp2k/6.1.0

# Inform cp2k how many cores it can use.
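# (Aside: this matches the "#SBATCH -n" request above. If you request
#  cores with "-c"/--cpus-per-task instead, the corresponding Slurm
#  variable is $SLURM_CPUS_PER_TASK rather than $SLURM_NTASKS.)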
# $SLURM_NTASKS is automatically set to the number requested above.
export OMP_NUM_THREADS=$SLURM_NTASKS
cp2k.ssmp -i <em>mysim.inp</em>
</pre>
<p>Submit the jobscript using:</p>
<pre>sbatch <em>scriptname</em></pre>
<p>where <em>scriptname</em> is the name of your jobscript.</p>
<h3>Small MPI parallel batch job submission &#8211; 40 cores or fewer</h3>
<p>Ensure you run the <code>cp2k.popt</code> executable after loading the <code>cp2k/6.1-iomkl-2020.02</code> modulefile.</p>
<p>This jobscript will run on one <em>compute node</em> &#8211; i.e., 40 cores or fewer.</p>
<pre>
#!/bin/bash --login
#SBATCH -p multicore               # (or --partition=) One compute node will be used
#SBATCH -n 16                      # (or --ntasks=) Number of cores, max 40.

# Load the required version
module load cp2k/6.1-iomkl-2020.02

# mpirun knows how many cores to use
mpirun cp2k.popt -i <em>mysim.inp</em>
</pre>
<p>Submit the jobscript using:</p>
<pre>sbatch <em>scriptname</em></pre>
<p>where <em>scriptname</em> is the name of your jobscript.</p>
<h3>Large parallel batch job submission &#8211; 80 cores or more</h3>
<p>Ensure you run the <code>cp2k.popt</code> executable after loading the <code>cp2k/6.1-iomkl-2020.02</code> modulefile.</p>
<p>This jobscript will run on 2 or more <em>compute nodes</em> &#8211; i.e., 80 cores or more in multiples of 40.</p>
<pre>
#!/bin/bash --login
#SBATCH -p multinode               # (or --partition=) Two or more compute nodes will be used
#SBATCH -n 80                      # (or --ntasks=) Number of cores: 80 or more in multiples of 40

# Load the required version
module load cp2k/6.1-iomkl-2020.02

# mpirun knows how many cores to use
mpirun cp2k.popt -i <em>mysim.inp</em>
</pre>
<p>Submit the jobscript using:</p>
<pre>sbatch <em>scriptname</em></pre>
<p>where <em>scriptname</em> is the name of your jobscript.</p>
<h2>Further info</h2>
<ul>
<li><a href="http://www.cp2k.org/">CP2K
website</a></li>
</ul>
<h2>Updates</h2>
<p>None.</p>
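<p>As a quick sanity check before requesting a large job, the &#8220;80 or more cores in multiples of 40&#8221; rule for the multinode partition can be verified with a small shell function. This is an illustrative sketch, not part of the CSF tooling: the function name is ours, and it assumes 40 cores per node as stated in the sections above.</p>

```shell
#!/bin/bash
# check_multinode_cores: report whether a core count is valid for the
# multinode partition (at least 80, and a whole number of 40-core nodes).
check_multinode_cores() {
    local n=$1
    if [ "$n" -ge 80 ] && [ $((n % 40)) -eq 0 ]; then
        echo "$n cores = $((n / 40)) nodes: OK"
    else
        echo "$n cores: invalid for multinode (use 80+ in multiples of 40)"
    fi
}

check_multinode_cores 80     # 80 cores = 2 nodes: OK
check_multinode_cores 100    # invalid: not a multiple of 40
```

<p>Run the check with the value you intend to give to <code>#SBATCH -n</code> before submitting.</p>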