{"id":458,"date":"2020-10-29T15:17:31","date_gmt":"2020-10-29T15:17:31","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf4\/?page_id=458"},"modified":"2025-10-06T16:33:11","modified_gmt":"2025-10-06T15:33:11","slug":"lammps","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/lammps\/","title":{"rendered":"LAMMPS"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"http:\/\/lammps.sandia.gov\/\">LAMMPS<\/a> (Large-scale Atomic\/Molecular Massively Parallel Simulator) is a classical molecular dynamics code with potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors), and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.<\/p>\n<p>Currently the following versions are installed on CSF4:<\/p>\n<ul>\n<li>Version 29-Oct-2020 (CPU build, with Kokkos, Voro++, KIM-API, Scafacos, YAFF, PNG\/JPG and PLUMED 2.6 packages)<\/li>\n<li>Version 03-Mar-2020 (CPU build, with Kokkos, Voro++, KIM-API, Scafacos, YAFF, PNG\/JPG and PLUMED 2.6 packages)<\/li>\n<li>Version 03-Mar-2020 with parallel Frenkel analysis (CPU build, with Kokkos, Voro++, KIM-API, Scafacos, YAFF, PNG\/JPG and PLUMED 2.6 packages)<\/li>\n<li>Version 03-Mar-2020 with neural network potential parameterizations (CPU build, with Kokkos, Voro++, KIM-API, Scafacos, YAFF, PNG\/JPG, PLUMED 2.6 and NNP packages)<\/li>\n<\/ul>\n<p>All versions have been compiled with the Intel 2020.02 compiler. Intel MKL provides the FFT implementation. 
OpenMPI 4.0.4 provides the MPI library.<\/p>\n<p>If you require additional user packages, please contact <a href=\"&#x6d;&#x61;&#x69;&#x6c;&#x74;&#x6f;&#x3a;&#x69;&#x74;&#x73;&#x2d;&#114;&#105;&#45;&#116;&#101;&#97;&#109;&#64;manch&#x65;&#x73;&#x74;&#x65;&#x72;&#x2e;&#x61;&#x63;&#x2e;&#x75;&#x6b;\">&#x69;&#x74;&#x73;&#x2d;&#114;&#105;&#45;te&#x61;&#x6d;&#x40;&#x6d;&#x61;&#110;&#99;&#104;es&#x74;&#x65;&#x72;&#x2e;&#x61;&#99;&#46;&#117;k<\/a>.<\/p>\n<p>Various tools have been compiled for pre- and post-processing: <code>binary2txt<\/code>, <code>chain.x<\/code> and <code>msi2lmp<\/code>.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>There are no restrictions on accessing LAMMPS. It is distributed as open-source code under the terms of the <a href=\"http:\/\/www.gnu.org\/copyleft\/gpl.html\">GPL<\/a>.<\/p>\n<h2>Set up procedure<\/h2>\n<p>To access the software you must first load the modulefile. Note: we now recommend loading the modulefile in your batch script (see below for examples):<\/p>\n<pre>\r\nmodule load lammps\/2aug2023_update2-foss-2020a-python-3.8.2\r\nmodule load lammps\/29oct2020-iomkl-2020.02-python-3.8.2-kokkos\r\nmodule load lammps\/3mar2020-iomkl-2020.02-python-3.8.2-kokkos\r\nmodule load lammps\/3mar2020-iomkl-2020.02-python-3.8.2-kokkos-frenkel\r\nmodule load lammps\/3mar2020-iomkl-2020.02-python-3.8.2-kokkos-n2p2\r\n<\/pre>\n<p>This will load all necessary modulefiles (e.g., the plumed modulefile).<\/p>\n<div class=\"hint\">\nFollowing a recent update of CSF4, the following <em>additional<\/em> modules must also be loaded <strong>in the given order<\/strong> to resolve a <code>libssl.so.10<\/code> error message:\n<\/div>\n<pre>\r\nmodule load openssl\/1.0.2k\r\nmodule load zlib\/1.2.11-gcccore-9.3.0\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run LAMMPS on the login node. 
Jobs should be submitted to the compute nodes via the batch system.<\/p>\n<p>Note also that LAMMPS may produce very large files (particularly the trajectory file ending in <code>.trj<\/code> and the potentials file ending in <code>.pot<\/code>). Hence you <em>must<\/em> run from your scratch directory. This will prevent your job filling up the home area. If you do not need certain files in your results, please turn off writing of the specific files in your control file (e.g., <code>lmp_control<\/code>) or delete them in your jobscript using:<\/p>\n<pre>rm -f *.trj\r\nrm -f *.pot\r\n<\/pre>\n<h3>Serial CPU batch job submission<\/h3>\n<p>LAMMPS itself is usually run in parallel, but the pre\/post-processing tools can be run in serial. Create a batch submission script which loads the most appropriate LAMMPS modulefile, for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n## The default is to run with one core but you can also use the following\r\n#SBATCH -p serial      # (or --partition=) Single-core job\r\n#SBATCH -n 1           # (or --ntasks=) Just use one core\r\n\r\nmodule load lammps\/3mar2020-iomkl-2020.02-python-3.8.2-kokkos\r\nmodule load openssl\/1.0.2k\r\nmodule load zlib\/1.2.11-gcccore-9.3.0\r\n\r\nlmp &lt; infile &gt; outfile\r\n\r\n# Optional: delete any unwanted output files that may be huge\r\nrm -f *.trj\r\n<\/pre>\n<p>Submit the jobscript using: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the name of your jobscript.<\/p>\n<h3>Single-node Parallel CPU batch job submission: 2 to 40 cores<\/h3>\n<p>The following jobscript will run LAMMPS with 24 cores on a single node:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore         # (or --partition=) Parallel job using cores on a single node\r\n#SBATCH -n 24                # (or --ntasks=) Number of cores (2-40)\r\n\r\nmodule load lammps\/3mar2020-iomkl-2020.02-python-3.8.2-kokkos\r\nmodule load openssl\/1.0.2k\r\nmodule load zlib\/1.2.11-gcccore-9.3.0\r\n\r\n# mpirun 
knows how many cores to use\r\nmpirun lmp &lt; infile &gt; outfile\r\n<\/pre>\n<p>Submit the jobscript using: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the name of your jobscript.<\/p>\n<h3>Multi-node Parallel CPU batch job submission<\/h3>\n<p>These jobs must use 80 cores or more, in multiples of 40, when running in the <code>multinode<\/code> partition.<br \/>\nThe following jobscript will run LAMMPS on 80 cores:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multinode         # (or --partition=) Parallel job using all cores on nodes\r\n#SBATCH -n 80                # (or --ntasks=) Number of cores (80-200) in multiples of 40\r\n### Alternatively you can say how many nodes to use\r\n# #SBATCH -N 2               # (or --nodes=) Number of 40-core nodes to use\r\n\r\nmodule load lammps\/3mar2020-iomkl-2020.02-python-3.8.2-kokkos\r\nmodule load openssl\/1.0.2k\r\nmodule load zlib\/1.2.11-gcccore-9.3.0\r\n\r\n# mpirun knows how many cores to use\r\nmpirun lmp &lt; infile &gt; outfile\r\n<\/pre>\n<p>Submit the jobscript using: <code>sbatch <em>scriptname<\/em><\/code> where <code><em>scriptname<\/em><\/code> is the name of your jobscript.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/lammps.sandia.gov\/\">LAMMPS<\/a> website<\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>Oct 2020 &#8211; Initial install<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview LAMMPS (Large-scale Atomic\/Molecular Massively Parallel Simulator) is a classical molecular dynamics code with potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors), and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. Currently the following versions are installed on CSF4: Version 29-Oct-2020 (CPU build, with Kokkos, Voro++, KIM-API, Scafacos, YAFF, PNG\/JPG and PLUMED 2.6.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/lammps\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"parent":49,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-458","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/458","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/comments?post=458"}],"version-history":[{"count":19,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/458\/revisions"}],"predecessor-version":[{"id":1456,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/458\/revisions\/1456"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/49"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/media?parent=458"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}