{"id":1640,"date":"2018-12-20T10:41:30","date_gmt":"2018-12-20T10:41:30","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=1640"},"modified":"2025-06-19T18:05:49","modified_gmt":"2025-06-19T17:05:49","slug":"lammps","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/lammps\/","title":{"rendered":"LAMMPS"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"http:\/\/lammps.sandia.gov\/\">LAMMPS<\/a> (Large-scale Atomic\/Molecular Massively Parallel Simulator) is a classical molecular dynamics code and has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.<\/p>\n<p>Several versions are installed on the CSF:<\/p>\n<ul>\n<li>Version 29-Aug-24 (CPU and GPU builds with PLUMED, many additional packages, python interface and JPEG\/PNG support)<\/li>\n<li>Version 02-Aug-23 (CPU and GPU builds with PLUMED, many additional packages, python interface and JPEG\/PNG support)<\/li>\n<li>Version 29-Sep-21 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG\/PNG support)<\/li>\n<li>Version 29-Oct-20 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG\/PNG support)<\/li>\n<li>Version 03-Mar-20 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG\/PNG support)<\/li>\n<li>Version 22-Aug-18 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG\/PNG support)<\/li>\n<li>Version 11-Aug-17 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG\/PNG support)<\/li>\n<li>Version 30-Jul-16 (CPU and GPU builds with many additional packages and user reaxc, 
python interface and JPEG\/PNG support)<\/li>\n<\/ul>\n<p>Version 29-Aug-24 has been built with the gcc compiler with fftw3 providing the FFT implementation. All versions prior to Version 29-Aug-24 have been compiled with the Intel compiler suite with multiple code paths allowing optimised usage on Ivybridge, Broadwell, Haswell and Skylake hardware if available. Intel MKL provides the FFT implementation.<\/p>\n<p>For the 29.08.24 CPU\/GPU build the following packages are included: ASPHERE BOCS BODY BROWNIAN CG-DNA CLASS2 COLLOID COLVARS COMPRESS CORESHELL<br \/>\nDIELECTRIC DIFFRACTION DIPOLE DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF EXTRA-COMPUTE EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP GRANULAR INTERLAYER KIM KSPACE MACHDYN MANYBODY MC MDI MEAM MESONT MISC ML-PACE ML-SNAP MOFFF MOLECULE MOLFILE NETCDF OPENMP OPT ORIENT PERI PHONON PLUGIN PLUMED POEMS PTM PYTHON QEQ REACTION REAXFF REPLICA RIGID SCAFACOS SHOCK SPH SPIN SRD TALLY UEF VORONOI YAFF <\/p>\n<p>For the 02.08.23 CPU\/GPU build the following packages are included: ASPHERE BOCS BODY BROWNIAN CG-DNA CLASS2 COLLOID COLVARS COMPRESS CORESHELL<br \/>\nDIELECTRIC DIFFRACTION DIPOLE DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF EXTRA-COMPUTE EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP GRANULAR INTERLAYER KIM KSPACE MACHDYN MANYBODY MC MDI MEAM MESONT MISC ML-PACE ML-SNAP MOFFF MOLECULE OPENMP OPT ORIENT PERI PHONON PLUGIN PLUMED POEMS PTM PYTHON QEQ REACTION REAXFF REPLICA RIGID SCAFACOS SHOCK SPH SPIN SRD TALLY UEF VORONOI YAFF <\/p>\n<p>For the 29.09.21 CPU build the following packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KOKKOS, KSPACE, MANYBODY, MC, MESSAGE, MISC, MLIAP, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REPLICA, RIGID, SHOCK, SNAP, SPIN, SRD, USER-ATC, USER-AWPMD, USER-BOCS, USER-CGDNA, USER-CGSDK, USER-COLVARS, USER-DIFFRACTION, USER-DPD, USER-DRUDE, USER-EFF, USER-FEP, USER-INTEL, USER-LB, USER-MANIFOLD, USER-MEAMC, USER-MESODPD, 
USER-MESONT, USER-MGPT, USER-MISC, USER-MOFFF, USER-MOLFILE, USER-OMP, USER-PHONON, USER-PLUMED, USER-PTM, USER-QMMM, USER-QTB, USER-REACTION, USER-REAXC, USER-SCAFACOS, USER-SDPD, USER-SMD, USER-SMTBQ, USER-SPH, USER-TALLY, USER-UEF, USER-YAFF, VORONOI. In addition the GPU builds have the package GPU.<\/p>\n<p>For the 29.10.20 CPU build the following packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KOKKOS, KSPACE, MANYBODY, MC, MESSAGE, MISC, MLIAP, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REPLICA, RIGID, SHOCK, SNAP, SPIN, SRD, USER-ATC, USER-AWPMD, USER-BOCS, USER-CGDNA, USER-CGSDK, USER-COLVARS, USER-DIFFRACTION, USER-DPD, USER-DRUDE, USER-EFF, USER-FEP, USER-INTEL, USER-LB, USER-MANIFOLD, USER-MEAMC, USER-MESODPD, USER-MESONT, USER-MGPT, USER-MISC, USER-MOFFF, USER-MOLFILE, USER-OMP, USER-PHONON, USER-PLUMED, USER-PTM, USER-QMMM, USER-QTB, USER-REACTION, USER-REAXC, USER-SCAFACOS, USER-SDPD, USER-SMD, USER-SMTBQ, USER-SPH, USER-TALLY, USER-UEF, USER-YAFF, VORONOI. 
In addition the GPU builds have the package GPU.<\/p>\n<p>For the 22.08.18 CPU build the following standard packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REAX, REPLICA, RIGID, SHOCK, SNAP, SPIN, SRD, VORONOI, USER-COLVARS, USER-DPD, USER-MISC, USER-REAXC.\u00a0 In addition the GPU builds have the package GPU.<\/p>\n<p>For the 11.08.17 CPU build the following standard packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REAX, REPLICA, RIGID, SHOCK, SNAP, SRD, VORONOI, USER-COLVARS, USER-DPD, USER-REAXC.\u00a0 In addition the GPU builds have the package GPU.<\/p>\n<p>For the 30.07.16 CPU build the following standard packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REAX, REPLICA, RIGID, SHOCK, SNAP, SRD, VORONOI, USER-COLVARS, USER-DPD, USER-REAXC.\u00a0 In addition the GPU builds have the package GPU.<\/p>\n<p>If you require additional user packages please <a href=\"\/csf3\/overview\/help\/\">contact us<\/a>.<\/p>\n<p>GPU builds are available in single precision, double precision and mixed precision versions (where mixed precision means accumulation of forces, etc. is done in double precision). Please see <code>$LAMMPS_HOME\/lib\/gpu\/README<\/code> for more information about the build procedure.<\/p>\n<p>Various tools have been compiled for pre and post processing: binary2txt, restart2data, chain, micelle2d, data2xmovie.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>There are no restrictions on accessing LAMMPS. 
It is distributed as an open source code under the terms of the <a href=\"http:\/\/www.gnu.org\/copyleft\/gpl.html\">GPL<\/a>.<\/p>\n<h2>Set up procedure<\/h2>\n<p>To access the software you must first load the modulefile.<\/p>\n<pre>module load modulefile\r\n<\/pre>\n<p>where <em>modulefile<\/em> is replaced with the relevant module file as listed below.<\/p>\n<p>NOTE: we now recommend loading the module file in your batch script.<\/p>\n<ul>\n<li>CPU and CPU+GPU &#8211; choose only <strong>one<\/strong> <code>module load<\/code> command from the following:\n<pre># <strong>v29.08.24<\/strong> - MPI with additional lammps packages and python.\r\nmodule load apps\/<strong>gcc<\/strong>\/lammps\/29.08.24-packs-user\r\n\r\n# <strong>v02.08.23<\/strong> - MPI with additional lammps packages and python.\r\nmodule load apps\/<strong>intel-19.1<\/strong>\/lammps\/02.08.23-packs-user\r\n\r\n# <strong>v29.09.21<\/strong> - MPI with additional lammps packages and python.\r\nmodule load apps\/<strong>intel-19.1<\/strong>\/lammps\/29.09.21-packs-user\r\n\r\n# <strong>v29.10.20<\/strong> - MPI with additional lammps packages and python.\r\nmodule load apps\/<strong>intel-18.0<\/strong>\/lammps\/29.10.20-packs-user\r\n\r\n# <strong>v03.03.20<\/strong> - MPI with additional lammps packages and python.\r\nmodule load apps\/<strong>intel-18.0<\/strong>\/lammps\/03.03.20-packs-user\r\n\r\n# <strong>v22.08.18 with PLUMED<\/strong> - MPI with Plumed and additional lammps packages and python.\r\nmodule load apps\/<strong>intel-17.0<\/strong>\/lammps\/22.08.18-packs-user-python-plumed\r\n\r\n# <strong>v22.08.18<\/strong> - MPI with additional lammps packages and python. 
\r\nmodule load apps\/<strong>intel-17.0<\/strong>\/lammps\/22.08.18-packs-user-python\r\n\r\n# <strong>v11.08.17 with PLUMED<\/strong> - MPI with Plumed and additional lammps packages and python.\r\nmodule load apps\/<strong>intel-17.0<\/strong>\/lammps\/11.08.17-packs-user-python-plumed\r\n\r\n# <strong>v11.08.17<\/strong> - MPI with additional lammps packages and python.\r\nmodule load apps\/<strong>intel-17.0<\/strong>\/lammps\/11.08.17-packs-user-python\r\n\r\n# <strong>v30.07.16<\/strong> - MPI with additional lammps packages and python.\r\nmodule load apps\/<strong>intel-17.0<\/strong>\/lammps\/30.07.16-packs-user-python<\/pre>\n<\/li>\n<\/ul>\n<h2>Running the application<\/h2>\n<p>Please do not run LAMMPS on the login node. Jobs should be submitted to the compute nodes via batch. The GPU version <strong>must<\/strong> be submitted to a GPU node &#8211; it will not run otherwise.<\/p>\n<p>Note also that LAMMPS may produce very large files (particularly the trajectory file ending in <code>.trj<\/code> and the potentials file ending in <code>.pot<\/code>). Hence you <em>must<\/em> run from your scratch directory. This will prevent your job filling up the home area. 
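<\/p>\n<p>For example, submit your job from a directory in your scratch area (a sketch; the jobscript name <code>jobscript<\/code> and the directory <code>~\/scratch\/my_lammps_job<\/code> are placeholders to replace with your own):<\/p>\n<pre>cd ~\/scratch\/my_lammps_job\r\nqsub jobscript\r\n<\/pre>\n<p>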
If you do not need certain files in your results, please turn off writing of the specific files in your control file (e.g., <code>lmp_control<\/code>) or delete them in your jobscript using:<\/p>\n<pre>rm -f *.trj\r\nrm -f *.pot\r\n<\/pre>\n<h3>Serial CPU batch job submission<\/h3>\n<p>LAMMPS itself is normally run in parallel, but the pre\/post processing tools can be run in serial.\u00a0 Create a batch submission script which loads the most appropriate LAMMPS modulefile, for example:<\/p>\n<pre>#!\/bin\/bash --login\r\n#$ -cwd             # Run from the current directory (input files in here)\r\nmodule load apps\/intel-17.0\/lammps\/22.08.18-packs-user-python\r\n\r\nlmp_linux -in <em>infile<\/em>\r\n\r\n# Optional: delete any unwanted output files that may be huge\r\nrm -f *.trj\r\n<\/pre>\n<p>Submit the jobscript using:<br \/>\n<code>qsub <em>scriptname<\/em><\/code><\/p>\n<h3>Single-node Parallel CPU batch job submission: 2 to 32 cores<\/h3>\n<p>The following jobscript will run LAMMPS with 24 cores on a single node:<\/p>\n<pre>#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 24   # Minimum 2, maximum 32\r\nmodule load apps\/intel-17.0\/lammps\/22.08.18-packs-user-python\r\nmpirun -n $NSLOTS lmp_linux -in <em>infile<\/em>\r\n<\/pre>\n<p>Submit the jobscript using:<br \/>\n<code>qsub <em>scriptname<\/em><\/code><\/p>\n<h3>Multi-node Parallel CPU batch job submission<\/h3>\n<p>These jobs must be 48 cores or more in multiples of 24 when running in <code>mpi-24-ib.pe<\/code>.<br \/>\nThe following jobscript will run LAMMPS:<\/p>\n<pre>#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe mpi-24-ib.pe 48    # Must be a minimum of 48 AND a multiple of 24.\r\nmodule load apps\/intel-17.0\/lammps\/22.08.18-packs-user-python\r\n\r\nmpirun -n $NSLOTS lmp_linux -in <em>infile<\/em>\r\n<\/pre>\n<p>Submit the jobscript using:<br \/>\n<code>qsub <em>scriptname<\/em><\/code><\/p>\n<h3>Running on a single GPU<\/h3>\n<p><strong>You need to request being added to the relevant group to access <a
href=\"\/csf3\/batch\/gpu-jobs\/\">GPUs<\/a> before you can run LAMMPS on them.<\/strong><\/p>\n<p>If you have \u2018free at the point of use\u2019 access to the GPUs then the maximum number of GPUs you can request is 2.<\/p>\n<p>The CUDA kernel precision (single, double, mixed) is determined by the name of the executable you use in the jobscript. The executables are named:<\/p>\n<ul>\n<li><code>lmp_linux_gpu_single<\/code><\/li>\n<li><code>lmp_linux_gpu_double<\/code><\/li>\n<li><code>lmp_linux_gpu_mixed<\/code><\/li>\n<\/ul>\n<p>For technical reasons it is not possible to use more than one CPU in conjunction with a GPU.<\/p>\n<pre>#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -l v100                 # Select a GPU node\r\nmodule load apps\/intel-17.0\/lammps\/22.08.18-packs-user-python\r\n\r\n## The LAMMPS arg '-pk gpu ${NGPUS}' tells lammps to use ${NGPUS} GPUs (${NGPUS}=1 by default).\r\n## See $LAMMPS_HOME\/bench\/GPU\/bench.in.gpu for the input file.\r\n\r\nlmp_linux_gpu_double -sf gpu -nc -pk gpu ${NGPUS} -in bench.in.gpu\r\n<\/pre>\n<p>Submit the jobscript using:<br \/>\n<code>qsub <em>scriptname<\/em><\/code><\/p>\n<h3>Running on several GPUs<\/h3>\n<p><strong>You need to request being added to the relevant group to access <a href=\"\/csf3\/batch\/gpu-jobs\/\">GPUs<\/a> before you can run LAMMPS on them.<\/strong><\/p>\n<p>If you have \u2018free at the point of use\u2019 access to the GPUs then the maximum number of GPUs you can request is 2.<\/p>\n<p>For technical reasons it is not possible to use more than one CPU in conjunction with a GPU. However, it is possible to use multiple GPUs. Each of the v100 nodes currently on CSF contains 4 GPUs.<\/p>\n<p>The CUDA kernel precision (single, double, mixed) is determined by the name of the executable you use in the jobscript. 
The executables are named:<\/p>\n<ul>\n<li><code>lmp_linux_gpu_single<\/code><\/li>\n<li><code>lmp_linux_gpu_double<\/code><\/li>\n<li><code>lmp_linux_gpu_mixed<\/code><\/li>\n<\/ul>\n<p>For example, to run the mixed precision job on 2 GPUs and 1 CPU on the\u00a0CSF v100 nodes:<\/p>\n<pre>#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -l v100=2               # Select a GPU node with 2 GPUs\r\nmodule load apps\/intel-17.0\/lammps\/22.08.18-packs-user-python\r\n\r\n## Use '-pk gpu ${NGPUS}' to tell lammps we are using the number of GPUs requested above.\r\n## See $LAMMPS_HOME\/bench\/GPU\/bench.in.gpu for the input file.\r\n\r\nlmp_linux_gpu_mixed -sf gpu -nc -pk gpu ${NGPUS} -in bench.in.gpu\r\n<\/pre>\n<p>Submit the jobscript using:<br \/>\n<code>qsub <em>scriptname<\/em><\/code><\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/lammps.sandia.gov\/\">LAMMPS<\/a> website<\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>Dec 2018 &#8211; <em>make yes-user-misc<\/em> added to 22.08.18 builds.<br \/>\nNov 2018 &#8211; 22.08.18 version built for CPU and CPU\/GPU on CSF3.<br \/>\nNov 2018 &#8211; 11.08.17 version built for CPU and CPU\/GPU on CSF3.<br \/>\nNov 2018 &#8211; 30.07.16 version built for CPU and CPU\/GPU on CSF3.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview LAMMPS (Large-scale Atomic\/Molecular Massively Parallel Simulator) is a classical molecular dynamics code and has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. Several versions are installed on the CSF: Version 29-Aug-24 (CPU and GPU builds with PLUMED, many additional packages, python interface and JPEG\/PNG.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/lammps\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":6,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1640","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/1640","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=1640"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/1640\/revisions"}],"predecessor-version":[{"id":10405,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/1640\/revisions\/10405"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=1640"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}