{"id":750,"date":"2013-06-05T15:53:01","date_gmt":"2013-06-05T15:53:01","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/?page_id=750"},"modified":"2018-10-10T13:27:42","modified_gmt":"2018-10-10T13:27:42","slug":"lammps","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/lammps\/","title":{"rendered":"LAMMPS"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"http:\/\/lammps.sandia.gov\/\">LAMMPS<\/a> (Large-scale Atomic\/Molecular Massively Parallel Simulator) is a classical molecular dynamics code and has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.<\/p>\n<p>Several versions are installed on the CSF:<\/p>\n<ul>\n<li>Version 30-May-13 (CPU only)<\/li>\n<li>Version 30-Sep-13 (CPU and GPU builds)<\/li>\n<li>Version 30-Sep-13 (CPU and GPU builds with many additional packages)<\/li>\n<li>Version 30-Sep-13 (CPU and GPU builds with many additional packages and user reaxc)<\/li>\n<li>Version 01-Feb-14 (CPU and GPU builds with many additional packages and user reaxc)<\/li>\n<li>Version 15-May-15 (CPU and GPU builds with many additional packages and user reaxc)<\/li>\n<li>Version 30-Jul-16 (CPU and GPU builds with many additional packages and user reaxc, python interface and JPEG\/PNG support)<\/li>\n<li>Version 11-Aug-17 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG\/PNG support)<\/li>\n<\/ul>\n<p>The 11-Aug-17 version has been compiled with the Intel 15.0.3 compiler with multiple code paths allowing optimised usage on Sandybridge, Ivybridge and Broadwell hardware if available. The Intel MKL 11.2u3 provides the FFT implementation. OpenMPI 1.8.3 provides the MPI Library. PLUMED 2.4.0 has been patched in to (some of) the executables.<\/p>\n<p>The 30-Jul-16 version has been compiled with the Intel 15.0.3 compiler with multiple code paths allowing optimised use on Sandybridge, Ivybridge and Broadwell hardware if available. The Intel MKL 11.2u3 provides the FFT implementation. OpenMPI 1.6 provides the MPI library.<\/p>\n<p>Previous versions of LAMMPS have been compiled with the Intel 12.0.5 compiler with multiple code paths allowing optimised usage on Sandybridge hardware if available. The Intel MKL 10.3u5 library provides the FFT implementation. OpenMPI 1.6 was used for the MPI implementation.<\/p>\n<p>Compilation for the CPU only and CPU+GPU builds included the following LAMMPS standard packages: ASPHERE, KSPACE, MANYBODY, MOLECULE. In addition the GPU package was used for the gpu build.<\/p>\n<p>Compilation for the CPU+GPU builds with many additional packages included the following LAMMPS standard packages: ASPHERE, BODY, CLASS2, COLLOID, DIPOLE, FLD, GPU, exist, exist, exist, exist, exist, exist, GRANULAR, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, OPT, PERI, POEMS, REAX, REPLICA, RIGID, SHOCK, SRD, VORONOI, XTC. Please note that the KIM and KOKKOS packages were <em>not<\/em> built (KIM has been built in the 30.07.16 build). 
If you require these packages please contact <a href="mailto:its-ri-team@manchester.ac.uk">its-ri-team@manchester.ac.uk</a>.</p>
<p>In addition, another build of the above <em>many package</em> version has been made which also includes the <code>REAXC</code> <em>user</em> package. If you require additional user packages please contact <a href="mailto:its-ri-team@manchester.ac.uk">its-ri-team@manchester.ac.uk</a>.</p>
<p>GPU builds are available in single precision, double precision and mixed precision versions (mixed precision means forces etc. are accumulated in double precision). Please see <code>$LAMMPS_HOME/lib/gpu/README</code> for more information about the build procedure.</p>
<p>Please contact <a href="mailto:its-ri-team@manchester.ac.uk">its-ri-team@manchester.ac.uk</a> if you require other packages to be compiled.</p>
<p>Various tools have been compiled for pre- and post-processing: binary2txt, restart2data, chain, micelle2d, data2xmovie.</p>
<h2>Restrictions on use</h2>
<p>There are no restrictions on accessing LAMMPS. It is distributed as an open source code under the terms of the <a href="http://www.gnu.org/copyleft/gpl.html">GPL</a>.</p>
<h2>Set up procedure</h2>
<p>To access the software you must first load the modulefile. It will set up the MPI environment, so you must select either the InfiniBand networking (modulefile names contain <code>-ib-</code>) or non-IB (Ethernet) networking version. Note that the GPU version does not support InfiniBand.</p>
<p>You should use the InfiniBand modulefile only for larger multi-node jobs where the number of cores is a multiple of 24 (running in <code>orte-24-ib.pe</code>) and at least two compute nodes are used. You should choose the non-InfiniBand modulefile for smaller, single-node (multi-core) jobs.</p>
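<p>If you are unsure which versions are currently installed, you can list the available LAMMPS modulefiles before loading one. The following check is only a minimal sketch (the names returned will depend on what is installed at the time you run it):</p>
<pre>
# List the LAMMPS modulefiles provided for each compiler version
module avail apps/intel-12.0/lammps
module avail apps/intel-15.0/lammps
</pre>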
<ul>
<li>CPU only &#8211; choose only <strong>one</strong> <code>module load</code> command from the following:
<pre>
# v11.08.17 with PLUMED - InfiniBand or non-IB, with additional lammps packages and python.
# Note that you must load the plumed modulefile before the lammps modulefile.
module load apps/intel-15.0/plumed/2.4.0-mpi-ib
module load apps/intel-15.0/lammps/11.08.17-ib-packs-user-python     # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png

module load apps/intel-15.0/plumed/2.4.0-mpi
module load apps/intel-15.0/lammps/11.08.17-packs-user-python        # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png

# v11.08.17 - InfiniBand or non-IB, with additional lammps packages and python.
module load apps/intel-15.0/lammps/11.08.17-ib-packs-user-python     # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png
module load apps/intel-15.0/lammps/11.08.17-packs-user-python        # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png

# v30.07.16 - InfiniBand or non-IB, with additional lammps packages and python.
module load apps/intel-15.0/lammps/30.07.16-ib-packs-user-python     # USER-REAXC package, jpeg/png
module load apps/intel-15.0/lammps/30.07.16-packs-user-python        # USER-REAXC package, jpeg/png

# v15.05.15 - InfiniBand or non-IB, with additional lammps packages.
module load apps/intel-12.0/lammps/15.05.15-ib-packs-user     # USER-REAXC package
module load apps/intel-12.0/lammps/15.05.15-packs-user        # USER-REAXC package

# v01.02.14 - InfiniBand or non-IB, with additional lammps packages.
module load apps/intel-12.0/lammps/01.02.14-ib-packs-user     # USER-REAXC package
module load apps/intel-12.0/lammps/01.02.14-packs-user        # USER-REAXC package

# v30.09.13 - InfiniBand only, without or with additional lammps packages.
module load apps/intel-12.0/lammps/30.09.13-ib
module load apps/intel-12.0/lammps/30.09.13-ib-packs
module load apps/intel-12.0/lammps/30.09.13-ib-packs-user     # USER-REAXC package

# v30.09.13 - non-IB only, without or with additional lammps packages.
module load apps/intel-12.0/lammps/30.09.13
module load apps/intel-12.0/lammps/30.09.13-packs
module load apps/intel-12.0/lammps/30.09.13-packs-user        # USER-REAXC package

# v30.05.13 - InfiniBand or non-IB
module load apps/intel-12.0/lammps/30.05.13-ib
module load apps/intel-12.0/lammps/30.05.13
</pre>
</li>
<li>CPU+GPU &#8211; choose <strong>one</strong> of the following:
<pre>
# v11.08.17 with PLUMED - with GPU support, additional lammps packages and python.
# Note that you must load the plumed modulefile before the lammps modulefile.
module load apps/intel-15.0/plumed/2.4.0-mpi
module load apps/intel-15.0/lammps/11.08.17-gpu-packs-user-python     # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png

# v30.07.16 - with GPU support, additional lammps packages and python.
module load apps/intel-15.0/lammps/30.07.16-gpu-packs-user-python     # USER-REAXC package, jpeg/png

# v15.05.15 - with GPU support and with additional lammps packages.
module load apps/intel-12.0/lammps/15.05.15-gpu-packs-user     # USER-REAXC package

# v01.02.14 - with GPU support and with additional lammps packages.
module load apps/intel-12.0/lammps/01.02.14-gpu-packs-user     # USER-REAXC package

# v30.09.13 - with GPU support, without or with additional lammps packages.
module load apps/intel-12.0/lammps/30.09.13-gpu
module load apps/intel-12.0/lammps/30.09.13-gpu-packs
module load apps/intel-12.0/lammps/30.09.13-gpu-packs-user     # USER-REAXC package
</pre>
</li>
</ul>
<p>Note that the GPU version <em>must</em> be run on the CSF GPU nodes even if you are not actually using the GPU features. This is because the LAMMPS executables are linked against the CUDA library, which is only available on a GPU node.</p>
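<p>Whichever modulefile you load, it is worth confirming the environment before submitting any jobs. A minimal check along the following lines should work (the exact paths reported will depend on the version you loaded):</p>
<pre>
module list          # confirm the expected lammps (and, if needed, plumed) modulefiles are loaded
which lmp_linux      # confirm the LAMMPS executable is on your PATH
echo $LAMMPS_HOME    # location of the installation, referred to elsewhere on this page
</pre>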
<h2>Running the application</h2>
<p>Please do not run LAMMPS on the login node. Jobs should be submitted to the compute nodes via batch. The GPU version <strong>must</strong> be submitted to a GPU node &#8211; it will not run otherwise.</p>
<p>Note also that LAMMPS may produce very large files (particularly the trajectory file ending in <code>.trj</code> and the potentials file ending in <code>.pot</code>). Hence you <em>must</em> run from your scratch directory. This will prevent your job filling up the home area. If you do not need certain files in your results, please turn off writing of the specific files in your control file (e.g., <code>lmp_control</code>) or delete them in your jobscript using:</p>
<pre>
rm -f *.trj
rm -f *.pot
</pre>
<h3>Serial CPU batch job submission (non-IB only)</h3>
<p>LAMMPS is usually run in parallel, but it, and the pre/post-processing tools, can also be run in serial. Make sure you have the appropriate non-IB modulefile loaded, then create a batch submission script, for example:</p>
<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -cwd             # Run from the current directory (input files in here)
#$ -V               # Inherit current environment when job runs

lmp_linux < infile > outfile

# Optional: delete any unwanted output files that may be huge
rm -f *.trj
</pre>
<p>Submit the jobscript using:</p>
<pre>qsub <em>scriptname</em></pre>
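<p>The pre/post-processing tools listed earlier (binary2txt, restart2data, and so on) can be submitted in the same serial fashion. The following is only an illustrative sketch: the dump filenames are placeholders and you should check the LAMMPS documentation for the exact arguments each tool expects:</p>
<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -cwd             # Run from the directory containing the binary dump files
#$ -V               # Inherit current environment when job runs

# Convert binary LAMMPS dump files to text (a .txt file is written for each input)
binary2txt dump.0 dump.1000
</pre>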
<h3>Single-node Parallel CPU batch job submission: 2 to 24 cores (non-IB only)</h3>
<p>The following jobscript will run LAMMPS (load the correct non-IB modulefile first).</p>
<p>NOTE: If running the version with PLUMED support, please run <code>lmp_linux_plumed</code>.</p>
<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 24   # Minimum 2, maximum 24

mpirun -n $NSLOTS lmp_linux < infile > outfile
                   #
                   # Use lmp_linux_plumed if using the version with PLUMED added
</pre>
<p>Submit the jobscript using:</p>
<pre>qsub <em>scriptname</em></pre>
<h3>Multi-node Parallel CPU batch job submission (InfiniBand only)</h3>
<p>These jobs must use 48 cores or more, in multiples of 24, when running in <code>orte-24-ib.pe</code>.</p>
<p>If the <code>lmp_linux</code> executable is run on InfiniBand-connected hardware then do not use Sandybridge nodes. The following jobscript will run LAMMPS (load the correct IB modulefile first).</p>
<p>NOTE: If running the version with PLUMED support, please run <code>lmp_linux_plumed</code>.</p>
<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-24-ib.pe 48    # Must be a minimum of 48 AND a multiple of 24

mpirun -n $NSLOTS lmp_linux < infile > outfile
                   #
                   # Use lmp_linux_plumed if using the version with PLUMED added
</pre>
<p>Submit the jobscript using:</p>
<pre>qsub <em>scriptname</em></pre>
<h3>Serial GPU batch job submission</h3>
<p>The CUDA kernel precision (single, double, mixed) is determined by the name of the executable you use in the jobscript. The executables are named:</p>
<ul>
<li><code>lmp_linux_gpu_single</code></li>
<li><code>lmp_linux_gpu_double</code></li>
<li><code>lmp_linux_gpu_mixed</code> (this is the only version compiled in v11.08.17)</li>
</ul>
<p>For example, to run the double precision GPU version on one of the CSF GPU nodes (containing a single Nvidia GPU):</p>
<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -l nvidia_k20               # Select a GPU node

## The LAMMPS arg '-v g 1' sets a variable named g = 1
## and the input file uses this as the number of GPUs to use.
## See $LAMMPS_HOME/bench/GPU/in.lj.gpu for the input file.

lmp_linux_gpu_double -sf gpu -c off -v g 1 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj.gpu > outfile.gpu
  #
  # Use lmp_linux_gpu_mixed_plumed if using the version with PLUMED added
</pre>
<p>Submit the jobscript using:</p>
<pre>qsub <em>scriptname</em></pre>
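<p>If you want to confirm that your job has landed on a GPU node and can see the GPU, one simple approach is to print the GPU details at the top of the jobscript. This is just a suggested sketch; the exact output depends on the node and driver version:</p>
<pre>
# Print details of the GPU(s) visible to the job before starting LAMMPS
nvidia-smi
</pre>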
<h3>Parallel GPU batch job submission &#8211; Not currently available</h3>
<p><del datetime="2018-10-10T13:24:00+00:00">It is possible to run multiple LAMMPS MPI processes on a multi-core CPU, all of which use a single GPU in the node on which they are running. However, we do <strong>not</strong> specify a PE in the jobscript. We submit a serial job to the CSF GPU node. We will be given exclusive use of the GPU node so can safely run multiple MPI (CPU) processes on that node.</del></p>
<p><del datetime="2018-10-10T13:25:14+00:00">The CUDA kernel precision (single, double, mixed) is determined by the name of the executable you use in the jobscript. The executables are named:</del></p>
<ul>
<li><code>lmp_linux_gpu_single</code></li>
<li><code>lmp_linux_gpu_double</code></li>
<li><code>lmp_linux_gpu_mixed</code> (this is the only version compiled in v11.08.17)</li>
</ul>
<p>For example, to run the mixed precision GPU version on one of the CSF Nvidia nodes (containing a single Nvidia GPU and 12 CPU cores):</p>
<pre>
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -l nvidia_k20               # Select a GPU node

## NOTE: We do not specify a PE. Hence it appears to be a serial
##       job. But we have exclusive access to the GPU node so can
##       run more than one MPI process. They will all access the
##       same GPU (LAMMPS supports this mode of operation).

## The LAMMPS arg '-v g 1' sets a variable named g = 1
## and the input file uses this as the number of GPUs to use.
## See $LAMMPS_HOME/bench/GPU/in.lj.gpu for the input file.

# 12 MPI processes will run, each using the same GPU
mpirun -n 12 lmp_linux_gpu_mixed -sf gpu -c off -v g 1 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj.gpu > outfile.gpu
              #
              # Use lmp_linux_gpu_mixed_plumed if using the version with PLUMED added
</pre>
<p>Submit the jobscript using:</p>
<pre>qsub <em>scriptname</em></pre>
<h2>Further info</h2>
<ul>
<li><a href="http://lammps.sandia.gov/">LAMMPS</a> website</li>
</ul>
<h2>Updates</h2>
<p>Jul 2014 &#8211; <em>make yes-user-reaxc</em> build of the 30.09.13 (with packages) version.<br />
Apr 2014 &#8211; <em>make yes-standard</em> build of 30.09.13 version.<br />
Oct 2013 &#8211; GPU build of 30.09.13 version.</p>