LAMMPS
Overview
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics code with potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso or continuum scale.
Several versions are installed on the CSF:
- Version 29-Aug-24 (CPU and GPU builds with PLUMED, many additional packages, python interface and JPEG/PNG support)
- Version 02-Aug-23 (CPU and GPU builds with PLUMED, many additional packages, python interface and JPEG/PNG support)
- Version 29-Sep-21 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG/PNG support)
- Version 29-Oct-20 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG/PNG support)
- Version 03-Mar-20 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG/PNG support)
- Version 22-Aug-18 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG/PNG support)
- Version 11-Aug-17 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG/PNG support)
- Version 30-Jul-16 (CPU and GPU builds with many additional packages and user reaxc, python interface and JPEG/PNG support)
Version 29-Aug-24 has been built with the GCC compiler, with FFTW3 providing the FFT implementation. All versions prior to 29-Aug-24 have been compiled with the Intel compiler suite, with multiple code paths allowing optimised usage on Ivybridge, Broadwell, Haswell and Skylake hardware where available; for these builds Intel MKL provides the FFT implementation.
For the 29.08.24 CPU/GPU build the following packages are included: ASPHERE BOCS BODY BROWNIAN CG-DNA CLASS2 COLLOID COLVARS COMPRESS CORESHELL DIELECTRIC DIFFRACTION DIPOLE DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF EXTRA-COMPUTE EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP GRANULAR INTERLAYER KIM KSPACE MACHDYN MANYBODY MC MDI MEAM MESONT MISC ML-PACE ML-SNAP MOFFF MOLECULE MOLFILE NETCDF OPENMP OPT ORIENT PERI PHONON PLUGIN PLUMED POEMS PTM PYTHON QEQ REACTION REAXFF REPLICA RIGID SCAFACOS SHOCK SPH SPIN SRD TALLY UEF VORONOI YAFF
For the 02.08.23 CPU/GPU build the following packages are included: ASPHERE BOCS BODY BROWNIAN CG-DNA CLASS2 COLLOID COLVARS COMPRESS CORESHELL DIELECTRIC DIFFRACTION DIPOLE DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF EXTRA-COMPUTE EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP GRANULAR INTERLAYER KIM KSPACE MACHDYN MANYBODY MC MDI MEAM MESONT MISC ML-PACE ML-SNAP MOFFF MOLECULE OPENMP OPT ORIENT PERI PHONON PLUGIN PLUMED POEMS PTM PYTHON QEQ REACTION REAXFF REPLICA RIGID SCAFACOS SHOCK SPH SPIN SRD TALLY UEF VORONOI YAFF
For the 29.09.21 CPU build the following packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KOKKOS, KSPACE, MANYBODY, MC, MESSAGE, MISC, MLIAP, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REPLICA, RIGID, SHOCK, SNAP, SPIN, SRD, USER-ATC, USER-AWPMD, USER-BOCS, USER-CGDNA, USER-CGSDK, USER-COLVARS, USER-DIFFRACTION, USER-DPD, USER-DRUDE, USER-EFF, USER-FEP, USER-INTEL, USER-LB, USER-MANIFOLD, USER-MEAMC, USER-MESODPD, USER-MESONT, USER-MGPT, USER-MISC, USER-MOFFF, USER-MOLFILE, USER-OMP, USER-PHONON, USER-PLUMED, USER-PTM, USER-QMMM, USER-QTB, USER-REACTION, USER-REAXC, USER-SCAFACOS, USER-SDPD, USER-SMD, USER-SMTBQ, USER-SPH, USER-TALLY, USER-UEF, USER-YAFF, VORONOI. In addition the GPU builds have the package GPU.
For the 29.10.20 CPU build the following packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KOKKOS, KSPACE, MANYBODY, MC, MESSAGE, MISC, MLIAP, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REPLICA, RIGID, SHOCK, SNAP, SPIN, SRD, USER-ATC, USER-AWPMD, USER-BOCS, USER-CGDNA, USER-CGSDK, USER-COLVARS, USER-DIFFRACTION, USER-DPD, USER-DRUDE, USER-EFF, USER-FEP, USER-INTEL, USER-LB, USER-MANIFOLD, USER-MEAMC, USER-MESODPD, USER-MESONT, USER-MGPT, USER-MISC, USER-MOFFF, USER-MOLFILE, USER-OMP, USER-PHONON, USER-PLUMED, USER-PTM, USER-QMMM, USER-QTB, USER-REACTION, USER-REAXC, USER-SCAFACOS, USER-SDPD, USER-SMD, USER-SMTBQ, USER-SPH, USER-TALLY, USER-UEF, USER-YAFF, VORONOI. In addition the GPU builds have the package GPU.
For the 22.08.18 CPU build the following standard packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REAX, REPLICA, RIGID, SHOCK, SNAP, SPIN, SRD, VORONOI, USER-COLVARS, USER-DPD, USER-MISC, USER-REAXC. In addition the GPU builds have the package GPU.
For the 11.08.17 CPU build the following standard packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REAX, REPLICA, RIGID, SHOCK, SNAP, SRD, VORONOI, USER-COLVARS, USER-DPD, USER-REAXC. In addition the GPU builds have the package GPU.
For the 30.07.16 CPU build the following standard packages are included: ASPHERE, BODY, CLASS2, COLLOID, COMPRESS, CORESHELL, DIPOLE, GRANULAR, KIM, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, MPIIO, OPT, PERI, POEMS, PYTHON, QEQ, REAX, REPLICA, RIGID, SHOCK, SNAP, SRD, VORONOI, USER-COLVARS, USER-DPD, USER-REAXC. In addition the GPU builds have the package GPU.
If you require additional user packages please contact its-ri-team@manchester.ac.uk.
GPU builds are available in single precision, double precision and mixed precision versions (where mixed precision means accumulation of forces, etc. is done in double precision). Please see $LAMMPS_HOME/lib/gpu/README for more information about the build procedure.
Various tools have been compiled for pre- and post-processing: binary2txt, restart2data, chain, micelle2d, data2xmovie.
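As an illustration, the sketch below converts a binary LAMMPS dump file to plain text with binary2txt. The dump file name (dump.equil.bin) is hypothetical; the tools become available on your PATH once a LAMMPS modulefile is loaded.

# A minimal sketch, assuming a binary dump file called dump.equil.bin exists in the current directory
module load apps/intel-17.0/lammps/22.08.18-packs-user-python
binary2txt dump.equil.bin      # writes a text version alongside the binary file (typically dump.equil.bin.txt)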
Restrictions on use
There are no restrictions on accessing LAMMPS. It is distributed as an open source code under the terms of the GPL.
Set up procedure
To access the software you must first load the modulefile.
module load modulefile
where modulefile is replaced with the relevant module file as listed below.
NOTE: we now recommend loading the modulefile in your batch script.
- CPU and CPU+GPU – choose only one module load command from the following:

# v29.08.24 - MPI with additional lammps packages and python.
module load apps/gcc/lammps/29.08.24-packs-user

# v02.08.23 - MPI with additional lammps packages and python.
module load apps/intel-19.1/lammps/02.08.23-packs-user

# v29.09.21 - MPI with additional lammps packages and python.
module load apps/intel-19.1/lammps/29.09.21-packs-user

# v29.10.20 - MPI with additional lammps packages and python.
module load apps/intel-18.0/lammps/29.10.20-packs-user

# v03.03.20 - MPI with additional lammps packages and python.
module load apps/intel-18.0/lammps/03.03.20-packs-user

# v22.08.18 with PLUMED - MPI with Plumed and additional lammps packages and python.
module load apps/intel-17.0/lammps/22.08.18-packs-user-python-plumed

# v22.08.18 - MPI with additional lammps packages and python.
module load apps/intel-17.0/lammps/22.08.18-packs-user-python

# v11.08.17 with PLUMED - MPI with Plumed and additional lammps packages and python.
module load apps/intel-17.0/lammps/11.08.17-packs-user-python-plumed

# v11.08.17 - MPI with additional lammps packages and python.
module load apps/intel-17.0/lammps/11.08.17-packs-user-python

# v30.07.16 - MPI with additional lammps packages and python.
module load apps/intel-17.0/lammps/30.07.16-packs-user-python
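If you are unsure what a given modulefile provides (executable names, environment variables such as $LAMMPS_HOME), you can inspect it before loading it. A brief sketch, using the 29.08.24 modulefile as an example:

# Show what the modulefile sets (paths, environment variables) without loading it
module show apps/gcc/lammps/29.08.24-packs-user

# Then load it, preferably in your batch script as recommended above
module load apps/gcc/lammps/29.08.24-packs-user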
Running the application
Please do not run LAMMPS on the login node. Jobs should be submitted to the compute nodes via batch. The GPU version must be submitted to a GPU node – it will not run otherwise.
Note also that LAMMPS may produce very large files (particularly the trajectory file ending in .trj and the potentials file ending in .pot). Hence you must run from your scratch directory. This will prevent your job filling up the home area. If you do not need certain files in your results, please turn off writing of the specific files in your control file (e.g., lmp_control) or delete them in your jobscript using:

rm -f *.trj
rm -f *.pot
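For example, a typical workflow (a sketch only; the directory and file names are hypothetical, and it assumes your scratch area is available at ~/scratch) is to copy your input files to scratch and submit from there:

# Run from scratch so large .trj/.pot files do not fill the home area
mkdir -p ~/scratch/lammps_run
cp infile lmp_control ~/scratch/lammps_run/
cd ~/scratch/lammps_run
qsub scriptname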
Serial CPU batch job submission
LAMMPS itself is usually run in parallel, but the pre/post-processing tools can be run in serial. Create a batch submission script that loads the most appropriate LAMMPS modulefile, for example:
#!/bin/bash --login
#$ -cwd                  # Run from the current directory (input files in here)
module load apps/intel-17.0/lammps/22.08.18-packs-user-python
lmp_linux -in infile

# Optional: delete any unwanted output files that may be huge
rm -f *.trj
Submit the jobscript using:
qsub scriptname
Single-node Parallel CPU batch job submission: 2 to 32 cores
The following jobscript will run LAMMPS with 24 cores on a single node:
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 24         # Minimum 2, maximum 32
module load apps/intel-17.0/lammps/22.08.18-packs-user-python
mpirun -n $NSLOTS lmp_linux -in infile
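Some of the newer builds also include the OPENMP (formerly USER-OMP) package, so hybrid MPI+OpenMP runs are possible. The following is a sketch only, not a documented CSF recipe: it assumes the 29.09.21 build provides the same lmp_linux executable name (check with module show), and splits the 24 requested cores into 12 MPI ranks with 2 OpenMP threads each.

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 24                 # 24 cores in total
module load apps/intel-19.1/lammps/29.09.21-packs-user
export OMP_NUM_THREADS=2         # OpenMP threads per MPI rank
# '-sf omp' applies the omp suffix to supported styles; '-pk omp 2' sets 2 threads per rank
mpirun -n 12 lmp_linux -sf omp -pk omp 2 -in infile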
Submit the jobscript using:
qsub scriptname
Multi-node Parallel CPU batch job submission
These jobs must be 48 cores or more, in multiples of 24, when running in mpi-24-ib.pe.
The following jobscript will run LAMMPS:
#!/bin/bash --login
#$ -cwd
#$ -pe mpi-24-ib.pe 48   # Must be a minimum of 48 AND a multiple of 24.
module load apps/intel-17.0/lammps/22.08.18-packs-user-python
mpirun -n $NSLOTS lmp_linux -in infile
Submit the jobscript using:
qsub scriptname
Running on a single GPU
You need to request to be added to the relevant group for GPU access before you can run LAMMPS on the GPUs.
If you have ‘free at the point of use’ access to the GPUs then the maximum number of GPUs you can request is 2.
The CUDA kernel precision (single, double, mixed) is determined by the name of the executable you use in the jobscript. The executables are named:
lmp_linux_gpu_single
lmp_linux_gpu_double
lmp_linux_gpu_mixed
For technical reasons it is not possible to use more than one CPU in conjunction with a GPU.
#!/bin/bash --login
#$ -cwd
#$ -l v100
module load apps/intel-17.0/lammps/22.08.18-packs-user-python

## The LAMMPS arg '-pk gpu ${NGPUS}' tells lammps we are using ${NGPUS} GPUs, where ${NGPUS}=1 by default.
## See $LAMMPS_HOME/bench/GPU/bench.in.gpu for the input file.
lmp_linux_gpu_double -sf gpu -nc -pk gpu ${NGPUS} -in bench.in.gpu
Submit the jobscript using:
qsub scriptname
Running on several GPUs
You need to request to be added to the relevant group for GPU access before you can run LAMMPS on the GPUs.
If you have ‘free at the point of use’ access to the GPUs then the maximum number of GPUs you can request is 2.
For technical reasons it is not possible to use more than one CPU in conjunction with a GPU. However, it is possible to use multiple GPUs. Each of the v100 nodes currently on CSF contains 4 GPUs.
The CUDA kernel precision (single, double, mixed) is determined by the name of the executable you use in the jobscript. The executables are named:
lmp_linux_gpu_single
lmp_linux_gpu_double
lmp_linux_gpu_mixed
For example, to run the mixed precision job on 2 GPUs and 1 CPU on the CSF v100 nodes:
#!/bin/bash --login
#$ -cwd
#$ -l v100=2             # Select a GPU node
module load apps/intel-17.0/lammps/22.08.18-packs-user-python

## Use '-pk gpu ${NGPUS}' to tell lammps we are using the number of GPUs requested above.
## See $LAMMPS_HOME/bench/GPU/bench.in.gpu for the input file.
lmp_linux_gpu_mixed -sf gpu -nc -pk gpu ${NGPUS} -in bench.in.gpu
Submit the jobscript using:
qsub scriptname
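Once the job has finished you may wish to confirm that the GPU package was actually used. A quick check (a sketch: log.lammps is the default LAMMPS log file name; yours may differ if you redirect the log):

# Look for the GPU package initialisation / device summary in the LAMMPS log
grep -i gpu log.lammps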
Further info
- LAMMPS website: https://www.lammps.org
Updates
Dec 2018 – make yes-user-misc added to 22.08.18 builds.
Nov 2018 – 22.08.18 version built for CPU and CPU/GPU on CSF3.
Nov 2018 – 11.08.17 version built for CPU and CPU/GPU on CSF3.
Nov 2018 – 30.07.16 version built for CPU and CPU/GPU on CSF3.