The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
LAMMPS
Overview
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics code and has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.
Several versions are installed on the CSF:
- Version 30-May-13 (CPU only)
- Version 30-Sep-13 (CPU and GPU builds)
- Version 30-Sep-13 (CPU and GPU builds with many additional packages)
- Version 30-Sep-13 (CPU and GPU builds with many additional packages and user reaxc)
- Version 01-Feb-14 (CPU and GPU builds with many additional packages and user reaxc)
- Version 15-May-15 (CPU and GPU builds with many additional packages and user reaxc)
- Version 30-Jul-16 (CPU and GPU builds with many additional packages and user reaxc, python interface and JPEG/PNG support)
- Version 11-Aug-17 (CPU and GPU builds with PLUMED, many additional packages and user reaxc, python interface and JPEG/PNG support)
The 11-Aug-17 version has been compiled with the Intel 15.0.3 compiler with multiple code paths allowing optimised usage on Sandybridge, Ivybridge and Broadwell hardware if available. The Intel MKL 11.2u3 provides the FFT implementation. OpenMPI 1.8.3 provides the MPI Library. PLUMED 2.4.0 has been patched in to (some of) the executables.
The 30-Jul-16 version has been compiled with the Intel 15.0.3 compiler with multiple code paths allowing optimised use on Sandybridge, Ivybridge and Broadwell hardware if available. The Intel MKL 11.2u3 provides the FFT implementation. OpenMPI 1.6 provides the MPI library.
Previous versions of LAMMPS have been compiled with the Intel 12.0.5 compiler with multiple code paths allowing optimised usage on Sandybridge hardware if available. The Intel MKL 10.3u5 library provides the FFT implementation. OpenMPI 1.6 was used for the MPI implementation.
Compilation for the CPU only and CPU+GPU builds included the following LAMMPS standard packages: ASPHERE, KSPACE, MANYBODY, MOLECULE. In addition the GPU package was used for the gpu build.
Compilation for the CPU+GPU builds with many additional packages included the following LAMMPS standard packages: ASPHERE, BODY, CLASS2, COLLOID, DIPOLE, FLD, GPU, GRANULAR, KSPACE, MANYBODY, MC, MEAM, MISC, MOLECULE, OPT, PERI, POEMS, REAX, REPLICA, RIGID, SHOCK, SRD, VORONOI, XTC. Please note that the KIM and KOKKOS packages were not built (KIM has been included in the 30.07.16 build). If you require these packages please contact its-ri-team@manchester.ac.uk.
In addition, another build of the many-package version above has been produced with the USER-REAXC user package included. If you require additional user packages please contact its-ri-team@manchester.ac.uk.
GPU builds are available in single precision, double precision and mixed precision versions (where mixed precision means that accumulation of forces, etc. is done in double precision). Please see $LAMMPS_HOME/lib/gpu/README for more information about the build procedure.
Please contact its-ri-team@manchester.ac.uk if you require other packages to be compiled.
Various tools have been compiled for pre- and post-processing: binary2txt, restart2data, chain, micelle2d, data2xmovie.
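For example, the binary2txt and restart2data tools can be run from the command line once the modulefile is loaded. The sketch below follows the standard usage of these LAMMPS tools; the filenames are placeholders, so check the LAMMPS tools documentation for the exact arguments each tool expects.
# Convert a LAMMPS binary dump file to an ASCII text file (produces dump.0.bin.txt)
binary2txt dump.0.bin
# Convert a binary restart file to a text data file (restart and data filenames are placeholders)
restart2data restart.equil data.equil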
Restrictions on use
There are no restrictions on accessing LAMMPS. It is distributed as an open source code under the terms of the GPL.
Set up procedure
To access the software you must first load the modulefile. It will set up the MPI environment, so you must select either the InfiniBand networking (modulefile names contain -ib-) or non-IB (Ethernet) networking version. Note that the GPU version does not support InfiniBand.
You should use the InfiniBand modulefile only for larger multi-node jobs where the number of cores is a multiple of 24 (running in orte-24-ib.pe) and at least two compute nodes are used. You should choose the non-InfiniBand modulefile for smaller, single-node (multi-core) jobs.
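If you want to see exactly which LAMMPS modulefiles are installed before choosing one, you can list them with the standard environment-modules command (a quick check; the list shown will depend on the current CSF software tree):
# List the available LAMMPS modulefiles
module avail apps/intel-15.0/lammps
module avail apps/intel-12.0/lammps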
- CPU only – choose only one module load command from the following (a quick check that the right executable is on your path is shown after this list):
# v11.08.17 with PLUMED - InfiniBand or non-IB, with additional lammps packages and python.
# Note that you must load the plumed modulefile before the lammps modulefile.
module load apps/intel-15.0/plumed/2.4.0-mpi-ib
module load apps/intel-15.0/lammps/11.08.17-ib-packs-user-python   # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png
module load apps/intel-15.0/plumed/2.4.0-mpi
module load apps/intel-15.0/lammps/11.08.17-packs-user-python      # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png

# v11.08.17 - InfiniBand or non-IB, with additional lammps packages and python.
module load apps/intel-15.0/lammps/11.08.17-ib-packs-user-python   # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png
module load apps/intel-15.0/lammps/11.08.17-packs-user-python      # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png

# v30.07.16 - InfiniBand or non-IB, with additional lammps packages and python.
module load apps/intel-15.0/lammps/30.07.16-ib-packs-user-python   # USER-REAXC package, jpeg/png
module load apps/intel-15.0/lammps/30.07.16-packs-user-python      # USER-REAXC package, jpeg/png

# v15.05.15 - InfiniBand or non-IB, with additional lammps packages.
module load apps/intel-12.0/lammps/15.05.15-ib-packs-user          # USER-REAXC package
module load apps/intel-12.0/lammps/15.05.15-packs-user             # USER-REAXC package

# v01.02.14 - InfiniBand or non-IB, with additional lammps packages.
module load apps/intel-12.0/lammps/01.02.14-ib-packs-user          # USER-REAXC package
module load apps/intel-12.0/lammps/01.02.14-packs-user             # USER-REAXC package

# v30.09.13 - InfiniBand only, without or with additional lammps packages.
module load apps/intel-12.0/lammps/30.09.13-ib
module load apps/intel-12.0/lammps/30.09.13-ib-packs
module load apps/intel-12.0/lammps/30.09.13-ib-packs-user          # USER-REAXC package

# v30.09.13 - non-IB only, without or with additional lammps packages.
module load apps/intel-12.0/lammps/30.09.13
module load apps/intel-12.0/lammps/30.09.13-packs
module load apps/intel-12.0/lammps/30.09.13-packs-user             # USER-REAXC package

# v30.05.13 - InfiniBand or non-IB
module load apps/intel-12.0/lammps/30.05.13-ib
module load apps/intel-12.0/lammps/30.05.13
- CPU+GPU – choose one of the following:
# v11.08.17 with PLUMED - with GPU support, with additional lammps packages and python.
# Note that you must load the plumed modulefile before the lammps modulefile.
module load apps/intel-15.0/plumed/2.4.0-mpi
module load apps/intel-15.0/lammps/11.08.17-gpu-packs-user-python  # USER-REAXC, USER-COLVARS, USER-DPD packages, jpeg/png

# v30.07.16 - with GPU support, additional lammps packages and python.
module load apps/intel-15.0/lammps/30.07.16-gpu-packs-user-python  # USER-REAXC package, jpeg/png

# v15.05.15 - with GPU support and with additional lammps packages.
module load apps/intel-12.0/lammps/15.05.15-gpu-packs-user         # USER-REAXC package

# v01.02.14 - with GPU support and with additional lammps packages.
module load apps/intel-12.0/lammps/01.02.14-gpu-packs-user         # USER-REAXC package

# v30.09.13 - with GPU support, without or with additional lammps packages.
module load apps/intel-12.0/lammps/30.09.13-gpu
module load apps/intel-12.0/lammps/30.09.13-gpu-packs
module load apps/intel-12.0/lammps/30.09.13-gpu-packs-user         # USER-REAXC package
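After loading your chosen modulefile, a quick sanity check (a minimal sketch using standard shell commands) confirms that the expected LAMMPS executable is on your PATH:
# Confirm which LAMMPS executable the modulefile has put on the PATH
which lmp_linux            # CPU builds
which lmp_linux_gpu_mixed  # GPU builds (mixed precision)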
Note that the GPU version must be run on the CSF GPU nodes even if you are not actually using the GPU features. This is because the LAMMPS executables are linked against the CUDA library, which is only available on a GPU node.
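If you are unsure whether a particular executable requires a GPU node, a quick check of its library dependencies will show the CUDA linkage. This is a sketch using the standard ldd command and assumes one of the GPU modulefiles is loaded:
# List the shared libraries the GPU build needs; any libcuda/libcudart
# lines confirm the CUDA dependency (they resolve only on a GPU node)
ldd $(which lmp_linux_gpu_mixed) | grep -i cuda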
Running the application
Please do not run LAMMPS on the login node. Jobs should be submitted to the compute nodes via batch. The GPU version must be submitted to a GPU node – it will not run otherwise.
Note also that LAMMPS may produce very large files (particularly the trajectory file ending in .trj and the potentials file ending in .pot). Hence you must run from your scratch directory. This will prevent your job filling up the home area. If you do not need certain files in your results, please turn off writing of the specific files in your control file (e.g., lmp_control) or delete them in your jobscript using:
rm -f *.trj
rm -f *.pot
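For example, a jobscript can change to a directory under your scratch area before running LAMMPS. This is a minimal sketch; the ~/scratch path and the directory name are assumptions, so adjust them to your own scratch location:
# Run from a directory in the scratch area rather than the home area
cd ~/scratch/my_lammps_run      # assumed location of your input files

lmp_linux < infile > outfile

# Remove any large output files you do not need to keep
rm -f *.trj *.pot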
Serial CPU batch job submission (non-IB only)
LAMMPS can be run in parallel but you can run the pre/post processing tools in serial. Make sure you have the appropriate non-IB modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Run from the current directory (input files in here)
#$ -V                # Inherit current environment when job runs

lmp_linux < infile > outfile

# Optional: delete any unwanted output files that may be huge
rm -f *.trj
Submit the jobscript using:
qsub scriptname
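Once submitted, you can monitor the job with the standard SGE command shown below (a brief sketch; job IDs and output filenames will vary):
qstat    # list your queued and running jobs
# When the job finishes, standard output and error appear in files named
# scriptname.oJOBID and scriptname.eJOBID in the submission directory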
Single-node Parallel CPU batch job submission: 2 to 24 cores (non-IB only)
The following jobscript will run LAMMPS (load the correct non-IB modulefile first):
NOTE: If running the version with PLUMED support, please run: lmp_linux_plumed
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 24     # Minimum 2, maximum 24

mpirun -n $NSLOTS lmp_linux < infile > outfile
#
# Use lmp_linux_plumed if using the version with PLUMED added
Submit the jobscript using:
qsub scriptname
Multi-node Parallel CPU batch job submission (InfiniBand only)
These jobs must be 48 cores or more, in multiples of 24, when running in orte-24-ib.pe.
If the lmp_linux executable is run on InfiniBand-connected hardware then do not use Sandybridge nodes. The following jobscript will run LAMMPS (load the correct IB modulefile first):
NOTE: If running the version with PLUMED support, please run: lmp_linux_plumed
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-24-ib.pe 48   # Must be a minimum of 48 AND a multiple of 24.

mpirun -n $NSLOTS lmp_linux < infile > outfile
#
# Use lmp_linux_plumed if using the version with PLUMED added
Submit the jobscript using:
qsub scriptname
Serial GPU batch submission job
The CUDA kernel precision (single, double, mixed) is determined by the name of the executable you use in the jobscript. The executables are named:
- lmp_linux_gpu_single
- lmp_linux_gpu_double
- lmp_linux_gpu_mixed (this is the only version compiled in v11.08.17)
For example, to run the double precision GPU version on one of the CSF gpu nodes (containing a single Nvidia GPU):
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -l nvidia_k20     # Select a GPU node

## The LAMMPS arg '-v g 1' sets a variable named g = 1
## and the input file uses this as the number of GPUs to use.
## See $LAMMPS_HOME/bench/GPU/in.lj.gpu for the input file.

lmp_linux_gpu_double -sf gpu -c off -v g 1 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj.gpu > outfile.gpu
#
# Use lmp_linux_gpu_mixed_plumed if using the version with PLUMED added
Submit the jobscript using:
qsub scriptname
Parallel GPU batch submission job – Not currently available
It is possible to run multiple LAMMPS MPI processes on a multi-core CPU, all of which use a single GPU in the node on which they are running. However, we do not specify a PE in the jobscript; instead we submit a serial job to a CSF GPU node. We will be given exclusive use of the GPU node, so we can safely run multiple MPI (CPU) processes on that node.
As in the serial GPU case above, the CUDA kernel precision (single, double, mixed) is determined by the name of the executable you use in the jobscript (lmp_linux_gpu_single, lmp_linux_gpu_double or lmp_linux_gpu_mixed, the last being the only version compiled in v11.08.17).
For example, to run the mixed precision GPU version on one of the CSF Nvidia nodes (containing a single Nvidia GPU and 12 CPU cores):
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -l nvidia_k20     # Select a GPU node

## NOTE: We do not specify a PE. Hence it appears to be a serial
## job. But we have exclusive access to the GPU node so can
## run more than one MPI process. They will all access the
## same GPU (LAMMPS supports this mode of operation).

## The LAMMPS arg '-v g 1' sets a variable named g = 1
## and the input file uses this as the number of GPUs to use.
## See $LAMMPS_HOME/bench/GPU/in.lj.gpu for the input file.

# 12 MPI processes will run, each using the same GPU
mpirun -n 12 lmp_linux_gpu_mixed -sf gpu -c off -v g 1 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj.gpu > outfile.gpu
#
# Use lmp_linux_gpu_mixed_plumed if using the version with PLUMED added
Submit the jobscript using:
qsub scriptname
Further info
- LAMMPS website
Updates
Jul 2014 – make yes-user-reaxc build of the 30.09.13 (with packages) version.
Apr 2014 – make yes-standard build of 30.09.13 version.
Oct 2013 – GPU build of 30.09.13 version.