LAMMPS
Overview
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a classical molecular dynamics code with potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.
Currently the following versions are installed on CSF4:
- Version 29-Oct-2020 (CPU build, with Kokkos, Voro++, KIM-API, ScaFaCoS, YAFF, PNG/JPG and PLUMED 2.6 packages)
- Version 03-Mar-2020 (CPU build, with Kokkos, Voro++, KIM-API, ScaFaCoS, YAFF, PNG/JPG and PLUMED 2.6 packages)
- Version 03-Mar-2020 with parallel Frenkel analysis (CPU build, with Kokkos, Voro++, KIM-API, ScaFaCoS, YAFF, PNG/JPG and PLUMED 2.6 packages)
- Version 03-Mar-2020 with neural network potential parameterizations (CPU build, with Kokkos, Voro++, KIM-API, ScaFaCoS, YAFF, PNG/JPG, PLUMED 2.6 and NNP packages)
All versions have been compiled with the Intel 2020.02 compiler. Intel MKL provides the FFT implementation and OpenMPI 4.0.4 provides the MPI library.
If you require additional user packages, please contact its-ri-team@manchester.ac.uk.
Various tools have been compiled for pre- and post-processing: binary2txt, chain.x and msi2lmp.
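For example, binary2txt converts LAMMPS binary dump files into text format. A minimal sketch, assuming the modulefile has been loaded and using an illustrative filename:

binary2txt dump.bin    # writes a text version of the dump to dump.bin.txt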
Restrictions on use
There are no restrictions on accessing LAMMPS. It is distributed as an open source code under the terms of the GPL.
Set up procedure
To access the software you must first load one of the following modulefiles. Note: we now recommend loading the modulefile in your batch script (see the examples below):
module load lammps/29oct2020-iomkl-2020.02-python-3.8.2-kokkos
module load lammps/3mar2020-iomkl-2020.02-python-3.8.2-kokkos
module load lammps/3mar2020-iomkl-2020.02-python-3.8.2-kokkos-frenkel
module load lammps/3mar2020-iomkl-2020.02-python-3.8.2-kokkos-n2p2
You should also load:

module load openssl/1.0.2k
module load zlib/1.2.11-gcccore-9.3.0
This will load all necessary modulefiles (e.g., the plumed modulefile).
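To check the set-up, you can list the loaded modulefiles and confirm the LAMMPS executable is on your PATH:

module list    # show all currently loaded modulefiles
which lmp      # print the full path of the lmp executable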
Running the application
Please do not run LAMMPS on the login node. Jobs should be submitted to the compute nodes via the batch system.
Note also that LAMMPS may produce very large files (particularly the trajectory file ending in .trj and the potentials file ending in .pot). Hence you must run from your scratch directory; this will prevent your job from filling up the home area. If you do not need certain files in your results, please turn off writing of the specific files in your control file (e.g., lmp_control) or delete them in your jobscript using:
rm -f *.trj
rm -f *.pot
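Alternatively you can reduce how much LAMMPS writes in the first place. A minimal sketch of controlling trajectory output in a LAMMPS input script (the dump ID, interval and filename below are illustrative, not taken from any installed example):

# Write the trajectory every 10000 timesteps rather than more frequently
dump mytraj all atom 10000 traj.lammpstrj
# ...or omit the dump command entirely if no trajectory file is needed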
Serial CPU batch job submission
LAMMPS itself is normally run in parallel, but the pre/post-processing tools can be run in serial. Create a batch submission script which loads the most appropriate LAMMPS modulefile, for example:
#!/bin/bash --login
## The default is to run with one core but you can also use the following
#SBATCH -p serial    # (or --partition=) Single-core job
#SBATCH -n 1         # (or --ntasks=) Just use one core

module load lammps/3mar2020-iomkl-2020.02-python-3.8.2-kokkos
module load openssl/1.0.2k
module load zlib/1.2.11-gcccore-9.3.0

lmp < infile > outfile

# Optional: delete any unwanted output files that may be huge
rm -f *.trj
Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
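You can then check the status of the job with the standard Slurm command:

squeue -u $USER    # list your queued and running jobs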
Single-node Parallel CPU batch job submission: 2 to 40 cores
The following jobscript will run LAMMPS with 24 cores on a single node:
#!/bin/bash --login
#SBATCH -p multicore    # (or --partition=) Parallel job using cores on a single node
#SBATCH -n 24           # (or --ntasks=) Number of cores (2--40)

module load lammps/3mar2020-iomkl-2020.02-python-3.8.2-kokkos
module load openssl/1.0.2k
module load zlib/1.2.11-gcccore-9.3.0

# mpirun knows how many cores to use
mpirun lmp < infile > outfile
Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
Multi-node Parallel CPU batch job submission
These jobs must be 80 cores or more, in multiples of 40, when running in the multinode partition.
The following jobscript will run LAMMPS with 80 cores across two nodes:
#!/bin/bash --login
#SBATCH -p multinode    # (or --partition=) Parallel job using all cores on nodes
#SBATCH -n 80           # (or --ntasks=) Number of cores (80-200) in multiples of 40

### Alternatively you can say how many nodes to use
# #SBATCH -N 2          # (or --nodes=) Number of 40-core nodes to use

module load lammps/3mar2020-iomkl-2020.02-python-3.8.2-kokkos
module load openssl/1.0.2k
module load zlib/1.2.11-gcccore-9.3.0

# mpirun knows how many cores to use
mpirun lmp < infile > outfile
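If you prefer to request whole nodes explicitly, a hedged equivalent using standard Slurm directives (these two lines replace the #SBATCH -n 80 line above; the per-node count assumes the 40-core nodes described in this section) is:

#SBATCH --nodes=2              # two whole nodes
#SBATCH --ntasks-per-node=40   # all 40 cores on each node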
Submit the jobscript using:

sbatch scriptname

where scriptname is the name of your jobscript.
Further info
- LAMMPS website: https://www.lammps.org
Updates
Oct 2020 – Initial install