The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
DL_POLY_4 4.08
Overview
DL_POLY refers to a suite of general-purpose molecular dynamics simulation packages, including:
- DL_POLY_4 – (documented on this page) version 4.08 of the STFC code, released under an STFC licence; it requires MPI2 to allow parallel I/O. Its general parallel approach is domain decomposition. This version is available on the CSF for academic use only. Please email its-ri-team@manchester.ac.uk for access.
- DL_POLY_4 + PLUMED – (documented on this page) as above, but with PLUMED 2.4.0 support compiled in; a minimal sketch of a PLUMED input file is given after this list.
- DL_POLY_CLASSIC – an Open Source branch of DL_POLY_2 with a BSD (i.e. less restrictive) licence. This is installed on the CSF and available to all users. It uses a replicated-data approach to parallelism.
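As a rough illustration of the PLUMED side of such a run, the following creates a minimal PLUMED input file. The file name plumed.dat, the atom numbers and the output settings are all illustrative; how DL_POLY is told to read this file is controlled from its own input, so consult the PLUMED 2.4 and DL_POLY documentation for the exact hook-up.

cat > plumed.dat <<'EOF'
# Illustrative collective variable: distance between atoms 1 and 2
d1: DISTANCE ATOMS=1,2
# Write d1 to the COLVAR file every 100 steps
PRINT ARG=d1 FILE=COLVAR STRIDE=100
EOF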
If you are unsure which version to use, please consult the STFC FAQ for advice.
Please see Further info below for more details.
The remainder of this page documents use of DL_POLY_4.
Restrictions on use
Whilst the software is free for academic use, there are limitations within the licence agreement which must be strictly adhered to by users. All users who wish to use the software must request access to the dlpoly4
unix group. A copy of the full licence is available on the CSF in
/opt/gridware/apps/intel-15.0/DLPOLY/4.08/DL_POLY_4_licence_agreement.txt
Important points to note are:
- The software must not be used for industrially-funded work. See clauses 2.1.3 and 2.2 of the licence.
- The software is only available to staff and students of the University of Manchester. Users are reminded that they must not share their password with anyone, or allow anyone else to use their account.
- The software must be cited in any published work. See clause 4.2 for the required text.
There is no access to the source code on the CSF.
Running the application
Please do not run DL_POLY on the login node. Jobs should be submitted to the compute nodes via the batch system.
The DL_POLY_4 executable is named DLPOLY.Z
Serial batch job submission
It is recommended that you use DL_POLY_CLASSIC for serial jobs.
Parallel batch job submission
Load one of the following modulefiles. For MPI jobs on a single node:
# DL_POLY
module load apps/intel-15.0/dl_poly/4.08

# DL_POLY + PLUMED
module load apps/intel-15.0/dl_poly/4.08-plumed
For MPI jobs which use several nodes, i.e. more than 24 cores, use one of:
# DL_POLY
module load apps/intel-15.0/dl_poly/4.08-ib

# DL_POLY + PLUMED
module load apps/intel-15.0/dl_poly/4.08-plumed-ib
The modulefiles also load the correct MPI library, so you do not need to load one yourself.
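To check that a modulefile has set up your environment correctly, you can confirm that the executable is on your PATH; the exact path printed will depend on which version you loaded:

module load apps/intel-15.0/dl_poly/4.08
which DLPOLY.Z       # should print a path under /opt/gridware/apps/...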
Then create a jobscript in the directory containing your input files (CONTROL, CONFIG etc).
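For reference, a DL_POLY_4 run typically needs at least the CONTROL, CONFIG and FIELD files in the working directory; the directory name below is purely illustrative:

cd ~/scratch/my_dlpoly_run      # hypothetical run directory
ls
# CONFIG  CONTROL  FIELD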
Example jobscript – Single node jobs – maximum 24 cores
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12     # 12 cores in this example (single-node jobs may use up to 24).
                     # (Example uses smp.pe, so load a non-IB mpi modulefile.)

# NSLOTS is automatically set to the number of cores requested on the PE line
mpirun -n $NSLOTS DLPOLY.Z
Submit the job using
qsub jobscript
where jobscript is the name of your jobscript.
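You can then monitor the job with the standard batch commands, for example (the job id is the number reported by qsub):

qstat                # list your queued and running jobs
qstat -j <jobid>     # show more detail about a specific job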
Example jobscript – Multi node jobs – 48 cores or more in multiples of 24
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-24-ib.pe 48

# NSLOTS is automatically set to the number of cores requested on the PE line
mpirun -n $NSLOTS DLPOLY.Z
Submit the job using
qsub jobscript
where jobscript is the name of your jobscript.
Multiple Similar Jobs
If you wish to run many DL_POLY jobs, e.g. processing lots of different simulations with a different input file in each of several directories, then please use Job Arrays. That page has examples of how to run the same application from different directories, and a minimal sketch is given below. Job Arrays place less strain on the batch system than submitting lots of individual jobs.
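The sketch below assumes ten run directories named job.1 to job.10, each containing its own input files; the task range and directory names are hypothetical, so adapt them to your own layout:

#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12             # load the appropriate modulefile before submitting
#$ -t 1-10                   # tasks 1..10, one per run directory

# Each task changes into its own directory before running DL_POLY
cd job.$SGE_TASK_ID
mpirun -n $NSLOTS DLPOLY.Z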
Further info
Updates
- Feb 2017: MPI version built.
- May 2018: MPI + PLUMED version built.