The CSF2 has been replaced by the CSF3 – please use that system. This documentation may be out of date; please read the CSF3 documentation instead.
DL_POLY 4.03
Overview
DL_POLY refers to a suite of general purpose molecular dynamics simulation packages, including
- DL_POLY_4 – (documented on this page) version 4 of the STFC code, released under an STFC licence, which requires MPI2 to allow parallel I/O. Its general parallel approach is domain decomposition. This version is available on the CSF for academic use only. Please email its-ri-team@manchester.ac.uk for access.
- DL_POLY_CLASSIC – an open-source branch of DL_POLY_2 with a BSD (i.e. less restrictive) licence. This is installed on the CSF and available to all users. It uses a replicated-data approach to parallelism.
If you are unsure which version to use, please consult the STFC FAQ for advice.
Restrictions on use
Whilst the software is free for academic use, there are limitations within the licence agreement which must be strictly adhered to. All users who wish to use the software must request access to the dlpoly4
unix group. A copy of the full licence is available on the CSF in
/opt/gridware/apps/intel-12.0/DLPOLY/4.03/DL_POLY_4_licence_agreement.txt
Important points to note are:
- No industrially funded work may be undertaken using the software. See clauses 2.1.3 and 2.2 of the licence.
- The software is only available to staff and students of the University of Manchester. Users are reminded that they must not share their password with anyone, or allow anyone else to utilise their account.
- Citation of the software must appear in any published work. See clause 4.2 for the required text.
There is no access to the source code on the CSF.
Set up procedure
Parallel CPU
The main use of DL_POLY_4 is to run it in parallel. If you wish to use DL_POLY_4 across multiple nodes connected with InfiniBand (fast) networking, load:
module load mpi/intel-12.0/openmpi/1.6-ib
module load apps/intel-12.0/dl_poly/4.03/par-intel-medOpt
(in this case your choice of parallel environment (PE) in the jobscript should be an ib PE).
If you wish to use DL_POLY_4 in parallel on a single (smp) node, or across multiple nodes connected with Ethernet (slower) networking, load:
module load mpi/intel-12.0/openmpi/1.6
module load apps/intel-12.0/dl_poly/4.03/par-intel-medOpt
(in this case your choice of parallel environment (PE) in the jobscript should be a non-ib or smp PE).
If you try to load the dl_poly par-intel-medOpt modulefile without an MPI modulefile, an error will be reported. You must load an MPI modulefile first.
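As an optional sanity check, you can confirm that the modulefiles have loaded and that the executable is on your PATH. The commands below are standard environment-modules and shell commands, nothing DL_POLY-specific:
module list      # should list the openmpi and dl_poly modulefiles
which DLPOLY.Z   # should report the path of the DL_POLY_4 executable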
Running the application
Please do not run DL_POLY on the login node. Jobs should be submitted to the compute nodes via batch.
The DL_POLY_4 (CPU) executable is named DLPOLY.Z
Serial batch job submission
It is recommended that you use DL_POLY_CLASSIC for serial jobs. The serial version of DL_POLY_4 is built for testing rather than efficiency and so may print warnings.
Ensure the appropriate modulefile has been loaded (see above). Then create a jobscript in the directory containing your input files (CONTROL, CONFIG etc). For example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
DLPOLY.Z
Submit the job using
qsub jobscript
where jobscript is the name of your jobscript.
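Once the job has been submitted you can monitor it with the standard SGE commands; these are generic batch-system commands rather than anything DL_POLY-specific:
qstat              # list your queued and running jobs
qstat -j <jobid>   # show more detail for a specific job (replace <jobid> with the number printed by qsub)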
Parallel batch job submission
Ensure the appropriate modulefile has been loaded (see above). Then create a jobscript in the directory containing your input files (CONTROL, CONFIG etc). For example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12     # Max 12 cores in this Parallel Environment.
                     # (example uses smp.pe so load the non-IB mpi modulefile)
# NSLOTS is automatically set to the number of cores requested on the PE line
mpirun -n $NSLOTS DLPOLY.Z
Submit the job using
qsub jobscript
where jobscript is the name of your jobscript.
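If you have loaded the InfiniBand MPI modulefile and want to run across multiple IB-connected nodes, the jobscript takes the same form but with an ib parallel environment. The PE name and core count below are illustrative only; check which ib PEs are available on the CSF and substitute accordingly:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe orte-ib.pe 24   # illustrative ib PE name and core count - not necessarily a real PE on the CSF
# NSLOTS is automatically set to the number of cores requested on the PE line
mpirun -n $NSLOTS DLPOLY.Z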
Parallel GPU batch job submission
Currently DL_POLY_4 with CUDA support can be run on the Nvidia non-InfiniBand GPU nodes. These contain one Nvidia Fermi GPU and 12 CPU cores.
The name of the CUDA-enabled executable is DLPOLY.Z.cu
The preferred way of running the CUDA version is to run one MPI process for each GPU being used. In our case we have one GPU in the node. The CPU parts of the code will use all 12 CPU cores on the node hosting the GPU card.
Ensure the appropriate modulefiles have been loaded (see above). Then create a jobscript in the directory containing your input files. For example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -l nvidia     # Ensure we run on a GPU node
# We run one MPI process (one per GPU)
mpirun -n 1 DLPOLY.Z.cu
Submit the job using
qsub jobscript
where jobscript is the name of your jobscript.
Further info
Multiple Similar Jobs
If you wish to run many similar DL_POLY jobs, for example processing lots of different simulations with a different set of input files in each of several directories, please use Job Arrays. That page has examples of how to run the same application from different directories; a minimal sketch is also given below. Job arrays place less strain on the batch system than submitting lots of individual jobs.
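As a minimal sketch (the run_1, run_2, ... directory naming scheme is an assumption for illustration; adapt it to your own layout), an SGE job array jobscript could look like this, using the standard $SGE_TASK_ID variable to select a directory for each task:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 12     # cores per task (same PE considerations as a single parallel job)
#$ -t 1-10           # run 10 tasks, numbered 1 to 10
# Each task runs in its own directory (run_1, run_2, ...)
cd run_${SGE_TASK_ID}
mpirun -n $NSLOTS DLPOLY.Z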