Amber (pmemd.cuda)
Amber (Assisted Model Building with Energy Refinement) is a general-purpose molecular mechanics/dynamics suite which uses analytic potential energy functions, derived from experimental and ab initio data, to refine macromolecular conformations.
It contains pmemd.cuda, which provides GPU-accelerated simulation, specifically on Nvidia GPUs. You are strongly advised to read the pmemd.cuda page for information on supported features and the number of atoms that can be simulated in the CUDA version.
Zrek provides:
- version Amber 16 (bugfix 8) + AmberTools 17 (bugfix 12) compiled with GCC 4.4.6 + MPICH2 3.1 + CUDA 8.0.44
- version Amber 16 (bugfix 8) + AmberTools 17 (bugfix 12) compiled with Intel 15.0.3 + OpenMPI 1.6 + CUDA 8.0.44
- version Amber 14 (bugfix 8) + AmberTools 14 (bugfix 22) compiled with GCC 4.4.6 + MPICH2 3.1 + CUDA 6.5.14
- version Amber 12 (bugfix 21) + AmberTools 13 (bugfix 23) compiled with GCC 4.4.6 + MPICH2 3.1 + CUDA 5.5.22
This page documents usage of the pmemd.cuda component of Amber. For more information on the other (CPU) codes, please refer to the CSF Amber Page.
Restrictions on use
This software is licensed for University of Manchester users. All users should familiarise themselves with the appropriate Amber licenses and guidelines before using this software:
- License Agreement V16
- University’s AMBER 16 Usage Guidelines
- License Agreement V14
- University’s AMBER 14 Usage Guidelines
- License Agreement V12
- University’s AMBER 12 Usage Guidelines
You must confirm to its-ri-team@manchester.ac.uk that you have read the license and University guidelines and that your usage will comply with their terms before you can be added to the amber unix group, which restricts access to the software.
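Once access has been granted, you can check that the group is active on your account using the standard Linux groups command (a quick check, run on the zrek login node):

groups

The output should include amber once you have been added; you may need to log out and back in for new group membership to take effect.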
There is no access to the source code – groups that require source access should get in touch with us to see if this can be arranged.
Supported Backend Nodes
This application is available on the Nvidia GPU nodes: besso and kaiju[1-5]. Please see the K40 node instructions and the K20 node instructions for how to access the nodes.
Set up procedure
pmemd.cuda can be run interactively on a backend node or in batch using a jobscript submitted from the login node (similar to the CSF). Where you run from affects when and where you should load the modulefile:
- If running interactively on a backend node, load the modulefile after logging in to that backend node.
- If running in batch (submitting a jobscript from the login node), load the modulefile on the login node before submitting the job.
In both of the above cases, load one of the following modulefiles:
# Version 16
module load apps/gcc/amber/16-cuda-mpi-at17
module load apps/intel-15.0/amber/16-cuda-mpi-at17

# Version 14
module load apps/gcc/amber/14-cuda-mpi-at14

# Version 12
module load apps/gcc/amber/12-cuda-mpi-at13
This will load the necessary CUDA and MPICH2 modulefiles for you.
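As a quick sanity check, you can confirm that the modulefile has set up your environment, for example (assuming the modulefile sets $AMBERHOME, as the Further info section below suggests):

which pmemd.cuda     # Should report a path inside the Amber installation
echo $AMBERHOME      # Root of the Amber installation (manuals in $AMBERHOME/doc)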
Running the application
The following instructions describe interactive use on a backend node and batch jobs from the login node.
Interactive use on a Backend Node
Once logged in to a backend K20 node or K40 node (using qrsh) and having loaded the modulefile there, run:
pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt -x mdcrd
This will use the GPU assigned to you.
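If you want to confirm which GPU your session has been given, nvidia-smi (the standard Nvidia monitoring tool) can be run on the backend node; whether the scheduler also sets CUDA_VISIBLE_DEVICES is system-dependent, so treat that second check as an assumption:

nvidia-smi                   # List the GPUs on the node and their current usage
echo $CUDA_VISIBLE_DEVICES   # GPU id(s) assigned to your session, if set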
If you have reserved both GPUs you must use the MPI executable (pmemd.cuda.MPI) to run two instances, one on each GPU, as follows:
mpirun -np 2 pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop \
    -c inpcrd -r restrt -x mdcrd
The GPU precision model used by the pmemd.cuda executables is SPFP – mixed single precision and 64-bit fixed-point precision.
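The commands above assume the usual Amber input files (mdin, prmtop, inpcrd) are present in the current directory. For illustration only, a minimal mdin for a short production run might look like the following; the values are placeholders rather than recommendations, so consult the Amber manual before using them:

short MD run
 &cntrl
  imin=0,                  ! molecular dynamics (no minimisation)
  nstlim=10000, dt=0.002,  ! 10000 steps of 2 fs
  ntc=2, ntf=2,            ! SHAKE constraints on bonds involving hydrogen
  ntb=1, cut=8.0,          ! constant-volume periodic box, 8 Angstrom cutoff
  ntpr=500, ntwx=500,      ! energy and coordinate output every 500 steps
 /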
Single GPU jobs running in Batch
Do not log in to a backend node. The job must be submitted from the zrek login node. Ensure you have loaded the correct modulefile on the login node and then create a jobscript similar to the following:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd        # Run job from directory where submitted
#$ -V          # Inherit environment (modulefile) settings
#$ -l k20      # Select a single GPU (Nvidia K20) node

pmemd.cuda -O -i mdin -o out.$JOB_ID
Submit your jobscript from the zrek login node using
qsub jobscript
where jobscript is the name of your jobscript.
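Standard SGE commands (the scheduler implied by qsub and qrsh) can be used to monitor the job from the login node, for example:

qstat              # List your queued and running jobs
qstat -j jobid     # Detailed information, where jobid is the number reported by qsub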
Dual GPU jobs running in Batch
Do not log in to a backend node. The job must be submitted from the zrek login node. Ensure you have loaded the correct modulefile on the login node and then create a jobscript similar to the following:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd        # Run job from directory where submitted
#$ -V          # Inherit environment (modulefile) settings
#$ -l k20duo   # Select a dual GPU (Nvidia K20) node

mpirun -np 2 pmemd.cuda.MPI -O -i mdin -o out.$JOB_ID
Submit your jobscript from the zrek login node using
qsub jobscript
where jobscript is the name of your jobscript.
Please ensure that when running parallel Amber you have selected the correct executable – normally one with .MPI at the end of its name. Running a serial executable across multiple cores is inefficient and may give inaccurate results. If you are in doubt about whether the part of Amber you wish to use can be run in parallel, please check the Amber manual.
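If you want to see which parallel executables are provided by the version you have loaded, listing the .MPI binaries in the installation is a simple check (this relies only on $AMBERHOME, which the Further info section below indicates is set):

ls $AMBERHOME/bin/*.MPI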
Further info
- Manuals are available on the system ($AMBERHOME/doc)
- The Amber web site also has manuals, tutorials, an FAQ and mailing list: http://ambermd.org
Updates
None