AMBER

Overview

Amber (Assisted Model Building with Energy Refinement) is a general-purpose molecular mechanics/dynamics suite that uses analytic potential energy functions, derived from experimental and ab initio data, to refine macromolecular conformations.

A number of versions are available (see below).

Restrictions on use

This software is licensed for University of Manchester users (staff and currently enrolled students). All users should familiarise themselves with the Terms and Conditions of the license and the University’s AMBER Usage Guidelines, which are linked below.

To get access to Amber on the CSF, you must confirm to its-ri-team@manchester.ac.uk that you have read these documents and that your usage will comply with the terms.

Once we have your confirmation you will be added to the amber unix group which controls access to the software.

There is no access to the source code – groups that require source access should get in touch with us to see if this can be arranged.

Note: AMBER22 – we are not currently licensed for this version, but we hope to be in a position to install it in Oct 2023.

Set up procedure

To access the software you must load ONE of the modulefiles:

For AMBER 20 with AMBERTOOLS 21:

module load apps/intel-19.1/amber/20-bf12-at21-bf12

For AMBER 18 with AMBERTOOLS 19:

module load apps/intel-17.0/amber/18-at19-may2019

For AMBER 18 with AMBERTOOLS 18:

module load apps/intel-17.0/amber/18

For AMBER 16 with AMBERTOOLS 17:

module load apps/intel-17.0/amber/16

For AMBER 14 with AMBERTOOLS 14:

module load apps/intel-17.0/amber/14

The modulefile will load everything needed for all types of Amber job (e.g. MPI for multi-core, CUDA for GPU). You do not need to load anything else.

We recommend that you load the modulefile in your jobscript rather than on the login node before you submit. This is covered in the examples below. All of the examples use version 16; replace this with another from the list above if you wish to use a more recent version (or 14 if using the legacy version).

If you want more information about how we installed AMBER and what was included in each version please see the ‘Further Information’ section at the bottom of this page.
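You can quickly check that the modulefile has set up your environment as expected by inspecting $AMBERHOME and confirming the executables are on your PATH. A minimal sketch, using the version 16 modulefile:

module load apps/intel-17.0/amber/16

# The modulefile sets $AMBERHOME to the installation directory
echo $AMBERHOME

# The Amber executables should now be on your PATH
which sander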

Running Serial Amber

An example batch submission script:

#!/bin/bash --login
#$ -cwd

# Load the software
module load apps/intel-17.0/amber/16

# sander flags: -i control input, -o output, -c input coordinates,
# -r restart output, -p topology, -x trajectory, -inf info file
sander -O \
       -i file.in \
       -o file.out \
       -c file.restart \
       -r file.rst \
       -p file.prmtop \
       -x file.cmd \
       -inf file.inf

Submit with the command: qsub scriptname
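The file given with -i is a standard sander control (mdin) file. As a purely illustrative sketch (the parameter values below are assumptions, not recommendations for your system), a short constant-volume MD input might look like:

&cntrl
  imin=0,                            ! molecular dynamics (not minimisation)
  nstlim=10000, dt=0.002,            ! 10000 steps of 2 fs
  ntc=2, ntf=2,                      ! SHAKE on bonds involving hydrogen
  ntt=3, gamma_ln=1.0, temp0=300.0,  ! Langevin thermostat at 300 K
  ntb=1, cut=8.0,                    ! constant volume, 8 Angstrom cutoff
  ntpr=500, ntwx=500,                ! energy/trajectory output frequency
/

See the Amber manual for a full description of the &cntrl options.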

Running Parallel Amber

Important notes for all parallel jobs

  • Please ensure that when running parallel Amber you have selected the correct executable – normally with .MPI at the end of the name. Running a serial executable on multiple cores is inefficient and may give inaccurate results. If you are in doubt about whether the part of Amber you wish to use can be run in parallel, please check the Amber manual.
  • Ensure you use $NSLOTS as shown in the examples – this ensures that the core count you requested is passed to the application. Do not hard-code the number of cores on the application line, as this can lead to errors and overloaded nodes.
  • Where possible, run scaling tests of your job to see how many cores suit it best (see the sketch after this list).
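A minimal sketch of such a scaling test (the jobscript name and core counts here are illustrative):

# Submit the same single-node jobscript at several core counts,
# then compare the wall-clock time reported by each run
for n in 2 4 8 12 16 24 32; do
    qsub -pe smp.pe $n -N amber_scale_$n jobscript
done

Command-line options passed to qsub take precedence over the #$ directives in the jobscript.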

Single node – 2 to 32 cores using MPI

An example batch submission script:

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 12

# Load the software
module load apps/intel-17.0/amber/16

mpirun -n $NSLOTS sander.MPI -O \
       -i file.in \
       -o file.out \
       -c file.restart \
       -r file.rst \
       -p file.prmtop \
       -x file.cmd \
       -inf file.inf

Submit with the command: qsub scriptname

Multi-node – 48 or more cores in multiples of 24

#!/bin/bash --login
#$ -cwd
#$ -pe mpi-24-ib.pe 48

# Load the software
module load apps/intel-17.0/amber/16

mpirun -n $NSLOTS sander.MPI -O \
       -i file.in \
       -o file.out \
       -c file.restart \
       -r file.rst \
       -p file.prmtop \
       -x file.cmd \
       -inf file.inf

Submit with the command: qsub scriptname

Please note – there are no Broadwell or Skylake nodes with Infiniband connections. Jobs submitted to mpi-24-ib.pe will by default run only on Haswell nodes.
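For example, a 72-core job would span three 24-core nodes:

#$ -pe mpi-24-ib.pe 72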

OpenMP components

The following executables have been compiled with OpenMP (Multi-threaded) capability:

cpptraj.OMP
   ## The serial executable is called cpptraj and should be run as per the serial example above.
   ## The MPI executable is called cpptraj.MPI and should be run as per the MPI examples above.
nab
   ## There is no serial version of this.
saxs_rism
   ## No serial version.
saxs_md
   ## No serial version.

By default (due to a modulefile setting) they will only use one core. To use multiple cores you must request smp.pe and set the OMP_NUM_THREADS variable to $NSLOTS in your jobscript. For example:

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 12

# Load the software
module load apps/intel-17.0/amber/16

# Set the threads
export OMP_NUM_THREADS=$NSLOTS

cpptraj.OMP -p file.prmtop

Other components may be OpenMP capable, please consult the AMBER manual for further information.
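One simple way to see which executables in a given installation have an OpenMP build is to list the .OMP binaries (a sketch, assuming the modulefile sets $AMBERHOME as described above):

module load apps/intel-17.0/amber/16

# List the OpenMP-capable executables in this installation
ls $AMBERHOME/bin/*.OMP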

pmemd.cuda serial GPU example

Before you can run Amber on GPUs you must request to be added to the relevant group. There is limited free-at-the-point-of-use GPU access.

An example batch submission script:

#!/bin/bash --login
#$ -cwd                   
#$ -l nvidia_v100=1

# Load the software
module load apps/intel-17.0/amber/16

pmemd.cuda -O -i mdin -o out.$JOB_ID

Submit with the command: qsub scriptname

For technical reasons it is not possible to use more than one GPU for AMBER on the CSF.
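If you want a record of which GPU your job used, one optional addition to the jobscript (assuming nvidia-smi is available on the GPU nodes, which is typical) is:

# Log the GPU allocated to this job before running pmemd.cuda
nvidia-smi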

Further Information

  • Manuals are available on the system ($AMBERHOME/doc)
  • The Amber web site also has manuals, tutorials, an FAQ and mailing list: http://ambermd.org

Install Details

Below we provide a few details on how we installed the different versions.

Version 20 with AmberTools 21

  • Installed with Intel Compilers 19.1.2, Open MPI 4.1.1 and CUDA 11.2.0
  • The compile flag -axSSE4.2,AVX,CORE-AVX2,CORE-AVX512 was used. This builds multiple auto-dispatch ‘code paths’ in the executable for use on different Intel architectures. This is deemed beneficial given there are several different Intel CPUs available in the CSF.
  • AmberTools 21 was patched to bugfix level 12 and Amber 20 to bugfix level 12.
  • cpptraj, nab and saxs have been compiled with OpenMP capability.
  • DFTB files – All are available in $AMBERHOME/dat/slko
  • This version has its own installation of Anaconda Python (Miniconda); it uses Python 3 and is configured in the modulefile.
  • CUDA support has been included. Please note, multi-GPU jobs are not possible.
  • Note: Updates are made available for Amber and AmberTools quite frequently. To avoid unexpected changes for users we do not apply these updates to the current installation, and to avoid frequent reinstalls we do not recompile every time an update is available; we will recompile approximately every 6 months if required.

Version 18 with AmberTools 19

  • Installed with Intel Compilers 17.0.7, Open MPI 3.1.3 and CUDA 9.2.148
  • The compile flag -axSSE4.2,AVX,CORE-AVX2,CORE-AVX512 was used. This builds multiple auto-dispatch ‘code paths’ in the executable for use on different Intel architectures. This is deemed beneficial given there are several different Intel CPUs available in the CSF.
  • AmberTools 19 was patched to bugfix level 2 and Amber 18 to bugfix level 14.
  • cpptraj, nab and saxs have been compiled with OpenMP capability.
  • DFTB files – All are available in $AMBERHOME/dat/slko
  • This version has its own installation of Anaconda Python (Miniconda) and this is configured in the modulefile.
  • CUDA support has been included. The -volta flag was not used; -cuda is now Volta-compatible according to the information in Amber 18 bugfix 12 of 21st Jan 2019. The executable will detect the GPU type at runtime. Please note, multi-GPU jobs are not possible.
  • Note: Updates are made available for Amber and AmberTools quite frequently. To avoid unexpected changes for users we do not apply these updates to the current installation, and to avoid frequent reinstalls we do not recompile every time an update is available; we will recompile approximately every 6 months if required.

Version 18 with AmberTools 18

  • Installed with Intel Compilers 17.0.7, Open MPI 3.1.3 and CUDA 9.2.148
  • The compile flag -axSSE4.2,AVX,CORE-AVX2,CORE-AVX512 was used. This builds multiple auto-dispatch ‘code paths’ in the executable for use on different Intel architectures. This is deemed beneficial given there are several different Intel CPUs available in the CSF.
  • AmberTools 18 was patched to bugfix level 13 and Amber 18 to bugfix level 13.
  • cpptraj, nab and saxs have been compiled with OpenMP capability.
  • DFTB files – All are available in $AMBERHOME/dat/slko
  • This version has its own installation of Anaconda Python (Miniconda) and this is configured in the modulefile.
  • CUDA support has been included. The -volta flag was not used; -cuda is sufficient according to the information in Amber 18 bugfix 12 of 21st Jan 2019. The executable will detect the GPU type at runtime. Please note, multi-GPU jobs are not possible.

Version 16 with AmberTools 17

  • Installed with Intel Compilers 17.0.7, Open MPI 3.1.1 and CUDA 9.0.176
  • The compile flag -axSSE4.2,AVX,CORE-AVX2,CORE-AVX512 was used. This builds multiple auto-dispatch ‘code paths’ in the executable for use on different Intel architectures. This is deemed beneficial given there are several different Intel CPUs available in the CSF.
  • AmberTools 17 was patched to bugfix level 10 and Amber 16 to bugfix level 15.
  • cpptraj, nab and saxs have been compiled with OpenMP capability.
  • DFTB files – All are available in $AMBERHOME/dat/slko
  • This version has its own installation of Anaconda Python (Miniconda) and this is configured in the modulefile.
  • CUDA support has been included. Please note, multi-GPU jobs are not possible.

Version 14 with AmberTools 14

  • This version has been installed by special request for a specific legacy purpose only. We recommend that you use a newer version.
  • Installed with Intel Compilers 17.0.7 and Open MPI 4.0.1
  • The compile flag -axSSE4.2,AVX,CORE-AVX2,CORE-AVX512 was used. This builds multiple auto-dispatch ‘code paths’ in the executable for use on different Intel architectures. This is deemed beneficial given there are several different Intel CPUs available in the CSF.
  • AmberTools 14 was patched to bugfix level 27 and Amber 14 to bugfix level 13.
  • cpptraj has been compiled with OpenMP capability.
  • DFTB files – All are available in $AMBERHOME/dat/slko
  • CUDA support has not been included (the CUDA library/driver requirements are too old for the CSF3).
