Amber
Overview
Amber (Assisted Model Building with Energy Refinement) is a general-purpose molecular mechanics/dynamics suite that uses analytic potential energy functions, derived from experimental and ab initio data, to refine macromolecular conformations.
A number of versions are available (see below).
Restrictions on use
This software is licensed for University of Manchester users (staff and currently enrolled students); access by anyone else is prohibited. All users should familiarise themselves with the Terms and Conditions of the license and the University's AMBER Usage Guidelines, which are linked below.
To get access to Amber on the CSF, you must confirm to its-ri-team@manchester.ac.uk that you have read these documents and that your usage will comply with the terms. You will then be added to the amber unix group, which restricts access to the software.
There is no access to the source code. Groups that require source access should get in touch with us to see if this can be arranged.
Note: AMBER22 – we are not currently licensed for this version, but we hope to be in a position to install it in Oct 2023.
Set up procedure
To access the software you must load ONE of the modulefiles:
For AMBER 20 (bugfix level 12) with AMBERTOOLS 21 (bugfix level 12):
module load amber/20.12-iomkl-2020.02-ambertools-21.12
The modulefile will load everything needed for all types of Amber job (e.g. MPI for multi-core). You do not need to load anything else.
We recommend that you load the modulefile in your jobscript rather than on the login node before you submit. This is covered in the examples below.
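A quick way to confirm the modulefile has set up your environment before relying on it in a job (a minimal sketch; $AMBERHOME is set by the modulefile, as noted in the Further Information section):

module purge
module load amber/20.12-iomkl-2020.02-ambertools-21.12
which sander        # should report a path inside the Amber installation
echo $AMBERHOME     # the manuals live in $AMBERHOME/doc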
If you want more information about how we installed AMBER and what was included in each version please see the ‘Further Information’ section at the bottom of this page.
Running Serial Amber
An example batch submission script:
#!/bin/bash --login
#SBATCH -p serial      # Optional line, this is the default partition
#SBATCH -n 1

# Load the software, clearing any modules inherited from the login node first
module purge
module load amber/20.12-iomkl-2020.02-ambertools-21.12

sander -O \
  -i file.in -o file.out -c file.restart -r file.rst \
  -p file.prmtop -x file.cmd -inf file.inf
Submit with the command: sbatch scriptname
where scriptname is the name of your jobscript.
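Once submitted, you can follow the job with standard Slurm commands (a quick sketch; sbatch prints the job ID, and file.out is the output file named in the example above):

squeue -u $USER     # list your pending and running jobs
tail -f file.out    # follow sander's output once the job is running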
Running Parallel Amber
Important notes for all parallel jobs
- Please ensure that when running parallel Amber you have selected the correct executable, normally with .MPI at the end of the name. Running a serial executable with multiple cores is inefficient and may give inaccurate results. If you are in doubt about whether the part of Amber you wish to use can be run in parallel, please check the Amber manual.
- Ensure you use $SLURM_NTASKS as shown in the examples. This ensures that the core request you have made is known to the application. Do not set the cores yourself on the application line as it can lead to errors and nodes being overloaded.
- Where possible you should do scaling tests of your job to see how many cores suit it best; a simple approach is sketched below.
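One simple way to run a scaling test is to submit the same jobscript at several core counts; sbatch options given on the command line override the matching #SBATCH lines, so the script itself does not need editing. A minimal sketch, where amber.sbatch is a hypothetical jobscript like the single-node example below:

for n in 2 4 8 16 32 40; do
    sbatch -n $n amber.sbatch
done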
Single node – 2 to 40 cores using MPI
An example batch submission script:
#!/bin/bash --login
#SBATCH -p multicore   # Single-node parallel job
#SBATCH -n 40          # Number of cores (can be 2--40)

# Load the software
module purge
module load amber/20.12-iomkl-2020.02-ambertools-21.12

mpirun sander.MPI -O \
  -i file.in -o file.out -c file.restart -r file.rst \
  -p file.prmtop -x file.cmd -inf file.inf
Submit with the command: sbatch scriptname
where scriptname is the name of your jobscript.
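After the job finishes, Slurm's accounting tool can confirm how many cores were allocated and how long the run took, which is useful when comparing scaling runs (replace 12345 with your job ID):

sacct -j 12345 --format=JobID,AllocCPUS,Elapsed,State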
Multi-node – 80 or more cores in multiples of 40
#!/bin/bash --login
#SBATCH -p multinode   # Multi-node parallel job
#SBATCH -n 80          # Number of cores (80 or more in multiples of 40)

# Load the software
module purge
module load amber/20.12-iomkl-2020.02-ambertools-21.12

mpirun sander.MPI -O \
  -i file.in -o file.out -c file.restart -r file.rst \
  -p file.prmtop -x file.cmd -inf file.inf
Submit with the command: sbatch scriptname
where scriptname is the name of your jobscript.
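Because the multinode partition expects cores in multiples of 40, you may find it helpful to add a small guard near the top of the jobscript; this hypothetical snippet simply aborts the job with a clear message if the request is wrong:

# Abort early if the core request is not a multiple of 40
if (( SLURM_NTASKS % 40 != 0 )); then
    echo "Error: request cores in multiples of 40 (got $SLURM_NTASKS)" >&2
    exit 1
fi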
OpenMP components
The following executables have been compiled with OpenMP (Multi-threaded) capability:
cpptraj.OMP    # The serial executable is called cpptraj and should be run as per the
               # serial example above. The MPI executable is called cpptraj.MPI and
               # should be run as per the MPI examples above.
nab            # Serial-only
saxs_rism.OMP  # The serial executable is called saxs_rism
saxs_md.OMP    # The serial executable is called saxs_md
By default (due to a modulefile setting) they will only use one core. To use multiple cores you must request the multicore partition and set the OMP_NUM_THREADS variable to $SLURM_NTASKS in your jobscript. For example:
#!/bin/bash --login
#SBATCH -p multicore   # Single-node parallel job
#SBATCH -n 40          # Number of cores (can be 2--40)

# Load the software
module purge
module load amber/20.12-iomkl-2020.02-ambertools-21.12

# Set the number of OpenMP threads to the number of cores requested above
export OMP_NUM_THREADS=$SLURM_NTASKS

cpptraj.OMP -p file.prmtop
Other components may be OpenMP capable, please consult the AMBER manual for further information.
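If you are unsure whether a threaded component actually benefits from more cores, a crude check is to time it at increasing thread counts within a single multicore job (a minimal sketch; traj.in is a hypothetical cpptraj input script):

for t in 1 2 4 8; do
    export OMP_NUM_THREADS=$t
    echo "Threads: $t"
    time cpptraj.OMP -p file.prmtop -i traj.in
done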
Further Information
- Manuals are available on the system in $AMBERHOME/doc
- The Amber web site also has manuals, tutorials, an FAQ and mailing list: http://ambermd.org
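For example, to list the manuals installed on the system:

module load amber/20.12-iomkl-2020.02-ambertools-21.12
ls $AMBERHOME/doc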
Install Details
Below we provide a few details on how we installed the different versions.
Version 20 with AmberTools 21
- Installed with Intel Compilers 19.1.2 and Open MPI 4.0.4
- Both AmberTools 21 and Amber 20 were patched to bugfix level 12.
- DFTB files – all are available in $AMBERHOME/dat/slko
- This version has its own installation of Anaconda Python (Miniconda); it uses Python 3 and is configured in the modulefile (a quick check is sketched after this list).
- Note: Updates are made available for Amber and AmberTools quite frequently. To avoid unexpected changes for users, we do not add these updates to the current installation. To avoid having to do a new install frequently, we will not recompile every time an update is available; we will recompile approximately every six months if required.
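To confirm that the bundled Python is the one on your path after loading the modulefile (a minimal sketch):

module load amber/20.12-iomkl-2020.02-ambertools-21.12
which python        # should point into the Amber installation's Miniconda
python --version    # should report a Python 3 version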