NAMD
If you are a Windows user, please ensure you create your jobscript directly on the CSF using gedit. This will prevent your job going into error (Eqw). Text files created on Windows contain hidden characters that Linux cannot interpret. For further information please see the guide to using the system from Windows, in particular the section about text & batch submission script files.
Overview
NAMD is a highly-scalable parallel molecular dynamics (MD) code for the simulation of large biomolecular systems.
Version 3.0 (2024-06-14) is now available on CSF3, in both CPU-only and NVIDIA CUDA accelerated GPU versions.
Version 2.14 (2020-08-05) is also available with CUDA acceleration.
Versions 2.13 & 2.14 (CPU only) are installed on the CSF.
The CUDA accelerated versions are much faster than the CPU-only versions.
Restrictions on use
NAMD is not open source software. Please read the license before you request access. In particular please note:
- The software may be used for academic, research, and internal business purposes only.
- The software must not be used for commercial purposes. Commercial use includes (but is not limited to): (1) integration of all or part of the Software into a product for sale, lease or license by or on behalf of Licensee to third parties, or (2) distribution of the Software to third parties that need it to commercialize product sold or licensed by or on behalf of Licensee.
- Citation of the software must appear in any published work. See clause 6 of the above license and the NAMD website for the required text.
- Export regulations including remote access: You must comply with all United States and United Kingdom export control laws and regulations controlling the export of the software, including, without limitation, all Export Administration Regulations of the United States Department of Commerce. Among other things, these laws and regulations prohibit, or require a license for, the export of certain types of software to specified countries. Please be aware that allowing remote access from outside the United Kingdom may constitute an export.
- There is no access to the source code on the CSF.
- Access to this software is not permitted for visitors or collaborators.
A copy of the license is also available on the CSF in: /opt/apps/apps/binapps/namd/namd-license-accessed-13dec2018.pdf
To get access to NAMD you need to be added to the namdbin unix group. Please email its-ri-team@manchester.ac.uk and confirm that you have read the above information and that your work will comply with the T&Cs.
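Once access has been granted you can confirm your group membership from a CSF login node (you may need to log out and back in before a new group takes effect). A minimal check, assuming the group is named namdbin as above:

# List your unix groups; namdbin should appear once access has been granted
groups | tr ' ' '\n' | grep -x namdbin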
Set up procedure – Version 2.14 – CUDA Accelerated
We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.
Load the following modulefile:
apps/binapps/namd/2.14-cuda
For example:
module load apps/binapps/namd/2.14-cuda
Running the application
Please do not run NAMD on the login node. Jobs should be submitted to the compute nodes via the batch system. Note that the NAMD executable is named namd2.
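As a quick sanity check (a sketch, not an official step), you can confirm on the login node that the modulefile puts namd2 on your PATH before writing a jobscript:

# Load the modulefile and locate the executable (do not run simulations on the login node)
module load apps/binapps/namd/2.14-cuda
which namd2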
Single node parallel (multi-threaded) batch job submission with GPU (2-32 cores)
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd
# 8 cores (2-32) in a single-node parallel environment
#$ -pe smp.pe 8
# Request a single NVIDIA V100 GPU
#$ -l v100=1

module load apps/binapps/namd/2.14-cuda

namd2 +p$NSLOTS apoa1.namd
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
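If useful, job progress can be checked with the standard batch system commands (not NAMD-specific). NAMD's log output will appear in the job's .o file unless you redirect it in the jobscript:

qstat                # List your queued and running jobs
qstat -j jobid       # Detailed information for one job, including any Eqw error reason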
Set up procedure – Version 2.13 & 2.14 – CPU only
We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.
Load one of the following modulefiles:
apps/binapps/namd/2.13/mpi
apps/binapps/namd/2.13/serial
apps/binapps/namd/2.13/smp

# Use version 2.14 if using the HPC pool
apps/binapps/namd/2.14/mpi
apps/binapps/namd/2.14/serial
apps/binapps/namd/2.14/smp
For example:
module load apps/binapps/namd/2.14/smp
Running the application
Please do not run NAMD on the login node. Jobs should be submitted to the compute nodes via the batch system. Note that the NAMD executable is named namd2.
Serial batch job submission
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd

module load apps/binapps/namd/2.14/serial

namd2 apoa1.namd
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Single node parallel batch job submission (2-32 cores)
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd
# 8 cores (2-32) in a single-node parallel environment
#$ -pe smp.pe 8

module load apps/binapps/namd/2.14/smp

namd2 +p$NSLOTS apoa1.namd
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Multi-node parallel batch job submission (multiples of 24 cores, minimum 48)
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd
# 48 cores (multiples of 24, minimum 48) across InfiniBand-connected nodes
#$ -pe mpi-24-ib.pe 48

module load apps/binapps/namd/2.14/mpi

charmrun +p$NSLOTS ++mpiexec $NAMD_BIN/namd2 apoa1.namd
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
NAMD is built using the Charm++ parallel programming system, therefore charmrun is invoked to spawn the processes on each node.
IMPORTANT: The ++mpiexec option must be used so that node information, etc., is passed from the batch system. Without this option you will find that all "processes" (the Charm++ parallel object is actually called a chare) run on one node. The full path to namd2 must also be included, otherwise the remote hosts will not be able to find it.
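The $NAMD_BIN variable used in the jobscript above is assumed to be set by the NAMD mpi modulefile. A quick sketch for confirming this on the login node before submitting:

# Check that the modulefile defines NAMD_BIN and that namd2 is present there
module load apps/binapps/namd/2.14/mpi
echo $NAMD_BIN
ls -l $NAMD_BIN/namd2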
Set up procedure – Version 3.0 – CPU only and CUDA Accelerated
NAMD version 3.0 and above needs a newer version of the C libraries than is available on CSF3.
These versions can be run on CSF3 using a Singularity image, now available on CSF3, which contains the newer libraries.
See the example jobscripts below for how to run NAMD v3.0 using the Singularity image.
The following modulefiles are available:
# CPU only version of NAMD v3.0
apps/binapps/namd/3.0

# CPU-GPU hybrid with NVIDIA CUDA accelerated version of NAMD v3.0
apps/binapps/namd/3.0-cuda

# Singularity image for running NAMD v3.0
apps/singularity/rocky/9.3
For example:
module load apps/binapps/namd/3.0
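The jobscripts below also rely on the $ROCKY_SIF variable, which we assume is set by the apps/singularity/rocky/9.3 modulefile to the path of the Singularity image. A quick check on the login node, for example:

# Load the Singularity image modulefile and confirm the image path
module load apps/singularity/rocky/9.3
echo $ROCKY_SIF
ls -l $ROCKY_SIF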
Running the application
Please do not run NAMD on the login node. Jobs should be submitted to the compute nodes via the batch system. Note that the NAMD executable is named namd3.
Single node parallel (multi-threaded) batch job submission (2-32 cores) for CPU only version of NAMD v3.0
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd
# 8 cores (2-32) in a single-node parallel environment
#$ -pe smp.pe 8

module load apps/binapps/namd/3.0
module load apps/singularity/rocky/9.3

export NAMD3DIR="/opt/apps/apps/binapps/namd/3.0/bin"

singularity exec --bind /scratch:/scratch,/opt:/opt $ROCKY_SIF $NAMD3DIR/namd3 +p$NSLOTS namd.inp > namd.out
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Single node parallel (multi-threaded) batch job submission (2-32 cores) for CPU-GPU version of NAMD v3.0
This version supports both GPU-offload and GPU-resident modes (see the configuration note at the end of this section).
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd
# 8 cores (2-32) in a single-node parallel environment
#$ -pe smp.pe 8
# Request a single NVIDIA V100 GPU
#$ -l v100=1

module load apps/binapps/namd/3.0-cuda
module load apps/singularity/rocky/9.3

export NAMD3DIR="/opt/apps/apps/binapps/namd/3.0-cuda/bin"

singularity exec --bind /scratch:/scratch,/opt:/opt $ROCKY_SIF $NAMD3DIR/namd3 +p$NSLOTS namd.inp > namd.out
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
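As noted above, the CUDA build supports both GPU-offload and GPU-resident modes. GPU-resident mode is selected in the NAMD simulation configuration file rather than in the jobscript; the keyword below is the one described for the NAMD 3.0 release, so treat this as an illustrative sketch and check the NAMD 3.0 user guide against your own input (namd.inp is the input file name used in the example jobscripts above):

# Extract from a NAMD 3.0 configuration file: enable the GPU-resident integrator.
# Omit this line (or set it to off) to use the default GPU-offload mode.
GPUresident on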
Further info
Updates
None.