NAMD
If you are a Windows user, please ensure you create your jobscript directly on the CSF using gedit. This will prevent your job going into error (Eqw). Text files created on Windows contain hidden characters that Linux cannot read. For further information please see the guide to using the system from Windows, in particular the section about text & batch submission script files.
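If a jobscript has already been created on Windows, it can usually be repaired on the CSF command line before submission. A minimal sketch, assuming the dos2unix utility is available on the login node and that scriptname is your jobscript:

# Remove the hidden Windows carriage-return characters, editing the file in place
dos2unix scriptname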
Overview
NAMD is a highly-scalable parallel molecular dynamics (MD) code for the simulation of large biomolecular systems.
Version 2.14 is installed on the CSF.
Restrictions on use
NAMD is not open source software. Please read the license before you request access. In particular please note:
- The software may be used for academic, research, and internal business purposes only.
- The software must not be used for commercial purposes. Commercial use includes (but is not limited to): (1) integration of all or part of the Software into a product for sale, lease or license by or on behalf of Licensee to third parties, or (2) distribution of the Software to third parties that need it to commercialize product sold or licensed by or on behalf of Licensee.
- Citation of the software must appear in any published work. See clause 6 of the above license and the NAMD website for the required text.
- Export regulations including remote access: You must comply with all United States and United Kingdom export control laws and regulations controlling the export of the software, including, without limitation, all Export Administration Regulations of the United States Department of Commerce. Among other things, these laws and regulations prohibit, or require a license for, the export of certain types of software to specified countries. Please be aware that allowing remote access from outside the United Kingdom may constitute an export.
- There is no access to the source code on the CSF.
- Access to this software is not permitted for visitors or collaborators.
A copy of the license is also available on the CSF in: /opt/apps/apps/binapps/namd/namd-license-accessed-13dec2018.pdf
To get access to NAMD you need to be added to a unix group. Please email its-ri-team@manchester.ac.uk and confirm that you have read the above information and that your work will comply with the T&Cs.
Set up procedure
We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.
Load one of the following modulefiles:
module load namd/2.14-smp                  # Binary package install
module load namd/2.14-iompi-2020.02-mpi   # Compiled on CSF4
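Before using a modulefile in a jobscript you may wish to check that it loads correctly on the login node. A minimal sketch (illustration only):

module load namd/2.14-smp
# Confirm the namd2 executable is now on your PATH
which namd2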
Running the application
Please do not run NAMD on the login node. Jobs should be submitted to the compute nodes via the batch system. Note that the NAMD executable is named namd2.
Single node parallel batch job submission (2-40 cores)
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multicore     # (or --partition=multicore) Single-node parallel
#SBATCH -n 16            # (or --ntasks=16) Number of cores (2--40)

module load namd/2.14-smp

namd2 +p$SLURM_NTASKS myfile.namd
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
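The myfile.namd used in the jobscripts above is an ordinary NAMD configuration file. The sketch below is for illustration only; the input filenames and parameter values are assumptions and should be replaced with those appropriate to your own system:

# Input files (hypothetical names)
structure          mysystem.psf
coordinates        mysystem.pdb
paraTypeCharmm     on
parameters         par_all36_prot.prm

# Basic simulation settings (example values only)
temperature        310
exclude            scaled1-4
1-4scaling         1.0
cutoff             12.0
switching          on
switchdist         10.0
pairlistdist       14.0
timestep           2.0
rigidBonds         all

# Output
outputName         mysystem_out
outputEnergies     1000

# Run 10000 MD steps
run                10000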
Multi-node parallel batch job submission (multiples of 40 cores, minimum 80)
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p multinode     # (or --partition=multinode) Multi-node parallel
#SBATCH -n 80            # (or --ntasks=80) Number of cores (multiple of 40, minimum 80)

module load namd/2.14-iompi-2020.02-mpi

charmrun +p$SLURM_NTASKS ++mpiexec $NAMD_BIN/namd2 myfile.namd
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
NAMD is built using the Charm++ parallel programming system, therefore charmrun is invoked to spawn the processes on each node.
IMPORTANT: The ++mpiexec option must be used so that node information, etc., is passed from the batch system. Without this option, you will find that all “processes” (the Charm++ parallel object is actually called a chare) run on one node. The path to namd2 must also be included, otherwise the remote hosts will not be able to find it.
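To confirm that a multi-node run really did use all of the requested cores and nodes, check the header of the NAMD log, which typically reports the parallel layout. A minimal sketch, assuming your job wrote its output to myjob.log:

# NAMD normally prints a line such as
#   Info: Running on 80 processors, 80 nodes, 2 physical nodes.
grep "Running on" myjob.log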
Further info
Updates
None.