The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
NAMD
Overview
Parallel molecular dynamics code (home page).
Two versions are available on CSF:
- Version 2.7 is compiled from source on the CSF (very limited access).
- Version 2.9 is a binary install.
Restrictions on use
Due to license restrictions users must request access to the appropriate CSF Unix group, namd or namdbin. Most users can only be granted access to the binary version. Please email its-ri-team@manchester.ac.uk for access.
All users should read and follow the license, a copy of which can be found in $NAMD_HOME/license.txt once you have access to the module. The following points are of particular note:
- NAMD is owned by and copyrighted to the University of Illinois – this is detailed throughout the license agreement. Of most note in this respect are clauses 2., 4., and 5.
- NAMD is for academic, research and internal business purposes only, i.e. not for commercial use. A definition of commercial use is given in clause 7 of the license.
- Citation of the NAMD software must appear in any published work. See clause 6 and the NAMD website for the required text.
Set up procedure
The NAMD 2.9 (binary install) modulefile is loaded using:
module load apps/binapps/namd/2.9
The NAMD 2.7 (compiled on CSF) modulefile has a number of prerequisite modulefiles. For use on non-InfiniBand (GigE) connected nodes, load the following:
module load compilers/intel/fortran/11.1.064
module load libs/intel/mkl/10.2u3
module load mpi/intel-11.1/openmpi/1.4.3
module load apps/intel-11.1/namd/2.7
In all versions the executable is named: namd2
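Once you have loaded the appropriate modulefile you can check that your environment has been set up correctly and that the executable is being picked up from the expected location (the path reported will depend on which version you loaded):
module list
which namd2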
Requirements
- The MPI versions were compiled against Gigabit Ethernet and are not compatible with the InfiniBand implementation of MPI.
- Compilation was done with the Intel 11.1 compilers.
Example
Load the appropriate modulefiles as listed above. Then create a jobscript in the directory containing your input files, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -pe smp.pe 8
# Example: 8 cores in this parallel environment (max 12)
# NSLOTS is automatically set to the number of cores specified on the PE line
mpirun -n $NSLOTS namd2
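Note that namd2 is normally given the name of a NAMD configuration file on the command line. A minimal sketch of the final line of the jobscript when an input file is supplied is shown below; the filenames myjob.conf and myjob.log are illustrative only and should be replaced with your own input and output names:
mpirun -n $NSLOTS namd2 myjob.conf > myjob.log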
Submit the job using
qsub jobscript
where jobscript is the name of your jobscript above.
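For example, if your jobscript is saved as namd_job.sh (an illustrative name), submit it and then monitor its progress with the standard batch system commands:
qsub namd_job.sh
qstat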
Further info
- BBSRC benchmarking on HECToR etc: http://www.bbsrc.ac.uk/funding/facilities/facilities.aspx?#highperformancecomputing