{"id":2062,"date":"2019-01-18T13:05:22","date_gmt":"2019-01-18T13:05:22","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=2062"},"modified":"2025-06-19T18:04:35","modified_gmt":"2025-06-19T17:04:35","slug":"namd","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/namd\/","title":{"rendered":"NAMD"},"content":{"rendered":"<table class=\"hint\">\n<tr>\n<td><em>If you are a windows user &#8211; please ensure you create your jobscript ON THE CSF directly using <a href=\"\/csf3\/software\/tools\/gedit\/\">gedit<\/a>. This will prevent your job going into error (Eqw). Text files created on windows have hidden characters that linux cannot read. For further information please see the <a href=\"\/csf3\/getting-started\/using-from-windows\/\">guide to using the system from windows<\/a>, in particular the section about <a href=\"\/csf3\/getting-started\/using-from-windows\/#textfiles\">text &amp; batch submission script files<\/a><\/em>.\n<\/td>\n<\/tr>\n<\/table>\n<h2>Overview<\/h2>\n<p><a href=\"https:\/\/www.ks.uiuc.edu\/Research\/namd\">NAMD<\/a> is a highly-scalable parallel molecular dynamics (MD) code for the simulation of large biomolecular systems.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-content\/uploads\/new.png\" width=\"20\"><\/img><br \/>\n<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/namd\/#Set_up_procedure_8211_Version_30_8211_CPU_only_and_CUDA_Accelerated\"><strong>Version 3.0 (2024-06-14)<\/strong> is now available in CSF3.<\/a><br \/>\nBoth CPU and NVIDIA CUDA accelerated GPU versions are available.<br \/>\nThe CUDA accelerated version is much faster than the CPU version.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-content\/uploads\/new.png\" width=\"20\"><\/img><br \/>\n<a 
href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/namd\/#Set_up_procedure_8211_Version_214_8211_CUDA_Accelerated\"><strong>Version 2.14 (2020-08-05) CUDA accelerated<\/strong> is now available in CSF3.<\/a><br \/>\nThe CUDA accelerated version is much faster than the CPU version.<\/p>\n<p><strong>Version 2.13 &#038; 2.14<\/strong> are installed on the CSF.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>NAMD is not open source software. Please read the <a href=\"http:\/\/www.ks.uiuc.edu\/Research\/namd\/license.html\">license<\/a> before you request access. In particular please note:<\/p>\n<ul>\n<li>The software may be used for academic, research, and internal business purposes only.<\/li>\n<li>The software must not be used for commercial purposes. Commercial use includes (but is not limited to): (1) integration of all or part of the Software into a product for sale, lease or license by or on behalf of Licensee to third parties, or (2) distribution of the Software to third parties that need it to commercialize product sold or licensed by or on behalf of Licensee.<\/li>\n<li>Citation of the software must appear in any published work. See clause 6 of the above license and <a href=\"http:\/\/www.ks.uiuc.edu\/Research\/namd\/papers.html\">the NAMD website<\/a> for the required text.<\/li>\n<li>Export regulations including remote access: You must comply with all United States and United Kingdom export control laws and regulations controlling the export of the software, including, without limitation, all Export Administration Regulations of the United States Department of Commerce. Among other things, these laws and regulations prohibit, or require a license for, the export of certain types of software to specified countries. 
Please be aware that allowing remote access from outside the United Kingdom may constitute an export.<\/li>\n<li>There is no access to the source code on the CSF.<\/li>\n<li>Access to this software is not permitted for visitors or collaborators.<\/li>\n<\/ul>\n<p>A copy of the license is also available on the CSF in: <code>\/opt\/apps\/apps\/binapps\/namd\/namd-license-accessed-13dec2018.pdf<\/code><\/p>\n<p>To get access to NAMD you need to be added to the <code>namdbin<\/code> Unix group. Please <a href=\"\/csf3\/overview\/help\/\">contact us<\/a> and confirm that you have read the above information and that your work will comply with the T&#038;Cs.<\/p>\n<h2>Set up procedure &#8211; Version 2.14 &#8211; CUDA Accelerated<\/h2>\n<p>We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job <abbr title=\"sbatch exports your login environment to the job by default\">inherit these settings<\/abbr>.<\/p>\n<p>Load the following modulefile:<\/p>\n<pre>\r\napps\/binapps\/namd\/2.14-cuda\r\n<\/pre>\n<p>For example:<\/p>\n<pre>\r\nmodule load apps\/binapps\/namd\/2.14-cuda\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run NAMD on the login node. Jobs should be submitted to the compute nodes via the batch system. Note that the NAMD executable is named <code>namd2<\/code>.<\/p>\n<h3>Single node parallel (multi-threaded) batch job submission with GPU (2-32 cores)<\/h3>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p gpuV        # GPU Partition name (required). 
Available options - gpuV, gpuA, gpuA40GB, gpuL\r\n#SBATCH -G 1           # Number of GPUs (or --gpus=N) (required)\r\n#SBATCH -n 8           # (or --ntasks=) Number of cores (8 on gpuV and 12 on gpuA, gpuA40GB, gpuL)\r\n#SBATCH -t 2-0         # Job \"wallclock\" limit (required). Max permitted is 4 days (4-0) for GPU partitions\r\n                       # In this example 2-0 is 2 days (and 0 hours).\r\n                       # Other formats: min:sec, hrs:min:sec, day-hours (to name a few)\r\n\r\n# Clean env and load module\r\nmodule purge\r\nmodule load apps\/binapps\/namd\/2.14-cuda\r\n\r\nnamd2 +p$SLURM_NTASKS apoa1.namd\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h2>Set up procedure &#8211; Version 2.13 &#038; 2.14 &#8211; CPU only<\/h2>\n<p>We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job <abbr title=\"sbatch exports your login environment to the job by default\">inherit these settings<\/abbr>.<\/p>\n<p>Load one of the following modulefiles:<\/p>\n<pre>\r\n  apps\/binapps\/namd\/2.13\/mpi\r\n  apps\/binapps\/namd\/2.13\/serial\r\n  apps\/binapps\/namd\/2.13\/smp\r\n\r\n# Use version 2.14 if using the HPC pool\r\n  apps\/binapps\/namd\/2.14\/mpi\r\n  apps\/binapps\/namd\/2.14\/serial\r\n  apps\/binapps\/namd\/2.14\/smp\r\n<\/pre>\n<p>For example:<\/p>\n<pre>\r\nmodule load apps\/binapps\/namd\/2.14\/smp\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run NAMD on the login node. Jobs should be submitted to the compute nodes via the batch system. 
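<\/p>\n<p>After submitting a job you can check its progress with standard Slurm commands (generic Slurm usage, not specific to NAMD):<\/p>\n<pre>\r\nsqueue -u $USER           # list your queued and running jobs\r\nsacct -j <em>jobid<\/em>              # accounting summary once the job has started\r\n<\/pre>\n<p>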
Note that the NAMD executable is named <code>namd2<\/code>.<\/p>\n<h3>Serial batch job submission<\/h3>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p serial      # Partition name (required). This gives you 1 CPU core.\r\n#SBATCH -t 2-0         # Job \"wallclock\" limit (required). Max permitted is 7 days (7-0)\r\n                       # In this example 2-0 is 2 days (and 0 hours).\r\n                       # Other formats: min:sec, hrs:min:sec, day-hours (to name a few)\r\n\r\n# Clean env and load module\r\nmodule purge\r\nmodule load apps\/binapps\/namd\/2.14\/serial\r\n\r\nnamd2 apoa1.namd\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Single node parallel batch job submission (2-168 cores)<\/h3>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore   # Partition name (required). This gives you the AMD Genoa (168-core) nodes.\r\n#SBATCH -n 8           # (or --ntasks=) Number of cores (2--168 on AMD)\r\n#SBATCH -t 2-0         # Job \"wallclock\" limit (required). 
Max permitted is 7 days (7-0)\r\n                       # In this example 2-0 is 2 days (and 0 hours).\r\n                       # Other formats: min:sec, hrs:min:sec, day-hours (to name a few)\r\n\r\n# Clean env and load module\r\nmodule purge\r\nmodule load apps\/binapps\/namd\/2.14\/smp\r\n\r\nnamd2 +p$SLURM_NTASKS apoa1.namd\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Multi-node parallel batch job submission (for HPC Pool only)<\/h3>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n### All of the following flags are required!\r\n#SBATCH -p hpcpool        # The \"partition\" - named hpcpool\r\n#SBATCH -N 4              # (or --nodes=) Minimum is 4, Max is 32. Job uses 32 cores on each node.\r\n#SBATCH -n 128            # (or --ntasks=) TOTAL number of tasks. Max is 1024.\r\n#SBATCH -t 1-0            # Wallclock limit. 1-0 is 1 day. Maximum permitted is 4-0 (4 days).\r\n#SBATCH -A hpc-proj-name  # Use your HPC project code\r\n\r\nmodule purge\r\nmodule load apps\/binapps\/namd\/2.14\/mpi\r\n\r\ncharmrun +p$SLURM_NTASKS ++mpiexec $NAMD_BIN\/namd2 apoa1.namd\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<p>NAMD is built using the Charm++ parallel programming system, therefore <code>charmrun<\/code> is invoked to spawn the processes on each node.<\/p>\n<p><strong>IMPORTANT: The <code>++mpiexec<\/code> option must be used so that node information, etc., is passed from the batch system. Without this option, you will find that all &#8220;processes&#8221; (the Charm++ parallel object is actually called a chare) run on one node. 
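<\/strong><\/p>\n<p>As a quick sanity check (a suggestion only, using standard Slurm\/Linux tools rather than anything NAMD-specific), you can add the following line to the jobscript before the <code>charmrun<\/code> command to confirm that tasks are spread across the allocated nodes:<\/p>\n<pre class=\"slurm\">\r\n# Count how many tasks landed on each node - all allocated nodes should appear\r\nsrun hostname | sort | uniq -c\r\n<\/pre>\n<p><strong>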
The path to <code>namd2<\/code> must also be included, otherwise the remote hosts will not be able to find it.<\/strong><\/p>\n<h2>Set up procedure &#8211; Version 3.0 &#8211; CPU only and CUDA Accelerated<\/h2>\n<p><del>NAMD Version 3.0 and above needs a newer version of the C libraries than is available on CSF3.<br \/>\nThese versions can now be run on CSF3 using a Singularity image, now made available in CSF3, which contains these newer libraries.<\/del><br \/>\nThese versions can now be run directly on the updated CSF3 (SLURM) system, without using a Singularity image.<\/p>\n<p>See the example jobscripts below for how to run NAMD v3.0.<\/p>\n<p>The following modulefiles are available:<\/p>\n<pre>\r\n# CPU only version of NAMD v3.0\r\n\r\n  <strong>apps\/binapps\/namd\/3.0<\/strong>\r\n\r\n\r\n# CPU-GPU Hybrid with NVIDIA CUDA accelerated version of NAMD v3.0\r\n\r\n  <strong>apps\/binapps\/namd\/3.0-cuda<\/strong>\r\n<\/pre>\n<p>For example:<\/p>\n<pre>\r\nmodule load apps\/binapps\/namd\/3.0\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run NAMD on the login node. Jobs should be submitted to the compute nodes via the batch system. Note that the NAMD executable is named <code>namd3<\/code>.<\/p>\n<h3>Single node parallel (multi-threaded) batch job submission (2-168 cores) for CPU only version of NAMD v3.0<\/h3>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore   # Partition name (required). This gives you the AMD Genoa (168-core) nodes.\r\n#SBATCH -n 8           # (or --ntasks=) Number of cores (2--168 on AMD)\r\n#SBATCH -t 2-0         # Job \"wallclock\" limit (required). 
Max permitted is 7 days (7-0)\r\n                       # In this example 2-0 is 2 days (and 0 hours).\r\n                       # Other formats: min:sec, hrs:min:sec, day-hours (to name a few)\r\n\r\n# Clean env and load module\r\nmodule purge\r\nmodule load apps\/binapps\/namd\/3.0\r\n\r\nnamd3 +p$SLURM_NTASKS namd.inp > namd.out\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Single node parallel (multi-threaded) batch job submission (2-32 cores) for CPU-GPU version of NAMD v3.0<\/h3>\n<p>This version supports both <strong>GPU-offload<\/strong> and <strong>GPU-resident<\/strong> modes. (GPU-resident mode is selected in the NAMD configuration file, typically with <code>CUDASOAintegrate on<\/code> &#8211; see the NVIDIA article linked under Further info.)<\/p>\n<p>Create a batch submission script (which will load the modulefile in the jobscript), for example:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p gpuV        # GPU Partition name (required). Available options - gpuV, gpuA, gpuA40GB, gpuL\r\n#SBATCH -G 1           # Number of GPUs (or --gpus=N) (required)\r\n#SBATCH -n 8           # (or --ntasks=) Number of cores (8 on gpuV and 12 on gpuA, gpuA40GB, gpuL)\r\n#SBATCH -t 2-0         # Job \"wallclock\" limit (required). 
Max permitted is 4 days (4-0) for GPU partitions\r\n                       # In this example 2-0 is 2 days (and 0 hours).\r\n                       # Other formats: min:sec, hrs:min:sec, day-hours (to name a few)\r\n\r\n# Clean env and load module\r\nmodule purge\r\nmodule load apps\/binapps\/namd\/3.0-cuda\r\n\r\nnamd3 +p$SLURM_NTASKS namd.inp > namd.out\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"https:\/\/www.ks.uiuc.edu\/Research\/namd\">NAMD website<\/a><\/li>\n<li><a href=\"https:\/\/developer.nvidia.com\/blog\/delivering-up-to-9x-throughput-with-namd-v3-and-a100-gpu\/\" target=\"_blank\" rel=\"noopener\">Getting good performance with NAMD v3.0 using GPU<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>None.\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>If you are a windows user &#8211; please ensure you create your jobscript ON THE CSF directly using gedit. This will prevent your job going into error (Eqw). Text files created on windows have hidden characters that linux cannot read. For further information please see the guide to using the system from windows, in particular the section about text &amp; batch submission script files. Overview NAMD is a highly-scalable parallel molecular dynamics (MD) code for.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/namd\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-2062","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/2062","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=2062"}],"version-history":[{"count":22,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/2062\/revisions"}],"predecessor-version":[{"id":10402,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/2062\/revisions\/10402"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=2062"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}