{"id":9123,"date":"2025-03-17T17:58:43","date_gmt":"2025-03-17T17:58:43","guid":{"rendered":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=9123"},"modified":"2026-03-27T11:04:12","modified_gmt":"2026-03-27T11:04:12","slug":"partitions","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/batch-slurm\/partitions\/","title":{"rendered":"Slurm Partitions (Compute Resources)"},"content":{"rendered":"<p>In Slurm, jobs are submitted to <em>partitions<\/em> &#8211; think of these as job queues for different types of jobs.<\/p>\n<p>In your jobscript, you <em>must<\/em> specify the <em>partition<\/em>, <em>number of cores<\/em> and max &#8220;<em>wallclock<\/em>&#8221; time your job is allowed to run for (up to a permitted maximum time). There is now <em>no<\/em> default wallclock time &#8211; you must specify it.<\/p>\n<p>You may also need to specify other flags depending on which partition you submit to. For example, if using one of the GPU partitions, you&#8217;ll need to say how many GPUs you want. The HPC Pool and H200 partitions also require an account code.<\/p>\n<p>The table below provides an overview of the partitions and compute resources currently available in the upgraded CSF3 cluster (Slurm).<\/p>\n<p>The <a href=\"#jobscript\">Slurm jobscript template<\/a> (shown further down this page) gives you a starting point for your jobscript. 
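For example, a minimal jobscript for an 8-core, one-day job on the <code>multicore<\/code> partition (the core count and time limit here are illustrative values only) might begin:<\/p>\n<pre class=\"slurm\">#!\/bin\/bash --login\r\n#SBATCH -p multicore   # Partition - see table below\r\n#SBATCH -n 8           # Number of cores\r\n#SBATCH -t 1-0         # Wallclock limit of 1 day\r\n<\/pre>\n<p>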
Much more detail on Slurm jobscript options is available in the <a href=\"\/csf3\/batch-slurm\/sge-to-slurm\/\">SGE to Slurm reference<\/a>.<\/p>\n<p>You will also need to use new commands on the login node: Slurm uses <code>sbatch<\/code>, <code>squeue<\/code>, <code>scancel<\/code> and <code>srun<\/code> commands (not <code>qsub<\/code>, <code>qstat<\/code>, <code>qdel<\/code> and <code>qrsh<\/code> as used in SGE).<\/p>\n<div class=\"hint\">\n<p><strong>Reminder: You <em>must<\/em> specify a Slurm Partition in your jobscript for <em>all<\/em> job types<\/strong>: <code>-p <em>name<\/em><\/code><\/p>\n<p><strong>You <em>must also<\/em> specify a maximum wallclock time limit for your job (up to a permitted maximum time)<\/strong>: <code>-t <em>timelimit<\/em><\/code><\/p>\n<p><strong>See table below.<\/strong><\/p>\n<\/div>\n<table class=\"striped\">\n<caption>CSF3 Slurm Partitions and Compute Resources<\/caption>\n<thead>\n<tr>\n<th>Slurm Partition name<\/th>\n<th>Partition Summary<\/th>\n<th>CPU<\/th>\n<th><acronym title=\"This is the permitted maximum wallclock. You MUST specify an actual wallclock limit for your job.\">Max Wallclock<\/acronym><\/th>\n<th>Compute Resources<\/th>\n<th>Notes<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr class=\"subheader\">\n<th colspan=\"6\">AMD &#8220;Genoa&#8221; Nodes<\/th>\n<\/tr>\n<tr>\n<td><code>-p multicore<\/code><\/p>\n<p>(was <code>-pe amd.pe<\/code> on SGE)<\/td>\n<td>Single-node <strong>parallel (2-168 cores) batch<\/strong> jobs. <code>-n <em>numcores<\/em><\/code><\/td>\n<td>AMD Genoa<\/td>\n<td>7 days <code>-t 7-0<\/code><\/td>\n<td>74x AMD nodes (168 cores per node). 8GB RAM per core.<\/td>\n<td>12,432 cores available in total.<\/p>\n<p><strong>This is the main parallel-job partition that most people will use.<\/strong><\/td>\n<\/tr>\n<tr>\n<td><code>-p interactive<\/code><\/td>\n<td><strong>Serial (1-core) and small parallel <em>interactive<\/em><\/strong> jobs. 
Optional: <code>-n <em>numcores<\/em><\/code><\/p>\n<p><strong>Serial (1-core) <em>batch<\/em><\/strong> jobs.<\/td>\n<td>AMD Genoa<\/td>\n<td><span class=\"red\"><strong>6 hours<\/strong><\/span> <code>-t 0-6<\/code><\/td>\n<td>2x AMD nodes (168 cores per node). 8GB RAM per core.<\/td>\n<td>336 cores available in total. <code>srun<\/code> (interactive), <code>srun-x11<\/code> (interactive GUIs) and also <code>sbatch<\/code> (jobscripts) accepted.<\/p>\n<p><strong>For testing apps on AMD &#8220;Genoa&#8221;.<\/strong><\/td>\n<\/tr>\n<tr class=\"subheader\">\n<th colspan=\"6\">Intel Nodes<\/th>\n<\/tr>\n<tr>\n<td><code>-p multicore_small<\/code><\/p>\n<p>(was <code>-pe smp.pe<\/code> on SGE)<\/td>\n<td>Single-node <strong>parallel (2-32 cores) batch<\/strong> jobs. <code>-n <em>numcores<\/em><\/code><\/td>\n<td>Intel<\/td>\n<td>7 days <code>-t 7-0<\/code><\/td>\n<td>21x Haswell nodes (24 cores, 5GB\/core) Optional: <code>-C haswell<\/code><\/p>\n<p>28x Skylake nodes (32 cores, 6GB\/core) Optional: <code>-C skylake<\/code><\/td>\n<td>1400 cores available in total (shared with the serial partition below)<\/td>\n<\/tr>\n<tr>\n<td><code>-p serial<\/code><\/td>\n<td><strong>Serial (1-core) batch<\/strong> jobs.<\/p>\n<p><strong>Serial (1-core) <em>interactive<\/em><\/strong> jobs.<\/td>\n<td>Intel<\/td>\n<td>7 days <code>-t 7-0<\/code><\/p>\n<p><span class=\"red\"><strong>6 hours<\/strong><\/span> <code>-t 0-6<\/code><\/td>\n<td>21x Haswell nodes (24 cores, 5GB\/core) Optional: <code>-C haswell<\/code><\/p>\n<p>28x Skylake nodes (32 cores, 6GB\/core) Optional: <code>-C skylake<\/code><\/td>\n<td>1400 cores available in total (shared with the multicore_small partition above)<\/td>\n<\/tr>\n<tr class=\"subheader\">\n<th colspan=\"6\">High Memory Nodes<\/th>\n<\/tr>\n<tr>\n<td><code>-p himem<\/code><\/td>\n<td>High memory jobs up to 2TB (1-32 cores). 
Optional: <code>-n <em>numcores<\/em><\/code><\/td>\n<td>Intel<\/td>\n<td>7 days <code>-t 7-0<\/code><\/td>\n<td>1x 512GB node (16 cores, 32GB per core) Optional: <code>-C haswell<\/code><\/p>\n<p>4x 1.5TB nodes (32 cores, 40GB per core) Optional: <code>-C cascadelake<\/code><\/p>\n<p>9x 2TB nodes (32 cores, 64GB per core) Optional: <code>-C icelake<\/code> or <code>-C ssd<\/code><\/td>\n<td>Job default is 32GB\/core <em>unless you ask for more<\/em> &#8211; even if you add a <code>-C<\/code> constraint! Use <code>--mem=<em>total<\/em>G<\/code> or <code>--mem-per-cpu=<em>num<\/em>G<\/code> to request more.<br \/>\n<a href=\"..\/high-memory-jobs-slurm\">High Memory Docs<\/a><\/td>\n<\/tr>\n<tr>\n<td><code>-p vhimem<\/code><\/td>\n<td>High memory jobs requiring up to 4TB. Optional: <code>-n <em>numcores<\/em><\/code><\/td>\n<td>Intel<\/td>\n<td>7 days <code>-t 7-0<\/code><\/td>\n<td>1x 4TB node (32 cores, 128GB per core) Optional: <code>-C icelake<\/code> or <code>-C ssd<\/code><\/td>\n<td><strong>Restricted access<\/strong> &#8211; <a href=\"\/csf3\/help\">request access<\/a><\/td>\n<\/tr>\n<tr class=\"subheader\">\n<th id=\"gpunodes\" colspan=\"6\">GPU Nodes<\/th>\n<\/tr>\n<tr>\n<td><code>-p gpuV<\/code><\/td>\n<td>Nvidia V100 GPU batch and interactive jobs. Optional: <code>-n <em>numcores<\/em><\/code><\/td>\n<td>Intel<\/td>\n<td>4 days <code>-t 4-0<\/code><\/td>\n<td>Node spec: 4x Nvidia V100<\/p>\n<p>16GB GPU RAM<\/p>\n<p>32x Intel host cores<\/p>\n<p>192GB host RAM (5GB\/core)<\/p>\n<p>Required: <code>-G <em>NUM<\/em><\/code> or <code>--gpus=<em>NUM<\/em><\/code><\/td>\n<td class=\"red\">\n<strong>!!!NO LONGER AVAILABLE!!!<br \/>\nOctober 2025 &#8211; ALL V100s have been removed from service and replaced with L40S GPUs.<\/strong><\/td>\n<\/tr>\n<tr>\n<td><code>-p gpuA<\/code><\/td>\n<td>Nvidia A100 GPU batch and interactive jobs. 
Optional: <code>-n <em>numcores<\/em><\/code><\/td>\n<td>AMD Milan<\/td>\n<td>4 days <code>-t 4-0<\/code><\/td>\n<td>Node spec: 4x Nvidia A100s<br \/>\n80GB GPU RAM<br \/>\n48x AMD &#8220;Milan&#8221; host cores<br \/>\n512GB host RAM (10GB\/core) Required: <code>-G <em>NUM<\/em><\/code> or <code>--gpus=<em>NUM<\/em><\/code><\/td>\n<td>Max 12 CPU-cores per GPU<\/p>\n<p>19 nodes available = 76 GPUs<\/td>\n<\/tr>\n<tr>\n<td><code>-p gpuL<\/code><\/td>\n<td>Nvidia L40S GPU batch and interactive jobs. Optional: <code>-n <em>numcores<\/em><\/code><\/td>\n<td>Intel<\/td>\n<td>4 days <code>-t 4-0<\/code><\/td>\n<td>Node spec: 4x Nvidia L40S<br \/>\n48GB GPU RAM<br \/>\n48x Intel &#8220;Sapphire Rapids&#8221; host cores (2x 24-core CPUs)<br \/>\n512GB host RAM (10GB\/core) Required: <code>-G <em>NUM<\/em><\/code> or <code>--gpus=<em>NUM<\/em><\/code><\/td>\n<td>Max 12 CPU-cores per GPU<\/p>\n<p>21 nodes available = 84 GPUs<\/td>\n<\/tr>\n<tr>\n<td><code>-p gpuA40GB<\/code><\/td>\n<td>Nvidia A100 (40GB) GPU batch and interactive jobs. Optional: <code>-n <em>numcores<\/em><\/code><\/td>\n<td>AMD Milan<\/td>\n<td>4 days <code>-t 4-0<\/code><\/td>\n<td>Node spec: 4x Nvidia A100s<br \/>\n40GB GPU RAM<br \/>\n48x AMD &#8220;Milan&#8221; host cores<br \/>\n512GB host RAM (10GB\/core) Required: <code>-G <em>NUM<\/em><\/code> or <code>--gpus=<em>NUM<\/em><\/code><\/td>\n<td><strong>Very Restricted Access<\/strong><br \/>\nMax 12 CPU-cores per GPU, 2 nodes available = 8 GPUs<\/td>\n<\/tr>\n<tr>\n<td><code>-p gpuH<\/code><\/td>\n<td>Nvidia H200 (141 GB) GPU batch jobs. 
Optional: <code>-n <em>numcores<\/em><\/code><\/td>\n<td>Intel Xeon Emerald Rapids<\/td>\n<td>4 days <code>-t 4-0<\/code><\/td>\n<td>Node spec: 8x Nvidia H200<br \/>\n141GB GPU RAM<br \/>\n64 &#8220;Emerald Rapids&#8221; host cores<br \/>\n1.5 TB host RAM (24GB\/core) Required: <code>-G <em>NUM<\/em><\/code> or <code>--gpus=<em>NUM<\/em><\/code><\/td>\n<td><strong>Restricted Access<\/strong><br \/>\nMax 8 CPU-cores per GPU, up to 3 nodes available = 24 GPUs.<br \/>\n<a href=\"..\/gpu-jobs-slurm\/h200\/\">H200 docs<\/a>.<\/td>\n<\/tr>\n<tr>\n<td><code>-p gpuH_short<\/code><\/td>\n<td>Nvidia H200 (141 GB) GPU batch and interactive jobs. Optional: <code>-n <em>numcores<\/em><\/code> or <code>-c <em>numcores<\/em><\/code><\/td>\n<td>Intel Xeon Emerald Rapids<\/td>\n<td>1 day <code>-t 1-0<\/code><\/td>\n<td>Node spec: 8x Nvidia H200<br \/>\n141GB GPU RAM<br \/>\n64 &#8220;Emerald Rapids&#8221; host cores<br \/>\n1.5 TB host RAM (24GB\/core) Required: <code>-G <em>NUM<\/em><\/code> or <code>--gpus=<em>NUM<\/em><\/code><\/td>\n<td><strong>Restricted Access<\/strong><br \/>\nMax 8 CPU-cores per GPU, up to 2 nodes available = 16 GPUs.<br \/>\n<a href=\"..\/gpu-jobs-slurm\/h200\/\">H200 docs<\/a>.<\/td>\n<\/tr>\n<tr class=\"subheader\">\n<th colspan=\"6\">HPC Pool<\/th>\n<\/tr>\n<tr>\n<td><code>-p hpcpool<\/code><\/td>\n<td>Multi-node <strong>parallel (128-1024 cores) batch<\/strong> jobs. 
<code>-n <em>numcores<\/em><\/code><\/td>\n<td>Intel Skylake<\/td>\n<td>4 days <code>-t 4-0<\/code><\/td>\n<td>124x Skylake nodes (32 cores, 5GB\/core)<\/p>\n<pre>-A <em>hpc-proj-code<\/em>\r\n-N 4\r\n-n 128<\/pre>\n<\/td>\n<td>3968 cores available in total.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><em>*Information is correct as of 7th May 2025<\/em><\/p>\n<h2 id=\"jobscript\">Jobscript Template<\/h2>\n<p>All jobs must specify the partition name (see above) and wallclock runtime limit (see the various <a href=\"..\/timelimits-slurm\">time formats<\/a>) for the job.<\/p>\n<p>Note that the Slurm lines begin with <code>#SBATCH<\/code> &#8211; not <code>#$<\/code> as used on SGE. A common mistake is to use a <code>$<\/code> symbol in the Slurm special line. It is NOT <code>#<strong>$<\/strong>BATCH<\/code>.<\/p>\n<h3>CPU Jobs<\/h3>\n<pre class=\"slurm\">#!\/bin\/bash --login\r\n#SBATCH -p <em>partitionname<\/em>    # <strong>Required (all jobs)<\/strong> - see table above\r\n#SBATCH -n <em>numcores<\/em>         # <strong>Required (parallel jobs)<\/strong>  - defaults to 1 for serial\r\n#SBATCH -t <em>timelimit<\/em>        # <strong>Required (all jobs)<\/strong> wallclock timelimit. 
Max permitted is 7 days (7-0)\r\n\r\n#SBATCH <em>flag resources<\/em>      # See table above for additional high-memory, CPU architecture\r\n                            # and GPU resource flags.\r\n\r\n# We recommend purging your env\r\nmodule purge\r\nmodule load apps\/<em>gcc<\/em>\/<em>someapp<\/em>\/<em>1.2.3<\/em>\r\n\r\n# $SLURM_NTASKS will be set to the number of cores requested above, if your app wants to know.\r\n<em>someapp.exe<\/em>\r\n\r\n# Slurm knows to run $SLURM_NTASKS MPI processes if using mpirun\r\nmpirun <em>some_mpi_app.exe<\/em>\r\n<\/pre>\n<p>For OpenMP parallel applications you can use:<\/p>\n<pre class=\"slurm\">#!\/bin\/bash --login\r\n#SBATCH -p <em>partitionname<\/em>   # <strong>Required (all jobs)<\/strong> - see table above\r\n#SBATCH -n 1           # 1 task ($SLURM_NTASKS set to 1)\r\n#SBATCH -c <em>numcores<\/em>    # cores-per-task ($SLURM_CPUS_PER_TASK set to this)\r\n#SBATCH -t <em>timelimit<\/em>   # Max wallclock time\r\n\r\n# Inform OpenMP how many cores to use\r\nexport OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK\r\n<em>some_omp_app.exe<\/em>\r\n<\/pre>\n<p>Submit the job using<\/p>\n<pre class=\"slurm\">sbatch <em>jobscript<\/em>\r\n<\/pre>\n<p>Check on the status of the job using<\/p>\n<pre class=\"slurm\">squeue\r\n<\/pre>\n<h3>GPU Jobs<\/h3>\n<p>Please see the <a href=\"\/csf3\/batch-slurm\/gpu-jobs-slurm\/\">GPU Jobs<\/a> page.<\/p>\n<h3>High-memory Jobs<\/h3>\n<p>Please see the <a href=\"\/csf3\/batch-slurm\/high-memory-jobs-slurm\/\">High Memory Jobs<\/a> page.<\/p>\n<h3>More jobscript options<\/h3>\n<p>For much more information on Slurm jobscript options, compared to SGE options, please see the <a href=\"\/csf3\/batch-slurm\/sge-to-slurm\/\">SGE to Slurm<\/a> reference.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In Slurm, jobs are submitted to partitions &#8211; think of these as job queues for different types of jobs. In your jobscript, you must specify the partition, number of cores and max &#8220;wallclock&#8221; time your job is allowed to run for (up to a permitted maximum time). 
There is now no default wallclock time &#8211; you must specify it. You may also need to specify other flags depending on which partition you submit to. For.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/batch-slurm\/partitions\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":9105,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-9123","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/9123","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=9123"}],"version-history":[{"count":24,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/9123\/revisions"}],"predecessor-version":[{"id":12193,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/9123\/revisions\/12193"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/9105"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=9123"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}