{"id":34,"date":"2020-06-02T17:46:24","date_gmt":"2020-06-02T16:46:24","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf4\/?page_id=34"},"modified":"2024-09-23T11:56:42","modified_gmt":"2024-09-23T10:56:42","slug":"sge-to-slurm","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/batch\/sge-to-slurm\/","title":{"rendered":"SGE to SLURM"},"content":{"rendered":"<p>The use of SLURM on CSF4 represents a significant change for CSF3 users who are used to using the SGE batch system. <\/p>\n<p>While SGE has served us well, SLURM has been widely adopted by many other HPC sites, is under active development and has features and flexibility that we need as we introduce new platforms for the research community at the University.<\/p>\n<p>This page shows the SLURM commands and jobscript options next to their SGE counterparts to help you move from SGE to SLURM.<\/p>\n<h2>Jobscript Special Lines &#8211; SGE (#$) vs SLURM (#SBATCH)<\/h2>\n<p>The use of the SLURM batch system means your CSF3 jobscripts will no longer work on CSF4.<\/p>\n<p>This is because the CSF3 jobscript <em>special lines<\/em> beginning with <code>#$<\/code> will be ignored by SLURM. Instead, you should use lines beginning with <code>#SBATCH<\/code> and will need to change the options you use on those lines.<\/p>\n<div class=\"note\">Note that it is <code>#<strong>S<\/strong>BATCH<\/code> (short for SLURM BATCH) and NOT <code>#<strong>$<\/strong>BATCH<\/code>. This is an easy mistake to make when you begin to modify your SGE jobscripts. Do <strong>not <\/strong>use a $ (dollar) symbol in the SLURM special lines.<\/div>\n<p>It is possible to have both SGE and SLURM lines in your jobscripts &#8211; they will each ignore the other&#8217;s special lines. 
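For example, a jobscript carrying both sets of special lines might look like this (a minimal sketch; the echo command stands in for a real application):

```shell
#!/bin/bash --login
#$ -cwd              # SGE special line - treated as an ordinary comment by SLURM
#$ -pe smp.pe 4      # SGE special line - treated as an ordinary comment by SLURM
#SBATCH -p multicore # SLURM special line - treated as an ordinary comment by SGE
#SBATCH -n 4         # SLURM special line - treated as an ordinary comment by SGE

# Both schedulers execute the ordinary lines below as normal shell commands.
echo "Job starting in $PWD"
```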
However, CSF4 uses different modulefile names and there are some differences in the way multi-core and multi-node jobs are run, so we advise writing new jobscripts for use on CSF4.<\/p>\n<p>Examples of CSF3 jobscripts and their equivalent CSF4 jobscript are given below. One suggestion is to name your CSF4 jobscripts <code><em>jobscript<\/em>.sbatch<\/code> and your CSF3 jobscripts <code><em>jobscript<\/em>.qsub<\/code>, but you can, of course, use any naming scheme you like.<\/p>\n<p>The commands used to submit jobs and check on the queue have also changed. See below for the equivalent commands.<\/p>\n<h2>Command-line tools &#8211; SGE (qsub, &#8230;) vs SLURM (sbatch, &#8230;)<\/h2>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE Commands (CSF3)<\/th>\n<th width=\"50%\">SLURM Commands (CSF4)<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n# Batch job submission\r\nqsub <em>jobscript<\/em>\r\nqsub <em>jobscript<\/em> arg1 arg2 ...\r\nqsub <em>options<\/em> -b y <em>executable arg1 ...<\/em>\r\n\r\n# Job queue status\r\nqstat                # Show your jobs (if any)\r\nqstat -u \"*\"         # Show all jobs\r\nqstat -u <em>username<\/em>\r\n\r\n# Cancel (delete) a job\r\nqdel <em>jobid<\/em>\r\nqdel <em>jobname<\/em>\r\nqdel <em>jobid<\/em> -t <em>taskid<\/em>\r\nqdel \"*\"             # Delete all my jobs\r\n\r\n# Interactive job\r\nqrsh -l short\r\n\r\n# Completed job stats\r\nqacct -j <em>jobid<\/em>\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n# Batch job submission\r\nsbatch <em>jobscript<\/em>\r\nsbatch <em>jobscript<\/em> arg1 arg2 ...\r\nsbatch <em>options<\/em> --wrap=\"<em>executable arg1 ...<\/em>\"\r\n\r\n# Job queue status\r\nsqueue      # An alias for \"squeue --me\"\r\n\\squeue     # Unaliased squeue shows all jobs\r\nsqueue -u <em>username<\/em>\r\n\r\n# Cancel (delete) a job\r\nscancel <em>jobid<\/em>\r\nscancel -n <em>jobname<\/em>\r\nscancel <em>jobid<\/em>_<em>taskid<\/em>\r\nscancel -u $USER       # Delete all my 
jobs\r\n\r\n# Interactive job\r\nsrun --pty bash\r\n\r\n# Completed job stats\r\nsacct -j <em>jobid<\/em>\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Job Output Files (stdout and stderr)<\/h2>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE job output files, not merged by default (CSF3)<\/th>\n<th width=\"50%\">SLURM job output files, merged by default (CSF4)<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n# Individual (non-array) jobs\r\n<em>jobscriptname<\/em>.o<em>JOBID<\/em>\r\n<em>jobscriptname<\/em>.e<em>JOBID<\/em>\r\n\r\n# Array jobs\r\n<em>jobscriptname<\/em>.o<em>JOBID<\/em>.<em>TASKID<\/em>\r\n<em>jobscriptname<\/em>.e<em>JOBID<\/em>.<em>TASKID<\/em>\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n# Individual (non-array) jobs\r\nslurm-<em>JOBID<\/em>.out\r\n\r\n\r\n# Array jobs (see later for more details)\r\nslurm-<em>ARRAYJOBID_TASKID<\/em>.out\r\n\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The SLURM files contain the normal and error output that SGE splits into two files.<\/p>\n<p>The naming and merging of the files can be changed using jobscript options (<a href=\"#moreoptions\">see below<\/a>) but for now, in the basic jobscripts shown next, we&#8217;ll just accept these default names to keep the jobscripts short.<\/p>\n<h2>Jobscripts<\/h2>\n<p>You will need to rewrite your SGE (CSF3) jobscripts. You could name them <code><em>somename<\/em>.slurm<\/code> if you like, to make it obvious that it is a SLURM jobscript.<\/p>\n<h3>Put #SBATCH lines in one block<\/h3>\n<p>Please note: all SLURM <em>special lines<\/em> beginning with <code>#SBATCH<\/code> must come before ordinary lines that run Linux commands or your application. Any <code>#SBATCH<\/code> lines appearing after the first non-<code>#SBATCH<\/code> line will be ignored. 
For example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n\r\n# You may put comment lines before and after &#35;SBATCH lines\r\n#SBATCH -p serial\r\n#SBATCH -n 4\r\n\r\n# Now the first 'ordinary' line. So no more &#35;SBATCH lines allowed after here\r\nexport MY_DATA=~\/scratch\/data\r\nmodule load <em>myapp<\/em>\/<em>1.2.3<\/em>\r\n\r\n# <strong>Any SBATCH lines here will be ignored!<\/strong>\r\n#SBATCH --job-name  new_job_name\r\n\r\n.\/my_app dataset1.dat\r\n<\/pre>\n<h3>Basic Serial (1-core) Jobscript<\/h3>\n<p>Note that in SLURM you must specify one core to be safe &#8211; some jobscripts will need the <code>$SLURM_NTASKS<\/code> environment variable (equivalent of SGE&#8217;s <code>$NSLOTS<\/code> variable) and SLURM only sets it if you explicitly request one core. The need to do this may change in our config in future.<\/p>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE Jobscript (CSF3)<\/th>\n<th width=\"50%\">SLURM Jobscript (CSF4)<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd           # Run in current directory\r\n\r\n# <strong>Default<\/strong> in SGE is to use 1 core\r\n\r\n\r\n\r\n\r\n\r\n\r\n# Modules have a different name format\r\nmodule load apps\/gcc\/appname\/x.y.z\r\n\r\nserialapp.exe in.dat out.dat\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n# <strong>Default<\/strong> in SLURM: run in <em>current dir<\/em>\r\n\r\n# OPTIONAL LINE: default partition is serial\r\n#SBATCH -p serial # (or --partition=serial)\r\n\r\n# OPTIONAL LINE: default is 1 core in serial\r\n#SBATCH -n 1      # (or --ntasks=1) use 1 core\r\n                  # $SLURM_NTASKS will be set.\r\n\r\n# Modules have a different name format\r\nmodule load appname\/x.y.z\r\n\r\nserialapp.exe in.dat out.dat\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Basic Multi-core (single compute node) Parallel Jobscript<\/h3>\n<p>Note that requesting a 1-core multicore job is not possible &#8211; the job 
will be rejected. The minimum number of cores is 2.<\/p>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE Jobscript (CSF3)<\/th>\n<th width=\"50%\">SLURM Jobscript (CSF4)<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd           # Run in current directory\r\n\r\n# Multi-core on a single node (2--32 cores)\r\n#$ -pe smp.pe 4   # Single-node, 4 cores\r\n\r\n\r\n# Modules have a different name format\r\nmodule load apps\/gcc\/appname\/x.y.z\r\n\r\n# If running an OpenMP app, use:\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nopenmpapp.exe in.dat out.dat\r\n\r\n# Or an app may have its own flag. EG:\r\nmulticoreapp.exe -n $NSLOTS in.dat out.dat\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n# <strong>Default<\/strong> in SLURM\r\n\r\n# Multi-core on a single node (2--40 cores)\r\n#SBATCH -p multicore # (or --partition=multicore)\r\n#SBATCH -n 4         # (or --ntasks=4) 4 cores\r\n\r\n# Modules have a different name format\r\nmodule load appname\/x.y.z\r\n\r\n# If running an OpenMP app, use:\r\nexport OMP_NUM_THREADS=$SLURM_NTASKS\r\nopenmpapp.exe in.dat out.dat\r\n\r\n# Or an app may have its own flag. EG:\r\nmulticoreapp.exe -n $SLURM_NTASKS in.dat out.dat\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Basic Multi-node Parallel Jobscript<\/h3>\n<p>Note that at the moment in SLURM you must specify the number of compute nodes to be safe &#8211; this ensures your cores are distributed across whole compute nodes as intended. 
The need to do this may change in our config in future.<\/p>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE Jobscript (CSF3)<\/th>\n<th width=\"50%\">SLURM Jobscript (CSF4)<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd           # Run in current directory\r\n\r\n# Multi-node (all 24 cores in use on each node)\r\n#$ -pe mpi-24-ib.pe 48   # 2 x 24-core nodes\r\n\r\n\r\n\r\n\r\n\r\n# Modules have a different name format\r\nmodule load apps\/gcc\/appname\/x.y.z\r\n\r\n# Use $NSLOTS to say how many cores to use\r\nmpirun -n $NSLOTS multinodeapp.exe in.dat out.dat\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n# <strong>Default<\/strong> in SLURM\r\n\r\n# Multi-node (all 40 cores in use on each node)\r\n#SBATCH -p multinode # (or --partition=multinode)\r\n# The number of nodes is now <em>mandatory<\/em>!\r\n#SBATCH -N 2         # (or --nodes=2)  2x40 cores\r\n# Can <em>optionally<\/em> also give total number of cores\r\n#SBATCH -n 80        # (or --ntasks=80)  80 cores\r\n\r\n# Modules have a different name format\r\nmodule load appname\/x.y.z\r\n\r\n# SLURM knows how many cores to use for mpirun\r\nmpirun multinodeapp.exe in.dat out.dat\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>For an example of a multi-node <em>mixed-mode<\/em> (OpenMP+MPI) jobscript, please see the <a href=\"\/csf4\/batch\/parallel-jobs\/#Multinode_parallel_large_mixed-mode_MPIOpenMP\">parallel jobs page<\/a>.<\/p>\n<h3>Basic Job Array Jobscript<\/h3>\n<p>Note that Job Arrays in SLURM have some subtle differences in the way the unique JOBID is handled. 
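As a sketch of that difference, here are the relevant variables with illustrative values assigned by hand (inside a real array job SLURM sets them for you, as documented in the environment variable table later on this page):

```shell
# Illustrative values assigned by hand for this sketch - in a real
# array job SLURM sets these variables automatically.
SLURM_ARRAY_JOB_ID=20173   # the same for every task in the array
SLURM_ARRAY_TASK_ID=3      # this task's index (1, 2, 3, ...)
SLURM_JOB_ID=20175         # unique to each task, unlike SGE's $JOB_ID

echo "task $SLURM_ARRAY_TASK_ID of array job $SLURM_ARRAY_JOB_ID"
```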
Also, if you are renaming the default SLURM output (.out) file then you need to use different wildcards for job arrays.<\/p>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE Jobscript (CSF3)<\/th>\n<th width=\"50%\">SLURM Jobscript (CSF4)<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd           # Run in current directory\r\n\r\n# Run 100 tasks numbered 1,2,...,100\r\n# (<strong>cannot <\/strong>start at zero!!)\r\n# Max permitted array size: 75000\r\n#$ -t 1-100\r\n\r\n# <strong>Default<\/strong> in SGE is to use 1 core\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n# Modules have a different name format\r\nmodule load apps\/gcc\/appname\/x.y.z\r\n\r\n# EG: input files are named data.1, data.2, ...\r\n# and output files result.1, result.2, ...\r\napp.exe -in data.$SGE_TASK_ID \\\r\n        -out result.$SGE_TASK_ID\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n# <strong>Default<\/strong> in SLURM\r\n\r\n# Run 100 tasks numbered 1,2,...,100\r\n# (<strong>can <\/strong>start at zero, e.g.: <strong>0-99<\/strong>)\r\n# Max permitted array size: 10000\r\n#SBATCH -a 1-100    # (or --array=1-100)\r\n\r\n# Note: This is the number of cores to use\r\n# for each job-array task, not the number of \r\n# tasks in the job array (see above).\r\n#SBATCH -n 1      # (or --ntasks=1) use 1 core\r\n\r\n# OPTIONAL LINE: default partition is serial\r\n#SBATCH -p serial # (or --partition=serial)\r\n\r\n# Modules have a different name format\r\nmodule load appname\/x.y.z\r\n\r\n# EG: input files are named data.1, data.2, ...\r\n# and output files result.1, result.2, ...\r\napp.exe -in data.$SLURM_ARRAY_TASK_ID \\\r\n        -out result.$SLURM_ARRAY_TASK_ID\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><a name=\"moreoptions\"><\/a><\/p>\n<h2>More Jobscript special lines &#8211; SGE vs SLURM<\/h2>\n<p>Here are some more examples of jobscript special lines for achieving common tasks in SGE and SLURM. 
<\/p>\n<h3>Renaming a job and the output .o and .e files<\/h3>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE Jobscript<\/th>\n<th width=\"50%\">SLURM Jobscript<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n...\r\n# Naming the job is optional.\r\n# <strong>Default<\/strong> is <em>name of jobscript<\/em>\r\n# <strong>DOES<\/strong> rename .o and .e output files.\r\n#$ -N jobname\r\n\r\n# Naming the output files is optional.\r\n# <strong>Default<\/strong> is <strong>separate<\/strong> .o and .e files:\r\n# <strong><em>jobname<\/em>.o<em>JOBID<\/em><\/strong> and <strong><em>jobname<\/em>.e<em>JOBID<\/em><\/strong>\r\n# Use of '-N jobname' <strong>DOES<\/strong> affect those defaults\r\n#$ -o myjob.out\r\n#$ -e myjob.err\r\n\r\n# To join .o and .e into a single file\r\n# similar to Slurm's default behaviour:\r\n#$ -j y\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n...\r\n# Naming the job is optional.\r\n# <strong>Default<\/strong> is <em>name of jobscript<\/em>\r\n# Does <strong>NOT<\/strong> rename .out file.\r\n#SBATCH -J jobname\r\n\r\n# Naming the output files is optional.\r\n# <strong>Default<\/strong> is a <strong>single file<\/strong> for .o and .e:\r\n# <strong>slurm-<em>JOBID<\/em>.out<\/strong>\r\n# Use of '-J jobname' does <strong>NOT<\/strong> affect the default\r\n#SBATCH -o myjob.out\r\n#SBATCH -e myjob.err\r\n\r\n# Use wildcards to recreate the SGE names\r\n#SBATCH -o %x.o%j      # %x = SLURM_JOB_NAME\r\n#SBATCH -e %x.e%j      # %j = SLURM_JOB_ID\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The <code>$SLURM_JOB_NAME<\/code> variable will tell you the name of your jobscript, unless the <code>-J <em>jobname<\/em><\/code> option is used to rename your job. 
The environment variable is then set to the value of <code><em>jobname<\/em><\/code>.<\/p>\n<p>If you wanted to use <code>$SLURM_JOB_NAME<\/code> to always give you the name of the jobscript from within your job, you would have to remove the <code>-J<\/code> flag. However, the following command run inside your jobscript will give you the name of the jobscript regardless of whether you use the <code>-J<\/code> flag or not:<\/p>\n<pre>\r\nscontrol show jobid $SLURM_JOB_ID | grep Command= | awk -F\/ '{print $NF}'\r\n<\/pre>\n<h3>Renaming an array job output .o and .e files<\/h3>\n<p>An array job uses <code>slurm-<em>ARRAYJOBID_TASKID<\/em>.out<\/code> as the default output file for each task in the array job. This can be renamed but you need to use the <code>%A<\/code> and <code>%a<\/code> wildcards (not <code>%j<\/code>).<\/p>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE Jobscript<\/th>\n<th width=\"50%\">SLURM Jobscript<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n...\r\n# An array job (cannot start at 0)\r\n#$ -t 1-1000\r\n\r\n# Naming the job is optional.\r\n# <strong>Default<\/strong> is <em>name of jobscript<\/em>\r\n#$ -N jobname\r\n\r\n# Naming the output files is optional.\r\n# <strong>Default<\/strong> is separate .o and .e files:\r\n# <strong><em>jobname<\/em>.o<em>JOBID<\/em><\/strong> and <strong><em>jobname<\/em>.e<em>JOBID<\/em><\/strong>\r\n# Use of '-N jobname' <strong>DOES<\/strong> affect those defaults\r\n\r\n# To join .o and .e into a single file\r\n# similar to Slurm's default behaviour:\r\n#$ -j y\r\n\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n...\r\n# An array job (CAN start at 0)\r\n#SBATCH -a 0-999     # (or --array=0-999)\r\n\r\n# Naming the job is optional.\r\n# <strong>Default<\/strong> is <em>name of jobscript<\/em>\r\n#SBATCH -J jobname\r\n\r\n# Naming the output files is optional.\r\n# <strong>Default<\/strong> is a <strong>single file<\/strong> for .o and .e:\r\n# 
<strong>slurm-<em>ARRAYJOBID_TASKID<\/em>.out<\/strong>\r\n# Use of '-J jobname' does <strong>NOT<\/strong> affect the default\r\n\r\n# Use wildcards to recreate the SGE names\r\n#SBATCH -o %x.o%A.%a   # %x = SLURM_JOB_NAME\r\n#SBATCH -e %x.e%A.%a   # %A = SLURM_ARRAY_JOB_ID\r\n                       # %a = SLURM_ARRAY_TASK_ID\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Emailing from a job<\/h3>\n<p>SLURM can email you when your job begins, ends or fails.<\/p>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE Jobscript<\/th>\n<th width=\"50%\">SLURM Jobscript<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n...\r\n# Mail events: <strong>b<\/strong>egin, <strong>e<\/strong>nd, <strong>a<\/strong>bort\r\n#$ -m bea\r\n#$ -M &#101;&#x6d;&#97;&#x69;&#108;&#x61;&#100;&#x64;r&#x40;m&#x61;n&#x63;h&#101;&#x73;&#116;&#x65;&#114;&#x2e;&#97;&#x63;&#46;&#x75;k\t\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n#!\/bin\/bash --login\r\n...\r\n# Mail events: NONE, BEGIN, END, FAIL, ALL\r\n#SBATCH --mail-type=ALL\r\n#SBATCH &#x2d;&#x2d;&#109;&#97;i&#x6c;&#x2d;&#x75;&#115;&#101;r&#x3d;&#x65;&#x6d;&#97;&#105;l&#x61;&#x64;&#x64;&#114;&#64;m&#x61;&#x6e;&#x63;&#104;es&#x74;&#x65;&#x72;&#46;ac&#x2e;&#x75;&#107;\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Note that in SLURM, array jobs only send one email, not an email per job-array task as happens in SGE. If you want an email from every job-array task, add <code>ARRAY_TASKS<\/code> to the <code>--mail-type<\/code> flag:<\/p>\n<pre>\r\n#SBATCH --mail-type=ALL,ARRAY_TASKS\r\n                            #\r\n                            # DO NOT USE IF YOUR ARRAY JOB CONTAINS MORE THAN\r\n                            # 20 TASKS!! 
THE UoM MAIL ROUTERS WILL BLOCK THE CSF!\r\n<\/pre>\n<p>But please be aware that you will receive A LOT of email if you run a large job array with this flag enabled.<\/p>\n<h2>Job Environment Variables<\/h2>\n<p>A number of environment variables are available for use in your jobscripts &#8211; these are sometimes useful when creating your own log files, for informing applications how many cores they are allowed to use (we&#8217;ve already seen <code>$SLURM_NTASKS<\/code> in the examples above), and for reading sequentially numbered data files in job arrays.<\/p>\n<table class=\"jobscriptcompare\">\n<thead>\n<tr>\n<th width=\"50%\">SGE Environment Variables<\/th>\n<th width=\"50%\">SLURM Environment Variables<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<pre>\r\n$NSLOTS             # Num cores reserved\r\n\r\n$JOB_ID             # Unique jobid number\r\n$JOB_NAME           # Name of job\r\n\r\n# For array jobs\r\n$JOB_ID             # Same for all tasks\r\n                    # (e.g., <em>20173<\/em>)\r\n\r\n\r\n$SGE_TASK_ID        # Job array task number\r\n                    # (e.g., 1,2,3,...)\r\n$SGE_TASK_FIRST     # First task id\r\n$SGE_TASK_LAST      # Last task id\r\n$SGE_TASK_STEPSIZE  # Taskid increment: default 1\r\n\r\n\r\n# You will be unlikely to use these:\r\n$PE_HOSTFILE        # Multi-node job host list\r\n$NHOSTS             # Number of nodes in use\r\n$SGE_O_WORKDIR      # Submit directory\r\n<\/pre>\n<\/td>\n<td>\n<pre>\r\n$SLURM_NTASKS         # Num cores from -n flag\r\n$SLURM_CPUS_PER_TASK  # Num cores from -c flag\r\n$SLURM_JOB_ID         # Unique job id number\r\n$SLURM_JOB_NAME       # Name of job\r\n\r\n# For array jobs\r\n$SLURM_JOB_ID         # <strong>DIFFERENT FOR ALL TASKS<\/strong> \r\n                      # (e.g., <em>20173<\/em>,<em>20174<\/em>,<em>20175<\/em>, ...)\r\n$SLURM_ARRAY_JOB_ID   # <strong>SAME <\/strong>for all tasks\r\n                      # (e.g., <em>20173<\/em>)\r\n$SLURM_ARRAY_TASK_ID  # Job array task number\r\n     
                 # (e.g., 1,2,3,...)\r\n$SLURM_ARRAY_TASK_MIN # First task id\r\n$SLURM_ARRAY_TASK_MAX # Last task id\r\n$SLURM_ARRAY_TASK_STEP  # Increment: default 1\r\n$SLURM_ARRAY_TASK_COUNT # Number of tasks\r\n\r\n# You will be unlikely to use these:\r\n$SLURM_JOB_NODELIST   # Multi-node job host list\r\n$SLURM_JOB_NUM_NODES  # Number of nodes in use\r\n$SLURM_SUBMIT_DIR     # Submit directory\r\n<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Many more environment variables are available for use in your jobscript. The <a href=\"https:\/\/slurm.schedmd.com\/sbatch.html\">Slurm sbatch manual<\/a> (also available on the CSF login node by running <code>man sbatch<\/code>) documents <em>Input<\/em> and <em>Output<\/em> environment variables. The <em>input<\/em> variables can be set by you <em>before<\/em> submitting a job to specify job options (although we recommend <em>not<\/em> doing this &#8211; it is better to put all options in your jobscript so that you have a permanent record of how you ran the job). The <em>output<\/em> variables can be used inside your jobscript to get information about the job (e.g., number of cores, job name and so on &#8211; we have documented several of these above).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The use of SLURM on CSF4 represents a significant change for CSF3 users who are used to using the SGE batch system. While SGE has served us well, SLURM has been widely adopted by many other HPC sites, is under active development and has features and flexibility that we need as we introduce new platforms for the research community at the University. This page shows the SLURM commands and jobscript options next to their SGE.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/batch\/sge-to-slurm\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":31,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-34","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/34","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/comments?post=34"}],"version-history":[{"count":21,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/34\/revisions"}],"predecessor-version":[{"id":1340,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/34\/revisions\/1340"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/31"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/media?parent=34"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}