Research Infrastructure

Batch Script and qsub Options

Specifying Job Options

There are many SGE options which can be specified in a qsub jobscript, for example using

#$ -cwd
#$ -V

or on the qsub command line, for example using

qsub -cwd -V ... filename [optional args] 

Note: if filename is an executable (e.g., myapp.exe) rather than a jobscript, you must use the -b y flag (please see below). However, we recommend using a jobscript.
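For reference, a minimal jobscript combining these two options might look like the following sketch (myapp.exe is a hypothetical executable; replace it with your own command):

```shell
#!/bin/bash
#$ -cwd            # run the job from the directory in which qsub was issued
#$ -V              # pass the current login-node environment to the job
./myapp.exe        # hypothetical executable - replace with your own command
```

Save this as, say, myjobscript and submit it with qsub myjobscript.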

All of the possible flags are described in the manual page (man qsub). The most commonly used options are briefly described below. Note that the order in which you specify options, either in the jobscript or on the command line, does not matter.

The SGE commands should be available automatically to all users of the system when you log in. However, if this isn’t the case, or you have run module purge to clear your environment, then when you try to run qsub or qstat you may receive an error:

bash: qsub: command not found

To fix this you need to load the batch system modulefile on the login node using:

module load services/gridscheduler

You will then be able to submit jobs, monitor your jobs and so on.

SGE Switches

-cwd
Execute the job from the current (working) directory — the directory from which the qsub command is issued. If this option is not present, the job will be executed in the user’s home directory. The .oNNNNN and .eNNNNN stdout and stderr files created by SGE for each job will also be written to the directory specified by this flag (or the home directory if it is not present) unless the -o and -e flags are used to override where these files are written.
-V
(Uppercase V). This ensures that any environment settings you’ve made on the login node are inherited/passed to the compute node, including the settings applied by loading software modulefiles. A copy of your current environment is taken when you run the qsub command (i.e., immediately, not when the job finally runs). Hence you can change your environment after running qsub, perhaps to set up for another job, or even log out, and your job will still see the environment that was in place when you originally ran qsub.
-j y
Merge the standard error stream into the standard output stream, i.e., job output and error messages are sent to the same .o file, rather than different files (usually .o and .e files).
-pe pename ncores
Specify the SGE parallel environment to which a job is sent, and the number of cores required — see the section on running parallel jobs.
-l resource
Specify a resource to modify where in the system the job is placed. For example, -l highmem selects a high-memory node. You may specify more than one resource flag, for example -l sandybridge -l 's_rt=00:10:00', although not all combinations are supported. Resource flags exist for CPU architectures, job time limits, memory requirements, GPUs and interactivity, so you should check those pages for details, together with the parallel environment documentation, to determine whether a resource and a PE are compatible (not all combinations are permitted).
-S /bin/bash
(Uppercase S). Indicate your jobscript is written using /bin/bash shell syntax. See the introductory SGE information.
-N name
(Uppercase N). Sets the job name, e.g. -N my_job_name to set job name to my_job_name. The .o and .e job output files will be named using this value — for example my_job_name.o12345 and my_job_name.e12345. If you don’t use the -N option then the job output files will use the name of the jobscript (or executable) specified on the qsub command-line. Do not use spaces in the name.
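To illustrate how the -N value feeds into the output filenames, the following sketch reconstructs the names in ordinary bash (the JOB_ID value 12345 is invented for illustration; the real id is assigned by SGE when you submit):

```shell
#!/bin/bash
# Illustrative only: mimic how SGE builds the .o/.e filenames from the -N value
JOB_NAME=my_job_name   # the value given to -N
JOB_ID=12345           # hypothetical job id assigned by SGE at submission
echo "${JOB_NAME}.o${JOB_ID}"   # my_job_name.o12345
echo "${JOB_NAME}.e${JOB_ID}"   # my_job_name.e12345
```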
-o /path/to/dir
-e /path/to/dir
-o /path/to/dir/stdoutfile
-e /path/to/dir/stderrfile
Use either the directory form or the filename form. If a directory name is given, it specifies the path to a directory where the usual standard output stream (stdout) and standard error stream (stderr) files (JobName.oNNNNN and JobName.eNNNNN respectively) will be written. The directories must already exist before the job runs – the batch system will not create them for you. If filenames are given, they specify the files to which stdout and stderr output will be written. No JobID number will be appended – your supplied filenames will be used as-is. If these flags are not used the standard output and error stream files will be written in the directory in which the job runs (see -cwd).
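Since the batch system will not create the directory for you, create it on the login node before submitting. A sketch, where the directory name logs and the script name myjobscript are hypothetical:

```shell
#!/bin/bash
# Create the output directory first - the batch system will not create it
mkdir -p logs
# Hypothetical submission line (run on the login node):
#   qsub -o logs/ -e logs/ myjobscript
test -d logs && echo "logs directory ready"
```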
-hold_jid jobid
Specifies that this job is conditional upon completion of a previous job or jobs, e.g. -hold_jid jobID to submit a job which will not start until jobID has completed. jobID can be a job number (e.g., 89213) or a job name (if the earlier job was named using the -N flag). Multiple jobIDs can be specified as a comma-separated list, in which case the current job will not run until all specified jobIDs have finished.
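For example, a two-stage pipeline can be chained by naming the first job with -N and holding the second on it (the job names and jobscript filenames here are hypothetical):

```shell
# Submit stage 1 with a known name, then make stage 2 wait for it to finish
qsub -N stage1 stage1_jobscript
qsub -N stage2 -hold_jid stage1 stage2_jobscript
```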
-m bea
Causes an email to be sent when the job begins, when it ends and/or if it is aborted. You can specify any or all of the bea letters. For example, most users only want to know when a job ends or aborts so use -m ea. The email will be sent to your University email account.

January 2016: Please note that on the new login nodes it is necessary to put your email address in your jobscript or on the command-line submission (see below for how), as it does not automatically detect your University email at the moment. We are looking into this.

-M address
(Uppercase M). Specify an email address to which -m status emails will be sent. By default your University email account will receive the email unless you use this option.

-b y
For use on the qsub command-line only. Indicates that the filename given on the qsub command line is an executable (binary) file, not a jobscript. This allows you to specify the executable directly on the command-line rather than in a job script. By default the qsub command assumes the filename refers to a jobscript. For example, the following command line and jobscript (submitted with qsub myjobscript) are equivalent:

qsub -b y -cwd -V -l short /bin/hostname


#$ -cwd
#$ -V
#$ -l short
/bin/hostname

It is up to the user which method they prefer. However we recommend writing a jobscript so that you can see how the job was submitted if referring back to an old job (perhaps submitted months ago) rather than trying to remember a command-line. It also allows the sysadmins to identify more easily any problems with jobs.

See the individual pages (in the menu on the left side of this page) for the PE names and resources available on this system.

SGE Environment Variables

The following environment variables are available for use in your jobscript when the job runs. They can be used to create unique names for output files, for example, by including the job id or name in the output filename.

$NSLOTS
The number of cores requested using the -pe flag, or 1 if running a serial job (no -pe option specified). Use this variable if your application requires the number of cores to use on its command-line, rather than repeating the number in two places. This makes running jobs with different numbers of cores easier. For example:

#$ -pe pename 4      # replace pename with a PE available on this system
myapp -cores $NSLOTS -input sample.dat -output results.dat
  # $NSLOTS will be automatically replaced with 4 in this example

You could also use this variable in the name of an output file if doing several runs with a different number of cores when timing your code. For example

#$ -pe pename 4      # replace pename with a PE available on this system
myapp -cores $NSLOTS -input sample.dat -output results.${NSLOTS}cores.dat
   # The output file will be named results.4cores.dat

$JOB_ID
The unique job id number assigned to the job at runtime by the batch system. You can use this to generate unique filenames that won’t be overwritten by other jobs. For example:

#$ -cwd
#$ -V
myapp -input sample.dat -output results.$JOB_ID.dat
   # Output file will be named results.37823.dat where 37823 is my unique jobid.

$JOB_NAME
The value of the -N flag if present, or the name of the jobscript if that flag is not used. Note that a unique jobid is always generated even if you use the -N flag. For example:

#$ -cwd
#$ -N phase1
myapp -input sample.dat -output results.$JOB_NAME.$JOB_ID.dat
  # Names the output file results.phase1.38795.dat (in this case)

$SGE_TASK_ID
See the Job Arrays documentation for environment variables related to each task.

$PE_HOSTFILE
You will not normally need to use this variable in your jobscripts. However, some applications documented on the CSF software page process the names of the nodes on which your job will run into their own format. This variable gives the name of a file containing the names of the nodes on which your job has been scheduled to run. Do not, however, change the value of this variable yourself.
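The machine file this variable points to ($PE_HOSTFILE in SGE) typically contains one line per node of the form hostname slots queue processor-range. The following sketch fabricates such a file to show how, for example, the hostnames alone can be extracted; the node names and the fabricated contents are illustrative only — in a real job the file is created by SGE:

```shell
#!/bin/bash
# Illustrative only: fabricate a hostfile in the usual SGE $PE_HOSTFILE format.
# In a real job, $PE_HOSTFILE is set by SGE - do not change it yourself.
HOSTFILE=$(mktemp)
printf 'node01 8 short.q@node01 UNDEFINED\n' >  "$HOSTFILE"
printf 'node02 8 short.q@node02 UNDEFINED\n' >> "$HOSTFILE"
# Extract just the hostnames (first column), as some launchers expect
awk '{print $1}' "$HOSTFILE"    # prints node01 then node02
rm -f "$HOSTFILE"
```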

Last modified on July 28, 2017 at 9:08 am by George Leaver