Serial Jobs

Serial batch job submission

A serial job uses one CPU core. Unless your jobscript says otherwise, a job is a serial job by default. To run a serial job in the batch system, do not specify a Parallel Environment (PE) or a queue (as you may have done on other systems).

We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run; it also minimises the potential for conflicts between different pieces of software and libraries. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit those settings.

You should run jobs from within your scratch area:

cd ~/scratch/my_code/run01       # for example

An example single core job script:

#!/bin/bash --login
#$ -cwd              # Job will run in the current directory (where you ran qsub)
                     # Nothing specified to request more cores = default 1 core

# Load any required modulefiles
module load apps/some/example/1.2.3

# Now the commands to be run by the job
theapp.exe

To submit the job to the batch system:

qsub jobscript

Where jobscript is replaced with the name of your submission script.

Command-line one-liner…

The above serial job can also be submitted entirely from the command line on the login node (a quick way of running a serial job in batch). Note that you will need to load any modulefiles before submitting the job. The -V flag tells the batch system to take a copy of any settings made by the modulefiles so that they are visible to the job when it eventually runs. You can log out of the CSF before the job runs; the job will still have a copy of the modulefile settings, allowing it to run correctly.

module load apps/some/example/1.2.3
qsub -b y -V -cwd theapp.exe optional-args

where optional-args are any command-line flags that you want to pass to the theapp.exe program (or your own program). The -b y flag indicates that the filename submitted (theapp.exe) is a binary (executable) program, not a jobscript.
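For example (the program arguments and job name used here are purely illustrative), the one-liner can also carry other standard qsub flags, such as -N to give the job a recognisable name:

module load apps/some/example/1.2.3
qsub -b y -V -cwd -N myrun01 theapp.exe -i input.dat -n 100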

Serial Hardware

Serial jobs can run on various Intel compute nodes. The type of Intel hardware used (amount of memory available to the job, CPU architecture, runtime limit) can be controlled with extra jobscript flags, as shown below. In general you do not need to specify any of these flags in your jobscript unless you know your job needs, for example, more memory or a specific type of CPU.

Serial (1-core) jobs

  • For 1-core jobs (including serial job arrays).
  • 7 day runtime limit.
  • 4GB or 5GB per core by default.
  • Jobs will currently run on Haswell nodes by default. (11.08.2021: Sandybridge nodes no longer available; 01.11.2022: Ivybridge nodes no longer available.)
  • Note that Broadwell and Skylake CPUs cannot be used to run any serial jobs, only parallel jobs.
  • If you need more than 5GB of memory you can request a higher-memory node; see the optional resources table below and the example jobscript that follows it.
  • Note: Choosing a node type is not recommended as it can mean a much longer wait in the queue.
Optional resources       Node type                                          Additional usage guidance
-l mem256                16GB/core (Haswell nodes only)                     Jobs must genuinely need extra memory.
-l mem512                32GB/core (system chooses Ivybridge or Haswell)    Jobs must genuinely need extra memory.
-l mem512 -l ivybridge   32GB/core (Ivybridge nodes)                        Jobs must genuinely need extra memory.
-l mem512 -l haswell     32GB/core (Haswell nodes)                          Jobs must genuinely need extra memory.
-l haswell               4GB/core                                           Use only Haswell cores.
-l short                 4GB/core, 1 hour runtime                           Currently just two Haswell nodes of 24 cores.
                                                                            For test jobs and interactive use ONLY.
                                                                            Do not submit production work here.
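For example, if your job genuinely needs up to 32GB of memory, a serial jobscript requesting a high-memory node with the -l mem512 flag from the table above might look like this (the modulefile and application names are placeholders):

#!/bin/bash --login
#$ -cwd              # Job will run in the current directory (where you ran qsub)
#$ -l mem512         # Request a 32GB/core high-memory node (only if genuinely needed)

# Load any required modulefiles
module load apps/some/example/1.2.3

# Now the command to be run by the job
theapp.exe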

Running lots of similar serial jobs

If you wish to run a large number of largely identical jobs (for example, running the same program many times with different arguments or parameters, or processing a thousand different input files), please use SGE Job Arrays. These cut down on the amount of job setup you need to do and are much more efficient for the batch system than lots of individual jobs.
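As a minimal sketch (see the SGE Job Arrays documentation for full details), a job array that runs the same program on 1000 numbered input files could look like this; the modulefile, program and input file names are placeholders:

#!/bin/bash --login
#$ -cwd              # Job will run in the current directory (where you ran qsub)
#$ -t 1-1000         # A job array of 1000 tasks, numbered 1 to 1000

# Load any required modulefiles
module load apps/some/example/1.2.3

# $SGE_TASK_ID is set to the task number (1, 2, ..., 1000) in each task
theapp.exe input.${SGE_TASK_ID}.dat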

Serial Java and Matlab Jobs

Some applications need to be told explicitly to use only a single core otherwise they will try to grab all of the cores available in a compute node.

When you submit a serial jobscript to the queue the batch system will reserve a single core for you (and only run your jobscript when a core becomes available). But you must also ensure your application only uses a single core when it runs.

If you know your code (or application) is serial then you have nothing more to do; it will correctly use only one core. However, applications such as Java and MATLAB will try to grab all of the cores in a compute node unless you explicitly tell them to use only one core. If that happens, you may end up trampling on other users' jobs that are also running on the same compute node.

Please read carefully any application-specific documentation to check your code will use only a single core if run using a serial jobscript. In particular please read our Java notes and MATLAB notes. Jobs found running on more cores than have been requested in the jobscript will be killed without notice.
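For example (the script and jar names are placeholders, and you should check our Java and MATLAB notes for the settings recommended on the CSF), the following flags ask each application to restrict itself to a single computational thread:

# MATLAB: run with a single computational thread
matlab -nodisplay -singleCompThread -r "my_script; exit"

# Java: limit the JVM to a single processor (supported by recent JVMs)
java -XX:ActiveProcessorCount=1 -jar myprogram.jar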
