Parallel Jobs
Current Configuration and Parallel Environments
For jobs that require two or more CPU cores, select the appropriate SGE parallel environment from the tables below.
Please also consult the software page specific to the code or application you are running for advice on the most suitable PE.
A parallel job script takes the form:
```bash
#!/bin/bash --login
#$ -cwd                  # Job will run in the current directory (where you ran qsub)
#$ -pe pename numcores   # Choose a PE name from the tables below and a number of cores

# Load any required modulefiles
module load apps/some/example/1.2.3

# Now the commands to be run in the job. You MUST tell your app how many cores to use!
# There are usually three ways to do this. Note: $NSLOTS is automatically set to the
# number of cores given above.
```
- OpenMP applications (multicore but all in a single compute node):
export OMP_NUM_THREADS=$NSLOTS
the_openmp_app
- MPI applications (small jobs on a single node, or larger jobs across multiple compute nodes):
mpirun -n $NSLOTS the_mpi_app
- Other multicore apps that use their own command-line flags (you must check the app’s documentation for how to do this correctly). For example:
the_bioinfo_app --numthreads $NSLOTS # This is an example - check your app's docs!
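Putting these pieces together, a complete MPI jobscript might look like the following sketch. The PE name, core count, modulefile and application name are examples only – substitute your own:

```bash
#!/bin/bash --login
#$ -cwd                 # Run the job from the current directory
#$ -pe amd.pe 16        # Example PE name and core count - see the tables below

# Load the modulefile for your application (placeholder name)
module load apps/some/example/1.2.3

# $NSLOTS is automatically set to the number of cores requested above (16 here)
mpirun -n $NSLOTS the_mpi_app
```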
The available parallel environments are described below. Use the name of a parallel environment in place of pename in your jobscript.
AMD Parallel Environments
New from September 2024. We are installing new AMD compute nodes in the CSF, and these will eventually be the majority of nodes.
Single Node Multi-core (SMP) and MPI Jobs
PE name: amd.pe (NOTE: it is NOT smp.pe – that is for Intel CPUs – see below)
| Optional Resources | Max cores per job, RAM per core | Additional usage guidance |
|---|---|---|
| -l short | Max 28 cores, 8GB/core (Genoa nodes), 1 hour runtime | Usually has shorter queue-wait times. Only 2 nodes available. This option is for test jobs and interactive use only – DO NOT use it for production runs as it is unfair on those who need it for testing/interactive work. |
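For example, a short test job in the AMD short area might look like the following sketch (the modulefile and application name are placeholders – check the relevant software page):

```bash
#!/bin/bash --login
#$ -cwd
#$ -pe amd.pe 8      # Up to 28 cores allowed with -l short
#$ -l short          # 1 hour runtime limit - test/interactive jobs only

# Placeholder modulefile - load the one for your application
module load apps/some/example/1.2.3

export OMP_NUM_THREADS=$NSLOTS   # Example: an OpenMP test run
the_openmp_app
```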
Intel Parallel Environments
Single Node Multi-core (SMP) and small MPI Jobs
PE name: smp.pe
| Optional Resources | Max cores per job, RAM per core | Additional usage guidance |
|---|---|---|
| -l mem512, -l mem1500, -l mem2000, -l mem4000 (restricted) | Please see the High Memory Jobs page. | High memory nodes. Jobs must genuinely need extra memory. |
| -l short | Max 24 cores, 4GB/core (Haswell nodes), 1 hour runtime | Usually has shorter queue-wait times. Only 2 nodes available. This option is for test jobs and interactive use only – DO NOT use it for production runs as it is unfair on those who need it for testing/interactive work. |
| Most users will not need the following flags. They restrict the pool of nodes available to your job, which will result in longer queue-wait times. | | |
| -l broadwell | Max 28 cores, 5GB/core | Use only Broadwell cores. |
| -l skylake | Max 32 cores, 6GB/core | Use only Skylake cores. |
| -l avx | Limits depend on which node type the system chooses and whether you include any memory options. | System will choose Broadwell or Skylake CPUs. |
| -l avx2 | Limits depend on which node type the system chooses and whether you include any memory options. | System will choose Broadwell or Skylake CPUs. |
| -l avx512 | Max 32 cores, 6GB/core | Use only Skylake CPUs. |
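For example, an OpenMP job restricted to Skylake nodes might look like the following sketch (again, the modulefile and application name are placeholders):

```bash
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 16     # Single-node multicore job on Intel nodes
#$ -l skylake        # Optional: restrict to Skylake nodes (max 32 cores, 6GB/core)

# Placeholder modulefile - load the one for your application
module load apps/some/example/1.2.3

export OMP_NUM_THREADS=$NSLOTS   # Tell the OpenMP app how many cores to use
the_openmp_app
```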
Multi-node large MPI Jobs
PE name: mpi-24-ib.pe
This PE HAS NOW BEEN RETIRED!! Use the AMD PE above for larger parallel jobs (up to 168 cores).
For multi-node jobs larger than 168 cores, please see the HPC Pool.
| Optional Resources | Max cores per job, RAM per core | Additional usage guidance |
|---|---|---|
| NONE | NONE | NONE |