HPC Pool (Slurm FAQ)
Where is the HPC Pool?
As part of the CSF upgrade work, the HPC Pool and its associated hardware have been migrated from SGE to Slurm. The adoption of Slurm on CSF3 represents a significant change for CSF users who are accustomed to the SGE batch system. This page outlines how to access the HPC Pool via the upgraded CSF3 Slurm cluster.
Who Can Access the HPC Pool via Slurm?
As with the HPC Pool in the previous SGE cluster, access is not granted by default.
Only HPC projects created in or after April 2025, and HPC Pool projects that ran batch jobs between 1st August 2024 and April 2025, have been migrated to the upgraded CSF3 Slurm environment.
A complete list of project codes that have access to the HPC Pool following the maintenance can be found below.
If your project is not listed and you still require access to the HPC Pool, you can request re-enablement via our help form: Requesting Help.
Can I still access the HPC Pool in the SGE Environment?
No.
IMPORTANT: All jobs running on the HPC Pool in the CSF3 SGE cluster were terminated at 09:00 AM on Wednesday, 23rd April 2025.
After this time, users can no longer access the HPC Pool in the CSF3 SGE cluster. The HPC Pool is now available only through the upgraded CSF3 Slurm environment.
How do I Run Jobs in the HPC Pool Using Slurm?
Please see the HPC Pool Jobs page for example Slurm jobscripts.
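As a rough sketch only, a minimal HPC Pool jobscript might look like the one below. The partition name hpcpool, the module name and the resource values are illustrative assumptions, not confirmed CSF3 settings; please check the HPC Pool Jobs page for the current partition name, limits and recommended launch method.

#!/bin/bash --login
#SBATCH -p hpcpool              # Assumed partition name for the HPC Pool -- confirm on the HPC Pool Jobs page
#SBATCH -A hpc-support          # Replace with your own HPC Pool project code (see the list below)
#SBATCH --nodes=2               # Illustrative: number of compute nodes
#SBATCH --ntasks-per-node=32    # Illustrative: MPI ranks per node
#SBATCH --time=24:00:00         # Illustrative: wallclock limit (hh:mm:ss)

# Load the software your job needs (module name is an example only)
module load apps/some-mpi-app

# Launch the MPI program across the allocated cores
mpirun ./my_mpi_program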
Which HPC Pool projects have been migrated to the Slurm environment?
You should use one of these codes in your jobscript with the -A flag (see the example above); a command-line submission example is shown after the list.
hpc-am-gypsum hpc-am-vdwstructs hpc-ar-m3cfd hpc-ar-uhi hpc-as-thermofluids hpc-cp-memb hpc-ds-dmg hpc-ds-owcm hpc-fs-occp hpc-jh-futuredams hpc-jh-wrg hpc-jk-nmcc hpc-kl-psmc hpc-mcs-weld hpc-ml-acmpr hpc-nc-mrica hpc-nk-fortress hpc-nk-mcdr hpc-nk-smi2 hpc-nk-surfchemad hpc-nk-tshz hpc-pc-goh2o hpc-pc-npm hpc-po-enveng hpc-rb-piezo hpc-rb-topo hpc-rc-atp hpc-support hpc-sz-msss hpc-vf-tmdm hpc-ymi-thermofluids hpc-zhong-dlth hpc-zz-aerosol
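For example, to submit a job that is charged to one of these projects (the jobscript name and project code below are purely illustrative):

sbatch -A hpc-support my_jobscript.sbatch

Note that giving -A on the sbatch command line overrides any #SBATCH -A line inside the jobscript, which can be convenient if you belong to more than one project.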