The CSF2 has been replaced by the CSF3 – please use that system! This documentation may be out of date; please read the CSF3 documentation instead.
CSF & DPSF Merger/Upgrade
The CSF3 has arrived…
The CSF and DPSF, the University’s flagship HPC systems for compute and high-memory work, have been growing steadily over the past 7 years. Approximately £4 million has been invested, resulting in over 10,000 CPU cores, more than 1PB of scratch storage, and 60TB of RAM.
To allow for essential operating system upgrades, enable future investment and expansion, and streamline support of these systems, we are merging the CSF and DPSF into one large system – the CSF3.
Combining the two systems will simplify the user experience, and more high-memory nodes will be available to everyone.
Once the CSF3 is in full service it is expected to have 15,000 cores!
Project Status – as of 18th March 2019
Summary
- Most of the Intel compute nodes have been moved to CSF3.
- More nodes are being prepared for a move next week – as a result the capacity of CSF2 has been significantly reduced.
- All CSF2 users were given access to CSF3 on 25th February – if you have not started using CSF3 please do so now.
- It is recommended that you no longer submit Intel-based work to CSF2 (e.g. smp.pe, orte-24-ib.pe, fluent-smp.pe) – see the example jobscript after this list.
- Please ensure you read the guide on moving from CSF2 to CSF3 before you use the new system, as some things are different.
- Application installation on CSF3 is ongoing – the above guide explains how to find out what is installed.
- Other tabs on the above website give further details about the system, the batch queues, etc.
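For illustration, below is a minimal sketch of a jobscript for the smp.pe parallel environment mentioned above. The parallel environment names are SGE terminology, so this assumes the usual SGE-style batch setup; the module and program names are hypothetical placeholders – run module avail on a login node to see what is actually installed on CSF3.

    #!/bin/bash --login
    #$ -cwd              # Run the job from the directory it was submitted from
    #$ -pe smp.pe 12     # Request 12 cores in the single-node smp.pe environment

    # Hypothetical module name - run 'module avail' on a login node
    # to list the applications actually installed on CSF3.
    module load apps/example-application

    # SGE sets $NSLOTS to the number of cores requested above.
    ./my_program --threads $NSLOTS

As on CSF2, such a script would be submitted with qsub and monitored with qstat.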
Availability of nodes in CSF2
The CSF2 currently contains ONLY:
- 31 Intel Westmere nodes with 12 cores each, for serial and smp.pe work.
- 18 AMD Magny-Cours nodes
- 23 AMD Bulldozer nodes
The following node types have been removed from CSF2:
- Sandybridge, Ivybridge, Haswell and Broadwell
- mem256
- GPU
If you have any questions or run into problems using CSF3 please let us know by emailing: