The Computational Shared Facility 3

Overview

The CSF3 is a new HPC system at the University comprising new compute hardware (CPUs, GPUs) together with existing hardware migrated from the CSF2 and DPSF systems.

Accessing from Off Campus

Working from home / off campus: please read our Working from Home documentation for details of how to access the CSF (and friends) and research storage (RDS) from off campus.

This page also addresses some common problems when GlobalProtect is running.

What is the CSF?

The CSF (aka Danzek) is a High Performance Computing (HPC) cluster (~9,700 cores + 68 GPUs) at the University of Manchester, managed by IT Services for the use of University academics, post-doctoral assistants and post-graduates to conduct academic research.

  • It is built on a shared model: the majority of compute nodes are funded by contributions from University research groups; the cost of infrastructure such as login nodes, fileservers and network equipment is, for the most part, paid for by the University.
  • Academics are encouraged to contribute financially to the CSF rather than purchase their own smaller HPC clusters. The funds are used to buy compute hardware which is pooled into the system. You are then given a proportional share of the available throughput in the system. Please see the benefits of the CSF for details on why this model is better than buying your own hardware.
  • The CSF is suitable for a variety of workloads. Small to moderate parallel jobs (2-120 cores), serial jobs (1 core), high-throughput jobs (running many copies of the same application at the same time to process many datasets) and GPU jobs (using Nvidia V100 Volta GPUs) are all supported. The number of jobs you can submit to the system is not restricted. The time it takes to run all of your jobs depends on your group’s contribution to the system.
  • There is also some limited “free at the point of use” resource available in the CSF funded by the University. Please contact us if you are interested in using this.
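As an illustration of the kinds of jobs described above, a batch jobscript for the CSF might look like the sketch below. This is a config-style fragment, not a definitive recipe: the parallel environment name (smp.pe), the modulefile name and the application name are assumptions for illustration only, so please check the current CSF jobscript documentation before use.

```shell
#!/bin/bash --login
# Example SGE jobscript sketch. The smp.pe name and the modulefile
# below are illustrative assumptions -- check the CSF docs.
#$ -cwd              # Run the job from the directory it was submitted from
#$ -pe smp.pe 4      # Request 4 cores on a single node (a "small parallel" job)

# Load the application's modulefile (name is hypothetical)
module load apps/myapp

# SGE sets $NSLOTS to the number of cores requested above
myapp --threads $NSLOTS input.dat
```

A serial job would simply omit the `-pe` line; the script is then submitted with `qsub jobscript.sh` and waits in the queue until resources are free.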

For groups wishing to run larger parallel HPC jobs (128-1024 cores) the HPC Pool provides another resource (4096 cores in total). A separate, per-project application process is required to use it. For convenience, the CSF software and file-systems are available on the HPC Pool and so we document that system within these CSF online docs.

Talk to the Research Infrastructure Team

To find out more about the CSF, contact the IT Services RI team: its-ri-team@manchester.ac.uk.

Last modified on March 16, 2021 at 11:58 am by George Leaver