Research Infrastructure

Our Services – The CIR Ecosystem

University of Manchester academics, postdocs and postgrads have access to a complete campus ecosystem for computationally-intensive research (CIR). The ecosystem comprises the following integrated services, which together provide a complete computation, storage and VM package. We can provide some free-at-the-point-of-use access, or larger-capacity funded access.

Please contact us if you have any questions, if you would like advice on which aspects of the ecosystem are best suited to your work, or to request access to specific systems.

Batch Computation – The CSF and HPC Pool
Batch-based computational resources – the Computational Shared Facility (CSF) is the University's flagship HPC cluster, with over 14,000 CPU cores available. It is used for a wide variety of work: parallel computation using multiple (2 to hundreds of) CPU cores; high-throughput work (running many copies of a job to process many datasets); work requiring large amounts of memory (RAM) or access to high-capacity disk storage with fast I/O; and access to NVIDIA Volta (V100) GPUs. A dedicated HPC Pool is available for those wishing to run larger multi-node parallel jobs (up to 1024 cores per job).
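As an illustration, batch work on a cluster like the CSF is typically described by a small jobscript handed to the scheduler rather than run interactively. The sketch below assumes an SGE-style batch system; the parallel environment, module and application names are illustrative assumptions — check the CSF documentation for the exact syntax.

```bash
#!/bin/bash --login
#$ -cwd                 # run the job from the directory it was submitted from
#$ -pe smp.pe 8         # request 8 CPU cores (PE name is an assumption)

# Load the application's environment module (illustrative name)
module load apps/gcc/myapp

# Run the application on the cores allocated by the scheduler
myapp input.dat
```

A script like this would then be submitted to the queue with something like `qsub jobscript.sh`, and the scheduler runs it when the requested cores become free.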
Interactive Computation – The iCSF
A computational resource for GUI-based interactive work – the iCSF, aka Incline, is aimed at research groups that would otherwise purchase powerful workstations for private use. It does not use a batch queuing system — hence the name: interactive CSF. It is expected that Incline will be used closely with the Research Virtual Desktop Service (see below).
Large-scale Research Data Storage
Large-scale, resilient (backed-up) storage for research data – the Research Data Storage (RDS) service (aka Isilon). This is a multi-petabyte storage system, providing storage allocated to and shared amongst research groups. Some storage can be provided free-at-the-point-of-use, with the option for groups to purchase additional capacity.

The RDS service provides storage “shares” (areas of storage) for researchers, which may be accessed from desktop machines across campus or from the CSF, iCSF, zCSF and Condor.

High Throughput Computing – Condor (on-campus and in the cloud)
The Research IT Condor service is a computational platform which uses “spare” CPU cycles from open-access PC clusters located on campus. It is suitable for high-throughput computing (HTC), i.e., running large numbers of small, short jobs to process hundreds to thousands of datasets. We can also burst Condor jobs into the AWS cloud.
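For example, an HTC workload on Condor is typically described by a submit file that queues many independent jobs, one per dataset. The sketch below uses standard HTCondor submit-description syntax; the script and file names are illustrative assumptions.

```
# HTCondor submit description (script and file names are illustrative)
# process.sh handles one dataset; $(Process) takes the values 0..99
executable = process.sh
arguments  = dataset_$(Process).csv
output     = out.$(Process).txt
error      = err.$(Process).txt
log        = condor.log
queue 100
```

Submitting this with `condor_submit` queues 100 independent jobs, which Condor then farms out to idle campus PCs (or, when bursting, to AWS) as capacity becomes available.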
Research Virtual Machine Service (RVMS)
Centrally-hosted virtual machines (VMs) are provided for University research staff and postgraduate students through the Research Virtual Machine Service (RVMS). This service is complementary to commercial cloud services, such as Amazon AWS, Azure and Google Cloud Platform: the RVMS is suitable for VMs which are expected to generate large amounts of network traffic, may be moderately CPU-intensive, or require tight integration with the CIR platforms (CSF, iCSF, etc.) or the Research Data Storage (RDS) service.

Administrative control of VMs provided may be handed over to research groups. Alternatively, administrative support may be provided by Research IT where no group administrators are available.

It is hoped that this service will help to reduce the number of insecure “under-desk servers” to be found scattered around campus!

Research Virtual Desktop Service
The Research Virtual Desktop Service (RVDS) provides a virtual Linux desktop from which you can access the CSF and iCSF. You can reconnect to the same Linux desktop session from anywhere in the world.
Tier 1 (ARCHER2) and Tier 2 (regional) systems
We are also able to provide advice and guidance when applying for access to the Tier 1 and Tier 2 national and regional HPC platforms, such as ARCHER2 and Bede (the N8CIR GPU system). Typically, technical details such as the scalability of your code(s) must be provided, perhaps with evidence of having benchmarked them on local services. Please see our Tier 1 and 2 page for more information.
We are also able to provide advice and guidance on bespoke infrastructure requirements for CIR.

See the page on how the components of the ecosystem are integrated for further details on the benefits we can offer.

Last modified on October 18, 2021 at 9:13 am by George Leaver