

The RI Team:
The CSF and Other Stuff

Simon Hood, RI Coordinator

Core Infra Meeting, August 2013


The RI Team

Who's in the RI team? Contact details, web...

  • Simon
  • Pen
  • and poachee, George


The Page Title

 -- stressed piggy --- timescales; how research computing differs from enterprise IT...


What do RI do?

What infrastructure? Users? Customers?



RI do HPC and related stuff


  • Aggregating computing power in a way that delivers much higher performance than a desktop computer in order to solve large problems in science, engineering, or business.



RI have Users from all Faculties

We help academics, postdocs and postgrads get their computational research done

Academics, postdocs, postgrads; some final-year undergrad projects.

Engineering and Physical Sciences
  • CFD — nuclear power stations; Formula 1 cars...
  • Chemistry/Chemical Engineering
  • Bioinformatics
  • St Mary's — drug targeting service
  • Statistical analysis of large data sets (Economics...)
  • Monte Carlo simulations (Economics, MBS)


The Past

A long time ago, in a galaxy...

The RI Team:

  • managed random, individual research groups' Beowulf clusters;
  • looked after the (good, but) embarrassingly small Uni HPC systems.

  • ...Uni HPC strategy was like this...


Uni HPC history?

The name of this band is No Direction...


History and Meetings

 -- VUM hosted the national service, e.g., CSAR

 -- VUM: Bezzier --- very small
 -- UMIST: Cosmos and Eric --- very small

 -- UoM: Horace --- very small

 -- many small Beowulfs around campus

 -- no real strategy, no Uni capital spending...

 ...then came Manchester Informatics and lots of meetings...
     -- more meetings, requirements capture, talk, meetings, requirements capture...
     -- ervs
     -- made the mistake of asking me to be project manager...

...and we ended up with...


We will add...

Resistance is futile.
We will add your compute resource to our own.


What It Is

What it is:

 -- replacement for the multitude of Beowulfs around campus
 -- better run
 -- more efficient --- "spare" cycles get used
 -- cheaper
 -- assimilation of anarchic compute resources on campus into one central facility
     -- a few other such systems exist (e.g., HEP, astro, FLS)

 -- a computational batch engine

What it is not:

 -- a supercomputer, cf. the National Service
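
As a batch engine, work reaches the CSF as job scripts rather than interactive sessions. A minimal Grid Engine job script might look like the sketch below; the parallel environment name `smp.pe` and the 4-core request are illustrative assumptions, not the CSF's actual configuration:

```shell
#!/bin/bash
# Grid Engine reads lines starting with "#$" as if they were qsub options.
#$ -cwd            # run the job from the directory it was submitted from
#$ -pe smp.pe 4    # hypothetical shared-memory parallel environment, 4 cores
# $NSLOTS is set by Grid Engine at run time; default to 1 outside the scheduler.
echo "Running on ${NSLOTS:-1} core(s)"
```

Such a script would be submitted with `qsub jobscript.sh` and queued until cores are free — which is how those "spare" cycles get soaked up.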


RI have Customers

We sell our services to University academics


CSF Contributions


CSF Size

Total core count: 4,912 (excluding GPU cores, but including their compute hosts)

 -- 2,732 Intel cores with between 2GB and 8GB of memory per core
 -- 60 Intel cores with more than 21GB of memory per core
 -- 2,144 AMD cores with 2GB of memory per core
 -- a number of GPUs

 -- AMD --- best bang-per-buck for open-source codes
 -- scratch --- 160 TB


CSF Tech Bits

 -- Linux --- RHEL 6.x
 -- Gridengine (formerly SGE), soon to be SoGE (or OGS)
 -- NFS (local) and Isilon
 -- Lustre
 -- pxeboot + kickstart + post scripts
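
The provisioning chain in that last bullet (PXE boot into an automated kickstart install, then post-install scripts) could be sketched as a minimal RHEL 6 kickstart fragment like this; the install URL and post-script path are invented for illustration, not the CSF's real setup:

```
# Minimal RHEL 6 kickstart sketch -- all values illustrative
install
url --url=http://install.example.ac.uk/rhel6/os   # hypothetical install tree
lang en_GB.UTF-8
keyboard uk
timezone Europe/London
autopart
reboot

%post
# site-specific post-install step, e.g. bootstrap config management
/usr/local/sbin/node-setup.sh
%end
```

PXE hands the node this file, anaconda does the unattended install, and the `%post` section hooks in the site scripts — so a bare node becomes an identical compute host with no keyboard time.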


Other Stuff




  • RDS — Isilon
  • RDN — CSF, Redqueen, Isilon, Michael Smith



  • Incline and NyX — Interactive computational resource, virtual/stateful desktops


Not Just Infrastructure


 -- not just about the hardware, OS, infrastructure applications and user applications
 -- about intensive one-to-one support of users
 -- and intensive one-to-one discussion with customers


RI Plans


 -- VMs for researchers

 -- replace some of those badly-managed under-desk servers...

 -- own data-centre VLAN
     -- walled off from other data-centre VLANs via default-deny ACLs...

 -- some already
     -- e.g., atmos example (RQ, Met Office)
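
The default-deny, walled-off VLAN idea above might be expressed in Cisco-style ACL syntax roughly as follows; the VLAN number, address range and permitted port are invented for illustration:

```
! Illustrative only: allow SSH into the RI VLAN, drop everything else.
ip access-list extended RI-VLAN-IN
 permit tcp any 10.99.0.0 0.0.255.255 eq 22
 deny   ip  any any log
!
interface Vlan99
 ip access-group RI-VLAN-IN in
```

The implicit (here explicit, and logged) final `deny` is what makes it default-deny: anything not named is dropped.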


RI do All Sorts

In short, RI do all sorts...
  • Computational stuff (HPC and HTC)
  • Storage, Networking
  • Linux, VMs
  • One-to-one intensive user help
  • Work with academics on strategy/procurement