The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
Current System Configuration
24th Jan 2019 – The information on this webpage is no longer up to date due to the upgrade to CSF3.
Compute
As of November 2017:
Total core count: 9,844 (9,336 currently available November 2017). Not all cores are currently available while compute nodes are being moved as part of an infrastructure refresh. The total does not include retired nodes, login/management nodes or GPU cores, but does include cores from GPU host nodes (these figures are tallied in the sketch at the end of this section).
- 7,200 Intel cores with between 4GB and 5.3GB of memory per core.
- 96 Intel cores with 8GB of memory per core.
- 64 Intel cores with 16GB of memory per core.
- 48 Intel cores with 21GB of memory per core.
- 12 Intel cores with 42GB of memory per core.
- 2,400 AMD cores with 2GB of memory per core.
- 24 Intel cores in GPU host nodes.
- A number of GPUs (see below for details).
Details:
Intel: 7,200 cores – does not include GPU host nodes (6900 currently available November 2017)
- 1,008 cores (708 currently available November 2016): 84 nodes each with 12 core (two x 6-core) Xeon X5650 2.66GHz (Westmere, SSE4.2) processors and 48GB of memory.
- 1,248 cores: 104 nodes each with 12 core (two x 6-core) Xeon E5-2640 2.50GHz (Sandy Bridge, AVX) processors and 64GB of memory.
- 1,056 cores: 66 nodes each with 16 core (two x 8-core) Xeon E5-2650 v2 2.60GHz (Ivy Bridge, AVX) processors and 64GB of memory.
- 2,304 cores: 96 nodes each with 24 core (two x 12-core) Xeon E5-2690 v3 2.60GHz (Haswell, AVX2) processors and 128GB of memory. InfiniBand-connected.
- 1,584 cores: 66 nodes each with 24 core (two x 12-core) Xeon E5-2680 v4 2.40GHz (Broadwell, AVX2) processors and 128GB of memory. InfiniBand-connected.
Intel High-memory nodes: 220 cores (172 currently available November 2017)
- 96 cores (48 currently available April 2016): 8 nodes each with 12 core (two x 6-core) Xeon X5650 2.66GHz (Westmere, SSE4.2) processors and 96GB of memory.
- 48 cores: 4 nodes each with 12 core (two x 6-core) Xeon E5-2640 2.50GHz (Sandy Bridge, AVX) processors and 256GB of memory (hosted on behalf of a specific research group).
- 64 cores: 4 nodes each with 16 core (two x 8-core) Xeon E5-2650 v2 2.60GHz (Ivy Bridge, AVX) processors and 256GB of memory (hosted on behalf of a specific research group).
- 12 cores: 1 node with 12 core (two x 6-core) Xeon X7542 2.67GHz (Westmere, SSE4.2) processors and 504GB of memory (hosted on behalf of a specific research group).
AMD: 2,400 cores (2,240 currently available November 2017)
- 736 cores (704 currently available November 2017): 23 nodes each with 32 core (four x 8-core) Opteron 6136 (Magny-Cours, SSE4a) 2.4GHz processors and 64GB of memory. InfiniBand-connected.
- 1,664 cores (1,536 currently available November 2017): 26 nodes each with 64 core (four x 16-core) Opteron 6276 (Interlagos “Bulldozer”, AVX, FMA4) 2.3GHz processors and 128GB of memory. InfiniBand-connected.
GPU: GPUs and Host-node Intel cores (24 currently available November 2017)
- Four Nvidia Kepler K20 GPU cards, with a pair hosted in each of two 12 core Intel Sandy Bridge-based (E5-2640 2.50GHz) compute nodes with 64GB of memory (24 host cores in total; these GPUs are hosted on behalf of specific research groups).
Retired Nodes: (168 cores + 164 GPU-host cores)
- 168 cores (0 currently available April 2016): 14 nodes each with 12 core (two x 6-core) Xeon L5650/L5640 2.27GHz (Westmere, SSE4.2) processors and 24GB of memory. These nodes are expected to be replaced with new, more powerful and energy-efficient nodes.
- Seven Nvidia GPUs (0 currently available May 2017): two 2070 cards and five 2050 cards, each hosted on a blade with 12 core (two x 6-core) Intel Westmere (E5649 2.53GHz) processors and 48GB of memory (84 host cores).
- 16 InfiniBand-connected M2050 GPUs (0 currently available November 2016), two hosted on each of eight Intel Westmere-based compute nodes (4 of which have L5640 2.27GHz and 4 have E5640 2.67GHz processors) with 24GB of memory (80 host cores; these GPUs are hosted on behalf of specific research groups).
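The headline core counts and per-core memory figures quoted at the top of this section follow directly from the node configurations above. Below is a minimal tallying sketch in Python (not part of the original page); all node counts, cores per node and memory sizes are copied from the breakdown above, and retired nodes are excluded, as in the headline figure.

```python
# Tally of the CSF2 compute nodes listed above: (nodes, cores per node, GB per node).
# Retired nodes are excluded and GPU host nodes included, matching the 9,844 total.
node_types = {
    "Westmere X5650, 48GB":         (84, 12, 48),
    "Sandy Bridge E5-2640, 64GB":   (104, 12, 64),
    "Ivy Bridge E5-2650 v2, 64GB":  (66, 16, 64),
    "Haswell E5-2690 v3, 128GB":    (96, 24, 128),
    "Broadwell E5-2680 v4, 128GB":  (66, 24, 128),
    "Westmere X5650, 96GB":         (8, 12, 96),
    "Sandy Bridge E5-2640, 256GB":  (4, 12, 256),
    "Ivy Bridge E5-2650 v2, 256GB": (4, 16, 256),
    "Westmere X7542, 504GB":        (1, 12, 504),
    "Opteron 6136, 64GB":           (23, 32, 64),
    "Opteron 6276, 128GB":          (26, 64, 128),
    "GPU hosts E5-2640, 64GB":      (2, 12, 64),
}

total_cores = 0
for name, (nodes, cores_per_node, gb_per_node) in node_types.items():
    cores = nodes * cores_per_node
    total_cores += cores
    print(f"{name:30s} {cores:5d} cores, {gb_per_node / cores_per_node:4.1f}GB per core")

print(f"Total cores: {total_cores:,}")  # prints 9,844
```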
Interconnect
- Most older Intel nodes are connected by Gigabit Ethernet.
- 36 nodes (12-core Westmere 2.66GHz with 4GB RAM per core, 432 cores in total) and 96 nodes (24-core Haswell 2.60GHz with 5.3GB RAM per core, 2,304 cores in total) are connected by higher-bandwidth InfiniBand. This is now the standard interconnect for new nodes.
- The AMD nodes are all connected by higher-bandwidth InfiniBand.
Filesystems
- Home filestore is provided by the central Research Data Service (RDS, aka Isilon) – also see below.
- A high-performance (parallel) 306TB Lustre scratch filestore (i.e. $HOME/scratch) is provided for running batch jobs (a short usage sketch follows this list).
- Additional space is available to research groups and mountable on the CSF via RDS. Requests for space are processed by the local faculty IT research support teams.
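Since the scratch filestore is the area provided for batch-job I/O, a minimal sketch (assuming only that the scratch area is reachable at $HOME/scratch, as stated above) of checking the space available there before writing large files:

```python
# Check free space on the Lustre scratch filestore ($HOME/scratch, as described above).
import os
import shutil

scratch = os.path.expanduser("~/scratch")       # the $HOME/scratch area mentioned above
total, used, free = shutil.disk_usage(scratch)  # sizes in bytes

print(f"scratch: {total / 1e12:.0f}TB total, {free / 1e12:.1f}TB free")
```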
Operating System and Batch
Scientific Linux 6.6 – a popular free Linux distribution based on Red Hat Enterprise Linux. SGE (Open Grid Scheduler 2011.11p1_155) is used for batch/compute allocation.
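As an illustration of how the batch system and filesystems fit together, here is a minimal sketch of what a script launched by SGE might do at the start of a job. It assumes the standard Grid Engine environment variables JOB_ID and NSLOTS, which the scheduler sets for each batch job, and the $HOME/scratch area described above:

```python
# Minimal start-of-job bookkeeping under SGE/Open Grid Scheduler.
import os

job_id = os.environ.get("JOB_ID", "interactive")  # SGE job number (unset outside a job)
nslots = int(os.environ.get("NSLOTS", "1"))       # number of cores granted to the job

# Do large I/O in the Lustre scratch area rather than the RDS-backed home directory.
workdir = os.path.join(os.path.expanduser("~/scratch"), f"job_{job_id}")
os.makedirs(workdir, exist_ok=True)

print(f"Job {job_id}: {nslots} core(s), working in {workdir}")
```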