The Computational Shared Facility 3

Current System Configuration

Summary

Please note that the CSF3 is under active development, with new hardware added several times a year.

Total CSF CPU cores: 10,336

  • Standard nodes total cores: 8,276
  • High memory nodes total cores: 1,132
  • GPU host nodes total cores: 928
  • NB: the HPC Pool cores (4,096) are not included in this total

Total CSF Nvidia GPUs: 100

Total lustre storage (scratch): 1.2PB
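
Per-user usage of the Lustre scratch filesystem can be inspected on the system itself with the standard Lustre client tools. A minimal sketch, assuming a `/scratch` mount point (check the CSF3 storage documentation for the actual path):

```shell
# Report this user's usage and quota on the Lustre scratch filesystem.
# The /scratch mount point is an assumption; substitute the path from the CSF3 docs.
if command -v lfs >/dev/null 2>&1; then
    lfs quota -u "$USER" /scratch
else
    echo "lfs not found - run this on a CSF3 node with the Lustre client installed"
fi
```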

Login nodes: 2 physical machines, each with 32 (hyper-threaded) cores and 256GB of RAM

OS on all CSF nodes: CentOS Linux release 7.9.2009 (Core)
OS on all HPC Pool nodes: CentOS Linux release 7.9.2009 (Core)
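
The CPU, memory, and OS figures above can be confirmed on any node with standard Linux utilities, for example:

```shell
# Inspect the CPU model/core counts, installed memory, and OS release of the current node.
lscpu | grep -E 'Model name|Socket|Core|^CPU\(s\)'        # CPU model and core counts
free -h | head -2                                          # installed memory
cat /etc/centos-release 2>/dev/null || cat /etc/os-release # OS release string
```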

CSF CPU nodes

Standard nodes

1176 Sandybridge cores: 98 nodes of 2×6-core Intel Xeon E5-2640 0 @ 2.50GHz + 64GB RAM

1040 Ivybridge cores: 65 nodes of 2×8-core Intel Xeon E5-2650 v2 @ 2.60GHz + 64GB RAM

2304 Haswell cores: 96 nodes of 2×12-core Intel Xeon E5-2690 v3 @ 2.60GHz + 128GB RAM

2380 Broadwell cores: 85 nodes of 2×14-core Intel Xeon E5-2680 v4 @ 2.40GHz + 128GB RAM + 56Gb/s (4X FDR) mlx4 Mellanox InfiniBand

1376 Skylake cores: 43 nodes of 2×16-core Intel Xeon Gold 6130 CPU @ 2.10GHz + 192GB RAM + 100Gb/s (4X EDR) mlx5 Mellanox InfiniBand

High memory nodes

24 Sandybridge cores: 2 nodes of 2×6-core Intel Xeon E5-2640 0 @ 2.50GHz + 256GB RAM

64 Ivybridge cores: 4 nodes of 2×8-core Intel Xeon E5-2650 v2 @ 2.60GHz + 256GB RAM

160 Ivybridge cores: 10 nodes of 2×8-core Intel Xeon E5-2650 v2 @ 2.60GHz + 512GB RAM

352 Haswell cores: 22 nodes of 2×8-core Intel Xeon E5-2640 v3 @ 2.60GHz + 256GB RAM

288 Haswell cores: 18 nodes of 2×8-core Intel Xeon E5-2640 v3 @ 2.60GHz + 512GB RAM

20 Broadwell cores: 1 node of 2×10-core Intel Xeon E5-2640 v4 @ 2.40GHz + 1TB RAM + 1Gb/s Ethernet (access upon request)

96 Skylake cores: 3 nodes of 2×16-core Intel Xeon Gold 6130 CPU @ 2.10GHz + 1.5TB RAM (access upon request)

128 Cascade Lake cores: 4 nodes of 2×16-core Intel Xeon Gold 5218 CPU @ 2.30GHz + 1.5TB RAM (access upon request)

GPU nodes

GPUs

68 V100 GPUs: 17 nodes of 4× Nvidia V100-SXM2-16GB (Volta) GPUs, 16GB GPU global mem, 5120 CUDA cores + NVLink. Access to GPUs is restricted.

32 A100 GPUs: 8 nodes of 4× Nvidia HGX A100-SXM4-80GB (Ampere) GPUs, 80GB GPU global mem, 6912 CUDA cores + NVLink. Access to GPUs is restricted.
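
On a GPU node, the device model and memory can be checked with `nvidia-smi`. A minimal sketch; the command is only available where the NVIDIA driver is installed, i.e. not on the login nodes:

```shell
# List each GPU's model name and total memory, one line per GPU.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
    echo "nvidia-smi not found - run this on a GPU node"
fi
```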

GPU host nodes

416 Skylake cores: 13 nodes of 2×16-core Intel Xeon Gold 6130 CPU @ 2.10GHz + 192GB RAM + 100Gb/s (4X EDR) mlx5 Mellanox InfiniBand + 1.6TB Samsung Non-Volatile Memory (NVMe) SSD Controller 172Xa

128 Cascade Lake cores: 4 nodes of 2×16-core Intel Xeon Gold 5218 CPU @ 2.30GHz + 192GB RAM + 100Gb/s (4X EDR) mlx5 Mellanox InfiniBand + 1.6TB Samsung Non-Volatile Memory (NVMe) SSD Controller 172Xa

384 AMD EPYC Milan cores: 8 nodes of 2×24-core AMD 7413 “Milan” CPU @ 2.65GHz + 512GB RAM + 100Gb/s (6X HDR) mlx5 Mellanox InfiniBand + 1.6TB Intel Non-Volatile Memory (NVMe) SSD Controller

The HPC Pool

This is a distinct pool of resources with a different access procedure from the CSF. Please consult The HPC Pool documentation for further information.

4096 Skylake cores: 128 nodes of 2×16-core Intel Xeon Gold 6130 CPU @ 2.10GHz + 192GB RAM + 100Gb/s (4X EDR) mlx5 Mellanox InfiniBand

Last modified on November 22, 2021 at 9:16 am by George Leaver