isambardjr

Specification

isambardjr comprises one ARM64 server:

1U ThunderX2 ARMv8 server with:
   2 x 2.1GHz Cavium ThunderX2 (ARMv8) processors (2 sockets of 28 cores, each core with 4 threads = 224 hardware threads)
   12 x DDR4 DIMM slots per CPU
   2 x 10GbE SFP+ ports
   10 x 2.5" hot-swap HDD/SSD bays
   2 x 1600W 80 Plus Platinum PSU

Installed:
   1 x 32GB RDIMM-TX2
   2 x 1000GB HDD Seagate_2.5       # Seem to be 7200rpm ATA disks
   1 x Qlogic Dual 1/10G RJ45
   Ubuntu 18.04

Getting Access to isambardjr

Access is restricted to a specific research group.

Restrictions on Access

Priority is given to those who funded the system.

Accessing the Host Node

For interactive use

From the Zrek login node, use qrsh to log in to the ARM64 node. This will give you a command line on isambardjr (the ARM64 node), from which you can run GUI apps or non-GUI compute apps:

qrsh -l arm64 bash

Reminder: run the above command on the Zrek login node! No password will be required when connecting from the Zrek login node.

Once you have logged in to the ARM64 node, load any modulefiles (see below) required for your applications.
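
For example, a typical interactive session might look like the following sketch, where myapp is a placeholder for your own executable and the modulefile is the OpenMPI one listed later on this page:

qrsh -l arm64 bash                    # Run this on the Zrek login node
nproc                                 # Should report 224 hardware threads on the ARM64 node
module load mpi/gcc/openmpi/4.0.1     # Load any modulefiles your application needs
./myapp                               # Run your application (myapp is a placeholder)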

You can also open more terminals (command-line windows) on that node by running:

xterm &

For traditional batch jobs

From the Zrek login node, batch (non-interactive) jobs can be submitted using qsub jobscript. The jobscript should contain the following line to run on a single CPU core:

#$ -l arm64

or, to run on multiple CPU cores:

#$ -l arm64
#$ -pe smp.pe 224      # Number of cores (2--224)
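
For example, a complete multi-core jobscript might look like the following sketch (myapp is a placeholder for your own executable; the OMP_NUM_THREADS line applies only if your application uses OpenMP):

#!/bin/bash
#$ -cwd                               # Run the job from the directory it was submitted from
#$ -l arm64                           # Request the ARM64 node
#$ -pe smp.pe 56                      # Number of cores (2--224)

export OMP_NUM_THREADS=$NSLOTS        # $NSLOTS is set by the batch system to the number of cores requested
./myapp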

Once you have submitted the batch job you can log out of Zrek; the job remains in the queue and will run when a suitable number of CPU cores on the node become free.
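
For example, assuming the jobscript above has been saved in a file named jobscript, a typical submit-and-check sequence from the Zrek login node looks like:

qsub jobscript                        # Submit the job
qstat                                 # Check the state of your queued and running jobs
qdel JOBID                            # Delete a job if required, using the job ID reported by qstat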

Using the ARM64 CPUs

Once you have been allocated CPU cores by either qrsh or qsub you will have exclusive access to those cores.

Note: if you want to open more terminals (command-line windows) to run other programs on the node, simply run

xterm &

to get a new window.

OpenMPI

OpenMPI is available for the ARM node using:

module load mpi/gcc/openmpi/4.0.1

The MPI executables (mpirun and so on) will only execute on the ARM node, but the modulefile can be loaded on the Zrek login node.
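
For example, a batch jobscript for an MPI application might look like the following sketch, where mympiapp is a placeholder for your own MPI executable:

#!/bin/bash
#$ -cwd                               # Run the job from the directory it was submitted from
#$ -l arm64                           # Request the ARM64 node
#$ -pe smp.pe 56                      # Number of MPI processes (2--224)

module load mpi/gcc/openmpi/4.0.1
mpirun -n $NSLOTS ./mympiapp          # $NSLOTS matches the number of cores requested with -pe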
