Singularity / Apptainer

Overview

Singularity runs containers, which can be used to package entire scientific workflows: software, libraries and even data.

Version 3.5.2-1 is installed on the CSF.

Please note: Docker is not available on the CSF for security reasons. Instead you should use Singularity or Apptainer. You can convert your Docker images to Singularity / Apptainer images, but you’ll need to do this on your local PC where you have root / admin access.

Please see below for info on using your own containers.

Restrictions on use

The software is licensed under the BSD 3-clause “New” or “Revised” License.

Set up procedure

Please note: you no longer need to use a modulefile. We have installed singularity as a system-wide command, so you can simply run singularity commands without loading any modulefiles.

For example:

# Ensure you have NO singularity modulefiles loaded ('module purge' will unload all modulefiles)

[username@login2 [csf3] ~]$ singularity --version
singularity version 3.5.2-1.el7.centos

Apptainer – optional

If a version of singularity more recent than 3.5 is required, apptainer may be used instead. Currently version 1.0.3 is available to load as a module.

module load apps/gcc/apptainer/1.0.3

The apptainer and singularity commands may be used interchangeably.

[username@login2 [csf3] ~]$ module load apps/gcc/apptainer/1.0.3
[username@login2 [csf3] ~]$ singularity --version
apptainer version 1.0.3

Running the application

Please do not run Singularity containers on the login node. Jobs should be submitted to the compute nodes via the batch system. You may run the singularity command on its own to obtain a list of subcommands and flags:

singularity
USAGE: singularity [global options...]  [command options...] ...

CONTAINER USAGE COMMANDS:
    exec       Execute a command within container                               
    run        Launch a runscript within container                              
    shell      Run a Bourne shell within container                              
    test       Launch a testscript within container                             

CONTAINER MANAGEMENT COMMANDS:
    apps       List available apps within a container                           
    bootstrap  *Deprecated* use build instead                                   
    build      Build a new Singularity container                                
    check      Perform container lint checks                                    
    inspect    Display container's metadata                                     
    mount      Mount a Singularity container image                              
    pull       Pull a Singularity/Docker container to $PWD
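
For example, a minimal sketch of pulling a public image into your current directory and then running a command from it (the image name is only an illustration; run the exec step inside a batch job rather than on the login node):

# Pull a public image from Docker Hub; by default this creates ubuntu_22.04.sif in $PWD
singularity pull docker://ubuntu:22.04
# Then, within a batch job, run a command from inside the image
singularity exec ubuntu_22.04.sif cat /etc/os-release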

Please note that users will not be permitted to run singularity containers in --writable mode. You should build containers on your own platform where you have root access.

Serial batch job submission

Create a batch submission script, for example:

#!/bin/bash --login 
#$ -cwd              # Job will run from the current directory

singularity run mystack.simg

Submit the jobscript using:

qsub scriptname

where scriptname is the name of your jobscript.
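
If the application inside the container needs access to your scratch area or home directory, you can also set the bind path in the jobscript (see the binding notes further below). A sketch of the serial jobscript above with that added:

#!/bin/bash --login
#$ -cwd               # Job will run from the current directory

# Make CSF scratch and your home dir visible inside the container
export SINGULARITY_BINDPATH="/scratch,/mnt"
singularity run mystack.simg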

Parallel batch job submission

You should check the Singularity Documentation for how to ensure your Singularity container can access the cores available to your job. For example, MPI applications inside a container usually require the singularity command itself to be run via mpirun in the jobscript.

Create a jobscript similar to the following:

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 8       # Number of cores - can be 2-32 for single node (smp.pe) jobs

mpirun -n $NSLOTS singularity exec name_of_container name_of_app_inside_container

Submit the jobscript using:

qsub scriptname

where scriptname is the name of your jobscript.
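
If the application inside the container is multi-threaded rather than MPI-based, a hedged sketch (assuming the application honours OMP_NUM_THREADS) is to run singularity once and pass the core count into the container via a SINGULARITYENV_ variable:

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 8       # Number of cores

# Variables prefixed with SINGULARITYENV_ are visible inside the container (without the prefix)
export SINGULARITYENV_OMP_NUM_THREADS=$NSLOTS
singularity exec name_of_container name_of_app_inside_container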

Further info

Using your own containers

You may want to use your own containers on the CSF3 – that’s fine. You will need a /scratch directory (an empty mount-point stub) within the container so that it can be bound to the /scratch directory on the CSF.

If building from a .def file, please include the line

%post
...
mkdir /scratch

in the %post section.
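
For example, a minimal definition file might look as follows (the base image is only an illustration):

Bootstrap: docker
From: ubuntu:22.04

%post
    # Your usual software installation commands go here, then create the mount-point stub:
    mkdir /scratch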

If using a prebuilt .sif (or .simg), then follow the steps below to rebuild with a /scratch directory within:

# You must run these commands on your own Linux system (you don't have sudo rights on the CSF)
sudo singularity build --sandbox mysandbox myimage.sif
sudo mkdir mysandbox/scratch
sudo singularity build myimage-csf3ready.sif mysandbox

Please note these commands will not work on the CSF3, as you cannot have sudo (admin) rights there. The steps must be completed before the image is used on the CSF3. If you cannot do this yourself, we can do it for you: contact its-ri-team@manchester.ac.uk.
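
Before uploading, you can check that the rebuilt image contains the directory, for example:

singularity exec myimage-csf3ready.sif ls -ld /scratch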

Converting from a Docker container

Many Docker images exist that can be converted to singularity images, then uploaded to the CSF3 and run. As before, these steps must be completed on your own machine with a singularity / apptainer installation.
The example below uses https://hub.docker.com/r/cp2k/cp2k/.

# You must run these commands on your own Linux system (you don't have sudo rights on the CSF)
sudo singularity build cp2k.sif docker://cp2k/cp2k
singularity run cp2k.sif cp2k --version | head -1
# returns 'CP2K version 2023.1', which we use to label the image
sudo singularity build --sandbox cp2k-2023.1-sandbox cp2k.sif 
sudo mkdir cp2k-2023.1-sandbox/scratch
sudo singularity build cp2k-2023.1-csf.sif cp2k-2023.1-sandbox

The cp2k-2023.1-csf.sif can then be uploaded and run (usually in a jobscript) on the CSF3.
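
For example, a hedged sketch of a serial jobscript using the converted image (the CP2K input flags are illustrative only; check the CP2K documentation for your own run):

#!/bin/bash --login
#$ -cwd

# Make CSF scratch and your home dir visible inside the container
export SINGULARITY_BINDPATH="/scratch,/mnt"
singularity run cp2k-2023.1-csf.sif cp2k -i myinput.inp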

Please remember to bind /scratch (and also /mnt, which will make your home directory available) and run your jobs from there:

singularity run --bind /scratch:/scratch,/mnt:/mnt my_container.sif arg1 arg2 ...

Alternatively, you can set the following environment variable:

export SINGULARITY_BINDPATH="/scratch,/mnt"
singularity run my_container.sif arg1 arg2 ...

Running GPU containers

If your app will be using a GPU, you’ll need to submit the job to GPU nodes as usual. Your jobscript should load a CUDA modulefile.

We like to use the following code in a script or jobscript to run containers – it will automatically pass the required GPU flags and settings to singularity if needed:

# Note: If running a GPU-enabled container your jobscript must load a 'libs/cuda'
# modulefile before you use the code below.

# These env vars (without the SINGULARITY_) will be visible inside the image at runtime
export SINGULARITY_HOME="$HOME"
export SINGULARITY_LANG="$LANG"
# Make CSF scratch and your home dir visible to the container
export SINGULARITY_BINDPATH="/scratch,/mnt"
# A GPU job on the CSF will have set $CUDA_VISIBLE_DEVICES, so test whether it is set (-n tests for a non-empty string)
if [ -n "$CUDA_VISIBLE_DEVICES" ]; then
   # We are a GPU job. Set the special SINGULARITYENV_CUDA_VISIBLE_DEVICES to limit which GPUs the container can see.
   export SINGULARITYENV_CUDA_VISIBLE_DEVICES="$CUDA_VISIBLE_DEVICES"
   # Flag for the singularity command line
   NVIDIAFLAG=--nv
fi
# We use the 'sg' command to ensure the container is run with your own group id.
sg $GROUP -c "singularity run $NVIDIAFLAG my_container.sif arg1 arg2 ..."
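
Putting this together, a sketch of a complete GPU jobscript follows. The GPU resource request line and the CUDA modulefile name are examples only; check the CSF GPU documentation and 'module avail libs/cuda' for the exact flags and versions:

#!/bin/bash --login
#$ -cwd
#$ -l v100=1               # Example GPU request - see the CSF GPU docs for the correct resource flag

module load libs/cuda       # Add the required version - check 'module avail libs/cuda'

export SINGULARITY_HOME="$HOME"
export SINGULARITY_LANG="$LANG"
export SINGULARITY_BINDPATH="/scratch,/mnt"
if [ -n "$CUDA_VISIBLE_DEVICES" ]; then
   export SINGULARITYENV_CUDA_VISIBLE_DEVICES="$CUDA_VISIBLE_DEVICES"
   NVIDIAFLAG=--nv
fi
sg $GROUP -c "singularity run $NVIDIAFLAG my_container.sif arg1 arg2 ..."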

Building your own Singularity image

You can build your own .sif images for use on the CSF3 via the online resource: https://cloud.sylabs.io/builder

Please remember to include

mkdir /scratch

in the definition instructions. Be aware also that this resource is not affiliated with The University of Manchester.
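
If you prefer the command line, singularity can also submit builds to that same remote builder with the --remote flag, which does not require root. This is a sketch only; it assumes you have created a Sylabs access token and registered it with 'singularity remote login' first:

singularity remote login                       # paste your Sylabs access token when prompted
singularity build --remote myimage.sif myimage.def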

Updates

None.
