Singularity / Apptainer
Overview
Singularity provides a mechanism to run containers, which can be used to package entire scientific workflows, software and libraries, and even data.
Version 1.4.2-1.el9 is installed on the CSF3.
Please see below for info on using your own containers.
Restrictions on use
The software is licensed under the BSD 3-clause “New” or “Revised” License.
Set up procedure
Please note: you no longer need to use a modulefile. We have installed Apptainer, which can be run as a system-wide command using either apptainer or singularity. So you can simply run the commands you wish to run without loading any modulefiles.
For example:
# Ensure you have NO singularity modulefiles loaded ('module purge' will unload all modulefiles)
[username@login2[csf3] ~]$ singularity --version
apptainer version 1.4.2-1.el9
#
# It is apptainer that is installed on the CSF. Hence the
# 'singularity' command is an alias for apptainer.
[username@login2[csf3] ~]$ apptainer --version
apptainer version 1.4.2-1.el9
Running the application
Please do not run Apptainer / Singularity containers on the login node. Jobs should be submitted to the compute nodes via batch. You may run the command on its own to obtain a list of the available sub-commands:
apptainer

Usage:
  apptainer [global options...]

Available Commands:
  build       Build an Apptainer image
  cache       Manage the local cache
  capability  Manage Linux capabilities for users and groups
  checkpoint  Manage container checkpoint state (experimental)
  completion  Generate the autocompletion script for the specified shell
  config      Manage various apptainer configuration (root user only)
  delete      Deletes requested image from the library
  exec        Run a command within a container
  inspect     Show metadata for an image
  instance    Manage containers running as services
  key         Manage OpenPGP keys
  keyserver   Manage apptainer keyservers
  oci         Manage OCI containers
  overlay     Manage an EXT3 writable overlay image
  plugin      Manage Apptainer plugins
  pull        Pull an image from a URI
  push        Upload image to the provided URI
  registry    Manage authentication to OCI/Docker registries
  remote      Manage apptainer remote endpoints
  run         Run the user-defined default command within a container
  run-help    Show the user-defined help for an image
  search      Search a Container Library for images
  shell       Run a shell within a container
  sif         Manipulate Singularity Image Format (SIF) images
  sign        Add digital signature(s) to an image
  test        Run the user-defined tests within a container
  verify      Verify digital signature(s) within an image
  version     Show the version for Apptainer

Run 'apptainer --help' for more detailed usage information.
Serial batch job submission
Create a batch submission script, for example:
#!/bin/bash --login
#SBATCH -p serial        # (or --partition=) Run on the nodes dedicated to 1-core jobs
#SBATCH -t 2-0           # Wallclock time limit (2-0 is 2 days, max permitted is 7-0)

# We'll use the system-wide command, hence no apptainer modulefiles to load
module purge

apptainer run mystack.simg
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Parallel batch job submission
You should check the Apptainer Documentation for how to ensure your Apptainer / Singularity container can access the cores available to your job. For example, MPI applications inside a container usually require the apptainer command itself to be run via mpirun in the jobscript.
Create a jobscript similar to the following:
#!/bin/bash --login
#SBATCH -p multicore     # (or --partition=) Run on the AMD 168-core Genoa nodes
#SBATCH -n 8             # (or --ntasks=) Number of cores
#SBATCH -t 2-0           # Wallclock time limit (2-0 is 2 days, max permitted is 7-0)

# We'll use the system-wide command, hence no apptainer modulefiles to load
module purge

# mpirun knows to run $SLURM_NTASKS processes (which is the -n number above)
mpirun apptainer exec name_of_container name_of_app_inside_container
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Further info
Using your own containers
You may want to use your own containers on the CSF3 – that’s fine. You will need to have a /scratch directory (stub) within the container to bind to the /scratch directory on the CSF.
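If you are unsure whether an existing image already contains the stub, a quick check such as the one below should confirm it. This is just an illustrative command: myimage.sif is a placeholder name, and you should run it without any APPTAINER_BINDPATH set so that /scratch is not simply bind-mounted in.

# List the /scratch directory inside the image; an error means the stub is missing
apptainer exec myimage.sif ls -d /scratch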
If building from an image definition .def file, please include the line

%post
    ...
    mkdir /scratch

in the %post section.
If using a prebuilt .sif (or .simg) container image and you don’t have a .def file available, then follow the steps below to rebuild with a /scratch directory within. It is suggested to do the steps below in scratch, as it is faster than your home directory:
# From the image .sif create a sandbox directory which you can edit
apptainer build --sandbox mysandbox myimage.sif

# Add an empty /scratch dir in the sandbox
mkdir mysandbox/scratch

# Build a new image based on the sandbox directory
apptainer build myimage-csf3ready.sif mysandbox
Converting from a Docker container
Many Docker images exist that can be converted to apptainer / singularity images.
The example below uses https://hub.docker.com/r/cp2k/cp2k/.
There are 2 methods to build the container in a way that is CSF3 compatible (i.e. it includes a /scratch directory).
The end result will be exactly the same with both methods.
1) Build using a .def file
First create a .def file named cp2k-csf.def with the contents below:
BootStrap: docker
From: cp2k/cp2k

%post
    mkdir /scratch
Then build the container with the command:
apptainer build cp2k-csf.sif cp2k-csf.def
2) Convert to .sif then add /scratch in 2 steps
apptainer build cp2k.sif docker://cp2k/cp2k
apptainer build --sandbox cp2k-sandbox cp2k.sif
mkdir cp2k-sandbox/scratch
apptainer build cp2k-csf.sif cp2k-sandbox
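Whichever method you use, a quick sanity check of the finished image (purely illustrative, not a required step) is to list the stub directory from inside it:

# Confirm the rebuilt image contains the /scratch stub
apptainer exec cp2k-csf.sif ls -d /scratch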
Running a container
When running your container, please remember to bind scratch (and also /mnt which will make your home directory available) and run your jobs from there:
apptainer run --bind /scratch:/scratch,/mnt:/mnt my_container.sif arg1 arg2 ...
Alternatively, you can set the following environment variable:
export APPTAINER_BINDPATH="/scratch,/mnt"
apptainer run my_container.sif arg1 arg2 ...
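As a quick way to confirm the binds are in effect, you could list the bound directories from inside the container (a minimal sketch using the same placeholder image name as above):

# With APPTAINER_BINDPATH set, the CSF's /scratch and /mnt trees are visible inside the container
apptainer exec my_container.sif ls /scratch /mnt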
Running GPU containers
If your app will be using a GPU, you’ll need to submit the job to GPU nodes as usual. Your jobscript should load a CUDA modulefile.
We like to use the following code in a script or jobscript to run containers – it will automatically pass the required GPU flags and settings to singularity if needed:
# Note: If running a GPU-enabled container your jobscript must load a 'libs/cuda'
# modulefile before you use the code below.

# These env vars (without the APPTAINER_ prefix) will be visible inside the image at runtime
export APPTAINER_HOME="$HOME"
export APPTAINER_LANG="$LANG"

# Bind the CSF's real /scratch and /mnt dirs to empty dirs inside the image
export APPTAINER_BINDPATH="/scratch,/mnt"

# A GPU job on the CSF will have set $CUDA_VISIBLE_DEVICES, so test
# whether it is set or not (-n means "non-empty")
if [ -n "$CUDA_VISIBLE_DEVICES" ]; then
    # We are a GPU job. Set the special APPTAINERENV_CUDA_VISIBLE_DEVICES to limit
    # which GPUs the container can see.
    export APPTAINERENV_CUDA_VISIBLE_DEVICES="$CUDA_VISIBLE_DEVICES"

    # This is the nvidia flag for the apptainer command line
    NVIDIAFLAG=--nv
fi

# We use the 'sg' command to ensure the container is run with your own group id.
sg $GROUP -c "apptainer run $NVIDIAFLAG my_container.sif arg1 arg2 ..."
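For reference, a minimal GPU jobscript could be laid out as in the sketch below. The partition name and GPU request line are assumptions, so check the CSF3 GPU documentation for the correct values; the 'libs/cuda' modulefile is the one mentioned above (pick a specific version with 'module avail libs/cuda'). The GPU-aware snippet above would then follow the module load line.

#!/bin/bash --login
#SBATCH -p gpuA100       # (assumption) GPU partition name - check the CSF3 GPU docs
#SBATCH -G 1             # (assumption) request one GPU - check the CSF3 GPU docs
#SBATCH -t 1-0           # Wallclock time limit (1-0 is 1 day)

# Load a CUDA modulefile as required for GPU containers (choose a specific version)
module load libs/cuda

# ... place the GPU-aware snippet from above here, ending with:
# sg $GROUP -c "apptainer run $NVIDIAFLAG my_container.sif arg1 arg2 ..."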
Building your own Singularity image
You can build your own sifs for use on the CSF3 via the online resource: https://cloud.sylabs.io/builder
Please remember to include
mkdir /scratch
in the definition instructions. Be aware also that this resource is not affiliated with The University of Manchester.
Updates
OCT 2025 – Removed the requirement to build containers on your own machine, updated variable names to APPTAINER…