Singularity
Overview
Singularity provides a mechanism to run containers, which can be used to package entire scientific workflows, software, libraries, and even data.
Version 3.5.3-1.1 is installed on CSF4.
Restrictions on use
The software is licensed under the BSD 3-clause “New” or “Revised” License.
Set up procedure
Please note: you no longer need to use a modulefile. We have installed Singularity as a system-wide command, so you can simply run the singularity commands you wish without loading any modulefiles.
For example:
[username@login02 [CSF4] ~]$ singularity --version
singularity version 3.5.3-1.1.el7
Running the application
Please do not run Singularity containers on the login node. Jobs should be submitted to the compute nodes via the batch system. You can run singularity on its own to obtain a list of available commands:
singularity

USAGE: singularity [global options...] <command> [command options...] ...

CONTAINER USAGE COMMANDS:
    exec       Execute a command within container
    run        Launch a runscript within container
    shell      Run a Bourne shell within container
    test       Launch a testscript within container

CONTAINER MANAGEMENT COMMANDS:
    apps       List available apps within a container
    bootstrap  *Deprecated* use build instead
    build      Build a new Singularity container
    check      Perform container lint checks
    inspect    Display container's metadata
    mount      Mount a Singularity container image
    pull       Pull a Singularity/Docker container to $PWD
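For example, you can pull a container image from Docker Hub and run a simple command inside it. The image used below is just an illustration; remember that running containers (for example the exec step) should be done in a batch job rather than on the login node:

# Pull an image from Docker Hub into the current directory
# (docker://ubuntu:20.04 is an example image, not a recommendation)
singularity pull docker://ubuntu:20.04

# Run a command inside the resulting image file
singularity exec ubuntu_20.04.sif cat /etc/os-release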
Please note that users will not be permitted to run Singularity containers in --writable mode. You should build containers on your own platform, where you have root access.
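As a sketch of that build step, a minimal definition file and build command, run on your own machine where you have root access, might look like the following. The file name mystack.def, the base image, and the installed package are examples only:

Bootstrap: docker
From: ubuntu:20.04

%post
    # Install whatever your workflow needs (example only)
    apt-get update && apt-get install -y python3

%runscript
    # Command executed by "singularity run"
    exec python3 "$@"

Then build the image and copy the result to the CSF:

sudo singularity build mystack.simg mystack.def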
Serial batch job
Create a batch submission script, for example:
#!/bin/bash --login
#SBATCH -p serial      # Optional, default partition is serial
#SBATCH -n 1           # Optional, jobs in serial partition always use 1 core

singularity run mystack.simg
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
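If your container needs to read or write data outside your home directory, you can bind-mount a host directory into the container with the -B flag. A minimal sketch, where /scratch/$USER is only an example path (use a directory that exists for you):

#!/bin/bash --login
#SBATCH -p serial

# Bind a host directory into the container at /data
singularity run -B /scratch/$USER:/data mystack.simg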
Single-node Parallel batch job
You should check the Singularity Documentation for how to ensure your Singularity container can access the cores available to your job. For example, MPI applications inside a container usually require the singularity command itself to be run via mpirun in the jobscript.
Create a jobscript similar to the following:
#!/bin/bash --login
#SBATCH -p multicore   # Single-node parallel job
#SBATCH -n 8           # Number of cores, can be 2--40

# mpirun knows how many cores to use
mpirun singularity exec name_of_container name_of_app_inside_container
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
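Following on from the note above about core availability, a quick way to check what the container can actually see is to run a core-count command inside it from a jobscript. This sketch assumes your image contains the standard nproc command (true of most Linux base images); the reported count should match the -n value requested:

#!/bin/bash --login
#SBATCH -p multicore
#SBATCH -n 8

# Print the number of cores visible inside the container
singularity exec mystack.simg nproc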
Multi-node Parallel batch job
You should check the Singularity Documentation for how to ensure your Singularity container can access the cores available to your job. For example, MPI applications inside a container usually require the singularity command itself to be run via mpirun in the jobscript.
Create a jobscript similar to the following:
#!/bin/bash --login
#SBATCH -p multinode   # Multi-node parallel job
#SBATCH -n 120         # Number of cores, can be 80 or more in multiples of 40

# mpirun knows how many cores to use
mpirun singularity exec name_of_container name_of_app_inside_container
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
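For multi-node MPI jobs, the MPI library inside the container usually needs to be compatible with the host MPI used to launch it. A sketch of a jobscript that loads a host MPI before launching follows; the modulefile name is an assumption only, so check module avail for what is actually installed:

#!/bin/bash --login
#SBATCH -p multinode
#SBATCH -n 80            # Multiples of 40

# Load a host MPI to provide mpirun (modulefile name is an example)
module load openmpi

# The MPI inside the container should be compatible with the host MPI
mpirun singularity exec mystack.simg name_of_app_inside_container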
Further info
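Singularity documentation: https://sylabs.io/docs/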
Updates
None.