{"id":650,"date":"2018-10-29T15:18:43","date_gmt":"2018-10-29T15:18:43","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=650"},"modified":"2026-01-16T15:36:26","modified_gmt":"2026-01-16T15:36:26","slug":"singularity","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/singularity\/","title":{"rendered":"Singularity \/ Apptainer"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"https:\/\/www.sylabs.io\/docs\/\">Singularity<\/a> provides a mechanism to run containers where containers can be used to package entire scientific workflows, software and libraries, and even data.<\/p>\n<p>Version 1.4.2-1.el9 is installed on the CSF3.<\/p>\n<div class=\"note\">Please note: Docker is <strong>not<\/strong> available on the CSF for security reasons. Instead you should use Singularity or Apptainer. You can convert your Docker images to Singularity \/ Apptainer images.<\/p>\n<p>Please <a href=\"#own\">see below<\/a> for info on using your own containers.\n<\/div>\n<h2>Restrictions on use<\/h2>\n<p>The software is licensed under the BSD 3-clause &#8220;New&#8221; or &#8220;Revised&#8221; <a href=\"https:\/\/github.com\/apptainer\/singularity\/blob\/master\/LICENSE.md\">License<\/a>.<\/p>\n<h2>Set up procedure<\/h2>\n<p><strong>Please note:<\/strong> <strong>you no longer need to use a modulefile<\/strong>.<\/p>\n<p>We have installed Apptainer, which can be run using the command <code>apptainer<\/code> or <code>singularity<\/code>, as a system-wide command. So you can simply run the commands you wish to run <strong>without loading any modulefiles<\/strong>.<\/p>\n<p>For example:<\/p>\n<pre># Ensure you have NO singularity modulefiles loaded ('module purge' will unload all modulefiles)\r\n\r\n[<em>username<\/em>@login2[csf3] ~]$ singularity --version\r\napptainer version 1.4.2-1.el9\r\n  #\r\n  # It is apptainer that is installed on the CSF. 
Hence the\r\n  # 'singularity' command is an alias for apptainer.\r\n\r\n[<em>username<\/em>@login2[csf3] ~]$ apptainer --version\r\napptainer version 1.4.2-1.el9\r\n<\/pre>\n<h3>Singularity version 3<\/h3>\n<p>If you need singularity v3, e.g., for Nextflow, please first try using the system-wide version &#8211; i.e., <em>do not<\/em> load any singularity modulefiles.<\/p>\n<p>The system-wide singularity is provided by Apptainer v1.4.2, which includes many of the features you may want from singularity v3.<\/p>\n<p>In short, you will likely be able to do everything you need to do simply by NOT loading any singularity or apptainer modulefiles.<\/p>\n<h2>Running the application<\/h2>\n<p>Please do not run Apptainer \/ Singularity containers on the login node. Jobs should be submitted to the compute nodes via batch. You may run the command on its own to obtain a list of available commands:<\/p>\n<pre>\r\napptainer\r\nUsage:\r\n  apptainer [global options...] &lt;command&gt;\r\n\r\nAvailable Commands:\r\n  build       Build an Apptainer image\r\n  cache       Manage the local cache\r\n  capability  Manage Linux capabilities for users and groups\r\n  checkpoint  Manage container checkpoint state (experimental)\r\n  completion  Generate the autocompletion script for the specified shell\r\n  config      Manage various apptainer configuration (root user only)\r\n  delete      Deletes requested image from the library\r\n  exec        Run a command within a container\r\n  inspect     Show metadata for an image\r\n  instance    Manage containers running as services\r\n  key         Manage OpenPGP keys\r\n  keyserver   Manage apptainer keyservers\r\n  oci         Manage OCI containers\r\n  overlay     Manage an EXT3 writable overlay image\r\n  plugin      Manage Apptainer plugins\r\n  pull        Pull an image from a URI\r\n  push        Upload image to the provided URI\r\n  registry    Manage authentication to OCI\/Docker registries\r\n  remote      Manage apptainer remote 
endpoints\r\n  run         Run the user-defined default command within a container\r\n  run-help    Show the user-defined help for an image\r\n  search      Search a Container Library for images\r\n  shell       Run a shell within a container\r\n  sif         Manipulate Singularity Image Format (SIF) images\r\n  sign        Add digital signature(s) to an image\r\n  test        Run the user-defined tests within a container\r\n  verify      Verify digital signature(s) within an image\r\n  version     Show the version for Apptainer\r\n\r\nRun 'apptainer --help' for more detailed usage information.\r\n<\/pre>\n<h3>Serial batch job submission<\/h3>\n<p>Create a batch submission script, for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login \r\n#SBATCH -p serial       # (or --partition=) Run on the nodes dedicated to 1-core jobs\r\n#SBATCH -t 2-0          # Wallclock time limit (2-0 is 2 days, max permitted is 7-0)\r\n\r\n# We'll use the system-wide command, hence no apptainer modulefiles to load\r\nmodule purge\r\n\r\napptainer run mystack.simg\r\n<\/pre>\n<p>Submit the jobscript using:<\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Parallel batch job submission<\/h3>\n<p>You should check the <a href=\"https:\/\/apptainer.org\/docs\/user\/main\/mpi.html#running-an-mpi-application\">Apptainer Documentation<\/a> for how to ensure your Apptainer \/ Singularity container can access the cores available to your job. 
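Before launching the real MPI application, a quick sanity check is to confirm how many tasks Slurm has actually granted, since `mpirun` will start one container instance per rank. A minimal sketch (Slurm exports `SLURM_NTASKS` inside a batch job; the fallback to 1 is only so the snippet also runs outside a job):

```shell
# Report how many MPI ranks this job can start.
# SLURM_NTASKS is exported by Slurm inside a batch job; default to 1 elsewhere.
ntasks="${SLURM_NTASKS:-1}"
echo "MPI ranks available to this job: $ntasks"
```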
For example, MPI applications inside a container usually require the apptainer command itself to be run via <code>mpirun<\/code> in the jobscript.<\/p>\n<p>Create a jobscript similar to the following:<\/p>\n<pre>#!\/bin\/bash --login\r\n#SBATCH -p multicore    # (or --partition=) Run on the AMD 168-core Genoa nodes\r\n#SBATCH -n 8            # (or --ntasks=) Number of cores\r\n#SBATCH -t 2-0          # Wallclock time limit (2-0 is 2 days, max permitted is 7-0)\r\n\r\n# We'll use the system-wide command, hence no apptainer modulefiles to load\r\nmodule purge\r\n\r\n# mpirun knows to run $SLURM_NTASKS processes (which is the -n number above)\r\nmpirun apptainer exec <em>name_of_container<\/em> <em>name_of_app_inside_container<\/em>\r\n<\/pre>\n<p>Submit the jobscript using:<\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h2 id=\"own\">Using your own containers<\/h2>\n<p>You may want to use your own containers on the CSF3 &#8211; that&#8217;s fine. You will need to have a <code>\/scratch<\/code> directory (stub) <em>within<\/em> the container to bind to the <code>\/scratch<\/code> directory on the CSF.<\/p>\n<h3>Rebuilding with a scratch mount point<\/h3>\n<p>If building from an image definition <code>.def<\/code> file, please include the line <\/p>\n<pre>\r\n%post\r\n...\r\nmkdir \/scratch\r\n<\/pre>\n<p>in the <code>%post<\/code> section.<\/p>\n<p>If using a prebuilt <code>.sif<\/code> (or <code>.simg<\/code>) container image and you don&#8217;t have a <code>.def<\/code> file available, then follow the steps below to rebuild with a <code>\/scratch<\/code> directory within. 
It is suggested to do the below steps in scratch, as it is faster than your home directory:<\/p>\n<pre>\r\n# From the image .sif create a sandbox directory which you can edit\r\napptainer build --sandbox mysandbox myimage.sif\r\n# add an empty \/scratch dir in the sandbox\r\nmkdir mysandbox\/scratch\r\n# build a new image based on the sandbox directory\r\napptainer build myimage-csf3ready.sif mysandbox\r\n<\/pre>\n<h3>Adding scratch and home at runtime<\/h3>\n<p>If you don&#8217;t want to rebuild your image, you can make scratch, your home directory and any additional Research Data Storage you may have access to available inside the container by using the <code>-B<\/code> (bind) flag when you run apptainer:<\/p>\n<pre>\r\napptainer exec -B \/scratch,\/mnt <em>myimage.sif<\/em> <em>command<\/em>\r\n<\/pre>\n<h3>Converting from a Docker container &#8211; example 1<\/h3>\n<p>Many Docker images exist that can be converted to apptainer \/ singularity images. We use an apptainer recipe (<code>.def<\/code> file) to pull down the Docker container and build an Apptainer image from it.<\/p>\n<p>The example below uses <a href=\"https:\/\/hub.docker.com\/r\/cp2k\/cp2k\/\">https:\/\/hub.docker.com\/r\/cp2k\/cp2k\/<\/a>.<br \/>\nThere are 2 methods to build the container in a way that is CSF3 compatible (i.e. it includes a \/scratch directory).<br \/>\nThe end result will be exactly the same with both methods.<\/p>\n<h4>1) Build using a .def file<\/h4>\n<p>First create a .def file named cp2k-csf.def with the contents below:<\/p>\n<pre>\r\nBootStrap: docker\r\nFrom: cp2k\/cp2k\r\n\r\n%post\r\n   mkdir \/scratch\r\n<\/pre>\n<p>Then build the container with the command:<\/p>\n<pre>apptainer build cp2k-csf.sif cp2k-csf.def<\/pre>\n<h4>2) Convert to .sif then add \/scratch in 2 steps<\/h4>\n<pre>\r\napptainer build cp2k.sif docker:\/\/cp2k\/cp2k\r\napptainer build --sandbox cp2k-sandbox cp2k.sif\r\nmkdir cp2k-sandbox\/scratch\r\napptainer build cp2k-csf.sif cp2k-sandbox\r\n<\/pre>\n<h3>Converting from a Docker container &#8211; example 
2<\/h3>\n<p>The following example shows how a container can be used to make available old versions of software that might be difficult to obtain or install on the current system.<\/p>\n<p>We were asked to install TensorFlow 2.15 with TensorFlow Probability 0.23. The following method can be used to create an Apptainer image for this:<\/p>\n<p>First create a text file named <code>tensorflow-gpu-2.15.0.def<\/code> containing:<\/p>\n<pre>\r\nBootstrap: docker\r\n\r\nFrom: tensorflow\/tensorflow:2.15.0-gpu\r\n\r\n%post\r\n  pip install tensorflow-probability==0.23.0\r\n<\/pre>\n<p>On the CSF login node, build the image:<\/p>\n<pre>\r\nmodule purge\r\napptainer build tensorflow-gpu-2.15.0.sif tensorflow-gpu-2.15.0.def\r\n<\/pre>\n<p>Now test in an interactive session on the CSF (1 x L40S GPU, 4 CPU cores, 30 minute session):<\/p>\n<pre>\r\nsrun -p gpuL -G 1 -n 4 -t 30 --pty bash\r\nmodule purge\r\napptainer exec -B \/scratch,\/mnt --nv tensorflow-gpu-2.15.0.sif python3 <em>my_python_code<\/em>.py\r\n\r\n# When finished testing, return to the login node\r\nexit\r\n<\/pre>\n<p>You should see the TensorFlow results displayed to your terminal.<\/p>\n<h3>Running a container<\/h3>\n<p>When running your container, please remember to bind scratch (and also \/mnt which will make your home directory available) and run your jobs from there:<\/p>\n<pre>\r\napptainer run -B \/scratch,\/mnt my_container.sif <em>arg1 arg2 ...<\/em>\r\n<\/pre>\n<p>Alternatively, you can set the following environment variable:<\/p>\n<pre>\r\nexport APPTAINER_BINDPATH=\"\/scratch,\/mnt\"\r\napptainer run my_container.sif <em>arg1 arg2 ...<\/em>\r\n<\/pre>\n<h3>Running GPU containers<\/h3>\n<p>If your app will be using a GPU, you&#8217;ll need to <a href=\"\/csf3\/batch-slurm\/gpu-jobs-slurm\/\">submit the job to GPU nodes<\/a> as usual. 
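As background: inside a GPU job, Slurm sets `CUDA_VISIBLE_DEVICES` to the GPU indices assigned to you, and this is what the container will inherit. A minimal sketch to display it (outside a GPU job the variable is unset, so the fallback prints `none`):

```shell
# Show which GPU indices Slurm has assigned to this job.
# CUDA_VISIBLE_DEVICES is set only inside a GPU job; print 'none' otherwise.
echo "Assigned GPUs: ${CUDA_VISIBLE_DEVICES:-none}"
```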
Your jobscript should load a <a href=\"\/csf3\/batch\/gpu-jobs\/#CUDA_Libraries\">CUDA modulefile<\/a>.<\/p>\n<p>The simplest method of running a GPU container (from a jobscript or srun interactive session) is to do:<\/p>\n<pre>\r\napptainer exec -B \/scratch,\/mnt --nv <em>myimage.sif<\/em> <em>args...<\/em>\r\n<\/pre>\n<p>This will make the GPUs that have been assigned to your job available inside the container.<\/p>\n<p>The following instructions are from the pre-Slurm CSF and can be ignored (TBC).<\/p>\n<p>We like to use the following code in a script or jobscript to run containers &#8211; it will automatically pass the required GPU flags and settings to singularity if needed:<\/p>\n<pre>\r\n# Note: If running a GPU-enabled container your jobscript must load a 'libs\/cuda'\r\n# modulefile before you use the code below.\r\n\r\n# These env vars (without the APPTAINER_) will be visible inside the image at runtime\r\nexport APPTAINER_HOME=\"$HOME\"\r\nexport APPTAINER_LANG=\"$LANG\"\r\n\r\n# Bind the CSF's real \/scratch and \/mnt dirs to empty dirs inside the image\r\nexport APPTAINER_BINDPATH=\"\/scratch,\/mnt\"\r\n\r\n# A GPU job on the CSF will have set $CUDA_VISIBLE_DEVICES, so test\r\n# whether it is set or not (-n means \"non-empty\")\r\nif [ -n \"$CUDA_VISIBLE_DEVICES\" ]; then\r\n   # We are a GPU job. 
Set the special APPTAINERENV_CUDA_VISIBLE_DEVICES to limit\r\n   # which GPUs the container can see.\r\n   export APPTAINERENV_CUDA_VISIBLE_DEVICES=\"$CUDA_VISIBLE_DEVICES\"\r\n   # This is the nvidia flag for the apptainer command line\r\n   NVIDIAFLAG=--nv\r\nfi\r\n\r\n# We use the 'sg' command to ensure the container is run with your own group id.\r\nsg $GROUP -c \"apptainer run $NVIDIAFLAG my_container.sif <em>arg1 arg2 ...<\/em>\"\r\n<\/pre>\n<h3>Building your own Singularity image<\/h3>\n<p>You can build your own <code>.sif<\/code> images for use on the CSF3 via the online resource: <a href=\"https:\/\/cloud.sylabs.io\/builder\">https:\/\/cloud.sylabs.io\/builder<\/a><\/p>\n<p>Please remember to include <\/p>\n<pre>mkdir \/scratch<\/pre>\n<p> in the <code>%post<\/code> section of the definition instructions. Be aware also that this resource is not affiliated with The University of Manchester.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/singularity.lbl.gov\/docs\">Singularity Documentation<\/a><\/li>\n<li><a href=\"https:\/\/apptainer.org\/docs\/user\/latest\/build_a_container.html\">Apptainer documentation on building containers<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>OCT 2025 &#8211; Removed the requirement to build containers on your own machine; updated variable names to APPTAINER&#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview Singularity provides a mechanism to run containers where containers can be used to package entire scientific workflows, software and libraries, and even data. Version 1.4.2-1.el9 is installed on the CSF3. Please note: Docker is not available on the CSF for security reasons. Instead you should use Singularity or Apptainer. You can convert your Docker images to Singularity \/ Apptainer images. Please see below for info on using your own containers. Restrictions on use The.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/singularity\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-650","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/650","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=650"}],"version-history":[{"count":21,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/650\/revisions"}],"predecessor-version":[{"id":11706,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/650\/revisions\/11706"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=650"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}