{"id":822,"date":"2022-04-22T16:27:12","date_gmt":"2022-04-22T15:27:12","guid":{"rendered":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/?page_id=822"},"modified":"2022-04-22T16:33:58","modified_gmt":"2022-04-22T15:33:58","slug":"singularity","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/singularity\/","title":{"rendered":"Singularity"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"https:\/\/www.sylabs.io\/docs\/\">Singularity<\/a> provides a mechanism to run containers, which can be used to package entire scientific workflows, software, libraries, and even data.<\/p>\n<p>Version 3.5.3-1.1 is installed on CSF4.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>The software is licensed under the BSD 3-clause &#8220;New&#8221; or &#8220;Revised&#8221; <a href=\"https:\/\/github.com\/singularityware\/singularity\/blob\/master\/LICENSE.md\">License<\/a>.<\/p>\n<h2>Set up procedure<\/h2>\n<p><strong>Please note:<\/strong> you no longer need to use a modulefile. We have installed singularity as a system-wide command, so you can run singularity commands without loading any modulefiles.<\/p>\n<p>For example:<\/p>\n<pre>\r\n[<em>username<\/em>@login02 [CSF4] ~]$ singularity --version\r\nsingularity version 3.5.3-1.1.el7\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Please do not run Singularity containers on the login node. Jobs should be submitted to the compute nodes via the batch system. You may run the command on its own to obtain a list of available subcommands and flags:<\/p>\n<pre>\r\nsingularity\r\nUSAGE: singularity [global options...] <command> [command options...] 
...\r\n\r\nCONTAINER USAGE COMMANDS:\r\n    exec       Execute a command within container                               \r\n    run        Launch a runscript within container                              \r\n    shell      Run a Bourne shell within container                              \r\n    test       Launch a testscript within container                             \r\n\r\nCONTAINER MANAGEMENT COMMANDS:\r\n    apps       List available apps within a container                           \r\n    bootstrap  *Deprecated* use build instead                                   \r\n    build      Build a new Singularity container                                \r\n    check      Perform container lint checks                                    \r\n    inspect    Display container's metadata                                     \r\n    mount      Mount a Singularity container image                              \r\n    pull       Pull a Singularity\/Docker container to $PWD\r\n<\/pre>\n<p>Please note that users are not permitted to run Singularity containers in <code>--writable<\/code> mode. You should build containers on your own platform, where you have root access.<\/p>\n<h3>Serial batch job<\/h3>\n<p>Create a batch submission script, for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login \r\n#SBATCH -p serial            # Optional, default partition is serial\r\n#SBATCH -n 1                 # Optional, jobs in serial partition always use 1 core\r\n\r\nsingularity run <em>mystack<\/em>.simg\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Single-node Parallel batch job<\/h3>\n<p>You should check the <a href=\"http:\/\/singularity.lbl.gov\/docs\">Singularity Documentation<\/a> for how to ensure your Singularity container can access the cores available to your job. 
For example, MPI applications inside a container usually require the singularity command itself to be run via <code>mpirun<\/code> in the jobscript.<\/p>\n<p>Create a jobscript similar to the following:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore        # Single-node parallel job\r\n#SBATCH -n 8                # Number of cores, can be 2--40\r\n\r\n# mpirun knows how many cores to use\r\nmpirun singularity exec <em>name_of_container<\/em> <em>name_of_app_inside_container<\/em>\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h3>Multi-node Parallel batch job<\/h3>\n<p>You should check the <a href=\"http:\/\/singularity.lbl.gov\/docs\">Singularity Documentation<\/a> for how to ensure your Singularity container can access the cores available to your job. For example, MPI applications inside a container usually require the singularity command itself to be run via <code>mpirun<\/code> in the jobscript.<\/p>\n<p>Create a jobscript similar to the following:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multinode        # Multi-node parallel job\r\n#SBATCH -n 120              # Number of cores, can be 80 or more in multiples of 40\r\n\r\n# mpirun knows how many cores to use\r\nmpirun singularity exec <em>name_of_container<\/em> <em>name_of_app_inside_container<\/em>\r\n<\/pre>\n<p>Submit the jobscript using: <\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/singularity.lbl.gov\/docs\">Singularity Documentation<\/a><\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>None.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview Singularity provides a mechanism to run containers where containers can be used to package entire scientific workflows, software and libraries, and even data. Version 3.5.3-1.1 is installed on CSF4. 
Restrictions on use The software is licensed under the BSD 3-clause &#8220;New&#8221; or &#8220;Revised&#8221; License. Set up procedure Please note: you no longer need to use a modulefile. We have installed singularity as a system-wide command. So you can simply run the singularity commands you.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/singularity\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":4,"featured_media":0,"parent":49,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-822","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/822","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/comments?post=822"}],"version-history":[{"count":5,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/822\/revisions"}],"predecessor-version":[{"id":827,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/822\/revisions\/827"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/49"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/media?parent=822"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}