{"id":598,"date":"2018-10-16T16:50:37","date_gmt":"2018-10-16T15:50:37","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=598"},"modified":"2026-01-23T14:48:28","modified_gmt":"2026-01-23T14:48:28","slug":"openmpi","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/libraries\/openmpi\/","title":{"rendered":"MPI (OpenMPI)"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>The <em>Message-Passing Interface<\/em> (MPI) library provides functionality to implement single-node and <em>multi-node<\/em> multi-core parallel applications. The runtime tools allow you to run applications compiled against the MPI library. The particular implementation installed and supported on CSF is <a href=\"http:\/\/www.open-mpi.org\/\">OpenMPI<\/a>.<\/p>\n<p>NOTE: do not confuse OpenMPI with <a href=\"\/csf3\/software\/libraries\/openmp\">OpenMP<\/a>. While they are both used to develop parallel applications, they are different technologies and offer different capabilities.<\/p>\n<p>Versions available:<\/p>\n<ul>\n<li>5.0.7 (with and without CUDA support)<\/li>\n<li>4.1.8 (with and without CUDA support)<\/li>\n<li>4.1.6<\/li>\n<li>4.1.0<\/li>\n<li>4.0.1 (with and without CUDA support)<\/li>\n<li>3.1.4<\/li>\n<li>3.1.3<\/li>\n<\/ul>\n<p>The modulefile names below indicate which compiler was used and the version of OpenMPI. <strong>NOTE: <\/strong>we no longer distinguish between non-InfiniBand (slower networking) and InfiniBand (faster networking) versions. OpenMPI will use the fastest available network. Previously you may have loaded a modulefile with <code>-ib<\/code> at the end of the name. 
This is no longer necessary.<\/p>\n<h2>When to use an MPI Modulefile<\/h2>\n<p>The following two scenarios require an MPI modulefile:<\/p>\n<h3>Running Centrally Installed MPI programs<\/h3>\n<p>If you intend to run a centrally installed application (e.g., GROMACS) then we provide a modulefile for that application which loads the appropriate MPI modulefile. Hence you rarely need to load an MPI modulefile from the list below yourself &#8211; the application&#8217;s modulefile will do it for you. If you are not compiling your own code or an open-source application, you can stop reading this page here.<\/p>\n<h3>Writing your own MPI programs or compiling open source programs<\/h3>\n<p>If writing your own parallel MPI application, or compiling a downloaded open-source MPI application, you <em>must<\/em> load an MPI modulefile both to compile the source code and to run the executable (in a jobscript). If writing your own application you will need to amend your program to include the relevant calls to the MPI library.<\/p>\n<p>The MPI modulefiles below will <em>automatically load<\/em> the correct compiler modulefile for you. This is the compiler that was used to build the MPI libraries, so you should use that same compiler to build your own MPI application. 
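<\/p>\n<p>To illustrate the structure of such a program, the following minimal C sketch (error handling omitted) shows the basic MPI calls; compile it with the <code>mpicc<\/code> wrapper and run it with <code>mpirun<\/code>, as described in the sections below:<\/p>\n<pre>\r\n#include &lt;mpi.h&gt;\r\n#include &lt;stdio.h&gt;\r\n\r\nint main(int argc, char **argv)\r\n{\r\n    int rank, size;\r\n    MPI_Init(&amp;argc, &amp;argv);                \/* start the MPI runtime *\/\r\n    MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);  \/* id of this process *\/\r\n    MPI_Comm_size(MPI_COMM_WORLD, &amp;size);  \/* total number of processes *\/\r\n    printf(\"Hello from rank %d of %d\\n\", rank, size);\r\n    MPI_Finalize();                            \/* shut down the MPI runtime *\/\r\n    return 0;\r\n}\r\n<\/pre>\n<p>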
The modulefile name\/path (see below) indicates which compiler will be used.<\/p>\n<p>For further details of the available compilers please see:<\/p>\n<ul>\n<li><a href=\"\/csf3\/software\/compilers\/intel-oneapi\">Intel OneAPI<\/a><\/li>\n<li><a href=\"\/csf3\/software\/compilers\/gnu\">GNU Compiler<\/a><\/li>\n<li><a href=\"\/csf3\/software\/compilers\/pgi\">PGI Compiler<\/a><\/li>\n<\/ul>\n<p>Please <a href=\"\/csf3\/overview\/help\/\">contact us<\/a> if you require advice.<\/p>\n<p>Programming with MPI is currently beyond the scope of this webpage.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>Code may be <em>compiled<\/em> on the login node, but aside from <em>very<\/em> short test runs (e.g., one minute on fewer than 4 cores), executables must always be run by submitting to the batch system, Slurm.<\/p>\n<h2>Set up procedure<\/h2>\n<p>Load the appropriate modulefile from the lists below. The <code>openmpi<\/code> modulefiles below will <em>automatically load<\/em> the compiler modulefile matching the version of the compiler used to build the MPI libraries.<\/p>\n<h3>Intel compilers<\/h3>\n<p>Load <strong>one<\/strong> of the following modulefiles:<\/p>\n<pre>\r\n# Note: the compiler wrapper scripts use icx, icpx, and ifx from OneAPI\r\nmodule load mpi\/intel-oneapi-2024.2.0\/openmpi\/5.0.7-ifx\r\n\r\n# Note: the compiler wrapper scripts use icx and icpx from OneAPI, but ifort (the classic Fortran compiler)\r\nmodule load mpi\/intel-oneapi-2024.2.0\/openmpi\/5.0.7-cuda    # CUDA support compiled in\r\nmodule load mpi\/intel-oneapi-2024.2.0\/openmpi\/5.0.7\r\nmodule load mpi\/intel-oneapi-2024.2.0\/openmpi\/4.1.8\r\nmodule load mpi\/intel-oneapi-2023.1.0\/openmpi\/4.1.8\r\n\r\n## Versions <em>below<\/em> here do NOT have Slurm support compiled in, so are NOT recommended\r\n\r\n# Note: the compiler wrapper scripts use icc, icpc, and ifort\r\nmodule load mpi\/intel-18.0\/openmpi\/4.1.0\r\nmodule load mpi\/intel-18.0\/openmpi\/4.0.1\r\nmodule load 
mpi\/intel-18.0\/openmpi\/3.1.4\r\n\r\nmodule load mpi\/intel-17.0\/openmpi\/4.0.1\r\nmodule load mpi\/intel-17.0\/openmpi\/3.1.3\r\n\r\n\r\n# CUDA-aware MPI is available via\r\nmodule load mpi\/intel-18.0\/openmpi\/4.0.1-cuda\r\nmodule load mpi\/intel-17.0\/openmpi\/3.1.3-cuda\r\n<\/pre>\n<h3>GNU compilers<\/h3>\n<p>We recommend using versions compiled with GCC 14.2.0 or newer so that you can use GCC optimizing flags for the AMD Genoa nodes in your compilation commands. See the <a href=\"\/csf3\/software\/compilers\/gnu\/#Optimizing_Flags_for_CSF_Hardware\">GCC Optimizing Flags<\/a> notes for more information on those flags.<\/p>\n<p>Load <strong>one<\/strong> of the following modulefiles:<\/p>\n<pre>\r\nmodule load mpi\/gcc\/openmpi\/5.0.7-gcc-14.2.0\r\nmodule load mpi\/gcc\/openmpi\/4.1.8-gcc-14.2.0\r\nmodule load mpi\/gcc\/openmpi\/4.1.8-gcc-11.4.1\r\nmodule load mpi\/gcc\/openmpi\/4.1.8-gcc-8.2.0          # This version and those above were compiled on CSF3 (el9, Slurm)\r\nmodule load mpi\/gcc\/openmpi\/4.1.6-ucx-gcc-14.1.0     # This version and those below were copied from CSF3 (el7, SGE)\r\nmodule load mpi\/gcc\/openmpi\/4.1.0\r\nmodule load mpi\/gcc\/openmpi\/4.0.1\r\nmodule load mpi\/gcc\/openmpi\/3.1.3\r\n\r\n# CUDA-aware MPI is available via\r\nmodule load mpi\/gcc\/openmpi\/5.0.7-cuda-12.8.1-gcc-14.2.0\r\nmodule load mpi\/gcc\/openmpi\/4.1.8-cuda-12.4.1-gcc-11.4.1   # This version and those above were compiled on CSF3 (el9, Slurm)\r\nmodule load mpi\/gcc\/openmpi\/4.0.1-cuda                     # This version and those below were copied from CSF3 (el7, SGE)\r\n<\/pre>\n<h2>Compiling the application<\/h2>\n<p>If you are simply running an existing application you can skip this step.<\/p>\n<p>If compiling your own (or open-source) MPI application you should use the MPI compiler wrapper scripts <code>mpif90<\/code>, <code>mpicc<\/code>, <code>mpiCC<\/code>. 
These will ultimately use the compiler you selected above (Intel, PGI, GNU) but will add the compiler flags needed to ensure the MPI header files and libraries are used during compilation (setting these flags manually can be very difficult to get right).<\/p>\n<p>Example compilations on the command line:<\/p>\n<pre>\r\nmpif90 my_prog.f90 -o my_prog        # Produces a Fortran MPI executable named my_prog\r\n\r\nmpicc my_prog.c -o my_prog           # Produces a C MPI executable named my_prog\r\n\r\nmpiCC my_prog.cpp -o my_prog         # Produces a C++ MPI executable named my_prog\r\n<\/pre>\n<p>In some build procedures you specify the compiler name using environment variables such as <code>CC<\/code> and <code>FC<\/code>. When compiling MPI code, simply use the name of the wrapper script as your compiler name. For example:<\/p>\n<pre>\r\nCC=mpicc FC=mpif90 .\/configure --prefix=$HOME\/local\/appinst\r\n<\/pre>\n<p>Please consult the build instructions for your application.<\/p>\n<h2>Parallel batch job submission<\/h2>\n<p>We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job <abbr title=\"Slurm jobs inherit your login environment by default (sbatch --export=ALL)\">inherit these settings<\/abbr>.<\/p>\n<h3>Intel nodes<\/h3>\n<p>Note that you are not restricted to using the version of MPI compiled with the Intel compiler. The GNU (gcc) compiled version or the PGI compiled version can be used on the Intel hardware.<\/p>\n<p>To submit an MPI batch job to the Intel nodes, create a jobscript similar to the following. 
<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore\r\n#SBATCH -n 8\r\n#SBATCH -t 1-0\r\n\r\n## Load the required modulefile (intel, gcc, or PGI versions can be used)\r\nmodule purge\r\nmodule load mpi\/intel-oneapi-2024.2.0\/openmpi\/4.1.8\r\n\r\n## The variable $SLURM_NTASKS sets the number of processes.\r\n## OpenMPI is Slurm aware, so in most cases the -n flag is not required.\r\nmpirun -n $SLURM_NTASKS .\/my_prog\r\n<\/pre>\n<p>Submit the jobscript using:<\/p>\n<pre>sbatch <em>scriptname<\/em><\/pre>\n<p>where <em>scriptname<\/em> is the name of your jobscript.<\/p>\n<h2>Small MPI jobs that require 32 cores or fewer<\/h2>\n<p>In these jobs, all processes will be placed on the same physical node and hence no communication over the network will take place. Instead, shared-memory communication will be used, which is more efficient.<\/p>\n<h2>Further info<\/h2>\n<p>Online help via the command line:<\/p>\n<pre>\r\nman mpif90         # for Fortran MPI\r\nman mpicc          # for C\/C++ MPI\r\nman mpirun         # for information on running MPI executables\r\n<\/pre>\n<ul>\n<li><a href=\"http:\/\/www.open-mpi.org\/\">Open MPI website<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Overview The Message-Passing Interface (MPI) library provides functionality to implement single-node and multi-node multi-core parallel applications. The runtime tools allow you to run applications compiled against the MPI library. The particular implementation installed and supported on CSF is OpenMPI. NOTE: do not confuse OpenMPI with OpenMP. While they are both used to develop parallel applications, they are different technologies and offer different capabilities. Versions available: 5.0.7 (with and without CUDA support) 4.1.8 (with and without.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/libraries\/openmpi\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":140,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-598","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/598","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=598"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/598\/revisions"}],"predecessor-version":[{"id":11725,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/598\/revisions\/11725"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/140"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=598"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}