{"id":1391,"date":"2025-02-28T19:42:10","date_gmt":"2025-02-28T19:42:10","guid":{"rendered":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/?page_id=1391"},"modified":"2025-02-28T19:46:53","modified_gmt":"2025-02-28T19:46:53","slug":"dl_meso","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/dl_meso\/","title":{"rendered":"DL_MESO"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>DL_MESO is a general-purpose mesoscale simulation package which supports both Lattice Boltzmann Equation (LBE) and Dissipative Particle Dynamics (DPD) methods.<\/p>\n<p>Version 2.8 is now available. It was compiled with the Intel 2024.2 compilers. All utilities are also available. The GPU-enabled DPD executables are not installed, as this cluster has no GPUs.<\/p>\n<p>The Java interface is not available.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>Whilst the software is free for academic use, there are limitations within the <a href=\"http:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-content\/uploads\/DL_MESO_LICENCE.txt\">DL_MESO license agreement<\/a> which must be strictly adhered to by users. All users who wish to use the software must request access to the &#8216;dlmeso&#8217; unix group. A copy of the full license is also available on the CSF in <code>$dlmeso_home\/$dlmeso_ver\/LICENCE<\/code>. Important points to note are:<\/p>\n<ul>\n<li>No industrially-funded work may be undertaken using the software. See clauses 2.1.3 and 2.2 of the license.<\/li>\n<li>The software is only available to staff and students of the University of Manchester. Users are reminded that they must not share their password with anyone, or allow anyone else to utilise their account.<\/li>\n<li>Citation of the software must appear in any published work. 
See clause 4.2 for the required text.<\/li>\n<\/ul>\n<p>There is no access to the source code on the CSF.<\/p>\n<p>To get access to the software, please confirm to <a href=\"&#x6d;&#x61;&#x69;&#x6c;&#x74;&#111;&#58;&#105;&#116;s-r&#x69;&#x2d;&#x74;&#x65;&#x61;&#x6d;&#64;&#109;&#97;nch&#x65;&#x73;&#x74;&#x65;&#x72;&#x2e;&#97;&#99;&#46;&#117;k\">&#x69;&#x74;&#x73;&#45;&#114;i-&#x74;&#x65;&#x61;&#109;&#64;ma&#x6e;&#x63;&#x68;&#101;&#115;te&#x72;&#x2e;&#x61;&#99;&#46;uk<\/a> that your work will meet the above T&amp;Cs.<\/p>\n<h2>Set up procedure<\/h2>\n<p>Once you have been added to the unix group, please load the following modulefile:<\/p>\n<pre>\r\nmodule load apps\/intel-2024.2\/dl_meso\/2.8\r\n<\/pre>\n<h2>Running the application<\/h2>\n<p>Note that there are some differences between the User Manual and the CSF installation, in particular the naming of the executables. The table below shows the main executables that are available.<\/p>\n<p>DL_MESO v2.8 executables:<\/p>\n<table>\n<tbody>\n<tr>\n<td><strong>Executable<\/strong><\/td>\n<td><strong>Simulation<\/strong><\/td>\n<\/tr>\n<tr>\n<td>slbe.exe<\/td>\n<td>Serial LBE<\/td>\n<\/tr>\n<tr>\n<td>plbe.exe<\/td>\n<td>Parallel LBE (uses MPI &#8211; single and multi-node jobs)<\/td>\n<\/tr>\n<tr>\n<td>sdpd.exe<\/td>\n<td>Serial DPD<\/td>\n<\/tr>\n<tr>\n<td>pdpd.exe<\/td>\n<td>Parallel DPD (uses MPI &#8211; single and multi-node jobs)<\/td>\n<\/tr>\n<tr>\n<td>pdpd-omp.exe<\/td>\n<td>Parallel DPD (uses OpenMP &#8211; single-node multi-threaded jobs)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Example Batch Jobs<\/h2>\n<h2>Serial Batch job examples<\/h2>\n<h3>Serial LBE batch job submission<\/h3>\n<ul>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example, in a file called <code>jobscript<\/code>:<\/li>\n<\/ul>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -n 1\r\n\r\nmodule load apps\/intel-2024.2\/dl_meso\/2.8\r\n\r\nslbe.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>sbatch jobscript<\/code><\/li>\n<\/ul>\n<h3>Serial DPD batch job submission<\/h3>\n<ul>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example, in a file called <code>jobscript<\/code>:<\/li>\n<\/ul>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -n 1\r\n\r\nmodule load apps\/intel-2024.2\/dl_meso\/2.8\r\n\r\nsdpd.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>sbatch jobscript<\/code><\/li>\n<\/ul>\n<h2>Parallel Batch job examples<\/h2>\n<p><strong>It is highly recommended that you run scaling tests on 2, 4, 6, 8, 10, 12, 16, 18, 20, 22, 24, &#8230; 40 cores before moving on to running larger multinode jobs, to see how well your job performs as the number of cores increases.<\/strong><\/p>\n<h3>Parallel LBE multicore batch job submission &#8211; 2 to 40 cores using MPI<\/h3>\n<ul>\n<li>Make sure you have the dl_meso modulefile loaded.<\/li>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example in a file called <code>jobscript<\/code>, asking for 6 cores:<\/li>\n<\/ul>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore   # (or --partition=) One compute node will be used\r\n#SBATCH -n 6           # (or --ntasks=) Use 6 cores on a single node (can be 2 to 40)\r\n                       # The $SLURM_NTASKS variable will be set to this value.\r\n\r\nmodule load apps\/intel-2024.2\/dl_meso\/2.8\r\n\r\nmpirun -n $SLURM_NTASKS plbe.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>sbatch jobscript<\/code><\/li>\n<\/ul>\n<h3>Parallel LBE multinode batch job submission &#8211; 80 to 200 cores using MPI<\/h3>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multinode    # (or --partition=) \r\n#SBATCH -N 2            # (or --nodes=) 2 or more. 
                        # The job uses all 40 cores on each node.\r\n#SBATCH -n 80           # (or --ntasks=) 80 or more - the TOTAL number of tasks in your job.\r\n\r\nmodule load apps\/intel-2024.2\/dl_meso\/2.8\r\n\r\nmpirun -n $SLURM_NTASKS plbe.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>sbatch jobscript<\/code><\/li>\n<\/ul>\n<h3>Parallel DPD multicore batch job submission &#8211; 2 to 40 cores using MPI<\/h3>\n<ul>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example in a file called <code>jobscript<\/code>, asking for 6 cores:<\/li>\n<\/ul>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore   # (or --partition=) One compute node will be used\r\n#SBATCH -n 6           # (or --ntasks=) Use 6 cores on a single node (can be 2 to 40)\r\n                       # The $SLURM_NTASKS variable will be set to this value.\r\n\r\nmodule load apps\/intel-2024.2\/dl_meso\/2.8\r\n\r\nmpirun -n $SLURM_NTASKS pdpd.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>sbatch jobscript<\/code><\/li>\n<\/ul>\n<h3>Parallel DPD multinode batch job submission &#8211; 80 to 200 cores using MPI<\/h3>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multinode    # (or --partition=) \r\n#SBATCH -N 2            # (or --nodes=) 2 or more. 
                        # The job uses all 40 cores on each node.\r\n#SBATCH -n 80           # (or --ntasks=) 80 or more - the TOTAL number of tasks in your job.\r\n\r\nmodule load apps\/intel-2024.2\/dl_meso\/2.8\r\n\r\nmpirun -n $SLURM_NTASKS pdpd.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>sbatch jobscript<\/code><\/li>\n<\/ul>\n<h3>Parallel DPD multicore batch job submission &#8211; 2 to 40 cores using OpenMP<\/h3>\n<ul>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example in a file called <code>jobscript<\/code>, asking for 6 cores:<\/li>\n<\/ul>\n<pre>\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore   # (or --partition=) One compute node will be used\r\n#SBATCH -n 6           # (or --ntasks=) Use 6 cores on a single node (can be 2 to 40)\r\n                       # The $SLURM_NTASKS variable will be set to this value.\r\n\r\nmodule load apps\/intel-2024.2\/dl_meso\/2.8\r\n\r\nexport OMP_NUM_THREADS=$SLURM_NTASKS\r\npdpd-omp.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>sbatch jobscript<\/code><\/li>\n<\/ul>\n<h3>Parallel DPD multinode batch job submission &#8211; OpenMP<\/h3>\n<p><code>pdpd-omp.exe<\/code> is multi-threaded with OpenMP only and therefore runs on a single node (see the table above) &#8211; OpenMP threads cannot span nodes. For DPD jobs needing more than 40 cores, use the MPI executable <code>pdpd.exe<\/code> with the multinode MPI example above.<\/p>\n<h3>Parallel DPD GPU batch job submission<\/h3>\n<ul>\n<li>There are no GPUs in this cluster and therefore the DPD GPU builds are not available here.<\/li>\n<\/ul>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"https:\/\/www.ccp5.ac.uk\/dl_meso\/\" target=\"_blank\" rel=\"noopener\">DL_MESO Homepage<\/a><\/li>\n<li>Example data and cases can be found in <code>$dlmeso_home\/$dlmeso_ver\/DEMO<\/code> &#8211; please see the User Manual for further details.<\/li>\n<li>The DL_MESO User Manual can be found in <code>$dlmeso_home\/$dlmeso_ver\/MAN\/USRMAN.pdf<\/code>.<\/li>\n<\/ul>\n<h2>Updates<\/h2>\n","protected":false},"excerpt":{"rendered":"<p>Overview DL_MESO is a general-purpose mesoscale simulation package which supports both Lattice Boltzmann Equation (LBE) and Dissipative Particle Dynamics (DPD) methods. Version 2.8 is now available. It was compiled with the Intel 2024.2 compilers. All utilities are also available. The Java interface is not available. Restrictions on use Whilst the software is free for academic use, there are limitations within the DL_MESO license agreement&#8230; 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/software\/applications\/dl_meso\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":14,"featured_media":0,"parent":49,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1391","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/1391","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/comments?post=1391"}],"version-history":[{"count":16,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/1391\/revisions"}],"predecessor-version":[{"id":1407,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/1391\/revisions\/1407"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/pages\/49"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf4\/wp-json\/wp\/v2\/media?parent=1391"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}