{"id":125,"date":"2013-04-22T12:32:10","date_gmt":"2013-04-22T12:32:10","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/?page_id=125"},"modified":"2018-05-24T15:17:13","modified_gmt":"2018-05-24T15:17:13","slug":"dl_meso","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/dl_meso\/","title":{"rendered":"DL_MESO"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>DL_MESO is a general-purpose mesoscale simulation package which supports both Lattice Boltzmann Equation (LBE) and Dissipative Particle Dynamics (DPD) methods.<\/p>\n<p>Versions 2.5 and 2.6 are installed. v2.5 was compiled with the Intel 12.0.5 compilers and OpenMPI 1.6; v2.6 was compiled with the Intel 14.0.4 compilers and OpenMPI 1.6.<\/p>\n<p>Multiple installations of v2.6 are available, each with bugfixes applied (see the modulefile information below). Details of the bugfixes can be found on the <a href=\"http:\/\/www.scd.stfc.ac.uk\/support\/40696.aspx\">DL_MESO INFOMAIL website<\/a>.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>Whilst the software is free for academic use, there are limitations within the <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-content\/uploads\/DL_MESO_LICENCE.txt\">DL_MESO licence agreement<\/a> which must be strictly adhered to by users. All users who wish to use the software must request access to the &#8216;dlmeso&#8217; unix group. A copy of the full licence is also available on the CSF in <code>$dlmeso_home\/$dlmeso_ver\/LICENCE<\/code>. Important points to note are:<\/p>\n<ul>\n<li>No industrially funded work may be undertaken using the software. See clauses 2.1.3 and 2.2 of the licence.<\/li>\n<li>The software is only available to staff and students of the University of Manchester. Users are reminded that they must not share their password with anyone, or allow anyone else to utilise their account.<\/li>\n<li>Citation of the software must appear in any published work. 
See clause 4.2 for the required text.<\/li>\n<\/ul>\n<p>There is no access to the source code on the CSF.<\/p>\n<h2>Set up procedure<\/h2>\n<p>Once you have been added to the unix group, please load the appropriate modulefile:<\/p>\n<h3>DL_MESO v2.6 with bugfixes to 14 April 2017 Modulefiles<\/h3>\n<p>In April 2017 we added a new installation which contains all the bugfixes released up to 14th April 2017 (up to INFOMAIL 42).<\/p>\n<pre>module load apps\/intel-14.0\/dl_meso\/2.6-bf-20170414<\/pre>\n<p>This will allow you to run v2.6 as a serial (1-core) app or as a multi-threaded (single-compute-node) app.<\/p>\n<p>If you wish to run the software in parallel using MPI, you will also require one of the following mpi modulefiles:<\/p>\n<pre>module load mpi\/intel-14.0\/openmpi\/1.6<\/pre>\n<p>for jobs of 2 to 24 cores,<\/p>\n<p>OR this one for jobs of 48 cores or more which are a multiple of 24 (targets the Infiniband-connected nodes):<\/p>\n<pre class=\"in1\">module load mpi\/intel-14.0\/openmpi\/1.6-ib<\/pre>\n<h3>DL_MESO v2.6 with bugfixes to 14 April 2017 Modulefiles compiled with fftw3<\/h3>\n<p>These modulefiles contain ONLY executables for DPD. 
You do not need to load an mpi or fftw3 modulefile as this is automatically done for you.<\/p>\n<p>For FFTW3 single (float) precision on 2-24 cores, please use:<\/p>\n<pre>module load apps\/intel-15.0\/dl_meso\/2.6-bf-20170414-fftw3-float-mpi<\/pre>\n<p>For FFTW3 single (float) precision on 48 cores or more which are a multiple of 24 (targets the Infiniband-connected nodes), please use:<\/p>\n<pre>module load apps\/intel-15.0\/dl_meso\/2.6-bf-20170414-fftw3-float-mpi-ib<\/pre>\n<p>For FFTW3 double precision on 2-24 cores, please use:<\/p>\n<pre>module load apps\/intel-15.0\/dl_meso\/2.6-bf-20170414-fftw3-double-mpi<\/pre>\n<p>For FFTW3 double precision on 48 cores or more which are a multiple of 24 (targets the Infiniband-connected nodes), please use:<\/p>\n<pre>module load apps\/intel-15.0\/dl_meso\/2.6-bf-20170414-fftw3-double-mpi-ib<\/pre>\n<h3>DL_MESO v2.6 with bugfixes to 28 Nov 2016 Modulefiles<\/h3>\n<p>In January 2017 we added a new installation which contains all the bugfixes released up to 28th November 2016 (up to INFOMAIL 37).<\/p>\n<pre>module load apps\/intel-14.0\/dl_meso\/2.6-bf-20161128<\/pre>\n<p>This will allow you to run v2.6 as a serial (1-core) app or as a multi-threaded (single-compute-node) app.<\/p>\n<p>If you wish to run the software in parallel using MPI, you will also require one of the following mpi modulefiles:<\/p>\n<pre>module load mpi\/intel-14.0\/openmpi\/1.6<\/pre>\n<p>for jobs of 2 to 24 cores,<\/p>\n<p>OR this one for jobs of 48 cores or more which are a multiple of 24 (targets the Infiniband-connected nodes):<\/p>\n<pre class=\"in1\">module load mpi\/intel-14.0\/openmpi\/1.6-ib<\/pre>\n<h3>DL_MESO v2.6 Modulefiles<\/h3>\n<pre>module load apps\/intel-14.0\/dl_meso\/2.6<\/pre>\n<p>This will allow you to run v2.6 as a serial (1-core) app or as a multi-threaded (single-compute-node) app.<\/p>\n<p>If you wish to run the software in parallel using MPI, you will also require one of the following mpi modulefiles:<\/p>\n<pre>module 
load mpi\/intel-14.0\/openmpi\/1.6<\/pre>\n<p>for jobs of 2 to 24 cores,<\/p>\n<p>OR this one for jobs of 48 cores or more which are a multiple of 24 (targets the Infiniband-connected nodes):<\/p>\n<pre class=\"in1\">module load mpi\/intel-14.0\/openmpi\/1.6-ib<\/pre>\n<h3>DL_MESO v2.5 Modulefiles<\/h3>\n<pre>module load apps\/intel-12.0\/dl_meso\/2.5<\/pre>\n<p>If you wish to run the software in parallel, you will also require one of the following mpi modulefiles:<\/p>\n<pre class=\"in1\">module load mpi\/intel-12.0\/openmpi\/1.6<\/pre>\n<p>for jobs of 2 cores or more, but fewer than 12 (targets the Ethernet-connected nodes),<\/p>\n<p>OR this one for jobs of 48 cores or more which are a multiple of 24 (targets the Infiniband-connected nodes):<\/p>\n<pre class=\"in1\">module load mpi\/intel-12.0\/openmpi\/1.6-ib<\/pre>\n<h2>Running the application<\/h2>\n<p>You will notice that there are some differences between the User Manual and the CSF installation. The Java interface is not available. Serial and parallel executables are available on the system. 
For both versions they are named as follows:<\/p>\n<p>DL_MESO v2.6 executables:<\/p>\n<table>\n<tr>\n<td><strong>Executable<\/strong> <\/td>\n<td> <strong>Simulation<\/strong><\/td>\n<\/tr>\n<tr>\n<td> slbe.exe <\/td>\n<td> Serial LBE <\/td>\n<\/tr>\n<tr>\n<td> plbe.exe <\/td>\n<td> Parallel LBE (uses MPI &#8211; single and multi-node jobs)<\/td>\n<\/tr>\n<tr>\n<td> plbe-omp.exe <\/td>\n<td> Parallel LBE (uses OpenMP &#8211; single-node multi-threaded jobs)<\/td>\n<\/tr>\n<tr>\n<td> sdpd.exe <\/td>\n<td> Serial DPD <\/td>\n<\/tr>\n<tr>\n<td> pdpd.exe <\/td>\n<td> Parallel DPD (uses MPI &#8211; single and multi-node jobs)<\/td>\n<\/tr>\n<tr>\n<td> pdpd-omp.exe <\/td>\n<td> Parallel DPD (uses OpenMP &#8211; single-node multi-threaded jobs)<\/td>\n<\/tr>\n<tr>\n<td> pdpd-fftw3-double.exe <\/td>\n<td> Parallel DPD (uses MPI and FFTW double precision &#8211; single and multi-node jobs)<\/td>\n<\/tr>\n<tr>\n<td> pdpd-fftw3-float.exe <\/td>\n<td> Parallel DPD (uses MPI and FFTW float precision &#8211; single and multi-node jobs)<\/td>\n<\/tr>\n<\/table>\n<p>DL_MESO v2.5 executables:<\/p>\n<table>\n<tr>\n<td><strong>Executable<\/strong> <\/td>\n<td> <strong>Simulation<\/strong><\/td>\n<\/tr>\n<tr>\n<td> slbe.exe <\/td>\n<td> Serial LBE <\/td>\n<\/tr>\n<tr>\n<td> plbe.exe <\/td>\n<td> Parallel LBE <\/td>\n<\/tr>\n<tr>\n<td> sdpd.exe <\/td>\n<td> Serial DPD <\/td>\n<\/tr>\n<tr>\n<td> pdpd.exe <\/td>\n<td> Parallel DPD <\/td>\n<\/tr>\n<\/table>\n<h2>Serial Batch job examples<\/h2>\n<h3>Serial LBE batch job submission<\/h3>\n<ul>\n<li>Make sure you have the dl_meso modulefile loaded.<\/li>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example, in a file called <code>jobscript<\/code>:<\/li>\n<\/ul>\n<pre class=\"in1\">     \r\n#!\/bin\/bash\r\n### SGE Job Stuff\r\n#$ -cwd\r\n#$ -V\r\n#$ -S \/bin\/bash\r\n\r\nslbe.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>qsub 
jobscript<\/code><\/li>\n<\/ul>\n<h3>Serial DPD batch job submission<\/h3>\n<ul>\n<li>Make sure you have the dl_meso modulefile loaded.<\/li>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example, in a file called <code>jobscript<\/code>:<\/li>\n<\/ul>\n<pre class=\"in1\">\r\n#!\/bin\/bash\r\n### SGE Job Stuff\r\n#$ -cwd\r\n#$ -V\r\n#$ -S \/bin\/bash\r\n\r\nsdpd.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>qsub jobscript<\/code><\/li>\n<\/ul>\n<h2>Parallel (multi-core) Batch job examples<\/h2>\n<p>It is highly recommended that you run scaling tests on 2, 4, 6, 8, 10, 12, 16, 18, 20, 22 and 24 cores before moving on to larger jobs, to see how well your job performs as the number of cores increases.<\/p>\n<h3>Parallel LBE batch job submission &#8211; 2 to 24 cores using MPI<\/h3>\n<ul>\n<li>Make sure you have the dl_meso and non-ib mpi modulefiles loaded.<\/li>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example, in a file called <code>jobscript<\/code>, asking for 6 cores:<\/li>\n<\/ul>\n<pre>\r\n#!\/bin\/bash\r\n### SGE Job Stuff\r\n#$ -cwd\r\n#$ -V\r\n#$ -S \/bin\/bash\r\n#$ -pe smp.pe 6\r\n\r\nmpirun -n $NSLOTS plbe.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>qsub jobscript<\/code><\/li>\n<\/ul>\n<h3>Parallel LBE batch job submission &#8211; 2 to 24 cores using OpenMP<\/h3>\n<ul>\n<li>Make sure you have the dl_meso modulefile loaded.<\/li>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example, in a file called <code>jobscript<\/code>, asking for 6 cores:<\/li>\n<\/ul>\n<pre>\r\n#!\/bin\/bash\r\n### SGE Job Stuff\r\n#$ -cwd\r\n#$ -V\r\n#$ -S \/bin\/bash\r\n#$ -pe smp.pe 6\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\nplbe-omp.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>qsub 
jobscript<\/code><\/li>\n<\/ul>\n<h3>Parallel DPD batch job submission &#8211; 2 to 24 cores using MPI<\/h3>\n<ul>\n<li>Make sure you have the dl_meso and non-ib mpi modulefiles loaded.<\/li>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example, in a file called <code>jobscript<\/code>, asking for 12 cores:<\/li>\n<\/ul>\n<pre class=\"in1\">\r\n#!\/bin\/bash\r\n### SGE Job Stuff\r\n#$ -cwd\r\n#$ -V\r\n#$ -S \/bin\/bash\r\n#$ -pe smp.pe 12\r\n\r\nmpirun -n $NSLOTS pdpd.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>qsub jobscript<\/code><\/li>\n<\/ul>\n<h3>Parallel DPD batch job submission &#8211; 2 to 24 cores using OpenMP<\/h3>\n<ul>\n<li>Make sure you have the dl_meso modulefile loaded.<\/li>\n<li>Set up a directory from which your job will run, with all the required input files in it.<\/li>\n<li>Write a job submission script, for example, in a file called <code>jobscript<\/code>, asking for 12 cores:<\/li>\n<\/ul>\n<pre>\r\n#!\/bin\/bash\r\n### SGE Job Stuff\r\n#$ -cwd\r\n#$ -V\r\n#$ -S \/bin\/bash\r\n#$ -pe smp.pe 12\r\n\r\nexport OMP_NUM_THREADS=$NSLOTS\r\npdpd-omp.exe\r\n<\/pre>\n<ul>\n<li>Submit: <code>qsub jobscript<\/code><\/li>\n<\/ul>\n<h3>Parallel batch job submissions &#8211; jobs of 48 cores or more which are a multiple of 24<\/h3>\n<ul>\n<li>As above, but load the Infiniband mpi modulefile, replace <code>smp.pe<\/code> with <code>orte-24-ib.pe<\/code>, and request a number of cores equal to or greater than 48 which is a multiple of 24.<\/li>\n<li>Please ensure you have first done some scaling tests on 2, 4, 6, etc. cores to check that you are seeing a benefit from increasing the number of cores.<\/li>\n<\/ul>\n<h2>Further info<\/h2>\n<ul>\n<li><a href=\"http:\/\/www.cse.scitech.ac.uk\/ccg\/software\/DL_MESO\/index.shtml\">DL_MESO Homepage<\/a><\/li>\n<li><a href=\"http:\/\/www.cse.scitech.ac.uk\/ccg\/software\/DL_MESO\/MANUAL\/USRMAN.pdf\">DL_MESO User 
Manual<\/a><\/li>\n<li>Example data and cases can be found in \/opt\/gridware\/apps\/intel-14.0\/dl_meso\/2.6\/DEMO &#8211; please see the User Manual for further details.<\/li>\n<\/ul>\n<h2>Updates<\/h2>\n<p>Version 2.6 installed 19 Jan 2016.<br \/>\nVersion 2.5 installed 22 Jan 2014.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview DL_MESO is a general purpose mesoscale simulation package which supports both Lattice Boltzmann Equation (LBE) and Dissipative Particle Dynamics (DPD) methods. Versions 2.5 and 2.6 are installed. v2.5 was compiled with intel compilers 12.0.5 and openmpi 1.6. v2.6 was compiled with intel compilers 14.0.4 and openmpi 1.6. Multiple versions of 2.6 are installed with bugfixes (see modulefile info below). Details of the bugfixes can be found on the<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":31,"menu_order":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-125","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/125","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/comments?post=125"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/125\/revisions"}],"predecessor-version":[{"id":3978,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/125\/revisions\/3978"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\
/v2\/pages\/31"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/media?parent=125"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}