DL_MESO

Overview

DL_MESO is a general-purpose mesoscale simulation package which supports both Lattice Boltzmann Equation (LBE) and Dissipative Particle Dynamics (DPD) methods.

Version 2.7.x is available and was compiled with the Intel 17 compilers (-msse4.2 -axCORE-AVX512,CORE-AVX2,AVX). A version of the DPD executable has also been compiled against FFTW3 (see below). The utilities were compiled with the default GNU 4.8.5 compiler.

Version 2.6 is available and has been compiled with Intel 17 compilers.

The Java interface is not available in either version.

Restrictions on use

Whilst the software is free for academic use, there are limitations within the DL_MESO license agreement which must be strictly adhered to by users. All users who wish to use the software must request access to the ‘dlmeso’ unix group. A copy of the full license is also available on the CSF in $dlmeso_home/$dlmeso_ver/LICENCE. Important points to note are:

  • No industrially funded work may be undertaken using the software. See clauses 2.1.3 and 2.2 of the license.
  • The software is only available to Staff and Students of the University of Manchester. Users are reminded that they must not share their password with anyone, or allow anyone else to utilise their account.
  • Citation of the software must appear in any published work. See clause 4.2 for the required text.

There is no access to the source code on the CSF.

To get access to the software please confirm to its-ri-team@manchester.ac.uk that your work will meet the above T&Cs.

Set up procedure

Once you have been added to the unix group please load ONE of the following modulefiles:

module load apps/intel-17.0/dl_meso/2.7.10
module load apps/intel-17.0/dl_meso/2.7.5
module load apps/intel-17.0/dl_meso/2.7.4
module load apps/intel-17.0/dl_meso/2.7
module load apps/intel-17.0/dl_meso/2.6
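
As a quick sanity check (this is a suggestion, not part of the official setup steps), you can confirm the DL_MESO executables are on your PATH once a modulefile is loaded:

```shell
# Print the full path of each DL_MESO executable if the modulefile
# put it on the PATH, or a warning if it cannot be found.
for exe in slbe.exe plbe.exe sdpd.exe pdpd.exe; do
  command -v "$exe" || echo "WARNING: $exe not found - is the modulefile loaded?"
done
```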

Running the application

There are some differences between the User Manual and the CSF installation, in particular the naming of the executables. The tables below show the main executables that are available.

DL_MESO v2.7 executables:

Executable                   Simulation
slbe.exe                     Serial LBE
plbe.exe                     Parallel LBE (uses MPI – single and multi-node jobs)
plbe-omp.exe                 Parallel LBE (uses OpenMP – single-node multi-threaded jobs)
sdpd.exe                     Serial DPD
pdpd-mpi-fftw-double.exe     Parallel DPD (uses MPI and double-precision FFTW3 – single and multi-node jobs)
pdpd.exe                     Parallel DPD (uses MPI – single and multi-node jobs)
pdpd-omp.exe                 Parallel DPD (uses OpenMP – single-node multi-threaded jobs)

DL_MESO v2.6 executables:

Executable                   Simulation
slbe.exe                     Serial LBE
plbe.exe                     Parallel LBE (uses MPI – single and multi-node jobs)
plbe-omp.exe                 Parallel LBE (uses OpenMP – single-node multi-threaded jobs)
sdpd.exe                     Serial DPD
pdpd.exe                     Parallel DPD (uses MPI – single and multi-node jobs)
pdpd-omp.exe                 Parallel DPD (uses OpenMP – single-node multi-threaded jobs)

Example Batch Jobs

All of the examples below use the 2.6 version. If you wish to use one of the 2.7.x versions, change the modulefile and, if necessary, the executable name.

Serial Batch job examples

Serial LBE batch job submission

  • Set up a directory from which your job will run, with all the required input files in it.
  • Write a job submission script, for example, in a file called jobscript:
#!/bin/bash --login
#$ -cwd

module load apps/intel-17.0/dl_meso/2.6

slbe.exe
  • Submit: qsub jobscript

Serial DPD batch job submission

  • Set up a directory from which your job will run, with all the required input files in it.
  • Write a job submission script, for example, in a file called jobscript:
  
#!/bin/bash --login
#$ -cwd

module load apps/intel-17.0/dl_meso/2.6

sdpd.exe
  • Submit: qsub jobscript

Parallel (multi-core) Batch job examples

It is highly recommended that you run scaling tests on 2, 4, 6, 8, 10, 12, 16, 18, 20, 22 and 24 cores before moving on to larger jobs, to see how well your job performs as the number of cores increases.
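
These scaling tests can be scripted; the loop below is a sketch that writes one jobscript per core count for the MPI LBE executable (the jobscript_N filenames and the choice of plbe.exe are illustrative – swap in your own executable and input files):

```shell
#!/bin/bash
# Sketch: write one jobscript per core count for a scaling test.
# The modulefile and executable are the ones used elsewhere on this
# page; the jobscript_N filenames are illustrative.
for n in 2 4 6 8 10 12 16 18 20 22 24; do
  cat > "jobscript_${n}" <<EOF
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe ${n}

module load apps/intel-17.0/dl_meso/2.6

mpirun -n \$NSLOTS plbe.exe
EOF
  # qsub "jobscript_${n}"   # uncomment to submit each job
done
```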

Parallel LBE batch job submission – 2 to 32 cores using MPI

  • Make sure you have the dl_meso and non-ib MPI modulefiles loaded.
  • Set up a directory from which your job will run, with all the required input files in it.
  • Write a job submission script asking for 6 cores, for example in a file called jobscript:
     
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 6

module load apps/intel-17.0/dl_meso/2.6

mpirun -n $NSLOTS plbe.exe
  • Submit: qsub jobscript

Parallel LBE batch job submission – 2 to 32 cores using OpenMP

  • Set up a directory from which your job will run, with all the required input files in it.
  • Write a job submission script asking for 6 cores, for example in a file called jobscript:
    
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 6

module load apps/intel-17.0/dl_meso/2.6

export OMP_NUM_THREADS=$NSLOTS
plbe-omp.exe
  • Submit: qsub jobscript

Parallel DPD batch job submission – 2 to 32 cores using MPI

  • Set up a directory from which your job will run, with all the required input files in it.
  • Write a job submission script asking for 12 cores, for example in a file called jobscript:
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 12

module load apps/intel-17.0/dl_meso/2.6

mpirun -n $NSLOTS pdpd.exe
  • Submit: qsub jobscript

Parallel DPD batch job submission – 2 to 32 cores using OpenMP

  • Set up a directory from which your job will run, with all the required input files in it.
  • Write a job submission script asking for 12 cores, for example in a file called jobscript:
     
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 12

module load apps/intel-17.0/dl_meso/2.6

export OMP_NUM_THREADS=$NSLOTS
pdpd-omp.exe
  • Submit: qsub jobscript

Parallel batch job submission – 48 cores or more, in multiples of 24

  • As above, but replace smp.pe with orte-24-ib.pe and request a number of cores that is a multiple of 24 and at least 48.
  • Please ensure you have first run scaling tests on 2, 4, 6, etc. cores to confirm that you see a benefit from increasing the number of cores.
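
Putting those two points together, a 48-core MPI DPD jobscript would look something like the following sketch (the core count and the choice of pdpd.exe are illustrative):

```shell
#!/bin/bash --login
#$ -cwd
#$ -pe orte-24-ib.pe 48    # multiple of 24, minimum 48 cores

module load apps/intel-17.0/dl_meso/2.6

mpirun -n $NSLOTS pdpd.exe
```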

Further info

  • DL_MESO Homepage
  • DL_MESO User Manual
  • Example data and cases can be found in /opt/gridware/apps/intel-14.0/dl_meso/2.6/DEMO – please see the User Manual for further details.

Updates

Last modified on July 27, 2022 at 2:15 pm by Ben Pietras