DL_MESO
Overview
DL_MESO is a general-purpose mesoscale simulation package supporting both Lattice Boltzmann Equation (LBE) and Dissipative Particle Dynamics (DPD) methods.
Version 2.8 is now available, compiled with the Intel 2024.2 compilers. The release also includes DPD executables that can run on GPUs, although these are not installed here (see the note near the end of this page). All utilities are also available.
The Java interface is not available.
Restrictions on use
Whilst the software is free for academic usage, there are limitations within the DL_MESO license agreement which must be strictly adhered to by users. All users who wish to use the software must request access to the ‘dlmeso’ unix group. A copy of the full license is also available on the CSF in $dlmeso_home/$dlmeso_ver/LICENCE. Important points to note are:
- No industrially funded work may be undertaken using the software. See clauses 2.1.3 and 2.2 of the license.
- The software is only available to staff and students of the University of Manchester. Users are reminded that they must not share their password with anyone, or allow anyone else to utilise their account.
- Citation of the software must appear in any published work. See clause 4.2 for the required text.
There is no access to the source code on the CSF.
To get access to the software, please confirm to its-ri-team@manchester.ac.uk that your work will meet the above T&Cs.
Set up procedure
Once you have been added to the unix group, please load the following modulefile:
module load apps/intel-2024.2/dl_meso/2.8
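To check that the set-up has worked, you can confirm the executables are now on your PATH (module list and which are standard shell commands, not part of DL_MESO):

module list                  # the dl_meso modulefile should appear in the list
which slbe.exe pdpd.exe      # should print paths under the central installation

If which prints nothing, the modulefile has not been loaded in your current session.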
Running the application
You will notice that there are some differences between the User Manual and the CSF installation, in particular the naming of the executables. The table below shows the main executables that are available.
DL_MESO v2.8 executables:
Executable   | Simulation
------------ | ----------
slbe.exe     | Serial LBE
plbe.exe     | Parallel LBE (uses MPI – single and multi-node jobs)
sdpd.exe     | Serial DPD
pdpd.exe     | Parallel DPD (uses MPI – single and multi-node jobs)
pdpd-omp.exe | Parallel DPD (uses OpenMP – single-node multi-threaded jobs)
Example Batch Jobs
Serial Batch job examples
Serial LBE batch job submission
- Set up a directory from which your job will run, with all the required input files in it.
- Write a job submission script, for example in a file called jobscript:

#!/bin/bash --login
#SBATCH -n 1    # (or --ntasks=) Serial job, 1 core
module load apps/intel-2024.2/dl_meso/2.8
slbe.exe
- Submit:
sbatch jobscript
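Once submitted, you can monitor the job with the usual SLURM commands (these are generic SLURM tools, not DL_MESO-specific):

squeue -u $USER      # list your queued and running jobs
sacct -j 123456      # resource usage for a finished job (use your own job id)

Simulation output is written to the job's working directory; scheduler messages go to a slurm-<jobid>.out file by default.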
Serial DPD batch job submission
- Set up a directory from which your job will run, with all the required input files in it.
- Write a job submission script, for example in a file called jobscript:

#!/bin/bash --login
#SBATCH -n 1    # (or --ntasks=) Serial job, 1 core
module load apps/intel-2024.2/dl_meso/2.8
sdpd.exe
- Submit:
sbatch jobscript
Parallel Batch job examples
It is highly recommended that you run scaling tests on a range of core counts (e.g. 2, 4, 8, 16, 24, 32 and 40) before moving on to running larger multinode jobs, to see how well your job performs as the number of cores increases. A sketch of how to automate this follows below.
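A minimal sketch of such a scaling study, assuming your input files and a jobscript like those below are already in the current directory (the core counts and job names here are illustrative; options given on the sbatch command line override the matching #SBATCH lines in the script):

for n in 2 4 8 16 32 40; do
    sbatch --ntasks=$n --job-name=scale_$n jobscript
done

Compare the run times reported by each job to decide how many cores are worth requesting.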
Parallel LBE multicore batch job submission – 2 to 40 cores using MPI
- Make sure you have the dl_meso modulefile loaded.
- Set up a directory from which your job will run, with all the required input files in it.
- Write a job submission script, for example in a file called jobscript, asking for 6 cores:

#!/bin/bash --login
#SBATCH -p multicore    # (or --partition=) One compute node will be used
#SBATCH -n 6            # (or --ntasks=) Use 6 cores on a single node (can be 2 to 40).
                        # The $SLURM_NTASKS variable will be set to this value.
module load apps/intel-2024.2/dl_meso/2.8
mpirun -n $SLURM_NTASKS plbe.exe
- Submit:
sbatch jobscript
Parallel LBE multinode batch job submission – 80 to 200 cores using MPI
#!/bin/bash --login
#SBATCH -p multinode    # (or --partition=)
#SBATCH -N 2            # (or --nodes=) 2 or more. The job uses all 40 cores on each node.
#SBATCH -n 80           # (or --ntasks=) 80 or more - the TOTAL number of tasks in your job.
module load apps/intel-2024.2/dl_meso/2.8
mpirun -n $SLURM_NTASKS plbe.exe
- Submit:
sbatch jobscript
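When choosing -N and -n, keep them consistent: on this partition each node provides 40 cores, so the total task count should be 40 times the node count. For example, a 200-core run (the numbers here are illustrative) would use:

#SBATCH -N 5     # 5 nodes
#SBATCH -n 200   # 5 nodes x 40 cores per node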
Parallel DPD multicore batch job submission – 2 to 40 cores using MPI
- Set up a directory from which your job will run, with all the required input files in it.
- Write a job submission script, for example in a file called jobscript, asking for 12 cores:

#!/bin/bash --login
#SBATCH -p multicore    # (or --partition=) One compute node will be used
#SBATCH -n 12           # (or --ntasks=) Use 12 cores on a single node (can be 2 to 40).
                        # The $SLURM_NTASKS variable will be set to this value.
module load apps/intel-2024.2/dl_meso/2.8
mpirun -n $SLURM_NTASKS pdpd.exe
- Submit:
sbatch jobscript
Parallel DPD multinode batch job submission – 80 to 200 cores using MPI
#!/bin/bash --login
#SBATCH -p multinode    # (or --partition=)
#SBATCH -N 2            # (or --nodes=) 2 or more. The job uses all 40 cores on each node.
#SBATCH -n 80           # (or --ntasks=) 80 or more - the TOTAL number of tasks in your job.
module load apps/intel-2024.2/dl_meso/2.8
mpirun -n $SLURM_NTASKS pdpd.exe
- Submit:
sbatch jobscript
Parallel DPD multicore batch job submission – 2 to 40 cores using OpenMP
- Set up a directory from which your job will run, with all the required input files in it.
- Write a job submission script, for example in a file called jobscript, asking for 12 cores:

#!/bin/bash --login
#SBATCH -p multicore    # (or --partition=) One compute node will be used
#SBATCH -n 12           # (or --ntasks=) Use 12 cores on a single node (can be 2 to 40).
                        # The $SLURM_NTASKS variable will be set to this value.
module load apps/intel-2024.2/dl_meso/2.8
export OMP_NUM_THREADS=$SLURM_NTASKS
pdpd-omp.exe
- Submit:
sbatch jobscript
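An alternative way to express a purely multithreaded job in SLURM is one task with several CPUs per task; a minimal sketch, assuming the same 40-core single-node limit (check local CSF guidance for the preferred form):

#!/bin/bash --login
#SBATCH -p multicore
#SBATCH -n 1     # a single process...
#SBATCH -c 12    # (or --cpus-per-task=) ...with 12 cores for its OpenMP threads
module load apps/intel-2024.2/dl_meso/2.8
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
pdpd-omp.exe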
Parallel DPD multinode batch jobs using OpenMP
As noted in the table of executables above, pdpd-omp.exe is multithreaded with OpenMP only and so cannot run across more than one node. For jobs requiring more than 40 cores, use the MPI executable pdpd.exe as shown in the multinode MPI example above.
Parallel DPD GPU batch job submission
- There are no GPUs in this cluster, so the GPU-enabled DPD executables are not available here.
Further info
- DL_MESO Homepage
- Example data and cases can be found in $dlmeso_home/$dlmeso_ver/DEMO – please see the User Manual for further details.
- The DL_MESO User Manual can be found in $dlmeso_home/$dlmeso_ver/MAN/USRMAN.pdf
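To try one of the supplied examples, copy it into a directory of your own before running. A sketch, assuming the modulefile sets the $dlmeso_home and $dlmeso_ver variables used above, and that a DPD subdirectory exists under DEMO (list the directory to see what is actually provided):

module load apps/intel-2024.2/dl_meso/2.8
ls $dlmeso_home/$dlmeso_ver/DEMO                       # see the available cases
cp -r $dlmeso_home/$dlmeso_ver/DEMO/DPD ~/scratch/dlmeso_demo
cd ~/scratch/dlmeso_demo

You can then write a jobscript as in the examples above and submit it from that directory.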