DAMASK
Overview
DAMASK (Duesseldorf Advanced Material Simulation Kit) is a flexible and hierarchically structured model of material point behavior for the solution of elastoplastic boundary value problems along with damage and thermal physics. Its main purpose is the simulation of crystal plasticity within a finite-strain continuum mechanical framework.
Version 2.0.2 is available on the CSF:
Basic Install Details
Version 2.0.2
- Installed with Intel Compilers 17.0.7 and Open MPI 3.1.3
- The compile flags
-msse4.2 -axAVX,CORE-AVX2,CORE-AVX512
were used. These build multiple auto-dispatch 'code paths' into the executable so that the most suitable instruction set is used at run time. This is beneficial because several different generations of Intel CPU are available in the CSF.
Restrictions on use
DAMASK is free software released under the GNU General Public Licence. All users may use DAMASK.
Set up procedure
To access the software you must load the modulefile
module load apps/intel-17.0/damask/2.0.2
This modulefile will load everything needed for all types of DAMASK job (e.g. MPI for multi-core). You do not need to load anything else.
We recommend that you load the modulefile in your jobscript rather than on the login node before you submit. This is covered in the examples below.
Running Serial DAMASK
An example batch submission script:
#!/bin/bash --login
#$ -cwd

# Load the software
module load apps/intel-17.0/damask/2.0.2

DAMASK_spectral --geom PathToGeomFile/NameOfGeomFile.geom --load PathToLoadFile/FileNameOfLoad.load
Submit with the command: qsub scriptname
Single node – 2 to 32 cores using OpenMP
DAMASK is compiled with multiprocessor (OpenMP) support. To run a parallel job on a single node use:
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 12

# Load the software
module load apps/intel-17.0/damask/2.0.2

# Tell DAMASK how many OpenMP threads to use (set by the PE above)
export DAMASK_NUM_THREADS=$NSLOTS

DAMASK_spectral --geom PathToGeomFile/NameOfGeomFile.geom --load PathToLoadFile/FileNameOfLoad.load
Submit with the command: qsub scriptname
Single node – 2 to 32 cores using MPI
DAMASK is compiled with multi-process (MPI) support.
Please note that when using MPI the grid dimension along z has to be an integer multiple of the intended number of MPI processes. In the example below this means a multiple of 12.
To run a parallel job on a single node use:
#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 12

# Load the software
module load apps/intel-17.0/damask/2.0.2

mpirun -np $NSLOTS DAMASK_spectral --geom PathToGeomFile/NameOfGeomFile.geom --load PathToLoadFile/FileNameOfLoad.load
Submit with the command: qsub scriptname
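As a quick way to check the divisibility requirement above before submitting, the sketch below extracts the z grid size from a geom file header and compares it against the intended MPI process count. This is not an official DAMASK tool: it assumes the DAMASK 2.0.x header line format (e.g. "grid a 16 b 16 c 12") and uses a small example file for illustration; point it at your own geom file instead.

```shell
#!/bin/bash
# Create a small example geom header for demonstration only.
# Replace example.geom with your own PathToGeomFile/NameOfGeomFile.geom.
cat > example.geom <<'EOF'
3 header
grid a 16 b 16 c 12
size x 1.0 y 1.0 z 1.0
homogenization 1
EOF

NP=12   # intended number of MPI processes (matches -pe smp.pe 12 above)

# Pull the value following "c" on the "grid" header line
NZ=$(awk '/^grid/ {for (i=1; i<=NF; i++) if ($i == "c") print $(i+1)}' example.geom)

if [ $(( NZ % NP )) -eq 0 ]; then
    echo "OK: z grid ($NZ) is divisible by $NP MPI processes"
else
    echo "ERROR: z grid ($NZ) is NOT divisible by $NP MPI processes"
fi
```

Running this before qsub avoids wasting queue time on a job that the spectral solver will reject at start-up.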
Multi-node – 48 or more cores in multiples of 24
Please note that when using MPI the grid dimension along z has to be an integer multiple of the intended number of MPI processes. In the example below this means a multiple of 48.
#!/bin/bash --login
#$ -cwd
#$ -pe mpi-24-ib.pe 48

# Load the software
module load apps/intel-17.0/damask/2.0.2

mpirun -np $NSLOTS DAMASK_spectral --geom PathToGeomFile/NameOfGeomFile.geom --load PathToLoadFile/FileNameOfLoad.load
Submit with the command: qsub scriptname
Please note – there are no Broadwell or Skylake nodes with Infiniband connections. Jobs submitted to mpi-24-ib.pe will by default run only on Haswell nodes.
Further info
- The DAMASK web site has extensive documentation: https://damask.mpie.de/Documentation/WebHome