PRISMS-PF
Overview
PRISMS-PF is a powerful, massively parallel finite element code for conducting phase field and other related simulations of microstructural evolution.
We do not provide a centrally installed version of PRISMS-PF. Instead you should download and compile your own copy, because you need the source code for your own projects. However, we DO provide the tools and libraries needed to complete the compilation, and compiling PRISMS-PF on the CSF is a simple task (instructions below).
PRISMS-PF uses the DEAL.II library (v9.1.1), which in turn uses the P4EST library (v2.2). All dependency code has been compiled with the Intel 18.0.3 compiler and OpenMPI 3.1.4.
Restrictions on use
There are no restrictions on accessing the code on the CSF. It is released under the GNU Lesser GPL v2.1 and all usage must adhere to that license.
Set up procedure
We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.
Compiling the PRISMS-PF library and applications
The following helper modulefile loads everything you need to compile PRISMS-PF:
module load apps/intel-18.0/prismspf-helper/dealii-9.1.1
This will load the DEAL.II modulefile (which includes the P4EST settings), and also the cmake 3.5.2 modulefile and a git modulefile. cmake is used to compile PRISMS-PF applications. If you need to check out the PRISMS-PF source code, the git modulefile will allow you to do this. To see which modulefiles have been loaded, run module list after loading the above modulefile.
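For example, to load the helper modulefile and then confirm what it has pulled in:

module load apps/intel-18.0/prismspf-helper/dealii-9.1.1
module list                   # shows the dealii, cmake and git modulefiles now loaded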
Running PRISMS-PF applications
To run a PRISMS-PF application from a jobscript you can either load the above helper modulefile or load just the DEAL.II modulefile:
module load libs/intel-18.0/dealii/9.1.1
Compiling your own copy of PRISMS-PF
We DO NOT provide a central copy of PRISMS-PF because you need your own copy of the source code to compile your own projects.
However, compiling PRISMS-PF is simple and we provide complete instructions here. We DO provide a central copy of the DEAL.II library which is needed by PRISMS-PF (and is more difficult to compile on the CSF). Please follow the instructions below – it should only take a few minutes to compile everything you need.
# Do the following in an interactive session on a compute node with 4 cores
qrsh -l short -pe smp.pe 4

# Wait until you are logged in to a compute node, then...
module load apps/intel-18.0/prismspf-helper/dealii-9.1.1

# In this example we use the 'scratch' area. You may want to use your 'home' directory
# if compiling PRISMS-PF to develop your own project code. Otherwise the automatic scratch-tidy
# policy may delete some files (it can delete files that are 3 months old, or older).
mkdir ~/scratch/prisms-pf                                   # Create a new dir in scratch
cd ~/scratch/prisms-pf                                      # Go to the new dir
git clone https://github.com/prisms-center/phaseField.git   # Download the source code
cd phaseField

# Compile the PRISMS-PF library
cmake .                       # Prepare (notice the . at the end)
make -j4                      # Compile (you can ignore "This file is deprecated")
ls -l *.a                     # You should see two .a library files

# Now compile a sample application (this method works for all of the samples)
cd applications/allenCahn     # Go to one of the sample apps
cmake .                       # Prepare (notice the . at the end)
make                          # Compile
ls -l main                    # You should now see a 'main' program

# Terminate your interactive session and go back to the login node
exit

# Now see below for sample jobscripts you can use to run the allenCahn 'main' app
The above method will give you a local copy of PRISMS-PF which you can use for your own projects. You can now use the sample jobscripts below to run the main example application.
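If you want to develop your own application, one approach (shown here as a sketch only; myProject is just an example name, not part of PRISMS-PF) is to copy one of the sample application directories as a template and compile it in the same way as the samples:

# A sketch only - 'myProject' is an example name
cd ~/scratch/prisms-pf/phaseField/applications
cp -r allenCahn myProject     # Copy a sample app to use as a template
cd myProject
# ...edit the application's source and input files for your own model...
cmake .                       # Prepare (as for the sample apps)
make                          # Compile - produces a 'main' executable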
Running the application
Please do not run PRISMS-PF applications on the login node. Jobs should be submitted to the compute nodes via the batch system. You may compile your code against the PRISMS-PF libraries on the login node, although long compilations can also be submitted as batch jobs.
Please note that the PRISMS-PF library will have been compiled against the OpenMPI library, so all applications built with the PRISMS-PF library are parallel applications. Even when running with a single core you must start the application using mpirun, as for any MPI application (see below).
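For example, to run the compiled main program on a single core:

mpirun -n 1 ./main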
Serial batch job submission
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd               # Job will run from the current directory
                      # NO -V line - we load modulefiles in the jobscript

# Load the version you require. The dealii modulefile is all that is needed when
# running PRISMS-PF apps. But you could load the prismspf-helper modulefile instead.
module load libs/intel-18.0/dealii/9.1.1

# $NSLOTS is automatically set to the number of cores (1 for a serial job)
# Supply the name of the executable you have compiled ('main' in this example)
mpirun -n $NSLOTS ./main
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
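Once submitted, you can monitor the job with the standard SGE qstat command (the job id below is just an example):

qstat             # list your queued and running jobs
qstat -j 123456   # detailed information about one job (123456 is an example id)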
Small Parallel batch job submission
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd               # Job will run from the current directory
#$ -pe smp.pe 16      # Number of cores on a single compute node (can be 2--32)

# Load the version you require
module load libs/intel-18.0/dealii/9.1.1

# $NSLOTS is automatically set to the number of cores requested above.
# Supply the name of the executable you have compiled ('main' in this example)
mpirun -n $NSLOTS ./main
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Large Parallel batch job submission
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#$ -cwd                   # Job will run from the current directory
#$ -pe mpi-24-ib.pe 48    # Number of cores (whole compute nodes)
                          # Can be 48 or more in multiples of 24

# Load the version you require
module load libs/intel-18.0/dealii/9.1.1

# $NSLOTS is automatically set to the number of cores requested above.
# Supply the name of the executable you have compiled ('main' in this example)
mpirun -n $NSLOTS ./main
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Visualizing your Results
The example applications (and possibly your own applications) generate VTK .vtu files. You can visualize these using ParaView.
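For example, one way to do this is to copy the results to your own computer and open them there (a sketch only; the hostname and paths below are placeholders, and this assumes ParaView is installed locally):

# Run on your own computer, not on the CSF (hostname and paths are examples)
scp 'username@csf-hostname:scratch/prisms-pf/phaseField/applications/allenCahn/*.vtu' .
paraview *.vtu    # open the solution files in ParaView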
Further info
Updates
None.