PBMPI
If you are a Windows user, please ensure you create your jobscript directly ON THE CSF using gedit. This will prevent your job from going into error (Eqw). Text files created on Windows contain hidden characters that Linux cannot read. For further information please see the guide to using the system from Windows, in particular the section about text & batch submission script files.
Overview
Phylobayes MPI is Bayesian software for phylogenetic reconstruction using mixture models; it can run in parallel using MPI.
Version 1.8 compiled with Intel 17.0 and Open MPI 3.1.3 is available for use on CSF3.
Set up procedure
To use PBMPI, you must first load the module into your environment:
module load apps/intel-17.0/pbmpi/1.8
This will also automatically load the appropriate MPI modulefile into your environment.
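As a quick sanity check (optional; the exact output will depend on your environment), you can confirm that the pb_mpi binary is now on your PATH and that the MPI modulefile has been loaded alongside it:

```shell
# pb_mpi should be found on your PATH after loading the modulefile
which pb_mpi

# List currently loaded modules, including the automatically loaded MPI modulefile
module list
```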
Running the application
Please do not run the pbmpi tools on the login node. Jobs should be submitted to the compute nodes via the batch system. There are a number of command-line tools available:
bpcomp cvrep pb_mpi readpb_mpi tracecomp
You may run each command without any arguments on the login node to see the help text. For example:
pb_mpi

mpirun -np <n> pb_mpi -d <datafile> [options] <chainname>
    creates a new chain, sampling from the posterior distribution, conditional on specified data
mpirun -np <n> pb_mpi <chainname>
    starts an already existing chain

    -np <n>   : number of parallel processes (should be at least 2)
    -cat -dp  : infinite mixture (Dirichlet process) of equilibrium frequency profiles
    -ncat     : finite mixture of equilibrium frequency profiles
    ...
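As a sketch of a typical analysis (the chain names, model options, and burn-in/sub-sampling values below are illustrative, not prescriptive), the PhyloBayes manual recommends running two independent chains on the same alignment and then comparing them with bpcomp and tracecomp to assess convergence:

```shell
# Run two independent chains on the same alignment
# (inside batch jobs, not on the login node)
mpirun -np $NSLOTS pb_mpi -d datafile -cat -gtr chain1
mpirun -np $NSLOTS pb_mpi -d datafile -cat -gtr chain2

# Compare bipartition frequencies between the two chains, discarding the
# first 1000 points as burn-in and sub-sampling every 10 points
# (illustrative values)
bpcomp -x 1000 10 chain1 chain2

# Compare trace summary statistics between the two chains
tracecomp -x 1000 chain1 chain2
```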
Single node parallel batch job submission (2-32 cores)
The following will run pb_mpi on 16 cores to create a new chain chainname from datafile in your current working directory:
#!/bin/bash --login
#$ -cwd                 # App will run from current directory
#$ -pe smp.pe 16        # Request 16 cores in smp parallel environment
module load apps/intel-17.0/pbmpi/1.8
mpirun -np $NSLOTS pb_mpi -d datafile chainname
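Assuming the jobscript above has been saved as, say, jobscript.txt (the filename here is arbitrary), submit it to the batch system with qsub:

```shell
# Submit the jobscript to the batch system from its directory on the CSF
qsub jobscript.txt
```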
Multi node parallel batch job submission (48 – 120 cores, multiples of 24)
The following will run pb_mpi on 48 cores to continue the existing chain chainname in your current working directory:
#!/bin/bash --login
#$ -cwd                 # App will run from current directory
#$ -pe mpi-24-ib.pe 48  # Request 48 cores across Infiniband connected nodes
module load apps/intel-17.0/pbmpi/1.8
mpirun -np $NSLOTS pb_mpi chainname
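Note that pb_mpi runs until it is told to stop. According to the PhyloBayes manual, a running chain can be stopped cleanly by setting its .run file to 0 (chainname below matches the name used when the chain was started):

```shell
# Ask the running chain to stop cleanly at its next save point
echo 0 > chainname.run
```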
Further info
- The manual is available on the CSF using
evince $PBMPIHOME/pb_mpiManual1.8.pdf
- pbmpi github repo
Updates
None.