The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
Plumed
Overview
PLUMED is an open-source library for free-energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines.
Version 2.4.0 is installed on the CSF. It was compiled with the Intel 15.0.3 compiler and uses the Intel MKL for BLAS and LAPACK functions.
Restrictions on use
There are no restrictions on accessing this software on the CSF. It is released under the GNU Lesser GPL v3.0 and any usage must adhere to that license.
Set up procedure
To access the software you must first load one of the following modulefiles:
module load apps/intel-15.0/plumed/2.4.0          # Serial or single-node OpenMP parallel
module load apps/intel-15.0/plumed/2.4.0-mpi      # Single-node MPI parallel
module load apps/intel-15.0/plumed/2.4.0-mpi-ib   # Multi-node MPI parallel using the InfiniBand network
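To confirm that a modulefile has loaded correctly, you can ask the plumed executable to report its version (using the info tool described later on this page):

```shell
# Load the serial/OpenMP modulefile and check the reported version
# (this should print 2.4.0 on the CSF)
module load apps/intel-15.0/plumed/2.4.0
plumed info --version
```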
Running the application
Please do not run plumed on the login node. Jobs should be submitted to the compute nodes via the batch system.
PLUMED provides a set of commands/tools that are run via the main plumed executable. To see the list of available tools, run the following on the login node:
plumed -h
The tools are:
driver kt simplemd mklib driver-float manual sum_hills partial_tempering gentemplate pathtools newcv vim2html info pesmd config patch
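For example, the sum_hills tool from the list above is commonly used to reconstruct a free-energy surface from the HILLS file written during a metadynamics run. A minimal sketch, assuming a HILLS file from an earlier simulation exists in the current directory (the filenames here are illustrative, not prescribed by the CSF):

```shell
# Sum the Gaussian hills deposited during metadynamics to estimate the
# free-energy surface. HILLS is the (hypothetical) output of an earlier
# metadynamics run; fes.dat is the name chosen here for the result.
plumed sum_hills --hills HILLS --outfile fes.dat
```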
Serial batch job submission
Make sure you have the OpenMP plumed modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings

# Inform plumed how many cores to use (1 for serial)
export OMP_NUM_THREADS=$NSLOTS

plumed toolname list of input flags for that tool
#
# See above for the list of tool names.
# Example: to display the plumed version use:
#   plumed info --version
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
OpenMP Parallel batch job submission
Make sure you have the OpenMP plumed modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings
#$ -pe smp.pe 8      # Number of cores (2-23 for single-node multi-core jobs)

# Inform plumed how many cores to use
export OMP_NUM_THREADS=$NSLOTS

plumed toolname list of input flags for that tool
#
# See above for the list of tool names.
# Example: to display the plumed version use:
#   plumed info --version
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
MPI Parallel batch job submission
Make sure you have one of the MPI plumed modulefiles loaded (the -ib version should be used when running multi-node parallel jobs) then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd                  # Job will run from the current directory
#$ -V                    # Job will inherit current environment settings

#### Choose one of the following
#$ -pe smp.pe 8          # Number of cores (2-23 for single-node parallel jobs)
### OR
#$ -pe orte-24-ib.pe 48  # Number of cores (48 or more, in multiples of 24, for multi-node jobs)

# Run the requested number of parallel instances of plumed
mpirun -n $NSLOTS plumed toolname list of input flags for that tool
#
# See above for the list of tool names.
# Example: to display the plumed version use:
#   plumed info --version
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
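As a concrete illustration, the driver tool can post-process an existing trajectory against a PLUMED input file, with the analysis distributed over MPI ranks. A sketch of a single-node MPI jobscript, assuming you have a PLUMED input file and an xyz-format trajectory (the filenames plumed.dat and traj.xyz are hypothetical):

```shell
#!/bin/bash
#$ -S /bin/bash
#$ -cwd                  # Job will run from the current directory
#$ -V                    # Job will inherit current environment settings
#$ -pe smp.pe 8          # 8 cores on a single node

# plumed.dat and traj.xyz are hypothetical input files for this sketch:
# plumed.dat defines the collective variables to compute and traj.xyz
# is the trajectory to analyse.
mpirun -n $NSLOTS plumed driver --plumed plumed.dat --ixyz traj.xyz
```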
Further info
- PLUMED 2.4 documentation
- PLUMED website
Updates
None.