Note: The CSF2 has been replaced by the CSF3; please use that system. This documentation may be out of date, so please read the CSF3 documentation instead.
Autodock4 and Vina
Overview
Autodock is a suite of automated docking tools. It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure.
The following versions are installed on the CSF:
- Autodock4 v4.2.5.1
- Autodock4-MP v4.2.2.1
- Autodock Vina v1.1.2
Autodock Vina is the newer program and can run in parallel (unlike Autodock4). Autodock4-MP is a parallelized version of Autodock4 but should be considered experimental on the CSF. Users are encouraged to try Autodock Vina first.
Restrictions on use
Access to Autodock4 and Autodock Vina is not restricted. However, users must abide by the license terms of the software and provide citations in publications.
Please see the Autodock4 license (GPL v2) and the Autodock Vina license (a permissive Apache license).
Set up procedure
To access Vina you must first load the modulefile:
module load apps/intel-12.0/autodock/vina-1.1.2
or, for serial autodock4:
module load apps/intel-12.0/autodock/4.2.5.1
For parallel autodock4 load one of the following modulefiles:
module load apps/intel-12.0/autodock/4.2.2.1-MP-omp      # Single-node multithreaded (OpenMP)
module load apps/intel-12.0/autodock/4.2.2.1-MP-mpi      # Single-node MPI
module load apps/intel-12.0/autodock/4.2.2.1-MP-mpi-ib   # Multi-node MPI (InfiniBand)
Any dependent modulefiles will be automatically loaded for you.
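Once a modulefile is loaded you can do a quick sanity check on the login node (printing the help text performs no docking, so it is safe to run there). For example, with the Vina modulefile loaded:
vina --help    # prints Vina's command-line options without running any docking
module list    # shows which dependent modulefiles were pulled in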
The 4.2.2.1-MP-omp modulefile provides a version that can be used on a single Intel compute node (up to 24 cores). It also provides a serial version of Autodock4 MP which can be used to verify results from the parallel versions. It is highly recommended that you do this to ensure the parallel versions produce the results you expect!
The 4.2.2.1-MP-mpi modulefile provides a version that can be used on a single compute node (up to 24 cores) using MPI.
The 4.2.2.1-MP-mpi-ib modulefile provides a version that can be used across multiple InfiniBand-connected Intel nodes and hence uses the fast interconnect. The number of cores must be a multiple of 24, with a minimum of 48 cores. This is the recommended MPI version.
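For example, to use the recommended multi-node version, load its modulefile and confirm that the MPI executable (autodock4_mpi, as used in the job examples below) is on your PATH:
module load apps/intel-12.0/autodock/4.2.2.1-MP-mpi-ib
command -v autodock4_mpi    # should print the full path of the executable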
Running the application
Please do not run Autodock on the login node. Jobs should be submitted to the compute nodes via the batch system.
Autodock4 Serial batch job submission
Make sure you have the modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings

autodock4 -p macro.dpf
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
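After submission, the job can be monitored with the usual SGE commands, for example:
qstat             # list your queued and running jobs
qstat -j jobid    # detailed information about a particular job, where jobid is the id reported by qsub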
Autodock Vina Serial batch job submission
Make sure you have the modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings

# NSLOTS will be set to 1; use it to inform vina how many CPUs we requested
vina --cpu $NSLOTS --receptor file1 --flex file2 --ligand file3 --out outfile
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
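Vina can also read its options from a configuration file passed with --config; the option names are the same as the command-line flags, written one name = value pair per line. A minimal sketch (the file names and search-box values are placeholders for your own inputs):
# conf.txt (placeholder values)
receptor = receptor.pdbqt
ligand = ligand.pdbqt
out = docked.pdbqt
center_x = 10.0
center_y = 12.5
center_z = -3.0
size_x = 20
size_y = 20
size_z = 20
The docking line in the jobscript then becomes:
vina --config conf.txt --cpu $NSLOTS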
Autodock Vina Parallel batch job submission
Note: Autodock4 will not run in parallel (see Autodock4 MP below).
Make sure you have the modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings
#$ -pe smp.pe 8      # Number of cores (2-24 permitted)

# NSLOTS will be set to 8 (in this case); use it to inform vina how many CPUs we requested
vina --cpu $NSLOTS --receptor file1 --flex file2 --ligand file3 --out outfile
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
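If you have many ligands to screen, one possible pattern (a sketch, not an official CSF recipe) is an SGE task array that runs one Vina docking per task. It assumes a file ligands.txt that you create yourself, listing one ligand .pdbqt filename per line:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings
#$ -pe smp.pe 4      # Number of cores per docking
#$ -t 1-100          # One task per ligand; adjust to the number of lines in ligands.txt

# Pick the ligand for this array task from the (assumed) list file
LIGAND=$(sed -n "${SGE_TASK_ID}p" ligands.txt)

vina --cpu $NSLOTS --receptor receptor.pdbqt --ligand $LIGAND --out ${LIGAND%.pdbqt}_out.pdbqt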
Autodock4 MP Serial batch job submission
We recommend running the serial build of Autodock4 MP to give you a set of results against which you can compare the parallel versions (see below for how to run in parallel). Make sure you have the modulefile loaded, then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings

autodock4_serial -p macro.dpf
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
To see the complete list of command-line flags, simply run autodock4_serial on the login node without any input file.
Autodock4 MP Parallel batch job submission (multithreaded)
Note that the OpenMP (multithreaded) version uses the same command-line flags as the above serial version, with the addition of a -r option for the seed type (if unspecified, default_seed is used):
-r seed_type
The accepted seed_type values are:
- same_seed – Use a hard-coded seed value; useful only for performance measurements, since it runs the docking deterministically.
- default_seed – Use the time of day plus the threadid for the seed.
Make sure you have the modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd              # Job will run from the current directory
#$ -V                # Job will inherit current environment settings
#$ -pe smp.pe 8      # Number of cores (max 24) in a single node

# You MUST set this to inform autodock how many cores to use:
export OMP_NUM_THREADS=$NSLOTS

autodock4_omp -p macro.dpf -l macro.dlg    # Send output to logfile macro.dlg
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
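As recommended above, it is worth checking the multithreaded results against a serial run of the same docking. One rough way to do this (a sketch; the grep pattern assumes the usual AutoDock4 .dlg log format) is to extract the predicted binding energies from both logfiles and compare them:
# serial run (in a separate serial job), writing to its own logfile
autodock4_serial -p macro.dpf -l macro_serial.dlg
# after both jobs have finished, compare the reported energies
grep "Estimated Free Energy of Binding" macro_serial.dlg macro.dlg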
Autodock4 MP Parallel batch job submission (multi-node)
Note that the MPI version (multi-node) uses a different set of command-line flags to the above two examples. It requires the following options in the order given here:
docking_list_file docking_base_directory status_directory seed_type map_file_usage
where
- docking_list_file: file containing the list of ligands to be docked
- docking_base_directory: base directory containing the ligand directories created with the Python scripts for the virtual-screening setup (e.g. the 'Dockings' directory from the UsingAutoDock4forVirtualScreening_v4.pdf tutorial)
- status_directory: directory in which the status and performance logging files (listed at the end of this page) are written, updated in real time as each docking finishes
- seed_type: Indicator for the value used to seed the RNG; must be one of:
  - same_seed – Use a hard-coded seed value, useful only for performance measurements.
  - default_seed – Use the time of day plus the threadid for the seed.
  - unique_node_seed – Use the time of day plus the threadid plus the MPI rank for the seed.
  Note that in this version the MPI ranks are single-threaded.
- map_file_usage: Indicator for whether or not the grid maps should persist in memory on the slave nodes from docking to docking; the value must be either reuse_maps or reload_maps.
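Putting these options together, the mpirun line in the jobscript (shown in full further below) might look like this hypothetical example, where the paths are placeholders matching the directory layout described under 'Further Advice':
RUNDIR=$(pwd)    # placeholder: the scratch directory holding etc/, results/ and pdbqt_lip_rules/
mpirun -np $NSLOTS autodock4_mpi $RUNDIR/etc/docking.list $RUNDIR $RUNDIR default_seed reuse_maps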
Refer to the paper Multilevel Parallelization of AutoDock 4.2 for further information on these runtime options and for details of the parallelization strategy and performance improvements.
Further Advice on Running MPI Jobs
We have received the following advice from a CSF user who has tested autodock4_mpi:
Assuming you are running the job from your scratch directory, the following files should be placed there:
- receptor.pdbqt – to be read by autogrid4.py, a script that can be downloaded from the AutoDock website. It must be run before the autodock4_mpi job so that the grid map files have already been generated (see below).
- file.gpf – the file you use to create the grid map files. Note that if you reload the grid maps, all of the .gpf files used must also be in that directory.
- ligand_receptor.dpf – a file you create with the prepare_dpf4.py script, which is available on the CSF by running:
module load apps/binapps/mgltools/1.5.6
$ADTUTILS/prepare_dpf4.py -l ligand.pdbqt -r receptor.pdbqt
This will output a ligand_receptor.dpf file by default.
To generate the grid map files with the autogrid4 command, run:
module load apps/intel-12.0/autodock/4.2.5.1
autogrid4 -p sample.gpf -l sample.glg    # This will also read the receptor.pdbqt file
For further information see the tutorial on the AutoDock website.
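A quick check that the maps were produced (the exact file names depend on the receptor name and the atom types set in your .gpf file, so treat these patterns as a rough guide):
ls *.glg *.map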
You will also need to create the following directories using the mkdir command in your scratch area (we assume you are running from your scratch area); a sketch of the setup is given after this list:
- etc/ – here you must save a docking.list file listing the names of all the ligands (the UsingAutoDock4forVirtualScreening_v4.pdf tutorial explains how to create it).
- results/
- pdbqt_lip_rules/ – inside this directory you must save all the ligands.pdbqt files.
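A sketch of the setup from the directory you will run the job in (the exact contents of docking.list are described in the virtual-screening tutorial; stripping the .pdbqt extension below is an assumption, so check it against the tutorial):
mkdir -p etc results pdbqt_lip_rules
cp /path/to/your/ligands/*.pdbqt pdbqt_lip_rules/             # placeholder source path
ls pdbqt_lip_rules | sed 's/\.pdbqt$//' > etc/docking.list    # one ligand name per line (assumed format)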
Make sure you have the modulefile loaded then create a batch submission script, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd                    # Job will run from the current directory
#$ -V                      # Job will inherit current environment settings
#$ -pe orte-24-ib.pe 48    # Run on faster IB-connected h/w (multiple of 24 cores)

# $NSLOTS is automatically set to the number of cores given above
mpirun -np $NSLOTS autodock4_mpi docking_list_file docking_base_directory \
       status_directory seed_type map_file_usage
where docking_base_directory and status_directory should both be set to the directory in which all of the above files have been saved (the scratch directory in our example).
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
Once the autodock4_mpi job has run, you should find:
- The .glg file or files (depending on the number of .gpf files used)
- The .dlg files
- The status files: docking_performance.csv, failed_dockings, submitted_dockings and successful_dockings.
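While the job is running (or after it has finished) you can get a rough idea of progress from the status files, assuming each records one docking per line:
wc -l submitted_dockings successful_dockings failed_dockings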
Further info
Updates
None.