FSL

Overview

FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data.

Versions available on CSF3:

  • 6.0.7 (Singularity container, with GPU support)
  • 6.0.4 (GPU capability being tested)
  • 6.0.3 (GPU capability being tested)
  • 6.0.2 (GPU capability being tested)
  • 6.0.1 (GPU capability being tested)
  • 6.0.0 (no GPU capability)
  • 5.0.6 (legacy, no GPU capability; it is highly recommended you use a newer version)

Due to a reliance on certain Python modules, versions 6.0.0 and 6.0.1 have their own Miniconda-based Python distribution (named fslpython) installed.

Restrictions on use

Access to this software requires membership of a Unix group. Requests for access should be directed to its-ri-team@manchester.ac.uk.

All users MUST read and agree to the FSL license before they can be added to the unix group. The information below provides some guidance, but is not a substitute for the license:

What may FSL be used for?

You may use FSL only for academic research purposes.

What may FSL not be used for?

The licence is for non-commercial use only. The licence defines this as “use for which any financial return is received”, which includes:

  1. integration of all or part of the source code or the Software into a product for sale or license by or on behalf of Licensee to third parties; or
  2. use of the Software or any derivative of it for research with the final aim of developing software products for sale or license to a third party; or
  3. use of the Software or any derivative of it for research with the final aim of developing non-software products for sale or license to a third party; or
  4. use of the Software to provide any service to an external organisation for which payment is received.

How should the program be cited?

To cite the relevant references for FSL tools in publications, look in the individual tools’ manual pages, and also reference one or more of the FSL overview papers, as detailed on the FSL website.

You are not permitted to use the FSL logos and associated trademarks.

Modifications, reproduction, transmission and transference of the software

This is prohibited without express permission from the University of Oxford, except where there is no financial return, provided the terms of the licence are imposed on the receiver and all original and amended source code is included in the transmitted product.

Risks

The University is required to indemnify The University of Oxford against all claims, damages and liabilities asserted by Third Parties which arise directly or indirectly from the use of the Software or sale of any product based on the Software.

Please therefore be aware of and comply with the licence agreement and discuss anything that is unclear with your line management in the first instance to obtain further guidance.

Set up procedure

To access the software you must first load one of the modulefiles:

module load apps/singularity/fsl/6.0.7
    ## Run via a Singularity container with support for GPUs
module load apps/binapps/fsl/6.0.4 
    ## Also loads CUDA. CUDA components not yet tested.
module load apps/binapps/fsl/6.0.3 
    ## Also loads CUDA. CUDA components not yet tested.
module load apps/binapps/fsl/6.0.2
    ## Also loads CUDA. CUDA components not yet tested.
module load apps/binapps/fsl/6.0.1
    ## Also loads CUDA. CUDA components not yet tested.
module load apps/binapps/fsl/6.0.0  
    ## Does not load CUDA as we do not have a compatible driver/library.
module load apps/binapps/fsl/5.0.6 
    ## Does not load CUDA as we do not have a compatible driver/library.

Running the application

6.0.7 – New Method via Singularity Container

The jobscript below runs FSL via the container, optionally using a GPU:

#!/bin/bash --login

#$ -cwd
#$ -pe smp.pe 8
#$ -l nvidia_v100=1  ## this is only required if you want to run using a GPU 

module load apps/singularity/fsl/6.0.7

# XTRACT automatically detects if $SGE_ROOT is set and, if so, uses FSL_SUB.
# Without the line below, fsl starts on the GPU node and then immediately
# tries to submit an SGE job, which fails from a compute node.
export SGE_ROOT=''

# Run xtract inside the container. For optimal performance, use the GPU version!
fsl 'xtract -bpx bedpostx.bedpostX/ -out xtract_here -species HUMAN -gpu'

Please note the correct syntax for running FSL commands via the container:

fsl 'process input1 input2 .. inputx'
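
For example, to run FSL's brain extraction tool, bet, inside the container (the filenames here are placeholders for your own data):

fsl 'bet structural.nii.gz structural_brain.nii.gz'
    ## bet <input> <output> - the quotes ensure the whole
    ## command line is passed through to the container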

Previous Method (versions before 6.0.7)

Some FSL executables (e.g., bedpostx) are run on the login node because they automatically submit batch jobs for you. Others should be submitted to batch in the usual way: write a jobscript, in which it is now recommended that you load your modulefile (rather than loading it on the login node before submission).

Please ensure the processing performed by your FSL app is ultimately done in the batch system (on compute nodes), not the login node.
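
For tools that do not submit jobs themselves, a minimal serial jobscript might look like the following sketch (the flirt input and output filenames are placeholders; MNI152_T1_2mm_brain is one of the standard templates shipped with FSL):

#!/bin/bash --login
#$ -cwd

module load apps/binapps/fsl/6.0.4

# Register a subject image to the MNI152 standard-space template
flirt -in subject_T1.nii.gz \
      -ref $FSLDIR/data/standard/MNI152_T1_2mm_brain.nii.gz \
      -out subject_T1_std.nii.gz

Submit it from the login node with qsub in the usual way.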

You should consider using qrsh for some of the setup steps.
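
For example, an interactive session on a compute node can be requested with qrsh (the resource option shown is illustrative; see the CSF documentation on interactive use for the exact flags available):

qrsh -l short
    ## Start an interactive session on a compute node, then load
    ## the modulefile there and run your setup steps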

A list of the components that can be used with the batch system, SGE, can be found on the FSL website – please note, they have not all been tested on the CSF.

fslpython has not had any special configuration added to the modulefile. It can be accessed at the following path: $FSLDIR/fslpython.
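
For example, to run a script with fslpython (my_script.py is a placeholder; the interpreter path below assumes the standard fslpython conda layout, so check the contents of $FSLDIR/fslpython on the CSF):

$FSLDIR/fslpython/envs/fslpython/bin/python my_script.py
    ## Path assumes the standard Miniconda layout used by fslpython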

Serial batch job submission

You do not need to write the usual batch submission script for some FSL tools. They submit to batch automatically when you reach the right stage in your setup. For example, the command tbss_2_reg submits a serial job array, while melodic can submit from the GUI (when you press Go) and will split each stage into separate jobs. The bedpostx tool will submit three batch jobs: a pre-processing job, a job array to do the main processing, then a post-processing job.
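
As an illustration, the TBSS registration stage can be started directly from the login node once the earlier TBSS stages have prepared your study directory (see the FSL TBSS documentation for the full workflow):

# Run from your TBSS study directory, after tbss_1_preproc
tbss_2_reg -T
    ## -T registers every subject to the standard FMRIB58_FA target;
    ## a serial job array is submitted on your behalf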

All FSL jobs will go to the main 7-day pool of serial cores, which have 4GB of memory per core. It is not possible to add flags to request specific resources such as highmem, haswell, short, etc.

Note: Please consider this software to be still undergoing testing. Depending on what commands you are using you may find that jobs fail or do not submit. If this is the case it may indicate that some configuration is still required. Please report problems to the CSF Team via its-ri-team@manchester.ac.uk.

Parallel batch job submission

As noted above, some FSL tools will automatically submit jobs to the batch system. Hence you do not need to write a jobscript for those tools. Instead you run the commands directly on the CSF login node and batch jobs will be submitted on your behalf. Other tools do require a jobscript and are submitted to the batch system in the usual way.

We have tested the bedpostx command so far. This submits a number of jobs: an initial serial pre-processing job, a job array to process multiple input files (data slices) in parallel, and then a final serial post-processing job. The parallelism comes from the job array, which runs many copies of the executable to process many input files. Each individual executable is a serial (1-core) program, but the fact that many (hundreds) of them can run in parallel to process all of your data slices gives you a speed-up in your processing.

The following example should be run on the CSF login node.

# After loading the appropriate modulefile (see above)...

# Now run bedpostx on the login node - it will submit jobs for you
bedpostx ~/scratch/data/subject_directory -n 1 -w 1 -b 1000 -c
                  #                                          #
                  #                                          # Turn off checking for GPU support
                  #                                          # (not currently available on CSF)
                  # Change to suit your requirements
qstat
  #
  # You should see three jobs queued

Documenting the flags passed to bedpostx is beyond the scope of this document – you should read the FSL manual (see below) for how to run bedpostx and for a description of the data files required. For a summary of the options, run the following on the login node:

module load apps/binapps/fsl/6.0.0
bedpostx

Error: ‘is not a submit host’

FSL can try to be a bit too clever by detecting the SGE batch system and submitting a job. Without the export SGE_ROOT='' line in the jobscript below, FSL tries to submit an SGE job when it runs on the GPU node – which will naturally fail with the error output: Unable to run job: denied: host “node810.pri.csf3.alces.network” is not a submit host

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 8
#$ -l nvidia_v100=1

module load apps/binapps/fsl/6.0.4

# XTRACT automatically detects if $SGE_ROOT is set and if so uses FSL_SUB. For optimal performance, use the GPU version!
export SGE_ROOT=''

xtract -bpx bedpostx.bedpostX/ -out xtract_here -species HUMAN -gpu

Further info

  • FSL website: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki