WPS

Overview

WPS is the preprocessing system for the Weather Research and Forecasting Model (WRF).

Versions 4.0.3 and 4.3.1 are installed on CSF4. We are awaiting the release of v4.3.2.

Restrictions on use

WPS is distributed as part of the open-source WRF modelling system, which has been released with no restrictions on use (licensing or otherwise).

Set up procedure

We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this.

Load one of the following modulefiles:

# Note that WPS 4.3.3 and 4.3.2 are not available yet even though WRF 4.3.3 and 4.3.2 are.
module load wps/4.3.1-foss-2020a-dmpar
module load wps/4.0.3-foss-2020a-dmpar

Please see the WRF instructions for the corresponding WRF modulefiles.
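
If you are unsure which versions are currently installed, you can list the available WPS modulefiles (the exact output depends on what is installed at the time):

module avail wps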

Copy the running directory (which contains standard input files required by WPS) for your version of the model using:

# Work in your scratch area. For example:
cd ~/scratch

# Copy the running directory from the central install to your current directory
cp -a $WPS_RUNDIR scenario_name

This will create a directory named scenario_name in the current directory.

Change into the new directory, and link to the geography data directory:

cd scenario_name
ln -s $WPS_GEOG geog
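
To confirm the link was created correctly you can list it; the target shown should be the central geography data directory pointed to by $WPS_GEOG:

ls -ld geog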

Running the application

Please do not run the WPS processes on the login node – they are computationally intensive, and some require MPI parallelisation to run properly. Jobs should be submitted to the compute nodes via the batch system.

Serial batch job submission

If you need to run any of the WPS utilities using a single core, the following serial jobscript would be suitable:

#!/bin/bash --login
#SBATCH -p serial      # (or --partition=serial) Optional line - default is serial

# Load the version you require, first removing any inherited modules
module purge
module load wps/4.3.1-foss-2020a-dmpar

# Run the required serial WPS utility (for example ungrib.exe)
ungrib.exe
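
Submit the jobscript with sbatch. The filename used below (wps-serial.sbatch) is just an example - use whatever name you saved your jobscript under:

sbatch wps-serial.sbatch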

Parallel multicore batch job submission

If you wish to run any of the WPS utilities using multiple cores, the following multicore jobscript would be suitable:

#!/bin/bash --login
#SBATCH -p multicore    # (or --partition=) Multicore (single node) parallel job
#SBATCH -n 8            # (or --ntasks=) Number of cores - can be 2-40

# Load the version you require, first clearing any inherited modules
module purge
module load wps/4.3.1-foss-2020a-dmpar

# Run your required WPS executable (geogrid.exe or metgrid.exe)
mpirun geogrid.exe
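
Submit the jobscript with sbatch in the same way as the serial example. While the job is pending or running you can check its state in the queue (the filename below is illustrative):

sbatch wps-multicore.sbatch
squeue -u $USER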

Parallel multi-node batch job submission

If you wish to run any of the WPS utilities using multiple cores on multiple compute nodes, the following multi-node jobscript would be suitable:

#!/bin/bash --login
#SBATCH -p multinode    # (or --partition=) Multinode parallel job
#SBATCH -n 80           # (or --ntasks=) Number of cores: 80 or more, in multiples of 40

# Load the version you require, first clearing any inherited modules
module purge
module load wps/4.3.1-foss-2020a-dmpar

# Run your required WPS executable (geogrid.exe or metgrid.exe)
mpirun geogrid.exe
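
When run under MPI, geogrid and metgrid typically write one log file per task (geogrid.log.0000, geogrid.log.0001, and so on). A quick way to check that a run finished cleanly is to search those logs for the completion message (shown here for geogrid; adapt the name for metgrid):

grep "Successful completion" geogrid.log.*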

Further info

Updates

None.
