The CSF2 has been replaced by the CSF3 - please use that system! This documentation may be out of date. Please read the CSF3 documentation instead.
WRF and WPS
Overview
The Weather Research & Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both atmospheric research and operational forecasting needs.
Version Installed
WPS + WRF ARW (Advanced Research WRF) v3.6, compiled for real-data cases (distributed memory parallelism – dmpar – version) with basic nesting, is installed on the CSF.
3.6
Version 3.6 has been compiled for use on AMD Bulldozer nodes only.
3.8
Version 3.8 has been compiled for use on Intel nodes only.
WRF Tutorial for CSF Users
All users of WRF on the CSF are strongly encouraged to work through the tutorial on the UoM WRF Community's WRF / CSF tutorial page. This provides a complete example of a WRF simulation, including pre-processing with WPS and post-processing with NCL. Thanks to Jonathan Fairman (CAS) for developing this tutorial.
Compilation Info
3.6
WPS + WRF and their dependency libraries were compiled using the PGI 13.6 compilers with ACML (maths libraries) optimized for Bulldozer FMA4 instructions (see the modulefiles below). The following compiler flags were used:
-tp bulldozer -O3 -fast
See RAC Community Wiki WRF Build Pages for how this version was compiled.
3.8
WPS + WRF and their dependency libraries were compiled using Intel 15.0, using the following compiler flags:
-w -O3 -ip -msse2 -axSSE4.2,AVX,CORE-AVX2
See here for build details.
Related Tools
For post-processing of WRF results, NCL is installed on the CSF. This is a binary install, so it can be run on any CSF node (not just the AMD Bulldozer nodes).
Restrictions on use
None — Public Domain.
Set up procedure
To access the software you must first load the appropriate modulefile from the options below (if you’re not sure, it’s probably the first one):
module load apps/intel-15.0/WRF/3.8
module load apps/pgi-13.6-acml-fma4/wrf/3.6-ib-amd-bd
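For example, after loading the 3.8 modulefile you can check that the dependency modulefiles listed below have been pulled in and that the executables are on your PATH (a quick sanity check, assuming the modulefile names above are current):
module load apps/intel-15.0/WRF/3.8
module list          # should show the Intel compiler, Open MPI, NetCDF, HDF5 and zlib modulefiles
which wrf.exe        # confirms that $WRF_DIR/run has been added to your PATH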
Settings applied by the WRF modulefile
The WRF modulefiles will automatically load the following modulefiles, which indicate the dependencies used when compiling WRF:
3.6
# These are automatically loaded for you
compilers/PGI/13.6-acml-fma4                    # PGI 13.6 with optimized maths libraries
mpi/pgi-13.6-acml-fma4/openmpi/1.6-ib-amd-bd    # OpenMPI 1.6 with InfiniBand
libs/pgi-13.6-acml-fma4/zlib/1.2.8-ib-amd-bd    # ZLIB compression
libs/pgi-13.6-acml-fma4/hdf/5/1.8.13-ib-amd-bd  # HDF-5 (serial I/O)
libs/pgi-13.6-acml-fma4/netcdf/4.3.2-ib-amd-bd  # NetCDF4 inc FORTRAN libraries
3.8
# These are automatically loaded for you
compilers/intel/c/15.0.3
compilers/intel/fortran/15.0.3
mpi/intel-15.0/openmpi/1.8.30-ib
libs/intel-15.0/netcdf/4.4.0
libs/intel-15.0/zlib/1.2.8
libs/intel-15.0/hdf/5/1.8.16
Both versions
Both wrf modulefiles will set the following environment variables (for convenient access to the installation directories and for some compilation settings):
- $WRF_DIR – the WRFV3 installation directory
- $WPS_DIR – the WPS installation directory
- $WPS_GEOG – the new static WPS_GEOG data installation directory
- $JASPERLIB – location of JPEG libraries for GRIB2
- $JASPERINC – location of JPEG headers for GRIB2
- $WRFIO_NCD_LARGE_FILE_SUPPORT – set to 1 to enable compilation of Large File support in NetCDF
- $NETCDF4 – set to 1 to enable compilation of NetCDF support
To see the actual values of these variables (e.g. so you can put the WPS_GEOG directory in a namelist file) run:
echo $WPS_GEOG
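The most common place this value is needed is the geog_data_path entry in the &geogrid section of namelist.wps. A minimal sketch of that fragment, with a placeholder path that you should replace with the value printed by the echo command above (your full &geogrid section will contain other entries as well):
&geogrid
 geog_data_path = '/replace/with/the/path/printed/above',
/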
The $WRF_DIR/run/ and $WPS_DIR directories will also be added to your $PATH environment variable.
Running the application
Please do not run WRF on the login node. Jobs should be submitted to the compute nodes via the batch system.
Important: you must run WRF in your scratch directory, not your home directory. WRF input and output files can be large and you can generate lots of them. You will very likely fill up the home filesystem which is shared with other members of your group. This will cause your jobs and other users’ jobs to fail – resulting in very unhappy colleagues! Please run in scratch. Any important results (and small input/config files such as namelist files and batch jobscripts) can be copied back to home for safe-keeping (home is backed up, scratch is not).
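A typical workflow is sketched below. The directory and file names are only examples, and it assumes your scratch area is available as ~/scratch:
# Create a run directory on scratch and work from there
mkdir -p ~/scratch/wrf_run                 # example directory name
cd ~/scratch/wrf_run
cp ~/namelist.input ~/wrf-job.sh .         # copy small config files and jobscript from home (example names)
# ...prepare input data and submit your jobs from this directory (see below)...
# Afterwards, copy small but important files back to home for safe-keeping
cp namelist.input wrf-job.sh ~/wrf_results/    # example destination directory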
Parallel batch job submission – all versions
The WRF and WPS executables were built as dmpar executables during WRF config (i.e., they use MPI). The following are available:
- In $WRF_DIR/run/: ndown.exe, nup.exe, real.exe, tc.exe, wrf.exe
- In $WPS_DIR/: geogrid.exe, metgrid.exe, ungrib.exe (this one is not an MPI executable)
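For reference, a typical WPS pre-processing sequence using these executables looks like the following. This is a generic sketch of the standard WPS workflow (run it inside a batch job or an interactive session, not on the login node), and it assumes you have already edited namelist.wps and know where your input GRIB data lives:
cd ~/scratch/wrf_run                        # your run directory on scratch (example name)
mpirun -n $NSLOTS geogrid.exe               # create the model domain(s) from the static WPS_GEOG data
link_grib.csh /path/to/your/grib/files/*    # link the input GRIB files (path is an example)
ln -sf $WPS_DIR/ungrib/Variable_Tables/Vtable.GFS Vtable    # Vtable for your data source (GFS is only an example)
ungrib.exe                                  # decode the GRIB files (not an MPI executable)
mpirun -n $NSLOTS metgrid.exe               # interpolate the decoded fields onto the model domain(s)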
Version 3.8 only
Make sure you have loaded the modulefile, then create a batch submission script using the template $WRF_DIR/run/qsub-wrf.sh as an example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd                  # Job will run from the current directory
#$ -V                    # Job will inherit current environment settings
#$ -N WRF_job            # Give the job a name of your choosing.
#$ -m bea                # Send yourself an email when job starts, ends, or on error.
#$ -M firstname.surname@manchester.ac.uk    # Change to your email address.

##### Multi-node MPI #####
#$ -pe orte-24-ib.pe 48 -l haswell          # Use a multiple of 24

mpirun -n $NSLOTS ./wrf.exe                 # $NSLOTS is automatically set to number of cores
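If you also need to run real.exe to generate the WRF input and boundary files, the same jobscript layout can be used. This is the standard WRF workflow rather than anything CSF-specific; a sketch of the final lines of such a script:
# Run real.exe first to produce the wrfinput_d0* and wrfbdy_d01 files, then run the model
mpirun -n $NSLOTS ./real.exe
mpirun -n $NSLOTS ./wrf.exe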
Version 3.6 only
Note: if you try to run any of the above executables on the login node you will get an error message:
Illegal instruction
This is because the executables are compiled for the AMD Bulldozer architecture. The login node uses an Intel CPU.
Make sure you have the modulefile loaded, then create a batch submission script to run on the AMD Bulldozer nodes, for example:
#!/bin/bash
#$ -S /bin/bash
#$ -cwd                  # Job will run from the current directory
#$ -V                    # Job will inherit current environment settings

########### Choose ONE of these PEs ##########
##### Multi-node MPI #####
#$ -pe orte-64bd-ib.pe 128    # 128 cores or more in this PE, multiples of 64 only
##### Single-node MPI or OpenMP #####
#$ -pe smp-64bd.pe 64         # 64 cores or fewer

mpirun -n $NSLOTS real.exe
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
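You can then monitor the job with the usual batch system commands, for example (standard SGE commands, not specific to WRF):
qstat                # list your queued and running jobs
qstat -j <jobid>     # more detail about a particular job (replace <jobid> with the id reported by qsub)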
Interactive Use and Compilation
If you need to compile a model against the WRF installation or wish to quickly run one of the WPS tools then you’ll need to start an interactive session on the Bulldozer short node as follows:
qrsh -l bulldozer -l short
#
# Wait for new prompt (or try again later if asked)
# Now set up environment on backend Bulldozer node
module load apps/pgi-13.6-acml-fma4/wrf/3.6-ib-amd-bd
You can now run WRF and WPS tools interactively. However, please note:
- There is only one interactive Bulldozer node. Do not run on all 64 cores.
- Maximum runtime is 12 hours.
- The above qrsh command reserves you only one core. If you plan to run small MPI jobs interactively you must reserve the correct number of cores (a fuller example follows this list), using:
qrsh -l bulldozer -l short -pe smp-64bd.pe 8    # e.g., 8 cores
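A minimal end-to-end sketch of an interactive MPI run under these constraints (the core count, directory name and executable are only examples; $NSLOTS should be set automatically to the number of reserved cores, otherwise pass the number explicitly):
qrsh -l bulldozer -l short -pe smp-64bd.pe 8    # reserve 8 cores on the interactive node
# ...wait for the prompt on the backend Bulldozer node, then:
module load apps/pgi-13.6-acml-fma4/wrf/3.6-ib-amd-bd
cd ~/scratch/wrf_run                            # your run directory on scratch (example name)
mpirun -n $NSLOTS geogrid.exe                   # run one of the WPS/WRF tools on the reserved cores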
Further info
- WRF / CSF tutorial page
- NCL on the CSF for post-processing and visualization of WRF results.
- WRF website – General WRF info
- WRF ARW info – Details about the Advanced Research WRF (ARW) version used on CSF
- WRF NMM info – Details about the Nonhydrostatic Mesoscale Model (NMM) version which is NOT currently installed on CSF (but may be of interest)
- WRF ARW tutorial including the build procedure followed on the CSF
- RAC Community Wiki WRF Build Pages – step-by-step details of the build of WRF 3.6 on the CSF
Updates
None.