GNU Compilers

Overview

The GNU Compiler Collection supports a number of programming languages.

Several versions are available on the CSF – please see the table below.

Advice on programming in Fortran or C is currently beyond the scope of this webpage.

Restrictions on use

Code may be compiled on the login node, but aside from very short test runs (e.g., one minute on fewer than 4 cores), executables must always be run by submitting them to the batch system, SGE. If you need to run a bigger test than this, please use the batch system or qrsh (see below).

Set up procedure

This depends on which version you require.

Version   Commands / compilers available   Module required                     Additional notes
14.2.0    gcc, g++, gfortran               module load compilers/gcc/14.2.0
14.1.0    gcc, g++, gfortran               module load compilers/gcc/14.1.0
13.3.0    gcc, g++, gfortran               module load compilers/gcc/13.3.0    Use this or newer if optimizing for the AMD nodes
12.2.0    gcc, g++, gfortran               module load compilers/gcc/12.2.0
11.2.0    gcc, g++, gfortran               module load compilers/gcc/11.2.0
9.3.0     gcc, g++, gfortran               module load compilers/gcc/9.3.0
8.2.0     gcc, g++, gfortran               module load compilers/gcc/8.2.0
6.4.0     gcc, g++, gfortran               module load compilers/gcc/6.4.0
4.8.5     gcc, g++, gfortran               None                                System default, used if no modulefile is loaded
4.2.3     gcc, g++, gfortran               module load compilers/gcc/4.2.3

Loading (or swapping) one of these modules sets the correct LD_LIBRARY_PATH for the chosen compiler version.
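
For example, to make GCC 14.2.0 the active compiler and confirm which gcc is then found on your PATH:

module load compilers/gcc/14.2.0
gcc --version       # the reported version should match the loaded module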

Running the application

Example Code Compilations

The following are minimal compilation commands.

gcc hello_world.c -o hello

gfortran hello_fworld.f -o f77hello      # gfortran treats .f as fixed-form (Fortran 77) source; a .f77 suffix is not recognized
gfortran hello_fworld.f95 -o f95hello    # .f95 is free-form (Fortran 95) source
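
The table above also lists g++; a minimal C++ compilation looks much the same (assuming a source file named hello_world.cpp):

g++ hello_world.cpp -o cpphello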

Optimizing Flags for CSF Hardware

Note that, in general, you will not need to recompile or reinstall any applications, Python environments, R packages or conda environments for the AMD Genoa nodes. Code that you previously compiled for older CSF nodes will run perfectly well on the new nodes. However, you may wish to recompile to see whether the compiler can optimize your code for the newer hardware.

The AMD Genoa hardware provides the AVX, AVX2 and AVX-512 vector instructions found in the CSF's Intel CPUs, so applications are expected to perform at least as well on the new nodes. A full discussion of this hardware is beyond the scope of this page; please see the AMD documentation if you want more in-depth information.

You may wish to recompile your code so that it is optimized a little more for the AMD nodes. We will be providing more information about this in the coming months, but for now we offer some advice below.

We recommend using the GCC 13.3.0 compiler (or newer) as this supports the AMD znver4 microarchitecture, which enables the AVX-512 extensions.
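
For example, to build with the AVX-512 extensions enabled for the Genoa CPUs (the source file name and the -O2 optimization level here are illustrative):

module load compilers/gcc/13.3.0
gcc -O2 -march=znver4 hello_world.c -o hello    # optimized for the AMD Genoa microarchitecture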

AMD provide some recommended compiler flags (PDF) for use with various compilers (the GNU Compiler Collection, Intel OneAPI C/C++ and the AMD AOCC compiler). You will need to use at least an architecture flag to enable the AVX-512 extensions available in the Genoa CPUs:

# GNU compilers
-march=znver4                           # Code will only run on AMD Genoa and Intel Skylake (or newer)
-march=haswell -mtune=znver4            # Code will run on all CSF3 node types, with some further
                                        # tuning for the AVX-512 extensions found in the AMD and
                                        # Intel Skylake nodes where possible. 

# Intel OneAPI compilers
-mavx2 -axCORE-AVX512,CORE-AVX2,AVX     # Code will run on all CSF3 node types, with AVX-512
                                        # instructions enabled if supported

# AMD AOCC compilers (not yet installed on the CSF - coming soon)
-march=znver4                           # Code will only run on AMD Genoa and Intel Skylake (or newer)

# Note that the above flags can be applied when compiling code on the login nodes.
# An alternative is to login to the AMD nodes, using qrsh, and then compile for
# the "current" node's architecture, using:
-march=native

The above PDF provides further optimization flags you may wish to use in addition to the above architecture flags.
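
If you are unsure which microarchitecture -march=native resolves to on a particular node, GCC can report it. A quick check (run on the node in question) is:

gcc -march=native -Q --help=target | grep march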

An example of the common configure command, using the flags we've applied when installing applications on CSF3 (SGE or Slurm), is:

./configure CFLAGS='-march=haswell -mtune=znver4' CPPFLAGS='-march=haswell -mtune=znver4' --prefix=path/to/install/area
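
As usual for autotools-based packages, this is typically followed by:

make            # build the application
make install    # install into the --prefix directory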

Serial batch job submission

To submit a single-core batch job to SGE:

  • Make sure you have the correct module loaded if appropriate (see table above).
  • An example SGE qsub script for use with a binary executable called myfortranprog, compiled using the GNU compilers:
    #!/bin/bash --login
    #$ -cwd              # Use the current directory
    
    # Load the software
    module load compilers/gcc/6.4.0
 
    # Run the code
    ./myfortranprog
  • To submit the job (replace 'jobscript' with the name of your file):
     qsub jobscript
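
You can then check the job's status with qstat; when the job finishes, SGE writes its stdout and stderr to files in the submission directory, named after the jobscript by default:

qstat                      # list your queued and running jobs
cat jobscript.o<jobid>     # the job's stdout (<jobid> is reported by qsub)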

Parallel batch job submission

Your code, and thus the resulting executable, must use OpenMP and/or MPI in order to run in parallel. Please see the CSF documentation on OpenMP and MPI jobs for details of how to submit batch jobs of these types to SGE; a brief OpenMP sketch is given below.
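
For example, a minimal sketch of an OpenMP jobscript (this assumes the smp.pe parallel environment and a 4-core run; please check the CSF parallel-jobs documentation for the correct settings, and my_openmp_prog is a placeholder name):

#!/bin/bash --login
#$ -cwd                    # Use the current directory
#$ -pe smp.pe 4            # Request 4 cores (assumed PE name - check the CSF docs)

# Load the compiler used to build the code
module load compilers/gcc/6.4.0

# Run with one OpenMP thread per core allocated by SGE
export OMP_NUM_THREADS=$NSLOTS
./my_openmp_prog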

Testing via qrsh and batch

qrsh can be used to gain interactive access to a compute node (a limited amount of resource is reserved for this). This is useful for both compiling and testing your code. For example:

qrsh -l short
module load compilers/gcc/6.4.0
gcc hello_world.c -o hello
./hello

You can also add

#$ -l short

to your jobscript; the job will then be submitted to a very small section of the cluster that has a maximum runtime of 1 hour. The short option is not valid for production runs; please submit those to the cluster in the usual way.

Further info

  • Online manuals available from the command line:
     man gcc
         # for the C/C++ compiler

     man gfortran
         # for the Fortran compiler
  • GNU Compiler Collection website
  • If you require advice on programming matters, for example how to debug a code, or how to use MKL, please email its-ri-team@manchester.ac.uk
