Intel oneAPI

This is the recommended method of accessing compilers from Intel.

oneAPI is a suite of tools from Intel, including compilers. We have installed the Base Toolkit and the HPC Toolkit. See below for the helper modulefile to load, which will then make the individual Intel modulefiles available.

Detailed information about the oneAPI products can be found on the Intel oneAPI website.

To access Intel oneAPI you must first load a helper modulefile, a precursor to loading the compiler or other components of the toolchain. Load one of the following, depending on the version you require:

module load compilers/intel/oneapi/2023.1.0
module load compilers/intel/oneapi/2024.2.0
module load compilers/intel/oneapi/2025.0.1

This gives you access to the Intel oneAPI modules. You MUST then load the tool you are interested in, for example:

module load compiler/<version>     # Provides icx/icpx/ifx
module load vtune/<version>        # Provides the Intel VTune Profiler
module load mkl/<version>          # Provides the oneAPI Math Kernel Library (oneMKL)

# Example:
module load compiler/2025.0.1
module load vtune/2025.0.1
module load mkl/2025.0.1

For a list of all available Intel oneAPI components, run the following command after loading the helper file:

module search <version>

Example: oneAPI 2025.0.1 helper modulefile:

[username@login1[csf3] ~]$ module load compilers/intel/oneapi/2025.0.1
Ability to load oneAPI 2025.0.1 components added to your setup.
You MUST now either:

   Load modulefiles of the required tools, e.g
   (Note that individual components may have their own version numbers.)

     module load umf compiler-rt tbb compiler
     module load vtune

   Or source the traditional Intel environment script:

     source $ONEAPIDIR/setvars.sh

To list modulefiles of oneAPI tools, run:
     module search 2025.0
OR   module keyword 2025.0

[username@login1[csf3] ~]$ module load umf compiler-rt tbb compiler

These modulefiles will set your environment to use the new Intel compilers, including:

C compiler:       icx
C++ compiler:     icpx
Fortran compiler: ifx

# Intel oneAPI has deprecated the icc, icpc, and ifort compilers.
# If required, they are still available by loading version 2023.1.0.
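
To confirm the compilers are available after loading the modulefiles, you can check their versions:

icx --version     # C compiler
icpx --version    # C++ compiler
ifx --version     # Fortran compiler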

Compiling your Code

We first give examples of how to compile, then how to run the compiled executable.

Example Fortran compilation

Make sure you have loaded the modulefile first (see above).

ifx hello.f90 -o hello
   #
   # Generates a binary executable "hello" from source code file "hello.f90"
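
If you do not have a source file to hand, a minimal test program can be created with a shell heredoc and compiled as follows (the hello.f90 contents here are an illustrative sketch):

cat > hello.f90 <<'EOF'
program hello
  print *, 'Hello from ifx'
end program hello
EOF
ifx hello.f90 -o hello
./hello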

Example C compilation

Make sure you have loaded the modulefile first (see above).

icx hello.c -o hello
 #
 # Generates a binary executable "hello" from source code file "hello.c"...

Example C++ compilation

Make sure you have loaded the modulefile first (see above).

icpx hello.cpp -o hello
  #
  # Generates a binary executable "hello" from source code file "hello.cpp"...

Note that it is perfectly acceptable to run the compiler as a batch job. This is recommended if you have a large compilation to perform, for example one that has a lot of source files and libraries to compile.
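
An example compilation jobscript is shown below. This is a minimal sketch; the build command, partition and time limit are illustrative assumptions:

#!/bin/bash --login
#SBATCH -p serial
#SBATCH -t 0-1        # Wallclock time limit (1 hour; adjust for your build)

# Load the helper file and required components
module purge
module load compilers/intel/oneapi/2025.0.1
module load umf compiler-rt tbb compiler/2025.0.1

# Run your build command (myprog.c is a hypothetical source file;
# replace with your own compile commands or e.g. make)
icx myprog.c -o myprog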

Running your Compiled Code

Once you have compiled your application, you will run it as a batch job on the CSF in the normal way, for example to do some data processing.

You must ensure the same compiler modulefile used to compile your application is loaded when you run your application as a batch job. This is because there are system libraries (and other libraries such as the MKL) that are specific to the compiler version. If you compile your code with Intel compiler 2025.0.1, say, then you must run your application with that compiler’s modulefile loaded.

We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this.

Serial Job submission

Ensure that you have the same Fortran or C/C++ modulefiles loaded that were used to compile the application. Then create a batch script similar to:

#!/bin/bash --login
#SBATCH -p serial
#SBATCH -t 1-0        # Wallclock time limit (e.g., 1-0 is 1 day)

# Load the helper file and required components
module purge
module load compilers/intel/oneapi/2025.0.1
module load umf compiler-rt tbb compiler/2025.0.1

# Run your application, found in the current directory
./hello

Submit the job to the batch system using:

sbatch jobscript

where jobscript is replaced with the name of your submission script.

Parallel Job submission

Your code, and thus the resulting executable, must use OpenMP (for single-node multicore jobs) and/or MPI (for multi-node multicore jobs) in order to run in parallel. Please see the CSF documentation on OpenMP and MPI jobs for how to submit batch jobs of these types to Slurm. A minimal OpenMP sketch is shown below.
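
The following OpenMP jobscript is a minimal sketch only; the multicore partition name and core count are illustrative assumptions, so check the parallel job documentation for the correct settings:

#!/bin/bash --login
#SBATCH -p multicore   # Single-node parallel partition (name is an assumption)
#SBATCH -c 8           # 8 cores for one multi-threaded task
#SBATCH -t 1-0         # Wallclock time limit

# Load the helper file and required components
module purge
module load compilers/intel/oneapi/2025.0.1
module load umf compiler-rt tbb compiler/2025.0.1

# Use all cores allocated by the batch system
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Run your OpenMP application (compiled with -qopenmp), found in the current directory
./hello_omp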

Useful Compiler Flags

Please see the relevant man pages. Note that ifx, icx and icpx generally have the same set of flags but will differ for language-specific features.

We suggest a few commonly used flags below but this is by no means an exhaustive list. Please see the man pages.

Option                     Description
-help                      Print available options
-O0, -O1, -O2, -O3         Optimisation level (higher = faster, less debug info)
-g                         Include debugging information
-debug inline-debug-info   Debug info includes inlined functions
-qopenmp                   Enable OpenMP parallelisation
-qopenmp-simd              Enable OpenMP SIMD directives
-march=arch                Target specific microarchitecture (e.g. core-avx2)
-mtune=arch                Optimise for specific microarchitecture
-I/path                    Add include directory to search
-L/path                    Add linker search directory
-qopt-report=5             Generate optimisation report
-qopt-report-phase=vec     Optimisation report focused on vectorisation
-std=gnu++17               Set C++ standard (similar for C and Fortran)
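
For example, to compile with optimisation, OpenMP and a vectorisation-focused optimisation report (myprog.c is a hypothetical source file):

icx -O2 -qopenmp -qopt-report=5 -qopt-report-phase=vec myprog.c -o myprog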

Optimising Flags for CSF Hardware

The CSF contains compute nodes with a range of Intel and AMD CPUs. Intel nodes include Ivybridge (AVX), Haswell/Broadwell (AVX2), and Skylake (AVX512). AMD Genoa nodes support AVX, AVX2, and AVX512 instructions. Applications are expected to run at least as well on AMD Genoa as on Intel hardware.

General Recommendations

No recompilation is required for standard software on AMD Genoa nodes; everything runs out of the box.

If you wish to optimise specifically for AMD Genoa, use GCC version 13.3.0 or newer, which supports the AMD znver4 microarchitecture and AVX-512 instructions; see the GNU Compiler page for details.
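
For example, with a suitably recent GCC (myprog.c is a hypothetical source file):

gcc -O3 -march=znver4 myprog.c -o myprog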

AMD provides recommended compiler flags (see their PDF guide) for the GNU and Intel oneAPI compilers to enable full use of AVX-512.

Intel oneAPI Compiler Flags for Optimal Performance

-mavx2 -axCORE-AVX512,CORE-AVX2,AVX
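
For example (myprog.f90 is a hypothetical source file):

ifx -O2 -mavx2 -axCORE-AVX512,CORE-AVX2,AVX myprog.f90 -o myprog

Here -mavx2 sets the baseline instruction set, while -ax adds extra code paths, selected automatically at run time, for the listed architectures.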

Compiling directly on AMD/Intel nodes

Note that the above flags can be applied when compiling code on the login nodes. An alternative is to log in to the AMD/Intel compute nodes, using srun, and then compile for the "current" node's architecture using:

-march=native

This flag can be used directly on the login nodes or interactively on hosts of a specific architecture.
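
A sketch of this workflow is below; the srun options are illustrative assumptions, so check the CSF interactive-job documentation for the correct flags:

# Start an interactive session on a compute node (options are assumptions)
srun -p serial --pty bash

# On the compute node, load the compiler and build for that node's architecture
module load compilers/intel/oneapi/2025.0.1
module load umf compiler-rt tbb compiler
icx -O2 -march=native myprog.c -o myprog   # myprog.c is a hypothetical source file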

Example: Building with Common Configure Command

When building applications from source on the CSF, a typical configure line using the Intel oneAPI compilers is as follows; this will allow the app to run on both Intel and AMD nodes:

./configure 'CFLAGS=-mavx2 -axCORE-AVX512,CORE-AVX2,AVX' 'CXXFLAGS=-mavx2 -axCORE-AVX512,CORE-AVX2,AVX' --prefix=path/to/install

The Intel compilers will inform you when they compile a function they think can benefit from optimisations specific to a particular architecture. You'll see a message of the form:

filename.c(linenum): remark: function_name has been targeted for automatic cpu dispatch

Intel Math Kernel Library

The Intel Math Kernel Library (MKL) is a collection of high-performance, multithreaded mathematical libraries designed for tasks such as linear algebra, fast Fourier transforms, vector mathematics, and more.

Set-up

To load the default MKL, run the following commands:

module load compilers/intel/oneapi/2025.0.1   # Loads the Intel oneAPI helper file
module load umf compiler-rt tbb compiler
module load mkl/2025.0
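
With the above loaded, a simple way to compile and link against MKL with the Intel compilers is the -qmkl flag. A minimal sketch, assuming a hypothetical source file mkl_test.c that calls MKL routines (e.g. the BLAS):

icx -qmkl mkl_test.c -o mkl_test              # Links the multi-threaded MKL (default)
icx -qmkl=sequential mkl_test.c -o mkl_test   # Links the sequential MKL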

Intel MKL Link Line Advisor

Intel Math Kernel Library (MKL) includes a variety of libraries to support different environments, tools, and programming interfaces. To determine the most suitable libraries and linking options for your specific use case, you can use the Intel MKL Link Line Advisor. This tool helps you generate the appropriate linking commands and linker flags tailored to your development environment.
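
As an illustration only (generate your own line for your exact configuration), the advisor produces link lines similar to the following for a C program using the LP64 interface and Intel threading; $MKLROOT is typically set when the mkl modulefile is loaded:

-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl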
