CFX

Overview

CFX is a general purpose computational fluid dynamics (CFD) software tool from ANSYS.

Restrictions on use

CFX is installed alongside ANSYS Fluent. You will need to be in the fluent Unix group to access it; only MACE users may be added to this group.

Set up procedure

You must load one of the following ANSYS modulefiles to access CFX. Several versions are available:

module load apps/binapps/ansys/2024R2          # Max job size: 32 cores
module load apps/binapps/ansys/2023R1          # Max job size: 32 cores
module load apps/binapps/ansys/2021R1          # Max job size: 32 cores

# Legacy modulefiles. These may not work on CSF3 with Slurm:
module load apps/binapps/cfx/19.2
module load apps/binapps/cfx/18.1

If you wish to compile your own user-defined routines (e.g., a Fortran .F file to be compiled into your simulation), you should also load one of the Intel Compiler modulefiles. For example:

module load compilers/intel/17.0.7

See the CSF Intel Compiler page for more details of available versions.
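User Fortran routines are built into a shared library with the CFX cfx5mkext utility before the solver run. A minimal sketch is below; the source file name my_routine.F is a placeholder, not a file from this guide:

# Load the ANSYS and compiler modulefiles first
module purge
module load apps/binapps/ansys/2024R2
module load compilers/intel/17.0.7

# Build a double-precision shared library from the (hypothetical) user
# routine. cfx5mkext places it in a platform-specific subdirectory of
# the current directory, which the solver picks up at run time.
cfx5mkext -double my_routine.F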

Running the application

Please do not run CFX on the login node.

CSF2 users should no longer use the fluent-smp.pe parallel environment. Please see below for how to run the application on CSF3.

Ansys CFX in serial mode

The main command to run CFX is cfx5solve. By default cfx5solve will run a simulation in serial mode, as in the example batch script below:

#!/bin/bash --login
#SBATCH -p serial   # Partition name is required. Serial partition runs on Intel cores
#SBATCH -t 1-5      # Job "wallclock" limit is required. Max permitted is 7 days (7-0)
                    # In this example 1-5 is 1 day and 5 hours

# clean environment and load any of the ansys/fluent modules 
module purge
module load apps/binapps/ansys/2024R2

# define input .def file and output dir paths
INPUT_FILE=~/scratch/path/to/input_file.def
OUTPUT_DIR=~/scratch/path/to/output_dir

# run cfx5solve using default serial mode. See cfx5solve -help for more options
cfx5solve -batch -def $INPUT_FILE -fullname $OUTPUT_DIR
#          |
#          |-> required for batch submissions

Note that the standard slurm-jobID.out file will contain only errors. The solver instead writes its progress to a new file named after OUTPUT_DIR with a .out extension appended (e.g. if OUTPUT_DIR=results it will be named results.out).

To submit, run sbatch jobscript.
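Once the job starts, you can follow the solver's progress in that .out file, for example (the path below matches the placeholder paths in the script above):

tail -f ~/scratch/path/to/output_dir.out   # follow solver progress as it is written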

Ansys CFX in MPI Local parallel mode

Run a parallel CFX job using MPI within a single node:

#!/bin/bash --login
#SBATCH -p multicore   # Partition name is required. This gives you an AMD Genoa (168-core) node
#SBATCH -n 16          # (or --ntasks=) Number of cores (2-168 on AMD), limited to 32 by the ANSYS licence
#SBATCH -t 1-5         # Job "wallclock" limit is required. Max permitted is 7 days (7-0)
                       # In this example 1-5 is 1 day and 5 hours

# clean environment and load any of the ansys/fluent modules 
module purge
module load apps/binapps/ansys/2024R2

# define input .def file and output dir paths
INPUT_FILE=~/scratch/path/to/input_file.def
OUTPUT_DIR=~/scratch/path/to/output_dir

# run cfx5solve in MPI Local Parallel mode. See cfx5solve -help for more options
cfx5solve -batch -def $INPUT_FILE -fullname $OUTPUT_DIR -double -start-method 'Intel MPI Local Parallel' -part $SLURM_NTASKS
#          |                                             |       |                                        |-> number of cores (sim "partitions")
#          |-> required for batch submissions            |       |-> choose Parallel method
#                                                        |-> double-precision Partitioner, Interpolator and Solver
#

To submit, run sbatch jobscript.
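If you need to stop a running simulation cleanly (so that a results file is written) rather than cancelling it with scancel, CFX provides the cfx5stop utility. A minimal sketch is below; the run directory name is a placeholder, as the actual <name>_<run>.dir name depends on your OUTPUT_DIR and run number, and you should check cfx5stop -help to confirm the options on your version:

# Ask the solver to stop cleanly at the end of the current iteration.
# The run directory path below is a placeholder.
cfx5stop -directory ~/scratch/path/to/output_dir_001.dir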

Serial batch job submission (legacy SGE)

Make sure you have your input file available on the CSF. Then write a batch submission script, for example:

#!/bin/bash --login
#$ -cwd

module load apps/binapps/cfx/19.2

cfx5solve -def CombustorEDM.def

Now submit it to the batch system:

qsub scriptname

replacing scriptname with the name of your submission script.
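On the legacy SGE system, the state of your job can be checked with qstat, for example:

qstat              # list pending and running jobs
qstat -u $USER     # restrict the listing to your own jobs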

Parallel batch job submission (legacy SGE)

Make sure you have your input file available on the CSF, in the directory where you wish to run the job. Then create a batch submission script in that directory, for example:

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 4

module load apps/binapps/cfx/19.2

cfx5solve -start-method "$PLATMPI" -def CombustorEDM.def -par-local -partition $NSLOTS

Notes about the script:

  1. -start-method "$PLATMPI" (including the quotes as shown) is important and ensures that the most suitable MPI is used.
  2. -partition $NSLOTS (no quotes needed here) is important to ensure that the number of cores requested is used.
  3. The minimum number of cores for parallel CFX jobs is 2 and the maximum is 4. You may run more than one job at a time if resources are available.

Now submit it to the batch system:

qsub scriptname

replacing scriptname with the name of your submission script.

Errors

The SGE batch error output file (e.g. mycfxjob.e12345) may report the following message several times:

map size mismatch; abort
: File exists

This is common and does not cause problems for running jobs.

Interactive use

Please do not run the GUI on the login node. If you require the GUI, please run it via qrsh.

  • Log into the CSF with X11 enabled (see the example after this list).
  • Make sure you have the modulefile loaded:
module load apps/binapps/cfx/19.2
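For example, X11 forwarding can be enabled when you connect; the hostname below is a placeholder for the CSF login address you normally use:

ssh -X username@csf-login-address   # -X enables X11 forwarding for GUI applications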

Use qrsh to start the GUI on a compute node:

qrsh -l short cfx5

If you get the error ‘Your “qrsh” request could not be scheduled, try again later!’ it means that no interactive resources are currently available. You can try submitting the work as a serial batch job instead.

Further info

Documentation is available via the GUI.

