Fluent

Overview

Ansys Fluent is a computational fluid dynamics (CFD) application. Fluent versions 2024R2, 2022R1, 2021R1, 19.5 (also known as 2019 R3), 19.3, 19.2 and 18.1 are available.

Restrictions on Use

Only users who have been added to the Fluent group can run the application. Owing to licence restrictions, only users from the School of MACE and one specific CEAS Research Group can be added to this group. Requests to be added to the Fluent group should be emailed to

its-ri-team@manchester.ac.uk

See below for the maximum job sizes (these are limits imposed by the licence).

Fluent jobs must not be run on the login node. If you need to run an interactive job, please use qrsh as detailed below.

Set Up Procedure

Once you have been added to the Fluent group, you will be able to access the executables by using one of the following module commands:

module load apps/binapps/fluent/2024R2
module load apps/binapps/fluent/2022R1          # Max job size: 32 cores
module load apps/binapps/fluent/2021R1          # Max job size: 32 cores
module load apps/binapps/fluent/19.5            # Max job size: 16 cores
module load apps/binapps/fluent/19.3            # Max job size: 16 cores
module load apps/binapps/fluent/19.2            # Max job size: 16 cores
module load apps/binapps/fluent/18.1            # Max job size: 16 cores

# Custom modulefiles for specific research groups
module load apps/binapps/fluent/2024R2-frangi   # Prof. Alex Frangi's group
module load apps/binapps/fluent/19.5-fonte      # Dr.  Claudio Pereira Da Fonte's group

Required Input Files

To run a Fluent job you will need a Fluent journal file, a Fluent case file and, usually, a data file which the case file will load.

The journal file can be created by asking Fluent to write one while you use the Fluent GUI (which can be run on the CSF or on a desktop PC installation). Alternatively, the journal file can be written by hand. This is common – journal files are often very simple, loading the case and data files and then starting the simulation (we have seen journal files that are only 3 lines long).
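
For example, a minimal hand-written journal file might look like the following. The file names and iteration count are placeholders – change them to suit your own case – and the exact TUI commands can vary between Fluent versions:

; Read the case and data files
/file/read-case-data my_case.cas
; Run the solver for 500 iterations
/solve/iterate 500
; Save the results and exit
/file/write-case-data my_results.cas
exit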

UDF Files

If you need to compile a User Defined Function (UDF) file, this should be done on the CSF – UDF files compiled elsewhere will not usually run on the CSF.

There are two methods. The first method is only recommended when you know there are no mistakes in your UDF source code – e.g., you've tested it elsewhere. You can let Fluent do the compilation automatically when your job runs – it just requires a line in your case (.cas) file similar to:

(udf/compile/files (("libudf" (source "/scratch/username/my_project/my_udf.c") (header))))

This asks Fluent to compile your UDF source file (named my_udf.c, in a directory named my_project in your scratch area – change as required).

However, you DO NOT need to edit your .cas file to add the above line. A better method is to compile the UDF before your job runs, so that Fluent does not need to do it itself. This is also a good way to check that your UDF code actually compiles. If you wait for Fluent to compile it (when your job runs), any mistakes in your code will mean you have to fix the code and then resubmit the job. It is much easier to compile the UDF on the login node before the job runs and correct the code there if needed.

If you wish to check your UDF code compiles on the CSF before submitting a job to the batch system, you can run:

csf_compile_udf simtype1 simtype1_node simtype1_host ...

# Example 1: Compile for a 3ddp (3D double-precision) simulation:
csf_compile_udf 3ddp 3ddp_node 3ddp_host

# Example 2: Compile for a 2d (2D single-precision) simulation:
csf_compile_udf 2d 2d_node 2d_host

# For more info:
csf_compile_udf --help

This script will compile your my_udf.c file into a libudf.so file. It will create a new sub-directory named libudf in the current directory (where you run the script). Each simulation type (2d, 3ddp, …) will have its own libudf.so file in a directory inside the libudf/ sub-directory.

If you compiled the UDF on Windows before transferring the files to the CSF, remove any automatically-generated Windows files from the CSF folder (e.g. udf_names.c, ud_io1.h and any existing libudf/ folder) – these files will cause the compilation on the CSF to fail. You should have only the source files you wrote (e.g., my_udf.c) in your CSF directory. All of the necessary files will then be generated by Fluent when you compile the UDF on the CSF.
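
For example, assuming the file names mentioned above, you could run the following in your project directory on the CSF before compiling:

# Remove the Windows-generated build files (keep the .c source files you wrote!)
rm -f udf_names.c ud_io1.h
rm -rf libudf/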

Running Fluent in Parallel in Batch

Fluent is SGE-aware, so there is no need to write a batch-system jobscript – Fluent can submit the job for you. However, a jobscript gives you a permanent record of how you ran your job, which may be useful if you ever need to rerun something. We recommend you write a jobscript!

First we show the no-jobscript method; then we give the equivalent jobscript.

For the no-jobscript method, run the fluent command on the login node and it will submit a job to the batch system for you. You must tell Fluent that you wish to run your job under SGE and pass it the name of the parallel environment and the number of cores you would like to use. For example, to run a 16-core job:

# Load the modulefile on the login node, for the version you require, eg:
module load apps/binapps/fluent/19.2

# Then run this on the CSF login node to ask fluent to submit a batch job
fluent 3d -g -t16 -ssh -sge -sgepe smp.pe 16 -pethernet -mpi=openmpi -i input.jou
       ^     ^                            ^                             ^
       |     |                            |                             |
       |     +-------------+--------------+                             +--- Journal file to run
       |                   |
       |     The value given to the "-t" option
       |     MUST be the same as the value given
       |     to the "-sgepe smp.pe" option
       |     (in this example, 16 [cores])
       |
       +---- Simulation type. If you have compiled a UDF using the csf_compile_udf
             script then it must have been compiled for this type of simulation.

Alternatively, submit a batch job yourself by putting the following in a file named fluent-batch.sge:

#!/bin/bash --login
#$ -cwd
#$ -pe smp.pe 16

# Load the modulefile for the version you require, eg:
module load apps/binapps/fluent/19.2

# Now run fluent
fluent 3d -g -t$NSLOTS -pethernet -mpi=openmpi -i input.jou
                                                  #
                                                  #
                                                  # Replace input.jou with your input file.

Run the following on the login node to submit the job to the batch system:

qsub fluent-batch.sge

Please check the maximum job size permitted by your chosen version (see the modulefiles above).

The '-sge' argument is not required with Fluent 2021R1 or later.
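
For example, with 2021R1 or later the earlier 16-core login-node command becomes (a sketch – adjust the core count and journal file to suit your job):

module load apps/binapps/fluent/2021R1
fluent 3d -g -t16 -ssh -sgepe smp.pe 16 -pethernet -mpi=openmpi -i input.jou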

Running High Memory Fluent Jobs in Parallel

Please DO NOT use this option unless you genuinely need more than 4GB per core, as the resources available for such work are very limited.

The built-in SGE functions of Fluent are unable to take account of '-l' options passed to it, so please use the method below to submit a Fluent job to the high-memory nodes.

Requesting high memory nodes

Add the following option to your jobscript:

#$ -l mem512    ## for 32GB per core

Alternatively, run the following on the CSF login node to submit a batch job yourself:

qsub -cwd -l mem512 -pe smp.pe 2 -b y fluent 3ddp -g -t2 -ssh -pethernet -mpi=openmpi -i input.jou
     ^                              ^        ^                                           ^
     |                              |        |                                           |
     +--------------+---------------+        +---------------------+---------------------+
                    |                                              |
     Flags given to the qsub command         Flags given to the fluent command
     used to submit the batch job.           that will be run in the batch job.

The -cwd flag is important to ensure your job finds the input file. The above command is equivalent to the following jobscript:

#!/bin/bash --login
#$ -cwd
#$ -l mem512
#$ -pe smp.pe 2
module load apps/binapps/fluent/19.2
# In a jobscript the $NSLOTS variable is automatically set to the number
# given on the '#$ -pe' line (2 in this case). Now run fluent.

fluent 3ddp -g -t$NSLOTS -pethernet -mpi=openmpi -i input.jou

If you use a jobscript, you should then submit it from the login node using

qsub jobscript

where jobscript is the name of your jobscript file.
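
Once the job has been submitted (by either method), you can check on it using the standard SGE commands, for example:

qstat              # List your pending and running jobs
qstat -j 123456    # Show details of a specific job (123456 is a placeholder job ID)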

Running Serial Interactive Jobs via qrsh

Running serial, interactive jobs is tolerated. This is mainly used to set up a simulation, which can then be saved to file and run in batch. You may need to use the GUI to set up a UDF. If you are setting up a parallel UDF, please email its-ri-team@manchester.ac.uk.

You may run Fluent interactively using the short resource (-l short), which has a time limit of 1 hour.

Starting Fluent interactively on short nodes

qrsh -l short

Then, once qrsh has returned a command line on a compute node, load the appropriate environment module and start Fluent:

module load apps/binapps/fluent/19.2          # Load the modulefile for the version you require
fluent &     ## The & ensures you get your command line and the GUI at the same time.

Notes:

  1. Do not attempt to combine the above steps by running, from the login node, a single command such as:
    # Do NOT do this - it will fail!
    qrsh -l short /opt/gridware/apps/binapps/fluent/14.0/ansys_inc/v140/fluent/bin/fluent

    This will result in an incorrect environment for Fluent and the GUI may not operate correctly.

  2. Linux: if the render window section of the GUI does not display your model, try setting the following before running fluent:
    export LIBGL_ALWAYS_INDIRECT=1
  3. Alternatively, run fluent using the x11 graphics driver (useful for version 18.1):
    fluent -driver x11
  4. Windows: if the render window is slow to redraw or flashes, try running fluent using:
    fluent -driver x11

Note: For the GUI to work you must have an X server running on your PC. See the instructions on the using GUI based applications page.
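
For example, having started an X server on your PC, you could connect with X forwarding enabled (the login address below is a placeholder – see the GUI applications page for the correct one):

ssh -X username@<csf-login-address>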

Further Information

Further information on Fluent and other CFD applications may be found by visiting the MACE CFD Forum.

More information about qrsh on the CSF.
