{"id":764,"date":"2018-11-07T10:55:10","date_gmt":"2018-11-07T10:55:10","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=764"},"modified":"2026-02-18T14:22:00","modified_gmt":"2026-02-18T14:22:00","slug":"cfx","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/cfx\/","title":{"rendered":"CFX"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"https:\/\/www.ansys.com\/products\/fluids\/ansys-cfx\">CFX<\/a> is another general-purpose computational fluid dynamics (CFD) software tool by ANSYS.<\/p>\n<h2>Restrictions on use<\/h2>\n<p>CFX is installed alongside ANSYS Fluent. You will need to be in the <code>fluent<\/code> unix group to access it. Only MACE users may be added to the fluent group.<\/p>\n<h2>Set up procedure<\/h2>\n<p>You must load <em>one<\/em> of the following ANSYS modulefiles to access CFX. Several versions are available:<\/p>\n<pre>\r\nmodule load apps\/binapps\/ansys\/2024R2          # Max job size: 32 cores\r\nmodule load apps\/binapps\/ansys\/2023R1          # Max job size: 32 cores\r\nmodule load apps\/binapps\/ansys\/2021R1          # Max job size: 32 cores\r\n\r\n# Legacy modulefiles. These may not work on CSF3 with Slurm:\r\nmodule load apps\/binapps\/cfx\/19.2\r\nmodule load apps\/binapps\/cfx\/18.1\r\n<\/pre>\n<p>If you wish to compile your own user-defined routines (e.g., a Fortran .F file to be compiled into your simulation), you should also load one of the Intel Compiler modulefiles. For example:<\/p>\n<pre>\r\nmodule load compilers\/intel\/17.0.7\r\n<\/pre>\n<p>See the <a href=\"\/csf3\/software\/compilers\/intel\/\">CSF Intel Compiler page<\/a> for more details of available versions.<\/p>\n<h2>Running the application<\/h2>\n<p>Please do not run CFX on the login node.<\/p>\n<table class=\"hint-wide\">\n<tr>\n<td>CSF2 users should no longer use the <code>fluent-smp.pe<\/code> parallel environment. 
Please see below for how to run the application on CSF3.<\/td>\n<\/tr>\n<\/table>\n<h3>Ansys CFX in serial mode<\/h3>\n<p>The main command to run CFX is <code>cfx5solve<\/code>. By default <code>cfx5solve<\/code> will run a simulation in serial mode, as in the example batch script below:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p serial   # Partition name is required. Serial partition runs on Intel cores\r\n#SBATCH -t 1-5      # Job \"wallclock\" limit is required. Max permitted is 7 days (7-0)\r\n                    # In this example 1-5 is 1 day and 5 hours\r\n\r\n# clean environment and load one of the ansys\/fluent modules\r\nmodule purge\r\nmodule load apps\/binapps\/ansys\/2024R2\r\n\r\n# define input .def file and output dir paths\r\nINPUT_FILE=~\/scratch\/path\/to\/input_file.def\r\nOUTPUT_DIR=~\/scratch\/path\/to\/output_dir\r\n\r\n# run cfx5solve in the default serial mode. See <strong>cfx5solve -help<\/strong> for more options\r\ncfx5solve -batch -def $INPUT_FILE -fullname $OUTPUT_DIR\r\n#          |\r\n#          |-> required for batch submissions\r\n<\/pre>\n<p>The above job will report only errors in the standard slurm-<em>jobID<\/em>.out file. Progress of the simulation is instead written to a new file named after the OUTPUT_DIR with a .out extension appended (e.g. if OUTPUT_DIR=results it will be named <em>results.out<\/em>).<\/p>\n<p>To submit, run <code>sbatch <em>jobscript<\/em><\/code>.<\/p>\n<h3>Ansys CFX in MPI Local parallel mode<\/h3>\n<p>Run a parallel CFX job using MPI on a single node:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore   # Partition name is required. This gives you an AMD Genoa (168-core) node\r\n#SBATCH -n 16          # (or --ntasks=) Number of cores (2--168 on AMD), limited to 32 by the ANSYS licence\r\n#SBATCH -t 1-5         # Job \"wallclock\" limit is required. 
Max permitted is 7 days (7-0)\r\n                       # In this example 1-5 is 1 day and 5 hours\r\n\r\n# clean environment and load one of the ansys\/fluent modules\r\nmodule purge\r\nmodule load apps\/binapps\/ansys\/2024R2\r\n\r\n# define input .def file and output dir paths\r\nINPUT_FILE=~\/scratch\/path\/to\/input_file.def\r\nOUTPUT_DIR=~\/scratch\/path\/to\/output_dir\r\n\r\n# run cfx5solve in MPI local parallel mode. See <strong>cfx5solve -help<\/strong> for more options\r\ncfx5solve -batch -def $INPUT_FILE -fullname $OUTPUT_DIR -double -start-method 'Intel MPI Local Parallel' -part $SLURM_NTASKS\r\n#          |                                             |       |                                        |-> number of cores (sim \"partitions\")\r\n#          |-> required for batch submissions            |       |-> choose parallel method\r\n#                                                        |-> double-precision Partitioner, Interpolator and Solver\r\n#\r\n<\/pre>\n<p>To submit, run <code>sbatch <em>jobscript<\/em><\/code>.<\/p>\n<h3>Serial batch job submission (legacy SGE)<\/h3>\n<p>Make sure you have your input file available on the CSF. Then write a batch submission script, for example:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n\r\nmodule load apps\/binapps\/cfx\/19.2\r\n\r\ncfx5solve -def CombustorEDM.def\r\n<\/pre>\n<p>Now submit it to the batch system:<\/p>\n<pre>\r\nqsub scriptname\r\n<\/pre>\n<p>replacing <code>scriptname<\/code> with the name of your submission script.<\/p>\n<h3>Parallel batch job submission (legacy SGE)<\/h3>\n<p>Make sure you have your input file available on the CSF in the directory in which you wish to run the job. 
Then create a batch submission script in that directory, for example:<\/p>\n<pre>#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe smp.pe 4\r\n\r\nmodule load apps\/binapps\/cfx\/19.2\r\n\r\ncfx5solve -start-method \"$PLATMPI\" -def CombustorEDM.def -par-local -partition $NSLOTS\r\n<\/pre>\n<p>Notes about the script:<\/p>\n<ol>\n<li><code>-start-method \"$PLATMPI\"<\/code> (including the quotes as shown) is important and ensures that the most suitable MPI is used.<\/li>\n<li><code>-partition $NSLOTS<\/code> (no quotes needed here) is important to ensure that the number of cores requested is used.<\/li>\n<li>The minimum number of cores for parallel CFX jobs is 2; the maximum is 4. You may run more than one job at a time if resources are available.<\/li>\n<\/ol>\n<p>Now submit it to the batch system:<\/p>\n<pre>\r\nqsub scriptname\r\n<\/pre>\n<p>replacing <code>scriptname<\/code> with the name of your submission script.<\/p>\n<h3>Errors<\/h3>\n<p>The SGE\/batch error output file, e.g. <code>mycfxjob.e12345<\/code>, may report:<\/p>\n<pre>map size mismatch; abort\r\n: File exists\r\n<\/pre>\n<p>several times. This is common and does not cause problems for running jobs.<\/p>\n<h3>Interactive use<\/h3>\n<p>Please do not run the GUI on the login node. If you require the GUI, please run it via <code>qrsh<\/code>.<\/p>\n<ul>\n<li>Log into the CSF with <a href=\"\/csf3\/getting-started\/connecting\/gui-apps\">X11 enabled<\/a>.<\/li>\n<li>Make sure you have the modulefile loaded:<\/li>\n<\/ul>\n<pre>module load apps\/binapps\/cfx\/19.2<\/pre>\n<p>Use qrsh to start the GUI on a compute node:<\/p>\n<pre>qrsh -l short cfx5<\/pre>\n<p>If you get the error &#8216;Your &#8220;qrsh&#8221; request could not be scheduled, try again later!&#8217; it means that there are no interactive resources available. 
You can try to submit as a serial job instead.<\/p>\n<h2>Further info<\/h2>\n<p>Documentation is available via the GUI.<\/p>\n<ul>\n<li><a href=\"\/csf3\/batch\/qrsh\/\">CSF qrsh documentation<\/a>.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Overview CFX is another general purpose computational fluid dynamics (CFD) software tool by ANSYS. Restrictions on use CFX is installed alongside ANSYS Fluent. You will need to be in the fluent unix group to access it. Only MACE users may be added to the fluent group. Set up procedure You must load one of the following ANSYS modulefiles to access CFX. Several versions are available: module load apps\/binapps\/ansys\/2024R2 # Max job size: 32 cores module.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/cfx\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":7,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-764","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/764","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=764"}],"version-history":[{"count":18,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/764\/revisions"}],"predecessor-version":[{"id":11889,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/764\/revisions\/11889"}],"
up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=764"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}