{"id":5985,"date":"2022-04-12T12:40:47","date_gmt":"2022-04-12T11:40:47","guid":{"rendered":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=5985"},"modified":"2025-11-10T17:43:43","modified_gmt":"2025-11-10T17:43:43","slug":"ansys-mechanical","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/ansys-mechanical\/","title":{"rendered":"Ansys Mechanical"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>Ansys Mechanical can perform a variety of engineering simulations, including stress, thermal, vibration, thermo-electric, and magnetostatic simulations. The program has many finite-element analysis capabilities, ranging from a simple, linear, static analysis to a complex, nonlinear, transient dynamic analysis.<\/p>\n<p>Versions 19.2, 19.5, 2021R1, 2023R1 and R2024R2 is installed.<\/p>\n<div class=\"hint\"><strong>Please note:<\/strong> when requesting help with, or access to, this application, please use the name &#8220;Ansys Mechanical&#8221; and not simply &#8220;Ansys.&#8221; There are other Ansys products installed on the CSF (see <a href=\"..\/fluent\">Ansys Fluent<\/a>) so it isn&#8217;t always obvious to the sysadmins which product you mean if you simply request &#8220;Ansys.&#8221;<\/div>\n<h2>Restrictions on Use<\/h2>\n<p>Only users who have been added to the <strong>Fluent group<\/strong> can run the application (yes, the Fluent group due to the way all Ansys products are installed.) Owing to licence restrictions, only users from the School of MACE and one specific CEAS Research Group can be added to this group. Requests to be added to the Fluent group should be sent to us via our <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/overview\/help\/\">Connect Portal Form<\/a>.<\/p>\n<p>Ansys Mechanical jobs must not be run on the login node. 
If you need to run an interactive job please use <a href=\"\/csf3\/batch\/qrsh\/\">qrsh<\/a> (see the linked page for details).<\/p>\n<h2>Set Up Procedure<\/h2>\n<p>Once you have been added to the Fluent group, you will be able to access the executables by using <em>one<\/em> of the following module commands:<\/p>\n<pre>\r\nmodule load apps\/binapps\/ansys\/2024R2       # Command to run is \"<strong>ansys242<\/strong>\"\r\nmodule load apps\/binapps\/ansys\/2023R1       # Command to run is \"<strong>ansys231<\/strong>\"\r\nmodule load apps\/binapps\/ansys\/2021R1       # Command to run is \"<strong>ansys211<\/strong>\"\r\nmodule load apps\/binapps\/ansys\/19.5         # Command to run is \"<strong>ansys195<\/strong>\"\r\nmodule load apps\/binapps\/ansys\/19.2         # Command to run is \"<strong>ansys192<\/strong>\"\r\n<\/pre>\n<p>We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job <abbr title=\"add '#$ -V' to your jobscript\">inherit these settings<\/abbr>.<\/p>\n<h2>Running the application<\/h2>\n<p>Please do not run Ansys Mechanical on the login node. Jobs should be submitted to the compute nodes via batch.<\/p>\n<p>The software can be run as a single-node multicore application, a single-node distributed MPI application or a multi-node distributed MPI application. Please include all of the steps shown in the example jobscripts below so that your jobs run correctly.<\/p>\n<p>Note that the parallel method used to run the application may restrict which parallel solvers or other features can be used. 
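<\/p>\n<p>The two parallel methods correspond to different flags on the solver command line. As a side-by-side sketch (using the 2021R1 command name from the list above; this just summarises the full jobscripts given below, with other arguments elided):<\/p>\n<pre>\r\n# Shared-memory parallel (single compute node):\r\nansys211 -smp -np $NSLOTS ...\r\n\r\n# Distributed MPI (run \"source setup_ansys\" first):\r\nansys211 -mpi openmpi -mpifile $HOSTS_FILE ...\r\n<\/pre>\n<p>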
Please consult the Ansys documentation about this &#8211; see the end of this page for how to access the Ansys online documentation.<\/p>\n<h3>Single-node 2-168 cores &#8211; shared-memory<\/h3>\n<p>Note that the default parallel mode is distributed memory (MPI) mode so we must use the <code>-smp<\/code> flag on the Ansys command-line to indicate we are running in shared-memory (single compute-node) parallel mode. Create a jobscript similar to the following:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore     # (or --partition=) AMD Genoa nodes\r\n#SBATCH -n 168           # (or --ntasks=) 2-168 cores permitted in multicore\r\n#SBATCH -t 1-0           # Wallclock time limit, 1-0 is 1 day (max permitted is 7 days: 7-0)\r\n\r\nmodule load apps\/binapps\/ansys\/<strong>2021R1<\/strong>\r\n\r\n# $SLURM_NTASKS is set automatically to the number of cores requested above\r\n<strong>ansys211<\/strong> -smp -np $SLURM_NTASKS -b -i <em>mysim<\/em>.inp -o <em>mysim<\/em>.out -j <em>mysim<\/em>\r\n   #                                                            #\r\n   #                                                            # Job Name (used for per-process output\r\n   #                                                            # files such as .rst, .err, .db)\r\n   #\r\n   # See the \"<strong>Command to run<\/strong>\" name in the modulefile list above.\r\n<\/pre>\n<p>Submit the job using<\/p>\n<pre class=\"slurm\">sbatch <em>jobscript<\/em><\/pre>\n<p>where <em><code>jobscript<\/code><\/em> is the name of your jobscript file.<\/p>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n\r\n#### Choose ONE of the following PEs\r\n#$ -pe smp.pe 32           # 2-32 cores permitted in smp.pe (Intel nodes)\r\n#### OR\r\n#$ -pe amd.pe 168          # 2-168 cores permitted in amd.pe (AMD Genoa nodes)\r\n\r\nmodule load apps\/binapps\/ansys\/<strong>2021R1<\/strong>\r\n\r\n# $NSLOTS is set automatically to the number of cores requested above\r\n<strong>ansys211<\/strong> -smp -np $NSLOTS -b -i <em>mysim<\/em>.inp -o <em>mysim<\/em>.out -j <em>mysim<\/em>\r\n   #                                                         #\r\n   #                                                         # Job Name (used for per-process output\r\n   #                                                         # files such as .rst, .err, .db)\r\n   #\r\n   # See the \"<strong>Command to run<\/strong>\" name in the modulefile list above.\r\n<\/pre>\n<p>Submit the job using<\/p>\n<pre class=\"sge\">qsub <em>jobscript<\/em><\/pre>\n<p>where <em><code>jobscript<\/code><\/em> is the name of your jobscript file.<\/p>\n<h3>Single-node 2-168 cores &#8211; distributed memory MPI<\/h3>\n<p>Distributed memory mode is usually used for larger multi-node jobs. However, it can also be used for a single-node job. You may wish to use it if a particular solver can only be run in this mode and so isn&#8217;t supported by the shared-memory parallel mode described above. You may also want to compare the timing of both methods, if possible, to see which is faster. Create a jobscript similar to the following:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p multicore     # (or --partition=) AMD Genoa nodes\r\n#SBATCH -n 168           # (or --ntasks=) 2-168 cores permitted in multicore\r\n#SBATCH -t 1-0           # Wallclock time limit, 1-0 is 1 day (max permitted is 7 days: 7-0)\r\n\r\n# Load your required version\r\nmodule load apps\/binapps\/ansys\/2021R1\r\n\r\n# Note: An extra setup step is required for Ansys Mechanical MPI jobs. You <strong>must<\/strong> do this!\r\nsource setup_ansys\r\n\r\nansys211 -mpi openmpi -mpifile $HOSTS_FILE -b -i <em>mysim<\/em>.inp -o <em>mysim<\/em>.out -j <em>mysim<\/em>\r\n                                  # \r\n                                  # This is set by the \"setup_ansys\" script above\r\n<\/pre>\n<p>Submit the job using<\/p>\n<pre class=\"slurm\">sbatch <em>jobscript<\/em><\/pre>\n<p>where <em><code>jobscript<\/code><\/em> is the name of your jobscript file.<\/p>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n\r\n#### Choose ONE of the following PEs\r\n#$ -pe smp.pe 32           # 2-32 cores permitted in smp.pe (Intel nodes)\r\n#### OR\r\n#$ -pe amd.pe 168          # 2-168 cores permitted in amd.pe (AMD Genoa nodes)\r\n\r\n# Load your required version\r\nmodule load apps\/binapps\/ansys\/2021R1\r\n\r\n# Note: An extra setup step is required for Ansys Mechanical MPI jobs. You <strong>must<\/strong> do this!\r\nsource setup_ansys\r\n\r\nansys211 -mpi openmpi -mpifile $HOSTS_FILE -b -i <em>mysim<\/em>.inp -o <em>mysim<\/em>.out -j <em>mysim<\/em>\r\n                                  # \r\n                                  # This is set by the \"setup_ansys\" script above\r\n<\/pre>\n<p>Submit the job using<\/p>\n<pre class=\"sge\">qsub <em>jobscript<\/em><\/pre>\n<p>where <em><code>jobscript<\/code><\/em> is the name of your jobscript file.<\/p>\n<h3>Multi-node HPC Pool jobs, in multiples of 32 cores &#8211; distributed memory MPI<\/h3>\n<p>Distributed memory mode <em>must<\/em> be used for larger multi-node jobs. 
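<\/p>\n<p>The total number of MPI processes to request is the number of nodes multiplied by the 32 cores in each HPC Pool node. A quick sketch of the arithmetic (the node count here is just an example):<\/p>\n<pre>\r\n#SBATCH -N 8      # 8 nodes x 32 cores per node ...\r\n#SBATCH -n 256    # ... gives 256 MPI processes in total\r\n<\/pre>\n<p>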
Create a jobscript similar to the following:<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p hpcpool                # The 32-core Intel HPC Pool nodes\r\n#SBATCH -N 8                      # (or --nodes=) Number of compute nodes\r\n#SBATCH -n 256                    # (or --ntasks=) Total number of MPI processes\r\n#SBATCH -A hpc-<em>projectcode<\/em>        # hpc-<em>projectcode<\/em>: we will issue you with a project code\r\n\r\n# Load your required version\r\nmodule load apps\/binapps\/ansys\/2021R1\r\n\r\n# Note: An extra setup step is required for Ansys Mechanical MPI jobs. You <strong>must<\/strong> do this!\r\nsource setup_ansys\r\n\r\nansys211 -mpi openmpi -mpifile $HOSTS_FILE -b -i <em>mysim<\/em>.inp -o <em>mysim<\/em>.out -j <em>mysim<\/em>\r\n                                  # \r\n                                  # This is set by the \"setup_ansys\" script above\r\n<\/pre>\n<p>Submit the job using<\/p>\n<pre class=\"slurm\">sbatch <em>jobscript<\/em><\/pre>\n<p>where <em><code>jobscript<\/code><\/em> is the name of your jobscript file.<\/p>\n<h3>GPU job with shared memory CPU parallelism<\/h3>\n<p>Everyone has access to the L40s and A100 GPUs on the CSF (Slurm) system.<\/p>\n<p>The GPU does not replace CPU usage in Ansys Mechanical &#8211; computation will be offloaded from the CPUs to the GPU when appropriate. 
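<\/p>\n<p>If you want to confirm which GPUs have been allocated to your job, one optional check is to list them from inside the jobscript before launching the solver:<\/p>\n<pre>\r\nnvidia-smi -L     # lists the GPU devices visible to this job, one per line\r\n<\/pre>\n<p>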
You should also check the Ansys documentation for more details on which solvers can use the GPU and how to get the best performance from those solvers.<\/p>\n<p>The following example uses two GPUs and 16 CPU cores &#8211; the number of GPUs you have access to is dependent on the contributing group in which you run CSF jobs.<\/p>\n<pre class=\"slurm\">\r\n#!\/bin\/bash --login\r\n#SBATCH -p gpuL           # L40s GPUs partition\r\n#SBATCH -G 2              # (or --gpus=) Number of GPUs - can be 1-4 depending on your level of access\r\n#SBATCH -n 16             # Can be up to 12 cores <em>per<\/em> L40s or A100 GPU\r\n#SBATCH -t 1-0            # Wallclock time limit, 1-0 is 1 day (max permitted is 4-0, 4 days)\r\n\r\n# Load your required version\r\nmodule load apps\/binapps\/ansys\/2021R1\r\n\r\n# $SLURM_NTASKS is set automatically to the number of cores requested above.\r\n# $SLURM_GPUS is set automatically to the number of GPUs requested above.\r\nansys211 -acc nvidia -na $SLURM_GPUS -smp -np $SLURM_NTASKS -b -i <em>mysim<\/em>.inp -o <em>mysim<\/em>.out -j <em>mysim<\/em>\r\n                #          #\r\n                #          # -na flag says how many accelerators (GPUs) are to be used\r\n                #\r\n                # -acc flag says which type of accelerator (GPU) is to be used\r\n<\/pre>\n<p>Submit the job using<\/p>\n<pre class=\"slurm\">sbatch <em>jobscript<\/em><\/pre>\n<p>where <em><code>jobscript<\/code><\/em> is the name of your jobscript file.<\/p>\n<pre class=\"sge\">\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -l v100=2              # Number of GPUs - can be 1-4 depending on your level of access\r\n#$ -pe smp.pe 16          # Can be up to 8 cores <em>per<\/em> GPU\r\n\r\n# Load your required version\r\nmodule load apps\/binapps\/ansys\/2021R1\r\n\r\n# $NSLOTS is set automatically to the number of cores requested above.\r\n# $NGPUS is set automatically to the number of GPUs requested above.\r\nansys211 -acc nvidia -na $NGPUS -smp -np $NSLOTS -b -i <em>mysim<\/em>.inp -o <em>mysim<\/em>.out -j <em>mysim<\/em>\r\n                #          #\r\n                #          # -na flag says how many accelerators (GPUs) are to be used\r\n                #\r\n                # -acc flag says which type of accelerator (GPU) is to be used\r\n<\/pre>\n<p>Submit the job using<\/p>\n<pre class=\"sge\">qsub <em>jobscript<\/em><\/pre>\n<p>where <em><code>jobscript<\/code><\/em> is the name of your jobscript file.<\/p>\n<h2>Further Information<\/h2>\n<p>To access the Ansys online help, run the following command on the login node after loading the required modulefile:<\/p>\n<pre>\r\nanshelp\r\n\r\nAttempting to open help page \"<strong>https:\/\/ansyshelp.ansys.com\/account\/Secured?Token=.....\"<\/strong>.\r\nIf this page does not open, you may need to install ...\r\n   #\r\n   # A web browser will NOT be opened on the CSF. Instead, paste the generated URL\r\n   # into your own web browser. The URL is valid for a short period of time.\r\n<\/pre>\n<p>Paste the generated URL (<code>https:\/\/ansyshelp.ansys.com\/account\/Secured?Token=.....<\/code>) into your web browser. Note that you need to do this soon after running the <code>anshelp<\/code> command because the URL will only be valid for a short time. You should NOT need to log in to the Ansys support portal. If you are asked to log in to the Ansys portal, run the above <code>anshelp<\/code> command again to generate a new URL.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview Ansys Mechanical can perform a variety of engineering simulations, including stress, thermal, vibration, thermo-electric, and magnetostatic simulations. The program has many finite-element analysis capabilities, ranging from a simple, linear, static analysis to a complex, nonlinear, transient dynamic analysis. Versions 19.2, 19.5, 2021R1, 2023R1 and 2024R2 are installed. 
Please note: when requesting help with, or access to, this application, please use the name &#8220;Ansys Mechanical&#8221; and not simply &#8220;Ansys.&#8221; There are other Ansys products installed.. <a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/ansys-mechanical\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-5985","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/5985","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=5985"}],"version-history":[{"count":20,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/5985\/revisions"}],"predecessor-version":[{"id":11379,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/5985\/revisions\/11379"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=5985"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}