{"id":743,"date":"2018-11-02T11:27:27","date_gmt":"2018-11-02T11:27:27","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf3\/?page_id=743"},"modified":"2025-12-01T14:23:53","modified_gmt":"2025-12-01T14:23:53","slug":"starccm","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/starccm\/","title":{"rendered":"StarCCM+"},"content":{"rendered":"<h2>Overview<\/h2>\n<p><a href=\"https:\/\/www.plm.automation.siemens.com\/global\/en\/products\/simcenter\/STAR-CCM.html\">StarCCM+<\/a> is a computation continuum mechanics application which can handle problems relating to fluid flow, heat transfer and stress. See below for the available versions on the CSF.<br \/>\n<em><strong>Please note that the CSF Team do not supply copies of STARCM+ for laptops\/desktops, you will need to request that via the<\/strong><\/em> <a href=\"https:\/\/research-it.manchester.ac.uk\/services\/application-support\/\">Research Applications Team<\/a>.<\/p>\n<h2>Restrictions on Use<\/h2>\n<div class=\"hint\">Only users who have been added to the StarCCM group can run the application (run <code>groups<\/code> to see your group memberships). Owing to licence restrictions, only users from the School of MACE can be added to this group. Requests to be added to the StarCCM group should be requested via the <a href=\"\/csf3\/overview\/help\/\">appropriate HPC Help Form<\/a>.<\/div>\n<p>When the job runs there may not be enough licenses for the number of cores you have requested. There is currently a high demand for licenses and they may run out. To check whether there are at least enough licenses for you job you can add the following line to your <strong>jobscript<\/strong>:<\/p>\n<pre>. $STARCCM_HOME\/liccheck.sh\r\n<\/pre>\n<p>(that&#8217;s a full-stop followed by a space at the start of the line &#8211; please copy it carefully.)<\/p>\n<p>If there are <em>not<\/em> enough licenses for your job the job will automatically re-queue in the CSF batch system. If there are enough licenses the job will run StarCCM. If you omit this check your job will simply fail if there are not enough licenses.<\/p>\n<h2>Set Up Procedure<\/h2>\n<p>Once you have been added to the StarCCM+ group, you will be able to access the modulefiles. We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. 
<h2>Set Up Procedure</h2>
<p>Once you have been added to the StarCCM+ group, you will be able to access the modulefiles. We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job <abbr title="Slurm jobs inherit your login environment by default">inherit these settings</abbr>.</p>
<p>Load <em>one</em> of the following modulefiles:</p>
<pre># Choose required precision (double is more accurate but slower)
module load apps/binapps/starccm/20.04-double                   # Also called v2025.06
module load apps/binapps/starccm/20.04-mixed

module load apps/binapps/starccm/19.04-double                   # Also called v2024.06
module load apps/binapps/starccm/19.04-mixed

module load apps/binapps/starccm/19.02-double                   # Also called v2024.02
module load apps/binapps/starccm/19.02-mixed

module load apps/binapps/starccm/18.02-double                   # Also called v2023.02
module load apps/binapps/starccm/18.02-mixed

module load apps/binapps/starccm/17.04-double
module load apps/binapps/starccm/17.04-mixed

module load apps/binapps/starccm/17.02-double                   # Also called v2022.1
module load apps/binapps/starccm/17.02-mixed

module load apps/binapps/starccm/15.04-double                   # Also called v2020.2.1
module load apps/binapps/starccm/15.04-mixed

module load apps/binapps/starccm/15.02-double                   # Also called v2020.1
module load apps/binapps/starccm/15.02-mixed

module load apps/binapps/starccm/14.06-double
module load apps/binapps/starccm/14.06-mixed

module load apps/binapps/starccm/13.04-double
module load apps/binapps/starccm/13.04-mixed
</pre>
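<p>To confirm the modulefile has loaded correctly (a quick sanity check, not part of the official setup steps), you can inspect what it set up:</p>
<pre>module load apps/binapps/starccm/20.04-mixed   # or whichever version you chose
module list           # the starccm modulefile should appear in the list
echo $STARCCM_HOME    # path set by the modulefile (used by the license check above)
which starccm+        # the starccm+ command should now be on your PATH
</pre>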
<h2>Running the Application</h2>
<p>Currently, only batch-mode use of StarCCM+ is supported; no attempt should be made to run the StarCCM+ GUI directly on the CSF (however, see below for the <a href="#clientserver">client-server</a> method). Once you have loaded the modulefile for the version you wish to run, write a jobscript based on one of those below and submit it to the batch system with this command:</p>
<pre>sbatch <em>myjobscript</em>
</pre>
<p>replacing <code><em>myjobscript</em></code> with the name of your file.</p>
<h2>Parallel Jobs</h2>
<p>These are jobs that use a single AMD compute node &#8211; between 2 and 168 cores.</p>
<h3>Jobs of up to 168 cores</h3>
<p>This example describes how to submit an SMP job (one that uses multiple cores in a single compute node):</p>
<pre>#!/bin/bash --login
#SBATCH -p multicore  # (or --partition=) Run on the AMD 168-core nodes
#SBATCH -n 16         # (or --ntasks=) Number of cores to use.
#SBATCH -t 4-0        # Wallclock time limit. 4-0 is 4 days. Max permitted is 7-0.

# We now recommend loading the modulefile in the jobscript. Use your required version
module load apps/binapps/starccm/17.04.008-mixed

# Check for licenses, will requeue the job if not enough are available.
. $STARCCM_HOME/liccheck.sh

starccm+ -batch -pio -mpi openmpi -batchsystem slurm <em>myinput.sim</em>
                  #                                   #
                  #                                   # Replace <em>myinput.sim</em> with
                  #                                   # your input file name.
                  #
                  # Optional: may speed up your job by performing parallel file I/O.
                  # BUT: you MUST be running your job from the "scratch" filesystem
                  # for this to work.
</pre>
<p>If submitting a job with fewer than 168 cores, you should add the following flag to the <code>starccm+</code> command line (e.g., at the end of the line):</p>
<pre>-cpubind off
</pre>
<p>For example:</p>
<pre>starccm+ -batch -mpi openmpi -batchsystem slurm <em>myinput.sim</em> <strong>-cpubind off</strong>
</pre>
<p>If your job uses <em>exactly</em> 168 cores you must <em>not</em> use this flag &#8211; it will slow down your job.</p>
<h2>Large Parallel Jobs &#8211; HPC Pool Users</h2>
<div class="warning">
<p>Only users who are members of a valid HPC Pool project, and who know their HPC Pool project code, can run multi-node jobs in the HPC Pool.</p>
<p>If you are not a member of an HPC Pool project you <em>cannot</em> run in the <code>hpcpool</code> partition.</p>
</div>
<h3>Please check how your job scales before running large parallel jobs!</h3>
<p>Do not assume that using twice as many cores will run your job in half the time! Jobs do not always scale linearly. Users are encouraged to test how their problem scales when considering a new problem or mesh size. This can be done easily using an inbuilt script provided by CD-adapco, invoked with the <code>-benchmark</code> flag. For example:</p>
<pre>#!/bin/bash --login
#SBATCH -p hpcpool        # The "partition" - named hpcpool
#SBATCH -N 4              # (or --nodes=) Minimum is 4, Max is 32. Job uses 32 cores on each node.
#SBATCH -n 128            # (or --ntasks=) TOTAL number of tasks. Max is 1024.
#SBATCH -t 1-0            # Wallclock limit. 1-0 is 1 day. Maximum permitted is 4-0 (4 days).
#SBATCH -A <em>hpc-proj-name</em>  # Use your HPC project code

# We now recommend loading the modulefile in the jobscript. Use your required version
module purge
module load apps/binapps/starccm/17.04.008-mixed

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

starccm+ <strong>-benchmark</strong> -mpi openmpi -batchsystem slurm <em>myinput.sim</em>
             #                                       #
             # Run the automatic benchmarks          #
             # on your input model                   # Replace <em>myinput.sim</em> with
                                                     # your input file name.
</pre>
<p>This will run a few iterations on 1, 2, 4, 8, 16, &#8230; and finally 128 cores, timing each run. The output is a <code>.html</code> file which can be opened in a web browser. This file shows how your problem scales, which you can use to decide how many cores to request. The parallel efficiency can be computed from the resulting table as speedup / number of workers. Once the efficiency drops below ~85% you should consider using fewer cores, which may actually increase the overall speed of your computations.</p>
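<p>For example (illustrative numbers only): if the benchmark table reported a speedup of 13.6 when using 16 workers, the parallel efficiency would be 13.6 / 16 = 0.85, i.e. 85% &#8211; right at the threshold where adding more cores stops paying off.</p>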
<h3>Multi-node jobs of 128 cores or more</h3>
<p>This is a multi-node example using four HPC Pool 32-core nodes (hence 128 cores in total). You can specify more nodes by using multiples of 32 for the number of cores (e.g., 256 for eight nodes).</p>
<pre>#!/bin/bash --login
#SBATCH -p hpcpool        # The "partition" - named hpcpool
#SBATCH -N 4              # (or --nodes=) Minimum is 4, Max is 32. Job uses 32 cores on each node.
#SBATCH -n 128            # (or --ntasks=) TOTAL number of tasks. Max is 1024.
#SBATCH -t 1-0            # Wallclock limit. 1-0 is 1 day. Maximum permitted is 4-0 (4 days).
#SBATCH -A <em>hpc-proj-name</em>  # Use your HPC project code

# We now recommend loading the modulefile in the jobscript. Use your required version
module purge
module load apps/binapps/starccm/17.04.008-mixed

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

starccm+ -batch -pio -mpi openmpi -batchsystem slurm <em>myinput.sim</em>
                                                        #
                                                        # Replace <em>myinput.sim</em> with
                                                        # your input file name.
</pre>
<p>Note that some users have found IntelMPI faster on the HPC Pool. This has been tested in v18 (so should be OK in newer versions):</p>
<pre># Use Intel MPI
starccm+ -batch -pio -mpi intel -bs slurm <em>myinput.sim</em>
</pre>
<p>However, larger jobs (320 cores and up) may not run with IntelMPI. Please try openmpi first.</p>
<h2>Force a Checkpoint and optionally Abort</h2>
<p>Checkpointing saves the current state of your simulation to file so that you can run the job again from the current state rather than from the beginning of the simulation. This is needed if your simulation is going to run for longer than the maximum runtime permitted (7 days on the CSF).</p>
<p>When asked to checkpoint, StarCCM+ will write out a new <code>.sim</code> file with <code>@<em>iteration</em></code> in the name to indicate the iteration number at which the checkpoint was made. For example <code>myinput@25000.sim</code>. You can then use this as the input <code>.sim</code> file for another job on the CSF.</p>
<h3>Manually</h3>
<p>To force a checkpoint manually, leaving your batch job to carry on running the simulation after the checkpoint file has been written, run the following command on the login node in the directory from which you submitted the job:</p>
<pre>touch CHECKPOINT
</pre>
<p>StarCCM+ checks for this file after every iteration. Once it sees the file it will save the current state of the simulation, rename the <code>CHECKPOINT</code> file to <code>CHECKPOINT~</code> (so that it doesn't keep checkpointing), then carry on with the simulation. You can run <code>touch CHECKPOINT</code> again at some time in the future to generate a new checkpoint file.</p>
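<p>To see which checkpoint files a job has written so far, newest first (a quick check, assuming the <code>myinput</code> naming used above), run the following in the job's directory:</p>
<pre>ls -t myinput@*.sim
</pre>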
<p>If you wish to checkpoint and then terminate your simulation (which will end your CSF job), run the following on the login node in the directory where your simulation is running:</p>
<pre>touch ABORT
</pre>
<p>Your batch job will then terminate after it has written the checkpoint file.</p>
<h3>Automatically in your jobscript</h3>
<p>It is possible to automatically checkpoint your job near the end of the job's runtime and have it re-queue itself on the CSF using the jobscript shown below.</p>
<p>In this example we run a simulation file named <code>myinput.sim</code>. When the job re-queues after a checkpoint, it will run <code>myinput@<em>nnnnn</em>.sim</code> where <code><em>nnnnn</em></code> is the iteration number at which the checkpoint was written. The script will automatically use the most recent checkpoint file. Eventually, the simulation will converge and StarCCM will exit normally. When this happens no further checkpoints are made and the job will not re-queue.</p>
<p>Note, it is important to change the <code>SAVETIME</code> setting to the duration you want StarCCM to run for before it checkpoints, aborts and requeues the job. Given that a checkpoint occurs at the end of an iteration, you need to allow enough time for the current iteration to finish and for the checkpoint file to be written. In the example below we run on the AMD <code>multicore</code> nodes, which have a maximum runtime of 7 days. Hence we set the checkpoint time to 6 days, 23 hours and 50 minutes. This gives StarCCM 10 minutes to finish the current iteration and write the checkpoint file. <strong>Edit this setting as required.</strong></p>
<p>Note that the script below does not delete any checkpoint files. If your simulation runs for a long time it may checkpoint a number of times and leave you with many large checkpoint files. You can periodically delete all but the most recent of these files.</p>
<pre>#!/bin/bash --login
#SBATCH -p multicore  # (or --partition=) Run on the AMD 168-core nodes
#SBATCH -n 16         # (or --ntasks=) Number of cores to use.
#SBATCH -t 7-0        # Wallclock time limit. 7-0 (7 days) is the max permitted.
                      # It must be longer than SAVETIME below.

# We now recommend loading the modulefile in the jobscript. Use your required version
module purge
module load apps/binapps/starccm/17.04.008-mixed

# ------ Edit these settings (carefully) -------

# How long the job should run for before we checkpoint
# and abort. Check the PE max runtime on the CSF webpages.
# dd:hh:mm:ss

SAVETIME=06:23:50:00

# Name of simulation file - <strong>do not</strong> add .sim to the end
# of the name. EG: Will use myinput.sim on first run or the latest
# checkpoint file myinput@<em>nnnnn</em>.sim if available.

SIMFILE=myinput

# --------------------------------------------
# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

# Find the newest simulation file (original or checkpoint)
SIM=$(ls -t ${SIMFILE}*.sim | head -1)

if [ -n "$SIM" ]; then
  if [[ "$SIM" =~ @[0-9] ]]; then
    echo "Using checkpoint file $SIM"
  else
    echo "Using original simulation file $SIM"
  fi
else
  echo "Failed to find any .sim files. Job will exit!"
  exit 1
fi

# Add a timer to create the checkpoint file (the <strong>&amp;</strong> is important!)
(sleep $(echo $SAVETIME | awk -F: '{printf "%dd %dh %dm %ds",$1,$2,$3,$4}'); echo "ABORT..."; touch ABORT) <strong>&amp;</strong>
SLEEP_PID=$!

# Run starccm as usual (see earlier on this webpage for more details)
starccm+ -batch -mpi openmpi -batchsystem slurm $SIM -np $SLURM_NTASKS

# Tidy up the sleep timer
pkill -P $SLEEP_PID

# Automatically resubmit the job if we reached the checkpoint time?
STATUS=0
if [ -f ABORT ]; then
  CHKFILE=$(ls -t ${SIMFILE}@*.sim | head -1)
  if [ -f "$CHKFILE" ]; then
    echo "Job checkpointed at $(date) - wrote $CHKFILE - job will be requeued automatically"
    STATUS=99
  else
    echo "No checkpoint file found (this is probably an error!) Job will not be requeued."
  fi
fi
rm -f ABORT
exit $STATUS
</pre>
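<p>The <code>awk</code> line in the script above just converts the <code>dd:hh:mm:ss</code> value into a list of durations that GNU <code>sleep</code> understands (it sums multiple arguments). You can preview the conversion on the login node:</p>
<pre>echo 06:23:50:00 | awk -F: '{printf "%dd %dh %dm %ds",$1,$2,$3,$4}'
# prints: 6d 23h 50m 0s
</pre>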
<p><a name="clientserver"></a></p>
<h2>Client / Server Usage</h2>
<p>The StarCCM+ GUI running on a campus desktop PC can be connected to a batch simulation running on the CSF. This allows the GUI to display the current state of the simulation (for example, you can see graphs showing how particular variables are converging).</p>
<p>Note that with the method below the CSF job, once it is running, does NOT automatically start the simulation. Instead StarCCM+ will wait for you to connect the GUI to the job. The CSF job is nevertheless running and will be consuming its available runtime (max 7 days on the CSF).</p>
<p>Please follow the instructions below:</p>
<ol class="gaplist">
<li>Open two terminal windows on your PC. For example, two MobaXterm windows (use the + button above the black command window to open a second command window in a new <em>tab</em>) or run MobaXterm twice. On Mac or Linux, open two <em>Terminal</em> applications.</li>
<li>In the first command window, log in to the CSF as normal. Then write your StarCCM+ batch job. This should be familiar. However, there is a small change to one of the flags on the <code>starccm+</code> command line, as shown in this example:
<pre>#!/bin/bash --login
#SBATCH -p multicore  # (or --partition=) Run on the AMD 168-core nodes
#SBATCH -n 16         # (or --ntasks=) Number of cores to use.
#SBATCH -t 4-0        # Wallclock time limit. 4-0 is 4 days. Max permitted is 7-0.

# We now recommend loading the modulefile in the jobscript. Use your required version
module purge
module load apps/binapps/starccm/17.04.008-mixed

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

# Run starccm, but in <em>server</em> mode instead of <em>batch</em> mode
starccm+ <strong>-server</strong> -load <em>myinput.sim</em> -mpi openmpi -batchsystem slurm -np $SLURM_NTASKS
           #            #
           #            # Replace <em>myinput.sim</em> with your own input file.
           #
           # Now we use -server instead of -batch (as used previously)

# If a different user will be connecting to the server, i.e., usernames differ between the
# client and server, add the "-collab" option, e.g.
# starccm+ -server <strong>-collab</strong> -load <em>myinput.sim</em> -mpi openmpi -batchsystem slurm -np $SLURM_NTASKS
</pre>
</li>
<li>Submit the job using <code>sbatch <em>myjobscript</em></code> as usual.</li>
<li>Wait for the above job to run. When it does you will see a file named <code>slurm-<em>12345</em>.out</code> where <code><em>12345</em></code> will be unique to your job. Have a look in this file:
<pre>cat slurm-<em>12345</em>.out
</pre>
<p>At the end of the file will be a message informing you where the job is running and on which <em>port</em> the server is listening (a quick way to find this line is shown after this list):</p>
<pre>Server::start -host <strong><em>node770</em></strong>.pri.csf3.alces.network:<strong><em>47827</em></strong>
                        #                           #
                        #                           # The port number <em>may</em> be different
                        #                           # but it is often 47827 (the default).
                        #
                        # The node name will likely be different.
                        # Make a note of <em>your</em> node name.
</pre>
</li>
<li>Now, in the second terminal window on <em>your PC</em> (<strong>NOT</strong> the CSF), log in to the CSF again with the following command:
<pre>ssh -L <strong><em>47827</em></strong>:<strong><em>node770</em></strong>:<strong><em>47827</em></strong> <strong><em>mxyzabc1</em></strong>@csf3.itservices.manchester.ac.uk
         #       #     #       #
         #       #     #       # Use your own username here
         #       #     #
         #       #     # Use the port number reported earlier
         #       #
         #       # Use the node name reported earlier
         #
         # Use the port number reported earlier (this is the local port your
         # GUI will connect to - keeping it the same as the server port is simplest)
</pre>
<p>Enter your CSF password when asked. You must leave this window logged in at all times while using the StarCCM+ GUI on your PC. The GUI will be communicating with the CSF through this login (an SSH <em>tunnel</em>). You do not need to type any commands into this login window.</p></li>
<li>Now start the StarCCM+ GUI on your desktop PC. For example, on Windows, do this via the <em>Start Menu</em>. This will display the main user interface. In the GUI:
<ol>
<li>Select the <em>File</em> menu then <em>Connect to server&#8230;</em></li>
<li>In the window that pops up set:
<pre>Host: localhost
Port: 47827 (or whatever number you got from above).
</pre>
<p>Then hit OK. The GUI will connect to the job on the CSF.</p></li>
<li>If running the StarCCM+ GUI on a Linux desktop, you can connect to the server using:
<pre>starccm+ -host localhost:<strong>47827</strong> # Change the port number to match the one given above
</pre>
</li>
<li>You can now run (start) the simulation by going to the <em>Solution</em> menu then <em>Run</em> (or simply press <code>CTRL+R</code> in the main window).</li>
<li>If you just want to look at the mesh, open the <em>Scenes</em> node in the tree on the left, then right-click on a <em>geometry</em> node and select <em>Open</em>. This will display the 3D geometry in the main viewer window.</li>
</ol>
</li>
<li>You can disconnect the GUI from the CSF job using the <em>File</em> menu then <em>Disconnect from server</em>. This will leave the simulation running on the CSF but it won't update in the GUI. You can close the StarCCM GUI at this point.</li>
</ol>
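<p>A quick way to pull the node name and port out of the job output file (a convenience, assuming the <code>Server::start</code> line shown in step 4 above):</p>
<pre>grep Server::start slurm-<em>12345</em>.out
</pre>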
<h2>Co-Simulation with Abaqus</h2>
<p>It is possible to have STAR-CCM+ perform some calculations with Abaqus, exchanging data between the two applications. For example, in mechanical co-simulation STAR-CCM+ passes traction loads to Abaqus (pressure + wall shear stress), and Abaqus passes displacements to STAR-CCM+. In Abaqus, the traction loads are applied to the surface of the solid structure. In STAR-CCM+, the displacements are used as an input to the mesh morpher. Data is exchanged via the Co-Simulation module of STAR-CCM+.</p>
<p>You will need to set up the co-simulation in your STAR-CCM+ input file and also have an Abaqus input file available. You should also ensure the input files, when run in their respective applications, converge to solutions, otherwise the co-simulation will not converge.</p>
<p>More information is available in the STAR-CCM+ user guide, available on the login node by running the following command after you've loaded the starccm modulefile:</p>
<pre>evince $STARCCM_UG
</pre>
<h3>Example 32-core co-simulation job</h3>
<p>In this example we will run a single-node 32-core job with 24 cores used by starccm and 8 cores used by Abaqus.</p>
<p>Create a directory structure for your co-simulation:</p>
<pre>cd ~/scratch
mkdir co-sim # Change the name as required
cd co-sim
mkdir abaqus starccm # Two directories to hold the input and output files from each app
</pre>
<p>Now copy your input files to the respective directories. For example:</p>
<pre>cp ~/abq_cosim.inp ~/scratch/co-sim/abaqus
cp ~/ccm_cosim.sim ~/scratch/co-sim/starccm
</pre>
<p>Now make some changes to the STAR-CCM+ input file to enable co-simulation. You can do this on your local workstation if you prefer, but it is useful to be able to do it on the CSF when you want to change settings before submitting a job &#8211; you will avoid transferring the input file back and forth between your workstation and the CSF:</p>
<pre># Load the required version of starccm, for example:
module purge
module load apps/binapps/starccm/17.04.008

# Start an interactive job to run the starccm GUI:
cd ~/scratch/co-sim/starccm
srun -p interactive -t 0-1 --pty starccm+ ccm_cosim.sim
</pre>
<p>When the GUI starts, open the <code>Co-Simulations</code> node in the tree viewer, expand <code>Link1</code>, then look for the following attributes and set their values as follows:</p>
<pre>Co-Simulation Type = Abaqus Co-Simulation # Choose from the drop-down menu
...
### Note there are several simulation settings (e.g., ramping parameters) that control
### the simulation. You will need to set these but they are beyond the scope of this
### web page. Please refer to the starccm user guide.
...
Abaqus Execution
    Current Job Name = <em>my-abq-cosim</em>       # Any name, used to name abaqus output files
    Input file = ../abaqus/abq_cosim.inp  # See directory structure created above
    Executable name = abq2018             # The Abaqus command used on the CSF (choose version)
    Number of CPUs = 8                    # This MUST match the jobscript (see below)
    Remote shell = ssh                    # The default - leave set to this
</pre>
<p>Save the input file and exit the STAR-CCM+ GUI.</p>
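<p>Before submitting, it is worth checking that the relative <code>Input file</code> path resolves from the starccm directory (a quick sanity check, not part of the official instructions):</p>
<pre>cd ~/scratch/co-sim/starccm
ls ../abaqus/abq_cosim.inp   # should list the Abaqus input file set in the GUI above
</pre>
<p>Also note that the core split must add up: 24 starccm cores + 8 Abaqus cores = 32 cores, matching the <code>#SBATCH -n 32</code> request in the jobscript below.</p>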
<p>Now create a jobscript to load the starccm and abaqus modulefiles and run starccm with the correct number of cores:</p>
<pre>cd ~/scratch/co-sim/starccm
gedit cosim.sbatch
</pre>
<p>The jobscript should contain the following:</p>
<pre>#!/bin/bash --login
#SBATCH -p multicore  # (or --partition=) Run on the AMD 168-core nodes
#SBATCH -n 32         # (or --ntasks=) Number of cores to use.
#SBATCH -t 4-0        # Wallclock time limit. 4-0 is 4 days. Max permitted is 7-0.

## Load the modulefiles in the jobscript so we always know which version we used
module purge
module load apps/binapps/starccm/17.04.008
module load apps/binapps/abaqus/2018

## We will use 24 (out of 32) cores for starccm and 8 (out of 32) cores for abaqus.
## Manually set the special SLURM_NTASKS variable to these numbers so the license check
## scripts test for the correct number of licenses. If there are not enough licenses
## then the job will requeue.
export SLURM_NTASKS=8
. $ABAQUS_HOME/liccheck.sh

## NOTE: We leave SLURM_NTASKS set to the number needed for starccm after this license check
export SLURM_NTASKS=24
. $STARCCM_HOME/liccheck.sh

## Now run starccm with 24 cores. It runs the 'abq2018' command with 8 cores (set in the input file)
starccm+ -batch ccm_cosim.sim -mpi openmpi -batchsystem slurm -np $SLURM_NTASKS
</pre>
<p>Submit the job using</p>
<pre>sbatch cosim.sbatch</pre>
<p>When the job runs you will see output files written to the <code>~/scratch/co-sim/abaqus</code> and <code>~/scratch/co-sim/starccm</code> directories. For example, to see what is happening in each simulation:</p>
<pre>cd ~/scratch/co-sim/abaqus
tail -f <em>my-abq-cosim</em>.msg # The abaqus output file name was set in the starccm input file above
  #
  # Press CTRL+C to exit out of the 'tail' command

cd ~/scratch/co-sim/starccm
tail -f slurm-<em>123456</em>.out # Replace <em>123456</em> with the job id number of your starccm job
  #
  # Press CTRL+C to exit out of the 'tail' command
</pre>
<p>The simulation should run until it is converged.</p>
<h2>Further Information</h2>
<p>Further information on StarCCM+ and other CFD applications may be found by <a href="http://cfd.mace.manchester.ac.uk/twiki/bin/view/Forum/ForumCFD">visiting the MACE CFD Forum</a>.</p>
<p>The STAR-CCM+ user guide is available on the CSF using the following command <em>after</em> you have loaded the starccm modulefile:</p>
<pre>evince $STARCCM_UG
</pre>
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/software\/applications\/starccm\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":7,"featured_media":0,"parent":86,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-743","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/743","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/comments?post=743"}],"version-history":[{"count":23,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/743\/revisions"}],"predecessor-version":[{"id":11441,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/743\/revisions\/11441"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/pages\/86"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf3\/wp-json\/wp\/v2\/media?parent=743"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}