{"id":298,"date":"2013-04-26T08:20:57","date_gmt":"2013-04-26T08:20:57","guid":{"rendered":"http:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/?page_id=298"},"modified":"2020-02-27T12:55:56","modified_gmt":"2020-02-27T12:55:56","slug":"starccm","status":"publish","type":"page","link":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/starccm\/","title":{"rendered":"StarCCM+"},"content":{"rendered":"<h2>Overview<\/h2>\n<p>StarCCM+ is a computational continuum mechanics application which can handle problems relating to fluid flow, heat transfer and stress. See below for the available versions.<\/p>\n<h2>Restrictions on Use<\/h2>\n<table class=\"warning\">\n<tbody>\n<tr>\n<td>Only users who have been added to the StarCCM group can run the application (run <code>groups<\/code> to see your group memberships). Owing to licence restrictions, only users from the School of MACE can be added to this group. Requests to be added to the StarCCM group should be emailed to<br \/>\n<a href=\"mailto:its-ri-team@manchester.ac.uk\">its-ri-team@manchester.ac.uk<\/a>.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Please note:<\/strong> As of October 2014, all CSF users are encouraged to use a <strong>total of 64 cores for StarCCM jobs<\/strong> initially. This will allow you to determine whether the job will complete using that number of cores and the amount of memory available to those cores. If you need more cores (and memory) you should then increase the number of cores requested in the jobscript (see below). 
Using more cores than you need will increase queueing time and license usage, which may prevent teaching clusters in MACE from running the software during term time.<\/p>\n<p>When the job runs, there may not be enough licenses for the number of cores you have requested. There is currently a high demand for licenses and they may run out. If this happens, your CSF jobs will fail with a license error. To check whether there are at least enough licenses for your job, you can add the following line to your jobscript:<\/p>\n<pre>. $STARCCM_HOME\/liccheck.sh\r\n<\/pre>\n<p>(that&#8217;s a full-stop followed by a space at the start of the line &#8211; please copy it carefully.)<\/p>\n<p>If there are <em>not<\/em> enough licenses for your job, the job will automatically re-queue in the CSF batch system. If there are enough licenses, the job will run StarCCM.<\/p>\n<h2>Set Up Procedure<\/h2>\n<p>Once you have been added to the StarCCM+ group, you will be able to access the executables after issuing <strong>one<\/strong> of the following module commands:<\/p>\n<pre># Choose required precision (double is more accurate but slower)\r\nmodule load apps\/binapps\/starccm\/13.04-double\r\nmodule load apps\/binapps\/starccm\/13.04-mixed\r\nmodule load apps\/binapps\/starccm\/12.04-double\r\nmodule load apps\/binapps\/starccm\/12.04-mixed\r\nmodule load apps\/binapps\/starccm\/12.02-double\r\nmodule load apps\/binapps\/starccm\/12.02-mixed\r\nmodule load apps\/binapps\/starccm\/11.06-double\r\nmodule load apps\/binapps\/starccm\/11.06-mixed\r\nmodule load apps\/binapps\/starccm\/11.02-mixed\r\nmodule load apps\/binapps\/starccm\/10.02-single\r\nmodule load apps\/binapps\/starccm\/10.02-double\r\nmodule load apps\/binapps\/starccm\/9.06-single\r\nmodule load apps\/binapps\/starccm\/9.06-double\r\n\r\n# Double precision only\r\nmodule load apps\/binapps\/starccm\/9.04\r\n\r\n# Single precision only\r\nmodule load apps\/binapps\/starccm\/9.02\r\nmodule load 
apps\/binapps\/starccm\/8.06\r\nmodule load apps\/binapps\/starccm\/8.04\r\nmodule load apps\/binapps\/starccm\/7.06\r\nmodule load apps\/binapps\/starccm\/7.04\r\nmodule load apps\/binapps\/starccm\/5.04\r\n<\/pre>\n<p>Two precision variants of version 9.06 and later have been installed &#8211; reflected in the above modulefile names.<\/p>\n<p>Version 9.04 is double precision only; all older versions are single precision only.<\/p>\n<h2>Running the Application<\/h2>\n<p>Currently, only batch use of StarCCM+ is supported; no attempt should be made to use the StarCCM+ GUI directly on the CSF (however, see below for the <a href=\"#clientserver\">client-server<\/a> method). Once you have loaded the modulefile for the version you wish to run, please write a jobscript based on one of those below and then submit it to the batch system with this command:<\/p>\n<pre>qsub <em>myjobscript<\/em>\r\n<\/pre>\n<p>replacing <code><em>myjobscript<\/em><\/code> with the name of your file.<\/p>\n<h2>Available Parallel Environments for StarCCM+<\/h2>\n<p>When submitting StarCCM+ jobs to the batch system, the following <code>hp-mpi-*.pe<\/code> SGE parallel environments must be used. This ensures proper allocation of StarCCM+ processes to compute nodes. If not specified correctly, you could slow down your own or other users&#8217; jobs.<\/p>\n<table class=\"striped\">\n<tbody>\n<tr>\n<th width=\"30%\">PE Name<\/th>\n<th>Description<\/th>\n<\/tr>\n<tr>\n<td>hp-mpi-smp.pe<\/td>\n<td>Suitable for small parallel jobs, between 2 and 32 cores, using the AMD Magny-Cours nodes with 2GB per core. See <a href=\"#ex32smp\">example<\/a>.<\/td>\n<\/tr>\n<tr>\n<td>hp-mpi-smp-64bd.pe<\/td>\n<td>Suitable for medium parallel jobs, between 2 and 64 cores, using the AMD Bulldozer nodes with 2GB per core. You should use the <code>hp-mpi-smp.pe<\/code> PE above for small jobs initially. 
See <a href=\"#ex64smp\">example<\/a>.<\/td>\n<\/tr>\n<tr>\n<td>hp-mpi-32-ib.pe<\/td>\n<td>Suitable for large parallel jobs, 64 or more cores in multiples of 32, using the AMD Magny-Cours nodes. See <a href=\"#ex32ib\">example<\/a>.<\/td>\n<\/tr>\n<tr>\n<td>hp-mpi-64bd-ib.pe<\/td>\n<td>Suitable for large parallel jobs, 128 or more cores in multiples of 64, using the AMD Bulldozer nodes. See <a href=\"#ex64ib\">example<\/a>.<\/td>\n<\/tr>\n<tr>\n<td>orte-64bd-ib.pe<\/td>\n<td>Suitable for large parallel jobs using an <em>experimental<\/em> method, 128 or more cores in multiples of 64, using the AMD Bulldozer nodes. See <a href=\"#ex64ib2\">example<\/a>.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Please pay attention to the jobscript examples &#8211; some of the flags needed on the <code>starccm+<\/code> command-line change depending on the type of job you are running.<\/p>\n<h2>Small Parallel Jobs<\/h2>\n<p>These are jobs that use a single compute node. If your jobs use fewer than 64 cores, please submit your jobs to the 32-core AMD Magny-Cours nodes initially if possible. Small jobs submitted to the AMD Bulldozer nodes (64-core nodes) may be suspended if there is a queue of much larger jobs waiting for the 64-core nodes, and you will be asked to resubmit the job to the AMD Magny-Cours nodes (32-core nodes).<\/p>\n<p><a name=\"ex32smp\"><\/a><\/p>\n<h3>Magny-Cours &#8211; jobs of up to 32 cores<\/h3>\n<p>This example describes how to submit an SMP job (uses cores in a single compute node) to the AMD Magny-Cours nodes on the CSF.<\/p>\n<pre>#!\/bin\/bash\r\n#$ -pe hp-mpi-smp.pe 32           # 2-32 cores permitted\r\n#$ -cwd                           # Run in current directory\r\n#$ -V                             # Inherit environment settings from modulefile\r\n\r\n# Check for licenses, requeue if not enough available.\r\n. 
$STARCCM_HOME\/liccheck.sh\r\n\r\n# $NSLOTS is automatically set by the batch system to the\r\n# number of slots you specified above on the -pe line.\r\n\r\nstarccm+ -batch <em>myinput.sim<\/em> -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID\r\n                  #\r\n                  #\r\n                  # Replace <em>myinput.sim<\/em> with your input file.\r\n<\/pre>\n<p>If submitting a job with <em>fewer<\/em> than 32 cores in the <code>hp-mpi-smp.pe<\/code> PE then you should add the following flag to the <code>starccm+<\/code> command-line (e.g., at the end of the line):<\/p>\n<pre>-cpubind off\r\n<\/pre>\n<p>For example:<\/p>\n<pre>starccm+ -batch <em>myinput.sim<\/em> -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID <strong>-cpubind off<\/strong>\r\n<\/pre>\n<p>If your job uses <em>exactly<\/em> 32 cores you must <em>not<\/em> use this flag &#8211; it will slow down your job.<\/p>\n<p><a name=\"ex64smp\"><\/a><\/p>\n<h3>Bulldozer &#8211; jobs of up to 64 cores<\/h3>\n<p>This example describes how to submit a job to a single AMD Bulldozer node on the CSF.<\/p>\n<pre>#!\/bin\/bash\r\n#$ -pe hp-mpi-smp-64bd.pe 64        # 2-64 cores permitted\r\n#$ -cwd                             # Run in current directory\r\n#$ -V                               # Inherit environment settings from modulefile\r\n\r\n# Check for licenses, requeue if not enough available.\r\n. 
$STARCCM_HOME\/liccheck.sh\r\n\r\n# $NSLOTS is automatically set by the batch system to the\r\n# number of slots you specified above on the -pe line.\r\n\r\nstarccm+ -batch <em>myinput.sim<\/em> -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID <strong>-mpi intel<\/strong>\r\n                  #                                                                 #\r\n                  #                                                                 # <strong>-mpi intel<\/strong> for versions\r\n                  #                                                                 # higher than 8.06 <strong>except 9.04 !!<\/strong>\r\n                  # Replace <em>myinput.sim<\/em> with your input file.\r\n<\/pre>\n<p>If submitting a job with <em>fewer<\/em> than 64 cores in the <code>hp-mpi-smp-64bd.pe<\/code> PE then you should add the following flag to the <code>starccm+<\/code> command-line (e.g., at the end of the line):<\/p>\n<pre>-cpubind off\r\n<\/pre>\n<p>For example:<\/p>\n<pre>starccm+ -batch <em>myinput.sim<\/em> -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID -mpi intel <strong>-cpubind off<\/strong>\r\n<\/pre>\n<p>If your job uses <em>exactly<\/em> 64 cores you must <em>not<\/em> use this flag &#8211; it will slow it down.<\/p>\n<h2>Please check how your job scales before running large parallel jobs!<\/h2>\n<p>Do not assume that using twice as many cores will run your job in half the time! Jobs do not always scale linearly. Users are encouraged to test how their problem scales when considering a new problem or mesh size. This can be achieved easily using an inbuilt script provided by CD-Adapco, which can be invoked using the <code>-benchmark<\/code> flag. 
For example:<\/p>\n<pre>#!\/bin\/bash\r\n########## Choose <strong>ONE<\/strong> of the following lines (to benchmark the AMD Bulldozer or Magny-Cours nodes)\r\n#$ -pe hp-mpi-64bd-ib.pe 128        # The benchmark will use up to 128 cores (two 64-core nodes)\r\n#$ -pe hp-mpi-32-ib.pe 64           # The benchmark will use up to 64 cores (two 32-core nodes)\r\n########## Choose <strong>ONE<\/strong> of the above lines\r\n#$ -cwd                             # Run in current directory\r\n#$ -V                               # Inherit environment settings from modulefile\r\n\r\n# Check for licenses, requeue if not enough available.\r\n. $STARCCM_HOME\/liccheck.sh\r\n\r\nstarccm+ <strong>-benchmark<\/strong> -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID <em>myinput.sim<\/em>\r\n              #                                                             #\r\n              # Run the automatic benchmarks                                #\r\n              # on your input model                                         # Replace <em>myinput.sim<\/em> with\r\n                                                                            # your input file.\r\n<\/pre>\n<p>This will run a few iterations on 1, 2, 4, 8, 16, 32, 64 and finally 128 cores, timing the runs. The output is a <code>.html<\/code> file which can be opened using a web browser. This file will show how your problem scales, which can be used to decide how many cores to use. The parallel efficiency can be computed as speedup \/ number of workers from the resulting table. 
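<\/p>\n<p>As a quick sketch (the core count and speedup figure below are made-up values, not taken from a real benchmark report), the efficiency for one row of the table can be computed on the login node with a one-line awk command:<\/p>\n<pre># Hypothetical values read from the benchmark report:\r\n# a 64-core run that achieved a speedup of 48.3 over the 1-core run.\r\nCORES=64\r\nSPEEDUP=48.3\r\n\r\n# efficiency = speedup \/ number of workers\r\nawk -v s=$SPEEDUP -v n=$CORES 'BEGIN { printf \"efficiency = %.0f%%\\n\", 100*s\/n }'\r\n<\/pre>\n<p>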
Once efficiency drops below ~85%, users should consider using fewer cores; this may actually increase the speed of your computations. See this <a href=\"\/csf-apps\/mace01\/doubleBend-amd_opteron_processor_6276\/benchmark.html\">example output<\/a> of a benchmark run.<\/p>\n<h2>Large Parallel Jobs<\/h2>\n<p><a name=\"ex32ib\"><\/a><\/p>\n<h3>Magny-Cours &#8211; multi-node jobs of 64 cores or more<\/h3>\n<p>This is a multi-node example using two 32-core Magny-Cours nodes (hence 64 cores in total). You can specify more nodes by using multiples of 32 for the number of cores (e.g., 96 for three 32-core nodes).<\/p>\n<pre>#!\/bin\/bash\r\n#$ -pe hp-mpi-32-ib.pe 64           # 64 or more cores in multiples of 32\r\n#$ -cwd                             # Run in current directory\r\n#$ -V                               # Inherit environment settings from modulefile\r\n\r\n# Check for licenses, requeue if not enough available.\r\n. $STARCCM_HOME\/liccheck.sh\r\n\r\nstarccm+ -batch <em>myinput.sim<\/em> -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID\r\n                  #\r\n                  #\r\n                  # Replace <em>myinput.sim<\/em> with your input file.\r\n<\/pre>\n<p><a name=\"ex64ib\"><\/a><\/p>\n<h3>Bulldozer &#8211; multi-node jobs of 128 cores or more<\/h3>\n<p>This is a multi-node example using two 64-core Bulldozer nodes (hence 128 cores in total). You can specify more nodes by using multiples of 64 for the number of cores (e.g., 192 for three 64-core nodes).<\/p>\n<pre>#!\/bin\/bash\r\n#$ -pe hp-mpi-64bd-ib.pe 128        # 128 cores or more in multiples of 64\r\n#$ -cwd                             # Run in current directory\r\n#$ -V                               # Inherit environment settings from modulefile\r\n\r\n# Check for licenses, requeue if not enough available.\r\n. 
$STARCCM_HOME\/liccheck.sh\r\n\r\nstarccm+ -batch myinput.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID <strong>-mpi intel<\/strong>\r\n                  #                                                                     #\r\n                  #                                                                     # <strong>-mpi intel<\/strong> for versions\r\n                  #                                                                     # 8.06 and above <strong>except 9.04 !!<\/strong>\r\n                  # Replace <em>myinput.sim<\/em> with your input file.\r\n<\/pre>\n<p>Please note that StarCCM+ jobs run on multiple Bulldozer nodes will output errors\/warnings in your output file similar to this:<\/p>\n<pre>node201:87b8:b67519e0: 92 us(92 us):  open_hca: device mlx4_0 not found\r\nnode201:87b8:b67519e0: 724 us(632 us): node201:87c3:e41719e0: 86 us(86 us):  open_hca: device mlx4_0 not found\r\nnode201:87c3:e41719e0: 189 us(103 us):  open_hca: device mlx4_0 not found\r\n open_hca: device mlx4_0 not found\r\nnode204:c94e:6a9d720: 24120 us(24120 us):  open_hca: getaddr_netdev ERROR: No such device. Is ib1 configured?\r\nnode204:c94e:6a9d720: 36487 us(36090 us):  open_hca: device mthca0 not found\r\n<\/pre>\n<p>but your job should run and complete. The job is not able to use the InfiniBand network at full efficiency; investigations as to why are ongoing.<\/p>\n<p><a name=\"ex64ib2\"><\/a><\/p>\n<h3>Experimental Bulldozer &#8211; jobs of 128 cores or more<\/h3>\n<p>If you are having problems running large Bulldozer jobs, try the following alternative method. Note that the jobscript uses a different PE (the standard <code>orte-64bd-ib.pe<\/code> rather than the special <code>hp-mpi-64bd-ib.pe<\/code> PE). 
You <strong>must<\/strong> also set the extra environment variable as indicated in the jobscript below.<\/p>\n<p>Before using this jobscript, please load the <em>extra<\/em> Bulldozer MPI modulefile (normally StarCCM uses its own MPI files but here we are going to use the centrally-installed MPI on the CSF):<\/p>\n<pre># Extra modulefile needed: for StarCCM version 8.x to 11.x\r\nmodule load mpi\/open64-4.5.2\/openmpi\/1.6-ib-amd-bd\r\n\r\n# Extra modulefile needed: for StarCCM version 12.x and later\r\nmodule load mpi\/open64-4.5.2.1\/openmpi\/1.8.3-ib-amd-bd\r\n\r\n# Now load the starccm version you require as usual - for example:\r\nmodule load apps\/binapps\/starccm\/11.06-mixed\r\n<\/pre>\n<p>Then use the following jobscript &#8211; this is different to your normal StarCCM jobscripts, so please copy it carefully (we use 320 cores as an example &#8211; this is a large job!)<\/p>\n<pre>#!\/bin\/bash \r\n#$ -pe orte-64bd-ib.pe 320            # Note: Using the standard orte-64bd-ib.pe, not hp-mpi-64bd-ib.pe\r\n                                      # (still requires 128 cores or more in multiples of 64)\r\n#$ -cwd                               # Run in current directory\r\n#$ -V                                 # Inherit environment settings from modulefile\r\n\r\n# Check for licenses, requeue if not enough available.\r\n. 
$STARCCM_HOME\/liccheck.sh\r\n\r\n### <strong>Extra<\/strong> environment variable needed by StarCCM+\r\nexport OPENMPI_DIR=$MPI_HOME\r\n\r\nstarccm+ -batch -rsh ssh -mpi openmpi -batchsystem sge <em>.\/myinput.sim<\/em>\r\n                                 #                #         #\r\n                                 #                #         # Replace with your input file.\r\n                                 #                #\r\n                                 #                # Extra flag to indicate we are using\r\n                                 #                # the CSF batch system's machinefile.\r\n                                 #\r\n                                 # Extra flag to indicate we are using\r\n                                 # the CSF's central OpenMPI software.\r\n<\/pre>\n<p>The above should also work for smaller job sizes.<\/p>\n<h2>Runtime Limits and Licensing<\/h2>\n<p>Please note the following advice:<\/p>\n<ul>\n<li>February 2017: CSF users may run large jobs (128, 256 or 320 cores) but are requested to first try 64 cores. Larger jobs use more licenses and so they may fail if licenses are unavailable. If large CSF jobs consume too many licenses, the MACE teaching clusters may not be able to run StarCCM, at which point we will reinstate the 64-core limit described above.<\/li>\n<li>All AMD nodes have 2GB of RAM per core.<\/li>\n<li>All AMD nodes have a runtime limit of 4 days.<\/li>\n<\/ul>\n<p>See the section below on checkpointing if your job needs a longer runtime.<\/p>\n<h2>Force a Checkpoint and optionally Abort<\/h2>\n<p>Checkpointing saves the current state of your simulation to file so that you can run the job again from the current state rather than from the beginning of the simulation. 
This is needed if your simulation is going to run for longer than the maximum runtime permitted (usually 4 days on the AMD nodes, 7 days on the Intel nodes in the CSF).<\/p>\n<p>When asked to checkpoint, StarCCM+ will write out a new <code>.sim<\/code> file with <code>@<em>iteration<\/em><\/code> in the name to indicate the iteration number at which the checkpoint file was made. For example, <code>myinput@25000.sim<\/code>. You can then use this as the input <code>.sim<\/code> file for another job on the CSF.<\/p>\n<h3>Manually<\/h3>\n<p>To force a checkpoint manually, leaving your batch job to carry on running the simulation after the checkpoint file has been written, run the following command on the login node in the directory from where you submitted the job:<\/p>\n<pre>touch CHECKPOINT\r\n<\/pre>\n<p>StarCCM+ checks after every iteration for this file. Once it sees the file, it will save the current state of the simulation, rename the <code>CHECKPOINT<\/code> file to <code>CHECKPOINT~<\/code> (so that it doesn&#8217;t keep checkpointing), then carry on with the simulation. You can run <code>touch CHECKPOINT<\/code> again at some time in the future to generate a new checkpoint file.<\/p>\n<p>If you wish to checkpoint and then terminate your simulation (which will end your CSF job), run the following on the login node in the directory where your simulation is running:<\/p>\n<pre>touch ABORT\r\n<\/pre>\n<p>Your batch job will then terminate after it has written the checkpoint file.<\/p>\n<h3>Automatically in your jobscript<\/h3>\n<p>It is possible to automatically checkpoint your job near the end of the job&#8217;s runtime and have it re-queue itself on the CSF using the jobscript as shown below.<\/p>\n<p>In this example we run a simulation file named <code>myinput.sim<\/code>. 
When the job re-queues after a checkpoint, it will run <code>myinput@<em>nnnnn<\/em>.sim<\/code> where <code><em>nnnnn<\/em><\/code> is the iteration number at which the checkpoint was written. The script will automatically use the most recent checkpoint file. Eventually, the simulation will converge and StarCCM will exit normally. When this happens, no further checkpoints are made and the job will not re-queue.<\/p>\n<p>Note that it is important to change the <code>SAVETIME<\/code> setting to be the duration you want StarCCM to run for before it checkpoints, aborts and requeues the job. Given that a checkpoint occurs at the end of an iteration, you need to allow enough time for the current iteration to finish and for the checkpoint file to be written. In the example below we run on the AMD nodes, which have a maximum runtime of 4 days. Hence we set the checkpoint time to be at 3 days, 23 hours and 50 minutes. This gives StarCCM 10 minutes (which is usually plenty of time) to finish the current iteration and write the checkpoint file. Edit this setting as required.<\/p>\n<p>Note that the script below does not delete any checkpoint files. If your simulation runs for a long time, it may checkpoint a number of times and leave you with many large checkpoint files. You can periodically delete all but the most recent of these files.<\/p>\n<pre>#!\/bin\/bash \r\n#$ -cwd\r\n#$ -V\r\n#$ -pe hp-mpi-smp-64bd.pe 64  # This example uses a single 64-core AMD node\r\n\r\n# ------ Edit these settings (carefully) -------\r\n\r\n# How long the job should run for before we checkpoint\r\n# and abort. Check the PE max runtime on the CSF webpages.\r\n# dd:hh:mm:ss\r\n\r\nSAVETIME=03:23:50:00\r\n\r\n# Name of simulation file - <strong>do not<\/strong> add .sim to end\r\n# of name. 
EG: Will use myinput.sim on first run or latest\r\n# checkpoint file myinput@<em>nnnnn<\/em>.sim if available.\r\n\r\nSIMFILE=myinput\r\n\r\n# --------------------------------------------\r\n# Check for licenses, requeue if not enough available.\r\n. $STARCCM_HOME\/liccheck.sh\r\n\r\n# Find newest checkpoint file \r\nSIM=`ls -t ${SIMFILE}*.sim | head -1`\r\n\r\nif [ -n \"$SIM\" ]; then\r\n  ISCHK=`echo $SIM | grep -c '@'`\r\n  if [ $ISCHK -eq 1 ]; then\r\n    echo \"Using checkpoint file $SIM\"\r\n  else \r\n    echo \"Using original simulation file $SIM\"\r\n  fi\r\nelse\r\n  echo \"Failed to find any .sim files. Job will exit!\"\r\n  exit 1\r\nfi\r\n\r\n# Add a timer to create checkpoint file (the <strong>&amp;<\/strong> is important!)\r\n(sleep `echo $SAVETIME | awk -F: '{printf \"%dd %dh %dm %ds\",$1,$2,$3,$4}'`; echo \"ABORT...\"; touch ABORT) <strong>&amp;<\/strong>\r\nSLEEP_PID=$!\r\n\r\n# Run starccm+ as usual for a 64-core job (see earlier in this webpage for more details)\r\nstarccm+ -batch $SIM -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID -mpi intel  \r\n\r\n# Tidy up the sleep timer\r\npkill -P $SLEEP_PID\r\n\r\n# Automatically resubmit the job if we reached the checkpoint time?\r\n# (exit status 99 tells the batch system to requeue the job)\r\nSTATUS=0\r\nif [ -f ABORT ]; then\r\n  CHKFILE=`ls -t ${SIMFILE}@*.sim | head -1`\r\n  if [ -f \"$CHKFILE\" ]; then \r\n    echo \"Job checkpointed at `date` - wrote $CHKFILE - job will be requeued automatically\"\r\n    STATUS=99\r\n  else\r\n    echo \"No checkpoint file found (this is probably an error!) Job will not be requeued.\"\r\n  fi\r\nfi\r\nrm -f ABORT\r\nexit $STATUS \r\n<\/pre>\n<p><a name=\"clientserver\"><\/a><\/p>\n<h2>Client \/ Server Usage<\/h2>\n<p>The StarCCM+ GUI running on a campus desktop PC can be connected to a batch simulation running on the CSF. 
This allows the GUI to display the current state of the simulation (for example you can see graphs showing how particular variables are converging).<\/p>\n<p>Note that the method below will mean that the CSF job, once it is running, does NOT automatically start the simulation. Instead StarCCM+ will wait for you to connect the GUI to the job. But the CSF job is running and will be consuming its available runtime (max 4 days on the CSF).<\/p>\n<p>Please follow the instructions below:<\/p>\n<ol class=\"gaplist\">\n<li>Open two terminal windows on your PC. For example, two MobaXterm windows (use the + button above the black command window to open a second command window in a new <em>tab<\/em>) or run MobaXterm twice. On Mac or Linux, open two <em>Terminal<\/em> applications.<\/li>\n<li>In the first command-window, log in to the CSF as normal. Then write your StarCCM+ batch job. This should be familiar. However, there is a small change to one of the flags on the <code>starccm+<\/code> command-line as shown in this example:\n<pre>#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe hp-mpi-smp-64bd.pe 64        # Single-node (64 core) job to run a StarCCM simulation\r\n\r\n# Load the modulefile for the version you require. For example:\r\nmodule load apps\/binapps\/starccm\/12.02-mixed\r\n\r\n# Check for licenses, requeue if not enough available.\r\n. $STARCCM_HOME\/liccheck.sh\r\n\r\n# Run starccm, but in <em>server<\/em> mode instead of <em>batch<\/em> mode\r\nstarccm+ <strong>-server<\/strong> -load <em>myinput.sim<\/em> -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID -mpi intel\r\n           #               #\r\n           #               # Replace <em>myinput.sim<\/em> with your own input file.\r\n           #\r\n           # Now we use -server instead of -batch (as used previously)\r\n<\/pre>\n<p>Submit the job using <code>qsub <em>myjobscript<\/em><\/code> as usual.<\/li>\n<li>Wait for the above job to run. 
When it does, you will see a file named <code>myjobscript.o<em>12345<\/em><\/code> where <code><em>12345<\/em><\/code> will be unique to your job. Have a look in this file:\n<pre>cat myjobscript.o<em>12345<\/em>\r\n<\/pre>\n<p>At the end of the file will be a message informing you where the job is running and on which <em>port<\/em> the server is listening:<\/p>\n<pre>Server::start -host <strong><em>node209<\/em><\/strong>.prv.csf.compute.estate:<strong><em>47827<\/em><\/strong>\r\n                         #                           #\r\n                         #                           # The port number <em>may<\/em> be different\r\n                         #                           # but it is often 47827 (the ccm default).\r\n                         #\r\n                         # The node name will likely be different.\r\n                         # Make a note of <em>your<\/em> node name.\r\n<\/pre>\n<\/li>\n<li>Now, in the second terminal window on <em>your PC<\/em> (<strong>NOT<\/strong> the CSF) log in to the CSF again with the following command:\n<pre>ssh -L <strong><em>47827<\/em><\/strong>:<strong><em>node209<\/em><\/strong>:<strong><em>47827<\/em><\/strong> <strong><em>mxyzabc1<\/em><\/strong>@csf2.itservices.manchester.ac.uk\r\n         #       #     #       #\r\n         #       #     #       # Use your own username here\r\n         #       #     #\r\n         #       #     # Use the port number reported earlier\r\n         #       #\r\n         #       # Use the node name reported earlier\r\n         #\r\n         # Use the port number reported earlier (the local port)\r\n<\/pre>\n<p>Enter your CSF password when asked. You must leave this window logged in at all times while using the StarCCM+ GUI on your PC. The GUI will be communicating with the CSF through this log-in (<em>tunnel<\/em>). You do not need to type any commands into this login window.<\/li>\n<li>Now start the StarCCM+ GUI on your desktop PC. For example, on Windows, do this via the <em>Start Menu<\/em>. 
This will display the main user interface. In the GUI:\n<ol>\n<li>Select the <em>File<\/em> menu then <em>Connect to server&#8230;<\/em><\/li>\n<li>In the window that pops up set:\n<pre> \r\nHost: localhost\r\nPort: 47827 (or whatever number you got from above).\r\n<\/pre>\n<p>Then hit OK. The GUI will connect to the job on the CSF.<\/li>\n<li>If running the StarCCM+ GUI on a Linux desktop, you can connect to the server using:\n<pre>starccm+ -host localhost:<strong>47827<\/strong>      # Change the port number to match the one given above\r\n<\/pre>\n<\/li>\n<li>You can now run (start) the simulation by going to the <em>Solution<\/em> menu then <em>Run<\/em> (or simply press <code>CTRL+R<\/code> in the main window).<\/li>\n<li>If you just want to look at the mesh, open the <em>Scenes<\/em> node in the tree on the left. Then right-click on a <em>geometry<\/em> node and select <em>Open<\/em>. This will display the 3D geometry in the main viewer window.<\/li>\n<\/ol>\n<\/li>\n<li>You can disconnect the GUI from the CSF job using the <em>File<\/em> menu then <em>Disconnect from server<\/em>. This will leave the simulation running on the CSF but it won&#8217;t update in the GUI. You can close the StarCCM GUI at this point.<\/li>\n<\/ol>\n<h2>Co-Simulation with Abaqus<\/h2>\n<p>It is possible to have STAR-CCM+ perform some calculations with Abaqus, exchanging data between the two applications. For example, in mechanical co-simulation STAR-CCM+ passes traction loads to Abaqus (pressure + wall shear stress), and Abaqus passes displacements to STAR-CCM+. In Abaqus, the traction loads are applied to the surface of the solid structure. In STAR-CCM+, the displacements are used as an input to the mesh morpher. Data is exchanged via the Co-Simulation module of STAR-CCM+.<\/p>\n<p>You will need to set up the co-simulation in your STAR-CCM+ input file and also have available an Abaqus input file. 
You should also ensure the input files, when run in their respective applications, converge to solutions, otherwise the co-simulation will not converge.<\/p>\n<p>More information is available in the STAR-CCM+ user guide, available on the login node by running the following command after you&#8217;ve loaded the starccm modulefile:<\/p>\n<pre>\r\nevince $STARCCM_UG\r\n<\/pre>\n<h3>Example 32-core co-simulation job<\/h3>\n<p>In this example we will run a single-node 32-core job with 24 cores used by starccm and 8 cores used by Abaqus.<\/p>\n<p>Create a directory structure for your co-simulation:<\/p>\n<pre>\r\ncd ~\/scratch\r\nmkdir co-sim             # Change the name as required\r\ncd co-sim\r\nmkdir abaqus starccm     # Two directories to hold the input and output files from each app\r\n<\/pre>\n<p>Now copy your input files to the respective directories. For example:<\/p>\n<pre>\r\ncp ~\/abq_cosim.inp ~\/scratch\/co-sim\/abaqus\r\ncp ~\/ccm_cosim.sim ~\/scratch\/co-sim\/starccm\r\n<\/pre>\n<p>Now make some changes to the STAR-CCM+ input file to enable co-simulation. 
You can do this on your local workstation if you prefer, but it is useful to be able to do this on the CSF when you want to change the settings before submitting a job &#8211; you will avoid transferring the input file back and forth between your workstation and the CSF:<\/p>\n<pre>\r\n# Load the required version of starccm, for example:\r\nmodule load apps\/binapps\/starccm\/13.04-mixed\r\n\r\n# Start an interactive job to run the starccm GUI:\r\ncd ~\/scratch\/co-sim\/starccm\r\nqrsh -l inter -l short -V -cwd starccm+ ccm_cosim.sim\r\n<\/pre>\n<p>When the GUI starts, open the <code>Co-Simulations<\/code> node in the tree viewer, expand <code>Link1<\/code> then look for the following attributes and set their values as follows:<\/p>\n<pre>\r\nCo-Simulation Type = Abaqus Co-Simulation       # Choose from the drop-down menu\r\n...\r\n### Note there are several simulation settings (e.g., ramping parameters) that control\r\n### the simulation. You will need to set these but they are beyond the scope of this\r\n### web-page. 
Please refer to the starccm user guide.\r\n...\r\nAbaqus Execution\r\n    Current Job Name = <em>my-abq-cosim<\/em>             # Can be any name - abaqus output files will have this name\r\n    Input file = ..\/abaqus\/abq_cosim.inp        # See directory structure created above\r\n    Executable name = abq2016                   # This is the Abaqus command used on the CSF (choose your version)\r\n    Number of CPUs = 8                          # This MUST be set correctly for the jobscript (see below)\r\n    Remote shell = ssh                          # The default - leave set to this\r\n<\/pre>\n<p>Save the input file and exit the STAR-CCM+ GUI.<\/p>\n<p>Now create a jobscript to load the starccm and abaqus modulefiles, and run starccm with the correct number of cores:<\/p>\n<pre>\r\ncd ~\/scratch\/co-sim\/starccm\r\ngedit cosim.qsub\r\n<\/pre>\n<p>The jobscript should contain the following:<\/p>\n<pre>\r\n#!\/bin\/bash --login\r\n#$ -cwd\r\n#$ -pe hp-mpi-smp.pe 32       # Single-node 32-core job\r\n\r\n## Load the modulefiles in the jobscript so we always know which version we used\r\nmodule load apps\/binapps\/starccm\/13.04-mixed\r\nmodule load apps\/binapps\/abaqus\/abq\r\n\r\n## We will use 24 (out of 32) cores for starccm and 8 (out of 32) cores for abaqus.\r\n## Manually set the special NSLOTS variable to these numbers so the license check\r\n## scripts test for the correct number of licenses. If there are not enough licenses\r\n## then the job will requeue.\r\nexport NSLOTS=8\r\n. $ABAQUS_HOME\/liccheck.sh\r\n\r\n## NOTE: We will leave NSLOTS set to the number needed for starccm after this license check\r\nexport NSLOTS=24\r\n. $STARCCM_HOME\/liccheck.sh\r\n\r\n## Now run starccm with 24 cores. 
It will run the 'abq2016' command with 8 cores, as set in the input file.\r\nstarccm+ -batch ccm_cosim.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID\r\n<\/pre>\n<p>Submit the job using:<\/p>\n<pre>qsub cosim.qsub<\/pre>\n<p>When the job runs, you will see output files written to the <code>~\/scratch\/co-sim\/abaqus<\/code> and <code>~\/scratch\/co-sim\/starccm<\/code> directories. For example, to see what is happening in each simulation:<\/p>\n<pre>\r\ncd ~\/scratch\/co-sim\/abaqus\r\ntail -f <em>my-abq-cosim<\/em>.msg        # The abaqus output file name was set in the starccm input file above\r\n  #\r\n  # Press CTRL+C to exit out of the 'tail' command\r\n\r\ncd ~\/scratch\/co-sim\/starccm\r\ntail -f cosim.qsub.o<em>123456<\/em>               # Replace <em>123456<\/em> with the job id number of your starccm job\r\n  #\r\n  # Press CTRL+C to exit out of the 'tail' command\r\n<\/pre>\n<p>The simulation should run until it has converged.<\/p>\n<h2>Further Information<\/h2>\n<p>Further information on StarCCM+ and other CFD applications may be found by <a href=\"http:\/\/cfd.mace.manchester.ac.uk\/twiki\/bin\/view\/Forum\/ForumCFD\">visiting the MACE CFD Forum<\/a>.<\/p>\n<p>The STAR-CCM+ user guide is available on the CSF using the following command <em>after<\/em> you have loaded the starccm modulefile:<\/p>\n<pre>\r\nevince $STARCCM_UG\r\n<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Overview StarCCM+ is a computational continuum mechanics application which can handle problems relating to fluid flow, heat transfer and stress. See below for the available versions. Restrictions on Use Only users who have been added to the StarCCM group can run the application (run groups to see your group memberships). Owing to licence restrictions, only users from the School of MACE can be added to this group. Requests to be added to the StarCCM group.. 
<a href=\"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/software\/applications\/starccm\/\">Read more &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":31,"menu_order":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-298","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/298","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/comments?post=298"}],"version-history":[{"count":21,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/298\/revisions"}],"predecessor-version":[{"id":5002,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/298\/revisions\/5002"}],"up":[{"embeddable":true,"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/pages\/31"}],"wp:attachment":[{"href":"https:\/\/ri.itservices.manchester.ac.uk\/csf-apps\/wp-json\/wp\/v2\/media?parent=298"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}