The CSF2 has been replaced by the CSF3 – please use that system! This documentation may be out of date; please read the CSF3 documentation instead.
StarCCM+
Overview
StarCCM+ is a computational continuum mechanics application which can handle problems relating to fluid flow, heat transfer and stress. See below for the available versions.
Restrictions on Use
Only users who have been added to the StarCCM group can run the application (run the groups command to see your group memberships). Owing to licence restrictions, only users from the School of MACE can be added to this group. Requests to be added to the StarCCM group should be emailed to its-ri-team@manchester.ac.uk.
Please note: As of October 2014 all CSF users are encouraged to run StarCCM jobs with a total of 64 cores initially. This will allow you to determine whether the job will complete using that number of cores and the amount of memory available to those cores. If you need more cores (and memory) you should then increase the number of cores requested in the jobscript (see below). Using more cores than you need will increase queueing time and license usage, which may prevent the MACE teaching clusters from running the software during term time.
When the job runs there may not be enough licenses for the number of cores you have requested. There is currently a high demand for licenses and they may run out; if this happens your CSF jobs will fail with a license error. To check whether there are enough licenses for your job you can add the following line to your jobscript:
. $STARCCM_HOME/liccheck.sh
(that’s a full-stop followed by a space at the start of the line – please copy it carefully.)
If there are not enough licenses for your job the job will automatically re-queue in the CSF batch system. If there are enough licenses the job will run StarCCM.
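For reference, the requeue relies on the SGE convention that a job exiting with status 99 is put back in the queue. The following is only a minimal sketch of the kind of check liccheck.sh performs, assuming a FlexLM license server queried with lmutil; the feature name ccmpsuite and the server address 27012@licserver are placeholders, not the CSF's actual settings:

#!/bin/bash
# Hypothetical license check - on the CSF use $STARCCM_HOME/liccheck.sh instead.
FEATURE=ccmpsuite          # Placeholder FlexLM feature name
SERVER=27012@licserver     # Placeholder license server address

# lmstat prints e.g.: "Users of ccmpsuite: (Total of 450 licenses issued; Total of 380 licenses in use)"
read ISSUED INUSE < <(lmutil lmstat -c $SERVER -f $FEATURE | awk '/Total of/ {print $6, $11; exit}')

# $NSLOTS is set by the batch system to the number of cores requested.
if [ $(( ISSUED - INUSE )) -lt "$NSLOTS" ]; then
    echo "Not enough free licenses for $NSLOTS cores - requeueing job."
    exit 99    # SGE requeues a job that exits with status 99
fi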
Set Up Procedure
Once you have been added to the StarCCM+ group, you will be able to access the executables after issuing one of the following module commands:
# Choose required precision (double is more accurate but slower)
module load apps/binapps/starccm/13.04-double
module load apps/binapps/starccm/13.04-mixed
module load apps/binapps/starccm/12.04-double
module load apps/binapps/starccm/12.04-mixed
module load apps/binapps/starccm/12.02-double
module load apps/binapps/starccm/12.02-mixed
module load apps/binapps/starccm/11.06-double
module load apps/binapps/starccm/11.06-mixed
module load apps/binapps/starccm/11.02-mixed
module load apps/binapps/starccm/10.02-single
module load apps/binapps/starccm/10.02-double
module load apps/binapps/starccm/9.06-single
module load apps/binapps/starccm/9.06-double

# Double precision only
module load apps/binapps/starccm/9.04

# Single precision only
module load apps/binapps/starccm/9.02
module load apps/binapps/starccm/8.06
module load apps/binapps/starccm/8.04
module load apps/binapps/starccm/7.06
module load apps/binapps/starccm/7.04
module load apps/binapps/starccm/5.04
Both double precision and single (or mixed) precision versions of 9.06 and later have been installed, as reflected in the above modulefile names. Version 9.04 is double precision only; all other versions without a precision suffix are single precision.
Running the Application
Currently, only batch mode use of StarCCM+ is supported; no attempt should be made to use the StarCCM+ GUI directly on the CSF (however, see below for the client-server method). Once you have loaded the modulefile for the version you wish to run, please write a jobscript based on one of those below and then submit it to the batch system with this command:
qsub myjobscript
replacing myjobscript with the name of your file.
Available Parallel Environments for StarCCM+
When submitting StarCCM+ jobs to the batch system, one of the following SGE parallel environments must be used. This ensures proper allocation of StarCCM+ processes to compute nodes. If not specified correctly you could be slowing down your own or other users' jobs.
PE Name | Description
---|---
hp-mpi-smp.pe | Suitable for small parallel jobs, between 2 and 32 cores, using the AMD Magny-Cours nodes with 2GB per core. See example below.
hp-mpi-smp-64bd.pe | Suitable for medium parallel jobs, between 2 and 64 cores, using the AMD Bulldozer nodes with 2GB per core. You should use the hp-mpi-smp.pe PE above for small jobs initially. See example below.
hp-mpi-32-ib.pe | Suitable for large parallel jobs, 64 or more cores in multiples of 32, using the AMD Magny-Cours nodes. See example below.
hp-mpi-64bd-ib.pe | Suitable for large parallel jobs, 128 or more cores in multiples of 64, using the AMD Bulldozer nodes. See example below.
orte-64bd-ib.pe | Suitable for large parallel jobs using an experimental method, 128 or more cores in multiples of 64, using the AMD Bulldozer nodes. See example below.
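If you are unsure what a PE provides, SGE can display its configuration (allocation rule, slot limits and so on) on the login node; for example:

qconf -sp hp-mpi-smp.pe   # Show the configuration of one parallel environment
qconf -spl                # List the names of all parallel environments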
Please pay attention to the jobscript examples – some of the flags needed on the starccm+ command-line change depending on the type of job you are running.
Small Parallel Jobs
These are jobs that use a single compute node. If your jobs use fewer than 64 cores please submit your jobs to the 32-core AMD Magny-Cours nodes initially if possible. Small jobs submitted to the AMD Bulldozer nodes (64-core nodes) may be suspended if there is a queue of much larger jobs waiting for the 64-core nodes, and you will be asked to resubmit the job to the AMD Magny-Cours nodes (32-core nodes).
Magny-Cours – jobs of up to 32 cores
This example describes how to submit an SMP job (uses cores in a single compute node) to the AMD Magny-Cours nodes on the CSF.
#!/bin/bash
#$ -pe hp-mpi-smp.pe 32    # 2-32 cores permitted
#$ -cwd                    # Run in current directory
#$ -V                      # Inherit environment settings from modulefile

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

# $NSLOTS is automatically set by the batch system to the
# number of slots you specified above on the -pe line.
# Replace myinput.sim with your input file.
starccm+ -batch myinput.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID
If submitting a job with fewer than 32 cores in the hp-mpi-smp.pe PE then you should add the following flag to the starccm+ command-line (e.g., at the end of the line):
-cpubind off
For example:
starccm+ -batch myinput.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID -cpubind off
If your job uses exactly 32 cores you must not use this flag – it will slow down your job.
Bulldozer – jobs of up to 64 cores
This example describes how to submit a job to a single AMD Bulldozer node on the CSF.
#!/bin/bash
#$ -pe hp-mpi-smp-64bd.pe 64   # 2-64 cores permitted
#$ -cwd                        # Run in current directory
#$ -V                          # Inherit environment settings from modulefile

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

# $NSLOTS is automatically set by the batch system to the
# number of slots you specified above on the -pe line.
# Replace myinput.sim with your input file.
# -mpi intel is needed for versions higher than 8.06, except 9.04!
starccm+ -batch myinput.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID -mpi intel
If submitting a job with fewer than 64 cores in the hp-mpi-smp-64bd.pe PE then you should add the following flag to the starccm+ command-line (e.g., at the end of the line):
-cpubind off
For example:
starccm+ -batch myinput.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID -mpi intel -cpubind off
If your job uses exactly 64 cores you must not use this flag – it will slow it down.
Please check how your job scales before running large parallel jobs!
Do not assume that using twice as many cores will run your job in half the time! Jobs do not always scale linearly. Users are encouraged to test how their problem scales when considering a new problem or mesh size. This can be achieved easily using an inbuilt script provided by CD-Adapco, which can be invoked with the -benchmark flag. For example:
#!/bin/bash
########## Choose ONE of the following lines (to benchmark the AMD Bulldozer or Magny-Cours nodes)
#$ -pe hp-mpi-64bd-ib.pe 128   # The benchmark will use up to 128 cores (two 64-core nodes)
#$ -pe hp-mpi-32-ib.pe 64      # The benchmark will use up to 64 cores (two 32-core nodes)
########## Choose ONE of the above lines
#$ -cwd                        # Run in current directory
#$ -V                          # Inherit environment settings from modulefile

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

# Run the automatic benchmarks on your input model.
# Replace myinput.sim with your input file.
starccm+ -benchmark -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID myinput.sim
This will run a few iterations on 1, 2, 4, 8, 16, 32, 64 and finally 128 cores, timing the runs. The output is a .html file which can be opened using a web browser. This file shows how your problem scales, which can be used to decide how many cores to use. The parallel efficiency can be computed from the resulting table as speedup / number of workers. Once efficiency drops below ~85% you should consider using fewer cores, which may actually increase the speed of your computations. Example output of benchmark.
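As a worked example of the efficiency calculation: if the benchmark table reports a speedup of 52 on 64 cores, the parallel efficiency is 52/64 ≈ 81%, below the ~85% guideline. A small sketch to tabulate this from speedup figures read off a benchmark report (the cores/speedup pairs below are made-up illustrative figures – use your own report's values):

cat <<EOF | awk '{printf "%4d cores: speedup %6.1f, efficiency %5.1f%%\n", $1, $2, 100*$2/$1}'
8 7.8
16 15.1
32 28.4
64 52.0
128 88.0
EOF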
Large Parallel Jobs
Magny-Cours – multi-node jobs of 64 cores or more
This is a multi-node example using two 32-core Magny-Cours nodes (hence 64 cores in total). You can specify more nodes by using multiples of 32 for the number of cores (e.g., 96 for three 32-core nodes).
#!/bin/bash
#$ -pe hp-mpi-32-ib.pe 64   # 64 or more cores in multiples of 32
#$ -cwd                     # Run in current directory
#$ -V                       # Inherit environment settings from modulefile

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

# Replace myinput.sim with your input file.
starccm+ -batch myinput.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID
Bulldozer – multi-node jobs of 128 cores or more
This is a multi-node example using two 64-core Bulldozer nodes (hence 128 cores in total). You can specify more nodes by using multiples of 64 for the number of cores (e.g., 192 for three 64-core nodes).
#!/bin/bash
#$ -pe hp-mpi-64bd-ib.pe 128   # 128 cores or more in multiples of 64
#$ -cwd                        # Run in current directory
#$ -V                          # Inherit environment settings from modulefile

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

# Replace myinput.sim with your input file.
# -mpi intel is needed for versions 8.06 and above, except 9.04!
starccm+ -batch myinput.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID -mpi intel
Please note that starccm+ run on multiple Bulldozer nodes will output errors/warnings in your output file similar to this:
node201:87b8:b67519e0: 92 us(92 us): open_hca: device mlx4_0 not found
node201:87b8:b67519e0: 724 us(632 us): open_hca: device mlx4_0 not found
node201:87c3:e41719e0: 86 us(86 us): open_hca: device mlx4_0 not found
node201:87c3:e41719e0: 189 us(103 us): open_hca: device mlx4_0 not found
node204:c94e:6a9d720: 24120 us(24120 us): open_hca: getaddr_netdev ERROR: No such device. Is ib1 configured?
node204:c94e:6a9d720: 36487 us(36090 us): open_hca: device mthca0 not found
but your job should run and complete. It is, however, not able to use the InfiniBand network at full efficiency; investigations as to why are ongoing.
Experimental Bulldozer – jobs of 128 cores or more
If you are having problems running large Bulldozer jobs try the following alternative method. Note that the jobscript uses a different PE (the standard orte-64bd-ib.pe rather than the special hp-mpi-64bd-ib.pe). You must also set the extra environment variable as indicated in the jobscript below.
Before using this jobscript please load the extra Bulldozer MPI modulefile (normally StarCCM uses its own MPI files but here we are going to use the centrally-installed MPI on the CSF):
# Extra modulefile needed for StarCCM versions 8.x to 11.x:
module load mpi/open64-4.5.2/openmpi/1.6-ib-amd-bd

# Extra modulefile needed for StarCCM versions 12.x and later:
module load mpi/open64-4.5.2.1/openmpi/1.8.3-ib-amd-bd

# Now load the starccm version you require as usual - for example:
module load apps/binapps/starccm/11.06-mixed
Then use the following jobscript – this is different from your normal StarCCM jobscripts so please copy it carefully (we use 320 cores as an example – this is a large job!)
#!/bin/bash
#$ -pe orte-64bd-ib.pe 320   # Note: Using the standard orte-64bd-ib.pe, not hp-mpi-64bd-ib.pe
                             # (still requires 128 cores or more in multiples of 64)
#$ -cwd                      # Run in current directory
#$ -V                        # Inherit environment settings from modulefile

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

### Extra environment variable needed by StarCCM+
export OPENMPI_DIR=$MPI_HOME

# Replace myinput.sim with your input file.
# -mpi openmpi     : extra flag to indicate we are using the CSF's central OpenMPI software.
# -batchsystem sge : extra flag to indicate we are using the CSF batch system's machinefile.
starccm+ -batch -rsh ssh -mpi openmpi -batchsystem sge ./myinput.sim
The above should also work for smaller job sizes.
Runtime Limits and Licensing
Please note the following advice:
- February 2017: CSF users may run large jobs (128, 256 or 320 cores) but are requested to first try 64 cores. Larger jobs use more licenses and so may fail if licenses are unavailable. If large CSF jobs consume too many licenses the MACE teaching clusters may not be able to run StarCCM, at which point we will reinstate the 64-core limit described above.
- All AMD nodes have 2GB of RAM per core.
- All AMD nodes have a runtime limit of 4 days.
See the section below on checkpointing if your job needs a longer runtime.
Force a Checkpoint and optionally Abort
Checkpointing saves the current state of your simulation to file so that you can run the job again from the current state rather than from the beginning of the simulation. This is needed if your simulation is going to run for longer than the maximum runtime permitted (usually 4 days on the AMD nodes, 7 days on the Intel nodes in the CSF).
When asked to checkpoint, StarCCM+ will write out a new .sim file with @iteration in the name to indicate the iteration number at which the checkpoint file was made, for example myinput@25000.sim. You can then use this as the input .sim file for another job on the CSF.
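Since the iteration number is embedded in the filename, the most recent checkpoint can be picked out with a simple listing (the same idiom the automatic-requeue jobscript below uses):

ls -t myinput@*.sim | head -1   # Newest (most recently written) checkpoint file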
Manually
To force a checkpoint manually, leaving your batch job to carry on running the simulation after the checkpoint file has been written, run the following command on the login node in the directory from where you submitted the job:
touch CHECKPOINT
StarCCM+ checks for this file after every iteration. Once it sees the file it will save the current state of the simulation, rename the CHECKPOINT file to CHECKPOINT~ (so that it doesn't keep checkpointing), then carry on with the simulation. You can run touch CHECKPOINT again at some time in the future to generate a new checkpoint file.
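If you want checkpoints taken at regular intervals without logging in each time, a small watcher loop can be left running on the login node (e.g., inside a screen session). This is only a sketch, assuming your job id is 123456 and GNU sleep (which accepts the 12h suffix); run it in the job's submission directory:

# Hypothetical watcher: checkpoint every 12 hours while job 123456 is still known to the batch system.
while qstat -j 123456 > /dev/null 2>&1; do
    sleep 12h
    touch CHECKPOINT
done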
If you wish to checkpoint and then terminate your simulation (which will end your CSF job), run the following on the login node in the directory where your simulation is running:
touch ABORT
Your batch job will then terminate after it has written the checkpoint file.
Automatically in your jobscript
It is possible to automatically checkpoint your job near the end of the job’s runtime and have it re-queue itself on the CSF using the jobscript as shown below.
In this example we run a simulation file named myinput.sim. When the job re-queues after a checkpoint, it will run myinput@nnnnn.sim, where nnnnn is the iteration number at which the checkpoint was written. The script will automatically use the most recent checkpoint file. Eventually, the simulation will converge and StarCCM will exit normally. When this happens no further checkpoints are made and the job will not re-queue.
Note, it is important to change the SAVETIME setting to be the duration you want StarCCM to run for before it checkpoints, aborts and requeues the job. Given that a checkpoint occurs at the end of an iteration, you need to allow enough time for the current iteration to finish and for the checkpoint file to be written. In the example below we run on the AMD nodes, which have a maximum runtime of 4 days. Hence we set the checkpoint time to be at 3 days, 23 hours and 50 minutes. This gives StarCCM 10 minutes (which is usually plenty of time) to finish the current iteration and write the checkpoint file. Edit this setting as required.
Note that the script below does not delete any checkpoint files. If your simulation runs for a long time it may checkpoint a number of times and leave you with many large checkpoint files. You can periodically delete all but the most recent of these files.
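For example, the following one-liner keeps only the newest checkpoint file (check the output of ls -t first so you are sure which file will be kept – this deletes all the others):

ls -t myinput@*.sim | tail -n +2 | xargs -r rm --   # Delete all but the most recent checkpoint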
#!/bin/bash
#$ -cwd
#$ -V
#$ -pe hp-mpi-smp-64bd.pe 64   # This example uses a single 64-core AMD node

# ------ Edit these settings (carefully) -------
# How long the job should run for before we checkpoint
# and abort. Check the PE max runtime on the CSF webpages.
# dd:hh:mm:ss
SAVETIME=03:23:50:00
# Name of simulation file - do not add .sim to end
# of name. EG: Will use myinput.sim on first run or latest
# checkpoint file myinput@nnnnn.sim if available.
SIMFILE=myinput
# -----------------------------------------------

# Default exit status (job finished normally - do not requeue).
STATUS=0

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

# Find newest checkpoint file
SIM=`ls -t ${SIMFILE}*.sim | head -1`
if [ -n "$SIM" ]; then
  ISCHK=`echo $SIM | grep -c '@'`
  if [ $ISCHK -eq 1 ]; then
    echo "Using checkpoint file $SIM"
  else
    echo "Using original simulation file $SIM"
  fi
else
  echo "Failed to find any .sim files. Job will exit!"
  exit 1
fi

# Add a timer to create checkpoint file (the & is important!)
(sleep `echo $SAVETIME | awk -F: '{printf "%dd %dh %dm %ds",$1,$2,$3,$4}'`; echo "ABORT..."; touch ABORT) &
SLEEP_PID=$!

# Run ccm as usual for a 64-core job (see earlier in this webpage for more details)
starccm+ -batch $SIM -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID -mpi intel

# Tidy up the sleep timer
pkill -P $SLEEP_PID

# Automatically resubmit the job if we reached the checkpoint time?
if [ -f ABORT ]; then
  CHKFILE=`ls -t ${SIMFILE}@*.sim | head -1`
  if [ -f "$CHKFILE" ]; then
    echo "Job checkpointed at `date` - wrote $CHKFILE - job will be requeued automatically"
    STATUS=99    # Exiting with status 99 makes the batch system requeue the job
  else
    echo "No checkpoint file found (this is probably an error!) Job will not be requeued."
  fi
fi
rm -f ABORT
exit $STATUS
Client / Server Usage
The StarCCM+ GUI running on a campus desktop PC can be connected to a batch simulation running on the CSF. This allows the GUI to display the current state of the simulation (for example you can see graphs showing how particular variables are converging).
Note that the method below will mean that the CSF job, once it is running, does NOT automatically start the simulation. Instead StarCCM+ will wait for you to connect the GUI to the job. But the CSF job is running and will be consuming its available runtime (max 4 days on the CSF).
Please follow the instructions below:
- Open two terminal windows on your PC. For example, two MobaXterm windows (use the + button above the black command window to open a second command window in a new tab) or run MobaXterm twice. On Mac or Linux, open two Terminal applications.
- In the first command-window, log in to the CSF as normal. Then write your StarCCM+ batch job. This should be familiar. However, there is a small change to one of the flags on the starccm+ command-line, as shown in this example:
#!/bin/bash --login
#$ -cwd
#$ -pe hp-mpi-smp-64bd.pe 64   # Single-node (64 core) job to run a StarCCM simulation

# Load the modulefile for the version you require. For example:
module load apps/binapps/starccm/12.02-mixed

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

# Run starccm, but in server mode instead of batch mode.
# Note we now use -server instead of -batch (as used previously).
# Replace myinput.sim with your own input file.
starccm+ -server -load myinput.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID -mpi intel
Submit the job using qsub myjobscript as usual.
- Wait for the above job to run; you can check its state as shown below.
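While waiting, the job's state can be checked with SGE's qstat command:
qstat -u $USER   # 'qw' means queued (waiting), 'r' means the job is running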
When the job runs you will see a file named myjobscript.o12345, where 12345 will be unique to your job. Have a look in this file:
cat myjobscript.o12345
At the end of the file will be a message informing you where the job is running and on which port the server is listening:
Server::start -host node209.prv.csf.compute.estate:47827
  # The node name will likely be different - make a note of your node name.
  # The port number may be different but it is often 47827 (the ccm default).
- Now, in the second terminal window on your PC (NOT the CSF) log in to the CSF again with the following command:
# Use your own username, and the node name and port number reported earlier:
ssh -L 47827:node209:47827 mxyzabc1@csf2.itservices.manchester.ac.uk
Enter your CSF password when asked. You must leave this window logged in at all times while using the StarCCM+ GUI on your PC. The GUI will communicate with the CSF through this login (an SSH tunnel). You do not need to type any commands into this login window.
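Optionally, before starting the GUI, you can confirm the tunnel is listening from another command window on your PC (assuming the nc utility is installed; use your own port number):
nc -z localhost 47827 && echo "Tunnel is open"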
- Now start the StarCCM+ GUI on your desktop PC. For example, on Windows, do this via the Start Menu. This will display the main user interface. In the GUI:
- Select File menu then Connect to server…
- In the window that pops up set:
Host: localhost
Port: 47827   (or whatever number you got from above)
Then hit OK. The GUI will connect to the job on the CSF.
- If running the StarCCM+ GUI on a linux desktop, you can connect to the server using:
starccm+ -host localhost:47827   # Change the port number to match the one given above
- You can now run (start) the simulation by going to the Solution menu then Run (or simply press CTRL+R in the main window).
- If you just want to look at the mesh, open the Scenes node in the tree on the left. Then right-click on a geometry node and select Open. This will display the 3D geometry in the main viewer window.
- You can disconnect the GUI from the CSF job using the File menu then Disconnect from server. This will leave the simulation running on the CSF but it won’t update in the GUI. You can close the StarCCM GUI at this point.
Co-Simulation with Abaqus
It is possible to have STAR-CCM+ perform some calculations with Abaqus, exchanging data between the two applications. For example, in mechanical co-simulation STAR-CCM+ passes traction loads to Abaqus (pressure + wall shear stress), and Abaqus passes displacements to STAR-CCM+. In Abaqus, the traction loads are applied to the surface of the solid structure. In STAR-CCM+, the displacements are used as an input to the mesh morpher. Data is exchanged via the Co-Simulation module of STAR-CCM+.
You will need to set up the co-simulation in your STAR-CCM+ input file and also have available an Abaqus input file. You should also ensure the input files, when run in their respective applications, converge to solutions; otherwise the co-simulation will not converge.
More information is available in the STAR-CCM+ user guide, available on the login node by running the following command after you’ve loaded the starccm modulefile:
evince $STARCCM_UG
Example 32-core co-simulation job
In this example we will run a single-node 32-core job with 24 cores used by starccm and 8 cores used by Abaqus.
Create a directory structure for your co-simulation:
cd ~/scratch
mkdir co-sim           # Change the name as required
cd co-sim
mkdir abaqus starccm   # Two directories to hold the input and output files from each app
Now copy your input files to the respective directories. For example:
cp ~/abq_cosim.inp ~/scratch/co-sim/abaqus
cp ~/ccm_cosim.sim ~/scratch/co-sim/starccm
Now make some changes to the STAR-CCM+ input file to enable co-simulation. You can do this on your local workstation if you prefer, but it is useful to be able to do it on the CSF when you want to change the settings before submitting a job – you will avoid transferring the input file back and forth between your workstation and the CSF:
# Load the required version of starccm, for example:
module load apps/binapps/starccm/13.04-mixed

# Start an interactive job to run the starccm GUI:
cd ~/scratch/co-sim/starccm
qrsh -l inter -l short -V -cwd starccm+ ccm_cosim.sim
When the GUI starts, open the Co-Simulations node in the tree viewer, expand Link1, then look for the following attributes and set their values as follows:
Co-Simulation Type = Abaqus Co-Simulation   # Choose from the drop-down menu
...
### Note there are several simulation settings (e.g., ramping parameters) that control
### the simulation. You will need to set these but they are beyond the scope of this
### web-page. Please refer to the starccm user guide.
...
Abaqus Execution
  Current Job Name = my-abq-cosim        # Can be any name - abaqus output files will have this name
  Input file = ../abaqus/abq_cosim.inp   # See directory structure created above
  Executable name = abq2016              # This is the Abaqus command used on the CSF (choose your version)
  Number of CPUs = 8                     # This MUST be set correctly for the jobscript (see below)
  Remote shell = ssh                     # The default - leave set to this
Save the input file and exit the STAR-CCM+ GUI.
Now create a jobscript to load the starccm and abaqus modulefiles, and run starccm with the correct number of cores:
cd ~/scratch/co-sim/starccm
gedit cosim.qsub
The jobscript should contain the following:
#!/bin/bash --login
#$ -cwd
#$ -pe hp-mpi-smp.pe 32   # Single-node 32-core job

## Load the modulefiles in the jobscript so we always know which version we used
module load apps/binapps/starccm/13.04-mixed
module load apps/binapps/abaqus/abq

## We will use 24 (out of 32) cores for starccm and 8 (out of 32) cores for abaqus.
## Manually set the special NSLOTS variable to these numbers so the license check
## scripts test for the correct number of licenses. If there are not enough licenses
## then the job will requeue.
export NSLOTS=8
. $ABAQUS_HOME/liccheck.sh

## NOTE: We leave NSLOTS set to the number needed for starccm after this license check.
export NSLOTS=24
. $STARCCM_HOME/liccheck.sh

## Now run starccm with 24 cores. It will run the 'abq2016' command with 8 cores as set
## in the input file. Replace ccm_cosim.sim with your own co-simulation input file
## (the file edited above).
starccm+ -batch ccm_cosim.sim -rsh ssh -np $NSLOTS -machinefile machinefile.$JOB_ID
Submit the job using:
qsub cosim.qsub
When the job runs you will see output files written to the ~/scratch/co-sim/abaqus and ~/scratch/co-sim/starccm directories. For example, to see what is happening in each simulation:
cd ~/scratch/co-sim/abaqus
tail -f my-abq-cosim.msg     # The abaqus output file name was set in the starccm input file above.
                             # Press CTRL+C to exit out of the 'tail' command.

cd ~/scratch/co-sim/starccm
tail -f cosim.qsub.o123456   # Replace 123456 with the job id number of your starccm job.
                             # Press CTRL+C to exit out of the 'tail' command.
The simulation should run until it is converged.
Further Information
Further information on StarCCM+ and other CFD applications may be found by visiting the MACE CFD Forum.
The STAR-CCM+ user guide is available on the CSF using the following command after you have loaded the starccm modulefile:
evince $STARCCM_UG