StarCCM

Overview

StarCCM+ is a computational continuum mechanics application which can handle problems relating to fluid flow, heat transfer and stress. See below for the available versions.

Restrictions on Use

Only users who have been added to the StarCCM group can run the application (run the groups command to see your group memberships). Owing to licence restrictions, only users from the School of MACE can be added to this group. Requests to be added to the StarCCM group should be emailed to
its-ri-team@manchester.ac.uk.
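
For example, you can confirm on the login node that you have been added to the group. The group names shown here are illustrative; the important thing is that a StarCCM-related group appears in the list:

groups
  # Example output: username mace starccm ...
  # If no StarCCM group is listed, email the address above to be added.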

When the job runs there may not be enough licenses for the number of cores you have requested. There is currently high demand for licenses and they may run out. To check whether there are enough licenses for your job, you can add the following line to your jobscript:

. $STARCCM_HOME/liccheck.sh

(that’s a full-stop followed by a space at the start of the line – please copy it carefully.)

If there are not enough licenses for your job the job will automatically re-queue in the CSF batch system. If there are enough licenses the job will run StarCCM. If you omit this check your job will simply fail if there are not enough licenses.
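
If the license check does requeue your job, it goes back to a pending state until it next runs. A quick, illustrative way to confirm this from the login node:

squeue -u $USER
  # A requeued job appears in the list again with a pending (PD) state.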

Set Up Procedure

Once you have been added to the StarCCM+ group, you will be able to access the modulefiles. We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.

Load one of the following modulefiles:

# Choose required precision (double is more accurate but slower)
module load  starccm/19.04.009-mixed           # Also called v2024.6
module load  starccm/19.04.009-double

module load starccm/18.02.010-mixed            # Also called v2023.2
module load starccm/18.02.010-double

module load starccm/17.04.008-mixed            # Also called v2022.6
module load starccm/17.04.008-double


module load starccm/17.02.007-mixed            # Also called v2022.1
module load starccm/17.02.007-double

module load starccm/15.04.010-mixed            # Also called v2020.2.1 
module load starccm/15.04.010-double

module load starccm/15.02.009-mixed            # Also called v2020.1
module load starccm/15.02.009-double
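
You can also check which modulefiles you have access to and which one is currently loaded (useful if a job fails with a "command not found" error). For example:

module avail starccm     # List the starccm modulefiles available to you
module list              # Confirm which modulefile is currently loaded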

Running the Application

Currently, only batch use of StarCCM+ is supported; no attempt should be made to use the StarCCM+ GUI directly on the CSF (however, see below for the client-server method). Once you have loaded the modulefile for the version you wish to run, please write a jobscript based on one of those below and then submit it to the batch system with this command:

sbatch myjobscript

replacing myjobscript with the name of your file.
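
sbatch will print the id assigned to your job, which you will need if you want to monitor or cancel it. For example (the job id shown is illustrative):

Submitted batch job 123456

squeue -u $USER          # Check the state of your jobs
scancel 123456           # Cancel a job if required (use your own job id)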

Small Parallel Jobs

These are jobs that use a single compute node – between 2 and 40 cores.

Jobs of up to 40 cores

This example describes how to submit an SMP job (one that uses cores within a single compute node):

#!/bin/bash --login
#SBATCH -p multicore   # Single compute-node parallel job
#SBATCH -n 4           # (--ntasks=4) Use 4 cores on a single node (max 40)

# We now recommend loading the modulefile in the jobscript. Use your required version
module load starccm/19.04.009-mixed

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

### For version 19.04 and onwards, the SLURM integration allows a simpler command:
starccm+ -batch -pio -mpi openmpi -bs slurm myinput.sim

### For older versions, please use the following more manual method:
# Create a file listing the nodes to use
NODEFILE=nodes.$SLURM_JOB_ID.txt
scontrol show hostnames $SLURM_JOB_NODELIST > $NODEFILE

# NB: -pio enables parallel IO to speed up file access (CSF4 only)
starccm+ -batch -pio -mpi openmpi -machinefile $NODEFILE -np $SLURM_NTASKS myinput.sim
                                                                              #
                                                                              # Replace myinput.sim with
                                                                              # your sim file name.

Submit the job to the batch system using

sbatch jobscript

where jobscript is the name of your jobscript file.

Large Parallel Jobs

Please check how your job scales before running large parallel jobs!

Do not assume that using twice as many cores will run your job in half the time! Jobs do not always scale linearly. Users are encouraged to test how their problem scales when considering a new problem or mesh size. This can be done easily using an inbuilt script provided by CD-Adapco, which can be invoked using the -benchmark flag. For example:

#!/bin/bash --login
#SBATCH -p multinode   # Multiple compute-node parallel job
#SBATCH -N 2           # Two compute nodes
#SBATCH -n 80          # (--ntasks=80) Two compute nodes, each using 40 cores

# We now recommend loading the modulefile in the jobscript. Use your required version
module load starccm/19.04.009-mixed

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

### For version 19.04 and onwards, the SLURM integration allows a simpler command:
starccm+ -batch -pio -mpi openmpi -bs slurm myinput.sim

### For older versions, please use the following more manual method:
# Create a file listing the nodes to use
NODEFILE=nodes.$SLURM_JOB_ID.txt
scontrol show hostnames $SLURM_JOB_NODELIST > $NODEFILE

# NB: -pio enables parallel IO to speed up file access (CSF4 only)
starccm+ -benchmark "-nps 1,2,4,8,16,32,40,80 -nits 20" -pio -mpi openmpi \
                    -machinefile $NODEFILE -np $SLURM_NTASKS myinput.sim
            #
            # Run the automatic benchmarks
            # on your input model

Submit the job to the batch system using

sbatch jobscript

where jobscript is the name of your jobscript file. Any text printed out by the job will be in the slurm-nnnnn.out text file, where nnnnn is the unique job id number for your job.

This will run 20 iterations on 1, 2, 4, 8, 16, 32, 40 and finally 80 cores, timing the runs. You may wish to run a smaller number of tests, for example, with 1,20,40,80 cores. Ensure you request enough cores using #SBATCH -n numcores to match your largest benchmark. Note that the " (quote) characters are required around the "-nps 1,2..." flag.

The output is a .html file which can be opened using a web browser. This file will show how your problem scales, which can be used to decide how many cores to use. The parallel efficiency can be computed as speedup / number of workers from the resulting table. Once the efficiency drops below ~85% you should consider using fewer cores, which may actually increase the speed of your computations.
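
For example, a sketch of the calculation using made-up timings: if the 1-core run takes 1000 seconds and the 40-core run takes 31 seconds, the speedup is 1000/31 ≈ 32 and the efficiency is 32/40 ≈ 81%, which is below ~85%, so a smaller core count may be more appropriate. You can do the arithmetic on the login node if you wish:

awk 'BEGIN { t1=1000; t40=31; s=t1/t40; printf "speedup=%.1f efficiency=%.0f%%\n", s, 100*s/40 }'
  # speedup=32.3 efficiency=81%
  # (timings here are illustrative - use the values from your .html report)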

Multi-node jobs of 80 cores or more

This is a multi-node example using two 40-core nodes (hence 80 cores in total). You can specify more nodes by using multiples of 40 for the number of cores (e.g., 120 for three nodes).

#!/bin/bash --login
#SBATCH -p multinode   # Multiple compute-node parallel job
#SBATCH -N 2           # Two compute nodes
#SBATCH -n 80          # (--ntasks=80) Two compute nodes, each using 40 cores

# We now recommend loading the modulefile in the jobscript. Use your required version
module load starccm/19.04.009-mixed

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

### For version 19.04 and onwards, the SLURM integration allows a simpler command:
starccm+ -batch -pio -mpi openmpi -bs slurm myinput.sim

### For older versions, please use the following more manual method:
# Create a file listing the nodes to use
NODEFILE=nodes.$SLURM_JOB_ID.txt
scontrol show hostnames $SLURM_JOB_NODELIST > $NODEFILE

# NB: -pio enables parallel IO to speed up file access (CSF4 only)
starccm+ -batch -pio -mpi openmpi -machinefile $NODEFILE -np $SLURM_NTASKS myinput.sim
                                                                                #
                                                                                #
                                                      # Replace myinput.sim with
                                                      # your input file name.

Submit the job to the batch system using

sbatch jobscript

where jobscript is the name of your jobscript file. Any text printed out by the job will be in the slurm-nnnnn.out text file, where nnnnn is the unique job id number for your job.

Force a Checkpoint and optionally Abort

Checkpointing saves the current state of your simulation to file so that you can run the job again from the current state rather than from the beginning of the simulation. This is needed if your simulation is going to run for longer than the maximum runtime permitted (7 days).

When asked to checkpoint, StarCCM+ will write out a new .sim file with @iteration in the name to indicate the iteration number at which the checkpoint file was made. For example myinput@25703.sim. You can then use this as the input .sim file for another job on the CSF.
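
For example, to find the most recent checkpoint file in your job directory (file names here are illustrative):

ls -t myinput@*.sim | head -1
  # e.g. myinput@25703.sim - use this file name as the input .sim in your next jobscript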

Manually

To force a checkpoint manually, leaving your batch job to carry on running the simulation after the checkpoint file has been written, run the following command on the login node in the directory from where you submitted the job:

touch CHECKPOINT

StarCCM+ checks after every iteration for this file. Once it sees the file it will save the current state of the simulation then rename the CHECKPOINT file to be CHECKPOINT~ (so that it doesn’t keep checkpointing) then carry on with the simulation. You can run touch CHECKPOINT again at some time in the future to generate a new checkpoint file.

If you wish to checkpoint and then terminate your simulation (which will end your CSF job), run the following on the login node in the directory where your simulation is running:

touch ABORT

Your batch job will then terminate after it has written the checkpoint file.

Automatically in your jobscript

It is possible to automatically checkpoint your job near the end of the job’s runtime and have it re-queue itself on the CSF using the jobscript as shown below.

In this example we run a simulation file named myinput.sim. When the job re-queues after a checkpoint, it will run myinput@nnnnn.sim where nnnnn is the iteration number at which the checkpoint was written. The script will automatically use the most recent checkpoint file. Eventually, the simulation will converge and StarCCM will exit normally. When this happens no further checkpoints are made and the job will not re-queue.

Note that the script below does not delete any checkpoint files. If your simulation runs for a long time it may checkpoint a number of times and leave you with many large checkpoint files. You can periodically delete all but the most recent of these files.
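
For example, the following commands keep only the newest checkpoint file. This is only a sketch, assuming your simulation is named myinput - check the file listing carefully before deleting anything:

ls -t myinput@*.sim                                 # Newest checkpoint is listed first
ls -t myinput@*.sim | tail -n +2 | xargs -r rm --   # Delete all but the newest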

#!/bin/bash --login
#SBATCH -p multinode              # Job will run on the multi-node compute nodes
#SBATCH -N 2                      # Two compute nodes
#SBATCH -n 80                     # This example uses two 40-core nodes
#SBATCH --signal=B:USR1@300       # Send a signal 300 seconds before end of job to write a checkpoint 
                                  # file. Is this enough time to write the checkpoint file?

# -------------- Edit settings (carefully) ----------------

# Name of simulation file - DO NOT add .sim to end of name.
# Will be used to find SIMFILE.sim on first run or the
# most recent SIMFILE@nnnnn.sim checkpoint file if available.

SIMFILE=myinput

# Version of StarCCM to use
module load starccm/19.04.009-mixed

# -------------- End of settings --------------------------

# Check for licenses, requeue if not enough available.
. $STARCCM_HOME/liccheck.sh

function checkpoint_abort {
    # Ask starccm to checkpoint then exit
    touch ABORT

    # Wait for StarCCM to write the checkpoint file
    wait $CCMPID

    # If checkpoint file written, job will requeue itself automatically 
    CHKFILE=`ls -t ${SIMFILE}@*.sim | head -1`
    if [ -f "$CHKFILE" ]; then 
      echo "Job checkpointed at `date` - wrote $CHKFILE - job will be requeued automatically"
      STATUS=99
    else
      echo "No checkpoint file found (this is probably an error!) Job will not be requeued."
      STATUS=1
    fi
}
trap checkpoint_abort USR1 

# Find newest checkpoint file 
SIM=`ls -t ${SIMFILE}*.sim | head -1`
if [ -z "$SIM" ]; then
  echo "Failed to find any ${SIMFILE}[@nnnn].sim files. Job will exit!"
  exit 1
fi

# Create a file listing the nodes to use
NODEFILE=nodes.$SLURM_JOB_ID.txt
scontrol show hostnames $SLURM_JOB_NODELIST > $NODEFILE

# Start starccm+ (with parallel IO - CSF4 only) and save its process id (the & at the end is important!)
starccm+ -batch -pio -mpi openmpi -machinefile $NODEFILE -np $SLURM_NTASKS $SIM &
CCMPID=$!

# Wait for starccm to finish
wait

# Job will be requeued to carry on from last checkpoint if necessary (STATUS=99);
# otherwise exit cleanly
exit ${STATUS:-0}

Submit the job to the batch system using

sbatch jobscript

where jobscript is the name of your jobscript file. Any text printed out by the job will be in the slurm-nnnnn.out text file, where nnnnn is the unique job id number for your job.

Client / Server Usage

The StarCCM+ GUI running on a campus desktop PC can be connected to a batch simulation running on the CSF. This allows the GUI to display the current state of the simulation (for example you can see graphs showing how particular variables are converging).

Note that the method below will mean that the CSF job, once it is running, does NOT automatically start the simulation. Instead StarCCM+ will wait for you to connect the GUI to the job. But the CSF job is running and will be consuming its available runtime (max 7 days on the CSF).
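
If you know roughly how long you will need the server for, you may want to request a shorter wall time than the 7-day maximum so that the job does not hold resources unnecessarily. For example, you could add a line such as the following to the jobscript below (the time limit shown is illustrative; -t is the standard Slurm --time option):

#SBATCH -t 1-00:00:00    # Request a maximum runtime of 1 day (days-hours:minutes:seconds)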

Please follow the instructions below:

  1. Open two terminal windows on your PC. For example, two MobaXterm windows (use the + button above the black command window to open a second command window in a new tab) or run MobaXterm twice. On Mac or Linux, open two Terminal applications.
  2. In the first command-window, log in to the CSF as normal. Then write your StarCCM+ batch job. This should be familiar. However, there is a small change to one of the flags on the starccm+ command-line, as shown in this example:
    #!/bin/bash --login
    #SBATCH -p multicore     # Single compute-node parallel job
    #SBATCH -n 40            # 40 core job to run a StarCCM simulation
    
    # We now recommend loading the modulefile in the jobscript. Use your required version
    module load starccm/15.02.009-mixed
    
    # Check for licenses, requeue if not enough available.
    . $STARCCM_HOME/liccheck.sh
    
    # Create a file listing the nodes to use
    NODEFILE=nodes.$SLURM_JOB_ID.txt
    scontrol show hostnames $SLURM_JOB_NODELIST > $NODEFILE
    
    # Run starccm, but in server mode instead of batch mode
    starccm+ -server -load myinput.sim -pio -mpi openmpi -machinefile $NODEFILE -np $SLURM_NTASKS
               #               #
               #               # Replace myinput.sim with your own input file.
               #
               # Now we use -server instead of -batch (as used previously)
    

    Submit the job using sbatch myjobscript as usual.

  3. Wait for the above job to run. When it does you will see a file named slurm-12345.out where 12345 will be unique to your job. Have a look in this file:
    cat slurm-12345.out
    

    At the end of the file will be a message informing you where the job is running and on which port the server is listening:

    Server::start -host node070.csf4.local:47827
                            #              #
                            #              # The port number may be different
                            #              # but it is often 47827 (the default).
                            #
                            # The node name will likely be different.
                            # Make a note of your node name.
    
  4. Now, in the second terminal window on your PC (NOT the CSF) log in to the CSF again with the following command:
    ssh -L 47827:node070:47827 mxyzabc1@csf4.itservices.manchester.ac.uk
             #       #     #       #
             #       #     #       # Use your own username here
             #       #     #
             #       #     # Use the port number reported earlier
             #       #
             #       # Use the node name reported earlier
             #
             # Use the port number reported earlier (the local end of the tunnel on your PC)
    

    Enter your CSF password when asked. You must leave this window logged in at all times while using the StarCCM+ GUI on your PC. The GUI will be communicating with the CSF through this login (tunnel). You do not need to type any commands into this login window.

  5. Now start the StarCCM+ GUI on your desktop PC. For example, on Windows, do this via the Start Menu. This will display the main user interface. In the GUI:
    1. Select File menu then Connect to server…
    2. In the window that pops up set:
      Host: localhost
      Port: 47827 (or whatever number you got from above).
      

      Then hit OK. The GUI will connect to the job on the CSF.

    3. If running the StarCCM+ GUI on a linux desktop, you can connect to the server using:
      starccm+ -host localhost:47827 # Change the port number to match given above
      
    4. You can now run (start) the simulation by going to the Solution menu then Run (or simply press CTRL+R in the main window).
    5. If you just want to look at the mesh, Open the Scenes node in the tree on the left. Then right-click on a geometry node and select Open. This will display the 3D geometry in the main viewer window.
  6. You can disconnect the GUI from the CSF job using the File menu then Disconnect from server. This will leave the simulation running on the CSF but it won’t update in the GUI. You can close the StarCCM GUI at this point.

Co-Simulation with Abaqus

It is possible to have STAR-CCM+ perform some calculations with Abaqus, exchanging data between the two applications. For example, in mechanical co-simulation STAR-CCM+ passes traction loads to Abaqus (pressure + wall shear stress), and Abaqus passes displacements to STAR-CCM+. In Abaqus, the traction loads are applied to the surface of the solid structure. In STAR-CCM+, the displacements are used as an input to the mesh morpher. Data is exchanged via the Co-Simulation module of STAR-CCM+.

You will need to set up the co-simulation in your STAR-CCM+ input file and also have available an Abaqus input file. You should also ensure that the input files, when run in their respective applications, converge to solutions, otherwise the co-simulation will not converge.

More information is available in the STAR-CCM+ user guide, available on the login node by running the following command after you’ve loaded the starccm modulefile:

evince $STARCCM_UG

Example 40-core co-simulation job

In this example we will run a single-node 40-core job with 32 cores used by StarCCM and 8 cores used by Abaqus.

Create a directory structure for your co-simulation:

cd ~/scratch
mkdir co-sim # Change the name as required
cd co-sim
mkdir abaqus starccm # Two directories to hold the input and output files from each app

Now copy your input files to the respective directories. For example:

cp ~/abq_cosim.inp ~/scratch/co-sim/abaqus
cp ~/ccm_cosim.sim ~/scratch/co-sim/starccm

Now make some changes to the STAR-CCM+ input file to enable co-simulation. You can do this on your local workstation if you prefer, but it is useful to be able to do it on the CSF when you want to change the settings before submitting a job – you avoid transferring the input file back and forth between your workstation and the CSF:

# Load the required version of starccm, for example:
module load starccm/15.02.009-mixed

# Start an interactive job to run the starccm GUI:
cd ~/scratch/co-sim/starccm
srun --pty starccm+ ccm_cosim.sim

When the GUI starts, open the Co-Simulations node in the tree viewer, expand Link1 then look for the following attributes and set their values as follows:

Co-Simulation Type = Abaqus Co-Simulation # Choose from the drop-down menu
...
### Note there are several simulation settings (e.g., ramping parameters) that control
### the simulation. You will need to set these but they are beyond the scope of this
### web-page. Please refer to the starccm user guide.
...
Abaqus Execution
    Current Job Name = my-abq-cosim       # Any name, used to name abaqus output files
    Input file = ../abaqus/abq_cosim.inp  # See directory structure created above
    Executable name = abq2020             # The Abaqus command used on the CSF (choose version)
    Number of CPUs = 8                    # This MUST match the jobscript (see below)
    Remote shell = ssh                    # The default - leave set to this

Save the input file and exit the STAR-CCM+ GUI.

Now create a jobscript to load the starccm and abaqus modulefiles, and run starccm with the correct number of cores:

cd ~/scratch/co-sim/starccm
gedit cosim.sh

The jobscript should contain the following:

#!/bin/bash --login
#SBATCH -p multicore        # Single-node parallel job
#SBATCH -n 40               # 40-cores in total for StarCCM and Abaqus to use

## Load the modulefiles in the jobscript so we always know which version we used
module load starccm/15.02.009-mixed
module load abaqus/2020

## We will use 32 (out of 40) cores for starccm and 8 (out of 40) cores for abaqus.
## Manually set the special SLURM_NTASKS variable to these numbers so the license check
## scripts test for the correct number of licenses. If there are not enough licenses
## then the job will requeue.
export SLURM_NTASKS=8
. $ABAQUS_HOME/liccheck.sh

## NOTE: We will leave SLURM_NTASKS set to the number needed for starccm after this license check
export SLURM_NTASKS=32
. $STARCCM_HOME/liccheck.sh

# Create a file listing the nodes to use
NODEFILE=nodes.$SLURM_JOB_ID.txt
scontrol show hostnames $SLURM_JOB_NODELIST > $NODEFILE

## Now run starccm with 32 cores. It runs the 'abq2020' command with 8 cores (set in input file)
starccm+ -batch ccm_cosim.sim -mpi openmpi -machinefile $NODEFILE -np $SLURM_NTASKS

Submit the job using

sbatch cosim.sh

When the job runs you will see output files written to the ~/scratch/co-sim/abaqus and ~/scratch/co-sim/starccm directories. For example, to see what is happening in each simulation:

cd ~/scratch/co-sim/abaqus
tail -f my-abq-cosim.msg # The abaqus output file name was set in the starccm input file above
  #
  # Press CTRL+C to exit out of the 'tail' command

cd ~/scratch/co-sim/starccm
tail -f slurm-123456.out # Replace 123456 with the job id number of your starccm job
  #
  # Press CTRL+C to exit out of the 'tail' command

The simulation should run until it is converged.

Further Information

Further information on StarCCM+ and other CFD applications may be found by visiting the MACE CFD Forum.

The STAR-CCM+ user guide is available on the CSF using the following command after you have loaded the starccm modulefile:

evince $STARCCM_UG
