OPAL
Overview
OPAL (Object Oriented Parallel Accelerator Library) is an open-source parallel tool for charged-particle optics in linear accelerators and rings, including 3D space charge. Simulations are described using the MAD language with extensions.
Version 2022.1.0 (binary install, not source) is installed on the CSF.
Restrictions on use
OPAL is released under the GPL V3 license. There are no restrictions on accessing the software on the CSF but all usage must adhere to the license.
Set up procedure
We now recommend loading modulefiles within your jobscript so that you have a full record of how the job was run. See the example jobscript below for how to do this. Alternatively, you may load modulefiles on the login node and let the job inherit these settings.
Load one of the following modulefiles:
module load apps/binapps/opal/2022.1.0 # This uses the GCC / OpenMPI toolset
Note that the modulefile will change some of the default settings usually set up by the OPAL profile.d/opal.sh script:

$OTB_DOWNLOAD_DIR is set to ~/scratch
$OTB_SRC_DIR is set to ~/scratch
$NJOBS is set to 1
If you wish to override these settings, you can change them in your jobscript after you’ve loaded the modulefile.
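For example, the overrides could be placed in your jobscript immediately after the module load line. This is only a sketch; the directory names and job count below are hypothetical and should be adapted to your own setup:

```shell
# Illustrative overrides only - adjust paths and counts to suit your job.
# These lines would follow "module load apps/binapps/opal/2022.1.0" in a jobscript.
export OTB_DOWNLOAD_DIR=$HOME/scratch/opal_downloads   # hypothetical download area
export OTB_SRC_DIR=$HOME/scratch/opal_src              # hypothetical source area
export NJOBS=4                                         # use 4 build jobs instead of 1
echo "NJOBS=$NJOBS"
```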
Running the application
Please do not run OPAL on the login node. Jobs should be submitted to the compute nodes via batch.
Serial batch job submission
Create a batch submission script (which will load the modulefile in the jobscript), for example:
#!/bin/bash --login
#SBATCH -p serial      # (or --partition=) Run on the nodes dedicated to 1-core jobs
#SBATCH -t 4-0         # Wallclock time limit. 4-0 is 4 days. Max permitted is 7-0.

# Start with a clean environment - modules are inherited from the login node by default.
module purge
module load apps/binapps/opal/2022.1.0

opal mysim.in
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Parallel batch job submission
OPAL can use multiple cores. For example, to run on the multi-core (single-node) AMD nodes:
#!/bin/bash --login
#SBATCH -p multicore   # (or --partition=) Run on the AMD 168-core nodes
#SBATCH -n 16          # (or --ntasks=) Number of cores to use.
#SBATCH -t 4-0         # Wallclock time limit. 4-0 is 4 days. Max permitted is 7-0.

# Start with a clean environment - modules are inherited from the login node by default.
module purge
module load apps/binapps/opal/2022.1.0

# mpirun knows to use $SLURM_NTASKS (the -n number from above) cores
mpirun opal mysim.in
Submit the jobscript using:
sbatch scriptname
where scriptname is the name of your jobscript.
Further info
Updates
None.