Parallel Jobs (Slurm)
Parallel batch job submission (Slurm)
For jobs that require 2-168 CPU cores running on a single AMD Genoa compute node, use the jobscript template shown below. Please also consult the Partitions page for details on available compute resources.
Please also consult the software page for the specific code / application you are running for advice on running that application.
A parallel job script will run in the directory (folder) from which you submit the job. The jobscript takes the form:
#!/bin/bash --login
#SBATCH -p multicore     # Partition is required. Runs on AMD Genoa hardware.
#SBATCH -n numcores      # (or --ntasks=) where numcores is between 2 and 168.
#SBATCH -t 4-0           # Wallclock limit (days-hours). Required!
                         # Max permitted is 7 days (7-0).

# Load any required modulefiles. A purge is used to start with a clean environment.
module purge
module load apps/some/example/1.2.3

### OpenMP jobs ###
# $SLURM_NTASKS will be set to the numcores given above on the -n line.
export OMP_NUM_THREADS=$SLURM_NTASKS
openmp-app.exe

### MPI jobs ###
# mpirun knows how many cores to use
# (note: do not set OMP_NUM_THREADS unless running mixed-mode jobs)
mpirun mpi-app.exe
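The OpenMP section of the jobscript relies on Slurm exporting $SLURM_NTASKS into the job environment. The effect can be sketched outside Slurm by setting the variable by hand (the value 4 below is purely illustrative; inside a real job Slurm sets it for you from the -n line):

```shell
#!/bin/bash
# Simulate the environment Slurm would create for a '#SBATCH -n 4' job.
# In a real job this variable is set by Slurm, not by you.
SLURM_NTASKS=4

# Same line as in the jobscript: give OpenMP one thread per requested core.
export OMP_NUM_THREADS=$SLURM_NTASKS

echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

An OpenMP application launched after this export will start $SLURM_NTASKS threads, matching the cores Slurm has allocated.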
In the above jobscript, numcores must be replaced with the actual number of cores required. For example:
#SBATCH -n 84 # Causes the $SLURM_NTASKS env var to be set (to 84)
If you require the $SLURM_CPUS_PER_TASK env var in your jobscript, then you should specify -c instead of -n:
#SBATCH -c 84 # Causes the $SLURM_CPUS_PER_TASK env var to be set (to 84)
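When -c is used, Slurm sets $SLURM_CPUS_PER_TASK rather than $SLURM_NTASKS, so the OpenMP export line changes accordingly. A minimal sketch, again simulating the variable outside Slurm (the value 84 mirrors the example above):

```shell
#!/bin/bash
# Simulate the environment Slurm would create for a '#SBATCH -c 84' job.
# In a real job this variable is set by Slurm, not by you.
SLURM_CPUS_PER_TASK=84

# With -c, size OpenMP from SLURM_CPUS_PER_TASK (not SLURM_NTASKS,
# which with -c reflects the number of tasks, not cores per task).
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

This form is also the usual starting point for mixed-mode (MPI + OpenMP) jobs, where each MPI task runs several OpenMP threads.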
Available Hardware and Resources
Please see the Partitions page for details on available compute resources.