Multi-Node Jobs
Large OpenFOAM jobs can be run across multiple nodes.
Job Script Template
Here is an example job submission script for a multi-node OpenFOAM job:
```bash
#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --time=0-01:00:00   # d-hh:mm:ss
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=28
#SBATCH --cpus-per-task=1

## system error message output file
## leave %j as is; it is replaced by the job ID number
#SBATCH -e my_job_%j.err

## system message output file
#SBATCH -o my_job_%j.out

# Record some potentially useful details about the job:
echo Running on host $(hostname)
echo Time is $(date)
echo Directory is $(pwd)
echo Slurm job ID is ${SLURM_JOBID}
echo This job runs on the following machines:
echo ${SLURM_JOB_NODELIST}
printf "\n\n\n\n"

# Load the modules required at runtime, e.g.
module load apps/openfoam/6

# Run the solver. Here pisoFoam with 112 processes (4 nodes x 28 tasks per node):
mpirun -np 112 pisoFoam -parallel

echo End time is $(date)
echo "pisoFoam finished"
printf "\n\n"
```
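Before a parallel OpenFOAM run, the case must be decomposed into as many subdomains as there are MPI processes, using the decomposePar utility. The numberOfSubdomains entry in system/decomposeParDict must therefore match the -np value passed to mpirun (112 in the template above). A minimal sketch of the relevant entries (the choice of the scotch method here is illustrative, not mandated by the template):

```
// system/decomposeParDict (fragment)
numberOfSubdomains  112;      // must equal the -np value passed to mpirun

method              scotch;   // scotch chooses the decomposition automatically,
                              // so no per-direction coefficients are needed
```

Run decomposePar in the case directory before submitting the job; the solver will fail to start in parallel if the number of processor directories does not match the process count.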
Note
For further information on the structure and syntax of Slurm job scripts, see the ACRC HPC documentation pages on job types.
How to use
Save the script file (e.g. as run_job.sh) in the case directory, cd into that directory, and submit the script to Slurm with sbatch. The number of MPI processes passed to mpirun must equal the total number of tasks requested, i.e. nodes times tasks per node (4 x 28 = 112 in the template above).
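Rather than hard-coding the process count, it can be derived from the environment variables Slurm sets inside a job. A minimal sketch of the arithmetic (the variable values are hard-coded here only for illustration; in a real job Slurm provides them automatically):

```shell
#!/bin/bash
# Slurm sets these inside a running job; hard-coded here for illustration.
SLURM_JOB_NUM_NODES=4
SLURM_NTASKS_PER_NODE=28

# Total MPI processes = nodes x tasks per node
NP=$(( SLURM_JOB_NUM_NODES * SLURM_NTASKS_PER_NODE ))
echo "Total MPI processes: ${NP}"

# The corresponding solver invocation would then be
# (commented out here, as it requires OpenFOAM):
# mpirun -np ${NP} pisoFoam -parallel
```

Using the derived count means the mpirun line stays correct if the --nodes or --ntasks-per-node values are later changed.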