Last week, I attended a “Midi conférence” (basically a one-hour training session over lunch) offered by Compute Canada about Slurm, GNU Parallel and how to combine them. This was a very useful and timely presentation for me (🇫🇷 slides available online 🔗), as I had just gotten started with Slurm (see my previous notes on the topic) and was eager to learn more.
Earlier today, I found an opportunity to put into practice what I learned last week. Indeed, I needed to download hundreds of shapefiles (2 different kinds of shapefiles at 2 different for almost 120 years) and extract values from them before deleting them. To do so, I wrote the following bash script to distribute the simulations over 5 nodes, using 4 CPUs per node:
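A minimal sketch of such a script, assuming the `Rscript` invocation and the two input lists fed to `parallel` (both `1 2`, to produce the four combinations discussed below) — the `#SBATCH` options are the ones described next:

```shell
#!/bin/bash
#SBATCH --time=6:00:00
#SBATCH --nodes=5
#SBATCH --array=1-5
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=4G

# On each node, GNU Parallel runs the R script over the cross product
# of the two input lists, i.e. (1,1), (1,2), (2,1), (2,2), with up to
# 4 jobs at once (one per allocated CPU).
parallel -j 4 Rscript scr_extract.R "$SLURM_ARRAY_TASK_ID" {1} {2} ::: 1 2 ::: 1 2
```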
The #SBATCH instructions do the following:
- #SBATCH --time=6:00:00: allocate 6 hours for the job;
- #SBATCH --nodes=5: allocate 5 nodes;
- #SBATCH --array=1-5: generate 5 tasks with 1, 2, 3, 4 and 5 as identifiers (exposed to each task as $SLURM_ARRAY_TASK_ID);
- #SBATCH --ntasks-per-node=1: run only one task per node;
- #SBATCH --cpus-per-task=4: allocate 4 CPUs per task;
- #SBATCH --mem-per-cpu=4G: allocate 4 GB of RAM per CPU.
And in the main command, scr_extract.R takes 3 arguments:
- the first one is the task identifier ($SLURM_ARRAY_TASK_ID) managed by Slurm; given the setup described above, this argument varies with the node!
- the second and third arguments are handled by GNU Parallel so that, on each node, it generates the same four combinations (i.e. (1,1), (1,2), (2,1), (2,2)), each of which is run by one of the four CPUs allocated per node.
⚠️ Note that if #SBATCH --ntasks-per-node=1 is used without specifying the number of CPUs per task, only one CPU will be allocated per node, making GNU Parallel useless! That is why I added #SBATCH --cpus-per-task=4 to get 4 CPUs instead of 1.
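A small guard worth considering (hypothetical, not part of the original script): read the CPU count that Slurm actually granted instead of hard-coding 4, so the parallel job slots and --cpus-per-task cannot drift apart:

```shell
# SLURM_CPUS_PER_TASK is set by Slurm inside the job when
# --cpus-per-task is given; fall back to 1 outside a job.
njobs="${SLURM_CPUS_PER_TASK:-1}"
echo "$njobs"
```

One would then call parallel -j "$njobs" instead of parallel -j 4.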
So, this setup allowed me to:
- use five different nodes, each with a unique ID ($SLURM_ARRAY_TASK_ID);
- do the same parallelization on each of them (with a different input).
It turned out the only good reason for doing all that was to apply what I had learned a few days earlier 😆😆😆!