Administration tips and tricks

From HPC Wiki
Revision as of 17:13, 1 October 2019


General tips & tricks in administering HPC clusters

Mutual dependencies of services

Problem

After reboot or power cycle/failure, the local compute nodes' scheduler daemon is started too early: the global filesystem is not ready yet and the first job fails on those nodes.


When the local scheduler daemon starts, it most likely will report the node as "ready to receive jobs". If the mounts of remote filesystems have been initiated but are not finished yet, the first job(s) will fail due to missing directories and files.

You could now write node-local checker scripts that try to read or write on the mount points, with all bells and whistles such as timeout ... touch /mount/point/tmp/$(uname -n).checker. Or you could write fine-grained systemd dependencies (with PathExists= or DirectoryNotEmpty=).
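A minimal sketch of such a checker script follows; the mount point, probe file name, and the 10-second deadline are illustrative assumptions, not part of any particular site's setup:

```shell
#!/bin/sh
# Hypothetical node-local mount checker (paths are illustrative).
# Exits 0 and prints "ready" if a probe file can be created on the
# mount point within the deadline, non-zero otherwise.
MOUNTPOINT="${1:-/tmp}"                    # mount point to probe
PROBE="$MOUNTPOINT/$(uname -n).checker"    # per-node probe file
if timeout 10 touch "$PROBE" 2>/dev/null; then
    rm -f "$PROBE"                         # clean up the probe file
    echo "ready"
else
    echo "not ready" >&2
    exit 1
fi
```

Such a script could be wired in as an ExecStartPre= step of the scheduler daemon's unit, but, as noted below, it is only as good as the deadline you pick.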

All of these will inevitably fail if the shared filesystem takes longer than expected to become operational.
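For reference, the systemd-dependency variant mentioned above might look like the following path unit (file name and mount point are hypothetical); it shares the same fundamental limitation:

```ini
# Hypothetical path unit, e.g. /etc/systemd/system/wait-for-sharedfs.path
# Starts slurmd.service once the (illustrative) mount point is populated.
[Unit]
Description=Wait for shared filesystem before starting slurmd

[Path]
DirectoryNotEmpty=/mount/point
Unit=slurmd.service

[Install]
WantedBy=multi-user.target
```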

Suggestion

Try to "turn the tables": check whether your shared filesystem supports any kind of "now I am really ready and operational" callback or signal. Then have your shared filesystem start up your local scheduler daemon once everything is ready.

In the case of GPFS, you can define "user callbacks" which are triggered locally on each node at certain events. Create such a callback (using systemd and slurmd as an example):

 mmaddcallback YourNameOfCB --command /bin/systemctl --parms "start slurmd" --event startup -N all,my,compute,node,classes

The startup event corresponds to GPFS's "full readiness" state. The callback will thus be invoked on each node only after it has completed joining the cluster and mounting its filesystems.
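Assuming the callback was registered under the name used above, it can be inspected (and later removed) with GPFS's callback management commands; run these on a node with administrative access to the cluster:

```shell
# List registered callbacks to verify the registration
# (output format depends on your GPFS version):
mmlscallback YourNameOfCB

# Remove the callback again if needed:
# mmdelcallback YourNameOfCB
```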

On your nodes, simply disable the systemd unit of your local scheduler's daemon:

 systemctl disable slurmd

and on the next reboot watch GPFS come up in an orderly fashion, followed by slurmd.
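To confirm the new ordering after that reboot, something along these lines can help (unit names are examples and depend on your setup):

```shell
# slurmd should no longer be enabled for boot-time start:
systemctl is-enabled slurmd     # expected to report "disabled"

# Check when slurmd was started in the current boot; its start time
# should fall after GPFS reported full readiness:
journalctl -b -u slurmd
```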