*) Need to evaluate lots of day-long computations IN PARALLEL on different data sets (e.g. prediction features on the EEG of different epileptic patients)?

*) Have basic Linux scripting knowledge?

*) Want to have a script-based “three-click Apple interface” which distributes your ever-evolving matlab code and quickly changing control scripts, developed on your Mac (BUT NOT THE STATIC HUGE RAW DATA), to your homes at various computer clusters and runs the computation job there with a single

screen -c .screenrc.clustername

command?

*) The ssh connection from your Mac to the clusters cannot be kept open all the time?

Yes? Read on…

1) Create the UNIX script set ~/bin/clusters_* (source code see below). Run it (invoke ~/bin/clusters_1*, ~/bin/clusters_2*, ~/bin/clusters_3* by clicking in Finder.app) whenever you plan to start a stable version of your algorithm on all the clusters.

1.1) ~/bin/clusters_1rsync.command runs first, because it takes the longest. It rsyncs your ~/matlab directory (your algorithms which have to be run in parallel on each patient) and possibly some patient-specific meta-data (together only some MB) to each cluster home. We do not touch the raw patient data here at all!

1.2) ~/bin/clusters_2edit.command opens all necessary additional control scripts with your favorite editor on your Mac. You need to edit them locally (only once or twice per job-constellation;).

Job-constellation: which patient on which cluster on which computer!

1.3) ~/bin/clusters_3copy.command copies the control files for starting your specified job-constellations to each of your cluster homes. This set of control files contains one matlab header script ~/matlab/patientname.m per patient, a shell script ~/bin/clustername.patientname per patient specifying the target computer, and one ~/.screenrc.clustername config file per cluster for the wonderful screen command (man screen, google screen).

2) After ~/bin/clusters_1rsync.command has finished, you have edited all config scripts (all files opened by ~/bin/clusters_2edit.command are saved & closed by you;), and ~/bin/clusters_3copy.command has been executed, you can start the sessions with one single

screen -c .screenrc.clustername

command on each cluster in a manually opened ssh session. This creates a screen session with one shell per patient = matlab instance (as specified in ~/.screenrc.clustername). In each screen window you invoke ~/bin/clustername.patientname, which executes the matlab script ~/matlab/patientname.m. That script sets some parameters (patientname, paths, etc.) and calls ~/matlab/your_master_script.m. Goodnight.
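
For example (user, cluster, and patient names are placeholders matching the templates below; the Ctrl-a keys are screen's defaults):

#———–
ssh user@clustername1 # manually opened ssh session
screen -c .screenrc.clustername1 # opens one window per patient
# in each window (switch with Ctrl-a n / Ctrl-a p):
~/bin/clustername1.patientname1
# detach with Ctrl-a d; the matlab jobs keep running
#———–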

3) You can quit the whole Terminal.app; the screen command keeps the shells you started (one per computer and patient) running. You can get the display back at any time, anywhere, in any ssh session (screen -ls).
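
A minimal reattach recipe (standard screen commands, nothing specific to this setup):

#———–
screen -ls # list the screen sessions running on this machine
screen -r # reattach; use screen -r sessionname if there are several
screen -d -r # detach the session elsewhere first, then reattach here
#———–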

3.1) Use Stickies.app to keep track of your ongoing computations! You can insert an Excel table there;)

4) PS: You need to take care of the following yourself (on the underlying BSD and in your ~/matlab universe):

4.-1) Check for proper execution rights:
chmod u+x ~/bin/clusters_*
chmod u+x ~/bin/clustername*.patientname* # one file for each patient

4.0) Set up a passwd-free ssh connection (man ssh-keygen) to each single cluster computer.
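
A sketch of the usual key setup (user and clustername1 are placeholders; repeat once per cluster computer):

#———–
ssh-keygen -t rsa # accept the defaults, empty passphrase
# append your public key to the cluster's authorized_keys:
cat ~/.ssh/id_rsa.pub | ssh user@clustername1 'cat >> ~/.ssh/authorized_keys'
#———–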

4.1) Place your huge static raw data onto each cluster home.
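
A one-time transfer might look like this (~/rawdata is a hypothetical path; adjust it to your layout):

#———–
# the raw data goes over once, not with every code update
rsync -r --progress ~/rawdata user@clustername1:
#———–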

4.2) Make sure you have enough matlab and toolbox licenses. That's the most annoying thing about this commercial product:(. Therefore, actually, I designed these scripts for several clusters…

4.3) Store your computation results (down in ~/matlab/your_master_script.m) on the cluster homes and retrieve them onto your Mac (man scp).
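
For example (the results directory is an assumption; use whatever path your_master_script.m writes to):

#———–
# pull the results from each cluster home back onto the Mac
scp -r user@clustername1:matlab/results ~/results_clustername1/
#———–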

5) The scripts

5.1) ~/bin/clusters_1rsync.command
#———–
echo "+++ rsync ~/matlab and ~/meta-data to clustername1 (from Mac)"
nice -n??? rsync -r ~/meta-data user@clustername1:
nice -n??? rsync -r ~/matlab user@clustername1:

echo "+++ rsync ~/matlab and ~/meta-data to clustername2 (from clustername1, it's usually faster)"
nice -n??? ssh user@clustername1 'rsync -r meta-data user@clustername2:'
nice -n??? ssh user@clustername1 'rsync -r matlab user@clustername2:'

#…etc.
#———–
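
If the list of clusters grows, the same job is arguably easier to maintain as a loop (a sketch; cluster names are placeholders):

#———–
#!/bin/sh
# sketch: identical rsync for every cluster, driven by a list
for c in clustername1 clustername2; do
    echo "+++ rsync ~/matlab and ~/meta-data to $c"
    rsync -r ~/meta-data ~/matlab "user@$c:"
done
#———–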

5.2) ~/bin/clusters_2edit.command
#———–
open -a /Applications/TextEdit.app ~/bin/clusters_2edit.command
open -a /Applications/TextEdit.app ~/bin/clusters_1rsync.command
open -a /Applications/TextEdit.app ~/bin/clusters_3copy.command

open -a /Applications/TextEdit.app ~/bin/clustername?.patientname1
open -a /Applications/TextEdit.app ~/bin/clustername?.patientname2
#…etc.

open -a /Applications/TextEdit.app ~/matlab/patientname1.m
open -a /Applications/TextEdit.app ~/matlab/patientname2.m
#…etc.

open -a /Applications/TextEdit.app ~/.screenrc.clustername
#———–

5.3) ~/bin/clusters_3copy.command
#———–
echo "+++ copy config files from ~/, ~/bin, ~/matlab to clustername1 (from Mac)"
nice -n??? scp ~/.screen* user@clustername1:
nice -n??? scp ~/matlab/patientname*.m user@clustername1:matlab/

nice -n??? scp ~/bin/clustername*.patientname* user@clustername1:bin/
#…etc.

echo "+++ copy config files from ~/, ~/bin, ~/matlab to clustername2 (from clustername1, it's usually faster)"
nice -n5 ssh user@clustername1 'scp .screen* user@clustername2:'
nice -n5 ssh user@clustername1 'scp matlab/patientname*.m user@clustername2:matlab/'

nice -n5 ssh user@clustername1 'scp bin/clustername*.patientname* user@clustername2:bin/'
#…etc.
#———–

EDIT THE FOLLOWING TEMPLATES ACCORDING TO YOUR NAMING OF CLUSTERS, COMPUTERS, PATIENTS!

5.4) ~/matlab/patientname*.m (ONE FILE FOR EACH PATIENT)
#———–
clear % don't forget the clear; good for multiple runs…
% set your matlab path here (e.g. with addpath)
% set your top-level parameters here (patientname, data paths, etc.)
your_master_script % Goodnight;)
#———–

5.5) ~/bin/clustername*.patientname* (ONE FILE FOR EACH PATIENT)
#———–
echo "+++ start patientname.m"
/usr/bin/nice -n??? /usr/local/bin/matlab -r patientname # -r takes a matlab command, so no .m extension
#———–

5.6) ~/.screenrc.clustername* (ONE FILE FOR EACH CLUSTER — ONE LINE FOR EACH PATIENT IN THE RESPECTIVE CLUSTER)
#———–
screen -t sessionname ssh computer_for_this_patient_in_this_cluster
#…etc.
#———–
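
For concreteness, a hypothetical ~/.screenrc.clustername1 with two patients (window titles and computer names are placeholders):

#———–
screen -t patientname1 ssh computer01
screen -t patientname2 ssh computer02
#———–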

This is the last of the config files. You need to run screen on each cluster with this file as the -c argument to start the windows for all computers in this cluster (see the first command above;). Then, in each screen window, you execute ~/bin/clustername.patientname. Goodnight.