cronhandler -a { save | load | start } -s save_dir -u { userlist | homedir }
cronhandler -a stop -s save_dir
cronhandler takes care of the users' crontab entries and even transfers defined at jobs to the other cluster node. Furthermore, all cron and at control files ( cron.allow, cron.deny, at.allow, at.deny ) are maintained properly. This ensures that a user related to a package can only define crontab and at entries after logging on to the node where the package is up and running (i.e. where the crontab entries and at jobs are loaded).
The basic concept of cronhandler is to save all crontab and at entries to a set of files located on a filesystem that belongs to an MC/ServiceGuard package.
After the package switch, the saved files are loaded and applied to the system where the package is starting. crontab entries are loaded exactly as they were defined on the source node. If the execution time of an at job falls into the short period between the package stop and the restart, the job is rescheduled on the target node (current time + 60 seconds contingency time). If a job is already planned on the target node at the same time a job from the source node is due to run, the job from the source node is rescheduled one second after the already existing job.
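The rescheduling rule can be sketched as follows. The function name and its argument layout are illustrative assumptions for this document, not cronhandler's actual internals:

```shell
#!/bin/sh
# Sketch of the at-job rescheduling rule (illustrative only, not
# cronhandler's real code): a job missed during the switchover is moved
# to now + 60 seconds; if that slot is already booked on the target
# node, it is pushed one second past the existing job.
reschedule() {
    now=$1; shift                 # current epoch seconds on the target node
    new=$((now + 60))             # 60 seconds contingency time
    for booked in "$@"; do        # epoch seconds of jobs already planned
        [ "$new" -eq "$booked" ] && new=$((booked + 1))
    done
    echo "$new"
}

reschedule 1189778400 1189778460  # slot taken -> prints 1189778461
```

A single pass is enough here because at job times are whole seconds and only one collision per slot is expected in this simple sketch.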
Failover mechanism:
  - Preparation
  - Failover situation
( ) = step will only happen if the cluster node can be shut down properly.
After a save or load the cron daemon is restarted to ensure that all possibly changed crontab definitions and at jobs are recognized again.
In this example the users sys_asys and ora_asys are related to an MC/ServiceGuard (see also http://www.hp.com ) package. Both users are allowed to create at jobs, but only the user ora_asys is allowed to have a crontab of its own. The filesystem /data_asys1 moves with the package (= moving disks) and is therefore used to carry the cron entries and at jobs from one node to the other.
1.1) /etc/passwd (on all nodes):
:  :
ora_asys:adSKflweIRsdf:253:101::/data_asys1/home/ora_asys:/bin/ksh
sys_asys:Wm9MyTyKtRI2c:106:104::/data_asys1/home/sys_asys:/sbin/sh
:  :
1.2) /usr/lib/cron/cron.allow (on active node):
:
ora_asys
:
1.3) /usr/lib/cron/at.allow (on active node):
:
ora_asys
sys_asys
:
1.4) cd /data_asys1/home; ls -ald (on active node):
drwxr-xr-x   6 ora_asys  oinstall  ...  ora_asys/
drwxrwxr-x  10 sys_asys  autosys   ...  sys_asys/
HINT: /data_asys1 goes with the asys package.
1.5) /etc/cmcluster/asys_sv1_prod/asys_sv1_prod.cntl (on all nodes):
:  :
function customer_defined_run_cmds
{
    # ADD customer defined run commands.
    :  :
    /opt/edrc/bin/cronhandler -a load \
        -s /data_asys1/pkg_cron -u /data_asys1/home
    test_return 51
}
:  :
function customer_defined_halt_cmds
{
    # ADD customer defined halt commands.
    /opt/edrc/bin/cronhandler -a save \
        -s /data_asys1/pkg_cron -u /data_asys1/home
    :  :
    test_return 52
}
:  :
1.6) Initial steps to initiate the daemon to work (on active node):
Save the current crontab definitions and at jobs to disk:
/opt/edrc/bin/cronhandler -a save -s /data_asys1/pkg_cron -u /data_asys1/home
cronhandler - handle cron and at entries in a cluster environment, by Chr. Walther
stop cronhandler daemon for '/data_asys1/pkg_cron' ...(not running)... done.
save crontab and at entries ...
    user: asys_sv1 ...
        clear old cron/at saves ... done.
        disable cron execution ... done.
        save at jobs ... done.
        disable at execution ... done.
    done.
    user: ora_sys ...
        clear old cron/at saves ... done.
        save cron entries ... done.
        remove cron entries ... done.
        disable cron execution ... done.
        save at jobs ...
            save at job 1189778400.a ... done.
            remove at job 1189778400.a ... done.
        done.
        disable at execution ... done.
    done.
    user: sys_asys ...
        clear old cron/at saves ... done.
        disable cron execution ... done.
        save at jobs ...
            save at job 1189778500.a ... done.
            remove at job 1189778500.a ... done.
            save at job 1189779100.a ... done.
            remove at job 1189779100.a ... done.
        done.
        disable at execution ... done.
    done.
    clean up remaining saves (corpses) ... done.
    restart cron ...(wait 5 seconds)... done.
done.
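The "save at jobs ... remove at job ..." steps of the transcript can be approximated by the sketch below. The helper name, the parameterized spool path, and the per-user save layout are assumptions for illustration (on HP-UX the at spool is typically /var/spool/cron/atjobs); cronhandler's real implementation may differ:

```shell
#!/bin/sh
# Illustrative sketch of the "-a save" pass for at jobs: move each job
# file from the at spool to the package filesystem, so the entries
# travel with the moving disks and disappear from the source node.
save_at_jobs() {
    user=$1 spool=$2 save_dir=$3
    mkdir -p "$save_dir/$user" || return 1
    for job in "$spool"/*.a; do
        [ -f "$job" ] || continue       # glob did not match: no jobs to save
        mv "$job" "$save_dir/$user/"    # "save ... remove ..." in one step
    done
}
```

Because the files are moved rather than copied, a job can never run on both nodes at once.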
Immediately load the saved crontab definitions and at jobs from disk back into the system:
/opt/edrc/bin/cronhandler -a load -s /data_asys1/pkg_cron -u /data_asys1/home
cronhandler - handle cron and at entries in a cluster environment, by Chr. Walther
load crontab and at entries ...
    asys_sv1 ...
        cron usage not authorized
        at usage not authorized
    done.
    ora_sys ...
        enable cron execution ... done.
        load cron entries ... done.
        enable at execution ... done.
        load at jobs ...
            load at job 1189778400.a ... done.
        done.
    done.
    sys_asys ...
        cron usage not authorized
        enable at execution ... done.
        load at jobs ...
            load at job 1189778500.a ... done.
            load at job 1189779100.a ... done.
        done.
    done.
    restart cron ...(wait 5 seconds)... done.
done.
start cronhandler daemon for '/data_asys1/pkg_cron' ...(PID=18790)... done.
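The "enable cron execution ... load cron entries" steps can be sketched as follows. The helper name, the parameterized cron.allow path, and the saved-file naming are assumptions for illustration only:

```shell
#!/bin/sh
# Illustrative sketch of the "-a load" pass for crontab entries:
# re-authorize the user in cron.allow on this node, then hand the saved
# definitions back to crontab(1). Not cronhandler's real internals.
load_cron_entries() {
    user=$1 save_dir=$2 allow=$3
    # enable cron execution: add the user to cron.allow exactly once
    grep -qx "$user" "$allow" 2>/dev/null || echo "$user" >> "$allow"
    # load cron entries from the assumed per-user save file, if present
    if [ -f "$save_dir/$user.cron" ]; then
        crontab "$save_dir/$user.cron"
    fi
}
```

Re-adding the user to cron.allow first matters: crontab(1) refuses the load as long as the user is still locked out from the save pass.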
From now on the cronhandler daemon writes crontab definitions and defined at jobs to disk at a regular interval, and no manual intervention is needed. The correct stop and start is handled via the MC/ServiceGuard control script ( /etc/cmcluster/asys_sv1_prod/asys_sv1_prod.cntl ).
In an earlier version of cronhandler, a root crontab entry was needed to write the crontab definitions and at jobs to disk; this is no longer the case. Any still existing /opt/edrc/bin/cronhandler -a write ... calls are ignored and do not influence cronhandler. To avoid needless logfile entries, these old crontab entries should be removed.
Use cron.allow and at.allow to control crontab and at job definition.
This is free software; see edrc/doc/COPYING for copying conditions. There is ABSOLUTELY NO WARRANTY; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.