I'm using XSIBackup-DC 18.104.22.168, installed on VMware ESXi 6.7.0 Update 3 (Build 14320388).
Here I defined some cron jobs to back up my VMs.
I can see that a job backing up a VM to an NFS partition was still running when a new job, backing up a different VM to the second VMware server, was started.
To me it looks like the new xsibackup job killed the old one, because
- the snapshot on the VM still exists
- no xsibackup job is running now (after the second job finished)
- the log file has no entries that show the end of job 1.
Is it by design that a new xsibackup job kills an old one?
Yes, absolutely. The XSIBackup-DC client can't run multiple jobs at the same time; if you execute a new job while another is still running, the previous one will be killed. Concurrent backups are a terrible idea that can only lead to clogging the hardware.
You can easily serialize your jobs by coalescing them into a file, i.e.:
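A minimal sketch of such a serialized job file. The paths in the comments are illustrative (adjust them to your own installation), and the echo commands are harmless stand-ins for the real job scripts; the point is that commands listed one per line run strictly in order, so the second job cannot start until the first has exited:

```shell
#!/bin/sh
# Sketch of a combined job file: each line runs only after the
# previous one has finished, so the jobs can never overlap.
echo "running job 001"   # stand-in for e.g. /scratch/XSI/XSIBackup-Pro/etc/jobs/001
echo "running job 002"   # stand-in for e.g. /scratch/XSI/XSIBackup-Pro/etc/jobs/002
```

You would then point a single cron entry at this file instead of scheduling the two jobs independently.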
Of course it is terrible when more than one backup job runs at the same time.
But to my understanding, the handling would be improved if the new job waited until the old one finished, or if the new job did not start at all while the old one is still running.
Actually, we found the old job killed in an undefined state.
As with any other Linux binary, you may need to clean up memory by using the ps command and kill -9 on any eventual ghost process.
This should be exceptional of course, as long as you set things up in a proper way.
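An illustration of that cleanup, using a harmless background sleep as a stand-in for a stuck xsibackup process (note that the flags accepted by ps on ESXi's busybox-style shell may differ from standard Linux):

```shell
#!/bin/sh
# Start a stand-in "ghost" process (a 5-minute sleep) in the background.
sleep 300 &
PID=$!

# On a real host you would locate the PID with something like:
#   ps | grep xsibackup
kill -9 "$PID"            # force-terminate the stuck process
wait "$PID" 2>/dev/null   # reap it so it doesn't linger as a zombie
kill -0 "$PID" 2>/dev/null || echo "process $PID cleaned up"
```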
Waiting for a job to finish is the opposite of the approach we took; it would generate more uncertainty, as the lag would be propagated forward.
As long as you just chain jobs in classic Linux style, you will be safe from overlapping jobs:
Example 1: run the jobs one after another, no matter what happens:
/scratch/XSI/XSIBackup-Pro/etc/jobs/001; /scratch/XSI/XSIBackup-Pro/etc/jobs/002

Example 2: the second job runs only in case the first succeeds:
/scratch/XSI/XSIBackup-Pro/etc/jobs/001 && /scratch/XSI/XSIBackup-Pro/etc/jobs/002
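The two chaining styles differ in how they treat the first job's exit status; a quick demonstration with stand-in commands (false plays a failing job, true a successful one):

```shell
#!/bin/sh
# ';' runs the next command regardless of the previous exit status.
false; echo "second job runs regardless"
# '&&' skips the next command when the previous one failed.
false && echo "never printed"
# '&&' proceeds when the previous command succeeded.
true && echo "second job runs on success"
```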