Forum ©XSIBackup: ©VMWare ©ESXi Backup Software

#2 General matters » Needed Room Discrepancy? » 2018-10-02 13:02:48

russrace
Replies: 3

I just upgraded to Pro 11.0.3 and I am doing some testing. My first backup to a new datastore was successful. I received an email stating that 12 VMs were backed up with "No errors detected". The email says the Sparse size on disk is 2132 Gb and that 1020 Gb of room is needed, but my datastore shows only 384 Gb of space used after the backup completed.

Is the needed room figure calculated before de-duplication? I would expect to save very little space on the first backup unless the VMs were all identical, and they are not. Am I missing something?
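
For reference, this is roughly how I am checking the space used on the backup datastore after the job completes; the datastore name below is just a placeholder, not my real one:

# both commands run in the ESXi shell; "backup-datastore" stands in for the real datastore name
df -h /vmfs/volumes/backup-datastore
du -sh /vmfs/volumes/backup-datastore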

#3 Re: General matters » xsibackup and plink.exe » 2018-01-21 23:50:53

I think I know what nikesg is talking about, as I am having the same issue. I have already sent an email to support and have not received a reply yet.
My issue is the same whether I use plink or another Linux server's crontab. Either way, I can ssh to the host, call backup_job.sh manually, and it works fine. If I try to use cron or schedule it with plink, the job will not run. It looks as if, when the ssh session closes, it kills the sh process that was going to do the backup. See below for details.

Here is the command run manually from the command line. This command works.

ssh -i /mykey_id_rsa -t -p22 root@10.1.0.66 "ash /vmfs/volumes/54c962e9-e1b7442c-0af2-2c0356f9fa57/xsi-dir/remote_job.sh"

Here is the crontab entry. It will not work. I change the 43 and 13 to the current time when testing. I can see that it connects to the host, but the job does not run.

43 13 * * 7 ssh -i /mykey_id_rsa -t -p22 root@10.1.0.66 "ash /vmfs/volumes/54c962e9-e1b7442c-0af2-2c0356f9fa57/xsi-dir/remote_job.sh"

Here is the ssh login log when it fails.

2018-01-21T18:43:00Z sshd[11396898]: /etc/ssh/sshd_config line 7: Deprecated option UsePrivilegeSeparation
2018-01-21T18:43:00Z sshd[11396898]: /etc/ssh/sshd_config line 15: Unsupported option PrintLastLog
2018-01-21T18:43:00Z sshd[11396898]: Connection from 10.1.0.130 port 41762
2018-01-21T18:43:01Z sshd[11396898]: Accepted publickey for root from 10.1.0.130 port 41762 ssh2: RSA SHA256:uGE4okWP/UWfSvaMObOMhNHCuXNCVHqWLmULIPyqpW8
2018-01-21T18:43:01Z sshd[11396898]: pam_unix(sshd:session): session opened for user root by (uid=0)
2018-01-21T18:43:01Z sshd[11396898]: User 'root' running command 'ash /vmfs/volumes/54c962e9-e1b7442c-0af2-2c0356f9fa57/xsi-dir/remote_job.sh'
2018-01-21T18:43:01Z sshd[11396902]: Session opened for 'root' on /dev/char/pty/t2
2018-01-21T18:44:31Z sshd[11396898]: Session closed for 'root' on /dev/char/pty/t2
2018-01-21T18:44:31Z sshd[11396898]: Received disconnect from 10.1.0.130 port 41762:11: disconnected by user
2018-01-21T18:44:31Z sshd[11396898]: Disconnected from user root 10.1.0.130 port 41762
2018-01-21T18:44:31Z sshd[11396898]: pam_unix(sshd:session): session closed for user root


Here are the processes when the job is run remotely from the command line. If I run it remotely from a command line it works; if I run it remotely from a cron job it does not.

[root@host-2:~] pgrep -fl remote_job.sh
11404925 ash /vmfs/volumes/54c962e9-e1b7442c-0af2-2c0356f9fa57/xsi-dir/remote_job.sh
11404924 sh -c /vmfs/volumes/54c962e9-e1b7442c-0af2-2c0356f9fa57/xsi-dir/remote_job.sh
[root@host-2:~]

My thought is that it has something to do with the ssh session sending a signal that kills the process when the session ends. I tried to use nohup without success. I see that in ESXi the root crontab has a 2>&1 at the end to redirect the output, but I can't seem to get it to work remotely.
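
For what it's worth, this is the kind of detached invocation I have been experimenting with from the remote crontab. The -T flag, the nohup wrapper, the log file path and the trailing & are only my attempt at keeping the remote script alive after the session drops, not anything documented by XSIBackup:

# illustrative crontab line: -T skips the pty; nohup, the redirection and the trailing & try to detach the remote script
43 13 * * 7 ssh -i /mykey_id_rsa -T -p22 root@10.1.0.66 "nohup ash /vmfs/volumes/54c962e9-e1b7442c-0af2-2c0356f9fa57/xsi-dir/remote_job.sh > /vmfs/volumes/54c962e9-e1b7442c-0af2-2c0356f9fa57/xsi-dir/remote_job.log 2>&1 &"

The idea is that the script's output goes to a file on the datastore and the & returns control to ssh right away, so closing the session should not matter, but as I said the nohup approach has not worked for me so far.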

If I get this to work the way I need it to, we will be looking for an enterprise license, as we have 6 sites with 3-40 hosts per site. I need to be able to manage the backups from a central location without being limited to one job running at a time, as there would not be enough time in our backup window to back up all of our VMs.
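
To be concrete about the kind of central scheduling I mean, something like the following on a central Linux box is what I have in mind; the second host IP, its datastore path and the times are purely illustrative:

# illustrative central crontab: cron launches both lines at the same minute, so the hosts run their jobs in parallel
0 1 * * 7 ssh -i /mykey_id_rsa -T -p22 root@10.1.0.66 "ash /vmfs/volumes/54c962e9-e1b7442c-0af2-2c0356f9fa57/xsi-dir/remote_job.sh"
0 1 * * 7 ssh -i /mykey_id_rsa -T -p22 root@10.1.0.67 "ash /vmfs/volumes/<datastore-uuid>/xsi-dir/remote_job.sh"

Right now each of these lines only works when I start it interactively, which brings me back to the cron problem described above.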
