Last updated on Monday 28th of February 2022 08:52:48 PM

©XSIBackup backup size considerations.

Why size matters and what to take into account

 Please note that this post relates to the old, deprecated ©XSIBackup-Classic software. Some of the facts herein may still apply to more recent versions, though.

For new installations, please use the new ©XSIBackup, which is far more advanced than ©XSIBackup-Classic.

We will treat this matter in an epistolary fashion, so I will reuse the text I wrote to a ©XSIBackup user who contacted us by e-mail.


I have a question about the backup script. If you see below, we receive the emails and everything is backed up as expected, so thank you for that.

However, as noted below, we have 674 GB before the backup and the backup needs 161 GB, which makes me think the result would be 513 GB remaining; but the amount remaining is 631 GB, which indicates that thin provisioning is being retained and only the actual data on the disk is being copied.

My question is: when will the system start deleting backups? When there is less than 161 GB available on the disk, or when there is less than ~43 GB (actual size)?


The needed backup room is calculated by summing up all the .vmdk file sizes in MB, obtained by issuing a du -hms command. That should return the real space used on the datastore (more or less). The available backup size is calculated from the info returned by the df -m command, which should be the real empty space available, in MB, on the backup disk.
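As a rough illustration, you can obtain both figures by hand from the ESXi shell; the paths below are assumptions, so adjust them to your own datastores (the -m flag is what yields MB figures):

```shell
VM_DIR="/vmfs/volumes/datastore1/MyVM"   # assumed VM folder
BACKUP_DIR="/vmfs/volumes/backup"        # assumed backup datastore

# Needed room: sum of all .vmdk sizes in MB, as du reports them.
NEEDED=$(du -ms "$VM_DIR"/*.vmdk | awk '{sum+=$1} END {print sum}')

# Available room: free MB on the backup device, from df.
AVAIL=$(df -m "$BACKUP_DIR" | awk 'NR==2 {print $4}')

echo "needed=${NEEDED} MB, available=${AVAIL} MB"
```

The du figure reflects allocated space, so for thin disks it will be smaller than the logical disk size.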


If you are backing up to an NFS device that is not VMFS formatted, or over the network with rsync, then you will need the full logical size of all the disks in your VMs to be available on your target device, and the automatic room provisioning feature will not work as you expect. The block size used in the e-mail report is GB, so if your VM sizes are small you can see severe deviations in the size info. On the other hand, .vmdk files are copied using vmkfstools, present on the ESXi box; it is a very efficient tool that not only thins the disk on the destination device, but also defragments it and punches out its zeroes. As a result, the thinned disk size can be smaller than what the du -hms command reports for the same file. On top of that, ©XSIBackup uses a simple algorithm recommended by VMware ($VMSIZE * 1.2 + 12288) to make sure we provision more space than we actually need.


©XSIBackup will sum up all disk sizes (minus excluded disks) for a given VM and compare that figure, just before backing it up, against the available space on the target device. If the available room is smaller than the needed room ($VMSIZE * 1.2 + 12288), it will delete a folder named with the YYYYMMDDhhmmss mask; this ensures that only ©XSIBackup-generated folders will be removed (along with any other folder named with that mask). After deleting that first folder, ©XSIBackup will compare sizes again to see whether the current VM fits; if not, it will keep deleting folders until the comparison succeeds. That is why you will see deletion notices in between the report lines when you reach your limit.
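The rotation described above can be sketched roughly like this. This is a simplified illustration, not ©XSIBackup's actual code; the paths and $VMSIZE value are assumptions, and it presumes the oldest timestamped folder goes first, as is usual in a rotation scheme:

```shell
BACKUP_DIR="/vmfs/volumes/backup"   # assumed backup datastore
VMSIZE=40960                        # assumed summed disk size of the VM, in MB

# Provision more room than strictly needed: $VMSIZE * 1.2 + 12288 (MB).
NEEDED=$(( VMSIZE * 12 / 10 + 12288 ))

avail_mb() { df -m "$BACKUP_DIR" | awk 'NR==2 {print $4}'; }

# Delete YYYYMMDDhhmmss-named folders, oldest first, until the VM fits.
while [ "$(avail_mb)" -lt "$NEEDED" ]; do
    OLDEST=$(ls -d "$BACKUP_DIR"/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] 2>/dev/null | sort | head -n 1)
    [ -z "$OLDEST" ] && break   # nothing left to rotate
    rm -rf "$OLDEST"
done
```

With a 40 GB VM the formula provisions 40960 * 1.2 + 12288 = 61440 MB, i.e. 60 GB, which is the figure the deletion loop compares against.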

Taking all this into account, the answer is:

161 GB (more or less)

I know all of this is hard to take in at first glance, and the program logic could probably be improved in this regard. In any case, the most important thing is that ©XSIBackup was designed by somebody who wanted to make sure all his VMs were backed up, so the philosophy behind the logic is:

"I would rather leave some unused space in my backup disk than miss a file"

REMEMBER: a great command that you can use in your ESXi SSH environment is, for instance:

ls -lsh "/vmfs/volumes/datastore1/New Virtual Machine"

It will give you an output like this:

17408 -rw------- 1 root root 16.3M Mar 1 18:45 New Virtual Machine-000001-delta.vmdk
0 -rw------- 1 root root 342 Mar 1 18:42 New Virtual Machine-000001.vmdk
1024 -rw------- 1 root root 27.6K Mar 1 18:07 New Virtual Machine-Snapshot68.vmsn
0 -rw-r--r-- 1 root root 13 Mar 1 18:07 New Virtual Machine-aux.xml
2097152 -rw------- 1 root root 2.0G Feb 16 09:07 New Virtual Machine-c0f4bbb7.vswp
69131264 -rw------- 1 root root 140.0G Mar 1 17:45 New Virtual Machine-flat.vmdk
1024 -rw------- 1 root root 8.5K Mar 1 18:08 New Virtual Machine.nvram
0 -rw------- 1 root root 508 Mar 1 18:07 New Virtual Machine.vmdk
0 -rw-r--r-- 1 root root 425 Mar 1 18:07 New Virtual Machine.vmsd
8 -rwxr-xr-x 1 root root 2.7K Mar 1 18:07 New Virtual Machine.vmx
0 -rw------- 1 root root 0 Feb 16 09:07 New Virtual Machine.vmx.lck
0 -rw-r--r-- 1 root root 274 Nov 9 11:58 New Virtual Machine.vmxf
8 -rwxr-xr-x 1 root root 2.7K Mar 1 18:07 New Virtual Machine.vmx~
1024 -rw-r--r-- 1 root root 733.2K Nov 14 18:19 vmware-1.log
1024 -rw-r--r-- 1 root root 363.7K Nov 22 21:33 vmware-2.log
1024 -rw-r--r-- 1 root root 173.2K Nov 30 17:57 vmware-3.log
1024 -rw-r--r-- 1 root root 174.4K Dec 15 14:53 vmware-4.log
1024 -rw-r--r-- 1 root root 191.7K Mar 1 18:43 vmware.log
94208 -rw------- 1 root root 92.0M Feb 16 09:07 vmx-New Virtual Machine-3237264311-1.vswp

Here the first column tells you the number of blocks allocated on the disk, while the sixth column shows the size of the file in the file system. Obviously these two amounts do not necessarily match, especially when dealing with sparse files (thin disks).
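You can reproduce this mismatch with a sparse file of your own. The sketch below works in any Linux or ESXi shell; the file path is an assumption:

```shell
# Create a 100 MB sparse file: logical size 100 MB, but since we never
# write any data, (almost) no blocks are actually allocated.
dd if=/dev/zero of=/tmp/sparse.img bs=1 count=0 seek=100M 2>/dev/null

# First column (allocated blocks) near 0, sixth column (size) 100M.
ls -lsh /tmp/sparse.img
```

This is exactly the situation of a thin .vmdk: the flat file advertises its full logical size while occupying only the blocks that hold real data.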

Another useful command is "stat"; if you run:

stat ../"New Virtual Machine/New Virtual Machine-flat.vmdk"

File: ../New Virtual Machine/New Virtual Machine-flat.vmdk
Size: 150329185280 Blocks: 293613568 IO Block: 131072 regular file
Device: d5303e7602f1c866h/15361847005537224806d Inode: 637548676 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2015-12-19 18:09:37.000000000
Modify: 2015-12-19 18:09:37.000000000
Change: 2015-11-03 15:31:32.000000000

As you can see, the second line shows the number of blocks used by the file. Remember as well that the files that hold the actual data are those ending in -flat.vmdk. stat reports blocks using a block size of 512 bytes, while the -s option of the ls command reports them in 1 kilobyte blocks. So the block count in the stat output will always be double the one in the ls -lsh command output.
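You can check the unit conversion with the numbers from the stat output above:

```shell
# From the stat output: Blocks: 293613568, each block being 512 bytes.
STAT_BLOCKS=293613568

# Allocated bytes: matches the ~140 GB logical size, so this
# particular flat.vmdk is fully allocated (not thin).
echo $(( STAT_BLOCKS * 512 ))    # 150330146816

# The same allocation in ls -s units (1 KB blocks) is half the count.
echo $(( STAT_BLOCKS / 2 ))      # 146806784
```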

Daniel J. García Fidalgo