Last updated on Monday 28th of February 2022 08:52:48 PM

How to back up huge ©ESXi virtual machines

Preparing the ©ESXi hypervisor: a checklist.

Download the latest edition of ©XSIBackup-Free here

©XSIBackup has been designed with full unsigned 64-bit number support for sizes and offsets, so the theoretical limits for a VM are way beyond what is plausible today: around 18 exabytes. Still, there are functions that have to deal with metadata, indexes, etc., so the real practical VM size ©XSIBackup is able to handle is more in the order of some tens of terabytes, probably more, as long as you have an adequate server hosting the VM.

Even if you have a massive amount of RAM, a mighty CPU and fast disks, you will still need to pay attention to some other things before being able to successfully back up a 20 TB VM, just to put it into a concrete figure.

One of the weakest points of ©ESXi is the default /tmp dir, which is limited to some 255 MB. That is enough for VMs that are some hundreds of gigabytes in size, yet not enough for multi-terabyte VMs.

As already stated in some previous posts, and as you can directly infer by just taking a look at any of the VM file manifests (the .blocklog file or the .map files for the virtual machine's constituent files), each block's metadata entry takes the SHA-1 hash, plus the semicolon separator, plus typically 7 to 8 bytes for the block size column. We can round that up to approximately 50 bytes per block.

Thus, if you are backing up a 20 TB virtual machine using the default block size of 1 MB, you are going to need some 1 GB just to store the VM block metadata. Clearly the default /tmp folder is going to be insufficient for the task. If you ignore this reality and try to back up such a VM without any previous preparation, you are going to hit some SEGFAULT condition or, at best, a clear message stating that you have run out of space.
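As a quick sanity check, the 1 GB estimate can be reproduced with basic shell arithmetic, using the figures given above (20 TB VM, 1 MB blocks, roughly 50 bytes of metadata per block):

```shell
# Metadata needed to describe a 20 TB VM split into 1 MB blocks,
# at roughly 50 bytes of metadata per block:
blocks=$(( 20 * 1024 * 1024 ))        # 20 TB / 1 MB = 20,971,520 blocks
meta_bytes=$(( blocks * 50 ))         # 1,048,576,000 bytes
echo "$(( meta_bytes / 1024 / 1024 )) MB"   # prints "1000 MB", i.e. ~1 GB
```

Around four times what the default /tmp ramdisk can hold, hence the need for a dedicated temp volume.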

Managing a 20 TB server is not something you can overlook while pretending everything should just work out of the box. ©XSIBackup uses a .conf file which you normally don't edit. In this scenario you will need to edit it, but before doing so, you will need to provide the physical means to store the temporary data.

Any storage device with enough space will do; still, in this scenario you are administering a huge server, so you must dedicate some appropriate means to keep it working right. Using a fast M.2 NVMe device would be a great way to provide the additional temp space required to hold the metadata, indexes and logs, making sure that things run smoothly enough. This disk does not need to be that big; a small 240 GB one would be more than enough.

Editing the xsibackup.conf file

Once you have your new fast disk installed, you just need to add it to the ©ESXi host through the Storage Management menu. It will appear as a new datastore; you could call it /vmfs/volumes/tmp-disk or any other name you like.

Now edit the etc/xsibackup.conf file; you will see something like this:

# These are the default values for some variables. Most of them may also be set
# on the command line as arguments when creating the backup job


# Default block size for deduplicated backups.


# Default state for compression 1 = compression on, 0 = compression off

# Default level of verbosity for the output log 0 - 10.


# You must first verify that ciphers are supported by your OpenSSH build.
# (!) Be aware that OpenSSH will raise an error if just one of the ciphers in
# the list is not available on the remote side; this is annoying, but real.
# Setting ssh_ciphers to 'auto' will work most of the time, but may still
# throw some unknown cipher error. Set a single common cipher to make
# sure your OpenSSH tunnel is established, i.e. ssh_ciphers=aes128-ctr
# ssh_ciphers=aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc


# Less secure ciphers: generally lighter; they might pose a security concern, but
# are faster if you don't need security. Enable them by appending --options=L
# You can set any custom cipher in ssh_ciphers above; the less_secure_ciphers
# option just allows you to organize them into 'secure' vs 'less secure'


[power state]

# When power on/off request is issued, the VM power state is queried every N seconds


# When power on/off request is issued, the VM power state is queried N times
# Thus the power state will be queried for a total of power_query*query_times seconds
# Should the query_times limit be reached, a plain power off will be issued


# There are two TMP directories used by ©XSIBackup-DC: the first one is /tmp by default;
# it must be as fast as it can be, as it stores job temp files which hold information about
# the blocks being backed up or replicated. /tmp is usually RAM-fast by design, and moving
# this dir to a regular HD will reduce performance by 30%. You should never change this
# value unless your VMs are so big that the /tmp dir can't hold the block definitions;
# as a rule of thumb you will need 60 KB per GB stored in your repository

# Primary temp dir


# The secondary temp dir is used mainly to hold logs for prune operations. This will need
# more space than the job temp files. When the regular secondary temp folder at
# /tmp is not big enough to hold the prune data, the alt_tmp_dir variable
# will be used to locate a path where that data can fit. Again, the faster the better;
# always try to mount your temp dirs on volumes residing on fast hardware.

# Secondary temp dir

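To put the rule of thumb quoted in the comments above (some 60 KB of temp data per GB stored in the repository) into numbers, here is a quick sketch for a 20 TB repository:

```shell
# ~60 KB of job temp data per GB stored in the repository (rule of thumb above)
repo_gb=$(( 20 * 1024 ))               # a 20 TB repository = 20,480 GB
echo "$(( repo_gb * 60 / 1024 )) MB"   # prints "1200 MB"
```

That is again far above what the default ~255 MB /tmp ramdisk can hold.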

Although, as of this version, there is a variable called alt_tmp_dir, you should not edit these paths. The secondary tmp dir is by default associated with the installation root (/scratch/XSI/XSIBackup-DC).

(*) Newer versions will allow you to edit the secondary temp dir directly; this will be announced in the change log and in the xsibackup.conf file.

Apart from that, the /scratch partition, where ©XSIBackup is installed by default, has a default size of 4 GB, which is generally enough even for big VMs.

It is the pri_tmp_dir variable that we must focus our attention on first. As we have used the name /vmfs/volumes/tmp-disk for our new fast datastore, we will just use some folder inside this datastore and change the pri_tmp_dir variable to point to it:


We should create the folder first to make sure that it is available.
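A minimal sketch of the change, assuming the datastore name used above; the folder name "tmp" is just an illustrative choice, any folder on the new datastore will do:

```
# Create the folder on the new datastore first ("tmp" is an assumed name):
#   mkdir -p /vmfs/volumes/tmp-disk/tmp
#
# Then, in etc/xsibackup.conf, point the primary temp dir at it:
pri_tmp_dir=/vmfs/volumes/tmp-disk/tmp
```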

Now we have a big primary tmp dir which is at the same time fast enough to handle temporary files easily.