Last updated on Monday 8th of May 2023 03:29:24 PM

XSIBackup-DC Manual

Read the "Ten Commandments Of ©ESXi backup" to have an overview of the most critical things to keep in mind at all time.


Just use the installer in the package. There is a YouTube video available.


©XSIBackup-Datacenter, AKA ©XSIBackup-DC, is a software solution developed in plain C, with GLibC as its main dependency. It follows the same basic design principles as previous ©XSIBackup releases, namely: an efficient backup and replication tool that can be used directly in any ESXi or Linux host.

©XSIBackup-DC can replicate & back up Virtual Machines hosted on ESXi servers, just like ©XSIBackup-Free and ©XSIBackup-Pro do, and can also back up & replicate file structures on Linux servers, either as replicas or as backups to deduplicated repositories.

Both operations may be performed to a locally accessible file system (any accessible disk or share: NFS, iSCSI, SMB (©XSIBackup-NAS)) or over IP. The nomenclature employed for IP transfers is: user@[FQDN or IP]:Port:/path/in/remote/system

In the future we will extend ©XSIBackup-DC functionality to operate on different virtualization platforms like XEN or KVM.
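As an illustration of that nomenclature, the small Python sketch below (not part of ©XSIBackup-DC; the host name is made up) splits such a target specification into its components:

```python
def parse_remote_target(spec):
    """Split a remote target of the form user@[FQDN or IP]:Port:/path."""
    user, _, rest = spec.partition("@")
    host, _, rest = rest.partition(":")
    port, _, path = rest.partition(":")
    return user, host, int(port), path

print(parse_remote_target("root@backup.example.com:22:/vmfs/volumes/backup"))
# → ('root', 'backup.example.com', 22, '/vmfs/volumes/backup')
```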

©XSIBackup-DC is capable of performing two kinds of operations on data:

A/ Replicas.
B/ Deduplicated backups.

Both are performed by using a simple algorithm that compares data present on the host being backed up against the data eventually already present on the remote volume.

©XSIBackup-DC first detects holes in virtual disks and jumps over them; it can also detect zeroed zones in real data and jump over those as well. Only real data present on disk is actually processed.

The SHA-1 checksum algorithm is used to compare chunks of data and decide whether the block being processed must be sent to the target folder or is already present on the other side.
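The combination of zero awareness and hash comparison can be sketched as follows. This is an illustrative Python model, not the actual C implementation: `known_hashes` stands in for the block manifest downloaded from the target, and the 1 MB block size matches the documented default.

```python
import hashlib

BLOCK_SIZE = 1024 * 1024  # documented default deduplication block size (1 MB)

def blocks_to_send(data, known_hashes):
    """Yield (sha1, block) pairs for non-zero blocks missing on the target."""
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        if not any(block):
            continue                      # zeroed region: skip, nothing to send
        digest = hashlib.sha1(block).hexdigest()
        if digest not in known_hashes:    # block already on target? then skip it
            known_hashes.add(digest)
            yield digest, block
```

On a first run every non-zero block is hashed and sent; on subsequent runs only blocks whose hashes are absent from the manifest travel over the wire.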

When zero awareness is combined with the SHA-1 differential algorithm, maximum speed is reached on data operations subsequent to the first run, which obviously must process all non-zero data.

©XSIBackup-DC downloads data definitions stored on the remote side, so that all comparison operations of the XSIBackup algorithm are performed locally.

REPLICAS: Since version, all remote .vmdk disks' SHA-1 hashes of --replica VMs are compared with their stored values before actually performing the --replica job itself. Should some change be detected, the hash tables for every disk will be rebuilt. This allows you to switch the VMs on, test them and keep the --replica jobs without any further operation. Rebuilding the hash tables will take some time; nonetheless, it will be much less than sending the full VM again from its primary location. You will know that the remote hash tables are being rebuilt because you will see this message on screen:

Target VM at <root@> has changed, hash table must be rebuilt...

Some time will pass without any progress information until the remote tables are refreshed. How long depends on the size of the disks and the real data contained in them.

Detecting changes in VMware ESXi VMs is possible because a disk's CID changes every time a VM is switched on, thus a .vmdk file checksum mismatch will reveal it.

You may also run the --check action on a local VM replica folder (a folder containing some .vmx file or .vmdk disk) from the server's command line. This is equivalent to rehashing; it is done implicitly while running the --check action. When you perform a rehash operation through the --check action you will be presented with a progress text UI showing some basic statistics: files affected, changed blocks detected and repaired.

While the virtual disk files are being rehashed, should some bad blocks be detected (blocks that have changed with regard to the previously stored hash table), you will see a KO in red along with the bad block count. Once the operation ends the KO will change to RE (repaired).

When you perform a rehash operation through the --check action on a --replica folder, next time you run the replica job from the client side, no .vmdk file change will be detected and the --replica job will continue normally as if you had not switched the VM on.

You may also run a --check action on a VM --replica folder that hasn't been modified by switching it on. In these cases the check will return no changes.

Q: How do I know that a replica is actually valid?

A: You may use the --check action on a replica and a full re-hash check will be performed. This guarantees that the files contained in the replica are an exact copy of the original ones.

./xsibackup --check /path/to/your/replica/YOUR-VM

BACKUPS: In the case of backups, which are always performed to a deduplicated repository, you can choose to compress data by employing the acclaimed LZJB algorithm used by Solaris and ZFS. This allows you to compress data as well as deduplicate it. The use of data compression is recommended (just add the --compression argument to your backup job); it offers some 45% compression ratio. If you are backing up to an already compressed file system you may remove the --compression flag to improve effective transfer speed and free your CPU from the compression load.
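LZJB itself is not available from the Python standard library, so the sketch below uses zlib purely as a stand-in to illustrate the compress-before-store idea: the compressed form is kept only when it is actually smaller, so already compressed data is effectively passed through.

```python
import zlib

def compress_chunk(block):
    """Return the compressed chunk, or the original when compression doesn't pay.

    zlib stands in for LZJB here; the principle is the same.
    """
    packed = zlib.compress(block)
    return packed if len(packed) < len(block) else block
```

Repetitive VM disk data typically compresses well, which is where a ratio like the quoted 45% comes from; random or pre-compressed data gains nothing, hence the advice to drop --compression on compressed file systems.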

Over IP Operations (SSH options)

To be able to operate with any compatible remote server over IP, you first need to exchange keys to allow passwordless SSH communication, using the exchanged key to authenticate to the remote end. The --add-key action will allow you to do so from the command line.

Please be aware that regular OpenSSH behavior is to raise an error should any of the ciphers in the cipher challenge list not be available on the remote server. This can lead to errors when running over IP actions when the OpenSSH versions are too distant in time, as some ciphers are deprecated while others are newly added to OpenSSH as time goes on. You can edit the ./etc/xsibackup.conf file to customize the list of ciphers to use.

©XSIBackup-DC may operate in client/server mode. When you transfer data over IP, you must invoke the xsibackup binary on the other end. If you omit the --remote-path argument, the client will look for the binary in the /usr/bin folder of the remote host. You may as well indicate the remote path by explicitly stating the remote installation path, just like you do with Rsync:

--remote-path=/vmfs/volumes/datastore1/xsi-dir/xsibackup

©XSIBackup-DC needs components in the ./bin folder, thus the contents of this directory must be present in the root installation dir and be executable by the user running the software.

©XSIBackup-DC can tunnel data through SSH to a remote server. The cipher algorithm that we may use for this tunnel can greatly affect transfer speed and CPU usage.

The default set of ciphers in use is:

aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc

The above set should work well even between distant OpenSSH versions, i.e.: 5.6 => 7.5 and the other way around. Its downside is that these ciphers are not very fast, unless your CPU has a special instruction set to handle this workload.

Should you encounter some speed limiting issue, we recommend that you take advantage of the --ssh-ciphers argument so that a faster cipher is used instead of the AES cipher family. If you have a server grade CPU, i5 or above, you probably won't notice the difference, unless you are short of CPU at the time the backup is performed.

This cipher will greatly improve speed due to its efficient design. It was created by Prof. Daniel Bernstein and is much faster than AES, assuming you don't have some sort of hardware cryptographic co-processing (such as AES-NI).

You can optionally enable the L switch in --options=L (Less Secure Algorithm). It will try to use arcfour,blowfish-cbc,aes128-ctr. This last set is comprised of deprecated algorithms; you may want to use them when you don't need that much security in encryption, such as in a controlled LAN, or when you need compatibility with older OpenSSH versions.

In addition to the above, you may pass your own preferred algorithms to be used to cipher SSH data:

--ssh-ciphers=your-preferred-algorithm1,your-preferred-algorithm2

As warned in the heading notice, should even one of the ciphers in the cipher list be missing at the remote end, you might receive an error stating so. It's a bit misleading, as the whole cipher list is presented in the error message.

To avoid this situation, just choose one single common cipher present on both the client and server side of the OpenSSH tunnel. Of course, using the same OpenSSH version on both sides minimizes the chances that you run into this kind of problem.

Backing up to ©Synology NAS devices

You can use any Linux OS as a remote backend for your backups; you can even perform concurrent backups. You must take into account that concurrent backups will limit the speed of every individual backup and that the locking mechanism, as of this version, is lock files; nonetheless, the worst thing a failed lock can produce is a duplicate hash, which does not affect consistency and can easily be fixed.

In case of Linux servers the xsibackup binary alone is enough to run the server side.

Given the fact that ©Synology devices use Linux, you can easily turn your NAS device into an ©XSIBackup-DC server by just copying any unlicensed ©XSIBackup-DC binary to the /usr/bin folder in the ©Synology appliance. You will, of course, need to enable SSH access to the ©Synology box, but that's trivial; you can read this post for more details.

In order for your ©Synology ©XSIBackup-DC server to operate correctly, you need to assign execute permissions to the xsibackup binary and have write permissions on the volume you want to use. For now only the root user is supported, thus you should not have much trouble setting the server up.

Once the SSH server is running and the binary is installed, you just need to run the --add-key action from the ©XSIBackup-DC client to exchange the client OS's key with the server and start operating over IP as you would with any other Linux server.

(*) ©Synology is progressively closing down the DSM OS command line options and turning it into a more proprietary OS. You may be limited in some way by the SSH functionality that ©Synology DSM offers.

Using other NAS appliances.

As long as the Linux kernel running on the server is able to run the xsibackup binary, you can use whatever you want to act as a server. Nevertheless, the paths to SSH configuration files can vary from one system to another. We support ©Synology systems because we have tried them and have tweaked ©XSIBackup-DC to inter-operate with them. Should you want to use any other manufacturer's hardware, you do so at your own risk, and you may need to exchange keys manually, which will not be guided by our support helpdesk.

Folder structure

©XSIBackup-DC consists of a single executable (xsibackup), plus an additional library bin/xsilib and some additional binaries, which are only needed when installed to an ESXi host. The first time it is executed, ©XSIBackup-DC will create a set of folders to store its logs, jobs and service files. This structure will vary depending on whether you install it to an ESXi host or to a Linux server. In the case of ESXi hosts the folder structure will be created under the installation directory and will be as follows:

/bin/ : stores all binaries
/bin/xsilib : auxiliary library for ESXi
/etc/ : etc directory
/etc/jobs : stores backups jobs in files
/etc/jobs/001 : backup job 001
/etc/xsi : configuration files
/var/ : stores logs and templates
/var/log : main log folder
/var/log/error.log : error log
/var/log/xsibackup.log : backup log
/var/log/backupdb.log : operations log, see definition here
/tmp/xsi : temp xsi jobs folder, deleted after each backup
/var/html : HTML templates folder
/var/spool : main spool folder
/var/spool/cron : cron schedules
/var/spool/mail : HTML e-mails are stored here

Whereas when installed to a Linux server, the folders used will be those in the Filesystem Hierarchy Standard (FHS).

©XSIBackup-DC uses temp files to store block maps, variables and different structures while it is running, thus it depends on a /tmp folder with sufficient space to hold this data. When working with files and VMs on the order of hundreds of gigabytes to one terabyte, these files' size will be on the order of some hundreds of KB. Should the files grow beyond that, even the ESXi /tmp file system, which is 100 MB by default, should be able to handle it. In the case of a Linux FS, which may have an arbitrary /tmp partition size, this will never be a problem even on Exabyte VMs.

A folder is created under /tmp for every running job as /tmp/xsi/PID, where PID is the process identification number assigned by the OS. All /tmp/xsi data is deleted both when a process finishes and when a new one starts.

The ©XSIBackup-DC --replica feature depends on the remote replica set not being touched (modified). Should you change a single bit on the remote end, the replica mirror will break and the resulting mirrored set of files may become unusable. This is due to the fact that all the program knows about the remote files is its hash map, which is not updated when you modify files by means other than the --replica action.

Scheduling jobs

Job scheduling is performed by means of the crond service and its corresponding crontab. Just place backup jobs in files, assign them execute permissions and add them to the crontab. You can use the argument --save-job=NNN, which will facilitate the creation of backup job files in the etc/jobs folder.

There are two main cron related arguments:

--install-cron: makes your scheduled cron jobs permanent across reboots by adding a command to the etc/rc.local.d/ file. Run this after setting your cron jobs up. The command that is added to the etc/rc.local.d/ file is nothing but an --update-cron command.

--update-cron: takes the contents of the <install-dir>/var/spool/cron/root-crontab file and places them into the ©ESXi crontab at /var/spool/cron/crontabs/root so that the jobs can be run by the ©ESXi cron daemon.

To set a cron schedule up, create the file <install-dir>/var/spool/cron/root-crontab if it does not exist. Add your cron schedules as you would add them to any other crontab.

0 6 * * * /scratch/XSI/XSIBackup-DC/etc/jobs/001 > /dev/null 2>&1

Then run the --update-cron command like this:

./xsibackup --update-cron

It will take the contents of the root-crontab file and add them to the ©ESXi crontab at /var/spool/cron/crontabs/root.


The spirit of ©XSIBackup-DC is to become a heavy duty backup & replication utility, and its basic structural design principles are oriented to that goal. Nevertheless, in version 1.X, locking of the data/.blocklog file (the main block manifest, which is in turn shared by different backup folders and backup processes) is provided via .lock files; this is not the most efficient way to manage concurrency. You could in fact hit some circumstance in which information is written to a .blocklog file that is supposed to be locked.

This would be quite rare though: only if you write from many different processes to the same repository at the same time might you be able to run over some lock. Even if this circumstance happened, nothing serious would occur, as duplicating some block info in the manifest file is harmless.

The block manifest file can be rebuilt from the underlying data quite fast by using the --repair action, which would eliminate any duplicates.

The files that allow you to restore some backup are the .map files, stored in the backup folders and the data blocks themselves, which are kept in the /data directory. You could even delete the manifest file (/data/.blocklog) and still be able to rebuild it via the --repair action.

To take into account

©ESXi 7.0

©ESXi 7.0 has introduced some drastic changes in VM behaviour. Since this version, when a VM is on, ALL files are read locked on the ©ESXi shell. It does not matter if you take a snapshot; still ALL files are read locked, including any eventual existing snapshot. As a result of that, only -flat.vmdk files (and also all other basic configuration files) are backed up. ©XSIBackup-DC will delete any eventual pre-existing snapshot when ©ESXi 7.0 or above is detected. Do not keep snapshots in production virtual machines.


It's worth noting that each backup job maintains a set of temporary files in an exclusive and independent directory, and that it backs data up to an exclusive directory on the server repository, which is uniquely identified by a timestamp and an eventual subfolder set by the --subfolder=somesubfolder argument.

If you don't differentiate backups from different servers by using the --subfolder argument, i.e.: --subfolder=CURRENT-SERVER, you are taking the small risk that some jobs triggered at the same time are stored to the same time stamped folder. This is unlikely to happen; on top of that, the VM being backed up would need to have the same name in both servers for files to mix up.

Nevertheless, always use the subfolder option when backing up from different servers. This is a must, not only because of the situation treated above, but also from a simple organizational point of view.

Take into account that if you trigger multiple simultaneous backups from different servers without having first designed a system to support it, you will most likely clog your network, your disk controller and your server. As known blocks accumulate in the block manifest (/data/.blocklog), the traffic will be reduced to blocks that have changed since the last backup cycles and the backups will as a result be performed much faster.

You can think of ©XSIBackup-DC as some "Incredible Hulk" that grows in power as you load it with tons of data. Of course the results you get will be bound by your hardware limits and the limits of our software, but you should easily accumulate many terabytes of real data, which will normally correspond to some exabytes in backups.


©XSIBackup-DC stores backups to proprietary repositories; nevertheless, the structure and format of these repositories have been designed to be "eye friendly" to the system administrator.

Data chunks are stored in raw format in subfolders of the backup volume file system, as well as hash maps corresponding to the files in the backup. Thus you could very well rebuild your files from the data on disk by just concatenating the data chunks as described in the human friendly .map files, which are nothing but a manifest of the chunks encountered in the file when it was backed up.
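As a sketch of that idea, the Python snippet below rebuilds a file by concatenating chunks from the data folder. It is illustrative only: real .map files carry more metadata than assumed here, where the format is reduced to one SHA-1 digest per line, and the five-level layout matches the hierarchy described later in this section.

```python
import os

def restore_file(map_path, data_dir, out_path):
    """Concatenate, in order, the chunks listed in a (simplified) .map file.

    Assumes chunks stored under data_dir/<c1>/<c2>/<c3>/<c4>/<c5>/<digest>,
    i.e. one folder level per each of the first five hex characters.
    """
    with open(map_path) as m, open(out_path, "wb") as out:
        for line in m:
            digest = line.strip()
            if not digest:
                continue
            chunk = os.path.join(data_dir, *digest[:5], digest)
            with open(chunk, "rb") as f:
                out.write(f.read())
```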

©XSIBackup-DC uses a default block size of 1MB, but it can be as big as 50MB. As you may imagine, this can accumulate a big number of chunks in the data folder structure, in the order of millions.

As you probably already know, the ESXi VMFS file system has around 130,000 possible inodes, thus it is not very convenient for storing deduplicated backups, as you will soon run out of inodes.

Any regular Linux file system will do, but if you want to achieve great results we recommend that you use XFS or ext4, as they will allow you to store millions of files and are, at the same time, the fastest file systems. Speed is an important factor when you accumulate a lot of data, as blocks need to be sought in the file system. A regular Linux system mounted over NFS3 is the ideal target for your backups. It can also be a specialized device like the popular Synology and QNap NAS boxes.

Data chunks are stored in the data folder inside the repository in a hierarchical subfolder manner. Each subfolder corresponds to a hexadecimal character, up to 5 levels under the root, and blocks are stored in the folder matching their first 5 characters.

Given the robustness of the SHA-1 hash algorithm, whose collision probability is astronomically low, and the fact that the .map files are stored in unique folders, the probability of losing data due to some collision or repository corruption is very low.

Even if you completely delete the .blocklog manifest file, it can always be rebuilt from the constituent .map files and the deduplicated chunks in the data folder by using the --repair argument.

The .blocklog file in the root of the /data folder is a mere way for the client backup processes to know about the preexisting deduplicated chunks. This file is downloaded to the client temp folder prior to every backup cycle, thus the check on the existence of a block is performed locally. This has a small disadvantage, which is not knowing about blocks pertaining to ongoing backup jobs, but offers the huge advantage of performing block lookups locally at local speed.

Once every backup cycle finishes, the newly generated data, that is: data which was not found on the downloaded .blocklog manifest file, is added to the repository shared .blocklog file. This process locks the .blocklog file for the time it takes to complete, generating a /data/.blocklog.lock file, which is removed once the integration of the differential data completes.
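The lock-and-merge step can be modeled with a short sketch. This is not the actual implementation; it merely illustrates how a .lock file serializes writers appending differential entries to the shared manifest, with atomic create-if-absent standing in for lock acquisition.

```python
import os
import time

def integrate_diff(blocklog_path, diff_entries, timeout=30.0):
    """Append differential entries to the shared manifest under a .lock file."""
    lock_path = blocklog_path + ".lock"
    deadline = time.monotonic() + timeout
    # O_CREAT|O_EXCL fails if the lock file already exists: a simple advisory lock
    while True:
        try:
            os.close(os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
            break
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError("could not acquire " + lock_path)
            time.sleep(0.1)
    try:
        with open(blocklog_path, "a") as log:
            for entry in diff_entries:
                log.write(entry + "\n")
    finally:
        os.remove(lock_path)  # release the lock so other jobs can integrate
```

As the manual notes, even if two writers ever raced past such a lock, the worst outcome would be a duplicate entry, which --repair can eliminate.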

The differential data is stored temporarily in the /tmp/xsi/%PID%/.blocklog.diff file of the client while the backup is taking place. The whole temp folder is deleted upon each backup cycle.

©XSIBackup-DC is a low level tool. It's as secure as dd or rm are in your Linux server, so make sure that you assign it adequate permissions. You may use remote users other than root; that is very convenient, especially when backing up to remote Linux servers, but trying to run it on an ESXi server under a user other than root will require you to configure permissions accordingly. Also please note that when opening up execute permissions on the ©XSIBackup-DC binary to users other than root, you are opening a potential security breach.

IMPORTANT: everything about the .blocklog manifest, the .diff files and the integration of the differential data constitutes a different and isolated subsystem with regard to the backup itself. Losing differential metadata, registering duplicate block hashes or, as said, deleting the whole .blocklog manifest is unimportant, as it can always be regenerated accurately from the constituent blocks.

Even in the worst of cases, by receiving a totally corrupt .blocklog file (which of course should never happen) and by messing up all differential data, your files will still be backed up accurately and you will be able to repair your repository afterwards. The worst possible situation in regards to the logic of the deduplication is that some block is reported as nonexistent and is copied again. All this assuming that the backup completes and there aren't any hardware or communication issues.

Designed to be useful

©XSIBackup-DC has been designed with you in mind: a datacenter system administrator who needs a tool that is easy to use and extremely powerful at the same time.

As you already know (if you read the previous chapters) ©XSIBackup-DC stores deduplicated and eventually compressed chunks of data to the backup volume file system. Map files are stored to folders like the following:

<root of repo>/subfolder/timestamp/VM/

Whereas blocks are stored in the already explained five level subfolder structure under /data, something like:

<root of repo>/data/a/0/f/3/0/a0f3...

As long as you keep this data intact, you can easily rebuild it by using the --repair command. It's then easy to realize that you can merge preexisting repositories into a single one and still keep the data intact. This is useful in case you need to consolidate data into a single backup volume.

You can of course duplicate your repositories' contents somewhere else. Thanks to the fact that data is split into thousands of deduplicated chunks, you can use Rsync to keep copies of your repositories offsite and use ©XSIBackup-DC to rebuild your VMs or any other data anywhere.

The xsibackup.conf file

This file is located in the etc/ directory in the case of ESXi systems. It contains default values that can be tweaked by the user. Some of these values may also have a command line argument that may in turn modify the default values.

As a general rule, XSIBackup Datacenter will use these values if no superseding argument is provided. You may, for instance, omit the --compression argument if you have activated it in the xsibackup.conf file.

# These are the default values for some variables. Most of them may also be set
# in the command line as arguments when creating the backup job
# Default block size for deduplicated backups.
# Default state for compression 1 = compression on, 0 = compression off
# Default level of verbosity for the output log 0 - 10.
[power state]
# When power on/off request is issued, the VM power state is queried every N seconds
# When power on/off request is issued, the VM power state is queried N times
# Thus the power state will be queried for a total of power_query_interval*power_query_times seconds
# Should the query_times limit be reached, a plain power off will be issued

The variables supported in the xsibackup.conf file are:
- block_size: defaults to 1048576 bytes (1MB), may be set to 10485760 (10MB), 20971520 (20MB) or 52428800 (50MB) apart from the default value.
- compression: defaults to 1 (active), may be set to 0 (disabled)
- verbosity: defaults to 3, you can tweak this value between 0 and 10.
- power_query_interval: establishes the seconds in between power status probes.
- power_query_times: number of times a VM will be polled every power_query_interval seconds.
When requesting a VM to be switched off by virtue of the --backup-how argument, in the case of warm and cold types, the shutdown request will be sent to the VM; from then on, it will be queried every power_query_interval seconds, power_query_times times. In case the limit power_query_interval*power_query_times is reached and the VM is still on, a plain power off request will be issued, causing the immediate stop of the guest.
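The polling logic just described can be sketched as follows. This is an illustrative Python model, not the actual implementation; `query_state` stands in for whatever mechanism probes the VM's power state.

```python
import time

def wait_for_power_off(query_state, interval, times):
    """Poll the VM power state every `interval` seconds, up to `times` times.

    Returns True if the guest powered off within interval*times seconds;
    on False the caller would issue a plain (hard) power off.
    """
    for _ in range(times):
        if not query_state():  # query_state() -> True while the VM is still on
            return True
        time.sleep(interval)
    return False
```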

The smtpsrvs.conf file

This is the file (etc/smtpsrvs.conf) that holds the SMTP servers configured to be used with XSIBackup-DC. It works exactly the same as in previous editions of XSIBackup: one server per line, preceded by an integer number that is in turn the unique Id for the SMTP server itself. You will use this Id when calling or referencing the SMTP server.

# [TITLE] = SMTP Server
# You can add as many SMTP servers as you want
# Columns are separated by colons as described below
# One server per line adhering to the following format (please note that the server IP or FQDN and port are separated by a colon)
# ORDINAL(integer);IP or FQDN:port;mail from;SMTP user;SMTP password;SMTP authentication(none|anystring);SMTP security(TLS|anystring);SMTP delay(0-4 sec)
# Do not set any smtp delay above 0 unless you need it
# Only first 8 fields are mandatory
# Example
# 1;;;;Y0urpassw0rd;yes;TLS;0

The above is how the etc/smtpsrvs.conf file looks. There's a short explanation of the fields and the order they have to be set in. All fields are separated by a semicolon (;), except the server and port, which are separated by a colon (:).

Each SMTP server entry is composed of 9 fields, the last one (--smtp-delay) being optional.
1 - The SMTP ordinal.
This field is just a positive integer starting at 1. It cannot be repeated and must identify each SMTP server unambiguously.

2 - The SMTP server.
This entry must correspond to an SMTP server in your LAN or on the internet, identified by its IP address or its FQDN. Please remember that if you use an FQDN you need DNS enabled in your ESXi box.

3 - The SMTP port.
This is the port that the above SMTP server is listening on.

4 - The Mail From address.
This field is the e-mail address the e-mail will be sent from; it is usually the same as the SMTP user.

5 - The SMTP user.
This is the SMTP user that will authenticate against the SMTP server, same as --smtp-usr argument.

6 - The SMTP password.
The SMTP password used to authenticate against the SMTP server

7 - The SMTP Auth scheme.
This field only accepts one meaningful value, "none"; all other possible strings and values will be interpreted as "yes, use SMTP-AUTH". In the example we use a plain "yes".

8 - The SMTP security scheme.
This can be TLS or anything else; TLS will use TLS authentication, any other value will avoid using it. You need to set this to TLS when using GMail, for instance.

9 - SMTP delay.
This field holds a delay between SMTP commands. This is not needed most of the time and the argument is optional; just don't use it unless you are absolutely sure that you need it.
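Putting the field descriptions above together, an smtpsrvs.conf entry can be parsed as in the sketch below. This is illustrative only, not the real parser; the server name and credentials in the test are made up.

```python
def parse_smtp_entry(line):
    """Parse one etc/smtpsrvs.conf entry into a dictionary.

    Fields are ';'-separated; server and port share one field, ':'-separated.
    The trailing delay field is optional and defaults to 0.
    """
    f = line.strip().split(";")
    server, _, port = f[1].partition(":")
    return {
        "ordinal": int(f[0]),
        "server": server,
        "port": int(port),
        "mail_from": f[2],
        "user": f[3],
        "password": f[4],
        "auth": f[5],
        "security": f[6],
        "delay": int(f[7]) if len(f) > 7 and f[7] else 0,
    }
```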

To use this newly configured server in your backup jobs, just append --use-smtp=N to the rest of the arguments. You don't need to set anything but these two values: an e-mail address to mail the results to, and an ordinal number (N) corresponding to the first field ORDINAL(integer) of the SMTP server configured above.

Since version, two new actions allow you to add and test servers through a command line user interface.

--smtp-add: call it without any argument. It will sequentially ask you for the SMTP server and port, mail from address, username, password, security options and optional delay. It has the advantage of probing that the SMTP server is reachable and that the e-mail addresses are correctly written before saving the data. It will also preformat the data, so you are safe from inadvertently pasting invisible characters with a different code page.

./xsibackup --smtp-add

--smtp-test: call it without any argument. This action will present you with a list of the SMTP servers available in the etc/smtpsrvs.conf file; just select one of the Ids and then provide an e-mail address to send a test to.

./xsibackup --smtp-test

(*) Since May 2022, in case you want to use some GMail e-mail account, you will need to enable 2-step Authentication and generate an App password to make it work with ©XSIBackup.

E-mail reports

E-mail reports provide a way to know how a given backup or replication job behaved. You may activate them by just adding a --mail-to address to the job. Of course you need to have previously added at least one SMTP server to the /etc/smtpsrvs.conf file.

You can specify which e-mail server you want to use by employing the --use-smtp argument and passing it an SMTP server ordinal number. If you don't use this argument, the backup job will fall back to the first available SMTP server.

E-mails are sent using a template stored in /var/html. Templates are named 000-999[.html]; the default Classic XSIBackup e-mail template is provided as 000.html. You may create your own and store the HTML in this folder. Just add the <!-- PLACEHOLDER REPORT --> HTML comment wherever you would like the table containing the backup information to appear.
To use your user created template just add the --html-template=NNN argument.
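The placeholder mechanism amounts to a simple text substitution, which can be sketched like this (illustrative only; the actual report table markup is generated by ©XSIBackup-DC):

```python
def render_report(template_html, report_table_html):
    """Replace the placeholder comment with the backup report table."""
    return template_html.replace("<!-- PLACEHOLDER REPORT -->", report_table_html)
```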

Job Variables

As soon as the job ends, a new entry is added to the file [install-root]/var/log/backupdb.log with all details of the backup.

Field Type Description
Session UID Int 64 bit Unique Identifier of job session
PID Int 64 bit Process Id cast to Int 64
Action Text Name of the action as passed to the xsibackup binary
VM Name Text Virtual Machine name
VM Id Text Virtual Machine Id
VM State Int 32 The VM is On (1) or Off (0)
VMX file path Text Absolute path to the .vmx file
Target Text Local or remote path to where the backup or replica was made
Compression Int 32 On (1) or Off (0)
Job Start Int 64 bit Unix Epoch
Job End Int 64 bit Unix Epoch
Sparse size Int 64 bit Nominal full size of the VDs
Real size Int 64 bit Non zero data in the VDs
Time taken Int 64 bit As a Unix Epoch difference
IPv4 Text IPv4 used in an IP backup
Port Text Port used in an IP backup
Errors Int 32 Number of errors

Creating Backup & Replica Jobs

Basic usage consists of passing an action first, plus one or two paths depending on the type of action being performed, then the rest of the arguments:

./xsibackup [action] [source] [target] [options]

Quick examples:

ATTENTION: don't copy directly from this document into your SSH client. The chances that some character substitution happens are high.

./xsibackup --request-key
./xsibackup --add-key user@
./xsibackup --backup /home/me/my-data /mnt/NFS/backup --compression
./xsibackup --backup /home/me/my-data /mnt/NFS/backup --compression --rotate=30
./xsibackup --backup "VMs(Win01,Lin03,MyERP)" /vmfs/volumes/backup
./xsibackup --backup "VMs(Win01,Lin03,MyERP)" root@ --compression
./xsibackup --replica "VMs(Win01,Lin03,MyERP)" /vmfs/volumes/backup
./xsibackup --replica "VMs(Win01,Lin03,MyERP)" root@
./xsibackup --backup "VMs(Win01,Lin03,MyERP)" root@ --compression
./xsibackup --replica "VMs(Win01,Lin03,MyERP)" root@
./xsibackup --repair /vmfs/volumes/backup
./xsibackup --info /vmfs/volumes/backup
./xsibackup --prune /vmfs/volumes/backup/20190603211935


Action comes in first place after the call to the binary. It can be one of these:

--backup : this action will perform a deduplicated backup, optionally compressed by the LZJB compression algorithm, to the directory specified in the target argument. Avoid VMFS targets for backups: it's not a good FS to store deduplicated backups on, and it's very slow when compared to almost any other option. You may very well use VMFS for replicas, though.

--replica : this action will perform a replication of the data under the source directory to the directory specified in the target argument. It will preserve the folder structure of the source.

This is a great action to use when you need to migrate some VM over IP or to a secondary datastore, but it is by no means a form of backup.

To backup your VMs use the --backup action, which will offer you multiple restore points and is absolutely resilient to interruptions of the communication channel across subsequent backups. It does not matter if some backup is interrupted: the following backup will be consistent, while at the same time keeping the data seed, that is, remaining differential and deduplicated.

/!\ Disks distributed among multiple datastores must be named uniquely. ©XSIBackup puts all disks in the same folder; if you name them the same, new disks will overwrite the previous ones.

--restore: this action will work the same way as --backup but the other way around. You point the source argument to some folder containing a backup, that is, some folder in a repository containing some .map files. The target will be the folder where to restore the contents of the source argument.

--check[=file|fast|full]: this action will check a whole repository, or a folder inside a repository, passed as the second argument. It optionally accepts one of three values: file (default), fast or full. The fast option will check the size of the chunks in the repo. The full option will additionally uncompress the data and recalculate its SHA-1 checksum, to be compared with the stored value.

It also accepts a --replica path as the second argument. When doing so, the replicated set of files will be compared to the checksums of the original set of files; this allows you to make sure the backup set is O.K. without switching the replicated VM on, which would corrupt the resulting VM, as the integrity of the replica depends on being in sync with its checksum .map files.

Please note that when the repository is compressed, even if you don't choose the full option, every chunk's inner header will need to be queried to find the size of the uncompressed data. Finally, the default option since version (file) will just check for the existence of the deduplicated chunk files and is much faster. We can nevertheless assume that the blocks are OK only if the previous backup job returned no errors.

--prune: this action will delete some folder inside a deduplicated repository and calculate which blocks belong exclusively to that backup; it will then delete those blocks only, freeing the space used by that particular backup in the deduplicated repository.

/!\ Do not try to prune a big repository (over 1 million blocks) from within ESXi. NFS & iSCSI are great, but their performance can't compare to deleting millions of small files in a real filesystem.

/!\ Pruning is not recommended in a corporate environment.


This is because pruning is by definition a destructive operation: some blocks are deleted. To delete those blocks, ©XSIBackup-DC depends on the logs composing the hash maps. If for some reason a log file is damaged or altered, the disk fails, you run out of memory or, in the most common scenario, the filesystem is not totally consistent, then you may end up deleting some block that you don't want to delete.

How should I proceed in a corporate environment? Accumulate your backups in a repository. Once the period you want to cover ends, archive that repository, or keep it for some days until a new repository is full up to some extent, then delete the old one or move it to tape or any other archival media. You can easily achieve that automatically by setting a dynamic path for your repo, like: /vmfs/volumes/backup/repo$(date +%m)
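The date substitution in such a path is expanded by the shell before xsibackup ever sees it, so with %m (month of year) each month yields a different repository directory (note that %M would mean minutes). A quick way to preview the path a job would use:

```shell
# The $(date +%m) part is expanded by the shell, producing a
# month-numbered repository such as /vmfs/volumes/backup/repo05.
repo="/vmfs/volumes/backup/repo$(date +%m)"
echo "$repo"
```

The same technique works with any date(1) format specifier, e.g. a weekly repo with $(date +%V).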

You need to make sure you have enough resources for pruning. Pruning is not a trivial task: it implies deleting blocks which could in turn belong to other VMs' definitions, thus it is extremely important that you pay extra attention to this action and make sure that you have enough space (disk or RAM depending on the DC version) to fit the block log data.

To correctly identify the blocks to prune when deleting some older backup folder, we need to find out which blocks, among all those that compose the files contained in the whole backup set, are used exclusively by the backup that we want to delete. This operation is extremely intensive; it will take a fair amount of RAM: as much as all the non-zero block hashes (including duplicate data) take on disk (around 50 bytes per block).
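A back-of-the-envelope capacity check under that ~50 bytes/hash figure (the 10 TB repository size is just an assumed example): a repository holding 10 TB of non-zero data at the default 1 MB block size has about 10.5 million block hashes, so pruning it would need on the order of 500 MB of RAM:

```shell
# Rough RAM estimate for pruning, assuming ~50 bytes per block
# hash as stated above; real figures may vary.
blocks=$((10 * 1024 * 1024))        # 10 TB of data / 1 MB block size
bytes=$((blocks * 50))              # hash data footprint in bytes
echo "$((bytes / 1024 / 1024)) MB"  # prints "500 MB"
```

Doubling the block size roughly halves this footprint, which is why larger block sizes reduce pruning resources.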

And this is where the risks of pruning come from: we will keep blocks that we can find among the other sets, and failing to find a block among the other backup sets can cause unwanted block deletion. This can hardly happen, as our search algorithms are designed to be robust; still, why prune a repo when you can keep it? Especially since pruning will cause an extra load on the servers.

It's up to you. If you believe pruning is necessary due to very limited backup room availability, use --check from time to time to verify your data's integrity.

A way to drastically limit the resources used by pruning is increasing the block size to 10, 20 or 50 MB; that will reduce the compression ratio of your repos though.

©XSIBackup-DC uses just RAM to --prune since version; you don't need to worry about having enough tmp space. In case you run out of RAM (you will know, because an error will be raised stating that fact), just add more RAM to prune the repo, or size your repos to the amount of available RAM.

--info: just pass the root of a repository as a second argument to this action to have a quick view of the most relevant figures: blocks in the repository, size used, size of the files hosted and the achieved compression ratio.

--repair: using this action on a previously existing repository will perform the following: all constituent blocks for all files stored in the repository will be read and sorted, and any duplicates will be removed; then each block will be sought in the /data folder to check that it exists. This action can repair a repository whose block manifest /data/.blocklog has been erased, rebuilding it from the individual .map files, but it cannot recover deleted blocks or rebuild missing .map files.

If a --repair operation is successful and you have not lost fundamental data (data blocks or .map files), you can consider your repository to be healthy.

--update-cron[=user]: (optional) copies the user crontab at

[xsi-dir]/var/spool/cron/[user]-crontab to the ESXi crontab at:


(*) This option only works in ESXi hosts.

--install-cron[=user]: (optional) adds a line to the /etc/rc.local.d/ file in ESXi hosts so that the [user] crontab is enabled upon restart of the ESXi host.

--uninstall-cron[=user]: (optional) removes the line from /etc/rc.local.d/ belonging to the [user]. The ESXi crontab remains untouched. Modify the ©XSIBackup crontab and run --update-cron to completely remove the existing schedules or reboot the host.

--add-host (>= this action will connect to an ©ESXi host and make it available for its VMs to be backed up or replicated. You do need to connect with the root user, i.e.:

./xsibackup --add-host root@

--remove-host (>= this action will remove a backup host. The mounted dir at /mnt/XSI/srvs/a.b.c.d will be removed, and also the entry in the /etc/fstab file that allows it to be remounted on reboot, i.e.:

./xsibackup --remove-host root@

--add-key: this action will grab the local RSA public key and place it in a remote server's authorized_keys file to allow passwordless communications between the local and the remote system. You need to perform this for every system that you want to use over IP, it's not needed for local data operations.

You do need to use this kind of string as the target: user@IP:Port. The user will be root most of the time, i.e.:


The key exchange routine will look for authorized_keys files in the default locations for OpenSSH servers, namely: /home/some-user/.ssh/authorized_keys or /root/.ssh/authorized_keys.

In case of remote ESXi hosts, the default location for this file will be sought, that is: /etc/ssh/keys-root/authorized_keys or /etc/ssh/keys-[some-user]/authorized_keys in case of using some user other than root.

When the remote host is some kind of Linux server where the location of the users' home folders is customizable, as in the case of ©Synology devices, you must use the --home-path argument to let ©XSIBackup-DC know where to find the root of the homes directory.

Default location in ©Synology devices is /volume1/homes


./xsibackup --add-key root@ # Add key to host on port 22

./xsibackup --add-key alice@ # Add key to alice's SSH profile on host on port 5467

./xsibackup --add-key bob@ # Add key to host on port 5467 on user's bob profile

./xsibackup --add-key root@ --home-path=/volume1/homes/alice # Add a key for user alice (profile at /volume1/homes/alice), using the root user to authenticate to the remote host. This grants you full rights when writing the key to alice's authorized_keys file.

(*) Since ©Synology DSM 6.2, no user other than root can log in via SSH; this restricts use to the root user once enabled.


Source

This is the second argument in case of performing a --backup or --replica action, and the only path required when executing a --check, --info or --prune operation.

When performing copy operations ( --backup or --replica ) this argument must point to an existing directory containing some files. Those files will be backed up or replicated to the target directory.

You may backup directories, which can be useful in case of VMs that are not registered to the ESXi inventory, or you may also select VMs by name.

To backup a VM stored in a directory (or a series of them), you must point the source argument to the root directory where the .vmx file is contained.

To select Virtual Machines, as in the above examples, just enclose the whole source argument between double quotes and use the VMs keyword (it's case sensitive) followed by a list of Virtual Machines separated by commas, or: the ALL keyword to backup all VMs, or the RUNNING keyword to backup VMs which are in an ON state.

./xsibackup --backup /home/me/my-data /mnt/NFS/backup/repo01
The above will backup directory [my-data] to /mnt/NFS/backup/repo01 with the default block size

./xsibackup --replica "VMs(RUNNING)" root@ /vmfs/volumes/backup
The above replicates all running VMs to /vmfs/volumes/backup in

./xsibackup --replica "VMs(ALL)" root@ /vmfs/volumes/backup
Replicates all VMs to folder /vmfs/volumes/backup in

./xsibackup --replica "VMs(^REGEXP)" root@ /vmfs/volumes/backup
You may also use some Regular Expression to select VMs (see: How to write regular expressions). The way to tell the VM selection routine that you are using a REGEXP is by prepending the string between parentheses with a '^' sign. You may optionally end the regular expression pattern with a dollar sign '$' to indicate the end of the REGEXP pattern.

./xsibackup --replica "VMs(^WIN.*)" root@

The example above will select all VMs beginning with WIN. If you use some fixed-length naming scheme to encode the main characteristics of your VMs, you will be able to select them very easily.
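To preview what a pattern such as ^WIN.* would match, you can dry-test it against a list of VM names with grep -E, which uses the same POSIX extended regex family as the VMs() selector (an illustration only; the actual matching is done inside xsibackup, and the sample names are made up):

```shell
# Simulate the VMs(^WIN.*) selection against a sample inventory.
# grep -E applies a POSIX extended regexp, the same syntax family
# the VMs() selector expects after the leading '^'.
printf '%s\n' WIN01 WINSQL Lin03 MyERP | grep -E '^WIN.*'
# prints:
# WIN01
# WINSQL
```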

Duplicate jobs

A job with the same action + source arguments as an already running job is considered to be a duplicate, and xsibackup refuses to run it.


Target

Target is the third argument in the command line. It represents a directory where files will be backed up into an existing deduplicated repository, or replicated to. If the directory does not exist, it will be created and, if needed, a new repo will be initialized by XSIBackup-DC.

The target can be a local or remote directory in the form user@host:port:/path/to/backup/dir, as we have seen before.

The server role has been revised in version to limit the xsibackup binary to reading data from the SSH tunnel and writing to the backup repository or replica folder when using a user other than 'root'. Setting up hierarchical backup topologies with multiple users can be difficult, because xsibackup has been conceived as a service and not so much as user software. Thus it requires permissions to create folders and write files in parts of the FHS (Filesystem Hierarchy Standard) where users other than root normally can't, such as /etc and /var/log, apart from temp folders.

When xsibackup is running as a server and some user other than root is detected, it will not try to create folders or files in restricted parts of the FHS. You should then first run some command as root to create those files, to avoid errors on subsequent runs with a non-privileged user.

In case some error is raised in the server role, the user with which you invoke the server (user@a.b.c.d:22:/some/repo) must have rights to create and write logs to /var/log in the case of Linux servers. The more privileged the user is, the fewer permission problems you will find.

In ©ESXi systems running as a server, the situation is more controlled, as the etc and var folders are stored under the installation root; thus you can assign permissions to the bin file, and also to the etc and var folders, as needed.


--backup-host (>= this argument tells ©XSIBackup to run the backup job in that particular ©ESXi host, which you must have previously linked to.

xsibackup --backup=cbt "VMs(Win10_01)" root@ --backup-host=

--block-size[=1M(default)|10M|20M|50M]: (optional) this is the block size that will be used to deduplicate data when using the --backup action. In case of replicas a fixed block size of 1M is used. You can choose between 1, 10, 20, or 50 megabyte block sizes when performing a --backup action; 1M is the default --block-size.

--block-size=1M Set block size to one megabyte (spurious, just don't pass any --block-size argument to use the 1MB block size)
--block-size=20M Set block size to twenty megabytes
--block-size=50M Set block size to fifty megabytes

(*) We recommend that you use a 1MB block size by just omitting the --block-size argument.
(**) Some features, such as CBT, work only with the default 1MB block size.

--quiesce: (optional) this option has no arguments; use it when backing up VMs to quiesce the guest OS prior to taking the backup snapshot. If you don't pass this option, no quiescing will take place.

--remove-all-snapshots: (optional) delete any previously existing snapshot in the VMs to be backed up.

--config-backup: (optional) does a backup of the ©ESXi host configuration and copies it to the backup repository or replica folder. This consists of a copy of the /etc folder. It is the equivalent of running vim-cmd hostsvc/firmware/backup_config and placing the output in the remote repository.

--compression[=yes|no]: (optional & enabled by default) sets whether the backup will compress chunks prior to storing them in the backup repository. It will achieve an additional 50% compression on the data at almost no cost in speed. It's recommended that you always use it, unless you are storing data on a compressed file system, in which case you may avoid it.

--subfolder[=yourfolder]: (optional) this will add a folder with the specified name before the time-stamped folder, so that you can organize your backups in logical containers. It's useful when storing backups from different clients into a consolidated repository per instance.

--rotate[=N(D)]: (optional) this option will delete backup folders older than N days when the number N is followed by a D letter (case insensitive), or will keep just the number of folders stated in the numeric value when you pass a plain integer. It also accepts a user-defined volume of data followed by GB or TB; when that limit is reached, the eldest folders in the root path defined by the --rotate-at argument will be deleted automatically. Don't use --rotate-at when you use a timestamp in the form YYYYMMDDhhmmss; the rotation dir will be automatically set right under it. It also allows the eldest folders to be deleted automatically when you run out of space, by passing it the max option.

--rotate-at [path]: (optional) this option accepts a local or remote path where rotation will take place when combined with the --replica action. You don't need to set it in case of the --backup action. If you omit it, the rotation process will set it to the immediately lower directory if the last replica directory is detected to be a timestamp in the form YYYYMMDDhhmmss.
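Because rotated folders are time-stamped as YYYYMMDDhhmmss, lexicographic order equals chronological order. A sketch of what keeping the newest 3 folders (as --rotate=3 would) implies, done with plain shell tools for illustration only (the real rotation is performed by xsibackup; sample folder names are made up):

```shell
# Four time-stamped backup folders; keeping the newest 3 means
# deleting everything before the last 3 in sorted (= date) order.
# head -n -3 (GNU coreutils) prints all but the last 3 lines,
# i.e. the folder(s) rotation would delete.
printf '%s\n' 20230401000000 20230101000000 20230301000000 20230201000000 \
  | sort | head -n -3
# prints: 20230101000000
```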

Related posts:

• Rotating sets of replicas to have multiple restore points readily available.

--verbosity[=N]: (optional) accepts values between 0 and 10, more information messages will appear on STDOUT when you raise it.

--auto: (optional) tries to automatically resolve any user requested action like creating new dirs, deleting PID file, etc..., so that the backup process continues without any halt.

--save-job[=NNN]: (optional) will save the current command line job to a job file in etc/jobs/NNN, where NNN is the three digit numeric code assigned to the job. You may then call this file from the host's crontab to schedule backups or replicas.
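As a hypothetical crontab entry scheduling a saved job nightly at 02:00 (the install path is made up, and the assumption that the saved file at etc/jobs/001 can be invoked directly is illustrative; adapt both to your setup):

```shell
# min hour dom mon dow  command
0     2    *   *   *    /vmfs/volumes/datastore1/xsi-dir/etc/jobs/001
```

Remember to run --update-cron afterwards on ESXi hosts so the schedule reaches the ESXi crontab.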

--timestamp[=YYYYMMDDhhmmss]: (optional) allows you to set a custom timestamp in case you want to place multiple job outcomes into a single backup folder. If you don't use it, XSIBackup-DC will create one timestamped folder per backup job.

--exclude [=REGEXP]: (optional) POSIX regexp pattern to exclude disks from the backup. The REGEXP can be up to 512 characters long. How to write regular expressions.

Excludes all log files like: vmware-2.log, vmware-000000.log, etc...
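A pattern such as vmware-.*\.log (a hypothetical exclusion pattern, not necessarily the one the manual intends) can be dry-tested against file names with grep -Ev, which keeps what the exclusion would leave in the backup:

```shell
# Simulate a --exclude pattern: grep -Ev drops everything the
# POSIX extended regexp matches, leaving what would actually be
# processed by the backup.
printf '%s\n' vmware-2.log vmware-000000.log Win01-flat.vmdk \
  | grep -Ev 'vmware-.*\.log'
# prints: Win01-flat.vmdk
```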

--fsblocksize=N: (optional and rarely used) it allows you to set the block size of the underlying FS when hosting VMs in NFS volumes. This block size will be used to calculate the used space in sparse files when setting the --options=S flag. Most FSs have a 512 byte block size (including VMFS), which is used by default; you will very rarely need to set this manually.

--cleanup: (optional) (ver. no arguments. Kills all running xsibackup processes. Use when you have some process stuck in memory (ps -c | grep xsibackup ).

--force: (optional) (ver. no arguments. Force deletion of remote locks, use cautiously.

--memory-size[=NNN(MB)]: (optional) (ver. pass the memory size in MB. Assigns that amount of memory to the shell binary memory pool. Required to backup/replicate massive VMs or prune big repos from ©ESXi. You should never try to prune a big repository from within the ©ESXi OS though; do so from the backup server.
This argument will set the passed amount in MB as the size of the ©ESXi shell memory pool. When the job ends, the original size of the pool (typically 800MB) will be restored. In case you pass a value that exceeds the available memory, all available memory minus 1GB will be used. If you pass a value below 900, the argument will be ignored.
Read this post: ©VMWare ©ESXi memory constraints and workarounds
for a more detailed explanation of memory management.

--enable-vsyscall: some newer Linux OSs (like the latest Debian 10 & 11) have deprecated vsyscall in favour of vdso, which causes ©XSIBackup to return a SEGFAULT when accessing the system clock. Running ./xsibackup --enable-vsyscall will add vsyscall emulation to your Linux host.
/!\ This command will reboot your host automatically after enabling vsyscall emulation.

--ssl-key[=path]: (optional) path to the SSL key pair. If it is not provided by means of this argument, it will be sought in the program's root folder.

--ssh-ciphers[=your_cipher1,your_cipher2...]: (optional) it allows you to set the comma separated list of ciphers that will be used during the OpenSSH handshake. The finally agreed cipher will be used to encrypt data.

Choosing a good cipher, like those recommended above, can visibly improve data transfer speed in some circumstances.

If you are running XSIBackup-DC on a server-grade CPU which is not clogged, you will probably not notice any speed improvement.

If a strong cipher is not a requirement, because you are in a controlled LAN or your data simply does not require it, your best bet for speed is to use some deprecated lighter cipher, like the list used by --options=L (Less Secure Ciphers): arcfour,blowfish-cbc,aes128-ctr

Choosing a cipher, or a list of them, requires you to understand the basics of what they are and why they affect performance, as well as being able to query OpenSSH to find out which ciphers are supported by your OpenSSH client and server builds.
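OpenSSH itself can list the ciphers a given build supports; ssh -Q cipher is a standard OpenSSH client query (guarded here in case no ssh client is installed on the machine where you run it):

```shell
# List the ciphers supported by the local OpenSSH client build.
# Run the same query on the other end too: both client and server
# must support any cipher you put in --ssh-ciphers.
if command -v ssh >/dev/null 2>&1; then
  ssh -Q cipher
else
  echo "no ssh client found"
fi
```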

--options[=LCROSf]: (optional) allows you to set miscellaneous options:

L: use Less Secure Algorithms (--options=L). This is equivalent to setting --ssh-ciphers to some lighter algorithm.
C: avoids stripping the CTK file information from the replicated VM. /!\ This can't be used to chain CBT replicas, again: CBT replicas can't be chained.
N: (optional) (ver. Disables the CBT peer check that makes sure that the previous CBT sequence was saved to the remote repository.
R: register a replica VM running on top of a test snapshot to allow testing of replicated VMs. You will use this on very rare occasions.
O: override the _XSIBAK filter to backup _XSIBAK VMs.
l: just list a regexp selection in the VMs() argument. Used to test Regular Expressions.
S: consider size on disk, namely: non-zero data on sparse files, instead of full nominal virtual disk size.
f: force autoprovision of space before attempting a backup/replica, even if the rotate option is based on a number of replicas or backups instead of on size. This increases the chances that the job isn't halted when running out of space in a volume. Nonetheless, these are critical scenarios that you should always avoid, by setting some predefined room for backups or by setting a number of rotations that can fit in your backup volume even if the original disks fill up.

--mail-to[]: this is the e-mail address to which the e-mail report will be sent. You may add multiple comma-separated addresses.

--use-smtp[=N]: this is an integer number starting with 1, that makes reference to an SMTP entry in the SMTP servers configuration file at [xsi-dir]/etc/smtpsrvs.conf. In case you don't provide one, the e-mail submission routine will fall back to SMTP server number 1.

--subject[=some subject]: sets the subject text of the e-mail report. In case you don't provide one, a generic subject text will be displayed.

Since version you can use the following variables in the subject string: %hostname%, %jobid%, %source% and %errnum%.

--html-template[=NNN]: sets the template to use to embed the backup report. The Classic template is included as 000.html; you may use it as a base to create your own designs. The only rule is to keep the placeholder indicator (<!-- PLACEHOLDER REPORT -->) on its own line; it will be replaced by the data generated during the backup.

You may use numeric string values 000 to 999

--backup-how[=hot|warm|cold]: this argument works the same way as in Classic XSIBackup. It can take one of three values: hot, warm and cold.

Hot: this is the default value, used when you don't pass the --backup-how argument. It checks whether the VM is on and, if so, takes a snapshot to hold the I/O data produced while the backup is taking place. Once the backup finishes, the data held in the snapshot is consolidated back into the base disks by deleting the snapshot.

Warm: this option will turn off each VM right before the backup begins; it will then take a snapshot once the VM is off, and turn it back on right afterwards, before starting to copy data. This creates a VM with a snapshot generated while it was off. This technique ensures data consistency when you run some heavy-load database inside your guest, or when you are using an OS for which VMWare Tools is not available.

Cold: this third method will turn off the VM before actually starting the backup, so all files are free while the backup is taking place. Once the backup process finishes, it will turn the VM on again. All switch-off operations are performed in the best possible way: each VM is queried in search of ©VMWare Tools and, if they are installed, a controlled shutdown is requested. Depending on the load and number of services running in your guest, a controlled shutdown might take some seconds to complete. The VM power state will then be queried at regular intervals, and when the VM is known to be off ©XSIBackup-DC will continue execution.

In case the VM is already off, the Hot and Warm options have no effect.

--disable-vmotion: tells ©XSIBackup-DC to stop the vMotion interface while the backup is taking place, in order to prevent VMs from being moved around by the DRS service.

--remote-path: this is the path in the remote system that allows XSIBackup to find its counterpart. You can omit it if XSIBackup can be found in the /usr/bin directory.

--smtp-add: see the e-mail management section for details.

./xsibackup --smtp-add

--smtp-test: see the e-mail management section for details.

./xsibackup --smtp-test


(*) CBT feature only works with default 1MB block size when used along with the --backup action.

(**) You can rotate backups and prune the eldest folders, but you should not delete the previous backup in the CBT sequence; that would break the differential scheme.

New ©XSIBackup based on DC technology, commercialized for all versions since Jan 2021, offers CBT (Changed Block Tracking) technology since version. The first CBT-enabled versions include CBT compatibility for the --replica action; CBT will be available for the --backup command since version (April-May 2021).

To be able to use CBT on any VM you first have to enable CBT for that VM. You do that with the --enable-cbt="VM" argument. You can reset the CBT sequence at any time with the --reset-cbt="VM" argument. Both enabling and resetting the CBT feature imply deleting all snapshots and rebooting the VM; that's what the previous arguments will do.

You can alternatively enable CBT for your VM manually. In either case, a general CBT directive will be added to your virtual machine's .vmx file along with a CBT line for each disk.

Once CBT has been enabled for a VM, you can start using the feature by adding [=cbt] to the --replica (>= or --backup (>= expected Sept 2021) action.

Should some VM or disk not be enabled for CBT, the [=cbt] directive will be ignored. Please make sure that you are indeed controlling how CBT is being applied to your VMs.

--enable-cbt=VM: pass some VM name to this argument to add the CBT directives to the .vmx file. They will be added at the end of the file.

--reset-cbt=VM: this will cause the CBT feature to be reset for the VM, namely: the VM will be stopped, the -ctk.vmdk files will be removed and the VM will be switched on again. The CBT scheme will be initialized, and a full replica or backup will be performed next time you request a CBT replica or backup, that is: every block hash will be recalculated and matching blocks will be sought and placed in the repository or replica as needed.

--replica=cbt (>= enable CBT for the --replica action

--backup=cbt (>= expected Sept 2021): enable CBT for the --backup action


./xsibackup --replica=cbt "VMs(VM1,VM2,VM3)" /vmfs/volumes/backup
./xsibackup --replica=cbt "VMs(VM1,VM2,VM3)" root@

CBT is not a magical resource; for it to work well you need to control your VM configuration and the state of replicas and backups. ©XSIBackup can handle some typical and foreseeable situations, like switching on a remote replica VM to check its working state. In that case ©XSIBackup will detect that circumstance and resync the remote replica before actually allowing you to continue a CBT scheme. If the job is scheduled to run automatically, all those things will happen without any intervention on your part.

Passing a CBT sequence number

./xsibackup --replica=cbt:17 "VMs(VM1)" /vmfs/volumes/backup

The :17 part indicates that CBT operation should be applied from CBT sequence number 17, regardless of what the actual value of the CBT sequence number may be.

You can also achieve a similar result by editing the files containing the CBT sequence information for every -flat.vmdk file and remote target. As each disk may use a different sequence number, you must choose an integer low enough to resend data from a point far enough in the past to satisfy the resync depth. By passing 2 you will typically resync all data from the second CBT pass, which is usually still an amount of data well below the whole data contained in the disks.

Checking the remote replicas

You can use the R option, passing it to --options, to generate a test replica VM that you can switch on to verify the replication state.

(*) Read this post on CBT for a more thorough explanation.