Last updated on Monday 28th of February 2022 08:52:48 PM
©XSIBackup, how to achieve faster data throughput
©XSIBackup is a tool, not end-user software: it won't keep you from doing anything that the OS allows you to do. And, as Spider-Man's Uncle Ben put it:
"with great power comes great responsibility".
Try to always use the same software and hardware components to avoid nasty surprises. We recommend the CentOS operating system and Intel network hardware, which will
go a long way toward minimizing compatibility and performance issues. We have created ©XSIBackup-NAS to
offer you an easy-to-deploy appliance that you can use as a keystone
for your projects.
Depending on how you use it, you will achieve different results. It's obvious to all of us that if we use a hammer the wrong way around,
we won't be as efficient at hammering nails as we would be using it the right way. Nonetheless, when things stop being so obvious and we need to chain
some thoughts, we tend to ignore what we don't like or don't understand, and try to build things up while staying in our comfort zone.
This is never a good strategy in the long term, so we'll try to clarify some key points so that you can choose the best topologies when designing your
backup jobs. This is not something you can avoid delving into, so please take your time to understand at least the basics.
The most important thing to take into account is to avoid storing deduplicated repositories in VMFS volumes, as the available number of inodes is insufficient.
Previous versions of our deduplication engines allowed this, as the block size they used was 50MB. ©XSIBackup-DC uses a default block size of 1MB,
which yields a better compression ratio and speed; it forces you to use some alternative file system though, which is not an issue, as ©ESXi allows you to connect to NFS datastores.
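To see why inode exhaustion is a real risk, here is a back-of-the-envelope calculation. The 1MB block size is the documented default; the VMFS inode budget used below is only an illustrative order of magnitude, not an exact limit for your volume:

```python
# Rough estimate of how many block files a deduplicated repository creates.
# A 1 MB block size is XSIBackup-DC's documented default; the VMFS inode
# budget below is an assumed order of magnitude, for illustration only.
BLOCK_SIZE_MB = 1

def files_needed(repo_size_gb: int) -> int:
    """Worst-case number of block files for a repository of this size."""
    return repo_size_gb * 1000 // BLOCK_SIZE_MB  # decimal GB -> MB

vmfs_inode_budget = 130_000  # illustrative inode budget for a VMFS volume

for size_gb in (50, 100, 500):
    n = files_needed(size_gb)
    verdict = "fits" if n < vmfs_inode_budget else "exhausts inodes"
    print(f"{size_gb} GB repo -> ~{n:,} block files: {verdict}")
```

Even a modest repository burns through inodes at a rate a general purpose file system handles easily but a VMFS volume does not.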
©XSIBackup's simplest backup job writes to some local datastore. You won't need to exchange keys with other servers, and target routes are immediately
available through the ©ESXi shell. Nonetheless, there are still very important decisions to be made.
/!\ You MUST be acutely aware of which file systems you are reading data from and writing to. If you don't care, or you forget to pay attention, you may inadvertently
shoot yourself in the foot.
The most important thing to note is that ©XSIBackup can perform two basic types of operation on your data:
--backup: stores your data decomposed into thousands or millions of small files in a deduplicated repository, thus achieving a high level of compression, up to 98% as you reach 40 to 50 restore points.
--replica: replicates the structure of a VM to a different directory, local or remote. This implies creating just a handful of files: the ones composing your VM plus the block manifests.
The second thing to note, no less important, is that you may format local datastores as VMFS volumes, or you may mount them
via NFS and thus use any underlying file system that the NFS server allows: XFS, ext4, ext3, BTRFS, etc.
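The gain you get from --backup's deduplication can be illustrated with a toy sketch. This is not ©XSIBackup's actual engine, just the general fixed-size-block deduplication technique it describes: each restore point only adds the blocks whose hash is not already in the repository.

```python
import hashlib

BLOCK = 4  # toy block size for the example; the real engine uses 1 MB blocks

def chunk(data: bytes):
    """Split data into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

repo = {}  # hash -> block contents; on disk, one small file per block

def backup(data: bytes) -> int:
    """Store a restore point; return how many NEW blocks it added."""
    new = 0
    for b in chunk(data):
        h = hashlib.sha256(b).hexdigest()
        if h not in repo:
            repo[h] = b
            new += 1
    return new

first = backup(b"AAAABBBBCCCCDDDD")   # initial backup: every block is new
second = backup(b"AAAABBBBXXXXDDDD")  # one block changed: only that one is stored
print(first, second)                  # -> 4 1
```

The second restore point costs a single block, which is why compression climbs so high as restore points accumulate.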
You should pay attention to the following concerns:
Replicating data is not the same as backing it up...
Replicating data is by no means a way to back up your virtual machines. When you replicate data you overwrite the mirror set, unless you mirror to
dynamic folders (please note that that article was written for an older version of ©XSIBackup, but the same principles apply);
thus, do not replicate when you want to back up. When you replicate, you just clone the contents of some disk(s); you mirror everything inside, including viruses, corrupt file systems, missing files, etc.
Don't backup to a VMFS volume (*)
(*) Unless you know exactly what you are doing.
When you replicate your VMs you are creating mirror copies of your original VMs, thus you are creating a bunch of files in the target folder, just a few files. If you do so to a VMFS volume,
great, nothing to worry about. You may even power your copies on and use them in their container datastores.
Nonetheless, you should never perform deduplicated backups to a VMFS volume. If you do, you will exhaust the available inodes and render your VMFS volume unusable.
Place deduplicated repositories in any XFS, ext4, ext3, BTRFS, etc. file system on an NFS datastore. The best file systems to host repositories are XFS and ext4, due to their speed and resiliency.
You may also place deduplicated repositories on any Linux system over IP, after first exchanging keys by means of the --add-key argument.
Volumes in the remote Linux server can also be XFS, ext4, ext3, etc.; any general purpose file system is tuned to host millions of files and will be a perfect match.
If you like risk sports you can also experiment with any possible combination: Linux on Windows 10, NTFS over NFS, etc. We don't offer support for exotic chimera combinations, but you may give them a try; they should
work most of the time. You do so at your own risk though; remember we offer support for running ©XSIBackup-DC on ©ESXi and Linux on adequate hardware (some people try to host backups on Raspberry Pis).
Replicate to a VMFS volume or to any other native file system: xfs, ext4, ext3, btrfs, etc...
As general purpose file systems are designed to host small to huge files using millions of inodes, you are safe to --backup or
--replica to any known working file system. Nonetheless, please try to stick to well-known file systems; leave risk sports for your leisure time.
Be careful when writing your backup paths
If you set your backup target to a non-existent path, e.g.: /vnfs/volumes/backup (please note that the correct path would be /vmfs/volumes/backup), your backup will be saved to a folder named /vnfs
in the root file system; you will fill it up and your server will stop responding.
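A simple guard in your job script can catch such typos before any data is written. This is a generic shell sketch, not an ©XSIBackup feature:

```shell
# Refuse to run a backup if the target directory does not already exist,
# so a typo such as /vnfs/... cannot silently fill the root file system.
check_target() {
    [ -d "$1" ]
}

TARGET="/vnfs/volumes/backup"   # deliberately mistyped example path
if check_target "$TARGET"; then
    echo "target OK: $TARGET"
else
    echo "refusing to run: $TARGET does not exist" >&2
fi
```

Place the check right before the backup invocation; an existing datastore mount passes, a mistyped path aborts the job instead of writing to the root volume.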
Over IP backups
As a rule of thumb, you should prefer over-IP backups to backing up to local datastores as soon as you expect sustained latency above 2 ms. This is not only to get rid of the accumulated latency,
thanks to the built-in network algorithm, but also because the CPU load will be balanced between the ©ESXi host and the backup server's CPU.
Use decent network hardware
Many of the cheapest NICs, like Realtek chipsets (gorgeous NICs for desktops), don't have a CPU to perform network-related tasks; they rely on the main CPU for some operations, which requires the driver to be prepared
for this. Realtek 1 Gb NICs will, for instance, work well on Windows and Linux desktops, but will perform badly on servers, where they need to sustain high average data throughput. We recommend that you use Intel NICs, or any
other brand which is recommended by ©VMware and has its own CPU. Of course, if you can afford 10 Gb cards you should use them.
Needless to say, switches are fundamental to keep things going in a fast, sustained way. Don't rely on cheap switches to handle your network traffic; there is really good 24-port hardware well below 200.00 USD. If you buy some
low-end brand, you won't be able to achieve more than 15 to 20 MB/s averages (at most); in contrast, some cheap Intel desktop 1 Gb card plus a MikroTik switch will give you 60-80 MB/s of sustained network throughput.
Minimize network latency: it all adds up
©XSIBackup's operations over IP have been designed to overcome network latency by grouping data exchange operations between the host and the backup server. Nevertheless, when you work in a local context
the program's routines will assume latency is low enough.
What do I mean by that?:
If you have to back up through a WAN with 500 ms of latency, for goodness' sake, exchange keys and use a remote folder target (firstname.lastname@example.org:port:/path/to/your/repos); do not attach your datastore over iSCSI or NFS
over a VPN on top of a WAN link and then perform backups to that locally attached datastore. If you do, ©XSIBackup will have to look for every individual chunk over the VPN and will take half a second just to
find out whether a given block exists.
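In script form, the recommended setup looks roughly like this. This is a dry-run sketch that only echoes the commands: the install path, host name and port are placeholders, and the exact option syntax may vary between versions, so check your version's manual:

```shell
# Dry-run sketch: commands are echoed, not executed.
XSIBACKUP=/vmfs/volumes/datastore1/xsi-dir/xsibackup   # hypothetical install path

# One-time key exchange with the backup server:
echo "$XSIBACKUP --add-key root@backupsrv.example.com:22"

# Back up straight to the remote repository over IP, instead of mounting
# the remote storage over a VPN and then writing to it "locally":
echo "$XSIBACKUP --backup /vmfs/volumes/datastore1/MyVM root@backupsrv.example.com:22:/backup/repo"
```

The point is the shape of the target: a remote folder route lets the program batch its block lookups, while a VPN-mounted datastore forces one round trip per chunk.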
Isn't LAN latency negligible?
If you attach an NFS share to your ©ESXi host as a datastore and place your backups there, you will be adding some small network latency to your disks' latency. That's OK; most of the time this is not significant. If you
have a decent backup network, your latency should be well below 1 ms; as a result, the total time added by network latency to your backup time will be negligible.
However, sometimes networks aren't what they should be, and you may find yourself trying to get camels through the eye of a needle. In those cases, you may find it more convenient to use a virtual appliance (some VM) exposing an NFS
file system to the very same server hosting it. Latency will be the lowest possible and your backup speed will improve dramatically. In these cases you should place the appliance on a different disk and, if possible, on a different
controller; that will maximize throughput.
Let's say you have a backup LAN with an average latency of 10 ms and that you are backing up a disk which is 100 GB in size, that is, 100K 1 MB chunks. If you back that disk up to an NFS share, the network latency will add
10 ms x 100,000 chunks => 1000 seconds to the backup time, apart from the time it takes to copy the data.
If you back up over IP to another ©ESXi host which is in turn connected to a NAS server in the same LAN, you will
have to multiply that figure by 2, as you incur 10 ms to get to the ©ESXi server and then another 10 ms to get to the NFS NAS. In this case it would be wiser to perform a backup over IP straight to the NFS server and save 1000 to
2000 seconds of backup time, as well as load on the network switch.
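The arithmetic above, as a small sketch you can adapt to your own figures (it assumes the worst case of one round trip per 1 MB chunk, as described):

```python
def latency_overhead_s(disk_gb: int, rtt_ms: float, hops: int = 1) -> float:
    """Seconds added by network round trips alone, assuming one
    round trip per 1 MB chunk (decimal GB, so 1 GB = 1000 chunks)."""
    chunks = disk_gb * 1000
    return chunks * (rtt_ms / 1000.0) * hops

print(latency_overhead_s(100, 10))          # NFS share on a 10 ms LAN -> 1000.0 s
print(latency_overhead_s(100, 10, hops=2))  # via a second ESXi host   -> 2000.0 s
```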
Do not use redundant encryption and/or compression
Have you ever chewed sand? Probably just the thought of it makes you shiver. Then why torture your CPU by making it perform redundant tasks? If you use ©XSIBackup's compression (the default), don't also set
compression on the VPN and in the backup file system. You will achieve nothing and will, on the contrary, get extremely poor results.
You may very well turn compression off (--compression=0) in your backup job and turn it on in the remote file system. That will most of the time enhance speed, as you will be reducing the load on the ©ESXi host's CPU
and using the CPU of the remote backup server to compress the data instead.
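For example, if the remote repository lives on btrfs (an assumption for illustration; XFS and ext4 lack transparent compression), the file system can compress blocks as they are written while the job itself runs with --compression=0:

```
# /etc/fstab on the backup server: btrfs compresses blocks on write,
# so the ESXi host's CPU is spared the compression work.
/dev/sdb1  /backup  btrfs  compress=zstd  0  2
```

Either way, make sure exactly one layer compresses; stacking compression layers only wastes CPU cycles.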
Use a separate network segment to do backups
In most SMEs, you can perform backups during the night, as the network will be idle during those hours. Unfortunately this is not possible in corporations that run 24x7, or in case you need to perform multiple backups
during the day.
The Show Must Go On!
Can you imagine how hellish your life can become if you run a backup every 3 hours on a network that is being used by 200 workers? The switch will saturate on every backup cycle, as will the server's disk controllers. Your phone will
start ringing, people will complain that they can't do their work, clients will have to wait, and everything will start to fall into a downward spiral.
Fortunately, we can avoid this nightmarish scenario by designing a good backup strategy. The best approach is to use a dedicated controller or NIC to get your data out of the production server. A 10 Gb NIC is a must if
you can afford it. This will free the production disks' controller from being hit by your backup; it will still hit your CPU, but that is something you can accept if your server is correctly sized for the job it has to do.
The first backup cycle will cause the biggest load on the server, so you should choose a time window in which the production load is lower. From then on, all subsequent backups' load will be much lighter, as only the changed blocks will
be transferred and the hash computational load will be minimized by the Intel SHA Extensions built into the CPU, which ensure
few CPU cycles are spent on this task.
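The per-block cost on subsequent cycles is essentially one hash over 1 MB of data. In Python terms, a sketch of the technique (not the program's actual code; SHA-256 is used here for illustration, and whether hardware SHA extensions are engaged depends on the underlying crypto library):

```python
import hashlib

block = b"\x00" * 1_000_000  # one 1 MB chunk of VM disk data

def block_id(data: bytes) -> str:
    """Content-derived identifier used to decide whether a block
    is already stored in the repository (no transfer needed)."""
    return hashlib.sha256(data).hexdigest()

h = block_id(block)
print(h[:16], "...")  # identical data always yields the identical id
```

Because the identifier depends only on the contents, unchanged blocks hash to a known id and are skipped, which is why incremental cycles are so much lighter than the first one.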
Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries.
Daniel J. García Fidalgo