In a remote backup scenario, I have hostA that runs an xsitools job on a remote (cloud-hosted) hostB.
Which host computes the hash for each 50 MB chunk?
Assuming a theoretical throughput of 100 MB/s for hostB and about 80 MB/s for hostA, could reducing the chunk size speed up the whole operation?
In a remote backup scheme, namely "over IP backups", hashes are computed on the originating host, of course; otherwise we would be adding the uncertainty of whether each block was transmitted correctly. The same happens in a local backup.
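To make the idea concrete, here is a minimal sketch (not XSIBackup's actual code) of what per-chunk hashing on the originating host looks like: the source file is read in fixed-size slices and each slice gets its own digest, which the remote end can later verify. The 50 MB chunk size matches the one mentioned above; SHA-256 is an assumption for illustration.

```python
import hashlib

CHUNK_SIZE = 50 * 1024 * 1024  # 50 MB, as in the question above

def chunk_hashes(path, chunk_size=CHUNK_SIZE):
    """Yield a SHA-256 hex digest for each fixed-size chunk of a file.

    This runs wherever the file is read -- in an over-IP backup that is
    the originating host, so each digest can be compared against what
    the remote end actually received.
    """
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield hashlib.sha256(chunk).hexdigest()
```

A smaller `chunk_size` can be passed in for testing; the logic is identical regardless of slice size.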
When you perform backups over IP (Onediff, Rsync or XSIDiff) and you add --certify-backup=yes, the load of comparing the hashes is shared between both hosts, thus the certification is twice as fast as when you perform the same backup locally.
Lowering the block size will not improve hashing speed; on the contrary, you will have to add the CPU cycles needed to slice the extra chunks. In any case, hashing speed is faster than read speed, even with SSDs. This is due to the Intel SHA extensions, which most CPUs incorporate nowadays, so hashing speed is not among the main concerns when trying to improve backup speed.
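A quick way to see why smaller chunks don't help is to time the hash throughput at different slice sizes. The sketch below (an illustration, not a rigorous benchmark) hashes the same amount of in-memory data with a 50 MB and a 1 MB chunk size; the per-chunk overhead grows as chunks shrink, while the hash work per byte stays the same.

```python
import hashlib
import time

def hash_throughput(total_bytes, chunk_size):
    """Hash `total_bytes` of in-memory data in `chunk_size` slices
    and return the rate in MB/s. Smaller chunks mean more loop
    iterations and hash-object setups for the same payload."""
    chunk = b"\0" * chunk_size
    n_chunks = total_bytes // chunk_size
    start = time.perf_counter()
    for _ in range(n_chunks):
        hashlib.sha256(chunk).digest()
    elapsed = time.perf_counter() - start
    return (n_chunks * chunk_size) / (1024 * 1024) / elapsed

if __name__ == "__main__":
    for size in (50 * 1024 * 1024, 1 * 1024 * 1024):
        rate = hash_throughput(200 * 1024 * 1024, size)
        print(f"chunk {size >> 20:3d} MB: {rate:.0f} MB/s")
```

On typical hardware both rates come out well above SSD read speeds, which is the point being made above: the disk, or here the WAN link, is the bottleneck, not the hashing.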
[url=https://33hops.com/xsibackup-vmware-esxi-backup.html]XSIBackup-DC[/url] is certainly your best bet to improve backup speed, as it performs most operations in RAM. In any case, if your bottleneck is your hardware, there won't be much you'll be able to do.
Thank you for the reply. I think my bottleneck is the public internet. I tried DC to test performance, but some unexpected errors stopped the trial. I'll try again with the latest version when possible.