33HOPS, IT Consultants
33HOPS, Sistemas de Informacion y Redes, S.L. (IT Solutions Providers), Avda. Castilla la Mancha, 95, local posterior, 28700 S.S. de los Reyes, Madrid. +34 91 663 6085


XSIBackup user: On a Gigabit network, what is the expected throughput?

Throughput from the ESXi shell appears to be capped: the NIC delivers far less than expected. There is no official confirmation from VMware, but the matter seems to be related to the CPU cost of encryption. The ESXi shell uses only one of the available cores, so encryption can become a serious bottleneck when transferring big files over SSH, as xsibackup-rsync does. The solution is more powerful individual cores, which is difficult to get in the short term on a low budget (see the update at the end of the page).
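One way to check whether the SSH cipher is the limiting factor is to push a stream of zeros through the tunnel and time it; this takes the disks out of the equation, so the result reflects cipher plus network cost only. A minimal sketch, where the host name `backuphost` is a placeholder:

```shell
# Hypothetical single-core cipher benchmark from the ESXi shell.
# Sends 1 GiB of zeros over SSH and discards it at the far end;
# "backuphost" stands in for your backup server.
time dd if=/dev/zero bs=1M count=1024 | ssh backuphost "cat > /dev/null"
```

If this pipe tops out near the same figure you see with rsync while one core sits at 100%, the bottleneck is the cipher, not the disks or the NIC.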

We use fairly old hardware for our tests and development (which pushes us to optimize everything a bit more) and get transfer speeds between ESXi hosts (rsync, shell to shell) of around 30 MB/s (figure updated in Jan 2017), using two i3 machines with regular Seagate hard disks and a 120 GB SSD cache disk. This is far less than expected from an Intel Gigabit NIC, which is our reference (we use desktop and server NICs with various chipsets: 82574L, 82576). As CPU usage stays relatively low and disk throughput is well beyond that empirical limit, there seems to be a clear bottleneck in the network speed achievable from within the shell. On rsync host-to-host synchronizations the first load of data will be fairly slow, but subsequent updates will be a lot faster, about twice as fast.

If you are running this on a LAN you can always make the first copy by other means. Bear in mind that the preferred backup method is to a datastore, which ensures a really fast copy. Rsync transfers are an alternative for standalone servers sitting in a hosting environment where you cannot link to a datastore properly, or for people wishing to maintain a fresh copy as "a kind of" failover strategy. In any case, my first attempt, even in a locked-down environment like a dedicated ESXi server, would be to link to an NFS server and do a regular copy, as this will be a lot faster. If you connect to a remote NFS server over a WAN, you will need to enable asynchronous NFS for the mount to be usable. If I had limited bandwidth and/or unpredictable network instability, I would then go for an rsync strategy.
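As a sketch of the NFS route: on ESXi a remote NFS export can be attached as a datastore with `esxcli`. The host, share and datastore names below are placeholders, and the `async` option is set on the NFS server side, not on ESXi.

```shell
# On the NFS server (e.g. a Linux box), export the share with async writes
# so a WAN-mounted datastore stays usable (note the durability trade-off):
#   /etc/exports:  /backups  esxi-host(rw,async,no_root_squash)
#
# Then, from the ESXi shell, mount the export as a datastore:
esxcli storage nfs add --host=nfs-server --share=/backups --volume-name=BACKUP-DS
# Verify the mount:
esxcli storage nfs list
```

Once mounted, XSIBackup can target `BACKUP-DS` like any local datastore, which is the fast path described above.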

We added many new options and file copy programs to XSIBACKUP-PRO in 2016 and 2017, from v6.0 to v8.0. You can now use Borg Backup as a data backend and obtain effective speeds over 50 MB/s on modest hardware, while taking advantage of combined deduplication and compression. If you use OneDiff you will get effective average speeds over 700 MB/s, calculated over the total size of the file, although only the differential data is actually transferred. In 2017 we will be adding our own proprietary file copy and replication mechanism, which will offer native block-level deduplication on VMFS and other filesystems attached via a datastore, as well as block-level deduplicated differential copy over IP.
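To illustrate how that "effective" OneDiff figure is computed: the rate is the full file size divided by wall-clock time, even though only the changed blocks travel over the wire. The size and timing below are hypothetical:

```shell
# Hypothetical differential run: a 100 GB virtual disk synced in 146 s.
vm_size_mb=$((100 * 1024))    # total virtual disk size in MB
elapsed_s=146                 # wall-clock time for the differential pass
echo "$((vm_size_mb / elapsed_s)) MB/s effective"
```

This is why the effective figure can exceed the physical wire speed: the denominator is the time for the differential pass, while the numerator is the whole file.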

Daniel J. García Fidalgo

IT Manager