
© XSIBACKUP | How to build the perfect SME backup system.

In this post I will dissect one of the major keystones in the life of a Systems Engineer or Systems Administrator, using my favourite method of thinking: "Maieutics". So I will start by asking myself aloud: what would I ask my backup system to accomplish? I am not going to take anything into account but my wildest dreams in this matter. If later on I have to discard something because I dreamt too wild, I will do so and explain my reasons. For the sake of clarity, I will assume we have a set of ESXi servers holding around 2 TB in VMs, on a typical 1 Gb Ethernet LAN with a decent 100/10 Mb fiber-optic broadband Internet connection. That is very common in an average SME anywhere in the world.

• Well, first of all I would like to have a set of VMs ready to be used in case something goes wrong with the production set. Depending on your needs and backup windows, this set can be created nightly or more often; by using the latest versions of © OneDiff you can even have a set of VMs replicated almost in real time (depending on the number of VMs, the lag will vary from one to several minutes). This set must be replicated to a regular datastore within your system, the faster the better, always taking into account that it is a mirror set, and thus any sort of data corruption or threat present in the production set will rapidly be transmitted to the mirrored set. So we also need an archive that lets us go back in time to a point at which our VMs were healthy, should some WannaCry-style ransomware infect our Virtual Machines.

• I need some backup software to back up my VMs. If you have just one server, © XSIBACKUP-FREE can do part of the work and keep a set of mirrored VMs at some datastore, but it will copy them fully every time. If you can afford to buy XSIBACKUP-PRO, then OneDiff can help you reduce the replication time to a few minutes, even for terabyte-sized VMs.

• I want to exploit my storage room to the maximum extent, so I could use some sort of deduplication; if it works at the filesystem level, that saves me time and eases my work. I am going to back up VMs, and they will share many blocks, so I can store my VM backups on a virtual storage device with a deduplicated filesystem. You can use Windows Server, or LessFS if you don't have the budget to acquire an MS license; read this article for an overview of the matter and the facts. I will have a set of local backups, and deduplication will hopefully allow me to store some dozens of them (or even hundreds of sets), depending on the available backup storage capacity. You can typically expect deduplication ratios over 85%, growing even bigger as you add more data on top. That's great, isn't it? Deduplication is ideal for backups.

Using an in-line deduplicated File System, one that deduplicates data and places it in a regular File System that you can share and access transparently like any other FS (ext4, NTFS, FAT32, etc.), is very convenient: you can access data immediately, and even run your VMs from the deduplicated FS (it works, but it will not bear many concurrent users unless you have a beast of a server acting as a NAS). Nevertheless, there is a drawback: real-time deduplicated File Systems are insanely resource-hungry. So, as we already have a way to run our backups directly from a regular datastore, we can use a lightweight deduplication system instead.

© XSITools is our response to our own necessity, and probably many other people's. It is a lightweight deduplication engine that stores its data in transparent repositories where the data blocks are directly accessible. We have designed it to have a minimal footprint. It is so lightweight that you won't notice its impact in terms of CPU usage; you could perform your backups during regular business hours and your users would not be able to tell the difference.
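To make the idea concrete, here is a minimal sketch (not XSITools itself) of fixed-size block deduplication: files are split into chunks, each chunk is named after its hash and stored only once, and a per-file manifest lists the hashes needed to rebuild it. Paths, the 1 MB demo chunk size and the `dedup_store` helper are illustrative assumptions.

```shell
#!/bin/sh
# Illustrative sketch of fixed-size block deduplication (not XSITools itself).
# Each chunk is stored under its SHA-1 hash, so identical blocks shared by
# several files (or several backups) occupy disk space only once.
REPO=/tmp/dedup-repo
mkdir -p "$REPO/blocks"

dedup_store() {                            # usage: dedup_store <file>
    name=$(basename "$1")
    split -b 1048576 "$1" /tmp/chunk.      # 1 MB demo chunks (XSITools uses 50 MB)
    : > "$REPO/$name.manifest"
    for c in /tmp/chunk.*; do
        h=$(sha1sum "$c" | cut -d' ' -f1)
        # store the block only if this hash has never been seen before
        [ -f "$REPO/blocks/$h" ] || mv "$c" "$REPO/blocks/$h"
        echo "$h" >> "$REPO/$name.manifest"
        rm -f "$c"
    done
}
```

Storing two backups that share most of their blocks then adds only the few blocks that actually changed, which is why dedup ratios climb as more backup sets pile up.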

To accomplish all the desired operations, the xsibackup-cron file would look like this:
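The original listing did not survive in this copy; what follows is a hedged sketch of what such an xsibackup-cron file could contain, built only from the arguments discussed in this post (--backup-prog=onediff, --override=xsibakfilter and the two backup ids). The install path, VM names, schedules and the --backup-point / --backup-type / --backup-id arguments are illustrative assumptions; check the XSIBackup documentation for the exact syntax of your version.

```shell
# xsibackup-cron (illustrative sketch, not verbatim XSIBackup syntax)

# backupId=00: nightly OneDiff mirror to a fast local datastore.
# --backup-prog=onediff requires XSIBACKUP-PRO; without it, XSIBACKUP-FREE
# still works but performs a full copy of each VM every run.
"/vmfs/volumes/datastore1/xsi-dir/xsibackup" --time="Mon-Sun 23:00" \
    --backup-point="/vmfs/volumes/mirror-ds" --backup-type=custom \
    --backup-vms="DC01,SQL01,WEB01" --backup-prog=onediff --backup-id=00

# backupId=01: weekly archive of the mirrored _XSIBAK VMs into a
# deduplicated XSITools repository; --override=xsibakfilter makes the
# normally hidden _XSIBAK VMs selectable.
"/vmfs/volumes/datastore1/xsi-dir/xsibackup" --time="Sat 02:00" \
    --backup-point="/vmfs/volumes/backup-ds/xsitools-repo" --backup-type=custom \
    --backup-vms="DC01_XSIBAK,SQL01_XSIBAK,WEB01_XSIBAK" \
    --backup-prog=xsitools --backup-id=01 --override=xsibakfilter
```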

The argument --backup-prog=onediff at backupId=00 is optional; use it if you own an XSIBACKUP-PRO license. Backups will still be made with XSIBACKUP-FREE, but they will take longer, as the whole VMs are copied each time.

We are using an XSITools backup as backupId=01 to archive the OneDiff backup to a deduplicated repository; the --override=xsibakfilter argument lets you override the default behaviour, which is to hide _XSIBAK VMs from the VMs available to XSIBackup. If you can't afford an XSIBACKUP-PRO license, you can use a real-time deduplicated File System, or limit your archive to what fits in your backup storage device without deduplication.

• I also want a set of offsite backups to protect against destruction or theft of the hardware (fire, earthquakes, robbery). I like to be positive, but these things happen, and you need to cover yourself in those cases too. So, how do we accomplish this?

Now comes the time to decide whether one of those popular cloud services is suitable to store our archive of VMs. The truth is that uploading a typical set of VMs, averaging around 500 GB, to a cloud provider can be a wonderful way to test your patience. We encourage our users to take control of their data; what we mean by that is that a dedicated server will empower you with many strategies and possibilities that you will not have with a public cloud service. Not only that: a public cloud service is someone else's server, while your own dedicated server is your own responsibility.
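A quick back-of-the-envelope calculation shows why patience is needed. Assuming the 10 Mbit/s uplink from the scenario above, decimal units and zero protocol overhead, pushing 500 GB takes about 400,000 seconds, i.e. roughly four and a half days of saturated upload:

```shell
# Upload time for 500 GB over a 10 Mbit/s uplink (decimal units, no overhead):
# 500 GB = 500 * 8 * 1000 Mbit; divide by the 10 Mbit/s line rate.
seconds=$((500 * 8 * 1000 / 10))
printf '%s seconds, about %s days\n' "$seconds" "$((seconds / 86400))"
```

Real transfers are slower still once TCP overhead, provider throttling and working-hours bandwidth sharing are factored in.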

A very common scenario is that of a multi-seat organization that allows the sysadmin to keep a dedicated server in his own domain, or a hired dedicated server at some datacenter, which will cost about the same as a dedicated fiber-optic line.

And never underestimate the poor man's choice: backing up to removable media and always keeping a copy out of the office. This requires establishing a human protocol, but that is always a good thing inside an organization (it helps to keep it an organized group of people working on a task together), and you can even combine it with an offsite backup.

If you go for a dedicated server of some kind, your own or external, you can take advantage of general-purpose tools and have full control. As © XSITools creates repositories composed of 50 MB chunks, you can easily use Rsync to synchronize your repository offsite in a really efficient manner. Just launch an Rsync task from within the context of your NAS device's OS, using size + datestamp comparison, and you'll have your repos synced in minutes with minimum overhead. You could launch Rsync from within the ESXi shell, but that is more limiting: VMFS date metadata is incompatible with other File Systems when using Rsync, so you would need to compare each chunk fully, calculating hashes and making the whole process more time- and resource-consuming.

If you decide to use a public cloud service, we recommend Hyper Backup, available on all Synology devices. This simple but powerful backup tool can connect to different cheap cloud backup services based on the OpenStack API, like Rackspace, HiDrive, etc. Not only that, it has deduplication features built in, so you can upload your VM backups to the cloud to be totally safe; on top of that, you can store hundreds of backups and roll back in time in case you need a backup from two months ago holding an untouched version of a given file.
