Regarding the number of files in a repo vs the capacity of a file system.
Is it OK to use the default 1MB block size if a repo is stored on a VMFS6 volume?
The total size of the VMs to back up is around 300GB.
Or should I set the block size to 50MB, as it was in XSIBackup-Pro Classic?
Second related question:
If I started backing up to a repo with XSIBackup-DC using a 50MB block size, can I change it to a smaller block size and still back up into this repo without problems?
Don't use VMFS volumes to store deduplicated repositories (*), that is suicidal.
Use XFS or ext4 on NFS volumes, or back up over IP directly to any Linux server.
Do not use big block sizes unless you have no other choice; 99% of the time there is no justification to do so.
You can't change the block size of a repo once you have created it, why would you want to do so?
(*) Please, note that you can replicate data to a VMFS volume without problems, as replicas are just a set of a few files.
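To make the file-count difference concrete, here is a rough estimate for the 300GB figure from the question above, assuming the simple model that a deduplicated repo stores about one file per unique block:

```shell
#!/bin/sh
# Approximate number of block files a 300GB repo produces on disk.
# Real counts will be lower thanks to deduplication; this is the worst case.
SIZE_GB=300
echo "1MB blocks:  $(( SIZE_GB * 1024 )) files"        # hundreds of thousands
echo "50MB blocks: $(( SIZE_GB * 1024 / 50 )) files"   # a few thousand
```

Hundreds of thousands of small files is exactly the workload VMFS handles worst, which is why the block size question matters so much there and barely matters on XFS/ext4.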
Damn, I did that for the last 2 years using the classic version
It was not a big repo, fortunately.
I'll start using the replica action for storing directly on VMFS!
OK, so, asking for more advice:
If my backup server is ESXi, with plenty of storage, is it viable to put a deduplicated backup repo inside a dedicated Linux VM using ext4?
I don't have access to a dedicated NAS for backups, yet.
With the Classic version it was a different thing, as it used a bigger block size.
There are two main concerns when using a file system to store deduplicated content, as the FS effectively acts as a database:
- The FS must be fast
- The FS must be able to handle the data
VMFS is slow, as it was not designed to handle millions of small files, but on the contrary a few big files: the virtual disks.
VMFS is terribly slow when it comes to handling many small files. VMFS-6 is indeed able to handle millions of files, but it is very slow at it. VMFS-5, apart from also being slow, has an inode limit of approximately 130,000 files and folders; when it reaches that limit it just blows up.
When using (c)XSIBackup Classic to accumulate data in old XSITools 1.0 repositories, all these problems are minimized: first of all because you are using a big block size (50MB), which reduces the effect of slow data access by a factor of 50. The VMFS-5 inode limit is also relative, as 100,000 inodes can store 5.0TB of real data, which may in turn host 15-30TB in VMs.
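A back-of-the-envelope check of those figures (the 3-6x deduplication ratio implied by the 15-30TB range is my reading of the numbers above, not a stated spec):

```shell
#!/bin/sh
# ~100,000 inodes, one 50MB block file per inode.
INODES=100000
BLOCK_MB=50
REAL_GB=$(( INODES * BLOCK_MB / 1024 ))
echo "real data: ${REAL_GB} GB (~5TB)"
echo "VM data at a 3-6x dedup ratio: ~15-30TB"
```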
(c)XSIBackup-DC is more optimized for speed and data compression; the Classic version is a toy compared to it. It uses a 1MB default block size, which you shouldn't change and which yields awesome compression figures, apart from offering CBT compatibility and a lot of extra features over the Classic version.
Still, it uses a block size which is 50 times smaller, and this will make VMFS's slow behaviour obvious: processing data will be much slower than expected. As for VMFS-5, don't even try it, for obvious reasons: just 100GB of real data will make it stop responding.
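The 100GB claim checks out against the inode limit mentioned earlier: at the DC default 1MB block size, 100GB of unique data already approaches the ~130,000 inode ceiling of VMFS-5.

```shell
#!/bin/sh
# Block files produced by 100GB of unique data at 1MB per block,
# versus the approximate VMFS-5 inode limit.
echo "$(( 100 * 1024 )) block files vs ~130000 inodes"
```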
Ok. Thank you for taking the time to provide a more detailed explanation.
I understand better now why it really matters not to store the repo directly on a VMFS.
Just an update for anybody reading this.
If you have no other choice, namely: you "really" do not have any other choice than using some VMFS file system to store your deduplicated backups.
(This is literally impossible, as you can always use a VM on top of VMFS and share its storage via NFS or IP.)
So, for the above statement, "NO OTHER CHOICE", to be plausible, you would need to be in a hurry and lack the time or knowledge to devise a proper solution, or be held at gunpoint by some furious guy who wants you to do things wrongly.
If any of the above premises is true and you will still use a VMFS file system against our recommendation and all logic, at least use a big block size of 50MB (--block-size=50M) to minimize the effect of the slow FS and, in the case of VMFS-5, its limited inode count.
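The proper workaround mentioned above (a Linux VM on top of VMFS sharing its ext4 storage via NFS) can be sketched roughly as follows. The device name, mount point, subnet and export options are assumptions for illustration, not from this thread; adapt them to your setup.

```shell
#!/bin/sh
# On the Linux VM: format a dedicated virtual disk as ext4 and mount it.
mkfs.ext4 /dev/sdb1
mkdir -p /backup-repo
mount /dev/sdb1 /backup-repo

# Export it over NFS: append to /etc/exports, then reload the export table.
echo "/backup-repo 192.168.1.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra

# On the ESXi host: mount the share as an NFS datastore
# (hypothetical VM IP), then point your repo at it.
esxcli storage nfs add -H 192.168.1.50 -s /backup-repo -v backup-repo
```

This way the block files live on ext4, which handles millions of small files fine, while the underlying storage is still the same VMFS-backed disks.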