©XSIBackup-NAS Man Page: storage
Last updated on Tuesday 16th of November 2021

Add disks

©XSIBackup-NAS will detect any newly added disks and display them in the Storage => Volumes menu page. They will also be displayed in the Storage => Info page, although that page is purely informative. You may add disks while the VM is on, which is ideal to keep any previously existing storage volume in production while you add more.

©XSIBackup-NAS creates a PV (Physical Volume) per disk, then creates an LVM2 Volume Group on it and leaves 10% of the space free. This is done by default for various reasons:

• Due to the nature of SSDs and how they work, it is recommended to always keep up to 25% of the disk free.
• Leaving 10% free on a regular HD leaves some room to take LVM2 snapshots, which can be useful in many situations.

A Logical Volume is then created on top of the 90% of space taken from the Volume Group, so do not worry if you notice that not all of your available space was used in the process.

Once the Logical Volume has been created, you need to decide how you want to use that space in your virtualization host. Apart from local disks, ©VMWare ©ESXi basically offers two ways to connect to storage resources: NFS and iSCSI.

The next step consists in choosing NFS or iSCSI as the data transport protocol. You should know by now which kind of protocol suits your needs; we will summarize by saying that you should choose iSCSI if you want to host VMs on the volume and NFS if you want to use it to back up VMs. Apart from the convenience of using ©XSIBackup-NAS as a VM storage pool, you can use it for any other purpose you may find it useful for, such as a file server, for instance.

iSCSI LUN

When you add an iSCSI LUN, the GUI takes care of invoking the targetcli binary to build an iSCSI resource that is already configured to work with ©ESXi. It lacks any authentication, thus you may need to use the targetcli binary from the command line to add a username and password if you need them.

When you use a remote iSCSI LUN, what the iSCSI protocol does is make the remote volume appear as a local block storage device, namely a hard disk. From most perspectives it will behave as just that: a hardware device connected to a local controller. In the case of ©ESXi, iSCSI is just another way to format a block device as VMFS, which is interesting, as many ©ESXi features depend on having the VMs run on top of this file system.

Configure a remote iSCSI LUN in ©ESXi

Configuring the LUN in ©ESXi is done from the host's web GUI: enable the software iSCSI adapter, add the ©XSIBackup-NAS IP address as a dynamic target and rescan the storage adapters. After completing the process, the new disk will appear in the Storage section of the GUI and you will be able to use it to host the VMs you create on the ©ESXi host.
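For reference, the following is a rough sketch of what the GUI automates on a newly added disk, and of how CHAP credentials could be added afterwards with targetcli, since the GUI does not configure authentication. The device name, volume names and IQNs are hypothetical examples; list your actual target paths with targetcli ls /iscsi and adapt accordingly.

```
# Rough sketch of the LVM2 layout built on a new disk (device and volume names are examples)
pvcreate /dev/sdb                      # physical volume covering the whole disk
vgcreate vg_sdb /dev/sdb               # volume group on top of the PV
lvcreate -l 90%VG -n lv_sdb vg_sdb     # logical volume on 90% of the space, 10% left free

# Adding CHAP credentials to an existing iSCSI target (IQNs are placeholders)
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.nas:target1/tpg1 \
    set attribute authentication=1
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.nas:target1/tpg1/acls/iqn.1998-01.com.vmware:esxi01 \
    set auth userid=myuser password=mypassword
targetcli saveconfig                   # persist the configuration
```

If you enable authentication on the target this way, remember to enter the same CHAP credentials in the iSCSI software adapter configuration of the ©ESXi host.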
NFS, Network File System v. 3 and 4

NFS is probably the best data sharing protocol available. It is fast, resilient and extremely flexible. ©ESXi allows you to use a remote NFS share to host VMs, although you will lose some of the features offered by VMFS, such as automatic space reclamation in VMFS6 or the ability to jump over zeroed zones when using ©XSIBackup-Pro or ©XSIBackup-DC.

You can create shares in ©XSIBackup-NAS and add them to your ©ESXi host. The first part of the process is similar to that of creating an iSCSI LUN: creating a PV, a Volume Group on it, and finally an LVM2 Logical Volume on 90% of the available space. The only difference with regards to the iSCSI LUN setup is the last step of creating a share, in which you will select NFS.

If you now add a new disk to ©XSIBackup-NAS and go to Storage => Info, you will see a new empty disk in the report of disks and shares. Now reproduce the same steps as in creating an iSCSI LUN, but choose NFS at the end of the process.

When you choose NFS you are required to format the volume with some file system. ©XSIBackup-NAS offers XFS, ext4 and ext3. Any of them is a great choice to store deduplicated backups, not so much to store VMs, although you can still do the latter. Choose ext3 for maximum compatibility with older systems, ext4 for maximum speed, repository size and reliability, and XFS if you want a more modern file system that offers both speed and growth potential.

As in the case of iSCSI, the default values employed in setting up the NFS share ensure compatibility with the ©ESXi hypervisor. NFS3 controls access to the share by the allowed network IP with no user authentication, while NFS4 will ask you to enter user credentials. By default the volume configuration routine in ©XSIBackup-NAS grants access to the root user, both in the case of NFS4 and Samba/CIFS.

The whole folder structure is kept under /mnt: local shares, external shares and restore points. Thus, to modify the permissions of an NFS4 share belonging to an LVM2 volume that has in turn been created on top of a Volume Group of a Physical Volume on a physical disk, just browse to /mnt/volumes/nfs/ and choose the share folder based on the disk name. Apart from the file system location of the shares, there is a configuration file in /usr/xsi/gui/config/shares that holds the most relevant information on each of the shares.

Configure a remote NFS share in ©ESXi

As in the previous case with iSCSI, ©ESXi allows you to turn a remote NFS file system into a local datastore via an NFS binding. You have the option to choose whether you want that share to be version 3 or 4. In the case of NFS3, you just need to pass the remote server's IP and share; that will instantly connect to the NFS share and show it as a local datastore in the ©ESXi file system. In the case of the NFS4 protocol, you will need, in addition, to provide a username and password for the remote file system. The root user is configured by default; any other user will just have read access to the share, so you would need to, at least, assign write permissions to any user you want to employ on the shared resource present at /mnt/volumes/nfs/

To add an NFS datastore to ©ESXi, just click on New datastore under Storage in the ©ESXi Web GUI, then choose the last option in the list: Mount NFS datastore. As previously stated, the root user is configured by default and you can use it to directly mount an NFS3 or NFS4 share, unless you have strict requirements in regards to user access security.
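As a minimal sketch, assuming a share folder named after a disk sdb and a hypothetical user backupuser, granting write access on the NAS and adding the same share as an NFS3 datastore from the ©ESXi shell could look like the following; the Web GUI procedure described above achieves the same result.

```
# On the NAS: grant write access on the share folder to a non-root user
# (path and user name are placeholders, adjust them to your setup)
chown -R backupuser:backupuser /mnt/volumes/nfs/sdb
chmod -R u+rwX /mnt/volumes/nfs/sdb

# On the ESXi host: add the NFS3 share as a datastore from the command line
# (IP address, export path and datastore name are placeholders)
esxcli storage nfs add --host=192.168.1.50 --share=/mnt/volumes/nfs/sdb --volume-name=NAS01
esxcli storage nfs list                # verify that the datastore is mounted
```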
Network configuration

Network access is a vital element to be able to reach the shares in ©XSIBackup-NAS. The appliance comes with a single 10 Gbit NIC configured via DHCP by default, which will be enough in most simple scenarios; it should offer you at least 700 to 800 MB/s in throughput. Apart from that, ©XSIBackup-NAS allows you to easily create network bonds, so that you can aggregate multiple NICs to get more throughput.

To configure additional NICs in ©XSIBackup-NAS, you must first add them to the VM in the ©ESXi host. You can do that while the VM is on, as with disks. Then go to the network configuration menu entry in ©XSIBackup-NAS and select Configure. The newly added NICs will be detected and shown in a list, and you will be able to enter any of them to configure a dynamic or static IP for each one.

Bonding NICs into a single compounded interface

When you bond two or more NICs from the network configuration menu, you receive some messages stating that each NIC has been turned into a slave of the bond, and finally a new window where you can set the network configuration that the new bond will take. Most of the time you won't need to reboot the appliance and the new values will take effect immediately. If you are connected through an SSH client to a TTY, you will generally need to reconnect to the new IP in case you changed it. If you keep the same IP for the bond that you were using to connect through SSH, Putty at least will be able to keep the connection open after the network service restarts.

Bonding Modes

There are a number of different bonding modes, depending on how packets arriving at the bond are treated: round-robin, active/passive, broadcast, etc. If you don't know which mode to choose, select mode 0 (round-robin). For details on each of the modes please read the following documents:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-using_channel_bonding
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/technical_reference/bonding_modes

Mount remote resources

One common scenario you may face is that of having some ©XSIBackup-DC repositories on some server (LAN or Internet) and wanting to restore some files from them. The simplest way to achieve that goal for now is to mount the remote resource containing the repository data as NFS using the GUI. Of course you can also mount that remote share using any other method, protocol or FUSE file system (Azure Blob, Amazon S3, etc.), but for now you would need to do that manually using the CentOS shell, thus we will focus on mounting a remote NFS share using the GUI.

Choose a share; the next screen will offer you a list of mount points to use. Be careful not to use a mount point that is already being used by some other remote share or file system. Once the process completes you will see the newly mounted remote file system in the Restore section, so you will be able to choose it when restoring files and browse the contents of the ©XSIBackup-DC repositories stored there. We will add more options in the future, so that you can use different kinds of remote storage and mount them in some local folder to browse the contents of remote repos.

You can of course use any other mount point in the local file system that is empty. We recommend that you abide by the proposed set of folders though, for the sake of orderliness; otherwise things can really tangle up once you have several mount points set up. You can also browse the mount points in Storage => Mounts. From there you will also be able to browse their contents or to unmount the remote file system when you are done using it.
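If you prefer to mount a remote repository manually from the CentOS shell instead of using the GUI, a minimal NFS example could look like the sketch below; the server IP, export path and mount point are hypothetical and should be adapted to your environment.

```
# Hypothetical example: mount a remote NFS export holding an XSIBackup-DC repository
mkdir -p /mnt/remote/backupsrv
mount -t nfs 192.168.1.50:/mnt/volumes/nfs/sdb /mnt/remote/backupsrv

df -h /mnt/remote/backupsrv            # check that the mount succeeded
ls /mnt/remote/backupsrv               # browse the repository contents

umount /mnt/remote/backupsrv           # unmount when you are done
```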