©XSIBackup-Free: Free Backup Software for ©VMWare ©ESXi

Forum ©XSIBackup: ©VMWare ©ESXi Backup Software



#1 2021-05-29 11:41:11

Corbeau
Member
Registered: 2021-02-07
Posts: 25

using DC to maintain a set of replicas

I'm at the final stages of implementing the system below and would like to sanity check what I am doing, and also check that I am using XSIBackup DC correctly.

Small business running one Windows Essentials server (AD, DHCP, DNS, file server, etc.)
The production host is a second-user ProLiant Gen 9.
Important: there is a requirement to minimise any downtime should a failure happen, so if possible we want a replica ready to switch on.
There is an older ProLiant Gen 8 as backup hardware should the main server host fail. Let's call this the replica host.
There are onsite and offsite NAS boxes for backups, using Synology's ESXi backup.
Both ESXi hosts are running ESXi 6.7 U3 free.
There is a 10Gbps link between the ESXi hosts.
Both servers have higher specs than needed.
XSIBackup DC will only be used for replication.

As replicas can be turned on immediately, the plan is to maintain a set of replicas on the replica host.
In case of disaster, if a replica could not be used for DR then we would fall back to the backups.

I am planning to keep 3 replicas on the replica host.
A cron job is set to run at 9pm:
1: Mon & Thurs
/vmfs/volumes/ssd/bin/XSIBackup-DC/xsibackup --replica=cbt "VMs(SERVERDC01)" root@Replica-Host:22:/vmfs/volumes/datastore1/replicas/1 --options=R
2: Tues & Fri
/vmfs/volumes/ssd/bin/XSIBackup-DC/xsibackup --replica=cbt "VMs(SERVERDC01)" root@Replica-Host:22:/vmfs/volumes/datastore1/replicas/2 --options=R
3: Wed & Sat
/vmfs/volumes/ssd/bin/XSIBackup-DC/xsibackup --replica=cbt "VMs(SERVERDC01)" root@Replica-Host:22:/vmfs/volumes/datastore1/replicas/3 --options=R

In case of production host failure there are 3 replicas to choose from:
If down Monday morning: 3 (Sat 9pm), 2 (Fri 9pm), 1 (Thurs 9pm)
If down Tuesday: 1 (Mon 9pm), 3 (Sat 9pm), 2 (Fri 9pm)
etc.

Why 3? There is space on the server, and holding 3 replicas seems more secure than one, for instance if one replica were somehow corrupted when needed. There may also be some unforeseen need to roll back the server a day or two.
I'd like to have more instantly bootable replicas and wondered whether there is a way to have, say, 3 weeks' worth: 3 different replicas, each with 7 days of snapshots?

In a disaster, if the production host was still running and the XSIBackup-DC job ran, would it overwrite a running replica?
I imagine killing the cron job could easily be overlooked in this situation.

Are there any downsides to using the --options=R option? If I understand correctly this would always keep a snapshot so the replica could be turned on as a test, with no intervention required.
In a DR situation would the snapshot be run, becoming the production server, or would it be consolidated first? Or would I just delete the snapshot?
(I note that to adhere to Microsoft's licensing we'd need an extra server license to be able to test a replica while the production server is running; if the replicas remain switched off they wouldn't need an extra license!)

Hope that lot makes sense and many thanks.

Offline

#2 2021-05-30 10:58:47

admin
Administrator
Registered: 2017-04-21
Posts: 2,055

Re: using DC to maintain a set of replicas

Yes, that's a recommended setup. Please note that you can reduce it to a single job by using job syntax like the example below:

/scratch/XSI/XSIBackup-DC/xsibackup \
--replica=cbt \
"VMs(RUNNING)" \
/vmfs/volumes/backup/replicas/$(( $(date +%j) % 5 )) \
--options=R \
--use-smtp="1" \
--mail-to=mail-from \
>> /scratch/XSI/XSIBackup-DC/var/log/xsibackup.log 2>&1

In the above job the code snippet $(( $(date +%j) % 5 )) takes the day of the year (1-366) and computes its modulus 5, which would generate:

/vmfs/volumes/backup/replicas/0
/vmfs/volumes/backup/replicas/1
/vmfs/volumes/backup/replicas/2
/vmfs/volumes/backup/replicas/3
/vmfs/volumes/backup/replicas/4

Five different replicas. You just have to change the modulus divisor (the 5) to have any number of replicas with a single job.
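As an illustration, the same rotation can be wrapped in a small script and called from a single cron entry. This is a minimal sketch reusing the paths from the example above; the script name is hypothetical:

#!/bin/sh
# replica-rotate.sh: pick one of N replica directories from the day of
# the year, so one cron entry rotates across replicas/0..N-1
N=5
DOY=$(expr "$(date +%j)" + 0)   # strip leading zeros so e.g. 039 is not read as octal
DIR=$(( DOY % N ))
/scratch/XSI/XSIBackup-DC/xsibackup \
    --replica=cbt \
    "VMs(RUNNING)" \
    "/vmfs/volumes/backup/replicas/${DIR}" \
    --options=R \
    >> /scratch/XSI/XSIBackup-DC/var/log/xsibackup.log 2>&1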

The --options=R argument keeps a test snapshot on the remote end. Please do understand that this snapshot is used just to store the information generated while testing the replicated VM, and that it is obliterated on the subsequent CBT cycle. That is, the snapshot is totally removed, none of its information will survive, and the VM will always be returned to its replicated state, an exact mirror copy of the original VM.

If you manipulate this snapshot on your own, for instance by deleting it through the (c)ESXi web interface instead of letting the --replica process handle it, you will most likely ruin the replication process and you may damage your VM.

If you set up some kind of active/passive cluster, you will have to STONITH (Shoot The Other Node In The Head) before turning a replica into the production server.

In automated cluster environments this is done by the cluster management layer. When using (c)XSIBackup to keep a set of replicas, you will normally do this by hand, as (c)XSIBackup is more on the side of a backup utility than of a real-time replication system. Since distributed file systems such as Gluster or Ceph have reached a mature stage, you would normally use that kind of tool as the base of a multinode cluster.
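By hand, that amounts to something like the sketch below, using standard vim-cmd calls (the inventory id is illustrative):

# After making sure the dead production host cannot rejoin the network
# (e.g. it is powered off via its iLO, or its uplinks are pulled):
vim-cmd vmsvc/getallvms | grep SERVERDC01   # find the replica's inventory id
vim-cmd vmsvc/power.on 12                   # power it on (id 12 is hypothetical)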

Offline

#3 2021-05-31 11:09:30

Corbeau
Member
Registered: 2021-02-07
Posts: 25

Re: using DC to maintain a set of replicas

Thanks for that. Will use that code.

I have a couple of issues that are probably down to my lack of grokking this properly. I created a replica like so:
xsibackup --replica=cbt "VMs(SERVERDC01)" root@replica-esxi:22:/vmfs/volumes/datastore1/replicas --options=R
It ran OK, a full sync. I did not touch anything.
Ran the command again 24hrs later and it finished in about 10 mins. (I had to remove the .locked file in the replica directory to get it to run.)

Using the ESXi web GUI on the replica host:
I looked at snapshots but could not see any.
Anyhow, I went ahead and booted the server VM on its own virtual network. Added a Win 10 VM to the domain with no issues. All looked fine with the server at the glance I took over it. I guess I need a script to check the server.
Removed the Win 10 VM from the domain and powered it and the replica server VM down.

Went back to SSH on the production server and ran
xsibackup --replica=cbt "VMs(SERVERDC01)" root@replica-esxi:22:/vmfs/volumes/datastore1/replicas --options=R
again.
Again there was a .locked file in the replica dir. The VM was off for several minutes and there is nothing else configured on the replica host that might be causing issues.
After deleting the .locked file, xsibackup reported the replica had been modified and performed a full sync again.


So I am not sure what is happening.

What is causing the .locked file?
Did --options=R not create the snapshot?
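(I guess I could check for the snapshot from the replica host's shell with standard vim-cmd, something like the below, with an illustrative inventory id:)

vim-cmd vmsvc/getallvms | grep SERVERDC01   # note the replica's inventory id
vim-cmd vmsvc/snapshot.get 12               # list its snapshots (12 is hypothetical)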


I am still trying to make sure I clearly understand what is happening. So this is how I think it operates:

start: On the production host xsibackup is run with --replica=cbt and --options=R.
A replica VM is created on the replica host, and a snapshot is also created on the replica host.
When the replica VM is turned on for testing, the state of that VM is changed: data is written.
The next time xsibackup runs it removes the snapshot, setting the replica back to the state it was in when the last replica sync happened. Then the live production VM is synced over to the replica and a new replica snapshot is created for testing. This process is repeated over and over until the replica is actually needed.

In the case the replica is needed:
The production system is disconnected from the network to stop any mischief.
Now the replica host becomes the production host.
This really just means the replica VM is powered on, connected to the production network, and users are let loose.
The original xsibackup job is removed from the failed server; even if it managed to run it would fail anyway.

The original production hardware is replaced / fixed.
To move the production VM back, it's powered off and copied overnight or over a weekend, using XSIBackup from the command line to replicate the VM back (a sketch of this follows below). Then the new production VM is powered up.
On what is now the replica host again, copy the VM to USB just in case and delete it from the replica host.
Now go back to start.
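Something like this, I imagine; a minimal sketch run from the interim host, with the production-side datastore path hypothetical:

# Run on the interim (replica) host with SERVERDC01 powered off,
# pushing the VM back to the repaired production host:
/scratch/XSI/XSIBackup-DC/xsibackup \
    --replica=cbt \
    "VMs(SERVERDC01)" \
    root@Production-Host:22:/vmfs/volumes/datastore1/SERVERDC01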


Thanks again.

Offline

#4 2021-05-31 11:33:38

admin
Administrator
Registered: 2017-04-21
Posts: 2,055

Re: using DC to maintain a set of replicas

Your assumptions are mostly correct. The --replica=cbt process generates a replica, which is an exact copy of the replicated VM. The test snapshot is spurious and its only aim is to serve as a sandbox to check the VMs; it's always discarded on the subsequent --replica=cbt cycle. If you happen to manually mix in the spurious data (the data generated during a one-off VM test) by deleting the test snapshot through the GUI, you will break the replica process. This will in fact be detected on the following CBT cycle and the --replica will be resync'ed.

Now on the empirical side of your tests:

The .locked file is automatically deleted at the end of any replica cycle; thus, if it is still there at the end, something is not working as expected.

You can replicate data anywhere: a Linux-compatible server or another (c)ESXi host. When you replicate to another (c)ESXi host, you happen to be doing it to a system that allows you to perform this kind of "sandbox" testing via a snapshot. Still, you will need to pay attention to some basic principles, like the ESXi hosts being coherent in terms of HW and VMFS version.

If you mix VMFS or HW versions, you will experience the same effects as if you had mixed them any other way; in the end (c)XSIBackup is just a tool. You take the decisions on what to do with it.

In any case we have just started a speculative thread. If you need concrete answers on concrete matters, please post the job and the output of the job. There are two definite issues in your description:

1/ .locked file not deleted.
2/ No snapshot, even though you have requested it to be taken as per the --options=R argument.

Offline

#5 2021-06-01 12:54:07

Corbeau
Member
Registered: 2021-02-07
Posts: 25

Re: using DC to maintain a set of replicas

Thanks. The ESXi hosts are both running the HP flavour of ESXi 6.7 U3.

I think the locked error is to do with whatever '2' is:

msg = "The specified key, name, or identifier '2' already exists." }
-------------------------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------------------------
2021-06-01T12:13:39 | Error code 194 at file signal.c, line 194 | Error description: raised SIGTERM (11) (2) in job, num of errors: 11, check error.log

--reset-cbt didn't make a difference: I got a full re-sync, but the .locked file was not removed on the replica host. The only way I can get a replication to run is by removing it manually. My trial has now run out, so I will have to get the powers that be to purchase before continuing.


The entire output from the last sync:
---//----

[root@:~] /vmfs/volumes/ssd/bin/XSIBackup-DC/xsibackup --replica=cbt "VMs(DC01)" root@replica-host:22:/vmfs/volumes/datastore1/replicas --options=R
|---------------------------------------------------------------------------------|
||-------------------------------------------------------------------------------||
|||   (c)XSIBackup-Free 1.5.0.5: Backup & Replication Software                  |||
|||   (c)33HOPS, Sistemas de Informacion y Redes, S.L. | All Rights Reserved    |||
||-------------------------------------------------------------------------------||
|---------------------------------------------------------------------------------|
                   (c)Daniel J. Garcia Fidalgo | info@33hops.com
|---------------------------------------------------------------------------------|
System Information: ESXi, Kernel 6 Major 7 Minor 0 Patch 0
-------------------------------------------------------------------------------------------------------------
License: unlicensed trial version, remaining trial time: 01:33:56 | (c)XSIBackup-Free
-------------------------------------------------------------------------------------------------------------
Remote system: ESXi
-------------------------------------------------------------------------------------------------------------
PID: 3307063, Running job as: root
-------------------------------------------------------------------------------------------------------------
Remote xsibackup binary found at: /scratch/XSI/XSIBackup-DC/xsibackup
-------------------------------------------------------------------------------------------------------------
(c)XSIBackup-Free replicating data to /vmfs/volumes/datastore1/replicas
-------------------------------------------------------------------------------------------------------------
Performing --replica action
-------------------------------------------------------------------------------------------------------------
Item number 1 in this job
-------------------------------------------------------------------------------------------------------------
DC01 Hardware Version is: 14
-------------------------------------------------------------------------------------------------------------
All snapshots were removed, as DC01 is engaged in a CBT job
-------------------------------------------------------------------------------------------------------------
Virtual Machine Name: DC01
-------------------------------------------------------------------------------------------------------------
Creating snapshot VM : DC01 (powered on)
-------------------------------------------------------------------------------------------------------------
*** Snapshot was successfully created ***
-------------------------------------------------------------------------------------------------------------
Virtual Machine: DC01
-------------------------------------------------------------------------------------------------------------
Backup start date: 2021-06-01T10:26:12
-------------------------------------------------------------------------------------------------------------
2021-06-01 10:26:12 | Backing up 43 files, total size is 1.23 TB
-------------------------------------------------------------------------------------------------------------
    NUMBER                                                         FILE             SIZE          PROGRESS
-------------------------------------------------------------------------------------------------------------
    1/43                                                    DC01.vmx          3.99 KB    | Done   0.00%
-------------------------------------------------------------------------------------------------------------
    2/43                                 DC01-flat.vmdk (CBT 1 full)        120.00 GB    | Done   0.00%
-------------------------------------------------------------------------------------------------------------
::: detail ::: 100.00% done | block 122880 out of 122880                                    | Done   9.78%
-------------------------------------------------------------------------------------------------------------
    3/43                                                   DC01.vmdk        562.00 B     | Done   9.78%
-------------------------------------------------------------------------------------------------------------
    4/43                               DC01_1-flat.vmdk (CBT 1 full)        550.00 GB    | Done   9.78%
-------------------------------------------------------------------------------------------------------------
::: detail ::: 100.00% done | block 563200 out of 563200                                    | Done  54.61%
-------------------------------------------------------------------------------------------------------------
    5/43                                                 DC01_1.vmdk        567.00 B     | Done  54.61%
-------------------------------------------------------------------------------------------------------------
    6/43                                                   DC01.vmsd        739.00 B     | Done  54.61%
-------------------------------------------------------------------------------------------------------------
    7/43                                  vmx-DC01-2157451611-2.vswp                    [open excluded]
-------------------------------------------------------------------------------------------------------------
    8/43                                  vmx-DC01-2157451611-1.vswp         94.00 MB    | Done  54.61%
-------------------------------------------------------------------------------------------------------------
    9/43                                                DC01.vmx.lck                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   10/43                                                   vmware-1.log        430.89 KB    | Done  54.61%
-------------------------------------------------------------------------------------------------------------
   11/43                                                   DC01.vmx~          3.96 KB    | Done  54.61%
-------------------------------------------------------------------------------------------------------------
   12/43                                                   vmware-6.log         15.45 MB    | Done  54.61%
-------------------------------------------------------------------------------------------------------------
   13/43                                                  DC01.nvram        264.49 KB    | Done  54.62%
-------------------------------------------------------------------------------------------------------------
   14/43                                                   DC01.vmxf          3.74 KB    | Done  54.62%
-------------------------------------------------------------------------------------------------------------
   15/43                                                   vmware-2.log        280.10 KB    | Done  54.62%
-------------------------------------------------------------------------------------------------------------
   16/43                                                   vmware-3.log        310.60 KB    | Done  54.62%
-------------------------------------------------------------------------------------------------------------
   17/43                                                     vmware.log        275.22 KB    | Done  54.62%
-------------------------------------------------------------------------------------------------------------
   18/43                                                   vmware-4.log          4.16 MB    | Done  54.62%
-------------------------------------------------------------------------------------------------------------
   19/43                                                   vmware-5.log        259.37 KB    | Done  54.62%
-------------------------------------------------------------------------------------------------------------
   20/43                               DC01_3-flat.vmdk (CBT 1 full)         20.00 GB    | Done  54.62%
-------------------------------------------------------------------------------------------------------------
::: detail ::: 100.00% done | block 20480 out of 20480                                      | Done  56.25%
-------------------------------------------------------------------------------------------------------------
   21/43                                                 DC01_3.vmdk        510.00 B     | Done  56.25%
-------------------------------------------------------------------------------------------------------------
   22/43                                          DC01-8098195b.vswp                    [open excluded]
-------------------------------------------------------------------------------------------------------------
   23/43                                             DC01_1-ctk.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   24/43                                               DC01-ctk.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   25/43                                               DC01.vmsd.tmp         44.00 B     | Done  56.25%
-------------------------------------------------------------------------------------------------------------
   26/43                                             DC01_3-ctk.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   27/43                                                DC01.vmx.tmp          3.90 KB    | Done  56.25%
-------------------------------------------------------------------------------------------------------------
   28/43                                        DC01-Snapshot60.vmsn        288.38 KB    | Done  56.25%
-------------------------------------------------------------------------------------------------------------
   29/43                                   DC01-000001-sesparse.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   30/43                                            DC01-000001.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   31/43                                        DC01-000001-ctk.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   32/43                                 DC01_1-000001-sesparse.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   33/43                                          DC01_1-000001.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   34/43                                      DC01_1-000001-ctk.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   35/43                                 DC01_3-000001-sesparse.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   36/43                                          DC01_3-000001.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   37/43                                      DC01_3-000001-ctk.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   38/43                               DC01_2-flat.vmdk (CBT 1 full)        500.00 GB    | Done  56.25%
-------------------------------------------------------------------------------------------------------------
::: detail ::: 100.00% done | block 512000 out of 512000                                    | Done  97.00%
-------------------------------------------------------------------------------------------------------------
   39/43                                                 DC01_2.vmdk        539.00 B     | Done  97.00%
-------------------------------------------------------------------------------------------------------------
   40/43                                             DC01_2-ctk.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   41/43                                 DC01_2-000001-sesparse.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   42/43                                          DC01_2-000001.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
   43/43                                      DC01_2-000001-ctk.vmdk                 [skipped excluded]
-------------------------------------------------------------------------------------------------------------
Total size:                                                                      1.19 TB    | Done 100.00%
-------------------------------------------------------------------------------------------------------------
*** Snapshot was removed ***
-------------------------------------------------------------------------------------------------------------
msg = "The specified key, name, or identifier '2' already exists." }
-------------------------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------------------------
2021-06-01T12:13:39 | Error code 194 at file signal.c, line 194 | Error description: raised SIGTERM (11) (2) in job, num of errors: 11, check error.log
-------------------------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------------------------
SIGTERM (11) condition was trapped: check logs for more details
-------------------------------------------------------------------------------------------------------------
Cleaning up...
-------------------------------------------------------------------------------------------------------------
Removed host <tmp> dir        OK
-------------------------------------------------------------------------------------------------------------
Removed prog <tmp> dir        OK
-------------------------------------------------------------------------------------------------------------
Unlocked backup               OK
-------------------------------------------------------------------------------------------------------------
SSH session was closed        OK
-------------------------------------------------------------------------------------------------------------
Removed PID                   OK
-------------------------------------------------------------------------------------------------------------

Offline

#6 2021-06-01 13:59:09

admin
Administrator
Registered: 2017-04-21
Posts: 2,055

Re: using DC to maintain a set of replicas

The error:

msg = "The specified key, name, or identifier '2' already exists."

Denotes some inconsistency in your VM inventory. You have either removed some VM directory without removing the VM entry from the inventory, you have some duplicate VM name, or you have provoked some conflict around the consistency of the (c)ESXi host's content metadata.

(c)VMWare allows some such situations, like registering two VMs under the same name. (c)XSIBackup detects VMs by name, as using the ids would not be very convenient from a user perspective. This forces and requires coherence from a naming point of view.

In any case, the error you are getting is being thrown by vim-cmd, thus there must be some situation that transcends a simple duplicate name. The message states you have a duplicate key in the VM inventory.
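You can inspect the inventory from the host's shell with standard vim-cmd calls (the id below is illustrative):

vim-cmd vmsvc/getallvms      # list registered VMs with their inventory ids
vim-cmd vmsvc/unregister 2   # drop a stale entry by id (does not delete any files)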

Offline

#7 2021-06-05 09:47:31

Corbeau
Member
Registered: 2021-02-07
Posts: 25

Re: using DC to maintain a set of replicas

Hi.

Using vim-cmd I saw that the replica's inventory id was 2. I removed the VM via ESXi's web GUI, which solved that issue.

I keep getting this error:

/!\ Some changes were detected in the remote replicated VM, a full sync will be performed now.
Add --options=R to allow the remote replica to be switched on without triggering a full resync.
You may continue to run the CBT job later on, it will restart the CBT chain where it was left.
If you don't know why you are receiving this message you should run --reset-cbt=THIS-VM.

I have removed the replica. On the source I have run --reset-cbt and then created a new replica in a different directory on the replica host.
I have now got this error again.

I am using a Synology NAS as a backup, using Synology's backup, which will back up ESXi free using CBT. But this is not to blame, as I disabled that backup a week ago. While not the source of the current issue, would using the Synology ESXi backup conflict with XSIBackup?

I am now stumped as to what to try to get the replica working with CBT so that a full sync isn't required each time. Thanks again.

Offline

#8 2021-06-05 10:41:55

admin
Administrator
Registered: 2017-04-21
Posts: 2,055

Re: using DC to maintain a set of replicas

Something might be changing the CID of the .vmdk disks in the remote replica.
Are you switching the replicated VM on after a replica cycle?
If you do so you will force a new full synchronization cycle.
To prevent that from happening, you have the --options=R argument, which will register the replicated VM and create an empty snapshot to be used as a sandbox. Then you can switch the VM on without altering the CID of the base disks. Do not manually remove that snapshot (in fact the snapshot info contains a clear "Do not delete me" notice); let --replica=cbt manage it. It will be wiped and recreated on every CBT round.

Offline

#9 2021-06-05 12:44:35

Corbeau
Member
Registered: 2021-02-07
Posts: 25

Re: using DC to maintain a set of replicas

admin wrote:

Are you switching the replicated VM on after a replica cycle?

No. I have wiped out the replica VM and retried several times. The replica host, a ProLiant Gen 8 with ESXi 6.7 U3, is not being touched by anything else.

admin wrote:

If you do so you will force a new full synchronization cycle.
To prevent that from happening, you have the --options=R argument, which will register the replicated VM and create an empty snapshot to be used as a sandbox. Then you can switch the VM on without altering the CID of the base disks. Do not manually remove that snapshot (in fact the snapshot info contains a clear "Do not delete me" notice); let --replica=cbt manage it. It will be wiped and recreated on every CBT round.

I am not touching anything on the replica host, just rerunning the xsibackup replica command,
in this case:
xsibackup --replica=cbt "VMs(DC01)" root@replicahost:22:/vmfs/volumes/datastore1/replicas/2 --options=R
I have just re-run this for a third time and it's reporting a full sync again.

Offline

#10 2021-06-06 09:59:29

admin
Administrator
Registered: 2017-04-21
Posts: 2,055

Re: using DC to maintain a set of replicas

This is so weird. There's clearly something altering the .vmdk file descriptor CID or the unique identifiers that allow (c)XSIBackup to know whether the disks have changed since the last synchronization. Why don't you just use a totally different volume, to make sure that you don't have some other software interfering with your replicas?

Offline

#11 2021-06-07 20:28:22

Corbeau
Member
Registered: 2021-02-07
Posts: 25

Re: using DC to maintain a set of replicas

So, I have 2 VMs on the production host. I switched a virtual drive back and forth between them; it was used to migrate data. I used the ESXi GUI to disconnect the drive and reconnect it to a different VM. Could this have caused the issue? This was months ago, and the drive has not changed VM in weeks.

Last edited by Corbeau (2021-06-07 20:29:46)

Offline

#12 2021-06-08 07:22:07

admin
Administrator
Registered: 2017-04-21
Posts: 2,055

Re: using DC to maintain a set of replicas

We don't know the finer details of your setup or procedure; it's easy to find out though: run the --replica while not doing that.

UPDATE:

Assuming you have already fixed the issue with your remote replica _XSIREP registration, which BTW has already been addressed in version 1.5.0.7 through a more permissive logic that simply ignores CBT if some inconsistency is detected, the issue you describe can be caused by some problem updating the CID in the remote .map files. That should in any case throw a clear error if something goes wrong.

Offline

#13 2021-07-22 09:11:46

Corbeau
Member
Registered: 2021-02-07
Posts: 25

Re: using DC to maintain a set of replicas

Update:

I had given up on this and was running full replicas.

The issue cleared after deleting the original VM that had the large virtual disk attached. Now replicas take a few minutes rather than a few hours.

(On the server there were 2 VMs: a Windows server and a temp Windows VM for moving data. I had been using the temp VM, connected to the old network, to transfer data to a separate virtual disk; I then attached the virtual disk with the data to the virtual server to move the data onto the server's virtual disks. After removing the temp disk I had issues. Only after I deleted the temp VM and its associated disks did the replication issue clear.)

Offline

#14 2021-07-22 10:44:28

admin
Administrator
Registered: 2017-04-21
Posts: 2,055

Re: using DC to maintain a set of replicas

We had not fully understood what you were trying to accomplish.

The replication process relies on the target disks not having changed since the last replication cycle; that's obviously a basic and unavoidable requirement, as otherwise you could corrupt your data.

The way (c)XSIBackup keeps track of the disks' states is by comparing their CIDs in the .vmdk file descriptors. If you do anything that alters this, (c)XSIBackup will assume a change in the target VM and resync from scratch.
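The CID lives in the plain-text .vmdk descriptor file, so you can inspect it yourself; the path and values below are illustrative:

grep -E '^(CID|parentCID)=' /vmfs/volumes/datastore1/replicas/DC01/DC01.vmdk
# CID=24ec79a1        <- rewritten whenever the disk is opened for writing
# parentCID=ffffffff  <- ffffffff marks a base disk with no parent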

Offline

#15 2021-07-22 10:52:23

Corbeau
Member
Registered: 2021-02-07
Posts: 25

Re: using DC to maintain a set of replicas

Thanks for the reply. Note the vdisk used to move the data hadn't been connected to the server VM in months, but it was still on the ESXi host. It had been reattached to the temp VM months ago and was turned off. Whatever happened, it was only on deleting the temp VM, and thus the large data-transfer disk, that the delta copies started working. Thanks.

Offline
