Hi,
While some of my VMs can be replicated with XSIBackup-DC 1.5.0.3 using the --replica=cbt action, at least one VM reports an error when I invoke --replica=cbt with --options=R.
This is the command I executed right after enabling CBT for the VMs named vm and vm-failed:
"/vmfs/volumes/hdd1/XSIBackup-DC/xsibackup" \
--replica=cbt \
"VMs(vm,vm-failed)" \
"root@nas:22:/volume1/backup" \
--options=R \
--exclude="data[-.].*vmdk" \
--check=full \
--quiesce \
--config-backup \
--mail-to="<mail-recipient>" \
--use-smtp="1" \
--description="replica hot" \
--backup-how=hot \
--remote-path="/usr/bin/xsibackup"
The first VM, named vm, was processed correctly, i.e. without any errors. I got a "CBT 1 full" cycle, and further cycles are reported as CBT 2, CBT 3. However, there is an issue with --options=R even for the successfully created replica of the VM named vm. The log file for VM vm can be found in [url=https://33hops.com/forum/viewtopic.php?id=949]this forum post[/url].
Here is the xsibackup.log output for the second VM vm-failed:
(c)XSIBackup-DC replicating data to /volume1/nas
-------------------------------------------------------------------------------------------------------------
Performing --replica action
-------------------------------------------------------------------------------------------------------------
Item number 1 in this job
-------------------------------------------------------------------------------------------------------------
vm-failed Hardware Version is: 14
-------------------------------------------------------------------------------------------------------------
All snapshots were removed, as vm is engaged in a CBT job
-------------------------------------------------------------------------------------------------------------
Virtual Machine Name: vm-failed
-------------------------------------------------------------------------------------------------------------
Creating snapshot VM : vm-failed (powered on)
-------------------------------------------------------------------------------------------------------------
*** Snapshot was successfully created ***
-------------------------------------------------------------------------------------------------------------
Remote target dir is empty, full sync will be performed
-------------------------------------------------------------------------------------------------------------
Virtual Machine: vm-failed
-------------------------------------------------------------------------------------------------------------
Backup start date: 2021-05-16T22:45:10
-------------------------------------------------------------------------------------------------------------
2021-05-16 22:45:10 | Backing up 30 files, total size is 130.33 GB
-------------------------------------------------------------------------------------------------------------
NUMBER FILE SIZE PROGRESS
-------------------------------------------------------------------------------------------------------------
1/30 vm-failed.vmx 3.12 KB | Done 0.00%
-------------------------------------------------------------------------------------------------------------
ble-cbt="vm-failed" first
-------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------
2021-05-16T22:45:14 | Error code 194 at file signal.c, line 194 | Error description: raised SIGTERM (11) (2) in job, num of errors: 11, check error.log
-------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------
SIGTERM (11) condition was trapped: check logs for more details
-------------------------------------------------------------------------------------------------------------
Cleaning up...
-------------------------------------------------------------------------------------------------------------
*** Snapshot was removed ***
-------------------------------------------------------------------------------------------------------------
Removed host <tmp> dir OK
-------------------------------------------------------------------------------------------------------------
The error log does not show any issue.
From the truncated remark above,
"ble-cbt="vm-failed" first
I assume that CBT is not enabled.
But this is not the case: I have successfully run --enable-cbt="vm-failed", and a second run of --enable-cbt="vm-failed" confirms that CBT is active for vm-failed.
Here is a listing of the files in the vm-failed directory at the ESXi host:
-rw-r--r-- 1 root root 13 May 16 22:45 vm-failed-aux.xml
-rw-r--r-- 1 root root 53 Dec 3 18:46 vm-failed-d6f15467.hlog
-rw------- 1 root root 2113929216 May 16 22:40 vm-failed-d6f15467.vswp
-rw------- 1 root root 68718428160 Apr 15 2019 vm-failed-flat.vmdk
-rw------- 1 root root 8684 May 11 15:48 vm-failed.nvram
-rw------- 1 root root 609 Apr 15 2019 vm-failed.vmdk
-rw-r--r-- 1 root root 45 May 16 22:45 vm-failed.vmsd
-rw-r--r-- 1 root root 45 May 16 22:45 vm-failed.vmsd.tmp
-rw-r--r-- 1 root root 3192 May 16 22:45 vm-failed.vmx
-rw------- 1 root root 0 May 16 22:40 vm-failed.vmx.lck
-rw-r--r-- 1 root root 3192 May 16 22:45 vm-failed.vmx.tmp
-rw------- 1 root root 150 Mar 6 14:40 vm-failed.vmxf
-rw-r--r-- 1 root root 3199 May 16 22:45 vm-failed.vmx~
-rw------- 1 root root 4194816 May 16 22:45 vm-failed_1-ctk.vmdk
-rw------- 1 root root 68719476736 May 17 10:12 vm-failed_1-flat.vmdk
-rw-r--r-- 1 root root 82 May 13 04:40 vm-failed_1-flat.vmdk___hash
-rw------- 1 root root 583 May 16 22:45 vm-failed_1.vmdk
This is the vmware.log output when creating the snapshot while executing the --replica=cbt action for VM vm-failed:
2021-05-16T22:45:05.711Z| vmx| I125: VigorTransportProcessClientPayload: opID=vim-cmd-38-da7e seq=520: Receiving Snapshot.Take request.
2021-05-16T22:45:05.712Z| vmx| I125: SnapshotVMX_TakeSnapshot start: 'xsi40150622346030184', deviceState=0, lazy=0, quiesced=1, forceNative=0, tryNative=1, saveAllocMaps=0
2021-05-16T22:45:05.714Z| vmx| I125: DiskLib_IsVMFSSparseSupported: vmfssparse is not supported on /vmfs/volumes/hdd/vm-failed: f532.
2021-05-16T22:45:05.714Z| vmx| I125: DISKLIB-LIB_CREATE : DiskLibCreateCreateParam: Selecting the default child type as SeSparse for /vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk.
2021-05-16T22:45:05.714Z| vmx| I125: DISKLIB-LIB_CREATE : DiskLibCreateCreateParam: seSparse grain size is set to 8 for '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk'
2021-05-16T22:45:05.714Z| vmx| I125: SNAPSHOT: SnapshotPrepareTakeDoneCB: Prepare phase complete (The operation completed successfully).
2021-05-16T22:45:05.726Z| vcpu-0| I125: ToolsBackup: changing quiesce state: IDLE -> STARTED
2021-05-16T22:45:07.729Z| vmx| I125: ToolsBackup: changing quiesce state: STARTED -> COMMITTED
2021-05-16T22:45:07.730Z| vcpu-0| I125: Destroying virtual dev for scsi0:0 vscsi=16167
2021-05-16T22:45:07.730Z| vcpu-0| I125: VMMon_VSCSIStopVports: No such target on adapter
2021-05-16T22:45:07.731Z| vcpu-0| I125: SnapshotVMXTakeSnapshotWork: Transition to mode 0.
2021-05-16T22:45:07.733Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : open successful (5) size = 68719476736, hd = 0. Type 3
2021-05-16T22:45:07.733Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : closed.
2021-05-16T22:45:07.733Z| vcpu-0| I125: Closing all the disks of the VM.
2021-05-16T22:45:07.733Z| vcpu-0| I125: Closing disk 'scsi0:0'
2021-05-16T22:45:07.734Z| vcpu-0| I125: DISKLIB-CBT : Shutting down change tracking for untracked fid 271304579.
2021-05-16T22:45:07.734Z| vcpu-0| I125: DISKLIB-CBT : Successfully disconnected CBT node.
2021-05-16T22:45:07.740Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : closed.
2021-05-16T22:45:07.741Z| vcpu-0| I125: SNAPSHOT: SnapshotConfigInfoOpenVmsd: Creating new snapshot dictionary, '/vmfs/volumes/hdd/vm-failed/vm-failed.vmsd.usd'.
2021-05-16T22:45:07.867Z| vcpu-0| I125: SNAPSHOT: SnapshotDumperOpenFromInfo: Creating checkpoint file /vmfs/volumes/hdd/vm-failed/vm-failed-Snapshot616.vmsn
2021-05-16T22:45:07.869Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : open successful (13) size = 68719476736, hd = 0. Type 3
2021-05-16T22:45:07.870Z| vcpu-0| I125: DiskLib_IsVMFSSparseSupported: vmfssparse is not supported on /vmfs/volumes/hdd/vm-failed: f532.
2021-05-16T22:45:07.870Z| vcpu-0| I125: DISKLIB-LIB_CREATE : DiskLibCreateCreateParam: Selecting the default child type as SeSparse for /vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk.
2021-05-16T22:45:07.870Z| vcpu-0| I125: DISKLIB-LIB_CREATE : DiskLibCreateCreateParam: seSparse grain size is set to 8 for '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk'
2021-05-16T22:45:07.870Z| vcpu-0| I125: DISKLIB-LIB_CREATE : CREATE CHILD: "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk" -- seSparse cowGran=8 allocType=0 policy=''
2021-05-16T22:45:07.870Z| vcpu-0| I125: DISKLIB-LIB_CREATE : CREATE-CHILD: Creating disk backed by 'file'
2021-05-16T22:45:07.871Z| vcpu-0| I125: Const Header:
2021-05-16T22:45:07.871Z| vcpu-0| I125+ constMagic = 0xcafebabe
2021-05-16T22:45:07.871Z| vcpu-0| I125+ version = 2.1
2021-05-16T22:45:07.871Z| vcpu-0| I125+ capacity = 134217728
2021-05-16T22:45:07.871Z| vcpu-0| I125+ grainSize = 8
2021-05-16T22:45:07.871Z| vcpu-0| I125+ grainTableSize = 64
2021-05-16T22:45:07.871Z| vcpu-0| I125+ flags = 0
2021-05-16T22:45:07.872Z| vcpu-0| I125: Extents:
2021-05-16T22:45:07.872Z| vcpu-0| I125+ Header : <1 : 1>
2021-05-16T22:45:07.872Z| vcpu-0| I125+ JournalHdr : <2 : 2>
2021-05-16T22:45:07.872Z| vcpu-0| I125+ Journal : <2048 : 2048>
2021-05-16T22:45:07.872Z| vcpu-0| I125+ GrainDirectory : <4096 : 2048>
2021-05-16T22:45:07.872Z| vcpu-0| I125+ GrainTables : <6144 : 262144>
2021-05-16T22:45:07.872Z| vcpu-0| I125+ FreeBitmap : <268288 : 4096>
2021-05-16T22:45:07.872Z| vcpu-0| I125+ BackMap : <272384 : 262144>
2021-05-16T22:45:07.872Z| vcpu-0| I125+ Grain : <534528 : 134217728>
2021-05-16T22:45:07.872Z| vcpu-0| I125: Volatile Header:
2021-05-16T22:45:07.872Z| vcpu-0| I125+ volatileMagic = 0xcafecafe
2021-05-16T22:45:07.872Z| vcpu-0| I125+ FreeGTNumber = 0
2021-05-16T22:45:07.872Z| vcpu-0| I125+ nextTxnSeqNumber = 0
2021-05-16T22:45:07.872Z| vcpu-0| I125+ replayJournal = 0
2021-05-16T22:45:07.873Z| vcpu-0| I125: DISKLIB-SESPARSE : seSparseExtentCreate: "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : success
2021-05-16T22:45:07.874Z| vcpu-0| I125: DISKLIB-DSCPTR: "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk" : creation successful.
2021-05-16T22:45:07.879Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : open successful (1) size = 273678336, hd = 0. Type 19
2021-05-16T22:45:07.915Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : closed.
2021-05-16T22:45:07.915Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : closed.
2021-05-16T22:45:07.915Z| vcpu-0| A100: ConfigDB: Setting scsi0:0.fileName = "vm-failed_1-000001.vmdk"
2021-05-16T22:45:07.919Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 190331 us
2021-05-16T22:45:07.919Z| vcpu-0| I125: SCSI: switching scsi0 to push completion mode
2021-05-16T22:45:07.920Z| vcpu-0| A100: ConfigDB: Setting scsi0:0.redo = ""
2021-05-16T22:45:07.920Z| vcpu-0| I125: DISK: OPEN scsi0:0 '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' persistent R[]
2021-05-16T22:45:07.921Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : open successful (10) size = 273678336, hd = 279627655. Type 19
2021-05-16T22:45:07.921Z| vcpu-0| I125: DISKLIB-DSCPTR: Opened [0]: "vm-failed_1-000001-sesparse.vmdk" (0xa)
2021-05-16T22:45:07.921Z| vcpu-0| I125: DISKLIB-LINK : Opened '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' (0xa): seSparse, 134217728 sectors / 64 GB.
2021-05-16T22:45:07.922Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : open successful (14) size = 68719476736, hd = 309381000. Type 3
2021-05-16T22:45:07.922Z| vcpu-0| I125: DISKLIB-DSCPTR: Opened [0]: "vm-failed_1-flat.vmdk" (0xe)
2021-05-16T22:45:07.922Z| vcpu-0| I125: DISKLIB-LINK : Opened '/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk' (0xe): vmfs, 134217728 sectors / 64 GB.
2021-05-16T22:45:07.922Z| vcpu-0| I125: DISKLIB-CHAINESX : ChainESXOpenSubChain:(0) fid = 309381000, extentType = 2
2021-05-16T22:45:07.922Z| vcpu-0| I125: DISKLIB-CHAINESX : ChainESXOpenSubChain:(1) fid = 279627655, extentType = 1
2021-05-16T22:45:07.922Z| vcpu-0| I125: DISKLIB-LIB_BLOCKTRACK : Resuming change tracking.
2021-05-16T22:45:07.922Z| vcpu-0| I125: DISKLIB-CBT : Initializing ESX kernel change tracking for fid 281659273.
2021-05-16T22:45:07.922Z| vcpu-0| I125: DISKLIB-CBT : Successfuly created cbt node 10c9c789-cbt.
2021-05-16T22:45:07.922Z| vcpu-0| I125: DISKLIB-CBT : Opening cbt node /vmfs/devices/cbt/10c9c789-cbt
2021-05-16T22:45:07.923Z| vcpu-0| I125: DISKLIB-LIB : Opened "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk" (flags 0xa, type vmfs).
2021-05-16T22:45:07.923Z| vcpu-0| I125: DISK: Disk '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' has UUID '60 00 c2 9e 1f 21 a6 ed-94 e2 7f 6d 0b 59 44 96'
2021-05-16T22:45:07.923Z| vcpu-0| I125: DISK: OPEN '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' Geo (8354/255/63) BIOS Geo (0/0/0)
2021-05-16T22:45:07.923Z| vcpu-0| I125: Creating virtual dev for 'scsi0:0'.
2021-05-16T22:45:07.923Z| vcpu-0| I125: DumpDiskInfo: scsi0:0 createType=11, capacity = 134217728, numLinks = 2, allocationType = 2
2021-05-16T22:45:07.923Z| vcpu-0| I125: SCSIDiskESXPopulateVDevDesc: Using FS backend
2021-05-16T22:45:07.923Z| vcpu-0| I125: DISKUTIL: scsi0:0 : geometry=8354/255/63
2021-05-16T22:45:07.923Z| vcpu-0| I125: SCSIFilterESXAttachCBRCInt: CBRC not enabled or opened without filters,skipping CBRC filter attach.
2021-05-16T22:45:07.923Z| vcpu-0| I125: SCSIFilterSBDAttachCBRC: device scsi0:0 is not SBD. Skipping CBRC attach SBD way.
2021-05-16T22:45:07.923Z| vcpu-0| I125: SnapshotVMXTakeSnapshotWork: Transition to mode 1.
2021-05-16T22:45:07.923Z| vcpu-0| I125: ToolsBackup: changing quiesce state: COMMITTED -> FINISHING
2021-05-16T22:45:07.926Z| vcpu-0| I125: HBACommon: First write on scsi0:0.fileName='/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk'
2021-05-16T22:45:07.926Z| vcpu-0| I125: DDB: "longContentID" = "9b885358f9a744640b58f1a5d2a2b710" (was "e551d9491c6ace741e255e6531761462")
2021-05-16T22:45:07.928Z| vcpu-0| I125: DISKLIB-CHAIN : DiskChainUpdateContentID: old=0x31761462, new=0xd2a2b710 (9b885358f9a744640b58f1a5d2a2b710)
2021-05-16T22:45:08.927Z| vmx| I125: ToolsBackup: changing quiesce state: FINISHING -> DONE_WAIT
2021-05-16T22:45:09.929Z| vmx| I125: ToolsBackup: changing quiesce state: DONE_WAIT -> GET_MANIFEST
2021-05-16T22:45:09.991Z| vcpu-0| I125: HGFileCopyCreateSessionCB: Successfully created the session.
2021-05-16T22:45:09.998Z| vcpu-0| I125: ToolsBackup: changing quiesce state: GET_MANIFEST -> IDLE
2021-05-16T22:45:09.998Z| vcpu-0| I125: ToolsBackup: changing quiesce state: IDLE -> DONE
2021-05-16T22:45:10.006Z| vcpu-0| I125: SnapshotVMXTakeSnapshotComplete: Done with snapshot 'xsi40150622346030184': 616
2021-05-16T22:45:10.006Z| vcpu-0| I125: VigorTransport_ServerSendResponse opID=vim-cmd-38-da7e seq=520: Completed Snapshot request.
2021-05-16T22:45:14.848Z| vmx| I125: VigorTransportProcessClientPayload: opID=vim-cmd-e2-dac3 seq=704: Receiving Snapshot.Delete request.
2021-05-16T22:45:14.850Z| vmx| I125: SNAPSHOT: SnapshotDeleteWork '/vmfs/volumes/hdd/vm-failed/vm-failed.vmx' : 616
2021-05-16T22:45:14.851Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : open successful (5) size = 290459648, hd = 0. Type 19
2021-05-16T22:45:14.851Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : closed.
2021-05-16T22:45:14.851Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : open successful (5) size = 68719476736, hd = 0. Type 3
2021-05-16T22:45:14.851Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : closed.
2021-05-16T22:45:14.855Z| vmx| I125: SNAPSHOT: SnapshotDeleteFile deleted '/vmfs/volumes/hdd/vm-failed/vm-failed-quiesce_manifest616.xml'.
2021-05-16T22:45:14.858Z| vmx| I125: VigorTransport_ServerSendResponse opID=vim-cmd-e2-dac3 seq=704: Completed Snapshot request.
2021-05-16T22:45:14.859Z| vmx| I125: VigorTransportProcessClientPayload: opID=vim-cmd-e2-dac3 seq=705: Receiving Snapshot.Consolidate request.
2021-05-16T22:45:14.860Z| vmx| I125: SnapshotVMX_Consolidate: Starting online snapshot consolidate operation.
2021-05-16T22:45:14.860Z| vmx| I125: Vix: [mainDispatch.c:4239]: VMAutomation_ReportPowerOpFinished: statevar=3, newAppState=1880, success=1 additionalError=0
2021-05-16T22:45:14.860Z| vmx| I125: Turning on snapshot info cache. VM=vm-failed.vmx.
2021-05-16T22:45:14.862Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : open successful (1114117) size = 0, hd = 0. Type 19
2021-05-16T22:45:14.862Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : closed.
2021-05-16T22:45:14.862Z| vmx| I125: SNAPSHOT: Turning on snapshot disk cache.
2021-05-16T22:45:14.862Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : open successful (5) size = 290459648, hd = 0. Type 19
2021-05-16T22:45:14.862Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : closed.
2021-05-16T22:45:14.863Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : open successful (5) size = 68719476736, hd = 0. Type 3
2021-05-16T22:45:14.863Z| vmx| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : closed.
2021-05-16T22:45:14.863Z| vmx| I125: WORKER: Creating new group with numThreads=1 (16)
2021-05-16T22:45:14.863Z| vmx| I125: WORKER: Creating new group with numThreads=1 (16)
2021-05-16T22:45:14.863Z| vcpu-0| I125: Destroying virtual dev for scsi0:0 vscsi=16168
2021-05-16T22:45:14.863Z| vcpu-0| I125: VMMon_VSCSIStopVports: No such target on adapter
2021-05-16T22:45:14.863Z| vcpu-0| I125: Closing disk 'scsi0:0'
2021-05-16T22:45:14.865Z| vcpu-0| I125: DISKLIB-CBT : Shutting down change tracking for untracked fid 281659273.
2021-05-16T22:45:14.865Z| vcpu-0| I125: DISKLIB-CBT : Successfully disconnected CBT node.
2021-05-16T22:45:14.870Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : closed.
2021-05-16T22:45:14.870Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : closed.
2021-05-16T22:45:14.870Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 7494 us
2021-05-16T22:45:14.870Z| vcpu-0| I125: SCSI: switching scsi0 to push completion mode
2021-05-16T22:45:14.871Z| vcpu-0| A100: ConfigDB: Setting scsi0:0.redo = ""
2021-05-16T22:45:14.871Z| vcpu-0| I125: DISK: OPEN scsi0:0 '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' persistent R[]
2021-05-16T22:45:14.872Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : open successful (8) size = 290459648, hd = 281855881. Type 19
2021-05-16T22:45:14.872Z| vcpu-0| I125: DISKLIB-DSCPTR: Opened [0]: "vm-failed_1-000001-sesparse.vmdk" (0x8)
2021-05-16T22:45:14.872Z| vcpu-0| I125: DISKLIB-LINK : Opened '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' (0x8): seSparse, 134217728 sectors / 64 GB.
2021-05-16T22:45:14.872Z| vcpu-0| I125: DISKLIB-CHAINESX : ChainESXOpenSubChain:(0) fid = 281855881, extentType = 1
2021-05-16T22:45:14.872Z| vcpu-0| I125: DISKLIB-LIB_BLOCKTRACK : Resuming change tracking.
2021-05-16T22:45:14.873Z| vcpu-0| I125: DISKLIB-CBT : Initializing ESX kernel change tracking for fid 288081802.
2021-05-16T22:45:14.873Z| vcpu-0| I125: DISKLIB-CBT : Successfuly created cbt node 112bc78a-cbt.
2021-05-16T22:45:14.873Z| vcpu-0| I125: DISKLIB-CBT : Opening cbt node /vmfs/devices/cbt/112bc78a-cbt
2021-05-16T22:45:14.873Z| vcpu-0| I125: DISKLIB-LIB : Opened "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk" (flags 0x8, type seSparse).
2021-05-16T22:45:14.873Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : open successful (522) size = 68719476736, hd = 266192781. Type 3
2021-05-16T22:45:14.873Z| vcpu-0| I125: DISKLIB-DSCPTR: Opened [0]: "vm-failed_1-flat.vmdk" (0x20a)
2021-05-16T22:45:14.873Z| vcpu-0| I125: DISKLIB-LINK : Opened '/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk' (0x20a): vmfs, 134217728 sectors / 64 GB.
2021-05-16T22:45:14.873Z| vcpu-0| I125: DISKLIB-LIB_BLOCKTRACK : Resuming change tracking.
2021-05-16T22:45:14.874Z| vcpu-0| I125: DISKLIB-CBT : Initializing ESX kernel change tracking for fid 266192781.
2021-05-16T22:45:14.874Z| vcpu-0| I125: DISKLIB-CBT : Successfuly created cbt node fddc78d-cbt.
2021-05-16T22:45:14.874Z| vcpu-0| I125: DISKLIB-CBT : Opening cbt node /vmfs/devices/cbt/fddc78d-cbt
2021-05-16T22:45:14.874Z| vcpu-0| I125: DISKLIB-LIB : Opened "/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk" (flags 0x20a, type vmfs).
2021-05-16T22:45:14.875Z| vcpu-0| I125: DISKLIB-CBT : Shutting down change tracking for untracked fid 288081802.
2021-05-16T22:45:14.875Z| vcpu-0| I125: DISKLIB-CBT : Successfully disconnected CBT node.
2021-05-16T22:45:14.876Z| vcpu-0| I125: DISKLIB-CBT : Shutting down change tracking for untracked fid 266192781.
2021-05-16T22:45:14.877Z| vcpu-0| I125: DISKLIB-CBT : Successfully disconnected CBT node.
2021-05-16T22:45:14.883Z| vcpu-0| I125: DISKLIB-CHAINESX : ChainESXOpenSubChain:(0) fid = 266192781, extentType = 2
2021-05-16T22:45:14.883Z| vcpu-0| I125: DISKLIB-CHAINESX : ChainESXOpenSubChain:(1) fid = 281855881, extentType = 1
2021-05-16T22:45:14.883Z| vcpu-0| I125: DISKLIB-CBT : Initializing ESX kernel change tracking for fid 288212874.
2021-05-16T22:45:14.884Z| vcpu-0| I125: DISKLIB-CBT : Successfuly created cbt node 112dc78a-cbt.
2021-05-16T22:45:14.884Z| vcpu-0| I125: DISKLIB-CBT : Opening cbt node /vmfs/devices/cbt/112dc78a-cbt
2021-05-16T22:45:14.884Z| vcpu-0| I125: DISK: Disk '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' has UUID '60 00 c2 9e 1f 21 a6 ed-94 e2 7f 6d 0b 59 44 96'
2021-05-16T22:45:14.884Z| vcpu-0| I125: DISK: OPEN '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' Geo (8354/255/63) BIOS Geo (0/0/0)
2021-05-16T22:45:14.884Z| vcpu-0| I125: ConsolidateDiskOpenCB: Opening disk node 'scsi0:0' for combining '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' -> '/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk'.
2021-05-16T22:45:14.885Z| vcpu-0| I125: DISKLIB-LIB_MISC : Opening mirror node /vmfs/devices/svm/112dc78a-fddc78d-svmmirror
2021-05-16T22:45:14.885Z| vcpu-0| I125: Creating virtual dev for 'scsi0:0'.
2021-05-16T22:45:14.885Z| vcpu-0| I125: DumpDiskInfo: scsi0:0 createType=11, capacity = 134217728, numLinks = 2, allocationType = 2
2021-05-16T22:45:14.886Z| vcpu-0| I125: SCSIDiskESXPopulateVDevDesc: Using FS backend
2021-05-16T22:45:14.886Z| vcpu-0| I125: DISKUTIL: scsi0:0 : geometry=8354/255/63
2021-05-16T22:45:14.886Z| vcpu-0| I125: SCSIFilterESXAttachCBRCInt: CBRC not enabled or opened without filters,skipping CBRC filter attach.
2021-05-16T22:45:14.886Z| vcpu-0| I125: SCSIFilterSBDAttachCBRC: device scsi0:0 is not SBD. Skipping CBRC attach SBD way.
2021-05-16T22:45:15.005Z| worker-34524604| I125: DISKLIB-LIB : DiskLib_IsCombinePossible: Combine 2 links at 0. Need 0 MB of free space (526382 MB available)
2021-05-16T22:45:15.005Z| vmx| I125: ConsolidateMirrorCombineStart: Combine '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' -> '/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk' (node = scsi0:0): 2 links starting from 0. usesCurrent = 1, isNative = 0
2021-05-16T22:45:15.005Z| vmx| I125: Mirror_DiskCopy: Starting disk copy.
2021-05-16T22:45:15.207Z| vcpu-0| I125: Destroying virtual dev for scsi0:0 vscsi=16169
2021-05-16T22:45:15.207Z| vcpu-0| I125: VMMon_VSCSIStopVports: No such target on adapter
2021-05-16T22:45:15.207Z| vcpu-0| I125: Closing disk 'scsi0:0'
2021-05-16T22:45:15.207Z| vcpu-0| I125: ConsolidateDiskCloseCB: Closing disk node 'scsi0:0' after combining '/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001.vmdk' -> '/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk'.
2021-05-16T22:45:15.209Z| vcpu-0| I125: DISKLIB-CTK : Forcing flush of change info for "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-ctk.vmdk".
2021-05-16T22:45:15.215Z| vcpu-0| I125: DISKLIB-CTK : ChangeTracker_EndCombine()
2021-05-16T22:45:15.218Z| vcpu-0| I125: DISKLIB-CTK : Unlinked /vmfs/volumes/hdd/vm-failed/vm-failed_1-ctk.vmdk, tmp file: /vmfs/volumes/hdd/vm-failed/vm-failed_1-ctk.vmdk-tmp
2021-05-16T22:45:15.230Z| vcpu-0| I125: DISKLIB-CTK : resuming /vmfs/volumes/hdd/vm-failed/vm-failed_1-ctk.vmdk-tmp
2021-05-16T22:45:15.231Z| vcpu-0| I125: DISKLIB-CTK : ChangeTracker_EndCombine: Renaming /vmfs/volumes/hdd/vm-failed/vm-failed_1-ctk.vmdk-tmp to /vmfs/volumes/hdd/vm-failed/vm-failed_1-ctk.vmdk
2021-05-16T22:45:15.231Z| vcpu-0| I125: DISKLIB-CTK : ChangeTracker_EndCombine: Attempting to unlink /vmfs/volumes/hdd/vm-failed/vm-failed_1-ctk.vmdk-tmp
2021-05-16T22:45:15.232Z| vcpu-0| I125: DDB: "longContentID" = "9b885358f9a744640b58f1a5d2a2b710" (was "e551d9491c6ace741e255e6531761462")
2021-05-16T22:45:15.246Z| vcpu-0| I125: DISKLIB-CBT : Shutting down change tracking for untracked fid 288212874.
2021-05-16T22:45:15.246Z| vcpu-0| I125: DISKLIB-CBT : Successfully disconnected CBT node.
2021-05-16T22:45:15.247Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : closed.
2021-05-16T22:45:15.247Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : closed.
2021-05-16T22:45:15.247Z| vcpu-0| A100: ConfigDB: Setting scsi0:0.fileName = "vm-failed_1.vmdk"
2021-05-16T22:45:15.248Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : open successful (1115137) size = 0, hd = 0. Type 19
2021-05-16T22:45:15.248Z| vcpu-0| I125: DISKLIB-LIB_BLOCKTRACK : Resuming change tracking.
2021-05-16T22:45:15.256Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-000001-sesparse.vmdk" : closed.
2021-05-16T22:45:15.258Z| vcpu-0| I125: Checkpoint_Unstun: vm stopped for 52053 us
2021-05-16T22:45:15.258Z| vcpu-0| I125: SCSI: switching scsi0 to push completion mode
2021-05-16T22:45:15.259Z| vcpu-0| A100: ConfigDB: Setting scsi0:0.redo = ""
2021-05-16T22:45:15.259Z| vcpu-0| I125: DISK: OPEN scsi0:0 '/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk' persistent R[]
2021-05-16T22:45:15.259Z| vcpu-0| I125: DISKLIB-VMFS : "/vmfs/volumes/hdd/vm-failed/vm-failed_1-flat.vmdk" : open successful (10) size = 68719476736, hd = 254855054. Type 3
2021-05-16T22:45:15.259Z| vcpu-0| I125: DISKLIB-DSCPTR: Opened [0]: "vm-failed_1-flat.vmdk" (0xa)
2021-05-16T22:45:15.259Z| vcpu-0| I125: DISKLIB-LINK : Opened '/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk' (0xa): vmfs, 134217728 sectors / 64 GB.
2021-05-16T22:45:15.259Z| vcpu-0| I125: DISKLIB-LIB_BLOCKTRACK : Resuming change tracking.
2021-05-16T22:45:15.260Z| vcpu-0| I125: DISKLIB-CBT : Initializing ESX kernel change tracking for fid 254855054.
2021-05-16T22:45:15.260Z| vcpu-0| I125: DISKLIB-CBT : Successfuly created cbt node f30c78e-cbt.
2021-05-16T22:45:15.260Z| vcpu-0| I125: DISKLIB-CBT : Opening cbt node /vmfs/devices/cbt/f30c78e-cbt
2021-05-16T22:45:15.260Z| vcpu-0| I125: DISKLIB-LIB : Opened "/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk" (flags 0xa, type vmfs).
2021-05-16T22:45:15.260Z| vcpu-0| I125: DISK: Disk '/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk' has UUID '60 00 c2 9e 1f 21 a6 ed-94 e2 7f 6d 0b 59 44 96'
2021-05-16T22:45:15.260Z| vcpu-0| I125: DISK: OPEN '/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk' Geo (8354/255/63) BIOS Geo (0/0/0)
2021-05-16T22:45:15.260Z| vcpu-0| I125: Creating virtual dev for 'scsi0:0'.
2021-05-16T22:45:15.260Z| vcpu-0| I125: DumpDiskInfo: scsi0:0 createType=11, capacity = 134217728, numLinks = 1, allocationType = 1
2021-05-16T22:45:15.260Z| vcpu-0| I125: SCSIDiskESXPopulateVDevDesc: Using FS backend
2021-05-16T22:45:15.260Z| vcpu-0| I125: DISKUTIL: scsi0:0 : geometry=8354/255/63
2021-05-16T22:45:15.260Z| vcpu-0| I125: SCSIFilterESXAttachCBRCInt: CBRC not enabled or opened without filters,skipping CBRC filter attach.
2021-05-16T22:45:15.260Z| vcpu-0| I125: SCSIFilterSBDAttachCBRC: device scsi0:0 is not SBD. Skipping CBRC attach SBD way.
2021-05-16T22:45:15.265Z| vcpu-0| I125: VigorTransport_ServerSendResponse opID=vim-cmd-e2-dac3 seq=705: Completed Snapshot request.
2021-05-16T22:45:15.265Z| vcpu-0| I125: Turning off snapshot info cache.
2021-05-16T22:45:15.265Z| vcpu-0| I125: Turning off snapshot disk cache.
2021-05-16T22:45:15.265Z| vcpu-0| I125: ConsolidateEnd: Snapshot consolidate complete: The operation completed successfully (0).
2021-05-16T22:45:15.265Z| vcpu-0| I125: Vix: [mainDispatch.c:4239]: VMAutomation_ReportPowerOpFinished: statevar=3, newAppState=1881, success=1 additionalError=0
2021-05-16T22:45:18.774Z| vcpu-0| I125: HBACommon: First write on scsi0:0.fileName='/vmfs/volumes/hdd/vm-failed/vm-failed_1.vmdk'
2021-05-16T22:45:18.774Z| vcpu-0| I125: DDB: "longContentID" = "ad37462b7a10b881f25c55a79f40922f" (was "9b885358f9a744640b58f1a5d2a2b710")
2021-05-16T22:45:18.777Z| vcpu-0| I125: DISKLIB-CHAIN : DiskChainUpdateContentID: old=0xd2a2b710, new=0x9f40922f (ad37462b7a10b881f25c55a79f40922f)
What can be done to replicate the VM vm-failed with CBT?
Best regards,
Peter
Last edited by herrep (2021-05-17 10:23:00)
Again, you are not providing the full command you have used.
Check your (c)ESXi host for errors, especially when creating the snapshot.
I have updated the original post to show the full command and also an excerpt of the vmware.log while executing the --replica=cbt action.
You are using an incorrectly formatted job.
--check=full
is not to be used along with a --replica job; --check is an action of its own and must run in its own job.
I have removed the ambiguous action --check=full from the command. Furthermore, I ran --reset-cbt before performing --replica=cbt again and fully deleted the replication directory at the target. Nevertheless, the output is the same negative result.
It appears that you have properly implemented a priority of the --replica=cbt action over the --check=full action when both are present in the same command: I do not see any difference in the output when omitting the superfluous --check=full action.
Last edited by herrep (2021-05-18 07:45:40)
Anyway, the problem is related to one out of three VMs. Two other VMs can be replicated by --replica=cbt. Do you see any hint in the provided vmware.log? Is the snapshot correctly created by the ESXi host?
I investigated the issue further and found that --replica=cbt executes successfully for all of the other, more than ten, VMs. Only one single VM causes the error. Are there any other log files I should provide?
Last edited by herrep (2021-05-18 09:46:02)
Yes, adding the spurious --check action should not cause a SIGTERM, but it should still be removed.
To perform any kind of backup on a VM, being able to take a snapshot on the VM is fundamental.
The only keyword resembling a failure in your log is the VM name itself; "nomen est omen"?
The log you posted doesn't contain any errors, yet it seems that you are trying to quiesce the VM (--quiesce).
Are you doing it for some known reason?
Does the VM contain an active DB server that needs to be quiesced?
What OS is that VM?
Can it be quiesced?
Are VMware Tools installed and are they the latest version?
Have you reviewed the documentation of whatever DB server the VM runs to know whether it can be quiesced?
Quiescing is not a trivial operation. It should only be used when needed, and in some cases it might not work out of the box. In fact, depending on the OS/DB service combination, it can be hard to configure or even impossible.
Thus, first of all remove the --quiesce part and try again. Then let us know what happened.
I removed --quiesce and switched to --backup-how=cold to avoid any issues resulting from hot backups, as there are databases running on that VM. Unfortunately, the result is the same.
2021-05-18T10:13:42 | Error code 194 at file signal.c, line 194 | Error description: raised SIGTERM (11) (2) in job, num of errors: 11, check error.log
After checking the issue again, I realized that there was a superfluous .vmdk file in the VM's directory, copied there a long time ago. Clearly, this file had not been adapted for ctkEnabled, as it is not referenced in the .vmx file. I removed the file and started xsibackup again with --replica=cbt. The backup succeeded! It would probably make sense for xsibackup to catch this error.
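A quick way to spot such orphan descriptors is to compare the .vmdk files in the VM directory against what the .vmx actually references. This is a minimal sketch, assuming the directory layout shown earlier in this thread; the paths are illustrative, not taken from your setup:

```shell
# Sketch: list .vmdk descriptor files in a VM directory that are not
# referenced by the .vmx file. Paths are examples; adjust to your datastore.
VMDIR="/vmfs/volumes/hdd/vm-failed"
VMX="$VMDIR/vm-failed.vmx"
for d in "$VMDIR"/*.vmdk; do
  base=$(basename "$d")
  # Skip extent/CBT companion files; only descriptors are named in the .vmx.
  case "$base" in
    *-flat.vmdk|*-ctk.vmdk|*-sesparse.vmdk|*-delta.vmdk) continue ;;
  esac
  grep -q "$base" "$VMX" || echo "orphan descriptor: $base"
done
```

Any descriptor it prints is a candidate leftover like the one described above, but verify manually before deleting anything.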
Then there must be some missing configuration parameter, probably related to the CBT configuration.
That isn't really the error. There is some earlier text in the output that points to the real issue:
ble-cbt="vm-failed" first
It's a partial message that should read: [b]Run --enable-cbt="vm-failed" first[/b]
Thus, have you enabled CBT? I guess you have, otherwise you would receive a clear warning; nonetheless, it looks like something is wrong with it.
To make sure, just remove the [b]=cbt[/b] part from the --replica action and see what happens. If the rest of the data and configuration files in the VM are OK, you should not receive that error any more.
If my suspicion above is right, manually examine the .vmx file and look for the CTK-related lines at the end of it. There should be one general line plus one per disk.
ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"
scsi0:1.ctkEnabled = "TRUE"
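These flags can be checked from the ESXi shell. A minimal sketch; the .vmx path is an example, substitute your own:

```shell
# Print the CBT flags from a .vmx file. Besides the global ctkEnabled line,
# each tracked disk should have its own <bus>:<unit>.ctkEnabled line.
grep -i 'ctkEnabled' /vmfs/volumes/hdd/vm-failed/vm-failed.vmx
```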
There should also be a hidden directory tree in your VM root folder, [b].xsi/.cbt/[/b]; inside this folder there should be one file per disk.
.seq-W702-flat.vmdk
.seq-history-W702-flat.vmdk
Each file should contain the information on the backup targets and the CBT sequence for each one of them.
192.168.33.183:22:/vmfs/volumes/datastore1/W702; 125
78.27.168.186:22:/vmfs/volumes/VMs/W702; 213
78.27.168.186:22:/vmfs/volumes/datastore1/W702; 9
78.27.169.25:22:/vmfs/volumes/datastore1/W702; 1
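Each line pairs a backup target with its CBT sequence number, separated by a semicolon. A small sketch to tabulate them, using the example file name from above:

```shell
# Parse "target; sequence" pairs from a CBT sequence file and print them
# one per line. The file name follows the .seq-<disk> convention above.
awk -F';' '{ gsub(/^[ \t]+/, "", $2); printf "target=%s seq=%s\n", $1, $2 }' \
  .xsi/.cbt/.seq-W702-flat.vmdk
```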
Our messages appear to have crossed. The issue was a superfluous .vmdk file in the VM directory. After removing it, the backup worked fine. Thanks a lot for your support!
Thank you for your feedback; we'll try to add more checks to prevent unhandled situations.