©XSIBackup-Free: Free Backup Software for ©VMWare ©ESXi

Forum ©XSIBackup: ©VMWare ©ESXi Backup Software


#1 2022-12-09 17:16:43

wowbagger
Member
Registered: 2017-05-11
Posts: 41

Checking backups

Hello,


Without pasting all of the logs up front: I'm testing some restores, and two specific servers fail with the same error when I try to restore them. We back up about 50 servers daily, and only these 2 generate an error on restore:

[root@olympus:/vmfs/volumes/0c6d4303-b890de35/xsibackup_1.4.3.17_olympus] ./xsibackup \
>   --restore \
>   /vmfs/volumes/nfs_typhon_xsi_olympus/xsi_repos/2021_DC/20221204020530/vlpr-mongodb01.prd.saas.dpp.company.network \
>   /vmfs/volumes/ssdvol/restored/
|---------------------------------------------------------------------------------|
||-------------------------------------------------------------------------------||
|||   (c)XSIBackup-DC 1.4.3.17: Backup & Replication Software                   |||
|||   (c)33HOPS, Sistemas de Informacion y Redes, S.L. | All Rights Reserved    |||
||-------------------------------------------------------------------------------||
|---------------------------------------------------------------------------------|
                   (c)Daniel J. Garcia Fidalgo | info@33hops.com
|---------------------------------------------------------------------------------|
System Information: ESXi, Kernel 6 Major 7 Minor 0 Patch 0
-----------------------------------------------------------------------------------------------------------
PID: 2120899, Running job as: root
-----------------------------------------------------------------------------------------------------------
SOURCE: /vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/20221204020530/vlpr-mongodb01.prd.saas.dpp.company.network
-----------------------------------------------------------------------------------------------------------
Found .xsitools file at: /vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/.xsitools
-----------------------------------------------------------------------------------------------------------
Restoring from directory: /vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/20221204020530/vlpr-mongodb01.prd.saas.dpp.company.network
-----------------------------------------------------------------------------------------------------------
Restoring to directory: /vmfs/volumes/ssdvol/restored
-----------------------------------------------------------------------------------------------------------
Total size: 620.71 GB, block size: 10.00 MB
-----------------------------------------------------------------------------------------------------------
NUMBER                                                                  FILE            SIZE       PROGRESS
-----------------------------------------------------------------------------------------------------------
1/18                  vlpr-mongodb01.prd.saas.dpp.company.network-60b927b3.hlog     633.00 B  | Done   0.00%
-----------------------------------------------------------------------------------------------------------
2/18                      vlpr-mongodb01.prd.saas.dpp.company.network-flat.vmdk      20.00 GB | Done   0.00%
-----------------------------------------------------------------------------------------------------------
::: detail :::   0.05% done | block       1 out of    2048                                   | Done   0.00%
-----------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------
SIGTERM (11) condition was trapped: check logs for more details
-----------------------------------------------------------------------------------------------------------
Cleaning up...
-----------------------------------------------------------------------------------------------------------
Removed host <tmp> dir        OK
-----------------------------------------------------------------------------------------------------------
Removed prog <tmp> dir        OK
-----------------------------------------------------------------------------------------------------------
Removed PID                   OK
-----------------------------------------------------------------------------------------------------------

Nothing in the logs.

The backup command used is:

xsibackup --backup "VMs(vlpr-mongodb01.prd.saas.dpp.company.network)" \
/vmfs/volumes/nfs_typhon_xsi_olympus/xsi_repos/2022_DC \
--block-size=10M \
--verbosity=5 \
--use-smtp=1 \
--compression=true \
--config-backup \
--subject='Olympus Backup vlpr-mongodb01.prd.saas.dpp.company.network' \
--mail-to=icc@company.com

This backup command is the same for the other 48 servers that do restore successfully.
None of the backup emails contain any error.

When I run the --check option against the 2 servers that fail to restore, xsibackup does not return any error at all: it checks all the files and reports 100% OK.
The servers are pretty big; could size be the issue at play?


Thanks for any insight.
L


#2 2022-12-13 19:00:08

admin
Administrator
Registered: 2017-04-21
Posts: 2,000

Re: Checking backups

Well, restoring does not require much memory, unlike some other actions such as --prune or --repair. Something must be causing the SEGFAULT, though. Still, first of all, you may want to increase the memory pool by adding the following argument: --memory-size=4096. This raises the default (c)XSIBackup memory pool size from 800 MB to 4 GB, just in case you are running out of memory for some reason.

Secondly, increase verbosity, or, even better, append the --debug-print argument to the job, which increases verbosity and adds some extra debug messages.

./xsibackup \
--restore \
/vmfs/volumes/nfs_typhon_xsi_olympus/xsi_repos/2021_DC/20221204020530/vlpr-mongodb01.prd.saas.dpp.company.network \
/vmfs/volumes/ssdvol/restored/ \
--memory-size=4096

If that doesn't offer you any hint, you can visually inspect the -flat.vmdk.map file. Download a copy and open it with Excel if you like. It must contain two columns: hash and block size.
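As a quick alternative to Excel, a shell sketch can flag malformed map lines. This assumes the map is plain text with one block per line, a 40-hex-digit SHA-1 hash followed by a numeric block size, whitespace-separated; the path is a placeholder.

```shell
# Hypothetical sanity check of a -flat.vmdk.map file.
# Assumption: each line is "<40-hex-digit SHA-1> <block size>".
MAP=/path/to/vm-flat.vmdk.map   # placeholder path
if grep -nEv '^[0-9a-f]{40}[[:space:]]+[0-9]+$' "$MAP"; then
    echo "malformed lines found (printed above with line numbers)"
else
    echo "map looks well-formed"
fi
```

Since the restore dies right after block 1 of the 20 GB disk, a truncated or malformed entry near the top of that disk's map would be a good lead.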

Of course, you will get the highest level of debug resolution by prepending strace to the --restore command. If you do so, post the last 20-30 lines of the strace output before the SEGFAULT.

strace \
./xsibackup \
--restore \
/vmfs/volumes/nfs_typhon_xsi_olympus/xsi_repos/2021_DC/20221204020530/vlpr-mongodb01.prd.saas.dpp.company.network \
/vmfs/volumes/ssdvol/restored/ \
--memory-size=4096 \
--debug-print

You performed a --check on the backup, so it seems to be right; still, a basic check only tests that the blocks exist. You can use --check=full instead, which will not only check that each block file exists, but will on top of that uncompress the block and make sure that the actual SHA-1 hash of the content coincides with the stored one.
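The principle behind --check=full can be sketched in shell: the block file name is the SHA-1 of the block's content, so re-hashing and comparing against the name detects silent corruption. This is only an illustration: for a repo created with --compression=true the content would have to be decompressed first (the compressed block format is internal to (c)XSIBackup), which the sketch omits; the path is just an example.

```shell
# Hypothetical integrity check for a single repository block.
# Assumption: the file name equals the SHA-1 of the (uncompressed) content.
# NOTE: with --compression=true a decompression step, omitted here, is
# needed before hashing.
BLOCK=/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/0/8/4/1/a/0841a2831ad01edb010183ea62baee3cb5ba3c76
stored=$(basename "$BLOCK")
actual=$(sha1sum "$BLOCK" | cut -d' ' -f1)
if [ "$stored" = "$actual" ]; then
    echo "block OK"
else
    echo "hash mismatch: stored=$stored actual=$actual"
fi
```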

BTW: please define "pretty big"


#3 2022-12-13 19:52:55

wowbagger
Member
Registered: 2017-05-11
Posts: 41

Re: Checking backups

The VM has one 20 GB disk and another 600 GB disk. At first I thought it had something to do with that, but it seems it crashes on the 20 GB disk.

This is the log:

write(1, "\33[90m---------------------------"..., 117-----------------------------------------------------------------------------------------------------------
) = 117
access("/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/.xsitools", F_OK) = 0
open("/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/.xsitools", O_RDONLY) = 5
fstat(5, {st_mode=S_IFREG|0644, st_size=51, ...}) = 0
mmap(NULL, 131072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2e5976c000
read(5, "Desc: XSITools Repo v 2.0.0\nBsiz"..., 131072) = 51
read(5, "", 131072)                     = 0
close(5)                                = 0
munmap(0x2e5976c000, 131072)            = 0
brk(0x2e1a122000)                       = 0x2e1a122000
unlink("/vmfs/volumes/ssdvol/restored/vlpr-mongodb01.prd.saas.dpp.company.network/vlpr-mongodb01.prd.saas.dpp.company.network-flat.vmdk") = 0
open("/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/20221204020530/vlpr-mongodb01.prd.saas.dpp.company.network/vlpr-mongodb01.prd.saas.dpp.company.network-flat.vmdk.map", O_RDONLY) = 5
fstat(5, {st_mode=S_IFREG|0644, st_size=102400, ...}) = 0
mmap(NULL, 131072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2e5976c000
read(5, "d58d0696c5accaf8c2a52dcce61fef6f"..., 131072) = 102400
read(5, "", 131072)                     = 0
lseek(5, 0, SEEK_SET)                   = 0
access("/vmfs", F_OK)                   = 0
access("/vmfs/volumes", F_OK)           = 0
access("/vmfs/volumes/ssdvol", F_OK)    = 0
access("/vmfs/volumes/ssdvol/restored", F_OK) = 0
access("/vmfs/volumes/ssdvol/restored/vlpr-mongodb01.prd.saas.dpp.company.network", F_OK) = 0
brk(0x2e1ab22000)                       = 0x2e1ab22000
open("/vmfs/volumes/ssdvol/restored/vlpr-mongodb01.prd.saas.dpp.company.network/vlpr-mongodb01.prd.saas.dpp.company.network-flat.vmdk", O_RDWR|O_CREAT|O_TRUNC, 0666) = 6
ftruncate(6, 21474836480)               = 0
close(6)                                = 0
open("/vmfs/volumes/ssdvol/restored/vlpr-mongodb01.prd.saas.dpp.company.network/vlpr-mongodb01.prd.saas.dpp.company.network-flat.vmdk", O_RDWR|O_CREAT|O_TRUNC, 0666) = 6
read(5, "d58d0696c5accaf8c2a52dcce61fef6f"..., 131072) = 102400
access("/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/d/5/8/d/0/d58d0696c5accaf8c2a52dcce61fef6f2915134c", F_OK) = 0
stat("/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/d/5/8/d/0/d58d0696c5accaf8c2a52dcce61fef6f2915134c", {st_mode=S_IFREG|0644, st_size=7673185, ...}) = 0
open("/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/d/5/8/d/0/d58d0696c5accaf8c2a52dcce61fef6f2915134c", O_RDONLY) = 7
fstat(7, {st_mode=S_IFREG|0644, st_size=7673185, ...}) = 0
mmap(NULL, 131072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2e5978c000
read(7, "\5\0\0\201\0\353c\220\20\216\320\274\0\0\260\270\0\0\216\330\216\300\0\373\276\0|\277\0\6\271\0"..., 7602176) = 7602176
read(7, "\0\374]\257\332\274\226\226\317\0\212\345\223Z\356\213Zw\0\347\313\34557\7\177v\0\237\270^\273"..., 131072) = 71009
lseek(7, 0, SEEK_SET)                   = 0
read(7, "\5\0\0\201\0\353c\220\20\216\320\274\0\0\260\270\0\0\216\330\216\300\0\373\276\0|\277\0\6\271\0"..., 131072) = 131072
read(7, "ctl\25,\36e\2\276T\16\276attHrib\6\353st.|2\22e\10Dar"..., 7471104) = 7471104
read(7, "\0\374]\257\332\274\226\226\317\0\212\345\223Z\356\213Zw\0\347\313\34557\7\177v\0\237\270^\273"..., 131072) = 71009
fstat(6, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
mmap(NULL, 131072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2e597ac000
write(6, "\353c\220\20\216\320\274\0\260\270\0\0\216\330\216\300\373\276\0|\277\0\6\271\0\2\363\244\352!\6\0"..., 10485760) = 10485760
close(7)                                = 0
munmap(0x2e5978c000, 131072)            = 0
::: detail :::   0.05% done | block       1 out of    2048                                   | Done   0.00%) = 135
access("/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/0/8/4/1/a/0841a2831ad01edb010183ea62baee3cb5ba3c76", F_OK) = 0
stat("/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/0/8/4/1/a/0841a2831ad01edb010183ea62baee3cb5ba3c76", {st_mode=S_IFREG|0644, st_size=11359593, ...}) = 0
open("/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/0/8/4/1/a/0841a2831ad01edb010183ea62baee3cb5ba3c76", O_RDONLY) = 7
mmap(NULL, 11362304, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2e597cc000
fstat(7, {st_mode=S_IFREG|0644, st_size=11359593, ...}) = 0
mmap(NULL, 131072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2e5978c000
read(7, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 11272192) = 11272192
read(7, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 87401
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
close(0)                                = 0
write(1, "\n", 1
)                       = 1
write(1, "\33[90m---------------------------"..., 117-----------------------------------------------------------------------------------------------------------
) = 117
write(1, "\n", 1
)                       = 1
write(1, "\33[90m---------------------------"..., 117-----------------------------------------------------------------------------------------------------------
) = 117
write(1, "SIGTERM (11) condition was trapp"..., 64SIGTERM (11) condition was trapped: check logs for more details

Didn't know about --debug-print & strace. Nice!
I checked 0841a2831ad01edb010183ea62baee3cb5ba3c76 and it's there.

Thanks!


#4 2022-12-13 20:17:01

admin
Administrator
Registered: 2017-04-21
Posts: 2,000

Re: Checking backups

Can you post an ls -la of the block?

ls -la /vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/0/8/4/1/a/0841a2831ad01edb010183ea62baee3cb5ba3c76

UPDATE:

Download the 1.6.0.0 RC preview from the user area and use it to restore instead, it contains some additional debug messages. You can just overwrite the main xsibackup file.


#5 2022-12-13 20:43:04

wowbagger
Member
Registered: 2017-05-11
Posts: 41

Re: Checking backups

Thanks for your quick reply!

[root@olympus:~] ls -la /vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/0/8/4/1/a/0841a2831ad01edb010183ea62baee3cb5ba3c76
-rw-r--r--    1 root     root      11359593 Nov 25 03:12 /vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/0/8/4/1/a/0841a2831ad01edb010183ea62baee3cb5ba3c76

The repository is on an NFS share. I tried copying the file to another location on the ESXi server, and that worked.
I also checked that I could generate a hash from it on ESXi, and that worked too, so I think the file should be all right.
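For what it's worth, a repeated-read comparison can help rule out unstable NFS reads as the cause: hash the block as read over NFS and hash a local copy, then compare. A minimal sketch, using the block path from the strace above; /tmp is assumed to be local storage.

```shell
# Hypothetical cross-check: hash the block as read over NFS and a local copy.
# Differing hashes would point at unreliable NFS reads rather than a bad block.
SRC=/vmfs/volumes/0c6d4303-b890de35/xsi_repos/2021_DC/data/0/8/4/1/a/0841a2831ad01edb010183ea62baee3cb5ba3c76
cp "$SRC" /tmp/block.copy
h1=$(sha1sum "$SRC" | cut -d' ' -f1)
h2=$(sha1sum /tmp/block.copy | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "reads are consistent" || echo "reads differ"
rm -f /tmp/block.copy
```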

I will try 1.6.0.0 now.


#6 2022-12-13 20:49:01

wowbagger
Member
Registered: 2017-05-11
Posts: 41

Re: Checking backups

It seems I can no longer log in to the user area... support ended. :(
I also tried the --check=full option, and that gives the same problem, so perhaps the file is corrupt after all.


#7 2022-12-14 16:31:30

admin
Administrator
Registered: 2017-04-21
Posts: 2,000

Re: Checking backups

Could be. Still, we will revise the logic.
