recuperation wrote: 15 Nov 2020, 14:30
arm512 wrote: 15 Nov 2020, 12:42
And in his case, ZFS is detected, but none of my ZFS partitions, whether lost or healthy, is visible to TestDisk. Why is that?
If your children are kidnapped, they are lost in a sense, but you can never say they are healthy.
The same applies to your ZFS partitions. There is no evidence that they are healthy.
Unfortunately, you did not bother to provide a log file. Just by looking at the locations of the other partitions, you might be able to guess where the first ZFS partition should start.
Here is something you could do:
Get a new drive.
Partition that drive as GPT with just one ZFS partition. The partition does not have to fill the whole drive; this might work with a USB stick as well.
Using the machine and configuration that you used to create the now-broken pool, create a new pool on the freshly created partition.
Run TestDisk and save the log file. That will show you whether TestDisk performs correctly under a clean, healthy configuration.
It should provide you with the first and last sector number of your ZFS partition, too.
Using a hex editor, zero out the GPT and its backup.
Now rerun TestDisk and see what it reports.
By healthy partitions I meant partitions on other drives with an intact GPT; TestDisk doesn't see those either.
I don't think a loop device would behave any worse than a USB stick.
So I tried what you advised, but on a loop device:
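Roughly, the test setup was something like this (backing file path and pool name are illustrative; the device name and partition bounds are the ones in the log below):
Code: Select all
# sparse 20 GB backing file, attached with partition scanning enabled
truncate -s 20G /tmp/zfs-test.img
losetup -fP --show /tmp/zfs-test.img          # prints e.g. /dev/loop26
# one GPT partition covering most of the device (default Linux type,
# which is why TestDisk lists it as "Linux filesys. data")
sgdisk -n 1:2048:39061503 /dev/loop26
# fresh pool on that partition, exported before running TestDisk
zpool create testpool /dev/loop26p1
zpool export testpool
testdisk /log /dev/loop26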
Before wiping GPT:
Code: Select all
No partition found or selected for recovery
Log:
Code: Select all
Sun Nov 15 15:54:10 2020
Command line: TestDisk /log /dev/loop26
TestDisk 7.1, Data Recovery Utility, July 2019
Christophe GRENIER <grenier@cgsecurity.org>
https://www.cgsecurity.org
OS: Linux, kernel 5.7.0-rc5-zbod-ym29 (#1 SMP Wed May 13 19:59:50 EEST 2020) x86_64
Compiler: GCC 9.2
ext2fs lib: 1.45.5, ntfs lib: libntfs-3g, reiserfs lib: none, ewf lib: none, curses lib: ncurses 6.1
Hard disk list
Disk /dev/loop26 - 20 GB / 18 GiB - 39062500 sectors, sector size=512
Partition table type (auto): EFI GPT
Disk /dev/loop26 - 20 GB / 18 GiB
Partition table type: EFI GPT
Analyse Disk /dev/loop26 - 20 GB / 18 GiB - 39062500 sectors
Current partition structure:
1 P Linux filesys. data 2048 39061503 39059456
search_part()
Disk /dev/loop26 - 20 GB / 18 GiB - 39062500 sectors
Search for partition aborted
interface_write()
No partition found or selected for recovery
simulate write!
TestDisk exited normally.
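(For wiping, dd works instead of a hex editor; a sketch, with the total sector count taken from the log below. The primary GPT lives in LBA 0-33, the backup in the last 33 sectors:)
Code: Select all
# protective MBR + primary GPT header + partition entry array
dd if=/dev/zero of=/dev/loop26 bs=512 count=34
# backup partition entry array + backup header at the end of the device
dd if=/dev/zero of=/dev/loop26 bs=512 seek=$((3906250 - 33)) count=33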
After wiping GPT the same:
Code: Select all
No partition found or selected for recovery
Log:
Code: Select all
Sun Nov 15 16:05:34 2020
Command line: TestDisk /log /dev/loop26
TestDisk 7.1, Data Recovery Utility, July 2019
Christophe GRENIER <grenier@cgsecurity.org>
https://www.cgsecurity.org
OS: Linux, kernel 5.7.0-rc5-zbod-ym29 (#1 SMP Wed May 13 19:59:50 EEST 2020) x86_64
Compiler: GCC 9.2
ext2fs lib: 1.45.5, ntfs lib: libntfs-3g, reiserfs lib: none, ewf lib: none, curses lib: ncurses 6.1
Hard disk list
Disk /dev/loop26 - 2000 MB / 1907 MiB - 3906250 sectors, sector size=512
Partition table type defaults to Intel
Disk /dev/loop26 - 2000 MB / 1907 MiB
Partition table type: EFI GPT
Analyse Disk /dev/loop26 - 2000 MB / 1907 MiB - 3906250 sectors
Bad GPT partition, invalid signature.
Trying alternate GPT
Bad GPT partition, invalid signature.
Current partition structure:
Bad GPT partition, invalid signature.
Trying alternate GPT
Bad GPT partition, invalid signature.
search_part()
Disk /dev/loop26 - 2000 MB / 1907 MiB - 3906250 sectors
interface_write()
No partition found or selected for recovery
search_part()
Disk /dev/loop26 - 2000 MB / 1907 MiB - 3906250 sectors
interface_write()
No partition found or selected for recovery
simulate write!
TestDisk exited normally.
So, even under a clean, healthy configuration TestDisk doesn't see ZFS at all.
Of course, I could test ZFS created on NetBSD, on Illumos/OpenSolaris derivatives, on FreeBSD, and with the original Oracle Solaris ZFS. It would be interesting. I suspect TestDisk would see either Illumos-created ZFS, or ZFS on a Sun partition table nested inside a GPT partition (GRUB path 'hd0,gpt1,sunpc1', similar to FreeBSD's MBR setup /dev/ad0s1a, GRUB's 'hd0,msdos1,bsd1'). In the Solaris layout there would also be a partition 9 just after the pool ends... But that's not my case; I use the ZoL version of ZFS.
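A quick way to confirm that the ZFS labels TestDisk is missing are actually on the partition is to dump them directly (device name illustrative):
Code: Select all
# dump the on-disk vdev label(s): pool name, GUIDs, vdev tree
zdb -l /dev/loop26p1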
The end of the partition doesn't matter for recovery:
Code: Select all
# attach a loop device at the partition's byte offset (-f = first free)
losetup -o $OFFSET -f /dev/sd$LETTER
# force-import the pool read-only, without mounting (-N), from the loop device only
zpool import -o readonly=on -fN $ZPOOL -d /dev/$LOOP_DEVICE
# replicate the whole pool somewhere safe
zfs send -R .... | zfs recv ...
zpool will see its end after being imported, provided autoexpand=off, of course.
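If in doubt, the property can be checked on the imported pool:
Code: Select all
zpool get autoexpand $ZPOOL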