LVM Disk recovery: Testdisk info inconsistent with LVM backup metadata


LVM Disk recovery: Testdisk info inconsistent with LVM backup metadata

#1 Post by kitatech »

Hello,

I used Testdisk for the first time, but after three days I still cannot see how to recover all the data, although it should be rather easy ...

Summary
  • I have a 3TB secondary data disk that was holding a big Linux LVM Physical Volume (PV) + Logical Volumes (LV).
  • I lost access to it after mistakenly creating a GUID Partition Table (GPT) on it with gdisk.
  • Connecting that disk to a new system, Testdisk identifies partitions, but they seem inconsistent in size and location with the LVM metadata backup file that I had saved.
Question: what is my best course of action from here? These are the options I have been considering:
  1. Either: Use Testdisk to recover the LVM structure of PV, LVs, extents ?
    Doubts:
    1. This LVM PV was most probably built from the whole disk (/dev/sdb) without a partition table, as LVM allows, and as recorded in the webmin.log history of LVM commands.
    2. Testdisk's deeper search finds partition start and end sectors, sizes and a number of partitions that are only partially consistent with the LVM metadata backup file (see the tables below). Are the differences a sign that Testdisk is lost? Why do some figures match and others not?
  2. Or: Wipe the GPT partition table with wipefs, then try pvscan to recover the PV in LVM? (A rough command sketch for options 2 and 3 follows after this list.)
    Doubt: I fear gdisk overwrote the PV metadata with the GPT partition table, so pvscan would fail: GPT writes its metadata at both the beginning and the end of the disk, whereas LVM by default writes its metadata only at the beginning.
  3. Or: Use LVM vgcfgrestore with the backup file to recover the PV in LVM?
    Doubt: can I merge the backup file configuration with the pre-existing LVM configuration of the recovery OS?
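
For reference, here is roughly what I had in mind for options 2 and 3. This is only an untested sketch: I would run anything destructive only after imaging the disk first, and the backup file path below is just where I would copy the /etc/lvm/backup file restored with Testdisk.

Code: Select all

# Option 2 (only after taking a full image of /dev/sdb, e.g. with ddrescue):
sudo wipefs --all --backup /dev/sdb     # remove the accidental GPT signatures, keeping a backup
sudo pvscan --cache                     # ask LVM to rescan the disk for the physical volume
sudo pvs; sudo vgs; sudo lvs            # see what LVM now detects

# Option 3, dry run first (the path is where I would copy the restored backup file):
sudo vgcfgrestore --test --file /root/restored-lvm-backup/example-r610-003-vg example-r610-003-vg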
Details

Testdisk results:

Code: Select all

+------------+----------------------+----------------+------------------+------------------+----------+-------------------------------------------------------------------------------------------------------------------------------------+
| Partition  |                      | Start          | End              | Size in sectors  | GB size  | Note                                                                                                                                |
+============+======================+================+==================+==================+==========+=====================================================================================================================================+
| P          | Linux filesys. Data  |  2,048         |  419,432,447     |  419,430,400     |  200     | "Mostly readable in Testdisk, containing previous system root / , including the precious:                                           |
|            |                      |                |                  |                  |          | - /var/webmin/webmin.log                                                                                                            |
|            |                      |                |                  |                  |          | - /etc/fstab                                                                                                                        |
|            |                      |                |                  |                  |          | - /etc/lvm/backup, archive etc...                                                                                                   |
|            |                      |                |                  |                  |          | - /var/log/syslog, boot.log, etc …                                                                                                  |
|            |                      |                |                  |                  |          | Same size as LVM root LV metadata combined of 2 extents 187+13 = 200                                                                |
|            |                      |                |                  |                  |          | ext4 format as per /etc/fstab "                                                                                                     |
| P          | Linux filesys. Data  |  420,302,344   |  462,018,015     |  41,715,672      |  20      | Testdisk can't read content, not in LVM metadata                                                                                    |
| P          | Linux filesys. Data  |  462,649,168   |  502,486,863     |  39,837,696      |  19      | Testdisk can't read content, not in LVM metadata                                                                                    |
| P          | Linux filesys. Data  |  502,527,808   |  544,243,479     |  41,715,672      |  20      | Testdisk can't read content, not in LVM metadata                                                                                    |
| >P         | Linux filesys. Data  |  547,170,302   |  3,588,040,701   |  3,040,870,400   |  1,450   | "Testdisk can't read content, same size as the LVM SATA1TBRAID1-01 LV metadata backup file information, but different start sector  |
|            |                      |                |                  |                  |          | ext4 format as per /etc/fstab "                                                                                                     |
+------------+----------------------+----------------+------------------+------------------+----------+-------------------------------------------------------------------------------------------------------------------------------------+

Webmin LVM log trace /var/webmin/webmin.log:

Code: Select all

1492751439.12132.0 [21/Apr/2017 13:10:39] example-admin f5aaa555a7b8f1508a3b42685f0521a2 10.1.80.1 lvm save_lv.cgi "modify" "lv" "root" alloc='n' device='/dev/example-r610-003-vg/root' name='root' number='0' perm='rw' readahead='auto' size='115343360' vg='example-r610-003-vg'
1494324024.15217.0 [09/May/2017 18:00:24] example-admin 5713a0500065ce60e1d3ff8e6a67a89a 10.1.80.1 lvm save_lv.cgi "modify" "lv" "root" alloc='n' device='/dev/example-r610-003-vg/root' name='root' number='0' perm='rw' readahead='auto' size='157286400' vg='example-r610-003-vg'
1496294667.27848.0 [01/Jun/2017 13:24:27] example-admin d8f6c0a258f6c7e834d869e6b1fbf921 10.1.80.1 lvm save_lv.cgi "modify" "lv" "root" alloc='n' device='/dev/example-r610-003-vg/root' name='root' number='0' perm='rw' readahead='auto' size='191664128' vg='example-r610-003-vg'
1501515469.4275.0 [31/Jul/2017 23:37:49] example-admin d7f285ed52f4d39d52e9dec4caa8bc9f 10.1.80.1 lvm save_lv.cgi "create" "lv" "SATA1TBRAID1-01" alloc='n' name='SATA1TBRAID1-01' perm='rw' readahead='auto' size='33' size_of='sdb' stripesize='' vg='example-r610-003-vg'
1501515521.4388.0 [31/Jul/2017 23:38:41] example-admin d7f285ed52f4d39d52e9dec4caa8bc9f 10.1.80.1 lvm mkfs.cgi "mkfs" "lv" "/dev/example-r610-003-vg/SATA1TBRAID1-01" dev='/dev/example-r610-003-vg/SATA1TBRAID1-01' ext2_b_def='1' ext2_c='0' ext2_f_def='1' ext2_i_def='1' ext2_m_def='1' ext3_j_def='1' fs='ext4'
1501515847.4662.0 [31/Jul/2017 23:44:07] example-admin d7f285ed52f4d39d52e9dec4caa8bc9f 10.1.80.1 mount save_mount.cgi "create" "dir" "/NFSlocal/SATA1TBRAID1-01" dev='/dev/example-r610-003-vg/SATA1TBRAID1-01' dir='/NFSlocal/SATA1TBRAID1-01' opts='sync' type='ext4'
1501521941.7240.0 [01/Aug/2017 01:25:41] example-admin d7f285ed52f4d39d52e9dec4caa8bc9f 10.1.80.1 mount save_mount.cgi "create" "dir" "/NFSremote/backup-sata-nfs02" dev='example-r610-003:/NFSlocal/SATA1TBRAID1-01/backup-sata-nfs02' dir='/NFSremote/backup-sata-nfs02' opts='sync,_netdev,soft,nfsvers=3,intr,bg' type='nfs'
1502724264.3328.0 [14/Aug/2017 23:24:24] example-admin 90e1f8aa7a5876508a06246726a1832a 10.1.80.1 lvm save_lv.cgi "modify" "lv" "SATA1TBRAID1-01" alloc='n' device='/dev/example-r610-003-vg/SATA1TBRAID1-01' name='SATA1TBRAID1-01' number='2' perm='rw' readahead='auto' size='471859200' vg='example-r610-003-vg'
1502724383.4408.0 [14/Aug/2017 23:26:23] example-admin 90e1f8aa7a5876508a06246726a1832a 10.1.80.1 lvm save_lv.cgi "modify" "lv" "SATA1TBRAID1-01" alloc='n' device='/dev/example-r610-003-vg/SATA1TBRAID1-01' name='SATA1TBRAID1-01' number='2' perm='rw' readahead='auto' size='681574400' vg='example-r610-003-vg'
1502724412.4523.0 [14/Aug/2017 23:26:52] example-admin 90e1f8aa7a5876508a06246726a1832a 10.1.80.1 lvm save_lv.cgi "modify" "lv" "SATA1TBRAID1-01" alloc='n' device='/dev/example-r610-003-vg/SATA1TBRAID1-01' name='SATA1TBRAID1-01' number='2' perm='rw' readahead='auto' size='1310720000' vg='example-r610-003-vg'
1502724436.4648.0 [14/Aug/2017 23:27:16] example-admin 90e1f8aa7a5876508a06246726a1832a 10.1.80.1 lvm save_lv.cgi "modify" "lv" "SATA1TBRAID1-01" alloc='n' device='/dev/example-r610-003-vg/SATA1TBRAID1-01' name='SATA1TBRAID1-01' number='2' perm='rw' readahead='auto' size='1520435200' vg='example-r610-003-vg'
1571621226.18302.0 [21/Oct/2019 09:27:06] example-admin 3742f320edb50380696a8426af65a25a 10.1.80.181 lvm save_lv.cgi "modify" "lv" "root" alloc='n' device='/dev/example-r610-003-vg/root' name='root' number='0' perm='rw' readahead='auto' size='209715200' vg='example-r610-003-vg'


LVM metadata backup file (presented as a table); only the physical volume pv1 is relevant. Note that the "sector size" column is actually the segment size in sectors.

Code: Select all

extent_size = 8192   # 8192 sectors = 4 Megabytes per extent
disk size (sdb) ≈ 2.73 TB
+------------------------+-----+------+------------------+---------+-----------+-----------+------------------+----------+---------------+---------------+----------+------------------+----------+------------------+------------------+
| VG                     | PV  | dev  | dev_size         | Dev GB  | pe_start  | pe_count  | LV               | Segment  | start_extent  | extent_count  | stripes  | sector size      | GB size  | sector start     | sector end       |
+========================+=====+======+==================+=========+===========+===========+==================+==========+===============+===============+==========+==================+==========+==================+==================+
| example-r610-003-vg {  | pv0 | sda5 |  141,080,560     |  67     |  2,048    |  17,221   | swap_1           | segment1 | 0             | 5,116         | 0        |  41,910,272      |  20      |  2,048           |  41,912,319      |
| example-r610-003-vg {  | pv1 | sdb  |  5,857,345,536   |  2,793  |  2,048    |  715,007  | root             | segment1 | 0             | 46,793        | 0        |  383,328,256     |  183     |  2,048           |  383,330,303     |
| example-r610-003-vg {  | pv1 | sdb  |  5,857,345,536   |  2,793  |  2,048    |  715,007  | root             | segment2 | 46,793        | 4,407         | 417,993  |  36,102,144      |  17      |  3,424,200,704   |  3,460,302,847   |
| example-r610-003-vg {  | pv1 | sdb  |  5,857,345,536   |  2,793  |  2,048    |  715,007  | SATA1TBRAID1-01  | segment1 | 0             | 371,200       | 46,793   |  3,040,870,400   |  1,450   |  383,330,304     |  3,424,200,703   |
+------------------------+-----+------+------------------+---------+-----------+-----------+------------------+----------+---------------+---------------+----------+------------------+----------+------------------+------------------+

/etc/fstab excerpt (restored with Testdisk)

Code: Select all

/dev/mapper/example--r610--003--vg-root /               ext4    errors=remount-ro 0       1
/dev/mapper/example--r610--003--vg-swap_1 none            swap    sw              0       0
/dev/example-r610-003-vg/SATA1TBRAID1-01        /NFSlocal/SATA1TBRAID1-01       ext4    sync    0       2
Last edited by kitatech on 09 Jun 2022, 05:01, edited 1 time in total.

Re: LVM Disk recovery: Testdisk info inconsistent with LVM backup metadata

#2 Post by recuperation »

Would you be so kind as to replace all abbreviations with their full names, except for the term LVM?

Thank you.

Re: LVM Disk recovery: Testdisk info inconsistent with LVM backup metadata

#3 Post by kitatech »

Thank you for your response. OK, I have expanded and/or explained most of the acronyms and added reference links for the abbreviations.

Re: LVM Disk recovery: Testdisk info inconsistent with LVM backup metadata

#4 Post by recuperation »

Disclaimer: I have no experience with the use of LVM. All information below is based on reading various sources on the internet.

As you can read here, the only thing Testdisk can do for you is to find LVM partitions - and rebuild a partition table of course:
https://www.cgsecurity.org/wiki/TestDisk

Unfortunately, Testdisk failed here because it did not find any LVM-type partitions.
And that is where Testdisk support ends!

By writing a GPT partition table you have overwritten the first 34 sectors of the drive.

Code: Select all

extent_size = 8192   # 8192 sectors = 4 Megabytes per extent
disk size (sdb) ≈ 2.73 TB
+------------------------+-----+------+------------------+---------+-----------+-----------+------------------+----------+---------------+---------------+----------+------------------+----------+------------------+------------------+
| VG                     | PV  | dev  | dev_size         | Dev GB  | pe_start  | pe_count  | LV               | Segment  | start_extent  | extent_count  | stripes  | sector size      | GB size  | sector start     | sector end       |
+========================+=====+======+==================+=========+===========+===========+==================+==========+===============+===============+==========+==================+==========+==================+==================+
| example-r610-003-vg {  | pv0 | sda5 |  141,080,560     |  67     |  2,048    |  17,221   | swap_1           | segment1 | 0             | 5,116         | 0        |  41,910,272      |  20      |  2,048           |  41,912,319      |
| example-r610-003-vg {  | pv1 | sdb  |  5,857,345,536   |  2,793  |  2,048    |  715,007  | root             | segment1 | 0             | 46,793        | 0        |  383,328,256     |  183     |  2,048           |  383,330,303     |
| example-r610-003-vg {  | pv1 | sdb  |  5,857,345,536   |  2,793  |  2,048    |  715,007  | root             | segment2 | 46,793        | 4,407         | 417,993  |  36,102,144      |  17      |  3,424,200,704   |  3,460,302,847   |
| example-r610-003-vg {  | pv1 | sdb  |  5,857,345,536   |  2,793  |  2,048    |  715,007  | SATA1TBRAID1-01  | segment1 | 0             | 371,200       | 46,793   |  3,040,870,400   |  1,450   |  383,330,304     |  3,424,200,703   |
+------------------------+-----+------+------------------+---------+-----------+-----------+------------------+----------+---------------+---------------+----------+------------------+----------+------------------+------------------+

But this information seems to suggest that the lowest sector used by your physical volume is 2048.
Therefore I don't assume that you have overwritten the content of your file system(s) by writing the 34 sectors of the GPT partition table.
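
If you want to check, read-only, whether the LVM label and metadata survived the GPT write, something like this could help. It is only a sketch; I assume the old data disk shows up as /dev/sdb on your recovery system. The LVM label normally sits in sector 1 and starts with the string LABELONE, and the text metadata with the volume group name follows within roughly the first megabyte.

Code: Select all

# Read-only check, assuming the old data disk is /dev/sdb on the recovery machine.
# The LVM label normally lives in sector 1 and begins with "LABELONE":
sudo dd if=/dev/sdb bs=512 count=8 2>/dev/null | hexdump -C | grep -B1 -A3 LABELONE

# The text metadata (volume group name, LV layout) is usually within the first 1 MiB:
sudo dd if=/dev/sdb bs=1M count=1 2>/dev/null | strings | grep example-r610-003-vg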

Read:
viewtopic.php?t=6750

Here is a recipe for your LVM rebuild (not a feature of Testdisk):
https://www.golinuxcloud.com/recover-lv ... e_in_Linux
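
Roughly, that recipe boils down to something like the following. This is only a sketch based on my reading, not tested by me: the PV UUID has to be taken from the pv1 section of your metadata backup file, and the device name /dev/sdb and the backup file path are assumptions.

Code: Select all

# 1. Remove the accidental GPT signatures (destructive - double-check the device name!):
sudo wipefs --all /dev/sdb

# 2. Re-create the physical volume with its old UUID, taken from the pv1 section
#    of the restored metadata backup file:
sudo pvcreate --uuid "<pv1-uuid-from-backup>" \
              --restorefile /path/to/backup/example-r610-003-vg /dev/sdb

# 3. Restore the volume group metadata from the same backup file:
sudo vgcfgrestore --file /path/to/backup/example-r610-003-vg example-r610-003-vg

# 4. Activate the volume group and check the file system read-only first:
sudo vgchange -ay example-r610-003-vg
sudo fsck -n /dev/example-r610-003-vg/SATA1TBRAID1-01

Note that your backup file also references pv0 (sda5) from the old system; if that device is not present on the recovery machine, vgcfgrestore may complain, so take this as the general shape of the procedure rather than exact commands.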

I recommend setting up another Linux machine (I don't know if a live Linux would be sufficient) with two extra disks. Set up an LVM structure: use the space of the two extra disks as physical volumes, build the volume group, and create a logical volume big enough that, when filled, it has to occupy more than one physical volume.
Write some files whose content can be verified using f3 "fight flash fraud".

Then repeat your error by writing a GPT table.
Apply the recipe from the link above.
See if you regain access to your logical volume and run f3 to verify your files.
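
For the test setup, something along these lines should do. Again only a sketch: /dev/sdc and /dev/sdd stand for the two spare test disks, and testvg/testlv are names I made up.

Code: Select all

# Two spare disks become physical volumes of one volume group:
sudo pvcreate /dev/sdc /dev/sdd
sudo vgcreate testvg /dev/sdc /dev/sdd

# One logical volume large enough that it has to span both physical volumes:
sudo lvcreate -l 100%FREE -n testlv testvg
sudo mkfs.ext4 /dev/testvg/testlv
sudo mount /dev/testvg/testlv /mnt

# Fill it with verifiable files and check them with f3 ("fight flash fraud"):
sudo f3write /mnt && sudo f3read /mnt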