RAID and LVM - is there a chance of recovery?

How to use TestDisk to recover lost partition
Forum rules
When asking for technical support:
- Search for posts on the same topic before posting a new question.
- Give clear, specific information in the title of your post.
- Include as many details as you can, MOST POSTS WILL GET ONLY ONE OR TWO ANSWERS.
- Post a follow up with a "Thank you" or "This worked!"
- When you learn something, use that knowledge to HELP ANOTHER USER LATER.
Before posting, please read https://www.cgsecurity.org/testdisk.pdf
acoder
Posts: 3
Joined: 24 Sep 2015, 00:07

RAID and LVM - is there a chance of recovery?

#1 Post by acoder »

Earlier today I had to reinstall CentOS. I have three volumes: a 300 GB drive for the OS, and two data drives (1 TB and 2 TB respectively). During installation I was asked which of these drives to install on. I selected the 300 GB drive and left the other two designated as "storage", meaning they would not be formatted. I confirmed this by reviewing the proposed partitioning in the installer; note that sdb and sdc are not checked for formatting.

[Screenshot: installer partition layout, with sdb and sdc not checked for formatting]

So after completing the installation, I discovered that neither of my data drives was showing up as a valid volume. In short, Linux sees them as empty. The data on those two drives is critical: the CentOS reinstall happened in the first place because of network stack issues that were causing backups to fail silently.
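By "empty" I mean the drives show up as block devices, but nothing mounts and no volume group is visible on them. This is roughly how I checked (exact invocations from memory, so treat it as a sketch):

Code: Select all

    # confirm the kernel sees the partitions at all
    cat /proc/partitions
    # check what signatures blkid reports on the data partitions
    blkid /dev/sdb1 /dev/sdc1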

Here's output from vgck -vvv: https://gist.github.com/anonymous/076cd514c42ec1d0d356

More system info is posted over on Stack Exchange: http://serverfault.com/questions/724369 ... vm2-volume

As a last resort, I installed TestDisk and ran a basic Search. It sees all three drives and shows the two data drives as LVM2 volumes.
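For reference, the invocation was nothing special - just TestDisk pointed at the disk, with logging turned on so I have a record of what it found (roughly, from memory):

Code: Select all

    # open one data disk in TestDisk and write a testdisk.log for later
    testdisk /log /dev/sdb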

I selected the sdb option and started a Deeper Search, just to see what it finds, if anything. That scan is currently running:

[Screenshot: TestDisk Deeper Search running against sdb]

TestDisk finished running, and the results are not pretty.

[Screenshots: TestDisk Deeper Search results]

This actually seems like "good" news... that file difference is my data:
[Screenshot: TestDisk results showing the difference]
Last edited by acoder on 24 Sep 2015, 15:00, edited 2 times in total.

acoder
Posts: 3
Joined: 24 Sep 2015, 00:07

Re: Data disks show as empty after CentOS installation

#2 Post by acoder »

Here's a little more info on the system in question:

The system has two arrays of drives: one for the OS (CentOS 6), the other for data. Here is the physical disk inventory of the machine:

Code: Select all

    #	Description										Total Gigs
    2	HARD DRIVE, 300GB, SAS6, 10, 2.5, H-CE, E/C		 600
    6	HARD DRIVE, 600G, SAS6, 10, 2.5, W-SIR, E/C		3600
Viewing the data drives with parted:

Code: Select all

[root@ursula ~]# parted /dev/sdb 'print'
Model: DELL PERC H710 (scsi)
Disk /dev/sdb: 1979GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1979GB  1979GB  primary               lvm

[root@ursula ~]# parted /dev/sdc 'print'
Model: DELL PERC H710 (scsi)
Disk /dev/sdc: 1019GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1019GB  1019GB  primary               lvm

The two smaller drives are mirrored as a 300 GB array - this is where the OS lives.
The rest of the drives were used for data storage and were also mirrored... I think.

Here's how they are seen by Linux right now:

Code: Select all

    [root@ursula ~]# pvscan
      PV /dev/sda5   VG vg_ursula   lvm2 [276.34 GiB / 0    free]
      PV /dev/sdb1                  lvm2 [1.80 TiB]
      PV /dev/sdc1                  lvm2 [948.67 GiB]
      Total: 3 [3.00 TiB] / in use: 1 [276.34 GiB] / in no VG: 2 [2.73 TiB]
The good news is that, going by the sizes listed right above, it appears my data is still on the data drives... in some configuration or another.

The painful part is that there aren't any backups of the previous volume group config - just the web application and database (and those are stale, since I didn't catch the original network issue until several weeks had passed).
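Normally LVM keeps archived copies of the VG config under /etc/lvm/archive and /etc/lvm/backup, but those directories lived on the OS volume that was just reinstalled, so I'm not expecting anything useful there. Worth a quick look on the fresh install anyway:

Code: Select all

    # LVM archives VG metadata here; on a fresh install these presumably only cover the new vg_ursula
    ls -la /etc/lvm/archive/ /etc/lvm/backup/
    # does any archived config mention the data PVs?
    grep -l 'sdb1\|sdc1' /etc/lvm/archive/* /etc/lvm/backup/* 2>/dev/null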

So - the problem is that I am unsure whether sdb1 and sdc1 were previously joined into a single volume group under LVM, or used as two separate volumes by the OS.

I used dd to read the first megabyte from each of these two partitions. The output was identical for sdb1 and sdc1.

Code: Select all

    dd if=/dev/sdb1 bs=1M count=1 | strings -n 16 > dd_sdb1_output.txt
dd output here: https://gist.github.com/anonymous/d3de8a57c477e62c8eeb
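Since the LVM text metadata area sits right after the PV label near the start of the partition (and is about 1 MiB by default), dumping the first couple of MiB with a smaller strings threshold should capture every metadata record, including short lines like the VG name and braces that -n 16 drops. Something along these lines (filenames are just mine):

Code: Select all

    # dump the start of each data PV and pull out all readable text
    dd if=/dev/sdb1 bs=1M count=2 2>/dev/null | strings -n 4 > sdb1_meta.txt
    dd if=/dev/sdc1 bs=1M count=2 2>/dev/null | strings -n 4 > sdc1_meta.txt
    # each PV in a VG carries a full copy of the VG metadata, so if both
    # partitions belonged to the same VG these should be near-identical
    diff sdb1_meta.txt sdc1_meta.txt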


`vgscan` only shows the OS drive volume group, 'vg_ursula'.

Code: Select all

      Reading all physical volumes.  This may take a while...
      Found volume group "vg_ursula" using metadata type lvm2
`lvscan` only shows the OS drive logical volumes.

Code: Select all

      ACTIVE            '/dev/vg_ursula/LogVol05' [29.30 GiB] inherit
      ACTIVE            '/dev/vg_ursula/LogVol04' [48.83 GiB] inherit
      ACTIVE            '/dev/vg_ursula/LogVol03' [48.83 GiB] inherit
      ACTIVE            '/dev/vg_ursula/lv_root' [111.74 GiB] inherit
      ACTIVE            '/dev/vg_ursula/lv_home' [9.77 GiB] inherit
      ACTIVE            '/dev/vg_ursula/lv_swap' [27.89 GiB] inherit
`file -s /dev/sdb1`

Code: Select all

    /dev/sdb1: LVM2 (Linux Logical Volume Manager) , UUID: B1bLeFveeDcnfZ2i0tuqWtHgSd6UAgM
`file -s /dev/sdc1`

Code: Select all

    /dev/sdc1: LVM2 (Linux Logical Volume Manager) , UUID: SMMVLUKEuBPHuTeoarMkDAlJDDY1Gm2
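Those look like the usual LVM PV UUIDs with the dashes stripped. To get them in their standard dashed form, for comparison against whatever old metadata turns up, the stock LVM tools should do it:

Code: Select all

    # show PV names, full UUIDs, any VG association and sizes
    pvs -o pv_name,pv_uuid,vg_name,dev_size
    pvdisplay /dev/sdb1 /dev/sdc1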

output from `vgck -vvv`: https://gist.github.com/anonymous/076cd514c42ec1d0d356

**TL;DR:** System has multiple drive arrays. OS was reinstalled on separate array. Data storage drives were not formatted or otherwise written to - but may have been previously joined under LVM. Is the data recoverable?

acoder
Posts: 3
Joined: 24 Sep 2015, 00:07

Re: RAID and LVM - is there a chance of recovery?

#3 Post by acoder »

A little more info.

Here's what I got from

`pvck -v /dev/sdb1`

Code: Select all

    [root@ursula rpms]# pvck -v /dev/sdb1
        Scanning /dev/sdb1
      Found label on /dev/sdb1, sector 1, type=LVM2 001
      Found text metadata area: offset=4096, size=1044480
        Found LVM2 metadata record at offset=34816, size=1013760, offset2=0 size2=0
        Found LVM2 metadata record at offset=32768, size=2048, offset2=0 size2=0
        Found LVM2 metadata record at offset=30720, size=2048, offset2=0 size2=0
        Found LVM2 metadata record at offset=28160, size=2560, offset2=0 size2=0
        Found LVM2 metadata record at offset=25088, size=3072, offset2=0 size2=0
        Found LVM2 metadata record at offset=22016, size=3072, offset2=0 size2=0
        Found LVM2 metadata record at offset=18432, size=3584, offset2=0 size2=0
        Found LVM2 metadata record at offset=15360, size=3072, offset2=0 size2=0
        Found LVM2 metadata record at offset=12800, size=2560, offset2=0 size2=0
        Found LVM2 metadata record at offset=10240, size=2560, offset2=0 size2=0
        Found LVM2 metadata record at offset=8192, size=2048, offset2=0 size2=0
        Found LVM2 metadata record at offset=6144, size=2048, offset2=0 size2=0
    [root@ursula rpms]# 

and
`pvck -v /dev/sdc1`

Code: Select all

   [root@ursula rpms]# pvck -v /dev/sdc1
        Scanning /dev/sdc1
      Found label on /dev/sdc1, sector 1, type=LVM2 001
      Found text metadata area: offset=4096, size=1044480
        Found LVM2 metadata record at offset=34816, size=1013760, offset2=0 size2=0
        Found LVM2 metadata record at offset=32768, size=2048, offset2=0 size2=0
        Found LVM2 metadata record at offset=30720, size=2048, offset2=0 size2=0
        Found LVM2 metadata record at offset=28160, size=2560, offset2=0 size2=0
        Found LVM2 metadata record at offset=25088, size=3072, offset2=0 size2=0
        Found LVM2 metadata record at offset=22016, size=3072, offset2=0 size2=0
        Found LVM2 metadata record at offset=18432, size=3584, offset2=0 size2=0
        Found LVM2 metadata record at offset=15360, size=3072, offset2=0 size2=0
        Found LVM2 metadata record at offset=12800, size=2560, offset2=0 size2=0
        Found LVM2 metadata record at offset=10240, size=2560, offset2=0 size2=0
        Found LVM2 metadata record at offset=8192, size=2048, offset2=0 size2=0
        Found LVM2 metadata record at offset=6144, size=2048, offset2=0 size2=0
    [root@ursula rpms]# 


This *looks* right since I think the drive array was RAID 1.

Does it look like the old vg config is here in the dd output? https://gist.github.com/anonymous/d3de8a57c477e62c8eeb

Code: Select all

    # Generated by LVM2 version 2.02.98(2)-RHEL6 (2012-10-15): Tue Sep 10 17:02:29 2013
    contents = "Text Format Volume Group"
    description = ""
    creation_host = "ursula"    # Linux ursula 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64
    creation_time = 1378846949	# Tue Sep 10 17:02:29 2013
    id = "DQJ1Yc-BaLP-dgK8-Bu3f-tass-fcTu-8dckdQ"
    format = "lvm2" # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    extent_size = 8192
    metadata_copies = 0
    physical_volumes {
    id = "5FLCMI-G326-EXQP-qGJH-ym0Z-st0S-bLV0gr"
    device = "/dev/sda5"
    status = ["ALLOCATABLE"]
    dev_size = 579540992
    pe_count = 70744
    id = "B1bLeF-veeD-cnfZ-2i0t-uqWt-HgSd-6UAgMA"
    device = "/dev/sdb1"
    status = ["ALLOCATABLE"]
    dev_size = 3865464832
    pe_count = 471858
    id = "SMMVLU-KEuB-PHuT-eoar-MkDA-lJDD-Y1Gm2g"
    device = "/dev/sdc1"
    status = ["ALLOCATABLE"]
    dev_size = 1989515264
    pe_count = 242860
    logical_volumes {
    id = "ZdFwA8-lj4Y-XwD0-hFMB-iHf8-kZdo-VfJy89"
    status = ["READ", "WRITE", "VISIBLE"]
    creation_host = "ursula"
    creation_time = 1378846949
    segment_count = 2
    start_extent = 0
    extent_count = 471858
    type = "striped"
    stripe_count = 1	# linear
    start_extent = 471858
    extent_count = 225240
    type = "striped"
    stripe_count = 1	# linear
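
If that really is the old VG metadata, my (untested) plan would be: carve the most recent complete metadata record out of the dump, repair it by hand into a proper backup file (strings -n 16 drops short lines such as the VG name and the closing braces, so the structure has to be rebuilt against a known-good example), and then ask LVM to restore it. Very roughly, and only after imaging both drives - names and paths below are placeholders:

Code: Select all

    # image both data drives first so nothing destructive happens to the originals
    dd if=/dev/sdb of=/mnt/scratch/sdb.img bs=64K conv=noerror,sync
    dd if=/dev/sdc of=/mnt/scratch/sdc.img bs=64K conv=noerror,sync
    # dry-run the restore with the hand-repaired metadata file
    # ("vg_data" is a placeholder - the real VG name has to come from the on-disk metadata)
    vgcfgrestore --test -f /root/recovered_vg.txt vg_data
    # if that looks sane, do it for real and activate
    vgcfgrestore -f /root/recovered_vg.txt vg_data
    vgchange -ay vg_data
    lvscan

Does that sound like a reasonable approach, or is there a safer way to get LVM to pick the old config back up?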
