partition table and filesystem recovery after raid failure


#1 Post by dhiaeddine » 28 Feb 2016, 05:03

Hi,
I've run into a hardware RAID disk failure, and the RAID 5 virtual disk went to RAID 0!
For recovery I have taken the following steps so far:
- deleted the RAID virtual disk
- replaced the failed disk (disk order maintained on the controller slots)
- recreated a RAID 5 virtual disk
- unplugged the new disk to push the RAID into degraded mode
- replaced it with another disk, which has now been rebuilt

Now booting the server leads to a grub> prompt with an error about an unknown partition, roughly like the sketch below.
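(A hedged illustration of probing partitions from that prompt, not a capture from my server; GRUB's own ls command is all it assumes:)

Code:

grub> ls
(hd0) (hd0,msdos1) (hd0,msdos2) (hd0,msdos3)
grub> ls (hd0,msdos1)/
error: unknown filesystem.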
parted shows 3 partitions:

Code:

#parted /dev/sda print
Error: Can't have a partition outside the disk!
Ignore/Cancel? I  
Model: DELL PERC H710 (scsi)
Disk /dev/sda: 1198GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start  End  Size  Type  File system  Flags
1  1049kB  537MB  536MB  primary  boot
2  537MB  599GB  598GB  primary  lvm
3  599GB  1497GB  898GB  primary  lvm

TestDisk discovered the following:

Code:

Disk /dev/sda - 1197 GB / 1115 GiB - CHS 145619 255 63                                            
     Partition               Start        End    Size in sectors                                  
>* Linux                  453 180 48  1759  32  1   20971520                                      
P HPFS - NTFS          11957 209 29 12002 112 14     716800                                      
P Linux                21794  54 21 25840 223 47   65009664                                      
L Linux                39304   0  1 45083 167 20   92850176        
sfdisk dump:

Code:

#1456609510
Disk /dev/sda - 1197 GB / 1115 GiB - CHS 145619 255 63
 1 : start=     2048, size=  1046528, Id=83, *
 2 : start=  1048576, size=1168637952, Id=8E, P
 3 : start=1169686528, size=1754529032, Id=8E, P
The TestDisk log is in the attached file.
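The parted complaint is consistent with these numbers: partition 3 would end past the last sector of the disk. A quick shell-arithmetic check, using only figures from the dumps above:

Code:

# total sectors from the CHS geometry TestDisk reports
echo $(( 145619 * 255 * 63 ))        # 2339369235 sectors (~1197 GB)
# end of partition 3 per the sfdisk dump: start + size
echo $(( 1169686528 + 1754529032 ))  # 2924215560, beyond the end of the disk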

- The installed system is a Proxmox virtualization environment, with some OpenVZ and KVM VMs.
- The layout had a primary partition (maybe 100MB or 10GB, for /boot or /; I don't remember exactly because it's an old install) and a VG with 2 or 3 LVs (it should be the default partitions and sizes of a Proxmox 2.x install, but I can't find that documented, nor the install ISO to check).
- If both the partition table and the file system are corrupted, is there any chance of recovering the data, or is it definitely lost?
- I've made a dd copy of the disk to a file (roughly as in the sketch below); is this enough to continue the recovery later while reinstalling the server?
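(The image was made with something along these lines; a hedged sketch, the destination path is hypothetical. GNU ddrescue is the more careful choice on a suspect disk because its map file lets the copy be resumed and retried:)

Code:

# plain dd: keep going on read errors, pad unreadable blocks with zeros
dd if=/dev/sda of=/mnt/backup/sda.img bs=1M conv=noerror,sync
# or GNU ddrescue, logging progress to a map file
ddrescue /dev/sda /mnt/backup/sda.img /mnt/backup/sda.map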
help please
thanks


Re: partition table and filesystem recovery after raid failure

#2 Post by cgrenier » 28 Feb 2016, 09:06

What is the result of "pvdisplay -a", "vgdisplay -a" and "lvdisplay -a"?


Re: partition table and filesystem recovery after raid failure

#3 Post by dhiaeddine » 28 Feb 2016, 15:21

Hi,
Thanks for your response.

Code:

root@debian:/home/partimag# vgdisplay
root@debian:/home/partimag# vgdisplay -a
vgdisplay: invalid option -- 'a'
  Error during parsing of command line.
root@debian:/home/partimag# pvdisplay -a
  Incompatible options selected
  Run `pvdisplay --help' for more information.
root@debian:/home/partimag# pvdisplay
root@debian:/home/partimag# lvdisplay -a
root@debian:/home/partimag# lvdisplay
It seems no PV, VG or LV is detected.
In a deeper scan I found a parseable FAT32 partition containing a UEFI folder; may it be related to a Windows VM?
I am continuing the deeper partition search, and also checking for raw LVM traces as sketched below.
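(A hedged sketch of that check, not output from this system; the VG name 'pve' is the Proxmox default, and LABELONE is the LVM2 PV label magic:)

Code:

pvscan                                # rescan all devices for PV labels
grep -a -b -m 3 'LABELONE' /dev/sda   # raw search for LVM2 PV labels
grep -a -b -m 3 'pve {' /dev/sda      # raw search for VG text metadata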
Can anyone please confirm that a dd copy of the whole disk /dev/sda will be sufficient to search further later? Is there anything else I must keep before reinstallation?
thanks


Re: partition table and filesystem recovery after raid failure

#4 Post by dhiaeddine » 29 Feb 2016, 01:06

Hello,
I scanned the first partition, which should be the 512MB /boot, and I found an LVM archive file, included below (which, strangely, should live in /etc, i.e. on the root partition!).
From it I can conclude/remember that there were 2 PVs, /dev/sda2 and /dev/sda3, and:
swap 4G LV
root 10G LV
data 700G LV
I hope I can recover the data LV, which contains .lzo backup files and simfs OpenVZ containers.
The partition table seems OK, but maybe the LVM is corrupted?
Can anyone say whether it is possible to recover the whole LVM, or maybe some LVs?
I then tried to locate the 'data' LV, which is at extents 3584-->108032 on pv0/sda2 and 0-->118977 on pv1/sda3,
converting to CHS to compare with the TestDisk findings (the conversion arithmetic is sketched after the listing):

Code:

Disk /dev/sda - 1197 GB / 1115 GiB - CHS 145619 255 63
     Partition               Start        End    Size in sectors
>P ext4                    13 247 13    79  28 47    1046528
 P NTFS                   447  58 29  4579 175 45   66387968
 P ext4                  4686 138 59 96065 182 51 1468006400
 P ext2                 97213  91 31 97244  85 57     497664
 P Sys=0C               103078 191 60 138725 117 26  572664360
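For the conversion I used the geometry TestDisk reports (255 heads, 63 sectors/track). A minimal shell sketch, taking the start of the first 'data' segment as the worked example (sda2 start 1048576, pe_start 2048 and extent_size 8192 sectors come from the metadata below):

Code:

lba=$(( 1048576 + 2048 + 3584 * 8192 ))  # sda2 start + pe_start + extent offset
spc=$(( 255 * 63 ))                      # sectors per cylinder = 16065
echo "CHS $(( lba / spc ))/$(( lba % spc / 63 ))/$(( lba % spc % 63 + 1 ))"
# prints: CHS 1892/250/23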
Couldn't attach the txt file, so here it is:

Code:

pve {
id = "eocBoS-eDOI-sYnU-bTTz-4Dm0-tMcA-gNoGcx"
seqno = 66
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192
max_lv = 0
max_pv = 0
metadata_copies = 0

physical_volumes {

pv0 {
id = "wrtArs-xfYt-dLmd-XBJO-T1Gv-KXKw-30Tpww"
device = "/dev/sda2"

status = ["ALLOCATABLE"]
flags = []
dev_size = 1168637952
pe_start = 2048
pe_count = 142655
}

pv1 {
id = "zdhgV0-EHp3-VURW-L1bv-HBZt-ZJTx-GDcQmB"
device = "/dev/sda3"

status = ["ALLOCATABLE"]
flags = []
dev_size = 1754529032
pe_start = 2048
pe_count = 214175
}
}

logical_volumes {

swap {
id = "l5N3aS-e5kl-aUHs-eLjg-xLyy-sldO-NX6FQP"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "proxmox"
creation_time = 1360171072
segment_count = 1

segment1 {
start_extent = 0
extent_count = 1024

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 0
]
}
}

root {
id = "fVwgvG-0Hf8-zXAA-a9uE-IuUR-fefN-qrT7cH"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "proxmox"
creation_time = 1360171072
segment_count = 1

segment1 {
start_extent = 0
extent_count = 2560

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 1024
]
}
}

data {
id = "frulAD-A2gG-ly8m-ycUa-ZttB-xjKa-PLYTEe"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "proxmox"
creation_time = 1360171073
segment_count = 3

segment1 {
start_extent = 0
extent_count = 25600

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 3584
]
}
segment2 {
start_extent = 25600
extent_count = 34623

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 108032
]
}
segment3 {
start_extent = 60223
extent_count = 118977

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv1", 0
]
}
}

vm-100-disk-1 {
id = "UMseq1-YQ5y-wwJ5-qFaE-Lhgq-3z1o-y3wKnT"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
tags = ["pve-vm-100"]
creation_host = "host1"
creation_time = 1360348642
segment_count = 1

segment1 {
start_extent = 0
extent_count = 20480

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 29184
]
}
}

vm-101-disk-1 {
id = "hB1jip-wOrF-qn5S-zvw8-i1t6-nYLP-xdoig1"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
tags = ["pve-vm-101"]
creation_host = "host1"
creation_time = 1360582389
segment_count = 1

segment1 {
start_extent = 0
extent_count = 8192

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 49664
]
}
}

vm-102-disk-1 {
id = "dkLG2K-0q2t-PlpD-YhPl-YaIB-BhSM-efJ4Zb"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
tags = ["pve-vm-102"]
creation_host = "host1"
creation_time = 1361539946
segment_count = 1

segment1 {
start_extent = 0
extent_count = 25600

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 57856
]
}
}

vm-103-disk-1 {
id = "LFnRw1-o2Js-0RpZ-wWOI-2K49-Hzrp-yKngzM"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
tags = ["pve-vm-103"]
creation_host = "host1"
creation_time = 1361894528
segment_count = 1

segment1 {
start_extent = 0
extent_count = 8192

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 83456
]
}
}

vm-105-disk-1 {
id = "MloSWN-oDJ9-jV5G-QBZQ-BW0w-AupE-lEyc9E"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
tags = ["pve-vm-105"]
creation_host = "host1"
creation_time = 1363686128
segment_count = 1

segment1 {
start_extent = 0
extent_count = 8192

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 91648
]
}
}

vm-107-disk-1 {
id = "HJmSpQ-0smR-Y0ps-fDka-louj-HVc2-fzPb1L"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
tags = ["pve-vm-107"]
creation_host = "host1"
creation_time = 1369732219
segment_count = 1

segment1 {
start_extent = 0
extent_count = 8192

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv0", 99840
]
}
}

vzsnap-host1-0 {
id = "HE3IMV-LP9y-JEgh-aavf-Wnbm-Zmbr-A9Grfa"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "host1"
creation_time = 1383963455
segment_count = 1

segment1 {
start_extent = 0
extent_count = 256

type = "striped"
stripe_count = 1        # linear

stripes = [
"pv1", 118977
]
}
}
}
}
# Generated by LVM2 version 2.02.95(2) (2012-03-06): Sat Nov  9 03:17:35 2013

contents = "Text Format Volume Group"
version = 1

description = ""

creation_host = "host1"  # Linux host1 2.6.32-17-pve #1 SMP Wed Nov 28 07:15:55 CET 2012 x86_64
creation_time = 1383963455      # Sat Nov  9 03:17:35 2013
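One route I am considering, if the partitions really are intact (a hedged sketch, untested; pve-backup.txt is a hypothetical name for the file above, the UUIDs are the ones it records, and I'd try this on the dd image first):

Code:

# recreate both PVs with the UUIDs recorded in the metadata backup
pvcreate --uuid wrtArs-xfYt-dLmd-XBJO-T1Gv-KXKw-30Tpww --restorefile pve-backup.txt /dev/sda2
pvcreate --uuid zdhgV0-EHp3-VURW-L1bv-HBZt-ZJTx-GDcQmB --restorefile pve-backup.txt /dev/sda3
# restore the VG metadata, activate, then check read-only before mounting
vgcfgrestore -f pve-backup.txt pve
vgchange -ay pve
fsck.ext4 -n /dev/pve/data   # assuming data was ext3/ext4, as on stock Proxmox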
Thanks for your guidance and suggestions.
