Lost mdadm raid 10 array...

ris8
Posts: 5
Joined: 21 Mar 2014, 23:30

Lost mdadm raid 10 array...

#1 Post by ris8 »

I used to have a 4 x 3 TB mdadm RAID 10 array.
While moving it to a new PC, something went wrong.
Now, back on the old PC, I only see one disk with a partition:

Code:

sudo parted -l
Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name        Flags
 1      1049kB  3001GB  3001GB               Linux RAID  raid



Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags


Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sdd: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags


Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sde: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags
Supposedly the partition info should be identical across the four disks... is there a way to copy it from /dev/sda to the others? Or is there a better course of action?
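
I was wondering whether something along these lines with sgdisk would do it (untested on my part; the backup file name is just a placeholder):

Code:

# save the GPT from the disk that still has it
sudo sgdisk --backup=sda-table.gpt /dev/sda

# write the same layout onto one of the blank disks...
sudo sgdisk --load-backup=sda-table.gpt /dev/sdc
# ...then give the copy fresh disk/partition GUIDs so they don't clash with /dev/sda
sudo sgdisk --randomize-guids /dev/sdc
From what I understand this only copies the partition table, not the mdadm superblocks inside the partitions, so the array itself would still need to be sorted out afterwards.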

For the disk that still has the partition, both mdadm and TestDisk report a bad superblock. What can I do?

Code:

TestDisk 6.14, Data Recovery Utility, July 2013
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org

Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
Current partition structure:
     Partition                  Start        End    Size in sectors

Invalid RAID superblock
 1 P Linux Raid                  2048 5860533134 5860531087 [Linux RAID]
 1 P Linux Raid                  2048 5860533134 5860531087 [Linux RAID]

                P=Primary  D=Deleted
>[Quick Search]  [ Backup ]
                            Try to locate partition
Thanks

ris8
Posts: 5
Joined: 21 Mar 2014, 23:30

Re: Lost mdadm raid 10 array...

#2 Post by ris8 »

I ran a TestDisk Quick Search on one of the other disks; a log extract is below (the whole log is attached as testdisk_sdd.zip, a scan of /dev/sdd).

It does appear to find the 6 TB partition of the md device, which at least matches the expected usable size of the whole array (4 x 3 TB with 2 copies = 6 TB). Isn't that odd, though, since RAID 10 is supposed to be striped? Anyhow, I don't think I can do anything with it until I put the /dev/md device back together, right?


Code:

Disk /dev/sdd - 3000 GB / 2794 GiB - WDC WD30EFRX-68AX9N0
Partition table type: EFI GPT

Analyse Disk /dev/sdd - 3000 GB / 2794 GiB - CHS 364801 255 63
hdr_size=92
hdr_lba_self=1
hdr_lba_alt=5860533167 (expected 5860533167)
hdr_lba_start=34
hdr_lba_end=5860533134
hdr_lba_table=2
hdr_entries=128
hdr_entsz=128
hdr_size=92
hdr_lba_self=5860533167
hdr_lba_alt=1 (expected 1)
hdr_lba_start=34
hdr_lba_end=5860533134
hdr_lba_table=5860533135
hdr_entries=128
hdr_entsz=128
Trying alternate GPT
Current partition structure:
Trying alternate GPT

search_part()
Disk /dev/sdd - 3000 GB / 2794 GiB - CHS 364801 255 63

[skipping most of the log...]
Disk /dev/sdd - 3000 GB / 2794 GiB - CHS 364801 255 63
Check the harddisk size: HD jumpers settings, BIOS detection...
The harddisk (3000 GB / 2794 GiB) seems too small! (< 8251 GB / 7684 GiB)
The following partitions can't be recovered:
     MS Data 1464927128 13185991575 11721064448
     ext4 blocksize=4096 Large file Sparse superblock Recover, 6001 GB / 5589 GiB
     MS Data 1464927440 13185991887 11721064448
     ext4 blocksize=4096 Large file Sparse superblock Recover, 6001 GB / 5589 GiB
     MS Data 1464927520 13185991967 11721064448
[...more...]

ris8
Posts: 5
Joined: 21 Mar 2014, 23:30

Re: Lost mdadm raid 10 array...

#3 Post by ris8 »

Sorry to keep replying to my own post... I am not trying to self-bump.

I recovered from backup my notes on how I had assembled the RAID (note to self: do not keep the recovery notes on the array itself).

The disks were all partitioned in the same way (my notes say the partitions were not strictly necessary to create the RAID, but I am fairly sure I created them anyway):

Code:

sudo gdisk /dev/xxx
> p                      # print the current table and make sure it looks OK
> n                      # create a new partition
> (press Enter a few times to accept the defaults)
> fd00                   # set the partition type code to Linux RAID
> w                      # write the table and quit
According to my notes, the RAID and the filesystem were created as follows:

Code:

sudo mdadm --create /dev/md1 --metadata=1.2 --verbose --level=10 --raid-devices=4 /dev/sdc1 /dev/sdg1 /dev/sde1 /dev/sdh1 --layout=f2 --chunk=1024 --name=share1
mkfs.ext4 -v -b 4096 -E stride=256,stripe-width=512 /dev/md1
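
As a sanity check of those mkfs numbers (my own arithmetic, not something from the notes): with a 1024 KiB chunk, a 4 KiB ext4 block, and four disks holding two copies, the values work out as below.

Code:

# stride = chunk size / ext4 block size
echo $(( 1024 / 4 ))        # 256
# stripe-width = stride * data-bearing disks (4 disks / 2 copies = 2)
echo $(( 1024 / 4 * 2 ))    # 512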
This is the array's entry in mdadm.conf:

Code:

ARRAY /dev/md1 level=raid10 num-devices=4 metadata=1.2 name=SERVER02:share1 UUID=89c4b606:01327383:c367db0f:8e93606f
I have seen posts suggesting using the UUID to re-create the array (I tried; it does not work). Other posts suggest simply re-creating the array (I did not try that). But I think my first problem is to get back at least one more partition before I can try to re-assemble the RAID.

Questions:
1. Can I back up the partition table from /dev/sda and restore it onto the other three disks?
1a. Is that a good idea?
2. Should I then re-run the same mdadm --create command I originally used to create the array? (Before doing anything destructive, I would first run the read-only checks sketched below.)
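
Before touching anything, I plan to look at what md superblocks are still readable; as far as I know --examine only reads, so this should be safe (a sketch, not yet run):

Code:

# read-only look at any md superblock on the surviving partition
sudo mdadm --examine /dev/sda1

# ...and on the raw disks, just in case the array was built on whole disks rather than partitions
sudo mdadm --examine /dev/sdc /dev/sdd /dev/sde

# what the kernel currently sees
cat /proc/mdstat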

cgrenier
Site Admin
Posts: 5432
Joined: 18 Feb 2012, 15:08
Location: Le Perreux Sur Marne, France

Re: Lost mdadm raid 10 array...

#4 Post by cgrenier »

Can you try pvdisplay -v and pvck -v?

ris8
Posts: 5
Joined: 21 Mar 2014, 23:30

Re: Lost mdadm raid 10 array...

#5 Post by ris8 »

See below

Code:

sudo pvdisplay -v /dev/sd[acde]
    Using physical volume(s) on command line
  Failed to read physical volume "/dev/sda"
  Failed to read physical volume "/dev/sdc"
  Failed to read physical volume "/dev/sdd"
  Failed to read physical volume "/dev/sde"
sudo pvck -v /dev/sd[acde]
    Scanning /dev/sda
  Device /dev/sda not found (or ignored by filtering).
    Scanning /dev/sdc
  Device /dev/sdc not found (or ignored by filtering).
    Scanning /dev/sdd
  Device /dev/sdd not found (or ignored by filtering).
    Scanning /dev/sde
  Device /dev/sde not found (or ignored by filtering).
I don't have LVM on this machine or the array
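
Since LVM is not in play, maybe the more useful read-only check is what signatures are actually on the disks; to my understanding both of these commands only read and print (an assumption on my part, please correct me if not):

Code:

# list any filesystem/RAID signatures found on each disk
# (wipefs without options only prints, it does not erase anything)
sudo wipefs /dev/sda /dev/sdc /dev/sdd /dev/sde

# blkid's view of the same devices and the surviving partition
sudo blkid /dev/sda /dev/sda1 /dev/sdc /dev/sdd /dev/sde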

ris8
Posts: 5
Joined: 21 Mar 2014, 23:30

Re: Lost mdadm raid 10 array...

#6 Post by ris8 »

I have seen a couple of pages mentioning the use of copy-on-write (COW) overlay files to experiment with the disks: essentially you set up a device that is backed by the original disk plus a file that records any changes, so nothing is ever written to the real disk.
Examples are given here:
http://stackoverflow.com/questions/7582 ... ock-device
https://raid.wiki.kernel.org/index.php/ ... tware_RAID

Do they work? Would TestDisk be able to copy the partition from /dev/sda onto one of these overlay devices? I am surprised they are not more widely mentioned in the forums.
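
If I am reading those pages correctly, the idea would be roughly the sketch below: a sparse file attached to a loop device, plus a device-mapper snapshot, so that all writes land in the overlay file instead of on /dev/sda (untested; the overlay file name and its size are just placeholders):

Code:

# sparse file that will receive any writes made during the experiments
truncate -s 4G sda-overlay.img
loop=$(sudo losetup -f --show sda-overlay.img)

# size of the original disk in 512-byte sectors
size=$(sudo blockdev --getsz /dev/sda)

# non-persistent snapshot: reads come from /dev/sda, writes go to the overlay
echo "0 $size snapshot /dev/sda $loop N 8" | sudo dmsetup create sda-cow

# experiment on /dev/mapper/sda-cow instead of the real disk
(The N makes the snapshot non-persistent, i.e. not meant to survive a reboot; the 8 is the chunk size in sectors.)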
