linux raid-10 far=2 "filesystem damaged" false alarm?

brad
Posts: 2
Joined: 16 May 2012, 07:59

linux raid-10 far=2 "filesystem damaged" false alarm?

#1 Post by brad »

I hit a (seemingly) disk-related hiccup, and while everything seemed fine after a reboot,
I tried out all the checking tools I could. TestDisk gave a report that concerned me,
but I now suspect it might be a false positive due to my configuration.
I'd like to check that interpretation with the experts.

So I have two disks set up in RAID-10, which usually makes no sense, but under
Linux with the "far" or "near" layout parameters it can be useful.
/dev/sda2 and /dev/sdb2 are raided with far=2
testdisk on /dev/sda2 reports what I expect
testdisk on /dev/sdb2 doesn't find a filesystem (see below)
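
For context, an array like this is created with something along these lines (reconstructed from memory, not my exact command):

Code: Select all

# illustrative only: two-device RAID-10 with the "far 2" layout
mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=2 /dev/sda2 /dev/sdb2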

I now think that the inverted block organization on /dev/sdb2 caused
by the far=2 parameter may just be making it look less like a filesystem
and so is causing a false alarm. Could that be the case?
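
From what I've read, far=2 on two disks lays the chunks out roughly like this (my own sketch, so possibly off in the details):

Code: Select all

              /dev/sda2        /dev/sdb2
first half:   0  2  4  6 ...   1  3  5  7 ...   <- striped, RAID-0 style
second half:  1  3  5  7 ...   0  2  4  6 ...   <- far copies, swapped

If that's right, sda2 begins with chunk 0 (which holds the filesystem superblock), while sdb2 begins with chunk 1, so there is nothing recognizable at its start.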

The SMART tools and mdadm checks seem to be saying everything is fine,
but gparted (not lvm aware?) gives a similar "no partition" warning.
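
For reference, these are roughly the checks I ran (typed from memory, so the exact flags may differ):

Code: Select all

smartctl -H /dev/sda && smartctl -H /dev/sdb   # SMART overall health
cat /proc/mdstat                               # quick array status
mdadm --detail /dev/md1                        # per-array state and layout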

(If it's not already clear, I should say I don't have a clue about this stuff
and am staggering around in the dark.)

Code: Select all

# mdadm --examine /dev/sda2 /dev/sdb2 | egrep '^/dev|raid|Layout|State'
/dev/sda2:
     Raid Level : raid10
          State : clean
         Layout : near=1, far=2
      Number   Major   Minor   RaidDevice State
/dev/sdb2:
     Raid Level : raid10
          State : clean
         Layout : near=1, far=2
      Number   Major   Minor   RaidDevice State



==> dev-sda <==
Disk /dev/sda - 251 GB / 233 GiB - CHS 30522 255 63
     Partition               Start        End    Size in sectors
D Linux                    0   1  1    13 254 63     224847
D Linux RAID               0   1  1    13 254 63     224847 [md0]
D Linux LVM               14   0  1 30521 254 63  490111020
D Linux RAID              14  14 15 30521 254 63  490110124 [md1]

==> dev-sdb <==
Disk /dev/sdb - 251 GB / 233 GiB - CHS 30522 255 63
     Partition               Start        End    Size in sectors
D Linux                    0   1  1    13 254 63     224847
D Linux RAID               0   1  1    13 254 63     224847 [md0]
* Linux                 6246   0  1 11381 254 63   82509840

No file found, filesystem seems damaged.

Post "Deep Analyse":
Disk /dev/sdb - 251 GB / 233 GiB - CHS 30522 255 63
     Partition               Start        End    Size in sectors
D Linux                    0   1  1    13 254 63     224847
D Linux RAID               0   1  1    13 254 63     224847 [md0]
D Linux                 6246   0  1 11381 254 63   82509840
D Linux                 8182   0  1  9160 254 63   15727635

Is there any other info that would be useful?

Thanks

remy
Posts: 457
Joined: 25 Mar 2012, 10:21
Location: Strasbourg, France.

Re: linux raid-10 far=2 "filesystem damaged" false alarm?

#2 Post by remy »

EDIT: everything I wrote before this edit was wrong. I just re-read what "far=2" actually does: http://en.wikipedia.org/wiki/Non-standard_RAID_levels. Your result is normal, because the first half of each drive is laid out like RAID 0. If you scan the second half (the far copy) with testdisk, you should find the same results, inverted.
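
If you want to verify, here is an untested sketch: expose the second (far) half of the partition through a read-only loop device and scan that. Note the offset calculation ignores any md data offset; with an old 0.90 superblock (stored at the end of the device) it should be close enough, otherwise adjust.

Code: Select all

# untested: map the far half of sdb2 read-only, then scan it
half=$(( $(blockdev --getsize64 /dev/sdb2) / 2 ))
loopdev=$(losetup --find --show --read-only --offset "$half" /dev/sdb2)
testdisk "$loopdev"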

brad
Posts: 2
Joined: 16 May 2012, 07:59

Re: linux raid-10 far=2 "filesystem damaged" false alarm?

#3 Post by brad »

remy wrote:EDIT: everything I wrote before this edit was wrong.
Well, lucky that I didn't get to read it :)
remy wrote:I just re-read what "far=2" actually does: http://en.wikipedia.org/wiki/Non-standard_RAID_levels. Your result is normal, because the first half of each drive is laid out like RAID 0. If you scan the second half (the far copy) with testdisk, you should find the same results, inverted.
Thanks for the advice. My second disk is probably still healthy in that case.

Do you think testdisk could be made aware of all the non-standard layouts?
Or does it not make sense to use testdisk below the RAID abstraction?

remy
Posts: 457
Joined: 25 Mar 2012, 10:21
Location: Strasbourg, France.

Re: linux raid-10 far=2 "filesystem damaged" false alarm?

#4 Post by remy »

It makes no sense to run testdisk on each disk individually, unless the mdadm superblocks are lost and you want to recover the members with testdisk.

You should first reassemble your RAID under Linux, and then launch testdisk on the md device.
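
Something like this, adapting the device names (and assuming the superblocks are still intact):

Code: Select all

# reassemble the array from its superblocks, then scan the md device
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2
testdisk /dev/md1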
