
linux raid-10 far=2 "filesystem damaged" false alarm?

Posted: 16 May 2012, 08:29
by brad
I hit a (seemingly) disk related hiccup, and while everything seemed fine after a reboot,
I tried out all the checking tools I could. TestDisk gave a report that concerned me,
but I now suspect it might be a false positive due to my configuration.
I'd like to check that interpretation with the experts.

So I have two disks set up in RAID-10, which usually makes no sense with only
two drives, but under Linux with the "far" or "near" layout parameters it can be useful.
/dev/sda2 and /dev/sdb2 are raided with far=2
testdisk on /dev/sda2 reports what I expect
testdisk on /dev/sdb2 doesn't find a filesystem (see below)

I now think that the inverted block organization on /dev/sdb2 caused
by the far=2 parameter may just be making it look less like a filesystem
and so is causing a false alarm. Could that be the case?

The smart tools and mdadm checks seem to be saying everything is fine,
but gparted (not lvm aware?) gives a similar "no partition" warning.

(If it's not already clear, I should say I don't have a clue about this stuff
and am staggering around in the dark.)

Code:

# mdadm --examine /dev/sda2 /dev/sdb2 | egrep '^/dev|raid|Layout|State'
/dev/sda2:
     Raid Level : raid10
          State : clean
         Layout : near=1, far=2
      Number   Major   Minor   RaidDevice State
/dev/sdb2:
     Raid Level : raid10
          State : clean
         Layout : near=1, far=2
      Number   Major   Minor   RaidDevice State



==> dev-sda <==
Disk /dev/sda - 251 GB / 233 GiB - CHS 30522 255 63
     Partition               Start        End    Size in sectors
D Linux                    0   1  1    13 254 63     224847
D Linux RAID               0   1  1    13 254 63     224847 [md0]
D Linux LVM               14   0  1 30521 254 63  490111020
D Linux RAID              14  14 15 30521 254 63  490110124 [md1]

==> dev-sdb <==
Disk /dev/sdb - 251 GB / 233 GiB - CHS 30522 255 63
     Partition               Start        End    Size in sectors
D Linux                    0   1  1    13 254 63     224847
D Linux RAID               0   1  1    13 254 63     224847 [md0]
* Linux                 6246   0  1 11381 254 63   82509840

No file found, filesystem seems damaged.

Post "Deep Analyse":
Disk /dev/sdb - 251 GB / 233 GiB - CHS 30522 255 63
     Partition               Start        End    Size in sectors
D Linux                    0   1  1    13 254 63     224847
D Linux RAID               0   1  1    13 254 63     224847 [md0]
D Linux                 6246   0  1 11381 254 63   82509840
D Linux                 8182   0  1  9160 254 63   15727635

Is there any other info that would be useful?

Thanks

Re: linux raid-10 far=2 "filesystem damaged" false alarm?

Posted: 17 May 2012, 10:51
by remy
EDIT: everything I wrote before the edit was wrong. I just re-read what "far=2" does: http://en.wikipedia.org/wiki/Non-standard_RAID_levels. Your result is normal, since the first half of each drive is laid out like RAID 0. If you scan the second half (the f2 copy) with testdisk, you should find the same results, inverted.
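To make that concrete, here is a simplified sketch of where a far=2 layout on two disks places each copy of a chunk. This is my own toy model for illustration, not mdadm's actual code, and the chunk counts are made up:

```python
# Simplified model of md RAID-10 "far=2" chunk placement on 2 disks.
# Copy 0 is striped across the first half of both disks (like RAID 0);
# copy 1 lives in the second half, with the disks rotated by one.

def far2_copies(chunk, n_disks=2, chunks_per_disk=8):
    """Return the two (disk, offset) locations holding `chunk`."""
    half = chunks_per_disk // 2
    stripe = chunk // n_disks                         # row within each half
    first = (chunk % n_disks, stripe)                 # copy 0: plain stripe
    second = ((chunk + 1) % n_disks, half + stripe)   # copy 1: rotated copy
    return first, second

for c in range(4):
    print(c, far2_copies(c))
```

In this model chunk 0 sits at the very start of disk 0 (sda) but only halfway down disk 1 (sdb), whose start holds chunk 1 instead. So testdisk scanning sdb2 on its own never finds the filesystem's first blocks where it expects them, which is exactly the false alarm you saw.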

Re: linux raid-10 far=2 "filesystem damaged" false alarm?

Posted: 18 May 2012, 01:40
by brad
remy wrote: EDIT: everything I wrote before the edit was wrong
Well lucky that I didn't get to read it :)
remy wrote: I just re-read what "far=2" does: http://en.wikipedia.org/wiki/Non-standard_RAID_levels. Your result is normal, since the first half of each drive is laid out like RAID 0. If you scan the second half (the f2 copy) with testdisk, you should find the same results, inverted.
Thanks for the advice. My second disk is probably still healthy in that case.

Do you think testdisk could be made aware of all the non-standard layouts?
Or does it not make sense to use testdisk below the RAID abstraction?

Re: linux raid-10 far=2 "filesystem damaged" false alarm?

Posted: 18 May 2012, 14:55
by remy
There's no point running testdisk on each individual disk, unless the mdadm superblocks are lost and you want to recover the data with testdisk.

You should first reassemble your RAID under Linux, and then run testdisk on the md device.
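For example, with the device names from this thread (a sketch only; double-check the device names and array state before running anything):

```shell
# Reassemble the array from its member partitions; md reads its own
# superblocks to put the far=2 pieces back in the right order.
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2

# Confirm the array came up and is clean before scanning it.
cat /proc/mdstat

# Scan the assembled array, not the raw partitions underneath it.
testdisk /dev/md1
```

If LVM sits on top of md1 (as your partition listing suggests), activate it first with `vgchange -ay` and point testdisk at the logical volume instead.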