lacie raid 1

Using PhotoRec to recover lost data
peter1
Posts: 13
Joined: 21 May 2012, 15:50

lacie raid 1

#1 Postby peter1 » 21 May 2012, 16:09

I bought a LaCie RAID 1 and the box failed. Easy, I thought: buy another LaCie RAID 1 and plug in the old drives. Sorry, LaCie is not backwards compatible ("this is a feature, not a bug").

LaCie support recommends installing one drive in an Ubuntu 12.04 Linux computer and using TestDisk to recover the data.
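Once the drive is attached to the Linux machine, a first step is to find out which device node it received. A minimal sketch, assuming a stock Ubuntu 12.04 shell (the /dev/sdb name used later in this thread is an example, not a given):

Code: Select all

# list every disk and its partitions in sectors; the LaCie member
# should appear as an extra disk such as /dev/sdb
sudo fdisk -lu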

When I started, I could see the Linux array in the home folder but could not read it. After a week of playing with TestDisk, reading almost every post in this forum, and searching the internet and YouTube, I can no longer see the Linux array in the home folder, but TestDisk still sees it fine.

I still have my second RAID 1 drive intact, so I'm safe to play with the first one.

What do I have to do to make the RAID 1 disk readable?

Thank you for your time,
Peter


remy
Posts: 457
Joined: 25 Mar 2012, 10:21
Location: Strasbourg, France.

Re: lacie raid 1

#2 Postby remy » 21 May 2012, 22:36

Please give feedback:

Code: Select all

sudo sfdisk -luS


and if your disk is /dev/sdb, for each partition (/dev/sdb1, /dev/sdb2...) :

Code: Select all

sudo mdadm --examine /dev/sdbX


and also :

Code: Select all

cat /proc/mdstat


And last but not least, feedback (copy/paste) on what you can see with TestDisk (Analyse / Quick Search / Deeper Search) at the different stages.
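On a stock Ubuntu 12.04 install, mdadm may not be present (as it turns out later in this thread); it can typically be installed first:

Code: Select all

sudo apt-get install mdadm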
I make a habit of seeing the help I give through to the end. If you think I have abandoned you, and I have not said so explicitly, it means your message is buried among others. Nudge me (in moderation) by email.

peter1
Posts: 13
Joined: 21 May 2012, 15:50

Re: lacie raid 1

#3 Postby peter1 » 22 May 2012, 22:09

OK, here is the log from TestDisk; the 3 commands do not seem to work. I did the search and the deep search. I hope this helps. Thank you for your time.
-------------------------------------------------------------------------------------------------------------
Mon May 21 19:05:53 2012
Command line: TestDisk

TestDisk 6.13, Data Recovery Utility, November 2011
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
OS: Linux, kernel 3.2.0-23-generic-pae (#36-Ubuntu SMP Tue Apr 10 22:19:09 UTC 2012) i686
Compiler: GCC 4.6
Compilation date: 2012-02-05T07:16:54
ext2fs lib: 1.42, ntfs lib: 10:0:0, reiserfs lib: none, ewf lib: none
/dev/sda: LBA, HPA, LBA48, DCO support
/dev/sda: size 156312576 sectors
/dev/sda: user_max 156312576 sectors
/dev/sda: native_max 156312576 sectors
/dev/sda: dco 156312576 sectors
/dev/sdb: LBA, HPA, LBA48, DCO support
/dev/sdb: size 976773168 sectors
/dev/sdb: user_max 976773168 sectors
/dev/sdb: native_max 976773168 sectors
/dev/sdb: dco 976773168 sectors
Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
Hard disk list
Disk /dev/sda - 80 GB / 74 GiB - CHS 9730 255 63, sector size=512 - WDC WD800JD-08MSA1, S/N:WD-WMAM9WJ42031, FW:10.01E01
Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63, sector size=512 - ST3500620AS, S/N:9QM5Y5KR, FW:LC11
Disk /dev/sdc - 1000 GB / 931 GiB - CHS 121601 255 63, sector size=512 - ST310005 20AS

Partition table type (auto): Intel
Disk /dev/sdb - 500 GB / 465 GiB - ST3500620AS
Partition table type: Intel

Analyse Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63
Geometry from i386 MBR: head=255 sector=63

Raid magic value at 40/1/1
Raid apparent size: 1349248 sectors
Raid chunk size: 0 bytes
check_MD 0.90
md2 md 0.90.0 Raid 1: devices 0(8,9)* 1(8,25)
get_geometry_from_list_part_aux head=255 nbr=2
get_geometry_from_list_part_aux head=8 nbr=1
get_geometry_from_list_part_aux head=255 nbr=2
Current partition structure:
1 * Linux RAID 40 1 1 123 254 63 1349397 [md2]
Ask the user for vista mode
Allow partial last cylinder : No
search_vista_part: 0

search_part()
Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63

recover_EXT2: s_block_group_nr=0/0, s_mnt_count=1/34, s_blocks_per_group=8192, s_inodes_per_group=1984
recover_EXT2: s_blocksize=1024
recover_EXT2: s_blocks_count 7936
recover_EXT2: part_size 15872
Linux 17 1 1 17 252 59 15872
EXT3 Sparse superblock, 8126 KB / 7936 KiB

Raid magic value at 17/252/60
Raid apparent size: 15872 sectors
Raid chunk size: 0 bytes
md0 md 0.90.0 Raid 1: devices 0(8,7)* 1(8,23)
Linux RAID 17 1 1 17 254 61 16000 [md0]
md 0.90.0 Raid 1: devices 0(8,7)* 1(8,23), 8192 KB / 8000 KiB

recover_EXT2: s_block_group_nr=0/21, s_mnt_count=3/34, s_blocks_per_group=8192, s_inodes_per_group=2008
recover_EXT2: s_blocksize=1024
recover_EXT2: s_blocks_count 176576
recover_EXT2: part_size 353152
Linux 18 1 1 39 251 37 353152
EXT3 Sparse superblock, 180 MB / 172 MiB

Raid magic value at 39/251/38
Raid apparent size: 353152 sectors
Raid chunk size: 0 bytes
md1 md 0.90.0 Raid 1: devices 0(8,8)* 1(8,24)
Linux RAID 18 1 1 39 253 39 353280 [md1]
md 0.90.0 Raid 1: devices 0(8,8)* 1(8,24), 180 MB / 172 MiB

recover_EXT2: s_block_group_nr=0/5, s_mnt_count=10/35, s_blocks_per_group=32768, s_inodes_per_group=14080
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 168656
recover_EXT2: part_size 1349248
Linux 40 1 1 123 252 40 1349248
EXT3 Sparse superblock, 690 MB / 658 MiB

Raid magic value at 123/252/41
Raid apparent size: 1349248 sectors
Raid chunk size: 0 bytes
md2 md 0.90.0 Raid 1: devices 0(8,9)* 1(8,25)
Linux RAID 40 1 1 123 254 42 1349376 [md2]
md 0.90.0 Raid 1: devices 0(8,9)* 1(8,25), 690 MB / 658 MiB

XFS Marker at 125/0/1

recover_xfs
Linux 125 0 1 121476 250 51 1949519616
XFS 6.2+ - bitmap version, 998 GB / 929 GiB
This partition ends after the disk limits. (start=2008125, size=1949519616, end=1951527740, disk end=976773168)

Raid magic value at 60800/252/58
Raid apparent size: 0 sectors
Raid chunk size: 65536 bytes
md4 md 0.90.0 Raid 4294967295: devices 0(8,2)* 1(8,18)
Linux RAID 60800 252 58 60800 254 59 128 [md4]
md 0.90.0 Raid 4294967295: devices 0(8,2)* 1(8,18), 65 KB / 64 KiB
Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63
Check the harddisk size: HD jumpers settings, BIOS detection...
The harddisk (500 GB / 465 GiB) seems too small! (< 999 GB / 930 GiB)
The following partition can't be recovered:
Linux 125 0 1 121476 250 51 1949519616
XFS 6.2+ - bitmap version, 998 GB / 929 GiB
get_geometry_from_list_part_aux head=255 nbr=8
get_geometry_from_list_part_aux head=8 nbr=4
get_geometry_from_list_part_aux head=16 nbr=2
get_geometry_from_list_part_aux head=255 nbr=8

Results
Linux 17 1 1 17 254 63 16002
EXT3 Sparse superblock, 8193 KB / 8001 KiB
Linux RAID 17 1 1 17 254 63 16002 [md0]
md 0.90.0 Raid 1: devices 0(8,7)* 1(8,23), 8193 KB / 8001 KiB
Linux 18 1 1 39 254 63 353367
EXT3 Sparse superblock, 180 MB / 172 MiB
Linux RAID 18 1 1 39 254 63 353367 [md1]
md 0.90.0 Raid 1: devices 0(8,8)* 1(8,24), 180 MB / 172 MiB
Linux 40 1 1 123 254 63 1349397
EXT3 Sparse superblock, 690 MB / 658 MiB
Linux RAID 40 1 1 123 254 63 1349397 [md2]
md 0.90.0 Raid 1: devices 0(8,9)* 1(8,25), 690 MB / 658 MiB
L Linux RAID 60800 252 58 60800 254 63 132 [md4]
md 0.90.0 Raid 4294967295: devices 0(8,2)* 1(8,18), 67 KB / 66 KiB

interface_write()
1 E extended LBA 60800 252 1 60800 254 63 189
5 L Linux RAID 60800 252 58 60800 254 63 132 [md4]

search_part()
Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63

recover_EXT2: s_block_group_nr=0/0, s_mnt_count=1/34, s_blocks_per_group=8192, s_inodes_per_group=1984
recover_EXT2: s_blocksize=1024
recover_EXT2: s_blocks_count 7936
recover_EXT2: part_size 15872
Linux 17 1 1 17 252 59 15872
EXT3 Sparse superblock, 8126 KB / 7936 KiB

Raid magic value at 17/252/60
Raid apparent size: 15872 sectors
Raid chunk size: 0 bytes
md0 md 0.90.0 Raid 1: devices 0(8,7)* 1(8,23)
Linux RAID 17 1 1 17 254 61 16000 [md0]
md 0.90.0 Raid 1: devices 0(8,7)* 1(8,23), 8192 KB / 8000 KiB

recover_EXT2: s_block_group_nr=0/21, s_mnt_count=3/34, s_blocks_per_group=8192, s_inodes_per_group=2008
recover_EXT2: s_blocksize=1024
recover_EXT2: s_blocks_count 176576
recover_EXT2: part_size 353152
Linux 18 1 1 39 251 37 353152
EXT3 Sparse superblock, 180 MB / 172 MiB

block_group_nr 3

recover_EXT2: "e2fsck -b 24577 -B 1024 device" may be needed
recover_EXT2: s_block_group_nr=3/21, s_mnt_count=0/34, s_blocks_per_group=8192, s_inodes_per_group=2008
recover_EXT2: s_blocksize=1024
recover_EXT2: s_blocks_count 176576
recover_EXT2: part_size 353152
Linux 18 1 1 39 251 37 353152
EXT3 Sparse superblock Backup superblock, 180 MB / 172 MiB

Raid magic value at 39/251/38
Raid apparent size: 353152 sectors
Raid chunk size: 0 bytes
md1 md 0.90.0 Raid 1: devices 0(8,8)* 1(8,24)
Linux RAID 18 1 1 39 253 39 353280 [md1]
md 0.90.0 Raid 1: devices 0(8,8)* 1(8,24), 180 MB / 172 MiB

recover_EXT2: s_block_group_nr=0/5, s_mnt_count=10/35, s_blocks_per_group=32768, s_inodes_per_group=14080
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 168656
recover_EXT2: part_size 1349248
Linux 40 1 1 123 252 40 1349248
EXT3 Sparse superblock, 690 MB / 658 MiB

block_group_nr 3

recover_EXT2: "e2fsck -b 98304 -B 4096 device" may be needed
recover_EXT2: s_block_group_nr=3/5, s_mnt_count=9/35, s_blocks_per_group=32768, s_inodes_per_group=14080
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 168656
recover_EXT2: part_size 1349248
Linux 40 1 1 123 252 40 1349248
EXT3 Sparse superblock Backup superblock, 690 MB / 658 MiB

Raid magic value at 123/252/41
Raid apparent size: 1349248 sectors
Raid chunk size: 0 bytes
md2 md 0.90.0 Raid 1: devices 0(8,9)* 1(8,25)
Linux RAID 40 1 1 123 254 42 1349376 [md2]
md 0.90.0 Raid 1: devices 0(8,9)* 1(8,25), 690 MB / 658 MiB

XFS Marker at 125/0/1

recover_xfs
Linux 125 0 1 121476 250 51 1949519616
XFS 6.2+ - bitmap version, 998 GB / 929 GiB
This partition ends after the disk limits. (start=2008125, size=1949519616, end=1951527740, disk end=976773168)

Raid magic value at 60800/252/58
Raid apparent size: 0 sectors
Raid chunk size: 65536 bytes
md4 md 0.90.0 Raid 4294967295: devices 0(8,2)* 1(8,18)
Linux RAID 60800 252 58 60800 254 59 128 [md4]
md 0.90.0 Raid 4294967295: devices 0(8,2)* 1(8,18), 65 KB / 64 KiB
Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63
Check the harddisk size: HD jumpers settings, BIOS detection...
The harddisk (500 GB / 465 GiB) seems too small! (< 999 GB / 930 GiB)
The following partition can't be recovered:
Linux 125 0 1 121476 250 51 1949519616
XFS 6.2+ - bitmap version, 998 GB / 929 GiB
get_geometry_from_list_part_aux head=255 nbr=8
get_geometry_from_list_part_aux head=8 nbr=4
get_geometry_from_list_part_aux head=16 nbr=2
get_geometry_from_list_part_aux head=255 nbr=8

Results
Linux 17 1 1 17 254 63 16002
EXT3 Sparse superblock, 8193 KB / 8001 KiB
Linux RAID 17 1 1 17 254 63 16002 [md0]
md 0.90.0 Raid 1: devices 0(8,7)* 1(8,23), 8193 KB / 8001 KiB
Linux 18 1 1 39 254 63 353367
EXT3 Sparse superblock, 180 MB / 172 MiB
Linux RAID 18 1 1 39 254 63 353367 [md1]
md 0.90.0 Raid 1: devices 0(8,8)* 1(8,24), 180 MB / 172 MiB
Linux 40 1 1 123 254 63 1349397
EXT3 Sparse superblock, 690 MB / 658 MiB
Linux RAID 40 1 1 123 254 63 1349397 [md2]
md 0.90.0 Raid 1: devices 0(8,9)* 1(8,25), 690 MB / 658 MiB
L Linux RAID 60800 252 58 60800 254 63 132 [md4]
md 0.90.0 Raid 4294967295: devices 0(8,2)* 1(8,18), 67 KB / 66 KiB

interface_write()
1 E extended LBA 60800 252 1 60800 254 63 189
5 L Linux RAID 60800 252 58 60800 254 63 132 [md4]
simulate write!

write_mbr_i386: starting...
write_all_log_i386: starting...
write_all_log_i386: CHS: 60800/252/1,lba=976767876

TestDisk exited normally.

remy
Posts: 457
Joined: 25 Mar 2012, 10:21
Location: Strasbourg, France.

Re: lacie raid 1

#4 Postby remy » 23 May 2012, 00:26

There's an XFS partition detected whose size is not far from 1 TB. Your RAID may be a RAID 0 or a linear array. Are you sure about RAID 1?
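A quick way to check, as a sketch: the md superblock records the level, so for the large data partition (the /dev/sdbX name is a placeholder):

Code: Select all

# print only the RAID level recorded in the md superblock
sudo mdadm --examine /dev/sdbX | grep 'Raid Level'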
I make a habit of seeing the help I give through to the end. If you think I have abandoned you, and I have not said so explicitly, it means your message is buried among others. Nudge me (in moderation) by email.

peter1
Posts: 13
Joined: 21 May 2012, 15:50

Re: lacie raid 1

#5 Postby peter1 » 23 May 2012, 05:41

The LaCie ships as RAID 0; I switched it to RAID 1. There is a rotary switch with an arrow on it. I changed it to RAID 1, and I just checked the setting: it was pointing at RAID 1 / SAFE 100 mode.

Reading the manual, if you switch modes it will delete the previous settings, so that may be why the deleted RAID 0 shows up.

Here is the manual; page 54 has the setting:

http://www.lacie.com/download/manual/2bigNetwork_en.pdf

Thank you for your time.

remy
Posts: 457
Joined: 25 Mar 2012, 10:21
Location: Strasbourg, France.

Re: lacie raid 1

#6 Postby remy » 24 May 2012, 00:04

Please be more precise: what about the second LaCie box, was it in RAID 1 also?
Where are you running the commands I gave?
I make a habit of seeing the help I give through to the end. If you think I have abandoned you, and I have not said so explicitly, it means your message is buried among others. Nudge me (in moderation) by email.

peter1
Posts: 13
Joined: 21 May 2012, 15:50

Re: lacie raid 1

#7 Postby peter1 » 24 May 2012, 03:59

I only have one box; I thought I could buy another box, but they are not backwards compatible.
I tried the 3 commands and I came up with error messages.

peter1
Posts: 13
Joined: 21 May 2012, 15:50

Re: lacie raid 1

#8 Postby peter1 » 24 May 2012, 19:21

What do I have to do to make the RAID 1 disk readable?

Thank you for your time,
Peter

peter1
Posts: 13
Joined: 21 May 2012, 15:50

Re: lacie raid 1

#9 Postby peter1 » 24 May 2012, 20:58

Sorry, when I tried mdadm it did not work, so with some searching I realized I had to install mdadm. Here is what I got:
---------------------------------------------------------------------------------------------------------------------------------------------------------
peter@lonovo:~$ sudo mdadm --examine /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 0.90.00
UUID : 85797f43:653296b6:a4ca4023:b3f44802
Creation Time : Fri Dec 31 16:00:19 1999
Raid Level : raid1
Used Dev Size : 674624 (658.92 MiB 690.81 MB)
Array Size : 674624 (658.92 MiB 690.81 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2

Update Time : Tue Jun 24 01:38:14 2003
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : 655251b7 - correct
Events : 10904458


Number Major Minor RaidDevice State
this 0 8 9 0 active sync

0 0 8 9 0 active sync
1 1 8 25 1 active sync

remy
Posts: 457
Joined: 25 Mar 2012, 10:21
Location: Strasbourg, France.

Re: lacie raid 1

#10 Postby remy » 24 May 2012, 23:41

OK, and what about the two other commands (sfdisk -luS and cat /proc/mdstat)?

After that, disconnect your disk, plug in the other one, and run the same 3 commands.
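If both members examine cleanly as raid1, one possible next step is to assemble the array degraded and read-only, so nothing is written to the member, then mount it. A minimal sketch, assuming the data partition appears as /dev/sdb1 and that /dev/md2 matches the Preferred Minor shown above (both are assumptions):

Code: Select all

sudo mkdir -p /mnt/lacie
# --run starts the array even though only one of the two mirrors is present;
# --readonly keeps mdadm from writing to the superblock
sudo mdadm --assemble --run --readonly /dev/md2 /dev/sdb1
sudo mount -o ro /dev/md2 /mnt/lacie

Since md 0.90 metadata sits at the end of the partition, a RAID 1 member can often also be mounted directly, read-only, without assembling (mount -o ro /dev/sdb1 /mnt/lacie).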
I make a habit of seeing the help I give through to the end. If you think I have abandoned you, and I have not said so explicitly, it means your message is buried among others. Nudge me (in moderation) by email.

