What is the lost partition's geometry? I have valid dumpe2fs info.

Damar
Posts: 2
Joined: 29 Jul 2012, 12:19

What is the lost partition's geometry? I have valid dumpe2fs info.

#1 Post by Damar »

I had a RAID0 array that was originally built with exactly:

Code:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
The metadata version was probably 0.90, as that was the default on Ubuntu 10.04. sdb1 and sdc1 both had partition type 'fd'. This RAID was entirely separate from the OS. It was formatted as an ext3 or ext4 filesystem.
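I no longer have the exact mkfs command, but judging from the dumpe2fs output at the end of this post it was probably something like the following (a reconstruction on my part; the stride and stripe-width values come from the "RAID stride: 16" and "RAID stripe width: 32" lines):

Code:

# Reconstruction only, inferred from dumpe2fs: 4 KiB blocks, stride 16,
# stripe width 32 (i.e. a 2-device RAID0 with 64 KiB chunks).
mkfs.ext3 -b 4096 -E stride=16,stripe-width=32 /dev/md0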

Now, after a clean install of Ubuntu 12.04 in place of the previous version (on a separate HDD), the RAID was broken by running

Code:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
Instead of --build/--assemble... The metadata version was left at the default, which is 1.2, not 0.90. As I understand it, the 1.2 format uses more space than 0.90 (and sits near the start of each member device rather than at the end), so some filesystem data was overwritten and destroyed.
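To see what is actually on the members now, my plan is to dump the RAID metadata from each device (mdadm --examine reads the superblock directly from a member and prints its version and, for 1.2 metadata, the data offset):

Code:

# Show which md superblock version each member now carries,
# and (for 1.2 metadata) where the data area begins.
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdc1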

Some sensible info was obtained via dumpe2fs /dev/sdc1; for sdb1 it only reports "Bad magic number in super-block". The output is below, at the end of this post.

I want to try to recover the lost ext3/ext4 partition. A Quick Search in TestDisk shows one "Linux" partition with the proper volume label. The Deeper Search shows tons of "Linux" partitions. But in both cases TestDisk's file-listing feature does not show proper file names.

I want to re-create the partition table structures and then try file recovery tools, maybe also fsck. If this is a good strategy, what geometry must be specified?

How do I properly assemble this broken RAID0 before any analysis and modification by TestDisk or other tools? Is that possible with mdadm?
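What I have in mind is something like the following sketch; the device order is my assumption (sdb1 first, as in the original command), and the 64 KiB chunk is inferred from "RAID stride: 16" times the 4 KiB block size in the dumpe2fs output below:

Code:

# Sketch only: re-create the array with the ORIGINAL metadata version so
# the data offset matches the old layout. Device order and chunk size are
# assumptions (stride 16 x 4 KiB blocks = 64 KiB chunk).
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
      --metadata=0.90 --chunk=64 /dev/sdb1 /dev/sdc1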

Maybe someone can suggest another way?

Code:

# dumpe2fs /dev/sdb1
dumpe2fs 1.42 (29-Nov-2011)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb1
Couldn't find valid filesystem superblock.

# dumpe2fs /dev/sdc1
dumpe2fs 1.42 (29-Nov-2011)
Filesystem volume name:   opt_raid0
Last mounted on:          <not available>
Filesystem UUID:          066bee2f-3bd2-43d6-9042-e479e33215c4
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              34799616
Block count:              139191120
Reserved block count:     6959556
Free blocks:              23379667
Free inodes:              34090374
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      990
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              16
RAID stripe width:        32
Filesystem created:       Tue Mar  8 15:46:12 2011
Last mount time:          Fri Jul 27 17:08:11 2012
Last write time:          Fri Jul 27 17:24:05 2012
Mount count:              7
Maximum mount count:      21
Last checked:             Wed Jul 18 09:44:43 2012
Check interval:           15552000 (6 months)
Next check after:         Mon Jan 14 09:44:43 2013
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:             256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      338e0fdb-53d3-48fe-a8d9-d477d443ce09
Journal backup:           inode blocks
dumpe2fs: A block group is missing an inode table while reading journal inode
Fiona
Posts: 2835
Joined: 18 Feb 2012, 17:19
Location: Ludwigsburg/Stuttgart - Germany

Re: What is the lost partition's geometry? I have valid dumpe2fs info.

#2 Post by Fiona »

To access any data, you must have a working array.
If you have one, and your partition table is empty, you should run a Quick and a Deeper Search to find an appropriate partition and write it to your partition table.
TestDisk can also help you to determine your superblock.
This would be the procedure:
http://www.cgsecurity.org/wiki/Advanced ... SuperBlock
It's used in conjunction with fsck:
http://www.cgsecurity.org/wiki/Advanced ... SuperBlock
It should be valid from ext2 to ext4.
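As a sketch of what the wiki describes (the superblock number must be one that TestDisk actually finds; 32768 is only the typical first backup for a filesystem with 4096-byte blocks):

Code:

# Example values only: check the filesystem using a backup superblock.
# -b selects the backup superblock, -B the block size.
fsck.ext3 -b 32768 -B 4096 /dev/md0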

Fiona
Damar
Posts: 2
Joined: 29 Jul 2012, 12:19

Re: What is the lost partition's geometry? I have valid dumpe2fs info.

#3 Post by Damar »

After reading the listing above, I suppose the Cylinder/Head/Sector geometry is 139,191,120/2/4 with a sector size of 512 bytes, i.e. start at 0/1/1, stop at (139,191,120 - 1)/2/4. I may use this as a pivot in my "research". Am I right?
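Checking the arithmetic in shell (each 4096-byte filesystem block is 8 sectors of 512 bytes, and 2 heads x 4 sectors also make 8 sectors per cylinder, which is why the cylinder count equals the block count):

Code:

# 139,191,120 blocks of 4096 bytes, expressed in 512-byte sectors:
echo $(( 139191120 * 4096 / 512 ))   # 1113528960 sectors
# At 2 heads x 4 sectors = 8 sectors per cylinder:
echo $(( 1113528960 / (2 * 4) ))     # 139191120 cylinders

The 1,113,528,960 figure matches the "Size in sectors" that TestDisk reports for the found partitions below.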

The default geometry shown by TestDisk is 139,191,040/2/4; the difference is *040 versus *120 cylinders, presumably because the re-created array with 1.2 metadata exposes slightly less space than the original 0.90 array did.

With both geometries, TestDisk found neither a superblock backup nor an acceptable partition. After the Quick and Deeper Search and selection of any found item, it says: "No partition found or selected for recovery".

The output, with the found items, is:

Code:

Disk /dev/md/0_0 - 570 GB / 530 GiB - CHS 139191120 2 4
The harddisk (570 GB / 530 GiB) seems too small! (< 570 GB / 530 GiB)
Check the harddisk size: HD jumpers settings, BIOS detection...
The following partitions can't be recovered:
     Partition               Start        End    Size in sectors
>  ext3                   127   1  3 139191247   1  2 1113528960 [opt_raid0]
   ext3                   128   0  1 139191247   1  4 1113528960 [opt_raid0]

So it told me "The following partitions can't be recovered". Really? Is there nothing else to try? How can this impossibility to recover be explained? Is it because the found partitions end at cylinder 139,191,247, beyond the 139,191,120 cylinders of the device (which would also fit the "seems too small" warning)? Are the superblock backups overwritten, or is it something else?
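One more thing I may try, to locate the backup superblocks myself: mke2fs with -n only simulates filesystem creation and prints where the superblocks would live, without writing anything (the -b 4096 matches the old block size; this assumes the dry run lays out block groups the same way the original mkfs did):

Code:

# Dry run only: -n makes mke2fs print the superblock locations
# WITHOUT writing anything to the device.
mke2fs -n -b 4096 /dev/md0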

The "opt_raid0" keyword is the valid volume label of the lost partition.