RAID0 Recovery after BIOS Update

gianlucabruno
Posts: 3
Joined: 23 Nov 2020, 07:24

RAID0 Recovery after BIOS Update

#1 Post by gianlucabruno »

I have a RAID0 NVMe array with 4 drives. I updated my BIOS the other day and now the array isn't working. I'm not sure how to fix it. I'm using Arch Linux and mdadm.

The array is missing nvme2n1:

Code:

cat /proc/mdstat
Personalities : [raid0] 
md124 : inactive nvme5n1[3](S) nvme4n1[2](S) nvme3n1[1](S)
      5860147464 blocks super 1.2
      
*****************************************
      
sudo mdadm --misc --detail /dev/md124
/dev/md124:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 3
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 3

              Name : workstation:md124  (local to host workstation)
              UUID : feaa3268:1f4e9c90:17941f50:4b90f854
            Events : 0

    Number   Major   Minor   RaidDevice

       -     259        2        -        /dev/nvme5n1
       -     259        8        -        /dev/nvme4n1
       -     259        1        -        /dev/nvme3n1
      
*******************************
sudo mdadm --misc --examine /dev/nvme2n1 
/dev/nvme2n1:
   MBR Magic : aa55
Partition[0] :   3907029167 sectors at            1 (type ee)



sudo mdadm --misc --examine /dev/nvme3n1 
/dev/nvme3n1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : feaa3268:1f4e9c90:17941f50:4b90f854
           Name : workstation:md124  (local to host workstation)
  Creation Time : Sun Nov 15 11:25:43 2020
     Raid Level : raid0
   Raid Devices : 4

 Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 3ace9aaa:a056a37f:75b8503d:39189a04

    Update Time : Sun Nov 15 11:25:43 2020
  Bad Block Log : 512 entries available at offset 8 sectors
       Checksum : a7fe7073 - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
   
   
   
 sudo mdadm --misc --examine /dev/nvme4n1 
/dev/nvme4n1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : feaa3268:1f4e9c90:17941f50:4b90f854
           Name : workstation:md124  (local to host workstation)
  Creation Time : Sun Nov 15 11:25:43 2020
     Raid Level : raid0
   Raid Devices : 4

 Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 32b66b95:9cd8b61f:0ec87950:904ca687

    Update Time : Sun Nov 15 11:25:43 2020
  Bad Block Log : 512 entries available at offset 8 sectors
       Checksum : c9181e58 - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
   
   
   
   
   sudo mdadm --misc --examine /dev/nvme5n1 
/dev/nvme5n1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : feaa3268:1f4e9c90:17941f50:4b90f854
           Name : workstation:md124  (local to host workstation)
  Creation Time : Sun Nov 15 11:25:43 2020
     Raid Level : raid0
   Raid Devices : 4

 Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 1862c371:93a7971d:bf953ac4:3366830e

    Update Time : Sun Nov 15 11:25:43 2020
  Bad Block Log : 512 entries available at offset 8 sectors
       Checksum : 9dee808a - correct
         Events : 0

     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)      
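
The --examine output for nvme2n1 stands out: instead of an mdadm superblock it only shows an MBR signature with a single type-ee (GPT protective) partition, which suggests that something, quite possibly the BIOS update, wrote a fresh GPT to that disk. With 1.2 metadata the superblock sits 8 sectors (4 KiB) into the device, right inside the area a newly written GPT occupies, so a new partition table alone is enough to make the superblock disappear. A minimal, read-only sketch for confirming what is now on the disk (it assumes sgdisk from the gptfdisk package is installed; device names as in the output above):

Code:

sudo sgdisk -p /dev/nvme2n1                    # print the GPT that now sits on the disk
sudo blkid -p /dev/nvme2n1                     # low-level probe of on-disk signatures
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/nvme2n1    # list any partitions the new GPT defines
# Only if a partition such as nvme2n1p1 actually exists: check it for md metadata
sudo mdadm --examine /dev/nvme2n1p1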
      

gianlucabruno
Posts: 3
Joined: 23 Nov 2020, 07:24

Re: RAID0 Recovery after BIOS Update

#2 Post by gianlucabruno »

Is there any other information that would be beneficial to gather? I'm also having a difficult time locating my files when I look at them via PhotoRec; they are all labeled like f1342343.txt, etc. Is there a way to view the files with the names I originally saved them under?
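
On the PhotoRec question: PhotoRec recovers files by scanning for known file signatures rather than by reading the filesystem, so the original names and directory tree are simply not available to it; the fNNNNNNN.ext names are generated. The original names only come back if the filesystem on the assembled array itself can be repaired. Also worth noting: if PhotoRec was run against a single member rather than the assembled /dev/md124, anything larger than the 512 KiB chunk size will mostly come out truncated, because RAID0 stripes it across all four disks. In the meantime, the recovered files can at least be grouped for easier browsing. A small sketch, assuming PhotoRec's default recup_dir.N output directories in the current directory:

Code:

# Sort PhotoRec output into one folder per file extension
mkdir -p sorted
for f in recup_dir.*/*; do
    name="$(basename "$f")"
    ext="${name##*.}"                         # file extension, e.g. txt, jpg
    [ "$ext" = "$name" ] && ext=unknown       # files recovered without an extension
    mkdir -p "sorted/$ext"
    mv -n "$f" "sorted/$ext/"                 # -n: never overwrite an existing file
done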

gianlucabruno
Posts: 3
Joined: 23 Nov 2020, 07:24

Re: RAID0 Recovery after BIOS Update

#3 Post by gianlucabruno »

Code:

sudo fsck -y /dev/nvme2n1
fsck from util-linux 2.36.1
e2fsck 1.45.6 (20-Mar-2020)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/nvme2n1

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

/dev/nvme2n1 contains `DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 3907029167 sectors' data

sudo fsck -y /dev/nvme3n1
fsck from util-linux 2.36.1

sudo fsck -y /dev/nvme4n1
fsck from util-linux 2.36.1

sudo fsck -y /dev/nvme5n1
fsck from util-linux 2.36.1
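
These fsck results are expected and do not point to additional damage: the filesystem lives on the assembled /dev/md124, striped across all four members and starting 264192 sectors into each of them (the Data Offset above), so fsck run on an individual member has no filesystem it can check, and on nvme2n1 it only sees the new MBR/GPT signature. A filesystem check only becomes meaningful once the array is active again. A sketch of that check, read-only first (it assumes a single ext4 filesystem directly on /dev/md124, which is only a guess based on the e2fsck attempt above):

Code:

# Only once /dev/md124 is active again with all four members:
sudo mdadm --detail /dev/md124        # confirm the array is active and has 4 devices
sudo e2fsck -n /dev/md124             # -n: read-only check, changes nothing
sudo mount -o ro /dev/md124 /mnt      # then mount read-only and verify the data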

cgrenier
Site Admin
Posts: 5432
Joined: 18 Feb 2012, 15:08
Location: Le Perreux Sur Marne, France

Re: RAID0 Recovery after BIOS Update

#4 Post by cgrenier »

/proc/mdstat shows that your raid0 is inactive:

Code:

md124 : inactive nvme5n1[3](S) nvme4n1[2](S) nvme3n1[1](S)
Only 3 disks are listed instead of 4: nvme2n1 is missing from the raid.

Can you try the following commands?

Code:

sudo mdadm --stop /dev/md124
sudo mdadm --assemble --scan -v
cat /proc/mdstat
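
If --assemble --scan still comes up one device short, that would be consistent with the md superblock on nvme2n1 having been overwritten by the new partition table. In that situation the usual last resort, which is not something suggested in this thread and is risky, is to re-create the array metadata in place with exactly the original parameters and --assume-clean. This rewrites the superblocks on the three good members too, so it should only be attempted after imaging all four disks (or on the images themselves), and only if normal assembly cannot be made to work. The parameters below come from the --examine output above; treating nvme2n1 as device 0 is an assumption based on the remaining Device Role lines, and the /backup path is just a placeholder:

Code:

# Image every member first (GNU ddrescue); /backup is a placeholder path
sudo ddrescue /dev/nvme2n1 /backup/nvme2n1.img /backup/nvme2n1.map

# Last resort only: re-create the metadata with the original layout
# chunk 512 KiB, metadata 1.2, data offset 264192 sectors = 132096 KiB
sudo mdadm --stop /dev/md124
sudo mdadm --create /dev/md124 --assume-clean --level=0 --raid-devices=4 \
     --metadata=1.2 --chunk=512 --data-offset=132096K \
     /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1

# Verify before trusting anything: read-only mount, no writes
sudo mount -o ro /dev/md124 /mnt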
