Shucked external drive for NAS

Using TestDisk to repair the filesystem
primehalo
Posts: 2
Joined: 06 Feb 2020, 22:30

Shucked external drive for NAS

#1 Post by primehalo »

Hello all. I have a hard drive that supposedly has a problem. It's an external 4TB Seagate hard drive that I had connected to my NAS via USB, and it worked fine. I had another 4TB Western Digital drive that I had just put inside the NAS. While copying files to that drive, the NAS reported it had some errors, I think relating to sectors. So I took it out, shucked the external Seagate, and put it in the NAS where the Western Digital used to be (I checked). The NAS also reported that it had some sector error, which was strange because it looked like the same error it reported for the other hard drive, and also because I had run a quick SMART test right before shucking to make sure there were no issues with it.

I connected the Seagate internally to my secondary PC and booted into Ubuntu. I used the instructions here to get it mounted:
https://www.synology.com/en-us/knowledg ... using_a_PC

I used the instructions here to run fsck:
https://smallbusiness.chron.com/run-chk ... 54071.html

These are the results:
Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/sdb

The superblock could not be read or does not describe a valid ext2/ext3/ext4 filesystem. If the device is valid and it really contains an ext2/ext3/ext4 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
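For context on those `-b` hints: ext2/3/4 keeps backup copies of the superblock at the start of selected block groups, and the block numbers depend on the filesystem's block size. A quick sketch of where the common backups land (assuming the usual sparse_super layout; the device name at the end is hypothetical):

```shell
# Backup superblocks sit at the start of group 1 and of groups that are
# powers of 3, 5 and 7 (with sparse_super enabled).
# blocks_per_group = 8 * block_size, so a 4 KiB-block fs has groups of 32768.
for g in 1 3 5; do
  echo $(( g * 32768 ))    # prints 32768, 98304, 163840
done
# 1 KiB-block filesystems start group 1 at block 8193 instead, which is
# where the "-b 8193" suggestion in the fsck output comes from.
# You would then run, on the *partition* rather than the whole disk:
#   e2fsck -b 32768 -B 4096 /dev/sdb1
```

This only helps if the partition table itself is intact, since e2fsck needs to be pointed at the right partition to find those blocks.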

Next I installed and ran TestDisk. I haven't been able to figure out where the log file is for that scan, but I did take some screenshots. Some of the messages I got were:
Bad sector count.
The harddisk (4000 GB / 3726 GiB) seems too small! (< 5078 GB / 4730 GiB)
Check the harddisk size: HD jumpers settings, BIOS detection...
The following partitions can't be recovered
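One thing worth noting about that "seems too small" message: the two figures on each line are the same size in decimal GB and binary GiB, and the mismatch is between what the disk reports (4000 GB) and what the partition table claims (5078 GB). A quick unit sanity check:

```shell
# Vendors quote decimal gigabytes; TestDisk also shows binary GiB.
bytes=4000000000000                      # a "4 TB" drive
echo "$(( bytes / 1000000000 )) GB"      # prints "4000 GB"
echo "$(( bytes / 1073741824 )) GiB"     # prints "3725 GiB" (TestDisk rounds to 3726)
# The "< 5078 GB" figure comes from the partition table, which here
# describes more space than the disk actually has -- a sign the table
# TestDisk is reading is not the right one for this drive.
```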
Here are the screenshots:


Not sure what to do next, I tried Seagate's bootable USB SeaTools. The program boots and starts loading, but then the screen just goes black while loading extensions and does nothing else. This is the last thing I see before it goes black (screenshot attached).

I don't care about recovering what was on the drive. It's all secondary backup stuff that isn't important. I just want to get the drive functional so I can put it back in the NAS.

So I guess my main question is: is there actually anything physically wrong with the Seagate? If not, how do I get it back into a usable condition?

By the way, after doing stuff with fsck and TestDisk on the Western Digital drive, it appeared to be fine. I put it in the NAS and ran both quick and extended SMART scans, and it passed with no problems at all.

cgrenier
Site Admin
Posts: 5432
Joined: 18 Feb 2012, 15:08
Location: Le Perreux Sur Marne, France

Re: Shucked external drive for NAS

#2 Post by cgrenier »

When using TestDisk, you need to select EFI GPT partition table, not PC Intel.
When using fsck, do not use /dev/sdb, it's the whole disk, you need to select a single partition or logical volume. Use fsck only if the partition table is OK.
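To make the whole-disk vs partition distinction concrete: /dev/sdb is the raw disk, whose first blocks hold the partition table rather than an ext superblock, which is exactly why fsck on it reports a bad magic number. A minimal sketch of the distinction (device names hypothetical, and the trailing-digit check is naive — NVMe whole-disk names also end in a digit):

```shell
# fsck wants a partition node such as /dev/sdb1, never the whole-disk
# node /dev/sdb. A crude check based on the trailing digit convention:
target=/dev/sdb
case "$target" in
  *[0-9]) echo "looks like a partition; fsck may apply" ;;
  *)      echo "whole disk; list its partitions first (e.g. with lsblk)" ;;
esac    # prints the whole-disk message for /dev/sdb
```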

primehalo
Posts: 2
Joined: 06 Feb 2020, 22:30

Re: Shucked external drive for NAS

#3 Post by primehalo »

Thanks for the tip. After writing that initial post I was able to get SeaTools working (had to pull out the video card and use the onboard video). I used it to do a short scan, which showed no problems, but a long scan did. I then did a short repair, which said it worked. I then did a long repair, which got to seventy-something percent and then said it failed. I ran the long repair again just for the heck of it and it completed 100%. I ran both the short and long scans again and it said everything was fine.

I then booted into Ubuntu and ran TestDisk, this time selecting EFI GPT partition table. The first error I got was:
Bad GPT partition, invalid signature
Trying alternate GPT
Bad GPT partition, invalid signature
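(For reference, those "invalid signature" lines mean TestDisk did not find the 8-byte GPT magic — the ASCII string "EFI PART" — at the start of LBA 1, i.e. byte offset 512 on a 512-byte-sector disk; the "alternate GPT" it then tries is the backup header in the last sector. A self-contained sketch of that check on a scratch file, not a real disk:)

```shell
# A GPT header begins with the signature "EFI PART" at LBA 1
# (byte offset 512 with 512-byte sectors). Demonstrate on a temp image:
img=$(mktemp)
printf 'EFI PART' | dd of="$img" bs=1 seek=512 conv=notrunc 2>/dev/null
sig=$(dd if="$img" bs=1 skip=512 count=8 2>/dev/null)
if [ "$sig" = "EFI PART" ]; then
  echo "GPT signature present"
else
  echo "Bad GPT partition, invalid signature"   # what TestDisk reports
fi
rm -f "$img"
```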
I moved on to the Quick Search and the result said:
The harddisk (4000 GB / 3726 GiB) seems too small! (< 8001 GB / 7452 GiB)
Check the harddisk size: HD jumpers settings, BIOS detection...
The following partitions can't be recovered
It's similar to the message I got the first time, except it looks like a lot more of the disk is available now. The next screen shows a long list of partitions. Also, over in the Ubuntu Disks window the 4TB Seagate drive is now showing "Disk is OK, 8 bad sectors" whereas before it was showing "Disk is OK, 328 bad sectors". I take that as a good sign. Here are the screenshots for all that:

Any suggestions on where I should go from here?
