
brucefreeman

I’ll try to make this simple. If this is the wrong forum for this question please forgive and advise.

I have a Seagate BlackArmor NAS 220 that originally had two 2 TB Seagate Barracuda LP drives configured for RAID 1.

Suddenly a few weeks back it would not show up on the network. I removed both drives and put them into an external USB drive case and connected them one at a time to my Ubuntu PC. Neither of them mounted so I assumed they were both bad at that point.

Since I did not yet know whether the NAS enclosure was defective, I found a couple of used 1 TB Western Digital drives on eBay and ordered those. They came in, I erased both, and I installed them in the enclosure. The blue LED was blinking, and according to the guide for the NAS that meant it was replicating. Within a reasonable time the NAS showed up on the network and I was able to use it, even though the storage was now reduced.

Next I decided to give the two original 2 TB drives another look. Using the Ubuntu Disks utility this time, I was able to determine that one drive was in fact defective. But the other showed up and had what appeared to be the correct set of partitions. The drive still would not mount, but I think that might be expected given the unusual set of Linux partitions on the disk.
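For reference, here is a minimal sketch of how that surviving drive could be inspected from the Ubuntu command line. The device name /dev/sdb and the data partition number are assumptions, not anything confirmed for the BlackArmor 220, so they would need to be checked with lsblk first:

    # Assumed: the USB-attached drive shows up as /dev/sdb (verify with lsblk before doing anything)
    sudo lsblk -o NAME,SIZE,FSTYPE,TYPE /dev/sdb        # list the partitions and their filesystem types
    sudo mdadm --examine /dev/sdb*                      # look for md RAID superblocks on the disk and its partitions
    sudo mdadm --assemble --run /dev/md127 /dev/sdb4    # try to start the data array degraded (partition number assumed)
    sudo mount -o ro /dev/md127 /mnt                    # if it assembles, mount it read-only to check the data

If mdadm reports RAID member metadata on those partitions, the refusal to mount is expected: the partitions belong to an md array rather than to a filesystem the desktop can mount directly.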

Okay, so I figured maybe if I found a matching 2 TB Seagate drive and installed it alongside the surviving original, the RAID 1 would rebuild onto the new drive and I would be back in business with the original storage and the original data from before the failure.

I installed the two drives in the NAS enclosure and the blue LED started blinking, so I figured it was rebuilding the new disk and would show up on the network once it finished.

Didn’t happen. I left it for a few days and tried several things to see if it would show up on the network but it never did.

Next I decided to give up on trying to do the restore and instead repeat the steps that had worked with the 1 TB disks, but with the 2 TB disks. I wiped both of the 2 TB disks and installed them in the enclosure. Again the blue LED started blinking. I left it for at least a couple of days and it again did not show up on the network.

I put the 1 TB disks back in the enclosure and it showed back up on the network.

I again checked the 2 TB disks in the Disks utility on Ubuntu, and while they do not mount, they both have the full set of partitions that match the original disk.

I’m stumped and need some suggestions.
 
Thoughtful read here...


With a Linux-based NAS, the storage is usually handled thru a stack...

First is MDADM for most - some might use LVM, which is one layer up, and then the crazy ones that use ZFS/BTRFS, but that might be out of scope...

I suspect your NAS is using MDADM - so you need to repair things there. With a RAID1, you need to remove the failed disk from the set and add the replacement, but if the RAID was built with LVM, that's a problem...
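For what it's worth, a minimal sketch of that MDADM-level repair on a plain RAID1 (no LVM layer); /dev/md0, /dev/sdb2 and /dev/sdc2 are assumed names for the array, the failed member and the replacement partition, and would have to be adjusted to whatever the NAS actually uses:

    sudo mdadm --detail /dev/md0                              # confirm which member is failed or missing
    sudo mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2   # drop the bad disk out of the set
    sudo mdadm /dev/md0 --add /dev/sdc2                       # add the replacement partition; the resync starts automatically
    cat /proc/mdstat                                          # watch the rebuild progress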

Read the link carefully - as they go thru different approaches on recovery...

And consider this might be a lesson learned - a NAS needs to be backed up as the opportunity for trouble increases with the number of disks in the array
 

And a lesson I learned the hard way - recovery on a RAID1 is pretty simple, one would think - drop in the new drive, and sync the array...

HW-based array - the server had an LSI RAID card, so the RAID is managed there - remote hands in the data center removed the broken drive, inserted the replacement drive, went into the RAID software in the BIOS, and synced it the wrong way...

Yep, instead of mirroring the good drive to the replacement, it went the other way and zeroed out the good drive - leaving a perfectly functional RAID1 array with no data on it...
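The same wrong-way risk exists with software RAID, and with mdadm it can at least be pinned down before anything is added; a sketch, again with assumed device names:

    # Compare the members' metadata first - the one with the newer Update Time and higher
    # event count is the drive that actually holds the data:
    sudo mdadm --examine /dev/sdb2 /dev/sdc2 | grep -E 'Update Time|Events'
    # Assemble the degraded array from the known-good member only, then add the blank
    # replacement, so the resync can only run from the good disk to the new one:
    sudo mdadm --assemble --run /dev/md0 /dev/sdb2
    sudo mdadm /dev/md0 --add /dev/sdc2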
 
Thank you for sharing this resource. I'll dig in and see if I can figure it out.
 
All good now?
 
I never was able to get the NAS operable again with the 2 TB disks. It did re-establish itself with a pair of 1 TB disks. I cannot understand why putting in two erased 2 TB disks does not let the device show up on the network. Maybe I prepped the disks incorrectly. How would you suggest that I erase the disks?
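On the erase question, one thing worth knowing is that old md superblocks can survive a simple re-partition. A minimal sketch of a fuller wipe from Ubuntu, assuming the disk enumerates as /dev/sdb (hypothetical, and destructive, so the device name should be double-checked with lsblk first):

    sudo mdadm --zero-superblock /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4   # clear any old md superblocks (errors on partitions without one are harmless)
    sudo wipefs --all /dev/sdb                                             # remove filesystem, RAID and partition-table signatures
    sudo dd if=/dev/zero of=/dev/sdb bs=1M count=100                       # zero the first 100 MiB for good measure

Whether a cleaner wipe actually gets the 2 TB disks accepted is a separate question - it only rules out leftover metadata as the cause.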
 

Up to you - recall that a NAS is not a backup - it's more of an aggregation, so the risk of data loss is pretty high...

Stats, they suck, but the failure odds across multiple devices are not in your favor...
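As a rough illustration of those odds (assuming, say, a 3% annual failure rate per drive - an assumption, not a measured figure): with two drives, the chance that at least one fails in a given year is about 1 - 0.97 x 0.97 ≈ 5.9%, nearly double the single-drive figure, and it keeps climbing as disks are added - which is why the array itself still needs a separate backup.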
 
