Keep two home NAS devices synced?


gstadter

New Around Here
I purchased two ZyXEL NSA325.
I am putting two 3 TB drives in each and will be using raid 0 to have 6TB capacity in each.
My goal is to have the two devices mirrored.
I'm trying to learn if there is either a built-in function or something I can use to automate keeping the two devices synced, aside from manually remembering to copy each new file to both devices.
Would be sweet if there was something that would either instantly, or on a schedule, mirror new/changed/deleted files on one to the other.
any ideas or thoughts would be appreciated.
 

A lot of NASes have a built-in ability to back up or replicate across a LAN or the internet, using either proprietary services or generic rsync. Unfortunately, your model doesn't support rsync network backup. If that is your main purpose for getting a pair of them, maybe consider other options.

http://www.smallnetbuilder.com/nas/...-bay-power-plus-media-server-reviewed?start=2

I'm also kind of confused as to why you would run two drives in RAID 0 and then mirror that to another NAS. AFAIK the performance benefits of RAID 0 are mostly to be had when the drives & RAID controller are very fast and contained in a workstation.

edit: this paragraph is incorrect; my mind & fingers had a temporary disconnect. I'm leaving it in so the subsequent posts make sense. First, a RAID 0 config of 2 x 3TB HDs ≠ 6TB of storage space per NAS; it would be 3TB. RAID 0 is drive striping. Perhaps you meant JBOD, which does aggregate the total drive capacity, with no performance or redundancy benefit.
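For anyone skimming, the usable-capacity math for two 3 TB drives under the common layouts works out like this (quick shell sanity check):

```shell
#!/bin/sh
# Usable capacity with two 3 TB drives under common layouts.
drives=2; size_tb=3
echo "RAID 0 (striping):  $((drives * size_tb)) TB usable, no redundancy"
echo "JBOD (spanning):    $((drives * size_tb)) TB usable, no redundancy"
echo "RAID 1 (mirroring): ${size_tb} TB usable, survives one drive failure"
```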

Maybe if they are in different physical locations your goal is distributing the backup geographically. But if they are on the same LAN, you are taking the redundancy that the NAS is designed to handle internally over SATA and instead flooding your LAN with double the traffic. And if they are in separate physical locations (and probably even the same location) and you are running RAID 0 or JBOD, you are going to have a much more time-consuming and technically aggravating problem on your hands if/when a drive fails.
 
So striping of a 1 gig file across two disks consumes 1 gig of space on each disk?
I know that RAID 1(mirroring) did that, but I thought basic striping would essentially place 500MB(ish) of the file on each disk.
 

I'm sorry, I blanked on that; no, it splits the data across the disks. I misspoke originally.

Long story short, I'm not certain that you are well served by mirroring two NASes running RAID 0. I think the NAS CPU has a much larger impact on the performance of a NAS than a striped array of spinning disks accessed over a LAN.
And as I mentioned I don't think your Zyxels can easily mirror each other anyway.

You led with the hardware/config you are trying to use but didn't explain how you arrived at that idea or what you are trying to do; maybe some background on your goals & reasoning would help.
 
It's built in. :cool:
ZyXEL calls it "NSA to NSA synchronization/archive backup".
Page 141 of the user manual. Sigh.
"5.18.2 Creating a Synchronization Backup"
So yes, RTFM (note to self).

My thinking was/is that a single copy of the data would be "safer" if I went to the expense of a 4-bay NAS that did RAID 5, for example, but it would be just that... a single copy of the data. It's been years since I dealt with server storage in my job, but getting an error like "parity/configuration lost" gives me nightmares when it comes to having only one copy of my data, even if it's on RAID 5.
That is what led me to look at the option of having two separate copies of the data on less expensive RAID.
One of the devices could get swallowed up by a black hole and I would still have a copy of my data.
"If the data is important, have a backup. ....and have a backup of the backup." LOL
 
I've been doing quite a bit of reading on RAID levels and the current recommendations for NAS and servers. The old way of doing it would be RAID5, or possibly a combination of RAID1 and RAID5 (1 for the OS, 5 for the data). Over on the Spiceworks forums, there's a huge consensus to go with RAID 1 or 10.

RAID5 is now considered to be evil. That was news to me, but the more I read about the reasons RAID5 is frowned upon these days, the more sense it made.

  • Slower because of parity writes
  • Unrecoverable Errors during a resilver
  • Warm (hot) spares that can spur a URE

That's just some of the stuff I remember. RAID1 is super simple and fast.
 
Don't forget that RAID1 gives you no protection from a failed NAS power supply or motherboard, or from human error in deleting files. In my 2-drive NAS, I use two independent volumes (file systems) and periodically back up one to the other and to USB3.

preachy: "RAID is not a backup"
 
... or from human error in deleting files. In my 2-drive NAS, I use two independent volumes (file systems) and periodically back up one to the other and to USB3.

@stevech those are excellent points. The one I quoted above specifically reminded me of another caveat of having two boxes that literally mirror each other (and statistically, probably a much more frequent cause of headache than drive or NAS failure): when you accidentally delete a file, save over a version you didn't mean to replace, or have a file that is corrupted in use but sits just fine on the NAS, you now have two copies of that mistake (one on each NAS).

OP might want to look into a scheduled systematic backup, or backup with versioning, etc (that may not even necessitate another NAS box)

Depending on OP's bandwidth up & down, he could do a NAS backup to Amazon S3 or Glacier, or do backups (regularly scheduled, versioned, or otherwise) to his spare NAS box after placing it at an off-site location.

Based on nothing but my own personal experience with my equipment over the years, I enjoy having the redundancy of a RAID 1 setup in my NAS, because drive failure isn't just common, it's probable (there are very interesting datasets out there from Google on this). It's an everyday kind of failure: I just pop the bad drive out and drop a new one in. It won't slow down my productivity or swamp my network with activity, since the NAS quickly rebuilds the RAID 1 array on the fly internally.

Being totally aware of (and once or twice on the bad end of) the situation behind @stevech's stance, I also don't feel "at ease" about the actual safety and recoverability of my data should the NAS be stolen, destroyed, stop being supported, etc. So I back it up externally once a week; it happens automatically, and I rotate the drives, keeping the last one off site and rotating it back the next week. That's also a versioned backup, in case I realize I need a file that was corrupted or deleted long ago.

At some point in the future I might supplement this by having my NAS back up weekly to a friend's NAS across the country, just as another layer. It's faster and easier than backing up to the cloud.

I'd be slightly leery (my own personal opinion; other stuff works for other people, and to each his/her own) about mirroring or backing up on site via LAN. Every single write becomes two writes, which means a rather large amount of extra LAN traffic. If/when a drive fails, you're not in a "just throw a new drive in" situation: you have to rebuild not just within the NAS box but across the LAN (good luck using your LAN for much else during the day or days that will take). And if your house burns down or gets robbed (god forbid), now you have a lot of downtime to replace all that hardware, and hopefully another backup to fall back on that you can restore from.

And what if in two months you discover the NAS you've selected can't perform well at the tasks you want it to do (i.e. media sharing, backups of networked computers, etc.)? Then you have a heck of a migration to do across all your drives and hardware. I have literally taken my hard drives out of my old Synology 212 and popped them into my 212+; 30-45 minutes of magic later and it just worked. (PS - this is not a recommended best practice; you're supposed to either migrate from one live machine to another [I could have pulled a drive, reformatted it, stuck it in the new machine, then connected the two and transferred] or reformat the drives and restore from an external backup to the new machine.) But with all drives already "purposed," and two identical NAS boxes, it would add some steps for sure.
 
I've seen a number of people have a NAS issue, such as a power supply/electronics failure, that wiped out the file system and stripes, and with RAID, there is no rebuild from that. Once in a while the file system gets corrupted due to power failure or human error in ejecting drives, etc. Then there's the "oops, I deleted that folder in error." And we all know someone who's been a victim of burglary.

I feel that for small NASes, these errors are more likely than a drive failure (after the "infant mortality" period of a new drive).

Maybe 10 years ago consumer-type drives failed a lot more frequently than today, from my perspective, especially if you stick with good drive manufacturers. Those failures led to the emphasis on RAID5 and RAID1 for small systems, but ignored the arguably more common data losses, as above.
 
Don't forget that RAID1 gives you no protection from a failed NAS power supply or motherboard, or from human error in deleting files. In my 2-drive NAS, I use two independent volumes (file systems) and periodically back up one to the other and to USB3.

preachy: "RAID is not a backup"
I didn't claim RAID provided any mechanism for backup, nor would I, and any RAID level could suffer from the situations you describe. I deal with backups of other people's data just about every day of the week. I have to be fanatical about having not only one form of backup, but also a second form of backup.

Am I reading your description correctly in that you have a two-drive NAS that is not in RAID1, nor RAID0, nor JBOD - it is simply two drives operating totally independently in one enclosure, correct? I'm not sure why you'd want to have a NAS and do a manual copy from one drive to the other. The external USB3, however, is logical as a backup target. Why not do RAID1 in the enclosure and have a couple of external USB3 drives as rotating backup targets?
 
Here is the Google hard drive paper I mentioned earlier. The paper was published in 2007 and reflects data collected in 2005 & 2006 from SATA & PATA drives, 80-400 GB in size, installed from 2001 onward.

http://static.googleusercontent.com...ch.google.com/en/us/archive/disk_failures.pdf

While I understand that HD tech has progressed since 2001, it has also been postulated that, with the rather quick increase in platter densities since then, on-drive ECC techniques are very heavily relied on to keep drives chugging.

I use what I consider to be cheap, huge WD Reds, knowing that they are made with consumer NAS use in mind, which at least in my mind means there is a reason they cost substantially less than WD's enterprise drives. Also, if you watch when HDs are released in new larger capacities, it is usually the consumer external ones that get the new huge size first, then consumer internal, then enterprise. And WD (for example) has a 5-year warranty on all Black drives, 3-year on Green & Red drives, and 2-year on Blue drives. So there are clearly some different tolerances for consumer stuff (Reds included) in terms of what they are comfortable manufacturing and how they warranty it.

The "I" in RAID at one time officially signified "inexpensive," not just colloquially. That's sort of how I look at the drives in my NAS: I don't buy high-end enterprise drives for it (which the manufacturers give a higher MTBF rating); I use pretty economical drives with the understanding that running them in RAID 1 gives me a nice redundancy buffer, and failure & replacement of one would have no serious impact on my free time or the availability of network resources.
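To put rough numbers on that redundancy buffer (the 5% annual failure rate below is made up for illustration, and drives in one enclosure don't fail independently - same batch, same heat, same power - so treat the RAID 1 figure as an optimistic bound):

```shell
#!/bin/sh
# Naive annual data-loss odds: a RAID 1 pair only loses data if both
# drives die before a rebuild completes; with independent failures
# that's roughly p^2. The rate p is an assumed example figure.
awk 'BEGIN {
  p = 0.05                                # assumed annual failure rate per drive
  printf "single drive:  %.3f\n", p
  printf "raid1 (naive): %.4f\n", p * p
}'
```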
 
I subscribe to the notion that data loss is more likely to come from causes other than drive failure these days. Thus, I use mechanisms other than RAID1 for protection.
 

Duly noted. RAID 1 fits into my "protection plan" only at the level of convenience, in a multi-pronged strategy.

If anyone has newer, broad, data on 1-4TB consumer SATA HD failure, I'd be interested in seeing it.

(Completely unscientific conjecture, but) I think one reason we may hear less about catastrophic HD failure today is that, intentionally or out of sheer luck, people spread their data out across more places (internal drives, external drives, USB flash drives, cloud) and may be enabling automatic data-syncing services (photos, music, and docs syncing across devices, etc.) purely out of convenience - but maybe they also benefit from that setup when a device fails.

It's hard to get clear real world failure & RMA rates from drive manufacturers, but in 2009 Carbonite cloud backup service reported that 11% of its users were restoring a full backup annually. http://www.carbonite.com/blog/cloud-backup-blog/2009/11/23/Laptop-Failure-Rates
 
Sadly, today's low-cost big drives lead non-geeky people to buy one for $75 and put their family photos and other irreplaceable data on it, unaware that they WILL lose that data if there's no recurring backup to some other media. The average person just doesn't realize the vulnerability.
 
While I understand that HD tech has progressed since 2001, it has also been postulated that, with the rather quick increase in platter densities since then, on-drive ECC techniques are very heavily relied on to keep drives chugging.
That is very true. If you have a SAS drive instead of a SATA one, it will actually tell you about them. Here's an excerpt from smartmontools on a Seagate ST3300657SS (zero grown defects, performing normally):
Code:
    Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:   19624396        0         0  19624396   19624396       5132.054           0
write:         0        0         0         0          0       1497.277           0
verify: 27215311        0         0  27215311   27215311       4177.156           0
These numbers are only going to go up once shingled recording is used.

And WD (for example) has a 5-year warranty on all Black drives, 3-year on Green & Red drives, and 2-year on Blue drives. So there are clearly some different tolerances for consumer stuff (Reds included) in terms of what they are comfortable manufacturing and how they warranty it.
There's also a new series between red and black, with a 5-year warranty, targeted to the 5-8 drive NAS market segment.

It would be interesting to tear down a 2TB drive from each of the 3 higher-end WD families (leaving out blue) and see what is actually different, component-wise. At the higher capacities there are definitely differences (4TB black is a 5-platter drive, while 4TB on the others is 4-platter). I think WD is willing to trade higher heat / power usage for increased reliability on the 4TB black.

I expect that a lot of the premium for the WD black (other than component differences as I describe above) is due entirely to the longer warranty. While customers for black drives are probably less likely to return drives that test as "no problem found", WD has to support them for at least twice as long as the others. [Buyers of black drives are likely to buy them in bulk from distributors, or directly from WD, and put them into service right away, thus having the whole 5 years of service. Buyers of the others are more likely to buy in small quantities and the drives have been sitting in storerooms at retailers while the warranty is running*.]

* Yes, you can register a drive with WD to reset the warranty start date to the date you purchased the drive (if plausible), but very few users do that, and by the time the drive is likely to fail, proof-of-purchase for that specific drive is usually long gone.

Based on my time at a disk drive manufacturer (not WD or Seagate), I can tell you a number of possibly-surprising things:

  • Most of the OEM discount is due to reduced warranty intervals and special warranty terms (for example, the standard warranty might be 2 years and the OEM gets 180 days, and drives can only be returned once per month in batches of at least 250 drives, packaged in a specific manner).
  • Lots of drives come back and test as "no problem found". Those get used as warranty replacements or sold off as refurbs when the drive is close to end-of-life.
  • A large percentage of returned failed drives are due to undetected manufacturing / component defects - bad media, unreliable integrated circuits, etc.
  • Most returned failed drives from end-users are due to mechanical damage, either due to improper handling during packaging / shipping or excessive movement / vibration during use.
  • The real killer on warranty costs is replacing drives later in the warranty period when the original drive is no longer made and no refurbs are left - the replacement is a new model, larger capacity, prime (not refurb) drive. This is one of the main reasons for the OEM discount in exchange for a shorter warranty period.
  • A single warranty return can wipe out most / all of the profit from selling that particular drive.
  • Drive rework falls into "sealed area" and "other". For many models, it wasn't worth opening the sealed area (heads, platters, spindle motor). If it wasn't a logic board problem, the drive got stored until the manufacturer ran out of replacements, then it got opened in the clean room and torn down if that was less costly than sending the customer a prime new model drive. This is apparently a lot less common these days, as those drives get sent back to a separate remanufacturing line that opens them in a clean room and salvages reusable components.
 
Warranty?? HA!
For retail customers, WD/Seagate send you a refurbished unit with god knows how many hours on it.

Maybe red product customers get new, but probably still doesn't reset the warranty clock.
 
WD thinks I'm two separate customers - one that's a home user and sends in the occasional drive and one that's a commercial account that gets special service (free advance replacement, dedicated WD point of contact, etc.).

When I send drives back to WD as a home user, I get prompt (but not advance) replacements, and I don't think I've ever had a replacement drive fail. A cursory examination of the labels shows recent (relative to date of shipment) manufacture dates and no evidence of re-labeling (and they're not going to replace the HDA lid just to put a clean label on), so I don't think I'm getting refurbs but can't prove it.

My recent (last 10 years or so - before that I had a dedicated Seagate rep that came to my site with drives) experience with Seagate has been (as a model ages):
  • First: New replacement drives of same model
  • Second: "Certified remanufactured drive" of same model
  • Third: New replacement drive of newer model, same capacity
  • Fourth: New replacement drive of newer model, higher capacity
Maybe red product customers get new, but probably still doesn't reset the warranty clock.
I don't know of any drive manufacturers that reset to a full warranty period on a replacement. Normally you get the same end date as the drive you returned, or 30 to 180 days, whichever is longer. It pays to check the warranty status of replacement drives when you receive them, though - I've occasionally had replacement drives the manufacturer forgot to update the warranty date on.
 
Do red customers get a refurb for warranty replacement?
I've never had to return a Red, so I don't know. I've had some Black RE4's come back as new old stock (the -01 pass instead of the -02), even though they had current date codes. I've never had a WD come back with an obvious refurb label (like Seagate does with their green-border "Certified remanufactured drive" label).

WD certainly reserves the right to supply refurbs. From their Warranty Policy document:

WD said:
Recertified Products

WD recertified products may consist of customer return units and may be repaired. All products are tested and determined to meet WD's stringent quality standards before they are sold as recertified. Please note that some recertified items may have marks, scratches, or other slight signs of wear.

All recertified products carry manufacturer's limited warranty of 6 months.
 
Warranty?? HA!
... but probably still doesn't reset the warranty clock.

You are incorrect on this. I just went through it, and when I registered the replacement in my account on WD's site, the warranty ran from that day (they removed the returned drive from the list). It only saved me a month or so, but it was from the date entered, not the old drive's date.
 
