Commercial or DIY NAS

Hati

Occasional Visitor
Hi,

While the title is quite general, I registered on this forum for a very specific situation. Some time ago I replaced my trusty old NAS with an old server that I got at a price you can't beat. The server had been a VMware host in its previous life and was filled with disks, so I installed Windows on it. It has 12 3.5" slots, two of which held broken disks; since I couldn't find replacement disks of the same size, I put two smaller ones in and made a mirror set of those. That left 10 disks for a RAID-6 set that was twice as big as the old NAS.

So I had a big pool that became the file share and a small pool that became the home of the virtual machines. Since this was an old VMware server it had enough memory and cores to run a couple of VMs, but the 'old' part leads us to the reason for starting this thread. Because the processors are old, the virtual machines are starting to get a little sluggish. You can compensate a little by adding extra cores, but that doesn't work miracles, so I'm thinking of renewing the hardware. In the end the decision will depend on whether I can source a new server and what kind it will be, but I need a little info beforehand so I don't make a bad decision.

The only problematic part of this process is that I would like to get a "final" storage solution now, one that I never need to change, just upgrade. The tricky part is that I'm planning on ripping my Blu-ray collection to the NAS. Without that, this would be a very short thread ("buy a two-bay NAS of your liking"), but since the rips will eventually take tens of TBs of storage, the expandability of the system is crucial. To keep the initial cost down I would like to start by purchasing a pair of 18TB drives and putting them in RAID 1; when that fills up, buy two more drives and convert the system to RAID 6; and after that, buy a new disk and expand the RAID 6 whenever space gets low.

I think that an 8-bay QNAP could do this, but the problem is that it costs about 1000€ (+ disks), which is too big an initial investment, especially if I change my mind before the disks fill up and never start the endless process of ripping the Blu-rays. If I understood correctly, I could also achieve this by buying a four-bay Synology with an eSATA connector and, when the 4-disk RAID 6 fills up, buying a five-disk enclosure and expanding the RAID into it. The problem is that the four-bay Synology is too close in price to the QNAP. (If someone can point me to a shop in the EU where a four-bay Synology with an eSATA connector can be had for under 500€, I would be interested, if it's an otherwise viable solution.)

That is why I'm thinking of building my own NAS: a case that can hold 8 disks and a disk controller with 8 ports. The rest of the hardware depends heavily on whether I can source a used server to run the VMs (if I can't, I may end up building one box for virtualization and one for the NAS, or one box that houses both). But a little googling about TrueNAS led me to think that changing the RAID level and expanding it may not be possible the way it is on commercial systems. I don't intend to back up the ripped Blu-rays; the discs themselves shall act as the backup, so the risk of data loss during expansion should be kept low. So here is the actual question of this long rant: is it possible to build a NAS that can be expanded from a two-disk mirror to a four-disk parity system, and from there add disks one at a time when space gets low? If it can be done on both commercial and DIY NAS, which has the more reliable expansion process?

One thing I'm wondering, whether I get an old server or build a PC to run the VMs: if there is a method to expand the number of disks in a RAID, could I use it with QNAP's external enclosure, the TL-D800S, connected to the server? It's a little more than what I initially intended to spend on the NAS part of the project, but I could probably stretch to it, especially if the server part turns out cheap. The enclosure is just JBOD, but otherwise it would be a better solution than a DIY box.

And I saved the most important question for last: is this a doomed plan? Does the big disk size guarantee that something will go wrong during the expansion phase and data will disappear?
 
Just keep what you have, and if you want to buy something for the VM side of things you can focus on that with higher-powered options.

The other option is to just gut the mobo from the existing chassis and put a new one in. Uplifting the mobo/CPU/RAM isn't a huge task if you want to keep everything in one case.

As for disks, there are renewed 18TB Exos drives on Amazon for under $200/ea.

Going with an off-the-shelf NAS is a step backwards, and there's no need for the higher price. If you want to refresh the whole look, a Meshify 2 case holds at least 13 drives. PCPartPicker.com can help put something together.

You could do a complete refresh with ADL for maybe $600 and get better performance than any off-the-shelf NAS.
 
Just to give an idea of the error rate for large-capacity disks:
[attached table: manufacturer-rated unrecoverable read error rates for large drives]

So for any disk with an error rate of one error per 10^15 bits or worse (one per 10^14, for example), it is likely that you will hit unrecoverable errors somewhere on the disk. Many "enterprise" disks are rated at one error per 10^15 bits. The error may well go undetected until you try to rebuild the array, leading to failure of the array if more than one disk is involved.
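
As a rough back-of-the-envelope check (my arithmetic, not figures from the attached table):

Code:
# expected unrecoverable read errors in one full pass of an 18TB drive:
#   18 TB = 18e12 bytes x 8 = 1.44e14 bits
#   at a 1-per-1e14-bit rating:  1.44e14 / 1e14 = ~1.4 expected errors
#   at a 1-per-1e15-bit rating:  1.44e14 / 1e15 = ~0.14 expected errors
# a rebuild reads every surviving disk end to end, so those odds apply
# to each remaining member at the worst possible time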

Maybe it is cheaper to buy multiple ethernet-connected BD players and just pop in the movies you want to watch that day?
Then the VMs can reside on a much smaller system.

BTW, copying media that breaks the copyright protection is illegal in the US and thus, to protect this site, we do not discuss such uses.
 
How much do you value the data that the new NAS will be storing? 1000€ is cheap for what it will give you: some peace of mind. An old server can never be seen as its equal in reliability, even if it saves you all that cost.

When you COPY your data over to the new NAS and have verified and tested it to be as stable and reliable as possible (at least a month of actual use, for me), you can use the old server to run your VMs exclusively and decide then whether you need more hardware for that use case.

Keep your data and your compute separate. And don't hold back on spending more where you should.

DIY is great for YouTube clicks. For data you really care about, go with QNAP or Synology instead.
 
For data you really care about, go with QNAP or Synology instead.
All of that is marketing that lines their pockets, since it uses the same techniques as running a Linux box with RAID. Do you need to fluff their marketing budget?

Relying on some company to make it simple, instead of getting away from polished turds and using better methods of managing your data, leaves you exposed. Think of the remote-wipe issue some companies have had in the past; inadequate cooling of the drives inside the case because there's one small 80mm fan cooling the whole system; old DDR3 RAM that's limited to only 2 sticks and usually maxed out at a low amount; restrictive NIC speeds such as 1GbE unless you dump more money into expanding the capabilities, or buy a different chassis altogether to boost performance.

There's a ton of drawbacks if you use a little bit of forward thinking.
 
There's a ton of drawbacks if you use a little bit of forward thinking.

Yes, forward-thinking always gives a ton of drawbacks. :)

But if you want to enjoy your data collection instead of being a full-time admin (or worse, if/when things go wrong), DIY is not the way for most users.

Lining the pockets of companies is the way they're compensated for offering a product worth buying. And in this scenario, once again, it is worth buying.

Buying the right model is called due diligence. It is not up to the business to decide which model is best for any specific individual. Buyer beware.
 
Uplifting the mobo/CPU/RAM isn't a huge task if you want to keep everything in one case.

For proprietary server hardware it usually is. And even if it were possible in this case, it would probably be cheaper to buy new consumer hardware than used server HW, and definitely cheaper than putting new server hardware into the old frame.

Maybe it is cheaper to buy multiple ethernet-connected BD players and just pop in the movies you want to watch that day?
Then the VMs can reside on a much smaller system.

The idea of dumping the discs to HDD is ease of access, especially for the extras; the likelihood of using them would increase exponentially if there were no need to handle discs. And without checking, I think you can get quite a lot of disk space for the price of a multi-disc player.

BTW, copying media that breaks the copyright protection is illegal in the US and thus, to protect this site, we do not discuss such uses.

I think the latest law here states that it's illegal if it requires breaking strong encryption. I'm really not sure whether the one on Blu-ray discs still counts as strong; I think it does, but I don't remember reading about any trial where that was officially decided.

Keep your data and your compute separate. And don't hold back on spending more where you should.

My usage is so light that, in the end, keeping two boxes is pretty much a waste of electricity. When I get answers to my questions I'll know better whether it's beneficial, or even necessary, to keep two boxes instead of one.

One question I didn't ask yet: are there any people here who have actually tried to expand a pool of big (10-20TB) disks on QNAP or Synology, and if so, what were the results?
 
'Wasting electricity' is not really a big concern when the units are idling 90% of the time. It is wasted if you don't really need them for anything useful.

Decide which you value more. Your data, or mere 'features'. Your call.

Expanding a pool on QNAP is as easy as removing one drive, inserting the new (bigger) drive, waiting for it to re-sync, and continuing to do so until all the drives are upgraded. A backup (or two) to external drives of your most important data is suggested before doing so (always). A NAS is not a backup (on its own).
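
For comparison, the same one-drive-at-a-time capacity upgrade on a DIY Linux box with mdadm would look roughly like this (a sketch only; device names are placeholders):

Code:
# repeat for each member: retire the old drive, add the bigger one
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
sudo mdadm /dev/md0 --add /dev/sdf1   # wait for the resync before the next swap
# once every member is upgraded, claim the new space
sudo mdadm --grow /dev/md0 --size=max
sudo resize2fs /dev/md0               # or xfs_growfs for XFS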
 
@Hati - While crystal balls aren't the best option for future predictions, did you have an idea on potential top capacity?

Just curious on the decision to build raid1 then move to raid6.

If you're looking at outsized disks, wouldn't you be better off with RAID 10 than RAID 6? RAID 6 is better than 5, but the problem with both is the hit they put on all disks in the array during a rebuild - meaning that, assuming you get through one rebuild, you're more likely to have another failure afterwards, which increases the chance of at least a secondary failure during the next rebuild, so you're less likely to have a working array at the end of the next recovery attempt.
 
Is it possible to build a NAS that can be expanded from a two-disk mirror to a four-disk parity system, and from there add disks one at a time when space gets low?

Of course it's possible. It depends on your level of knowledge though.
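
On a Linux box with mdadm, for instance, the whole growth path is supported. A sketch of what it might look like (device names are placeholders, the exact flags are worth checking against the mdadm man page for your version, and every reshape deserves a fresh backup first):

Code:
# start with the two-disk mirror
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# later: convert the mirror in place to a two-disk RAID 5
sudo mdadm --grow /dev/md0 --level=5
# add a third disk and reshape to use it
sudo mdadm --add /dev/md0 /dev/sdd1
sudo mdadm --grow /dev/md0 --raid-devices=3
# add a fourth disk and convert to RAID 6 (the second parity lands on it)
sudo mdadm --add /dev/md0 /dev/sde1
sudo mdadm --grow /dev/md0 --level=6 --raid-devices=4 --backup-file=/root/md0.bak
# from here on, each expansion is the same add + grow pair
sudo mdadm --add /dev/md0 /dev/sdf1
sudo mdadm --grow /dev/md0 --raid-devices=5

Remember to grow the filesystem after each reshape (resize2fs, xfs_growfs, or whatever matches your filesystem).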

Home and small office NAS systems are regularly targeted and probed for vulnerabilities. You pay first, hope and pray after.
 
Good points. I run R10 w/ 5 disks in my array with the extra one just sitting there as a hot standby if one does fail.

Code:
 sudo mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Jul 16 14:29:09 2018
        Raid Level : raid10
        Array Size : 19534735360 (18.19 TiB 20.00 TB)
     Used Dev Size : 7813894144 (7.28 TiB 8.00 TB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Jun 13 16:00:44 2023
             State : clean
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : server:0  (local to host server)
              UUID : 4f1abb31:e8466aa3:4b7bea78:28ce6c14
            Events : 277157

    Number   Major   Minor   RaidDevice State
       4       8       33        0      active sync   /dev/sdc1
       6       8        1        1      active sync   /dev/sda1
       2       8       49        2      active sync   /dev/sdd1
       3       8       17        3      active sync   /dev/sdb1
       5       8       65        4      active sync   /dev/sde1

The reason I say hot spare, even though the output doesn't indicate a "spare", is the usable space vs the raw capacity.

[attached screenshot: reported usable space vs raw capacity of the array]


You pay first, hope and pray after.
That's true when things happen automagically when you add a disk. My concern would be what's happening in the background when you do so. I don't trust it to do what they say it will do, rather than wiping out all the data and rebuilding the larger array.

There are just too many unknown processes that occur when changing drives. Spending the extra 5 minutes manually entering the commands to add a replacement disk or grow the array makes me feel better. The other issue, as mentioned before, is the risk that they didn't secure the code running on the chassis. That's where the praying comes into play. If it's not secure, then it's a waste of money and time.
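
For example, on the mdadm array above, replacing a failed member is just (device names hypothetical):

Code:
# mark the dead member out and hand its slot to the replacement
sudo mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
sudo mdadm /dev/md0 --add /dev/sdf1
# watch the rebuild progress
cat /proc/mdstat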
 
My concern would be what's happening in the background

My concern is hackers looking for ways to break into millions of identical over-the-counter NAS boxes with $100 hardware inside and Support/R&D outsourced to Southern Asia because it's cheaper there. Hackers have been successful more than once. I run DIY at home and custom-built NAS servers for business.
 
Good points. I run R10 w/ 5 disks in my array with the extra one just sitting there as a hot standby if one does fail.
I was a bit surprised you didn't mention your R10 setup and its performance earlier ;).

Joking aside though, one point I forgot @Hati - it wouldn't be a migration from a 2-disk R1 array to a 4-disk R6; it would be building an R6 array missing 2 disks, copying the data over, then adding the original 2 disks (1 at a time) - and that process in itself wouldn't be particularly kind to the drives.
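
With mdadm, that "missing 2 disks" step would look something like the sketch below. It works because RAID 6 tolerates two absent members, but the array runs with zero redundancy until the original disks are added, and each add triggers a full rebuild:

Code:
# create a 4-device RAID 6 with only the 2 new disks present
sudo mdadm --create /dev/md1 --level=6 --raid-devices=4 \
    /dev/sdd1 /dev/sde1 missing missing
# ...copy the data over from the old mirror, then retire it
# and donate its disks one at a time
sudo mdadm /dev/md1 --add /dev/sdb1
sudo mdadm /dev/md1 --add /dev/sdc1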
 
Expanding a pool on QNAP is as easy as removing one drive, inserting the new (bigger) drive,

I wasn't asking about swapping disks for bigger ones; I was asking about adding more disks.

@Hati - While crystal balls aren't the best option for future predictions, did you have an idea on potential top capacity?

18TB disks in an 8-slot NAS with RAID 6 makes a maximum space of 108TB (six data disks plus two parity). I don't know how close to that I will get, though. My guesstimate is that I have about 30TB of data to add to the current little-over-10TB, and that it will grow by between 2 and 5 TB per year. It will probably be closer to the lower figure, so the NAS could serve me well over ten years.

Just curious on the decision to build raid1 then move to raid6.

To ease the initial cost; big disks are quite expensive, at least in this part of the world. Two disks on top of rather expensive hardware cost less than four disks on top of expensive hardware.

Home and small office NAS systems are regularly targeted and probed for vulnerabilities.

Yes, but the NAS won't be open to the internet, so if it gets compromised I've already screwed up elsewhere first by getting my computer infected.
 
That has more to do with your firewall policy than the HW itself. What gets people into trouble is poking holes in their own security by wanting convenience outside of their network. Opening ports for this sort of thing is just dumb.

I was a bit surprised you didn't mention your R10 setup and its performance earlier ;).
Didn't need to, since the OP is already running this type of setup, though slightly different. I usually reserve that mention for dumb users trying to make a router into a NAS with a USB connection.
 
Yes, but the NAS won't be open to the internet, so if it gets compromised I've already screwed up elsewhere first by getting my computer infected.

It usually happens when people start using cloud apps and file sharing over the Internet, or from another infected device inside the network.

What gets people into trouble is poking holes in their own security by wanting convenience outside of their network.

Indeed, but malware targets well-known NAS boxes more than custom-built NAS. Again, it all depends on the level of knowledge.
 
Joking aside though, one point I forgot @Hati - it wouldn't be a migration from a 2-disk R1 array to a 4-disk R6; it would be building an R6 array missing 2 disks, copying the data over, then adding the original 2 disks (1 at a time) - and that process in itself wouldn't be particularly kind to the drives.

Ok, that's not something I want to do. And a little googling indicated that you can't go straight from RAID 1 to RAID 6 with QNAP or Synology; you have to go 1 > 5 > 6. I'm not sure I want to do that either. Which means that the initial cost is rising so much that I have to be sure I will actually need all the space, rather than just being prepared to expand the system.

And to be sure, I first need to know my options for going from a four-disk double-parity configuration (be it RAID 6 or something else) to a five-disk double-parity configuration (and six, and so on). It can be done with QNAP and Synology, but can it be done in the DIY world?

(And I didn't consider RAID 10 because it will continue to take half the disk space as disks are added, and because with bad luck a two-disk failure can lead to data loss.)
 
It's an interesting contradiction: minimising the initial build cost versus a total build of potentially (in UK costs) £2,500 in disks alone over the years (without replacements), all for easier access to Blu-ray content (you mentioned a lower concern about recovery because the BDs act as the backup).

I get not wanting to re-do the storage build, but given that a trigger for this work is the slowing VMs, I think focusing on big storage (when by your own admission you're not sure you'll get around to the ripping) is the wrong way around.

I'd say start with the VMs as the focus point. Are you going to want a small, fast pool for them? Will you want to access the big storage array from them? (You're used to it all being on the same board, but shifting big storage to the network changes the dynamic.)
 
Ok, that's not something I want to do. And a little googling indicated that you can't go straight from RAID 1 to RAID 6 with QNAP or Synology; you have to go 1 > 5 > 6. I'm not sure I want to do that either. Which means that the initial cost is rising so much that I have to be sure I will actually need all the space, rather than just being prepared to expand the system.

And to be sure, I first need to know my options for going from a four-disk double-parity configuration (be it RAID 6 or something else) to a five-disk double-parity configuration (and six, and so on). It can be done with QNAP and Synology, but can it be done in the DIY world?

(And I didn't consider RAID 10 because it will continue to take half the disk space as disks are added, and because with bad luck a two-disk failure can lead to data loss.)
If you're running a 4-disk R6 array, adding a disk to extend it to 5 and beyond is easy - see the sketch below. That doesn't change the hit on the disks though; extending a parity-based array is not kind to the existing disks in the array.
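
In mdadm terms, the easy part is a two-command grow (device name hypothetical); the reshape it kicks off rewrites every stripe across every member, which is the unkind bit:

Code:
sudo mdadm --add /dev/md0 /dev/sdg1
sudo mdadm --grow /dev/md0 --raid-devices=5
# when the reshape finishes, grow the filesystem
sudo resize2fs /dev/md0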

While technically a 2-disk failure could result in data loss, it depends which disks - and that ignores the fact that disk wear is far greater under R5 and R6 array management. Those levels push drives more towards accelerated wear/age failure rather than failure within the bounds of warranty/reasonable use.

For what it's worth, I write these posts as someone still running an old R5 array on smaller disks, and I'm moving to R1/10 on larger disks (my capacity needs are lower than yours) when it next needs something.
 
I didn't consider RAID 10 because it will continue to take half the disk space as disks are added, and because with bad luck a two-disk failure can lead to data loss.
The only way you have data loss is if 3 disks fail and 2 of them are the same R1 mirror pair.

The mirror is faster than the parity method, since no parity calculations take place. The extra speed comes from the stripe within the mirror: 4 disks in R10 using 5400rpm drives is still north of 400MB/s, which exceeds your network speed by ~4X if you're running 1GbE.
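
Rough numbers behind that, as I'd figure them:

Code:
# 1GbE tops out around 125 MB/s raw (~115 MB/s usable after overhead)
# 4 x 5400rpm drives at ~100-110 MB/s each, all readable in a striped
# mirror, give roughly 400-440 MB/s sequential, i.e. ~4x a 1GbE link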

Uplifting the core of the system leaves everything else intact and gives you more CPU power for the VMs. Splitting the storage from the compute also leaves things intact; you just move the VMs to the new system.
 
