Best solution for redundant use of various disks (sizes, speeds, brands)?

gaikokujinkyofusho

Occasional Visitor
Hi, I have a Drobo and a DNS-323 and love them, but they have a finite drive capacity, so every once in a while I end up swapping out smaller drives and adding in larger drives. As a result I have lots (lots is relative, I know) of smaller drives (750 GB, 1 TB, 1.5 TB, etc.) that I would love to be able to use just like I do in the Drobo, that is, as one large redundant partition. I was thinking about a DIY solution, but all the software solutions *seem* to require that all the drives be the same size in order to utilize all their space.

My question is: are there any free software solutions out there that would allow me to have a "Drobo-like" setup? Also, for the record, I am hoping to come up with something vastly cheaper than a Drobo, hence my hope for free software options (though I am open to paid solutions as well, I guess).

Also, a side thought: if there is such software, can anyone recommend any, say, Atom or Neo setups that can take a large number of drives (min. 8+)?

Cheers,

-Gaiko
 
Sun OS and ZFS

My DIY box has been running Sun OS and ZFS on six 1 TB disks for a year now.

ZFS can auto-balance space use, so I can plug in a bigger disk at any time, one by one, if I run out of space.

I set up the six disks as a ZFS pool with double redundancy, so there will be no data loss even if two of the six disks fail.
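A minimal sketch of that kind of setup, assuming the "double redundancy" here means a raidz2 vdev (the Solaris-style c*t*d* device names below are hypothetical):

    # Create a double-parity (raidz2) pool across six disks;
    # up to two disks can fail without data loss.
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

    # Check pool health and capacity.
    zpool status tank
    zpool list tank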

Hope this is helpful.
 
Windows Home Server can also use many drives of different sizes; however, it basically uses a mirroring/duplication type system instead of a RAID/stripe.
 
It depends on how complex you want the management of the storage to be. You could use something like LVM in Linux to group a bunch of disks together, creating two separate, closely sized volume groups, and then mirror two logical volumes (one from each VG) with MDADM. I presume this would work.

(image: lvmz.jpg)


You would have to manually balance the capacities in each volume group, though.
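A rough sketch of that idea, assuming four hypothetical disks (sda through sdd) grouped into two volume groups of similar total size:

    # Group the disks into two volume groups of roughly equal capacity.
    pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
    vgcreate vg_a /dev/sda /dev/sdb
    vgcreate vg_b /dev/sdc /dev/sdd

    # Carve one logical volume out of each group...
    lvcreate -L 500G -n lv_store vg_a
    lvcreate -L 500G -n lv_store vg_b

    # ...then mirror the two LVs with MDADM and put a filesystem on top.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vg_a/lv_store /dev/vg_b/lv_store
    mkfs.ext4 /dev/md0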

Personally, I would group the disks into pairs, the closest capacities together, and then add each mirrored pair to a ZFS pool, something like this:

    pool      980 GB
      mirror  160 GB
        disk1 160 GB
        disk2 200 GB
      mirror  320 GB
        disk3 320 GB
        disk4 500 GB
      mirror  500 GB
        disk5 500 GB
        disk6 1000 GB

You will lose some space, but it will be easier to manage.
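As a sketch, that layout could be built in one command (device names hypothetical; each mirror vdev is limited to its smaller member, and the pool's capacity is the sum of the vdevs, about 980 GB here):

    zpool create tank \
        mirror c0t0d0 c0t1d0 \
        mirror c0t2d0 c0t3d0 \
        mirror c0t4d0 c0t5d0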
 
Thanks to all for your replies. I think the WHS and SunOS/ZFS suggestions were closest to what I am looking for. My next question is hardware. I am looking for a relatively cheap, barebones, really compact setup; say, like one of the WHS boxes with more bays (8+) (and hopefully a bit cheaper), or something like a Rosewill multi-bay tower, but one that can be a standalone server. Any suggestions would really be appreciated!

Thanks again.

Cheers,

-Gaiko

PS: When I say barebones, I pretty much mean a mobo and memory plus the case; since the purpose of this box is to be a place to use my spare assorted drives, I don't really need to be buying an extra drive.
 
Does "compact" and "8+ bays" belong in the same sentence? :)

Touché :)


Thanks for the suggestions. They are "in the ballpark" of what I am looking for; however, they seem to be full servers (big enough for full ATX mobos). Since the main task for this box would be serving up files, I don't think I will need a lot of horsepower, and therefore I was hoping to save on space/power/heat/noise by using some mATX/Mini-ITX setup (and I was thinking that something like one of those form factors could fit in a case about the size of the Rosewill box I posted a link to).

A slightly separate question: hua_qiu mentioned ZFS and Solaris before, which is one of the two software options I am considering (the other being WHS), but in doing some research I found out about Btrfs, which currently supports RAID1-type functionality (and will support RAID5/6-like functionality later). Since I am a bit more comfortable with Linux than Solaris (I haven't used Solaris since undergrad), I was considering going the Btrfs route and wanted to see if anyone else has tried Btrfs RAID1. (I am aware of being able to use ZFS in Linux via FUSE, but that looks like a bit more of a challenge than I can take on at the moment.)
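For reference, a Btrfs RAID1 filesystem across mixed-size disks would be created with something like this (device names hypothetical; Btrfs RAID1 keeps two copies of each chunk spread across the devices rather than mirroring whole disks):

    # RAID1 for both data (-d) and metadata (-m) across three disks.
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
    mount /dev/sdb /mnt/storage

    # Show how the allocation is spread across the devices.
    btrfs filesystem show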

Thanks to all for the help!

Cheers,

-Gaiko
 
I think that mATX and ITX boards should fit in pretty much all cases that can handle the larger boards; you just change around the brass mounting spacers. So really, all you need to do is find a case, and the board will fit.

Of course, as I found, finding the (reasonably priced) case is one of the hardest parts of building a NAS. My NAS project didn't get off the ground for a year or two until I found an acceptable case.

I have a DIY ZFS NAS and had never used Solaris before using it in my NAS. I have had a lot of experience with Linux, though, and I found that the learning curve wasn't that steep.

Btrfs still isn't out of testing, and if there is one part of your NAS that you want to be bulletproof, it's the FS.
 
Thanks Hydaral, yeah, the more I read, the less appropriate Btrfs seems at the moment, and the better Solaris is looking.

As for the case, yeah, finding what I want is turning out to be a hell of a lot harder than I thought it would be. The closest I have come so far is VIA's NSD-7800, which I am hoping has come down in price (and whose software/drivers have caught up; as of early 2010, the few posts I have found indicate that there are a range of driver issues for most non-Windows OSes). How is Solaris with compatibility issues? Anywho, if I can actually find this VIA case (it seems rather hard to find), I might go that route, as it has the right number of bays for the right size. Any other similar hardware suggestions would of course be *really* appreciated!
 
Perhaps I have been going about this all wrong. I really want a "one box" solution, but more than that I want a fairly cheap solution that is fairly low power and that has lots of internal drive bays. So far this has been elusive (the VIA box I listed previously is close, but the difficulty of finding it, coupled with compatibility issues, has made me leery of it).

Maybe just getting a storage device (like the Rosewill tower) that has a bunch of 3.5" internal bays, supports JBOD (independently addressed), and has eSATA (or USB 3) ports, plus a small Atom/Ion box (running something like Solaris/FreeBSD + ZFS), is a better way to go? That way (assuming the box had more than one eSATA/USB 3 port), I could just add another storage device if I used up all the bays. It's not quite as neat as the self-contained vision I had, but anywho... I'd love to get some thoughts on this, and maybe hardware suggestions for small/cheap/Linux (or FreeBSD/Solaris) compatible Atom/Ion boxes.

Regardless, thanks to all again for the suggestions/help.

-Gaiko
 
Solaris compatibility can be a problem. You can check the HCL: http://www.sun.com/bigadmin/hcl/data/sol/ but when I was looking through it, I couldn't find any of the hardware I wanted to use. I just had to take the chance.

Check out the Antec gaming cases: http://www.antec.com/Believe_it/global/product.php?Family=NA== A quick search says that the Three Hundred is about $60-$80 without PSU, with 9 bays. This is still a tower case, but if you want 8+ bays, there aren't that many ways of fitting the drives into a small area without hacking the case up yourself.

This is nice: http://www.lian-li.com/v2/en/produc...ex=546&cl_index=1&sc_index=25&ss_index=63&g=f though it's Lian Li, so it's going to be expensive.
 
ZFS can auto-balance space use, so I can plug in a bigger disk at any time, one by one, if I run out of space.


Is this true? I'd just written out an entire usage scenario (http://forums.smallnetbuilder.com/showthread.php?t=5010) based on the assumption that one would have to manage the distribution of RAID arrays over the available drives oneself. Could you show me some links about this feature of ZFS, possibly some use cases?

Thanks!

Gr.
 
ZFS can auto-balance space use, so I can plug in a bigger disk at any time, one by one, if I run out of space.
Is this true? I'd just written out an entire usage scenario (http://forums.smallnetbuilder.com/showthread.php?t=5010) based on the assumption that one would have to manage the distribution of RAID arrays over the available drives oneself. Could you show me some links about this feature of ZFS, possibly some use cases?
"Auto-balance" can mean various things. In the case of ZFS (essentially a transparent form of LVM), the space in the pool (volume group) can be allocated to any child filesystem (logical volume) dynamically, unless there is a quota on the FS.
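For example (pool and dataset names hypothetical):

    # Both filesystems draw from the pool's shared free space...
    zfs create tank/media
    zfs create tank/backups

    # ...unless one of them is capped with a quota.
    zfs set quota=200G tank/backups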

There is no "auto balancing" of disks in ZFS similar to X-RAID (I'm still not exactly sure how X-RAID works). I presume X-RAID divides the disks into partitions first, though; this is not recommended in ZFS, as it is designed to work at the disk level rather than the partition level, so you would lose some of the features.

If you are using mirroring in ZFS, then the bare minimum number of drives you need to add is two; adding these to the existing pool will increase the space available to the filesystems. You can add one drive, but obviously this would be non-redundant and would put your entire pool at risk from a single failure.
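Growing a mirrored pool therefore looks something like this (device names hypothetical):

    # Add a new mirrored pair; the pool grows by the smaller disk's capacity.
    zpool add tank mirror c1t0d0 c1t1d0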
 
Another hardware setup scenario (feedback appreciated).

I have been trying to figure out the least expensive/least troublesome setup for storing data on various sized disks, and the most recent idea I had was something like two RAID/JBOD towers (like the Rosewill RAID tower) + a mini computer + an eSATA hub, which would give me about 16 drive bays for less than $700 (about the cheapest solution I have been able to come up with). I seem to be torn between using ZFS and unRAID, but I was unsure whether my hardware setup would give Solaris or unRAID a problem (i.e., two storage towers connected via 4 eSATA cables). The second cheapest solution I have found is a 12-bay device by Limetech (the unRAID guys), but I don't get the impression I could get Solaris to work on it without some serious coding kung fu (should I decide I want to go the ZFS route), and anyway, that is $700 for 12 bays, not 16.

Any comments welcome.

Cheers,

-Gaiko
 