Critique my DIY Build

travist

Occasional Visitor
I'm new here, but have been looking for info on DIY NAS projects, and most of it was dated - so I'm glad I came across this site!

I have been planning my build for a little over a month now, and have tweaked it a little since reading some posts on this site. Any opinions would be appreciated.

Case: COOLER MASTER Centurion 590 RC-590-KKN1-GP Black SECC / ABS ATX Mid Tower Computer Case
Mobo: MSI 870A-G54 AM3 AMD 870 SATA 6Gb/s USB 3.0 ATX AMD Motherboard
Processor: AMD Sempron 140 Sargas 2.7GHz Socket AM3 45W Single-Core Processor Model SDX140HBGQBOX
Video Card: Some type of PCI card, possibly removed after initial setup.
RAM: Kingston ValueRAM 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Dual Channel Kit Desktop Memory Model KVR1333D3K2/4GR
PSU: CORSAIR CMPSU-650TX 650W ATX12V / EPS12V SLI Ready CrossFire Ready 80 PLUS Certified Active PFC Compatible with Core i7 Power Supply
NIC: Onboard until upgraded to an Intel server NIC
Drives: ?
RAID: 5 ?
OS: Openfiler ?

My main goals are to have centralized storage for my Windows domain with some redundancy, stream media (videos/MP3s/pictures) to computers and HTPCs, and possibly serve some iSCSI targets for VMware use in the future. I need to keep the build on the cheap side without compromising its upgradeability to something more robust in the future.

Current plan is to set this up using the on-board RAID controller with 3 drives in a RAID 5 array. I will upgrade by adding additional drives as well as a hardware RAID controller, move the data off the on-board RAID, and then re-utilize the initial 3 drives as part of a second array. I'm going with the Cooler Master case for its expandability to up to 15 drives (using hot-swap bays).

Some questions I have are regarding the processor, hard drives and OS:

I know that while using the onboard RAID, performance will likely take a hit due to the RAID 5 parity calculations, which will be handled by the processor. I'm willing to sacrifice some performance until I upgrade to hardware RAID, as long as it will limp along until I do. I decided to go this route for power savings in the long run. Any thoughts/recommendations are welcome.

I'm not sure which type of hard drive is best to go with, but after being a loyal Seagate customer for many years, having them ship me a bad drive (as the RMA replacement for another bad drive), and then refusing to pay shipping for the second replacement, I'm done with them.

Finally, I've used Openfiler briefly on some dated Dell hardware with SCSI drives, but I have no experience with the other open-source OSes out there. I do have MSDN access, so Windows OSes are an option. I'm looking for dependability, though, as long as I can integrate into my current domain structure for permissions, etc.

Thanks for any suggestions - back to digging through more posts now!

TravisT
 
Personally I would opt for a dual-core, if not a quad, even if you had to go with lower MHz in the same price range.

With a single core, there is a much higher chance of some other heavy process blocking your RAID processes (e.g. the iSCSI process blocking smbd), slowing down the whole chain.

Dual+ cores are even more important if you plan to run torrent or media servers, or whatever else, on your RAID box.
 
Good advice. I would like to consider doing a torrent box or something along those lines, so I may do that. I may also do that during an upgrade phase, since the processor I listed above is so cheap - it's almost a throw-away item.

Also note that the mobo supports core unlocking, and this processor can be unlocked to function as a dual-core. I have no experience with this, but I think it may help my cause. At the very least, it will get me by for $37, and if I upgrade it, I can re-utilize it in an HTPC running XBMC or something.

Great point though!
 
I don't have too much experience in the NAS domain but I am building one myself.

If you have time, I would suggest testing each OS yourself. I've tested unRAID, FreeNAS, Ubuntu and Windows 2k8 R2. Windows gave me the best performance on average for my config (2 EARS disks, software RAID 1). I might go with 2k8 given that I feel much more comfortable troubleshooting a Windows box than any other OS.

If you decide to go with Windows, you can pretty much take any disks. If you decide on another OS, I would suggest avoiding the new Western Digital disks with Advanced Format (EARS), as not all OSes support them yet (FreeNAS is the worst). I experienced it myself.
 
Looks OK, but definitely grab a dual-core at the very least.

The only drives I've never had any problems with are WD.
 
Thanks for the advice. I'll definitely look into all of the things mentioned.

I'm really unsure on the OS still. I'm very comfortable with Windows, and could easily run Server 2008 on this box, but I really don't want to have to "manage" this box much. At the same time, I would like to learn Linux-based OSes more, and this would be one way to force my hand at it. Then again, would a file server with tons of my important documents/files/media be a good place to learn? I don't have the answer to this. Also, OS space could present an issue, as I'm considering running something from a flash drive, but I have mixed feelings on this as well. I'll keep reading, and any other recommendations are welcome.

Does anyone have experience unlocking the cores of the single core processors, such as the one I linked above? What are your thoughts on running that dirt cheap processor if I can unlock the second core? Would this suffice?

As for the WD drives, I really doubt that either Seagate or WD has a better drive than the other. Sure, one might have a feature that the other doesn't, and I'm sure someone has had a similar experience with WD to the one I've had with Seagate. I just feel that customer service is important, and they could have easily paid the $5-10 shipping charge to keep a loyal customer. I'm not sure how WD would handle the same situation, but until I find out, I'll be buying WD drives from now on...
 
You should try/experiment with all the freely available NAS/RAID OSes before putting any important data on there.

Once you find/figure out which works best for you, then you can start committing your data to it.

The other thing to remember is that if you use a NAS/RAID as the primary/only storage for your data, that RAID/NAS is NOT a substitute for a backup.

Drives WILL fail, accidents will happen, and Murphy's law is in full effect and then some when it comes to storage.

If you only have one copy of any particular data/file, then you are at high risk of losing it at some point.

Off-site backups are the safest; even a simple backup to external drive(s) is better than nothing, but still not safe from fire/flood/theft/etc.
 
I don't plan to experiment after storing anything important on the server. What I am (slightly) afraid of is that sometime down the road I'll run into a problem that I don't know how to solve in the NAS OS. That's the only problem I can foresee with using something other than Windows. Then again, I could have a problem with Windows just because it's Windows.

I also don't intend to store anything of importance on this server without having backups, although this server would be more fault tolerant than what I'm running now, as most of my data is on single hard drives. My goal is to improve performance as well as offer SOME redundancy. I'm currently using external HDs to back up data, but I'd be interested to see what others are doing for backup. I hear a lot of mentions of off-site backup. What off-site backup options are there (other than burning to DVD and keeping it at another location)? I'd like to look into this more, but I don't anticipate too many options for large amounts of data.

Thanks again for the input, and please keep it coming.
 
So I'm a day or so away from ordering the basics for my build, and wanted to check back for any last opinions before I drop the hammer. So far I've decided to stick with my original plan. The only concern I have is that the Sempron 140 may not unlock the second core. If not, that will be one of the first upgrades to the system.

I think I've sided with FreeNAS, but I'm not 100% set on that. I am really torn on which FS to use. I'm a little interested in ZFS, but unsure if that is the right way to go. I also have plans to use iSCSI for a couple of Server 2008 boxes, but unfortunately I don't have the equipment to test this out prior to putting this NAS into use. I'll have to choose iSCSI support based only on others' reviews.

I don't think I'm going to order hard drives just yet. I have a couple lying around that I can experiment with, but I'm a little confused about the best drive for the money that will be supported by FreeNAS. Suggestions here would be greatly appreciated.

I really want to evaluate FreeNAS on a real machine, as opposed to in virtualbox, which I have been using. I'm pretty confident that the performance will improve drastically when running off dedicated hardware, and will make the choice much easier.

If anyone has any final comments on the hardware above, or recommendations on hard drives I'd love to hear them.

Thanks!
 
I placed the order last night. I stuck with what is listed above, and got 4 WD 1TB Blue drives that I plan to experiment with. I think I'm going to go with FreeNAS, but I will likely check out the competition once I get the hardware set up, to see how I like each one in a "real" environment.

I purchased the 4 drives with the expectation of running RAID-5 and having 3TB of usable space. I've been reading up on ZFS lately, and although there seem to be mixed feelings, I'm considering going that route. I don't know much about it right now and plan to read up more / test on my hardware to see what performs better and makes more sense.

Any opinions on which way to go? Also, where can I find which ZFS features are enabled in the latest FreeNAS version? I've heard it doesn't have the latest and greatest ZFS capabilities yet.
 
ZFS has some screaming advantages in terms of data reliability. These are:
1. No RAID "write hole". You can set up raidz and get the advantages of not worrying about managing content on a per-disk basis, and not suffer loss of all your data if a failure happens in the middle of a write.
2. Built in data scrubbing. It can be set to run a scrub operation in the background, looking for silent bit corruption and fixing it.
3. Much simpler management than other forms of mirroring/RAID, as far as I've seen. It can be set up for mirrors, arrays, arrays of mirrors, and include hot spares, and you only deal with virtual devices and file systems inside the total size of the pool. You're not concerned with how big any particular partition or disk is until you get near the full size of the array.
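
To give a feel for point 3, here's a rough sketch of the kind of commands involved (the pool name "tank" and the disk device names are just placeholders I made up; the exact device names depend on your OS):

    # Build a single-parity raidz pool out of four whole disks
    zpool create tank raidz da1 da2 da3 da4

    # Start a background scrub that finds and repairs silent corruption
    zpool scrub tank

    # Check pool health and scrub progress
    zpool status tank

    # Carve out file systems without worrying about per-disk partitions
    zfs create tank/media
    zfs create tank/backups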
 
ZFS is extremely interesting, but as far as I can tell, it completely and utterly lacks the ability to expand capacity by adding disks to existing pools. Most hardware raid devices as well as software raid can do this.

If I were going to start with the final number of disks, that would be OK. But I'd prefer to add disks as I need the capacity...

It depends on your usage scenario.
 
ZFS is extremely interesting, but as far as I can tell, it completely and utterly lacks the ability to expand capacity by adding disks to existing pools. Most hardware raid devices as well as software raid can do this.

If I were going to start with the final number of disks, that would be OK. But I'd prefer to add disks as I need the capacity...

It depends on your usage scenario.
It does depend on your usage.

I have a slightly different understanding of the problem with expansion. My understanding is that you cannot expand a Vdev by simply adding a disk. What you can do is replace the disks in the Vdev, one at a time, with larger disks; ZFS interprets each change as a faulted and replaced disk and resilvers it. When the last disk is replaced with a bigger disk, the size of the Vdev is increased to the new larger size.

I believe you can also add new vdevs to a pool. So if you have a raidz2 vdev of five 1TB disks, you can replace all of them, one at a time, with 2TB disks to double the size of the Vdev, and you can add a new Vdev to the pool, which may be a bare disk, a mirrored pair, or a new raidz of some number of new and larger disks.
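
Roughly, the commands for both approaches look like the sketch below (pool and disk names are made up; the autoexpand property only exists on newer pool versions, so on older ones you may need to export and re-import the pool before the extra space shows up):

    # Grow a vdev by swapping its disks one at a time with bigger ones;
    # wait for each resilver to finish before replacing the next disk
    zpool replace tank da1 da5     # 1TB disk out, 2TB disk in
    zpool status tank              # watch the resilver progress

    # On newer pool versions, let the vdev expand once the last disk is swapped
    zpool set autoexpand=on tank

    # Or grow the pool by adding an entirely new vdev alongside the old one
    zpool add tank raidz da9 da10 da11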

The difficulty in changing the size of the Vdev can be considered a side effect of the robustness of the data storage. Changing the block structure which helps keep the data fault tolerant is difficult to do for the same reasons that it keeps data safe.

I think. :) I'm by no means an expert. I've just had good results with setting up and running my first two zfs servers. Basically, as soon as I knew and could type in the proper magical incantations to set it up, they both ran with no hitches. Both are in test before I commit actual backup data to them. A backup that you have not tested is no backup at all.

N.B. Yes, I understand that a RAID is not a backup. That should be changed to say that a working RAID containing the only source for a file is not a backup. But a RAID used solely as a backup for the existing working files IS a backup.
 
Thanks for your input on that. I'm far from knowledgeable on the subject of different RAID levels and varieties, but I'm learning. One thing I do know is that I've planned for a total size using 20 disks, but don't plan to have 20 disks initially, nor do I want to replace every disk in order to expand my storage. I would like to just add another disk of the same make/model as I install initially. Because of that it seems that hardware RAID (maybe a PERC card) would be in my best interest.

Am I on the right course here?
 
I'm far from knowledgeable on the subject of different RAID levels and varieties, but I'm learning.
I'd have to say that I'm at much the same place. Read a lot, think, read a lot, think.

One thing I do know is that I've planned for a total size using 20 disks, but don't plan to have 20 disks initially, nor do I want to replace every disk in order to expand my storage.
One thing I find very helpful is to write down what I'm trying to do, in complete sentences. In my case, the number one item was to ensure that whatever I put into the array was proof against data loss as much as I could practically make it. Transfer speeds and sheer raw size of the array were secondary.

My servers are for backing up working PCs, with the PCs themselves containing the working copies, and the servers the second backup copies. Things too valuable for live storage are backed up at another level or two. Things that aren't expected to be accessed often or change over years are also archived to DVD with error checking/correcting code to catch and repair DVD bit rot.

That being the objective, I needed only comfortably more storage than the working PCs. I personally don't store a lot of video on disks, period. This cuts way down on the amount of storage I need. Photo storage is largely to backup and DVD archive. So my needs for backup on half a dozen machines are pretty much down at the few-TB region. Your needs may be the same.

I've read about people trying to save money by attaching old, smaller drives to make a bigger array. For my particular situation, I consider this false economy. Several smaller-capacity drives eat up more in electricity over a very short time than the cost of new high-density drives. A new 1TB drive is about $70, sometimes cheaper. I recently got 1.5TB drives from Frys on sale for $80 each. Just on electricity usage, you can pay for a new TB-class drive with the savings in the electricity bill from multiple older, smaller drives.

And for me, the use of older drives is false data economy. An older drive is closer to wearing out; nothing lasts forever. If what's important is data integrity, new drives just past burn-in are where the data should go.

And in ZFS at least, there is a body of recommended "best practices". From that I gather that a Vdev in a pool should not be more than about 6-8 disk drives, depending on the raidz version. Putting in more disks than that complicates things in terms of replacements for failures. If you must have more disks than that, it's smart to group them into multiple vdevs. The pool of storage constructed from vdevs does not care how they are grouped; it uses the mix of all vdevs as a single, well, pool.
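
As a made-up example of that grouping, twelve disks might go in as two six-disk raidz2 vdevs rather than one wide vdev, something like:

    # One pool, two raidz2 vdevs; ZFS stripes data across both vdevs
    zpool create tank \
        raidz2 da1 da2 da3 da4 da5 da6 \
        raidz2 da7 da8 da9 da10 da11 da12

    # A hot spare shared by the whole pool
    zpool add tank spare da13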

In my case, I got a deal on 750GB raid-rated disks, and have eight of them, six live ones and two for replacement spares. That gives me 4TB of live storage, with enough redundancy to withstand up to two disks at a time failing without losing data. This is plenty for my storage needs; and having my objectives written down lets me stay straight with where I was going.

So for me, getting to a dozen disks actually spinning at once is conceivable, maybe, but 20 would be way more than I'd ever need. But then I'm not running a video editing service nor a web server farm. And six live ones at once is enough for my present needs. I did buy a case which has spaces for up to 15, but it was a good deal, and that swayed me.

I don't have the power supply capability to start 15 disks at once. Startup current on a 3.5" disk is 2A (new disks) to 3A (older disks) or more on the +12V rail. 20 disks at startup is 40-60A of +12V, plus another 10A or so to start the motherboard, so you need a power supply that can deliver 50-70A of +12V if you can't do staggered spinup. If you get into the 70A case, that's 840W in just +12V, and you're perilously close to the 1kW mark of the biggest PC power supplies available.
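
Written out, that worst-case spin-up math is simply:

    20 \times 3\,\mathrm{A} + 10\,\mathrm{A} = 70\,\mathrm{A}, \qquad 70\,\mathrm{A} \times 12\,\mathrm{V} = 840\,\mathrm{W}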

I would like to just add another disk of the same make/model as I install initially.
It may not be a good idea to use multiple disks of the same make/model. This is because each make/model is going to have its own failure modes. By making them all the same, you maximize the chance that you'll get multiple failures simultaneously and lose your important data. It's the old "don't put all your eggs in one (common design) basket."

Because of that it seems that hardware RAID (maybe a PERC card) would be in my best interest.
Investigate carefully. Some hardware RAID cards are "fake RAID", where the hardware doing the actual RAID work is really a software driver in the machine. And do look into the hardware RAID "write hole" issue.
 
Frustrated!!!

So I'm getting pretty frustrated with this install...

I first installed FreeNAS to a USB drive using AHCI as the drive controller mode. It installed fine, but I couldn't get the software RAID to mount - I kept getting an error-retry message. I ended up getting ZFS to work, and used it a little bit.

I wanted to benchmark performance with a couple of different configurations, so I attempted to enable the on-board RAID controller on the MSI 870A-G54 motherboard. Any time that is enabled, as soon as it starts the boot loader, it automatically reboots. I've tried everything I can think of with no luck.

So I think that maybe I should give OpenFiler a try. I begin the install, and no matter which settings I configure, it says the drives can't be found (even though I have a 30GB IDE disk installed and all other disks disconnected), and it won't continue. Not sure what's going on here, but I'm really frustrated after messing around with this for many hours over the last couple of days.

Any ideas of something I could be missing? Should I give up on the on-board RAID (which may not be smart in the first place)? Should I focus on getting software RAID to work, or just accept ZFS since I can make that work?
 
Yes, I would give up on trying to use the onboard RAID in anything other than Windows.

I am not sure if AHCI mode is supported for the newer AMD chipsets with FreeNAS. Have you tried another mode?

00Roush
 
FreeNAS seems to install OK in AHCI mode. I didn't realize that about the onboard RAID - that might have changed my mobo selection, but it's good to know nonetheless.

I'd really like to install OpenFiler just to compare. I used OpenFiler on an old Dell PowerEdge server a while back, but didn't spend much time with it. I would like to compare the two, as well as any other NAS OSes out there that may be contenders. OMV really looks interesting, but unfortunately it doesn't look like it's ready for release yet.
 
Well I think you can use the onboard RAID with other OSes but it seems like it is a bit difficult to get working properly. But this is just based on a bit of reading I have done on the subject... I have not actually tried it.

Openfiler is definitely an option. The last time I tested it, though, it was difficult to set up and didn't offer the ability to change buffer sizes for Samba. But this was quite a while ago, so I imagine things have improved.

Not sure if you are interested but Ubuntu Server works just fine as a NAS OS. While I have not tested the latest version I have found previous versions offer good performance out of the box. Also it has one of the largest support communities so most questions can be answered by a quick google search.

00Roush
 
Just to keep everyone updated, I've moved to EON Storage for testing. I haven't decided on it yet, but my initial impressions are good. Preliminary tests show just over 100MB/s write and 80MB/s read rates on a 4-drive raidz pool using the IOzone test (32M read / 32M write with 8 workers). I've been on the road a lot, so I haven't been able to test extensively yet.
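
For reference, an IOzone throughput run along those lines would look roughly like this (the flags shown are just one way to express a 32MB sequential read/write test with 8 workers; the actual invocation may have differed):

    # Sequential write (-i 0) and read (-i 1) throughput, 8 workers, 32MB file per worker
    iozone -i 0 -i 1 -s 32m -t 8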

The one thing I like about it so far is that there seems to be great documentation, all in one place, on the commands and syntax for setup. I know that there is plenty of Linux documentation and user support via forums, but finding administrator guides all in one place is a very nice thing for a *nix newb.

I like the idea of using ZFS, and feel that it is the way to go. My knowledge of it is still lacking, but right now the only thing I don't quite understand is how to grow the zpool (i.e. add additional disks). It seems that if I start with 4 1TB drives (3TB usable), the only way I can add to the pool is to create another raidz set (with its own parity drive) and add it to the pool. That seems to work fine, but if your goal is 15 drives, you will end up with far more than one parity drive by expanding that way instead of starting with the total number of drives.

Anyway, I'll try to update with my experiences and results for others to benefit.
 