My DIY NAS Project


handruin

I've been reading this site for a while and finally got around to building my own NAS device for a variety of reasons, which I'll get into. I just wanted to share my build and experience for others who might be interested.

Need:
I set out to build or acquire a NAS device to meet several needs. First, I needed a lot of bulk storage accessible on my home network. A NAS is the obvious solution because I'll be digitizing my movie collection to eventually make it available to my home theater. I also needed an extra backup location for my photography work, where I store lots of RAW images; the NAS would serve as a third online backup location alongside the primary and secondary copies. Lastly, I needed an iSCSI target that can be accessed from multiple computers: I'll be working on a VMware ESXi server setup, and shared datastore access is needed for my project. I also need VLAN capability and trunking for added performance and learning.

I also needed a unit that could grow over time and with future storage technology. As hard drives increase in size and speed, I want to be able to add more drives to the NAS for a better return, rather than buying all the storage now and replacing it later. I needed a system that could grow fairly easily and still perform.

Hardware:
I researched for a long time, read many different forums, and asked a bunch of questions of different people. I eventually settled on using some existing hardware I had at home plus some new hardware I would have to purchase. In my research I found the Supermicro drive cage system that lets me fit 5 drives in the space of three 5.25" bays, which was awesome. I then began a quest to find a reasonably priced case with nine 5.25" bays. I had originally found a Rosewill case for about $70, which is (I think) the Newegg house brand, but during my research it apparently went out of production.

CPU: AMD X2 4600+ (Existing item)
Motherboard: Gigabyte GA-M57SLI-S4 (Existing item)
Memory: 2x 1GB Corsair XMS DDR2 800 (4-4-4-12) (Existing item)
Memory: 2x 2GB Crucial Ballistix DDR2 800 (4-4-4-12) (Existing item)
Video: BFG 8500GT (Existing item)

Case: COOLER MASTER Centurion 590 (black)
Power: Antec NeoPower 480 Watt (Existing item)
Boot Storage: 1x Hitachi 250GB T7K250 SATA Hard Drive (Existing item)
NAS Storage: 5x SAMSUNG EcoGreen F2 HD154UI 1.5TB 32MB
Drive carrier: 1x SUPERMICRO CSE-M35T-1B Black 5 Bay Hot-Swappable SATA HDD Enclosure
RAID Controller: 2x Dell Perc 6i RAID controller (2 SAS to 8 SATA)
Cables: 2x SFF 8484 SAS to SATA Cables
Misc hardware: 2x PCI bracket back plates that fit the Perc 6i into a standard slot in a case.
Cooling: 2x Scythe 40mm chipset fan (for Perc 6i) (Existing item)
Software: OpenFiler

Build and troubleshooting:
When I first built the system, I started learning the different NAS software packages that might work for my project. I started with OpenFiler because, at least on paper, it seemed to fit all my needs. I ran into some issues because at that point I did not have a RAID controller; I was using my six onboard SATA ports, which are driven by an nForce 570 controller that is a bit wonky at times. I was having issues with drives disconnecting and not being recognized.

I thought that maybe OpenFiler was the cause, so I decided to try FreeNAS. I had even more difficulty building the array, so I gave up after an hour of fighting with the software. I moved on to Windows Home Server, which gave me equal amounts of grief: I couldn't find the right drivers, though eventually the XP versions seemed to work. Performance in WHS was really poor on my hardware. I could copy files over to the system, but about two-thirds of the way into a copy, the network transfer would drop from 50-60 MB/s to a few hundred KB/s.

I was never able to figure out why, so I moved on to Server 2008 R2, which actually worked pretty solidly with my hardware, but it wasn't exactly what I wanted. I didn't want to manage a full server OS; I wanted something a little simpler, such as OpenFiler or FreeNAS.

RAID Controller research and port expansion:
During all this time I had been reading and thinking about my expansion goals for this system. My motherboard has two PCIe 16x slots and three PCIe 1x slots. I had been doing a lot of reading about the Dell Perc 6i RAID controller and the value it offers for the money. I found an enormous resource (a single forum thread) over at Overclock.net, and I wasn't even a member there! I read through its 400+ pages and came out knowing that I wanted this RAID card.

So I bought two of them. I found a seller on eBay who was selling the cards new in the bag with the battery backup unit. It just so happens that this seller was also an active member on Overclock.net, so I felt safe buying the cards. They arrived in perfect working condition for $150 each including shipping. I know it's not the best deal people have found, but it worked for me and my budget. I was also able to get to 16 SATA ports for around $300 with a true hardware RAID controller; I looked at a lot of well-known and not-so-known controllers and couldn't get there for the same money.

RAID Controller caveats:
The card isn't perfect right out of the bag; it's missing a few key things.


  • First, it does not come with the traditional metal 'L' bracket on the back of the card. I did some research and found the correct plate at a place called Mouser. The plate fit correctly, but I had to use a drill to widen the screw holes so I could fasten it to the Perc card. This was trivial to solve.
  • Second, the card requires active cooling which does not come with the card. I solved this by using nylon straps to mount a 40mm Scythe fan to the heat sink on the card (see example images below). I also have a 120mm side case fan which blows air over that area of the motherboard for added cooling.
  • Third, the Perc 6i is actually a SAS card and requires the proper cables. I initially ordered SFF-8484 pass-through cables, which break each of the card's two SAS ports out into four SATA connectors (eight total). The cables were purchased very cheaply from a Chinese company called Cross-mark; I ordered them a few weeks ago and have yet to receive them because of the overseas shipping (I'm in the US). Since I'm impatient, I also ordered two similar Tripp Lite cables from Amazon. I paid more than twice as much as all four cables from China are costing me, but I couldn't wait. :)
 
The last piece to share is my next phases for this project. As you can probably see from the images, I only have one of the Perc cards installed even though I mentioned having two of them.

Phase 2:
Right now my video card is using one of the two PCIe 16x slots that I need for the second Perc. I plan to alter one of the PCIe 1x slots by physically cutting the back edge off so that I can put the video card in it, since video performance doesn't matter at all here. The alternative is that I've been asking around for a basic PCI video card that I can stick in there so I can remove my current card altogether.

Once the second PCIe 16x slot is free, I can install my second Perc 6i for more ports and storage expansion.

Phase 3:
Adding more storage is part of this phase. I'll likely wait 6-12 months before buying more hard drives. Hopefully prices will come down and larger, faster drives will be available; then I'll buy 5 more drives and a second Supermicro drive cage.

As part of this phase, I'll also likely upgrade the power supply. I should have done this right from the beginning, but I wanted to make use of the 480W unit I had sitting around. I do have an existing PC Power & Cooling 750W that will be more than enough, which I'll likely swap in during this phase.

Phase 4:
This phase will be another storage expansion, with the final 5 drives and another 5-drive cage, but also a network upgrade. With 15 drives, the network is going to be the big limitation. I'll add a dual-port Intel PT GigE NIC to supplement the existing built-in GigE NIC. I am also considering an HP 1810G-24 switch to allow me to do VLANs and trunking.
 
Thanks for posting up your build. Looks great. Whenever you get around to looking for a dual-port Intel Pro card, make sure you scavenge Craigslist and eBay. I picked one up from a guy on Craigslist for $65 shipped, which was cheaper than the single-port server cards. We ended up doing the deal through eBay, but I found the card on Craigslist by doing a Google search like this... "intel pro 1000 site:craigslist.org"

So now the $1,000,000 question is... what kind of performance are you seeing for file transfers now? :D

00Roush
 
Thanks for the tips on the Intel card. I'll keep my eyes open for deals on eBay and craigslist. I do know someone who is willing to sell me theirs which is barely used, so I do have some options.

Strangely enough, I may need to buy it much sooner than I thought. I turned my NAS on just a few minutes ago and the NIC on the motherboard is causing my entire GigE switch to go on the fritz. All the ports blink whenever I turn the NAS on, causing everything to drop from the network! I tried new cables, rebooting, power cycling, and unplugging. This is the second time this has happened, but the first time it was only brief... grrr. :(

As for the performance...I've been very happy so far. I was going to do some more tests to give you more details, but I'll post what I've tested so far (actually last night).

To set the stage, this is how I tested:

The source machine is my desktop workstation: a Core i7 860 with 8GB RAM and a Gigabyte P55M-UD4 motherboard with a built-in GigE NIC. The internal hard drive used for testing is a Samsung 1TB F3 7200 RPM drive.

Small files:
I copied 1,716 MP3 files in 176 folders, ranging from 3 to 7MB each, for a total of 9535.02 MB from workstation to NAS.
Time: 144.4 seconds
Speed: 66 MB/sec

Medium files:
I copied 569 Canon RAW files in 7 folders, ranging from 10 to 15MB each, for a total of 6637.43 MB from workstation to NAS.
Time: 81.9 seconds
Speed: 81 MB/sec

Large files:
Copied single 3389.63MB file (zip file) from workstation to NAS.
Time: 31.7 seconds
Speed: 106.9 MB/sec

Copied single 7861.72 MB file (zip file) from workstation to NAS.
Time: 74.4 seconds
Speed: 105.6 MB/sec
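
If anyone wants to double-check my math, the speeds above are just total size divided by elapsed time. Here's a quick sanity check in Python using the numbers from my tests (nothing fancy, just the arithmetic):

Code:
# Throughput = total size (MB) / elapsed time (seconds),
# using the numbers from the tests above.
tests = [
    ("Small files (1,716 MP3s)", 9535.02, 144.4),
    ("Medium files (569 RAW)",   6637.43, 81.9),
    ("Large file (3.3GB zip)",   3389.63, 31.7),
    ("Large file (7.9GB zip)",   7861.72, 74.4),
]

for name, size_mb, seconds in tests:
    print("%s: %.1f MB/sec" % (name, size_mb / seconds))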

I attached a quick breakdown of the basic tests I ran so far. If you have some ideas of what is good to test, I'd be glad to try some different things. That actually makes me curious if there is a standard DIY test that people might want to run here to get a baseline for comparison.
 

Attachments

  • transfer_overview.jpg (32.2 KB)
What OS are you running on the workstation that you are using for testing?

I'm surprised that your speeds are so high using only a single SATA drive.
 
Sorry I didn't mention that. I'm running Windows 7 Ultimate 64-bit on my desktop/workstation.

I benchmarked the Samsung 1TB F3 as a single drive and it is capable of those speeds, especially with larger files. The average read speed in some cases was 117 MB/sec.
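
If you want to do a similar rough sequential-read check on one of your own drives, something like this Python sketch works. The file path is just an example, and you'll want a test file bigger than your RAM so the OS cache doesn't inflate the number:

Code:
import time

# Rough sequential-read benchmark: read a large file in 1 MiB chunks and
# report the average throughput. The path below is only a placeholder;
# use a file larger than your system RAM to keep the OS cache honest.
PATH = "D:/bench/bigfile.bin"
CHUNK = 1024 * 1024  # 1 MiB

start = time.time()
total = 0
with open(PATH, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.time() - start

print("Read %.0f MB in %.1f sec (%.1f MB/sec)" %
      (total / 1e6, elapsed, total / 1e6 / elapsed))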
 
Interesting project. I'd be interested to see a performance test just on the server drives. I'm sure someone else can suggest an easy-to-use drive benchmarking tool.
I currently run a server at home, which is a re-purposed old slimline Dell with a 2.8 GHz Prescott-based P4. It is everything you DON'T want from a server or a server case! It was free, though. If I'm lucky I can get 1 MB/s throughput. I really want to put together a new server. I probably won't be able to afford a RAID card to start with, but when I do, I will now take a more serious look at the Perc controllers.
 
What do you mean by tests on the server drives? I do have benchmarks for the individual drives and also for the actual array by itself, outside of the NAS. They were tested under Server 2003 R2 64-bit with a variety of benchmarking tools. I have links to the tools on my results pages linked below if you want to try them yourself.

If you're interested to look at the results, I have the array tests here, and the individual drive tests here.

You should be able to get more than 1 MB/s throughput with a Prescott P4 unless something is configured wrong or broken. All in all, the Perc 6i cost me $150 for each card, $5 for the back plate, and $15 for the cables (about $170).
 
It was benchmarks of the array itself that I was after. Thanks.

My current server is handicapped by not having enough RAM for everything it is running, only having a 100Mbit NIC, and only having a single, not very fast, SATA HDD in it. I'm not surprised by its drive throughput at all.
 
Handruin,

I'm new here and just read this post. I'm glad I stumbled across this forum!

I am planning a NAS build now and it seems as if we have very similar goals. I too am planning a build that can expand to 15 drives using the Supermicro hot-swap bays.

I was also planning to use the Cooler Master case that you used. How do you like it? Do you think it will be sturdy enough once you get 15 drives in there? How much of a problem do you think cooling 15 drives will be in that case?

I don't want to hijack your thread, but I'd really like to hear about any regrets you have with this build, or anything you would do differently.

My current plan is to use an open source NAS OS (possibly booted from a flash drive) and build a 3-drive RAID 5 array using the onboard controller to start. Eventually, I'd like to add hardware RAID cards and additional drives and move all the drives from the motherboard RAID over to the controller card(s). I need to get something set up quickly that is relatively cheap but can be upgraded later. If you have any recommendations for me, I'm all ears!

Your build looks great.

TravisT
 
I have some bad news about my NAS build that I've been meaning to come back and report once I had more to share. I've had some issues with the Supermicro 5-drive bay that caused performance degradation. I found that after using the NAS array for a little while, my transfer rates, which were originally speedy (~100 MB/sec), would drop down to a couple of MB/sec for no apparent reason.

I discovered that all 5 of my drives were reporting a growing 'UltraDMA CRC Error Count' in their SMART data. I googled it for a bit and everyone suggested this is related to a bad cable. I have 5 sets of cables to try, and none made a difference in my testing; I also had different brands of cables, so it wasn't any one specific brand.
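
For anyone who wants to keep an eye on the same SMART counter on their own drives, here's a rough Python sketch that shells out to smartctl (from smartmontools), which usually reports this attribute as UDMA_CRC_Error_Count (ID 199). It assumes Linux-style device names and that smartctl is on your PATH; adjust for your own setup:

Code:
import re
import subprocess

# Report the 'UDMA_CRC_Error_Count' SMART attribute (ID 199) for a list of
# drives by calling smartctl from smartmontools. The device names below are
# only examples; run with enough privileges to query SMART data.
DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    match = re.search(r"UDMA_CRC_Error_Count.*?(\d+)\s*$", out, re.MULTILINE)
    if match:
        print("%s: UDMA CRC error count = %s" % (dev, match.group(1)))
    else:
        print("%s: attribute not reported" % dev)

A steadily climbing raw value on that counter generally points at the link between the controller and the drive (cable or backplane) rather than the drive itself, which is what sent me looking at the drive cage.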

When I took the 5 drives out of the Supermicro bay and connected the SATA cables directly to each drive (rather than going through the backplane), my performance issue disappeared. I decided to return the Supermicro on a suspicion that the backplane was bad. Fortunately, Amazon was very good about my returns (they paid return shipping both ways) and I received a brand new replacement in a few days. Sadly, the replacement had the same problem as my original, which made me really bummed.

I sent back the second drive cage and then procrastinated for a few weeks, until two days ago. I've been trying to decide whether to ditch my current Cooler Master case and design and instead go with a Norco 4220 case, which I've read a lot of people like. The idea is that instead of paying $100 x 3 for the Supermicro bays plus $60-70 for the Cooler Master, I could have just spent $350 and gotten the Norco 20-bay case, which would hold more than I originally planned for and already has hot-swap bays inside.

I went back and forth trying to decide if I should switch everything and ditch the Cooler Master (though nothing is wrong with the case), and decided to take a chance on another drive cage, this time from Icy Dock. It's a little more expensive than the Supermicro, and I would rather the Supermicro had worked fine, but I'm going to try this one and see how it works out. I don't usually regard Icy Dock as the highest quality, but maybe they'll prove me wrong.

You asked if I had any regrets, and so far I don't. I might have gone with the Norco 4220 case from the beginning had I discovered it before my current solution. Using that case would also have let me run fewer cables from my Dell Perc cards, because I could have used mini-SAS cables that feed each backplane with a single connector instead of my current cables, which use 4 SATA ports each. That's about the most I can suggest you consider before trying the Supermicro drive bays with a Cooler Master case.
 
That's really good to know. I'll have to see if that is an isolated problem or if others are reporting the same thing. Although that Norco case is pretty nice, it is also a good bit more expensive (in the short term). If I understand correctly, it would force me to purchase a RAID card initially to connect to the SAS connectors on the Norco backplane. I agree that this would be nice once all 20 drives are populated, but until then it only adds cost to my initial build.

I may still go with the Cooler Master case, but instead of worrying about hot-swap bays, I'll just install my 3 initial drives in the case and assess the situation from there.

Are there any problems going from directly connected hard drives to connecting through a backplane? In other words, if I originally set up my RAID array with the HDs connected directly to the motherboard and then added a backplane, would I need to rebuild the array?
 
I haven't seen anyone else report the issue I've had, so I'm hoping I just got two units that were bad, or there is some other variable I'm missing in my troubleshooting. If the Icy Dock I ordered has the same problem, chances are it's either my Perc or cabling, or maybe the hard drives. I've already tried replacing my 480W power supply with a known-good 750W and that didn't fix the issue.

I agree the Norco is more expensive in the short term, but if your goal is to get to 15 drives or more like I'm trying for, you'll have to consider how long it will take you to grow to that capacity. If you think it'll be 2-4 years, then skip that case, because something newer and cheaper will come out before then (like most computer stuff). :)

You would not be forced to buy a RAID card with the Norco case. You would just need a specific cable, such as a mini-SAS to SATA breakout. Fortunately, SAS is backward compatible with SATA, so you would be able to plug the 4 SATA connections into your motherboard and the single mini-SAS connector into the Norco backplane.

The Cooler Master case works fine with the drive cage that comes with it. There is a 120mm fan in front to cool up to 4 drives, which sounds like it will work for your current needs. My observation with the Cooler Master case is that the metal is somewhat thin, with lots of holes for air and fans; because of this it can be a bit noisier than some other cases. If you do plan on getting the Supermicro or any 5-in-3 drive cage, you will have to alter the Cooler Master case slightly. The case has small support rails for a standard bay device like a DVD drive; in order to fit the 5-in-3, you need to bend or cut out the rails. That was doable with a pair of pliers, and it won't hinder you in the future if you want a single-bay item like a DVD drive.

I did not have any problems with a software RAID 5 when I went from the motherboard connected directly to the drives to the motherboard connected through the Supermicro; the array was still recognized. The backplane inside the Supermicro is transparent to the drives and motherboard. I only used the onboard SATA for a short time before moving over to a dedicated RAID card, but during that time I did not have a problem in the situation you asked about.
 
Greetings,

Amazing numbers there.

So let me get this straight: you are using Windows Server 2008 R2 and Win7 Ultimate on the clients?
 
No, not entirely. I'm only using Windows 7 Ultimate as my client/workstation for the testing. The server (NAS) portion is running OpenFiler.
 
Thanks for the reply. I'm running an E5300 on a P45 board with 7200 RPM Seagate drives. I'm getting 120 MB/s writing to the NAS and 55 MB/s reading from the NAS using Ubuntu Server with no RAID. When I used Windows Home Server 2008 I got 120 MB/s reading FROM the NAS and 55 MB/s writing TO the NAS, exactly the reverse of Ubuntu. I find it fascinating. The client OS is Vista 64 SP2.

Anyway, my test points to a software limitation, not hardware. Time to test OpenFiler.
 
Those are pretty strange results. Did you make sure the network card was configured properly in each OS? There are a few settings you can tweak, like jumbo frames, flow control, and priority. I'm no expert with those, but you might want to play around with the settings to see if you can get better results from your NIC.

If you decide to try and compare OpenFiler, keep in mind that OpenFiler will use any spare RAM as cache. If you have more than 4GB in your system, make sure to select the 64-bit version to make the most of it. I happen to have 6GB of RAM in this NAS, which gives me a pretty decent cache reserve. In some of my tests I tried to use files that exceed the cache size to rule that out; when reading from the NAS, if the data is in cache I never even see the array activity lights blink and I get nearly the maximum transfer rate of a GigE connection.
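
If you want to rule the cache out the same way, one easy option is to generate a throwaway test file bigger than the NAS's RAM before you run your copies. A rough Python sketch (the size and filename are just examples):

Code:
import os

# Write a throwaway test file larger than the NAS's RAM (8 GiB here, adjust
# to taste) so reads can't be served entirely from cache. Random data also
# keeps any compression from skewing the numbers. The filename is an example.
PATH = "testfile_8g.bin"
SIZE = 8 * 1024**3       # 8 GiB total
CHUNK = 4 * 1024**2      # write in 4 MiB chunks

with open(PATH, "wb") as f:
    written = 0
    while written < SIZE:
        f.write(os.urandom(CHUNK))
        written += CHUNK

print("Wrote %.1f GiB to %s" % (written / float(1024**3), PATH))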

Also, once you've installed OpenFiler, use the built-in update feature to get all the latest changes. This made a difference for me in stability and a bit of performance, so it's worth using the updater tool (from within the web-based GUI).
 
Dell Perc Cards

This could be a dumb question.

What is the advantage of, or reason for, 2 Dell Perc 6i cards? I was under the impression 1 Dell Perc could run 32 drives.
 
