
How To Build a Really Fast NAS - Part 1: Introduction


First, sorry about the typo, obviously that should be a P35 motherboard, not P53! :cool:

Currently I'm using the onboard network cards of both motherboards. My home PC's motherboard is a Gigabyte GA-965P-DQ6 and my Windows Home Server's motherboard is a Gigabyte GA-G33M-DS2R. I have two Intel PCIe NICs on order, because I believe I'll get even better speeds with those cards than with the onboard ones.

I've only measured the write speed of my server so far... I did this by copying all the ISO files (several hundred) of my DVDs (at least 4 GB apiece) that were still on my PC over to my WHS. I used Total Commander, because it actually shows the copy speed in MB/s, unlike Windows Explorer. Not very scientific, I know, but a nice indication nonetheless.
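For a more repeatable number than watching the copy dialog, a small script can time a single transfer itself. This is just a minimal sketch; the source and destination paths are placeholders, not anyone's actual setup:

Code:
import os
import shutil
import time

SRC = r"D:\ISOs\disc01.iso"          # placeholder: a local ISO on the desktop PC
DST = r"\\WHS\Videos\disc01.iso"     # placeholder: UNC path to the server share

size = os.path.getsize(SRC)
start = time.time()
shutil.copyfile(SRC, DST)            # plain buffered copy over the network share
elapsed = time.time() - start
print(f"{size / (1024 * 1024) / elapsed:.1f} MB/s ({size} bytes in {elapsed:.1f} s)")

Averaging a few large files smooths out the cache effects at the start of each copy.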

I've not tried copying something back from the server to my PC. I'll do that tonight if you're interested?
 
Currently I'm using the onboard network cards of both motherboards. My home PC's motherboard is a Gigabyte GA-965P-DQ6 and my Windows Home Server's motherboard is a Gigabyte GA-G33M-DS2R. I have two Intel PCIe NICs on order, because I believe I'll get even better speeds with those cards than with the onboard ones.
The GA-965P mobo uses a Marvell 88E8053 gigabit Ethernet controller and the GA-G33M uses a Realtek RTL8111B. Both connect via PCIe. So you don't need the Intel PCIe adapters you have on order.

I've only measured the write speed of my server so far... I did this by copying all the ISO files (several hundred) of my DVDs (at least 4 GB apiece) that were still on my PC over to my WHS. I used Total Commander, because it actually shows the copy speed in MB/s, unlike Windows Explorer. Not very scientific, I know, but a nice indication nonetheless.
I'm surprised the write speed is that high with 4 GB file sizes, because you're busting the WHS server's 2 GB cache. How much RAM, and what OS, is on the machine that is doing the writing?

I've not tried copying something back from the server to my PC. I'll do that tonight if you're interested?
You should. I think you'll be surprised at how different the result is.
 
The GA-965P mobo uses a Marvell 88E8053 gigabit Ethernet controller and the GA-G33M uses a Realtek RTL8111B. Both connect via PCIe. So you don't need the Intel PCIe adapters you have on order.
Wouldn't they make even an ounce of difference? I mean, with regard to buffering or CPU use or something?

I'm surprised the write speed is that high with 4 GB file sizes, because you're busting the WHS server's 2 GB cache. How much RAM, and what OS, is on the machine that is doing the writing?
My main desktop is the aforementioned Gigabyte i965 mobo with 2 GB of RAM and an Intel E6600 CPU. The OS is Windows XP Pro.
The Windows Home Server runs on a Gigabyte P35 mobo, with 2 GB of RAM and an Intel E2200.

You should. I think you'll be surprised at how different the result is.
OK, I'll let you know what the result is... I'm not at home at the moment, so I can't test immediately. Watch this space! ;)
 
Wouldn't they make even an ounce of difference? I mean, with regard to buffering or CPU use or something?

You might get some difference due to the different controllers. But my guess is that you won't get much.

That will be one thing I look at in my tests.
 
Hey thiggins,

I seriously recommend that you do not use the onboard power supply! It's a great way to burn out your PC and all your data >_<

Spend $40 and get a Corsair 450W unit. You will wonder why you did not listen to me when you lose 4 GB of irreplaceable data.
 
If this build is about trying to achieve Gigabit-level throughput, I highly suggest a dedicated RAID controller. Take a look at this article in Maximum PC:

http://www.maximumpc.com/article/raid_controllers_compared

They compared the feature set and performance of on-board versus dedicated RAID cards. The benchmarks are here:

http://www.maximumpc.com/sites/future.p2technology.com/files/imce-images/RAIDbenchmarksBIG.gif

With on-board RAID from either an NVIDIA 680i chipset or an Intel P35, read speeds were up to ~160 MB/s (exceeding gigabit throughput limits). However, write speeds were only 13-21 MB/s. In comparison, dedicated RAID cards reached write speeds of 116-211 MB/s! These were all benchmarked in RAID 5.
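For perspective, those on-board read numbers already exceed what a single gigabit link can deliver; a quick back-of-the-envelope calculation (the ~10% protocol overhead figure is a rough assumption, not a measured value):

Code:
# Gigabit Ethernet line rate vs. what a file copy can realistically use
line_rate_bits = 1_000_000_000            # 1 Gbit/s
raw_MBps = line_rate_bits / 8 / 1e6       # 125 MB/s with zero overhead
practical_MBps = raw_MBps * 0.9           # assume roughly 10% lost to TCP/IP + SMB framing
print(raw_MBps, practical_MBps)           # 125.0 112.5

So a ~160 MB/s local read is already past the wire limit, while 13-21 MB/s writes would be the bottleneck long before the network is.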

So, if I were to build a dedicated NAS box, I would opt for a dedicated RAID controller!
 
You should. I think you'll be surprised at how different the result is.

OK, update. I've just done the test and copied 4 DVD ISOs back to my desktop PC from my WHS. And you were right: speeds held steady in the 40-45 MB/s range.

After copying one file, the speed dropped immensely (to less than 5 MB/s) until the system hung. That's not good!

Any idea why this happens? And how come the speed differs so much? This kind of action only requires reading from the server, I would think, so shouldn't it be even faster than copying to the server? Why is it crashing? What am I missing here?
 
I don't know why you got the crash.

Write takes advantage of OS and NAS caches and can produce speeds that exceed both network connection and drive write throughput.

Read can't take advantage of caches. You are running into the limit of the drive read performance. I will explain further in an upcoming article. But basically, like any other spec, SATA drive specs are optimistic at best.
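One way to see the cache effect on your own hardware is to time the same write with and without forcing it to disk. A rough sketch; the test file size and path are placeholders:

Code:
import os
import time

PATH = "cache_test.bin"               # placeholder: written to the drive/share under test
CHUNK = b"\0" * (1024 * 1024)         # 1 MB per write call
COUNT = 512                           # 512 MB total (placeholder size)

def timed_write(force_to_disk):
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(CHUNK)
        if force_to_disk:
            f.flush()
            os.fsync(f.fileno())      # don't stop the clock until data leaves the OS cache
    return (COUNT * len(CHUNK)) / (1024 * 1024) / (time.time() - start)

print("cached write :", round(timed_write(False), 1), "MB/s")
print("flushed write:", round(timed_write(True), 1), "MB/s")
os.remove(PATH)

The first number looks inflated because some of the data may still be sitting in RAM when the timer stops; reads have no such shortcut.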

Go check actual performance data for your drive at StorageReview, TomsHardware, etc. You'll see that it bears little resemblance to what you see on the spec sheet.
 
We'll get to RAID controllers as part of the series. But the article series is as much about the journey as the destination!
 
32Bit OS or 64Bit OS - DIY NAS

All, I am new here, have been reading for a while, and have also been checking out Google. Has anyone tried using a 64-bit OS? (Ubuntu Desktop/Server, Windows Server 2003/2008)

I am looking to reconfigure my NAS setup to try to get some better performance out of it, and thought I would ask.

Regards,
Tazdevil.
 
32Bit OS or 64Bit OS - DIY NAS

FreeNAS offers a 64-bit version; it's currently in beta.
 
I don't know that 64 bits will get you anything other than access to more memory...
 
Has anyone tried using a 64-bit OS? (Ubuntu Desktop/Server, Windows Server 2003/2008)

I am looking to reconfigure my NAS setup to try to get some better performance out of it, and thought I would ask.

I don't know about Ubuntu and other *nix variants, but there is a significant difference between XP and XP-64. There is much less of a difference between XP-64 and 2003 and 2003-64. The main difference here is the generation of the OS. XP-64, 2003, and 2003-64 are about the same generation. Windows Home Server is in the same category, as it's also built off the 2003 code base. XP was earlier. With the 2003 generation, MS improved network file transfer performance, especially for pushes to the machine. Any of these can hit > 100 MB/s writes in this case, sustained, not cached. Of course this also depends on your hard drives on both sides of the transfer (conventionally requiring RAID on both sides), your sending OS, and everything else in between, as it always does when you're approaching the upper limit of your theoretical speed -- every slight misstep will reduce your effective speed.

2008 is a different case; again, another generation of the OS, one which will especially shine with a similar OS as the client -- either Vista or 2008. However, Vista has been somewhat flaky, and has sometimes had the opposite effect on transfer performance. Vista/2008 to Vista/2008, you can effectively saturate gigabit using standard Windows file transfers (again, assuming that all your hardware, drivers, etc., are up to snuff), which in my experience you can't quite do with older OSes (maybe in one direction -- pushes to the machine -- but typically not in both).

Note also that Vista/2008 can use very large buffer sizes for file transfers -- which improves file transfer performance in some cases.
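If you want to experiment with buffer size on the client side without changing OSes, here is a small sketch that copies the same file with different chunk sizes. Note that it only varies the application-side buffer, not the SMB request size the OS negotiates, and the paths are placeholders:

Code:
import os
import shutil
import time

SRC = r"\\server\share\big.iso"       # placeholder: UNC path on the NAS
DST = r"C:\temp\big.iso"              # placeholder: local destination

def copy_with_buffer(buf_bytes):
    start = time.time()
    with open(SRC, "rb") as fin, open(DST, "wb") as fout:
        shutil.copyfileobj(fin, fout, length=buf_bytes)   # length = chunk size per read/write
    return os.path.getsize(DST) / (1024 * 1024) / (time.time() - start)

for buf in (64 * 1024, 1 * 1024 * 1024, 8 * 1024 * 1024):
    print(f"{buf // 1024:>5} KB buffer: {copy_with_buffer(buf):.1f} MB/s")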
 
Figured I would post up my personal experience trying to build a fast NAS as I felt it might be helpful.

Here are the basics of the personal computer that I have used as the client for most of my testing.

ASUS A8R32-MVP Motherboard
Opteron 165 @ 2.8Ghz
1 GB (2x512 MB) Corsair RAM @ 200 MHz (DDR400)
160 GB Maxtor SATA HD

This board has two network controllers: one is a Marvell 88E8053 (PCIe), the other is a Marvell 88E8001 (PCI). The OS has been Windows XP Pro SP2, but I have also tested Ubuntu 7.10.

When I first set up my file server (two years ago) it was an old AMD K6-2+ 500 MHz with 256 MB of RAM. I used an IDE controller card to support the larger and faster drives that I had available, and an Intel Pro/1000 MT Desktop PCI network card. The OS drive was 40 GB and the storage drive was 160 GB. The OS was Windows 2000 Advanced Server SP4. I tried various Linux OSes, but none could match the network speed of Win 2000. With this setup as the server and my personal computer as the client I could transfer files at around 15 MB/s with no jumbo frames, using my Far Cry install file (2.63 GB) as the test file. The storage drive on the server was mapped as a network drive on the client and I would just copy and paste the file.

The next step was a 1 GHz Intel PIII machine with the same supporting hardware and OS. From what I can remember, this could do around 25 MB/s without jumbo frames and 28 MB/s with them.

The last file server I had before upgrading to PCIe on the server was an AMD Athlon XP 2500+ (Barton) CPU @ 2.2 GHz, 512 MB RAM, FIC AU13 motherboard (nForce2 based), the same hard drives as above, an Intel Pro/1000 MT Server PCI-X network card (in a PCI slot), and Win 2000 Advanced Server. Writing from my personal computer to this server would usually bounce between 40-65 MB/s until it ran out of memory, then stall for a second and finish out the file at about 35-40 MB/s. Reading was usually much more consistent at 40-45 MB/s. From what I could tell, that was about as much as the PCI bus could handle; even using Iperf I could not get much higher, with or without jumbo frames. Since I was still limited by the PCI bus I decided to upgrade to PCIe.

The latest file server for my house is based on the nForce4 chipset, as it has integrated PCIe networking: Athlon 64 3000+ CPU, 1 GB of generic RAM, NF4UK8AA motherboard, 200 GB IDE OS drive, the same 160 GB IDE storage drive used in the other builds, and Win XP Pro SP2 as the OS. From my computer to this server I usually see transfer speeds of about 70 MB/s. From the server to my computer it is usually around 60-65 MB/s transferring large files. Last time I checked, both computers were up to date on drivers, and the only network change I made was to disable NetBIOS over TCP/IP to cut down on chatter.

Here are some graphs of the throughput for my current file server. I also have the actual log file if anyone is interested, and might still have others for some of the previous file servers I had set up. Testing was done using IOzone with this command line: "IOZONE -Rab c:\results -i 0 -i 1 -+u -f z:\test\test.txt -q 64k -n 32M -g 1G -z"
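For anyone who wants to repeat the run, here is one reading of those flags, spelled out one per line (worth double-checking against the IOzone documentation; iozone needs to be on the PATH):

Code:
import subprocess

# The same IOzone invocation, one flag per line -- glosses are one interpretation, not official docs
subprocess.run([
    "iozone",
    "-R",                       # produce an Excel-style report
    "-a",                       # automatic mode: sweep record and file sizes
    "-b", r"c:\results",        # write the spreadsheet output to this file
    "-i", "0",                  # test 0: write / rewrite
    "-i", "1",                  # test 1: read / reread
    "-+u",                      # include CPU utilization in the results
    "-f", r"z:\test\test.txt",  # test file on the mapped network drive
    "-q", "64k",                # maximum record size for the sweep
    "-n", "32M",                # minimum file size for the sweep
    "-g", "1G",                 # maximum file size for the sweep
    "-z",                       # with -a, test all record sizes even for large files
], check=True)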

[writespeed.jpg - IOzone write throughput graph]

[readspeed.jpg - IOzone read throughput graph]


Also wanted to note that I have tried out Ubuntu 7.10 64-bit on my personal computer. With the latest server still running Win XP Pro, throughput was noticeably slower than with Win XP Pro on both sides: somewhere around 45 MB/s, from what I remember.

Hope that isn't too much info.

00Roush
 
No, don't -- the author didn't really know what he was writing about when he provided the RAID 5 benchmarks. Either of those chipsets can do much better with a different configuration.

Hey Madwand. I've read the whole article about RAID controller cards, and it all seemed clear and trustworthy to me, with my limited knowledge of RAID cards.
I'm currently looking to start building my own fast NAS from scratch (I have a ReadyNAS NV+ and a Duo, but I'm willing to give Windows Home Server a try; plus, the NV+ and Duo are not the fastest around...).
So I was wondering what you are suggesting then? Dedicated RAID cards are not the way to go? Software RAID? Ubuntu Server or Windows? Looking forward to hearing your opinion.
 
Thanks for the detailed report, 00Roush. Sorry if I missed it, but what exactly are the "larger and faster" IDE drives used in the server and are they configured in RAID?
 
I'm currently looking to start building my own fast NAS from scratch (I have a ReadyNAS NV+ and a Duo, but I'm willing to give Windows Home Server a try; plus, the NV+ and Duo are not the fastest around...).
So I was wondering what you are suggesting then? Dedicated RAID cards are not the way to go? Software RAID? Ubuntu Server or Windows? Looking forward to hearing your opinion.

This is a very broad question, and there are several possible answers, depending on your wishes, needs, and resources. For a large, multi-user server, a RAID card with an on-board processor and cache is usually the way to go. Similarly for a very CPU-intensive application. Also, if money is no concern, then of course the dedicated controller is the way to go -- there's no good reason for on-board or software solutions to be better, and you'd be paying for a better implementation, feature set and support. All the rest are a bunch of compromises made according to budget and needs.

If you're going with a *nix OS, then for the budget-minded, *nix RAID is definitely the first thing to try out -- it's fairly well featured, free, and can give decent performance. Chipset and hybrid RAID 5 are sometimes not well-supported though, partly because there isn't a big reason for it when you have the native *nix RAID. You might encounter a learning curve and performance bottlenecks on the Samba implementation though.

If you're going with Windows, then know that Windows OS RAID has typically performed very poorly, and is not well-featured, etc. (I haven't tried 2008 yet, but if the long preceding history is any indicator...) On-board RAID 5 can be used instead, as can some relatively affordable hybrid solutions, e.g. from Highpoint. There's also the possibility of using Ciprico's RaidCore software RAID on a compatible chipset.

Windows Home Server is a different kettle of fish -- the developers are kind of RAID-hostile, and a big part of the point of WHS is to forget about RAID and let the OS manage the storage for you. So I, like MS, wouldn't really recommend using RAID with WHS. If you really wanted, you could pull it off though, but I'm not sure why you would. XP-64 would probably do just as well for that purpose (assuming you find drivers), and in some cases Vista would be better.

On chipsets, AMD has a new RAID 5 implementation, and I just don't know much about it. nVIDIA has had RAID 5 for quite some time, and in a few specific cases, it performs well, and the feature set isn't bad. But it's tricky, and often performs poorly for writes. Intel's RAID 5 performs better in more cases -- as long as write caching is turned on -- but has had a pretty poor feature set with respect to expansion. I'd probably start with an Intel RAID 5 chipset and probably try RaidCore software on top of that -- it can perform well, and has a fairly rich feature set (based on the feature set of their hardware bundles at least). Going with an Intel chipset, you'd have at least 4 different RAID 5 options: (1) Go native Intel RAID 5. (2) Use it with RaidCore software. (3) Use it with *nix software. (4) Forget about it and add a dedicated controller.

AMD's RAID 5 might be very similar, but I'm just not sure about its native performance at this time. There's no reason why it couldn't be as good or better than Intel's, but perhaps they just haven't made the effort to do that.

The following post explains a bit about why RAID 5 write performance is poor in some, but not all, cases:

http://forums.smallnetbuilder.com/showthread.php?p=381#post381
 
Thanks for the detailed report, 00Roush. Sorry if I missed it, but what exactly are the "larger and faster" IDE drives used in the server and are they configured in RAID?

You're welcome. I almost forgot: the graphs are read and write using a 64 KB record size, the same as in the NAS performance charts.

As for the IDE drives in my first file server, I was referring to the fact that I needed to use a controller card that supported larger hard drives and UDMA 100/133 modes. The motherboards in the AMD K6 and Pentium III computers only supported UDMA 66 and would not fully utilize hard drives larger than 120 GB or so. The hard drives used were a Western Digital 40 GB (WD400BB) for the OS drive and a Western Digital 160 GB (WD1600JB) as the storage drive shared on the network. On all of my home file server setups these two drives have been set up on their own IDE channels instead of a master/slave arrangement, to ensure no extra disk accesses take place on the network drive. The most current file server is set up the same, but the OS drive is a Western Digital 200 GB drive (WD2000JB). The same WD 160 GB drive is still used as the shared drive on the network. So basically no RAID as of yet.

Just for kicks, though, I have tried to see how many MB/s I could get reading files from my latest server. I installed another hard drive in my personal computer so that I could test using two hard drives simultaneously. So in this test I had the server with drives "A" and "B" shared on the network. My personal computer had drives "C" and "D" which I could copy to. I tested by copying a file from drive A on the server to drive C on my computer while at the same time copying a file from drive B on the server to drive D on my personal computer. Make sense? Looking at just the Task Manager network utilization, I was reading from the server at about 90-100 MB/s. From what I recall this was pretty consistent for the whole file transfer. Files were in the neighborhood of 2 GB each.
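For anyone wanting to reproduce that two-streams-at-once test, something along these lines would kick off both copies in parallel and report the combined rate; the share names and local drive paths are placeholders:

Code:
import os
import shutil
import time
from concurrent.futures import ThreadPoolExecutor

# Placeholder paths: one file from each shared server drive, landing on two local disks
PAIRS = [
    (r"\\server\A\test1.bin", r"C:\test1.bin"),
    (r"\\server\B\test2.bin", r"D:\test2.bin"),
]

total = sum(os.path.getsize(src) for src, _ in PAIRS)
start = time.time()
with ThreadPoolExecutor(max_workers=len(PAIRS)) as pool:
    # Each copy gets its own thread so both network streams run at the same time
    list(pool.map(lambda pair: shutil.copyfile(*pair), PAIRS))
elapsed = time.time() - start
print(f"aggregate: {total / (1024 * 1024) / elapsed:.1f} MB/s over {elapsed:.1f} s")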

Thought I would post up my results to let others know that Windows XP can support a fast NAS.

00Roush
 
Tim,

You might look at the Intel DG45FC Mini-ITX board. It's $149 from mini-box.com.

It's more of an HTPC board, but it is Socket 775 (looks like 65W TDP and lower CPUs), has PCIe GbE, one PCIe x1 slot, 4 x SATA II, 1 x eSATA, RAID 0/1/5/10 support and 10 USB 2.0 ports (6 back-panel and 4 via headers).

The Chenbro case comes with its own external adapter. Were you concerned with replacement issues?
 