How To Build a Really Fast NAS - Part 2: Shaking Down the Testbed

Hey Tim
I mean using the RAM drive in the NAS, not the client. That way you are truly testing the limits of the CPU/NIC/software, not the hard drive(s) that are in the server.

Actually, in the home-built NAS it would be cheaper and easier to stick 4 GB of RAM in the server and create a RAM disk using software, but this wouldn't work when you are comparing it to off-the-shelf NASes, which is where the i-RAM would come in.

Kevin
 
Sorry I misunderstood, Kevin. Right now, I'm looking at what can be achieved with standard, inexpensive hardware. RAM- and flash-based drives don't have the capacity or the cost to be seriously considered by most users, do they?
 
I see what you mean, Tim, but I guess the point of the article is to find the actual limit of a gigabit NIC, etc., not the hard drive, so that we can see what to expect over the next five years or so as SSDs become faster and bigger. Basically, it's a way to take the slowest, limiting part of the system out of the equation.

Thinking about it, a software RAM drive is probably better as it isn't limited by the SATA interface, and it's cheaper. Also, it would only need to be 2 GB in size, which is the maximum file size in the iozone test.
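Off the top of my head, something along these lines should work on a Linux-based server (the mount point and share name below are just examples, not from the article):

# Create a 2 GB RAM disk backed by tmpfs (contents disappear on reboot)
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk

# Then export it over the network by adding a share to smb.conf, e.g.:
# [ramtest]
#    path = /mnt/ramdisk
#    read only = no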

I'd definitely be interested in another part to the article using a software RAM drive.

Kevin
 
Thanks for the suggestion, jalyst. Any idea what the pricing is?

Apologies for the delayed response; I could have sworn I subscribed to this thread...

Not sure. I vaguely recall it being quite pricey, but I think there's a non-supported version that is slightly less "bleeding edge" and is free.

When I've got time to dedicate to a NAS project, it (or something similar) will be the first thing I try.
 
I think your limitations are on your network (NIC/switch/cable)! I've run into the same 40 MB/s problem. I can write to my NAS at 90 MB/s through the most friendly network adapter and switch combination, but I can only seem to read at 40 MB/s at best! Certain router combinations can limit my writes to 40 MB/s. I've also tried connecting computers directly to the server/NAS/router with the same result. I've been using 8 GB movie ISO files as a test.

My home-built server/NAS/router is based on an Intel DG965WH (965 chipset) motherboard with a 1.8 GHz Core 2 Duo and a 2 TB, six-disk software RAID 5 array. Linux says the array benchmarks at 250 MB/s. When writing to this array over the network, the CPU usage is only 50 percent on one core during 90 MB/s writes.
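For anyone who wants to compare numbers, a simple sequential-read test along these lines gives that sort of figure (the md device name is just an example):

# Raw sequential read speed of the RAID array, bypassing the filesystem
hdparm -t /dev/md0

# Or read 2 GB straight off the array with dd, skipping the page cache
dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct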

Computers used for testing are two quad-Xeon servers (Supermicro X7DA8 and X7DA3+), one of which runs a SCSI hardware RAID card (Intel SRCU42X), plus two Nvidia 780i motherboards; all run two-disk RAID 0 arrays at minimum. I can copy files at over 130 MB/s from array to array on the same computer, so disk throughput is not the problem.

Maybe this is why Intel sent you two network adapters!!!
 

In this article Tim showed the results of his network throughput tests using IxChariot and found that PCIe network cards provided around 113 MB/sec of bandwidth. This is a good deal better than the PCI network card result of 67 MB/sec. After seeing this and doing some more testing, he came to the conclusion that the testbed needed to be updated to PCIe network cards.

It sounds like your hard drives are up to the task, so you might try running similar network tests between your computers to rule out a network bottleneck. I usually use Iperf 1.7 for testing my home network. If the network checks out, the next step would be looking at software bottlenecks.
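If you haven't used it before, a basic run looks something like this (the IP address is just an example):

# On the NAS/server end, start Iperf in server mode
iperf -s

# On a client, run a 30-second TCP test against the server, reporting every 5 seconds
iperf -c 192.168.1.100 -t 30 -i 5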

You mentioned Linux... what variant are you running? Also, what OS are you using on your clients? I only ask because some of the Linux variants don't seem to show as good performance as others.

00Roush
 
The server/NAS/router is Fedora 9 x64.
The clients are all Vista 64 Business.

I've run benchmarks with Sandra, but I don't believe them. I'm starting to believe some of these consumer-grade switches are processor-bottlenecked, and so are the adapters.
 

I know my D-Link DGS-2208 gigabit switch can support at least 920 Mbps between two computers here on my home network. According to the D-Link website it has a 16 Gbps switching capacity, which works out to 2 Gbps for each of its eight ports. From what I recall, most of the name-brand consumer switches have similar specs, so I would assume they would yield similar results.

I would look at systematically breaking down where your bottleneck might be. First and foremost, make sure you are using PCIe-based network cards; PCI-based cards will limit throughput. Next, run Iperf between a few computers on your network to see if there are any bottlenecks for TCP/IP. If that shows network throughput below 850 Mbps, your network cards or switch might be the bottleneck. To take the switch out of the equation, test with just a cable between two computers. If the results are still low, then most likely one or both of the network cards need to be upgraded.
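On the Linux box, a couple of quick sanity checks will confirm the basics (the interface name is just an example):

# Make sure the link actually negotiated 1000 Mb/s full duplex
ethtool eth0

# Identify the network controller and confirm it is a PCIe part, not legacy PCI
lspci | grep -i ethernet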

00Roush
 
I've also tried connecting computers directly to the server/NAS/router with the same result.

I've already tried a cable between two computers just to rule out the switches and cabling. I have one switch in the "media cabinet" that has the NAS connected to it, and then a switch in each room to connect multiple devices. So a room-to-room file transfer will have three hops, and from any computer to the NAS will have two hops.
All network devices are on the PCIe bus.

The NAS is running Fedora 9 64-bit; all clients are running Vista 64.

Server:
> cat /proc/sys/net/ipv4/tcp_moderate_rcvbuf
> 1

If the parameter tcp_moderate_rcvbuf is present and has the value 1, then autotuning is in effect. With autotuning, the receiver buffer size (and TCP window size) is dynamically updated (autotuned) for each connection.

so autotuning is enabled.
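The related buffer limits can be checked the same way; the third value in tcp_rmem is the ceiling autotuning can grow the receive buffer to:

# Min/default/max TCP receive and send buffer sizes, in bytes
cat /proc/sys/net/ipv4/tcp_rmem
cat /proc/sys/net/ipv4/tcp_wmem

# Hard caps applied when applications set socket buffer sizes explicitly
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max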

I can write to the NAS at 90 MB/s from at least one computer, but that computer can only read at 45 MB/s or so. Strange...
 

Have you gotten a chance to test your raw network speed with Iperf?

Also have you tried transferring files between two of the Vista machines to see if you experience the same problem?

Just remember that even if TCP performance is good, you still might not see maximum file-transfer performance if the Samba settings are not tuned for high performance. I can't speak for Fedora, but I know Openfiler did not have ideal Samba settings out of the box; it was limited to around 40 MB/sec from what I remember. With Ubuntu Server, speeds have been much better, ranging from 80-100 MB/sec for large files over my home network. All of the Linux variants seem to have different default Samba settings, so this might be something to look at if you haven't already.
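As a rough starting point, these are the sorts of settings in the [global] section of smb.conf that are usually worth checking; the values below are only examples, not gospel:

# Avoid small-packet delays and use larger socket buffers
socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
# Allow large raw reads/writes and zero-copy file sends
read raw = yes
write raw = yes
use sendfile = yes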

You might consider starting a new thread if you are looking for more feedback on why you are not seeing higher performance with your setup.

00Roush
 
I wonder if you are seeing the effect of caching. All writes will be cached before they get to the disk surface. All reads have to come from the disk surface unless the data happens to already be in the cache, so uncached reads are slower.
Just a thought!
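One quick way to separate the cache from the disks would be to empty the server's page cache and then time a local read of a big file; if that is still well above 45 MB/s, the slow network reads aren't the disks' fault. Something like this, run as root on the NAS (the file path is just an example):

# Flush dirty data and empty the page cache so the next read really hits the disks
sync
echo 3 > /proc/sys/vm/drop_caches

# Time an uncached local read of a large file
dd if=/share/video/movie.iso of=/dev/null bs=1M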
 
