Great first results from ZFS NAS


JPorter

New Around Here
I'm loving the results I've gotten thus far on my "SME" oriented ZFS NAS build.

In short, I built a low-cost, high-performance NAS:
  • new Sandy Bridge architecture Xeon E3-1240
  • large-capacity "Nearline" 7200 RPM SAS-2 drives
  • high-speed LSI HBA with SAS 6G (no RAID)
  • OS is OpenIndiana 148b
  • NAS control panel is napp-it
The system drive is a Crucial M4 SSD, and the primary data array on the platter drives is configured as a single large ZFS pool using RAID-Z2 (double parity, like RAID6).
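For anyone curious how a layout like this gets created, here is a rough sketch of the zpool commands involved. The pool name and device names below are placeholders, not my actual ones:

```shell
# Create a double-parity (RAID-Z2) pool from six data drives.
# Device names are illustrative Solaris-style placeholders.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Attach the seventh drive as a hot spare
zpool add tank spare c1t6d0

# Verify the layout and redundancy state
zpool status tank
```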

Benchmarking on the hardware using Bonnie++ from within the napp-it interface returns 529 MB/s sequential reads and 497 MB/s sequential writes. This blows me away considering the (relatively) small amount of money invested. This is from an array with 6 Seagate Constellation ES 1TB drives, and a seventh as hot spare. No read or write cache other than the small 16 MB buffer on board each drive, and Bonnie++ was moving many gigabytes of data, so no cache impact there.
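napp-it drives Bonnie++ for you, but if anyone wants to reproduce a similar run by hand, an invocation along these lines is typical. The mount point and file size here are illustrative, not my exact settings:

```shell
# Run Bonnie++ against a directory on the pool.
# -d  test directory (somewhere on the ZFS pool)
# -s  total data size; keep it well above installed RAM so the
#     ZFS ARC can't satisfy reads from cache (32g is illustrative)
# -u  user to run the test as
bonnie++ -d /tank/bench -s 32g -u root
```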

I wrote up some details on my own site, including a more specific build list if anyone is interested.

In my initial network testing, I'm getting around 80 MB/s writes (directory copies, mixed content) across the network using SMB/CIFS. I haven't done any more properly organized benchmarking over the network yet due to time demands from other projects, but I'll certainly give that a shot if anyone is interested. Can anyone suggest a good across-the-network NAS performance benchmarking tool? I've set up NIC bonding and I'd love to see what this thing will scale to under concurrent load.
 
Welcome to SNB Forums, Jason.

I'd suggest Intel's NASPT for benchmarking.

Bonded NICs won't help throughput from a single workstation. They will help (if the NAS can keep up) with multiple clients.

$2800 is kinda pricey. Did you see Build Your Own Fibre Channel SAN For Less Than $1000 - Part 1 :)

Yes, of course NIC bonding won't affect single-client traffic. I was wondering if there is a benchmarking tool available that actually initiates and measures multi-client traffic.

I beg to differ on the pricing thing. $2800 is quite cheap compared to similarly-performing rack NAS systems from any branded vendor. The DIY article you referenced is a great read, I've followed the whole series... but it would be impossible to build for anywhere near that price using new (non-Ebay) components with full manufacturer warranties.
 
Yes, of course NIC bonding won't affect single-client traffic. I was wondering if there is a benchmarking tool available that actually initiates and measures multi-client traffic.
If you find one, let me know. :) I suspect home-brewed ones are lurking in labs somewhere. I'd just start with NASPT on two machines, hand-started, and see how that works. My guess is that head thrashing will make throughput go down the tubes pretty quickly. Just look at the difference between NASPT file copy (mostly sequential) and directory copy (mix of files and folders of different sizes) results.

I beg to differ on the pricing thing. $2800 is quite cheap compared to similarly-performing rack NAS systems from any branded vendor. The DIY article you referenced is a great read, I've followed the whole series... but it would be impossible to build for anywhere near that price using new (non-Ebay) components with full manufacturer warranties.
Well, you do have a lot of drive slots in your config.

Keep us posted on how things go. I'm sure you've already read through the many ZFS threads already in this forum.
 
Wow, quite the setup you have there. Good to see ZFS is providing good performance. I tested NexentaStor a while back and was impressed with the performance of just a 3 x 1TB RAIDZ setup. I am wondering how your multi-client performance will end up.

Since you guys were talking about a benchmarking tool I figured I would chime in. While I have used the Intel NASPT tool, I have found it to be inconsistent at times, and you don't get accurate results when testing on a client with more than 2 GB of RAM. So lately I have been using Iometer more and more. I also just learned how you can do multi-client benchmarking with Iometer and manage the benchmark from a single client, so I figured I might explain a couple of steps on how to do it.

I am not sure if you are familiar with Iometer or not, but it is essentially a workload generator that can test disks or a network. It is made up of two programs: Iometer, which defines and manages the workload (the front-end GUI), and dynamo, which actually generates the workload. To do multi-client benchmarking, you just start the dynamo program on every client you want to test, using a command line that points to the machine running Iometer. In my case the command line is dynamo /i 192.168.0.5 (address of the machine running Iometer) /m 192.168.0.13 (local IP of the client). When I start up Iometer, I then see two computers listed under the topology section: one is the client running only dynamo, and the other is the local machine. By default each one has several of its own worker threads. Generally I select just one worker thread and assign an access specification to it. This thread might help a bit with configuration. For more detailed information go here.
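To make that concrete, here is the setup sketched out as commands. The IP addresses are the example ones from my own network and would of course be replaced with yours:

```shell
# On the machine that will run the Iometer GUI (192.168.0.5 in my
# example), just launch Iometer normally; it starts its own local
# dynamo worker automatically.

# On each additional client you want in the test, point dynamo at
# the Iometer machine and tell it the client's own address:
#   /i = IP of the machine running the Iometer GUI
#   /m = local IP of this client
dynamo /i 192.168.0.5 /m 192.168.0.13
```

Once the remote dynamo connects, its machine shows up in Iometer's topology pane alongside the local one, and you can assign workers and access specifications to both from that single GUI.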

Not much time tonight but if you need more details or help with configuration I can probably help.

00Roush
 
