Best Price/Perf of Higher-End NAS's?


tji

New Around Here
I have been looking around at the various NAS options for a development lab environment. We work with VMware ESX, and have a lot of VMs that add up to high load on a NAS. I have found the inexpensive NAS devices to be insufficient for performance, so I am looking for the best deal on a more capable device.

There seem to be a lot of good new options in the ~$2,000 range, like the QNAP TS-809, ReadyNAS Pro, and Iomega StorCenter Pro. But my issues are mostly around I/O ops/sec: with about a dozen ESX servers, each running ~10 VMs, the aggregate write-performance requirements are probably beyond the capabilities of these devices.
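For rough sizing of a setup like that, a back-of-envelope estimate helps. The per-VM IOPS figure below is a hypothetical assumption for illustration, not a measurement from the original poster's lab:

```python
# Rough aggregate write-IOPS estimate for a dozen ESX hosts with ~10 VMs each.
# write_iops_per_vm is an assumed ballpark figure; measure your own workload.
hosts = 12
vms_per_host = 10
write_iops_per_vm = 30  # assumed steady-state write IOPS per VM

total_vms = hosts * vms_per_host
total_write_iops = total_vms * write_iops_per_vm
print(f"{total_vms} VMs -> ~{total_write_iops} aggregate write IOPS")
# A single 7200 RPM SATA disk handles very roughly 75-100 random IOPS,
# so even before RAID write penalties this implies dozens of spindles.
```

Even with conservative per-VM numbers, the total lands well beyond what a 4- or 8-bay SATA box can sustain, which matches the poster's experience.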

The upper limit of my budget is around $10,000. So, I am looking for any input on the best performance NAS in the $5K to $10K range.

Any thoughts on NAS's like:
ReadyNAS 3200
Low-end Netapp or EMC
others?
 
I don't test NASes for this, so have no facts for you.

I do know that QNAP and Thecus are aiming at the Virtualization market with their latest high-drive-count offerings. You probably want as much horsepower as you can get, so stay away from Atom-based NASes.

Yeah, you're probably talking ReadyNAS 3200 or NetApp.
 
You could build your own Super NAS...

*Chenbro 4-bay NAS box.
*High-powered quad-core ITX board.
(I'm not sure whether Intel VT / AMD SVM hardware virtualization is supported on ITX boards.)
*A real RAID controller, which should take the load off the CPU and give you the redundancy you need. Adaptec maybe?
*You could run any one of the available free NAS OSes, Windows Home Server, or really any OS of choice.

Just don't expect any kind of power savings from the device. (Been considering building one myself.)

On second thought, you could just use a full-sized computer with a RAID card to meet your requirements.
Although a NAS looks nice, and is small, compact, and feature-filled, you must also look at what you're trying to achieve and
consider that some bigger equipment might be better suited for the job. If a NAS doesn't cut it, it's simply not a match.
Virtualization is also an "iffy business" depending on what is running in the machines. Small, clean footprint: more VMs.
Big, clunky footprint: fewer VMs. Kinda reminds me of the old DESQview multitasker days, except the programs being run
are the size of mountains in comparison. You might be trying to "tow a dump truck with a super duper turbocharged
four-cylinder". No matter how you look at it, it's the wrong tool for the job.

One last thought. The price and size of machines has come down greatly. I don't know how many VMs you run or what your application is
exactly, but you can get a mini 1.2 GHz machine (maybe faster) the size of a big remote control for $200-ish, stuff 40 or 50 of them
in a rolling luggage carrier, then hook them up on a mid-sized bookshelf in about 30 minutes. The real deal always outperforms virtual
systems. Example: you buy a $3,000 machine that can only run 5 VMs before bogging down. How many minis can you get for that
kind of money? Networking them all is not as big a deal as it used to be. Again, depending on the application.

As a side note, I'll tell ya, for personal use I bought the best NAS I could afford: a Synology DS-508. It's expensive for
home use and was targeted at business use. Had I bought a low-end one, I'd be looking for a new NAS right now.
However, it's two-ish years later and it's still meeting my needs, because I realized this at time of purchase. I'll probably
have it till it can't push enough data (unlikely) or it breaks. Right now I don't run emulation software on it, but if I
did, it would be something it could handle without bogging down. An example would be FreeDOS under QEMU, maybe running
an old bulletin board via telnet in a console daemon. An 800 MHz CPU could probably handle that kind of load without too much
fuss.
 
Thanks for the responses.

My first attempt was basically what you described, DevinMe, except with beefier hardware. I got a dual Xeon server board with very good I/O, and installed Linux on it.

The two problems I had were:

- Linux gives very minimal diagnostic capabilities for NFS service. When the box got swamped, it was almost impossible to determine which user / client / file was responsible for the load.

- Performance was not up to my expectations. This is somewhat related to point #1: without good tools to quantify usage, all I had to work from were the very broad stats around MB transferred, and those were not impressive. After tuning some kernel parameters I improved performance somewhat, but still not to where I wanted.
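For what it's worth, on Linux you can at least pull per-operation counters out of `/proc/net/rpc/nfsd` (the same data `nfsstat -s` reports) and diff them over time. It still won't tell you which client or file is responsible, but it narrows down what kind of load is hitting the server. A minimal sketch, using hardcoded sample text so it's self-contained (in practice you'd read the file twice with a sleep in between; the counter positions are the NFSv3 op order, so verify against `nfsstat -s` on your kernel):

```python
# Diff NFSv3 per-op counters between two samples of nfsd stats.
def parse_proc3(text):
    """Return the list of NFSv3 op counters from nfsd stats text."""
    for line in text.splitlines():
        if line.startswith("proc3"):
            # fields: 'proc3', <num_counters>, counter0, counter1, ...
            return [int(x) for x in line.split()[2:]]
    return []

# Two hypothetical samples taken some seconds apart.
sample_before = "proc3 22 0 10 5 0 0 2 0 100 50 0 0 0 0 0 0 0 0 0 0 0 1 0\n"
sample_after  = "proc3 22 0 40 5 0 0 2 0 400 90 0 0 0 0 0 0 0 0 0 0 0 1 0\n"

before = parse_proc3(sample_before)
after = parse_proc3(sample_after)
deltas = [b - a for a, b in zip(before, after)]
# In the NFSv3 op order, index 6 is READ and index 7 is WRITE.
print("WRITE ops during interval:", deltas[7])
```

A big WRITE delta with a small MB-transferred figure is exactly the many-small-random-writes pattern that swamps a box long before the throughput stats look alarming.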

So, I decided to invest more $$ towards a higher-end unit with better management capabilities.

I've been searching for good benchmarks comparing EMC, Netapp, HP, or other storage servers in the $5K to $12K range. But, haven't found any real solid info.
 
We have two "old" SAN devices from NetApp. It's a cluster where one head has 4 shelves of FC disk (basically SCSI) and the other has 2 shelves of SATA disk (14 disks per shelf, with 1-2 as hot spares). Depending on what your virtuals will do, I would say SATA disk in itself can already give problems. I know we saturated the two shelves of SATA disk very quickly (from an I/O point of view). Of course this gets worse if you run something like mail or busy databases on the same spindles.

Anyway, more smaller disks will get you a lot better performance than fewer bigger ones. You might also want to make sure the device is certified for ESX, or VMware will not support it...
 
