Notes about my inexpensive DIY NAS box for VMware iSCSI.


tropmonky

New Around Here
I wanted to document my findings for anyone that's also looking for VMware iSCSI storage solutions.

I already have an HP MSA SAN handling my live VM storage; however, I needed an inexpensive solution to push my VM backups to.

The VMware Data Recovery agent can use an NFS share or an iSCSI target. It's best to have an iSCSI target so you can vMotion or clone VMs in emergency cases.

I have a small budget (as far as SANs go) of $1,500 shipped.

I had a spare Intel dual-core 2.0 GHz CPU (socket 775) and some DDR2 RAM (4GB total). I purchased a Supermicro 1U SYS-5015B-MTB barebones system and four 2TB Western Digital Green drives.

First I tried Server 2008 and sharing out an NFS share. VMware didn't communicate well with Windows NFS, so I scrapped that and tried OpenFiler.
I got OpenFiler up and running, but VMware 4.0+ had some performance issues with it as well, so I decided to try FreeNAS.

FreeNAS 8 beta at the time didn't have iSCSI support yet, so I installed FreeNAS 7. I'm using XFS on RAID1 and have two shares off the NAS at the moment. FreeNAS doesn't have a very robust LUN setup (unlike OpenFiler), so I'll be changing this setup later.

FreeNAS is performing very well in its current setup. I have run some test VMs off the NAS/SAN via iSCSI and it's quite a bit slower than my main HP SAN... However, it's not meant to be fast; it's meant to be a backup destination.

I'm very happy with the outcome; however, I plan on adding a PERC 5i card (bought off eBay for $60 shipped) so that I can set up a RAID 5 array across all 4 drives, thus increasing performance. RAID 5 isn't really slower than RAID10 when you use a hardware RAID card, and you get more usable space out of it.
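
For what it's worth, here's a quick sanity check of the usable-space difference across the four 2TB drives. This is a rough sketch only; it ignores filesystem overhead and the usual TB vs TiB shrinkage.

Code:
# Usable capacity of 4 x 2TB drives under RAID10 vs RAID5.
# Rough numbers only -- ignores filesystem overhead and TB/TiB differences.
drives = 4
drive_tb = 2.0

raid10_usable = drives * drive_tb / 2      # mirrored pairs: half the raw space
raid5_usable = (drives - 1) * drive_tb     # one drive's worth lost to parity

print(f"RAID10: {raid10_usable:.0f} TB usable")   # RAID10: 4 TB usable
print(f"RAID5:  {raid5_usable:.0f} TB usable")    # RAID5:  6 TB usable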

For VMware storage it's a tantalizing option; however, I would NEVER use a free, open-source platform to host my LIVE virtual machines. If you upgrade something and then your storage stops working, you're in a heap of trouble with no professional support to turn to. I'll keep my LIVE VM storage on HP, Dell, or even QNAP, but BACKUP storage is something I'm willing to play with to save a little money.

I learned that iSCSI support is a PAIN to deal with, and an expensive feature to find in off-the-shelf equipment. And I re-learned that there is no substitute for raw hardware RAID... ZFS is nice and all, but it's no substitute.
 
Not sure if you wanted to do any more testing, but you might check out Nexenta, specifically NexentaStor from Nexenta Systems. The community edition can be downloaded from their site. I tested it out a little while ago and was impressed with how well it worked. Performance was very good with SMB file transfers. I never did test iSCSI, but since Nexenta is based on OpenSolaris it should have good performance.

00Roush
 
I know this message is really old, but it came up 2nd on a Google search for VDR with NFS and iSCSI. The post contains several inaccuracies that may confuse others:

1) A hardware RAID controller does not make RAID5 perform about the same as RAID10. There are several reasons for this:

a) With only 4 x 7.2k SATA drives, the parity calculations are extremely minimal. A full software RAID solution would only result in a very nominal increase in CPU usage compared to a hardware card; the drives just can't handle enough I/O requests per second to generate any significant CPU load. Other than CPU load, performance should be nearly identical between a hardware card and software RAID. For this level of disk I/O, the money would be better spent elsewhere.

b) It completely ignores RAID write penalties. Four drives in RAID10 have a write penalty of 2: Disk A mirrors to Disk C and Disk B mirrors to Disk D, so each logical write generates 2 IOPS on the disks. With 4 drives, that means you can expect write performance on the level of 2 disks (4 disks / 2), minus some overhead, while read requests (if implemented properly) can utilize all 4 disks. RAID5, however, has a write penalty of 4: writing to disk A means reading from disks B and C, calculating the parity, then writing the parity to disk D. That's 4 IOPS per write compared to just 2 for RAID10, so with 4 drives you only get the write performance of 1 disk (4 disks / 4). (There's a quick back-of-the-envelope calculation after the example at the end of this point.)

That's half the write IOPS of the RAID10 solution. Backups are generally close to 50% writes, compared to around 20% writes in many common workloads. RAID5, while giving extra capacity, is going to be considerably slower as a backup target than RAID10, and local backups are most likely going to be bound by available disk I/O. This doesn't mean you should always go with RAID10 for backups... it just means you need to evaluate the performance and capacity needs for the specific application. Having many slow backups (with normal restore times) is better than only a few fast ones; but if "slow" means the backups can't complete reliably, then a few working backups are better than many failed ones.

For example:
A company wants weekly backups of their Exchange server. If they go with a RAID5 setup, weekly full backups take 48 hours to run and they can store 6 weeks of data. With a RAID10 setup, weekly full backups take 24 hours, but they can only store 4 weeks of data. Which is better? The answer is that it depends on the business. If the business needs 6 weeks of data for compliance reasons, then RAID5 is better. If the business needs to ensure they don't lose more than 24 hours of data, then RAID10 is the way to go. If they need both, then they need to shell out more money for more or faster drives. :)
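
To put rough numbers on the write-penalty math above, here's a quick back-of-the-envelope sketch. The ~75 random IOPS per 7.2k SATA drive and the 50% write mix are assumed ballpark figures for this class of hardware, not measurements from the box in this thread.

Code:
# Effective IOPS for 4 x 7.2k SATA drives under RAID10 vs RAID5,
# using the write-penalty argument above. The per-drive IOPS figure
# is an assumed ballpark, not a measured value.
DRIVES = 4
IOPS_PER_DRIVE = 75                      # rough figure for a 7.2k SATA disk
RAW_IOPS = DRIVES * IOPS_PER_DRIVE       # 300 total back-end IOPS

WRITE_PENALTY = {"RAID10": 2, "RAID5": 4}

def effective_iops(level, write_fraction):
    """Front-end IOPS once each write is multiplied by the RAID write penalty."""
    penalty = WRITE_PENALTY[level]
    return RAW_IOPS / ((1 - write_fraction) + write_fraction * penalty)

for level in ("RAID10", "RAID5"):
    print(f"{level}: {RAW_IOPS / WRITE_PENALTY[level]:.0f} write-only IOPS, "
          f"{effective_iops(level, 0.5):.0f} IOPS at a 50% write mix")

# RAID10: 150 write-only IOPS, 200 IOPS at a 50% write mix
# RAID5: 75 write-only IOPS, 120 IOPS at a 50% write mix

Either way the RAID5 layout ends up well behind RAID10 on writes (half the write-only throughput, roughly 60% at a 50/50 mix), which is the point being made above.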

2) Hardware RAID versus software RAID, for moderate levels of disk I/O, should be a decision based on the specific application and environment. ZFS has advantages over traditional RAID configurations; it also has drawbacks (e.g., no dedicated BBU). The CPU overhead of processing the RAID array has such a minimal impact on the system that it doesn't make sense to treat it as a significant factor when picking your solution. That said, very high I/O systems (e.g., multiple SSDs in an array) can be a different story: the CPU usage needed to maintain full IOPS is considerable, so if the host's CPU is also being taxed, then IOPS and/or compute performance will suffer.
 