ZFS Testing


Hi eon,

I did some surfing through the OpenSolaris HCL and it looks like I'll be wasting my time trying to get my test NAS to work with eon. It supports the e1000 just fine, but there is not even the slightest mention of the Intel ICH10 AHCI SATA adapter - or any AHCI compatible adapter at all, if I recall.

However, my "alternative" test machine with the SiI3132 SATA controller should be supported OK. It has the Nvidia MCP55 nForce Ethernet adapter on it, and that's on the list as well. I guess I should proceed with that one. I'll re-run my full Linux baseline on it and then try eon.
 
in defense of OpenSolaris

I will be building a CIFS server based on OpenSolaris with a Supermicro SATA controller (AOC-USAS-S8I). This controller is based on the Intel IOP348 chipset which is supported by OpenSolaris.

SuperMicro also makes a card based on the LSI 1068E chipset which is well supported by OpenSolaris/Solaris. Sun uses this chipset in their own HBA cards.

The SIL 3114/3124 chipsets are supposedly supported in the latest version of OpenSolaris but there are varying reports on performance.

The plan is to configure the 8 data drives (mix of WD 500GB drives) in various ZFS configurations (7+1 raidz, 6+2 raidz2, 3+1|3+1 raidz, 4+1|2+1 raidz, 2|2|2|2 mirror) to find the best capacity/performance balance. The machine will be used as a media server for movie files so the load mix which will be most important to me is large sequential reads and writes.

I'm choosing ZFS on Solaris since the combo has some really good management tools for backups, snapshots, and dead drive replacement. I've tried different Linux and FreeBSD based NAS solutions with previous media servers and there's always been something missing for me.

John
 
Please share the rest of your specs

Hi John,

I've been inspired to try ZFS by this thread but need some handholding on my way.

I'm a 3D animator and, for the first time in my career, I'm soon to become responsible for the hardware needs of a small studio. We will launch our business with about four PCs running dual-boot WinXP64 and Fedora Linux, and also an Intel Mac.

I've been researching storage here on SmallNetBuilder and ZFS seems the way to go but, when seasoned pros like Corndog are sceptical of OpenSolaris hardware compatibility, I need to follow in the exact footsteps of someone more courageous.

If you could share the exact specs of your ZFS build, and include simple descriptions to go with the acronyms, that would be great.

For instance, I'm unclear on what part you are referring to when you say

SIL 3114/3124

Is that a network card?

Sorry to request baby talk but I would really appreciate it.

Good luck with your pioneering

Nigel
Sydney Australia
 
Hey Nigel,

Misery loves company. Nice to have another brave masochist joining us on this Solaris/ZFS misadventure. I have to admit that Solaris humiliates me. I'm somewhat of a major expert in Linux - using it since 1993 - and have also done sysadmin and architecture engineering on HP-UX and IBM AIX. But once I get OpenSolaris up and running, I feel like a complete idiot trying to get everything working. I guess it's time to RTFM again and learn the admin tools.

Anyways, I can clear one item up quickly: the SiI3132 is the Silicon Image SATA RAID controller that is on the motherboard of my alt test system. It runs the eSATA ports on the back. The motherboard, by the way, is an ASUS Striker Extreme. It's getting a little older now, but that means Solaris might just run on it.

I haven't reported in on this in a while, because work has gotten insane, but when May finally rolls around I should have some updates.

Have at it!
 
Demystifying the terminology

Thanks for the encouragement.

Nice to know I'm not alone.

My plan is to firmly establish our new business equipment needs using the KISS principle (Keep It Simple Stupid). We'll get a couple of 1TB drives: one inside a workstation, shared as our main work drive, then incrementally backed up to the other, external drive using Microsoft's SyncToy.

http://en.wikipedia.org/wiki/SyncToy

We'll soon outgrow 1TB though, so once I've narrowed down the most cost-effective NAS for our needs we'll expand onto that. I'm tempted by the cloud backup feature of the new Netgear ReadyNAS NVX, but I'll wait for Tim's review on that.

I'm still in the dark as to what we will need in terms of a Gigabit switch; I'll have to re-read Dennis Wood's article...

TS509, Link Aggregation, 3COM 2916 = good

http://forums.smallnetbuilder.com/showthread.php?t=463&highlight=dennis

In the meantime I'm trying to get my head around OpenSolaris/ZFS and identify some reliable hardware that I can afford to build at home and test thoroughly before implementing that system at work. In regard to this I've been trying to digest this very informative ZFS thread...

http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/

One of the things I'm still puzzled about is the difference between CIFS / NFS / SMB etc.: what exactly they do, which I need, and whether it's a choice of using one or another or combining them.

As I mentioned before, we will have a mixture of PCs running XP and Linux and probably a Mac.

Cheers

3DHack
 
Hi 3DHack,

Although I think the KISS principle might result in you not using ZFS or OpenSolaris at all (because other options are so much easier) we'll leave that thought to one side.

To your questions:

First, about SyncToy. I've used this tool off and on ever since it came out. As a production backup tool, I'd recommend that you NOT use it. It simply isn't reliable enough. I'd point you to robocopy instead as a more reliable Windows backup/copy automation tool. It's built into Windows now.
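For illustration, a typical robocopy mirror-style backup along the lines discussed above might look like this (the paths and log file location are hypothetical examples, not from the original post):

```shell
:: Hedged sketch (Windows command line; D:\work and E:\backup are made-up paths).
:: /MIR mirrors the source tree (including deletions), /R and /W limit retry
:: count and wait time for locked files, /LOG+ appends to a reviewable log.
robocopy D:\work E:\backup /MIR /R:2 /W:5 /LOG+:C:\backup.log
```

Run dry with /L first if you want to preview what would be copied or deleted.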

About the network protocols CIFS, NFS, SMB - you'll find great explanations about these on wikipedia, but here's the quickie version:

NFS: Network File System - released by Sun Microsystems years ago - it is the default way for UNIX and Linux systems to connect to network file servers. It is very lean and fast, but is currently "stateless" - i.e. on an NFS server, you can't get a display of connected clients and all the files they have open like you can on a Windows server, simply because the NFS server "doesn't know".

SMB: Server Message Block (you'll sometimes see it misexpanded as "Service Message Block"). This is Microsoft's and IBM's network protocol that covers a big suite of stuff - domain logons, network printing, and network file server access - using the NetBIOS protocol as an underlying layer. It comes from Microsoft and IBM's work way back on LAN Manager, OS/2, and Windows NT, and is currently the major network protocol used in Windows to "map drives" and "share" anything, including printers, faxes, modems, and disks.

CIFS: Common Internet File System. This is a rather political attempt by Microsoft in the mid to late 90's to rename the "mapping drives" part of SMB to something that sounds like they've "taken over the world". At the time, you could have made the argument that FTP or NFS were just as valid contenders as SMB for that name, especially since FTP works way better "over the internet" than either SMB or NFS.

A note about SMB and CIFS. The short descriptions I've made above give the proper historical meaning of SMB and CIFS, but there are two other current differences that you should note:

1. Linux kernel drivers. Even though Linux is more naturally geared toward NFS because of its UNIX heritage, it has great support for SMB/CIFS and can connect to Windows servers and NASes quite well. However, there are two different versions of the software driver in the Linux kernel that are used to do this. The older version was called smbfs and the newer version is called cifs. They do the same thing, but obviously all the best optimization and bug fixing work is done on the newer cifs version.

2. OpenSolaris server support. If you've used Linux at all and are any kind of a fan of open source, you've definitely heard of the SAMBA project - one of the most brilliant open source projects next to the Linux kernel itself. It is a reverse-engineered implementation of Microsoft's SMB/CIFS protocol on Linux, that makes Linux act like a Windows file/print server and Domain controller. This software now works on much more than just Linux. It also runs on Opensolaris, among other things (such as FreeBSD). However, Sun also has their own implementation of the CIFS protocol that they have available in Opensolaris as an option. You can choose between Sun CIFS and SAMBA as your SMB server. (At this point I'm relaying what I've found but I'm no expert here. eon, please cut in if I'm off in the weeds.) This gives you an interesting choice that you don't have on Linux, and it's something I'm interested in trying. For instance, eon (a super-leaned up version of OpenSolaris that gives you a quick bootable D.I.Y. NAS) comes in two flavours - the "SMB" flavour that runs SAMBA, and the "CIFS" flavour that runs Sun CIFS instead.
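As a concrete illustration of point 1, connecting a Linux box to a Windows/NAS share with the in-kernel cifs driver looks something like this (the server name, share, mount point, and credentials are made-up examples):

```shell
# Hedged sketch: mount an SMB/CIFS share using the newer in-kernel cifs driver.
# //nas/media, /mnt/media, and the credentials are hypothetical.
mount -t cifs //nas/media /mnt/media -o username=john,password=secret

# On older kernels the legacy driver was used the same way:
#   mount -t smbfs //nas/media /mnt/media -o username=john
```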

There's a lot more on this, including the rich history of these protocols, available on wikipedia, which I heartily recommend you read. Wikipedia is not always perfect, but its coverage of network protocols is more than adequate for any example I've ever checked.

Hope this helps.
 
Apprentice's Initiation

Thanks heaps Corndog,

I hadn't heard of robocopy. I'll see if I can find it on my XP box, and I'll keep reading and re-reading your post until the info sinks in.

I guess the KISS principle will work as long as our projects don't grow larger than the individual disks we can afford to keep buying.

With 2TB drives out now that may be a little while.

Cheers

3DHack
 
ZFS the path to ...

Hi 3D Hack,

Corndog explained it perfectly. For OpenSolaris there are 2 methods of implementing smb/cifs or Windows sharing:

1. Samba, the open source project. Sharing is controlled via a smb.conf file, which you can highly customize but which will require a slight learning curve. The good thing is there are numerous config examples around the web. Samba runs as an application (layer 7 OSI), so it has to work up and down the OSI layers when doing things. Sun felt they could squeeze some performance out of a kernel-level implementation, hence method 2.

2. CIFS, a kernel-level driver implementation of smb/cifs tightly integrated with ZFS. Sharing is controlled by zfs commands and a few others. The command set is reasonable, and again there's lots of help available:
general forum: http://opensolaris.org/os/discussions/
cifs specific: http://www.opensolaris.org/jive/forum.jspa?forumID=214
It is a work in progress and constantly being improved. Personally I give Samba the edge on throughput.
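To illustrate method 2, sharing over the in-kernel CIFS server is driven by zfs properties rather than a config file (the pool and filesystem names here are made-up examples):

```shell
# Hedged sketch: enable the Sun CIFS server and share a ZFS filesystem.
svcadm enable -r smb/server             # start the kernel CIFS service
zfs create -o sharesmb=on tank/media    # new filesystem, shared immediately
zfs set sharesmb=name=media tank/media  # optional: choose the share name
```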

I would highly recommend ZFS, but I'm biased; there are numerous reasons. Here's one: you can build pools that grow with your needs.
Building a pool: zpool create poolname raidz disk1 disk2 disk3. Let's say this is getting full. I simply add 3 more disks equal in size (disk1=disk2=disk3): zpool add poolname raidz disk4 disk5 disk6. If I outgrow this, I repeat.

If a disk dies it's easy to replace, and you can make unlimited snapshots, which is like Time Machine on the Mac but a little more efficient.
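The grow/replace/snapshot workflow described above can be sketched as follows (the pool name and Solaris-style disk names are hypothetical):

```shell
# Hedged sketch of the pool lifecycle described above.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0  # initial 3-disk raidz pool
zpool add tank raidz c0t3d0 c0t4d0 c0t5d0     # grow: stripe in a second raidz vdev
zpool replace tank c0t1d0 c0t6d0              # swap a dead disk for a spare
zfs snapshot tank@before-reorg                # cheap point-in-time snapshot
```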

For hardware recommendations there are a few examples at http://eonstorage.blogspot.com (on the right, in the links section) based on the D945GCLF2, aka Little Falls 2: http://www.newegg.com/Product/Product.aspx?Item=N82E16813121359. I think this board has a very in-depth review here on SmallNetBuilder.

It all comes down to off-the-shelf pre-packaged versus being willing to grind a little and come up with something more fitting to your needs. There are good articles here on making the plunge. Just don't look back once you do. The pre-packaged solutions are good, but honestly no match for what I could custom build.

Hope this helps you on your storage quest.
 
Ok all,

As this thread is actually called "ZFS Testing" I thought I'd post some actual numbers that include ZFS.

My test server is a techstation with an ASUS Striker Extreme motherboard. The CPU is a dual-core Pentium D 3.2GHz, with 4GB of RAM and a pair of WD Raptor drives. The client is an Intel 975XBX with a Core2Duo 6600, also with 4GB of RAM, running Vista SP1.

For comparison, I have also tested against a Netgear ReadyNAS Pro (6 Seagate 1TB drives in X-RAID2) and a QNAP TS-639Pro (6 Seagate 1.5TB drives in RAID6).

On the Test server I first tried Gentoo Linux x64 with the Raptors in a RAID0 Array, formatted with ext3 (looking back, I should have used RAID1 - I'll redo that in a while).

Then I reformatted it and installed OpenSolaris Express b112 x64 using native mirrored ZFS. I activated webmin and used that to set up SAMBA (I'm a bit of a noob at Solaris).

I generated a 30G file of random numbers on Linux using the following command:

dd if=/dev/urandom of=testfile bs=1024000 count=30000
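As a side note, the size this produces can be sanity-checked at a much smaller scale: bs=1024000 times count=30000 is 30,720,000,000 bytes, which is where the "30G" comes from. A scaled-down version of the same command:

```shell
# Scaled-down sketch of the test-file generation: 10 blocks of 1024 bytes.
dd if=/dev/urandom of=testfile bs=1024 count=10 2>/dev/null
wc -c < testfile   # 10240 bytes
```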

Then I copied the file to and from Vista SP1, rebooting after each copy to make sure the cache was clear. I watched the numbers that Vista was reporting in its "Copying..." dialog box, and also kept an eye on my ProCurve Webadmin and MRTG graphs to corroborate Vista's reported numbers.

The numbers are as follows:


Write   Read     Server
------  -------  --------------------
42MB/s  69MB/s   ReadyNAS Pro
56MB/s  95MB/s   QNAP TS-639Pro
45MB/s  104MB/s  Gentoo Linux x64 ext3 RAID0 on Techstation
73MB/s  100MB/s  OpenSolaris Express zfs mirror on Techstation


So there are some interesting things that I notice from this. First (slightly off-topic) is that my QNAP seems to be outperforming my NetGear. This is the same on every test I do with these two NASes. One thing is that they are both about half-full (2.5TB of all kinds of files on the Netgear and about 3TB of similar mix of files on the QNAP). I noticed that when I got these NASes the Netgear was VERY fast when it was fresh out of the box and empty, but it seems to be slowing down as it fills up. The QNAP doesn't lose speed half so bad. I've always seen this with ReadyNASes - they slow down a lot as they fill up.

But more importantly, take a look at how well ZFS performed on its native Solaris! Much better than 64bit Linux. I'm going to have to try again with 32bit Linux for comparison - in the past I've found it to be much faster than 64bit. (If you get shell access to any NAS these days, such as the QNAP or ReadyNAS, they're all 32bit - I think there's a reason.) I'll report back on that, too.
 
The performance of OpenSolaris isn't a big surprise for me.

The German magazine c't tested it last year with an Athlon64 X2 and 2GB RAM, comparing it with FreeNAS, Openfiler, WHS, etc.
OpenSolaris and WHS achieved the best results (86MB/s write & 111MB/s read).

In OpenSolaris you should manually install (via the package manager) the Sun SMB server if it isn't installed already
(and activate it with "svcadm enable -r smb/server").
The packages are called SUNWsmbs & SUNWsmbskr.
In OpenSolaris 2008.11 the Linux Samba server is installed by default, which does not show the same performance...

There is a "Solaris CIFS Administration Guide" available from SUN.
 
To be complete, here's the way it looks with 32bit Linux and RAID1 - just finished the test.


Write   Read     Server
------  -------  --------------------
42MB/s  69MB/s   ReadyNAS Pro
56MB/s  95MB/s   QNAP TS-639Pro
45MB/s  104MB/s  Gentoo Linux x64 ext3 RAID0 on Techstation
61MB/s  78MB/s   Gentoo Linux i686 ext3 RAID1 on Techstation
73MB/s  100MB/s  OpenSolaris Express zfs mirror on Techstation


I really have to say, Linux is falling behind! Although it appears with some effort you can tweak it, as evidenced by the QNAP numbers.

I wonder if the Solaris CDDL is amenable to using it in future consumer NASes? Might be useful in gaining a performance edge?

I'm also going to have to compare Windows 2008 Server. My earlier tests with this OS were nothing short of phenomenal.

More to come...
 
Hey corndog,


I wonder if the Solaris CDDL is amenable to using it in future consumer NASes? Might be useful in gaining a performance edge?

Teamed with compatible hardware, EON will fit the bill.

Glad you finally got to taste the zfs experience.

Roamer,

Are you saying they installed OpenSolaris on one of the HP EX485/7 servers?
 
The plan is to configure the 8 data drives (mix of WD 500GB drives) in various ZFS configurations (7+1 raidz, 6+2 raidz2, 3+1|3+1 raidz, 4+1|2+1 raidz, 2|2|2|2 mirror) to find the best capacity/performance balance. The machine will be used as a media server for movie files so the load mix which will be most important to me is large sequential reads and writes.

The best capacity/performance balance using all disks will most likely be one raidz pool with 2 vdevs of 4 drives each (3+1 | 3+1).
The best performer would be mirrored raidz, 4 disks each.
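For reference, here is a back-of-the-envelope usable-capacity comparison of the layouts in the quoted post with eight 500GB drives. This only counts data disks versus parity/mirror disks; ZFS metadata and formatting overhead are ignored:

```shell
# Hedged sketch: usable capacity = (data disks) x (drive size), parity excluded.
DRIVE=500  # GB per drive
echo "7+1 raidz       : $(( 7 * DRIVE ))GB"        # 1 parity disk
echo "6+2 raidz2      : $(( 6 * DRIVE ))GB"        # 2 parity disks
echo "3+1|3+1 raidz   : $(( (3 + 3) * DRIVE ))GB"  # 1 parity per vdev
echo "4+1|2+1 raidz   : $(( (4 + 2) * DRIVE ))GB"  # 1 parity per vdev
echo "2|2|2|2 mirror  : $(( 4 * DRIVE ))GB"        # half of each pair
```

So the 7+1 raidz wins on raw capacity (3500GB), while the mirror layout gives up the most space (2000GB) in exchange for read performance and redundancy.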
 
There is a nice FLOSS Weekly podcast with a ZFS developer, hosted by Leo Laporte and Randal Schwartz. It is an hour-long discussion about what ZFS has to offer over other file systems.
http://www.twit.tv/floss58

It really is the future, as it is an intelligent file system capable, if configured correctly, of optimizing security, integrity, parity, capacity/scale and yes ... 'performance'. If my memory serves me, they said that if the FS spanned an array of drives, the FS was smart enough to write data to the fastest writable sectors on the fastest drives and then move that data, when idle, to the fastest readable sectors on the fastest drives.

It is likely the ZFS FUSE driver causing the very slow performance, as in the case of the NTFS-3G driver. I think if you used OpenSolaris you might see better results, but I'm not suggesting anyone benchmark it.
 
Yep,

I definitely saw this. In my limited benchmarking that I did on OpenSolaris ZFS I found it to be much faster than Linux and marginally faster than Windows 2008 Server - i.e. the fastest I had seen anywhere.
 
If you ever get the itch to do some more benchmarking, I'd be interested to see the results on your Linux platform using something other than ext3. Ext3 just doesn't do so well with large files and it's very wasteful with large partitions; it's just not a very modern FS. I moved to xfs a few years ago and couldn't be happier. Check out my thread here to see what you can get out of Linux with Samba and modern but modest hardware. All my data drives on the Linux box I mention in that thread are using xfs. I'd also be quite interested in seeing what you get testing ext4.

http://forums.smallnetbuilder.com/showthread.php?t=1958
 
Sure thing.

However, let me check my files. I used to always test ext2, ext3, reiserfs, and xfs in every filesystem comparison I did. But in my testing, ext3 always came out just a little faster than reiser and xfs, so I kept on doing ext3 only and considered it representative of "the best of Linux".

I will add xfs back into my testing and proceed with it.
 