
DIY: FreeNAS zfs/raidz 3TB storage server report


Secondly, if you do choose ZFS, I strongly do not recommend FreeNAS. I tested FreeNAS on the same hardware with a RAID0 setup. The throughput is just half that of an out-of-the-box Ubuntu server. For such a highly specialized piece of software it is disappointing, don't you agree?

No, I don't. Or, I should say, I would only agree if performance were the sole/primary concern, in which case one would not choose ZFS in the first place.

However, if ZFS is the primary/sole concern (or, at least, performance is not), then you are comparing apples to oranges by comparing the performance of ZFS on FreeNAS to RAID0 on Ubuntu.

An appropriate performance comparison – when choosing ZFS – would be to compare a tuned FreeNAS solution to a tuned OpenSolaris solution, since those are comparable. Perhaps even Mac OS X (though I am not 100% positive it's a kernel filesystem there, I believe it is). You can't even do a similar comparison with ZFS on Linux, since OSol/FreeBSD implement ZFS in the kernel, but Linux does not.

And I disagree with your statement about ZFS stability. It is pretty darn stable at this point. Yes, perhaps it is more likely there is a bug in ZFS than in other filesystems, but if we all adhered to that logic and never tried new things, we would never have moved beyond FAT and other more primitive filesystems. It is worthwhile to make sure that, whatever you are doing, you have a backup/safety net to cover such possible failures. I do.

hua_qiu, I appreciate that you like speed. I do as well, and I would certainly like to receive advice on making my NAS faster and share my own experiences of what I have done... but within the requirements of my project. Your repeated suggestion to move to Ubuntu does not help in that respect.

I'd certainly enjoy reading about your experiences with Ubuntu and how you've managed to get the high speeds that you have (in another thread). I don't doubt I would learn a thing or two, but simply telling us we're wrong to want ZFS and/or FreeNAS because your NAS blows it out of the water does nothing. I completely recognize your NAS is awesomely fast. But your setup does little to accomplish the goals I set out to achieve.
 
Secondly, if you do choose ZFS, I strongly do not recommend FreeNAS. I tested FreeNAS on the same hardware with a RAID0 setup. The throughput is just half that of an out-of-the-box Ubuntu server. For such a highly specialized piece of software it is disappointing, don't you agree?

First of all, I have to apologize if I made any reader think my posts in this thread were to show off how fast my NAS is. On the other hand, I want to clarify that my observation of slow performance on FreeNAS with a RAID0 setup was on an XFS file system. I am very sorry that I left that out of my original post, as you all know it is my favorite file system. I did do a similar test on RAIDZ, but I wouldn't compare that with Linux, as it would be comparing apples with oranges. My comment on FreeNAS has nothing to do with ZFS; it is my genuine disappointment with a would-be ideal NAS solution. BTW, I did apply some tuning, e.g. tried different send/receive buffer sizes, enabled large SMB requests, etc., and tested with and without those settings. Interestingly, a larger buffer size doesn't necessarily mean better CIFS performance.

Secondly, what I really wanted to say with my comment above is to try OpenSolaris, as my observations lead me to believe FreeNAS doesn't achieve the hardware's potential. I believe that is within your project requirements, am I right? You stressed that ZFS is non-negotiable.

Third, my last reply also expressed my opinion of home-brew NAS in a broader sense. Not everyone can afford/run a proper backup solution. I did look into a tape backup solution and soon realized it would cost a lot more than the total cost of my NAS. My current not-so-good solution is to selectively back up the most important data to an external hard drive.

Last, and most important, a backup solution may not save you from data corruption caused by the file system itself. You just end up with corrupted data on your backup media. Here is a real-life case involving a mature file system:

A company had a mirrored data center in a different city with a FlashCopy backup solution, and tried to grow a JFS volume online, with the help of a qualified IBM consultant, without shutting down the DB2 server. Corrupted data blocks were later found on disk and on the backup media. The company lost one month's worth of data, which had to be recovered manually.

I do not object to trying new things; however, I do want to express my concern about the risk, so that NAS DIYers can decide whether to try out new technology with their own precious data.

As for your goal, I am totally with you, as long as I do not put any of my data on it. That is why I just downloaded the OSol installation disc.
 
On the other hand, I want to clarify that my observation of slow performance on FreeNAS with a RAID0 setup was on an XFS file system.

Thanks for clarifying; I understand your point better now, and I apologize for being harsh in my reaction.

Secondly, what I really wanted to say with my comment above is to try OpenSolaris, as my observations lead me to believe FreeNAS doesn't achieve the hardware's potential. I believe that is within your project requirements, am I right? You stressed that ZFS is non-negotiable.

Yes, I do intend to try OSol. I don't have a way to conveniently test the ISO which I downloaded, but I did find instructions on getting a USB boot that may work. Or I just have to shell out a bit more $$$ (but not much, really) to make the system bootable for OSol.


As for your goal, I am totally with you, as long as I do not put any of my data on it. That is why I just downloaded the OSol installation disc.

None of my data being stored on the NAS is so precious that it can't be recovered in another manner. Certainly, it wouldn't be entirely convenient if the NAS or ZFS failed, but I could deal with it; there would be no permanent loss. And ZFS on OSol should have fewer bugs and less chance of corruption than other filesystems; look at some of the papers and you'll see that ZFS is detecting errors that other filesystems don't notice.
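
If you want to see that error detection in action, a periodic scrub makes it visible. A minimal sketch, assuming a pool named "tank" (not my actual pool layout):

zpool scrub tank        # read and verify every block against its checksum
zpool status -v tank    # per-device read/write/checksum error counters

Anything ZFS finds and repairs from redundancy shows up in those counters, whereas most other filesystems would simply hand back the bad data.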
 
That said, all it takes to migrate the ZFS pool is to run 'zpool export tank' in FreeNAS, and run 'zpool import tank' in OpenSolaris (where "tank" is the name of the pool). That's it.
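
For anyone following along, the sequence looks roughly like this (pool name "tank" as above; the -f flag is only needed if the pool wasn't exported cleanly):

# On FreeNAS (FreeBSD), before moving the disks:
zpool export tank

# On OpenSolaris, with the disks attached:
zpool import            # lists pools available for import
zpool import tank       # or: zpool import -f tank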

Be careful ... FreeNAS & OSol are using different versions of ZFS.
Because OSol is using a newer version (v13?), you can easily import a pool that you have created with FreeNAS (v6?), but if the zfs pool is upgraded to the higher version you can't import it with FreeNAS anymore.

I made some tests with FreeNAS/OSol on a D945GCLF2 board and the performance of FreeNAS was really disappointing (~22 MByte/s), independent of the filesystem used.
OSol was much better simply out of the box.
 
Be careful ... FreeNAS & OSol are using different versions of ZFS.
Because OSol is using a newer version (v13?), you can easily import a pool that you have created with FreeNAS (v6?), but if the zfs pool is upgraded to the higher version you can't import it with FreeNAS anymore.

Yes, good and important point. I think I realized that at some point, after I had originally written that statement.
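
In case it helps anyone else, you can check a pool's on-disk version before deciding whether to upgrade; a quick sketch (pool name is illustrative):

zpool get version tank   # version of this particular pool (e.g. 6 on FreeNAS, 13+ on OSol)
zpool upgrade -v         # versions this host's ZFS supports
# only run 'zpool upgrade tank' once you're sure you won't need to go back to FreeNAS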

I made some tests with FreeNAS/OSol on a D945GCLF2 board and the performance of FreeNAS was really disappointing (~22 MByte/s), independent of the filesystem used.
OSol was much better simply out of the box.

Thanks for the info! I really do need to try it soon.
 
I hope you guys are setting the send and receive buffer sizes for Samba on FreeNAS to at least 65536. The default buffer sizes cripple Samba performance. At least in my experience.
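
Those map to Samba socket options; if you're editing smb.conf by hand rather than using the FreeNAS WebGUI fields, it would look something like this (values are in bytes, and just an example):

[global]
    # 64 KiB TCP send/receive buffers; the defaults are much smaller
    socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536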

00Roush
 
Finally, I got OSol installed. Yes, OSol is much better.

Six 1TB drives in a RAIDZ configuration.

Bonnie++ file system performance test:
Sequential read: 450 MByte/s
Sequential write: 97 MByte/s

The write performance is a bit disappointing, but fast enough, considering all the on-disk data verification overhead.

Untuned Samba tested with iozone and a 4 GiB+ file:
write: 36 MByte/s
read: 52 MByte/s

A file copy from the NAS gave about 65 MByte/s for a 10 GiB file.

Overall it gives reasonable performance compared with the best throughput I can get out of the hardware.
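
For anyone who wants to reproduce the numbers, the invocations were roughly along these lines (paths, sizes and the test user are illustrative; bonnie++ runs locally on the NAS, iozone from a client against the mounted share):

# local filesystem test on the NAS; -s should be at least 2x RAM to defeat caching
bonnie++ -d /tank/bench -s 16384 -u nobody

# sequential write (-i 0) and read (-i 1) over CIFS from a client
iozone -i 0 -i 1 -r 64k -s 4g -f /mnt/nas/iozone.tmp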

But the boot time of OSol is very, very long, considering that lots of Linux distros nowadays can boot in under 30s (even on my 4-year-old desktop). My NAS won't be running 24x7, but will be woken up by a magic packet via LAN when needed and shut down when idle for a period of time.
 
I played with different ZFS layouts yesterday with my six 1TB drives:

a. Raidz1
b. Raidz2
c. dynamic stripe over 3 pairs of mirrors
d. dynamic stripe over 6 individual drives

a, b, and c have similar write/read performance: 140/300+ MByte/s.
d has the best write/read performance: 224/550 MByte/s.
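
For reference, those four layouts correspond to zpool create commands roughly like the ones below (the Solaris device names c7t0d0 ... c7t5d0 are just examples):

# a. single-parity RAIDZ across all six drives
zpool create tank raidz c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0

# b. double-parity RAIDZ2
zpool create tank raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0

# c. dynamic stripe over three mirrored pairs
zpool create tank mirror c7t0d0 c7t1d0 mirror c7t2d0 c7t3d0 mirror c7t4d0 c7t5d0

# d. dynamic stripe over six individual drives (no redundancy)
zpool create tank c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0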

Another interesting observation: copying a large file (10 GiB) within the same ZFS filesystem using:

time cp verybigfile verybigfile2

finished in just about 25 seconds. I have a feeling this must have something to do with ZFS's COW feature.

In addition, I tried to copy verybigfile from two Windows XP clients at the same time. It yielded 100-110 MByte/s total throughput, close to the limit of gigabit LAN, which is very good.

My overall impression is that OSol is a very solid system and a good choice for a server, but with limited packages in the repository (compared with major Linux distros). The community support isn't as good as Linux's; it is sometimes hard to find solutions.
 
Thanks for the comments, 00Roush and hua_qiu, especially some of your OSol/ZFS numbers! I haven't gotten to start tuning my setup yet, as I am preparing a move in a couple weeks. Hopefully I'll have some time after that to try OSol.
 
I hope you guys are setting the send and receive buffer sizes for Samba on FreeNAS to at least 65536. The default buffer sizes cripple Samba performance. At least in my experience.
00Roush

I just did some quick testing on my FreeNAS setup with this in mind. I tried doing a 1GB file copy multiple times with buffer sizes of 8K, 16K (default), 32K and 64K (your suggestion). Interestingly, 64K is not an automatic win and (in my case) was actually the worst performer. Several trials with each buffer size gave me these results:
  • 8K buffers: 190.7 Mb/s
  • 16K buffers: 262.3 Mb/s
  • 32K buffers: 278.3 Mb/s
  • 64K buffers: 88.0 Mb/s

I now have my buffers set at 32K, which is where I'll probably leave it for now. I could do further testing around the 32K point, to find the peak, but I suspect 32K is probably pretty close to it.

One interesting bit is that the larger the buffers, the greater the jumping in the traffic graph... 8K buffers bounced about very little, while 64K bounced about a ton. I'm wondering if something behind these buffers is holding things up... so network transfer fills the buffer faster than it can be emptied.

There may be certain things about ZFS and/or the vm.kmem_size="1073741824" that I have set (which allows ZFS to use more RAM for its stuff). Experimenting with different kmem_size values in combination with the CIFS read/write buffers might be what is needed to find the optimal balance. Assuming that is the issue here and not other things...
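
For anyone who wants to experiment with the same thing, these are loader tunables (on FreeNAS they go into the loader.conf additions under System > Advanced, if I remember the page right). A sketch for a box with around 2 GB of RAM; the arc_max line is my assumption rather than something I've settled on:

# /boot/loader.conf
vm.kmem_size="1073741824"        # 1 GiB of kernel memory, the value mentioned above
vm.kmem_size_max="1073741824"
vfs.zfs.arc_max="536870912"      # optionally cap the ARC at 512 MiB (assumption)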

(Note, of course, this is all FreeNAS, not OpenSol; I will probably experiment with and/or move to OSol when I have more free time.)
 
I had similar results; a larger send/receive buffer size doesn't necessarily mean better overall performance. I actually got the best outcome with the default 8k settings.

With OSol I have to revise my initial impression. I found the GUI (GNOME) to be very unstable. It crashed many times without an apparent reason, and the worst part is that when it crashed it also brought down the whole system; I couldn't even log in via ssh or ping the box. The crashes happened once when I was resizing a window, once when I was logging out, once when I just moved the mouse, and once when I was typing. I found that disabling desktop effects reduces the chance of a crash, but doesn't eliminate them. So now I rarely touch the GUI; all operation is via ssh. This never happened to me on Linux as far as I remember; at worst the system would still respond to Ctrl+Alt+Backspace. It seems the GUI on OSol has a long way to go.

On the other hand, the core system seems reasonably OK. I stress-tested my NAS with 3 Samba clients overnight, copying a 20 GiB file non-stop. The combined throughput was about 100 MB/s. So far I have only noticed ssh crashing a couple of times.

At the same time, I am also testing Solaris 10 5/09. Hopefully the commercial version is more stable. The advantage of Solaris is that it can be installed without the GUI. However, the hardware support on Solaris is poor; I got a weird problem with my NIC. If all your hardware is on Solaris's HCL, I think it is a better choice than OSol.

Last, I recommend moving away from FreeNAS or FreeBSD-based ZFS implementations.

http://www.opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0

You can see from the discussion above, and many other posts, that even under Solaris ZFS still has bugs here and there and requires some memory fine-tuning (ZFS is quite memory hungry); I don't think all the ports are ready for production. Also, from what I read, a 64-bit system is recommended due to ZFS's memory-hungry nature.
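
For reference, on Solaris/OSol the usual memory knob is the ARC cap in /etc/system; something like the line below, which takes effect after a reboot. The 2 GiB value is only an example:

* /etc/system -- cap the ZFS ARC at 2 GiB (value is an example, size to your RAM)
set zfs:zfs_arc_max = 0x80000000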
 
I just did some quick testing on my FreeNAS setup with this in mind. I tried doing a 1GB file copy multiple times with buffer sizes of 8K, 16K (default), 32K and 64K (your suggestion). Interestingly, 64K is not an automatic win and (in my case) was actually the worst performer. Several trials with each buffer size gave me these results:
  • 8K buffers: 190.7 Mb/s
  • 16K buffers: 262.3 Mb/s
  • 32K buffers: 278.3 Mb/s
  • 64K buffers: 88.0 Mb/s

I now have my buffers set at 32K, which is where I'll probably leave it for now. I could do further testing around the 32K point, to find the peak, but I suspect 32K is probably pretty close to it.

One interesting bit is that the larger the buffers, the greater the jumping in the traffic graph... 8K buffers bounced about very little, while 64K bounced about a ton. I'm wondering if something behind these buffers is holding things up... so network transfer fills the buffer faster than it can be emptied.

There may be certain things about ZFS and/or the vm.kmem_size="1073741824" that I have set (which allows ZFS to use more RAM for its stuff). Experimenting with different kmem_size values in combination with the CIFS read/write buffers might be what is needed to find the optimal balance. Assuming that is the issue here and not other things...

(Note, of course, this is all FreeNAS, not OpenSol; I will probably experiment with and/or move to OSol when I have more free time.)

Have you tried the buffer changes using UFS as the filesystem? In my testing with FreeNAS using the native file system I have found 64k buffer sizes give the best performance. Here are some results from FreeNAS 0.69 for some tests I just ran:

8k buffers 18-24 MB/sec
16k buffers 36-46 MB/sec
32K buffers 57-80 MB/sec
64k buffers 85-90 MB/sec

The low side is write speed; the high side is read speed. This is using files ranging in size from 1.2 GB to 3.93 GB and Vista SP1 as the client.

Now, with ZFS I have experienced similar problems when increasing the buffer size, except I was using 64-bit EON as the OS. Actually, on the FreeNAS forums someone opened a thread about the same problem. Here is the link... http://sourceforge.net/apps/phpbb/freenas/viewtopic.php?f=97&t=2896 Glad I am not the only one this has happened to.

00Roush
 
Matt,

I continue to read this thread with great interest. I am quite the newbie when it comes to all of this stuff. I got my NAS box up and running with OpenSolaris snv_111b. I currently have four 1TB SATA II drives in mirrored pairs for a storage zpool and an old 160GB IDE drive for a root pool.

Being new to all of this, I am kind of learning as I go. I am now looking to expand my storage zpool to 6 SATA drives, and I plan on migrating my root drive to a couple of mirrored 80GB SATA drives that were given to me. I will probably run these older SATA 1.5Gb/s drives off an old SATA controller that I will throw in, and run my storage zpool off the onboard SATA II controller on my motherboard (6 devices). At some point down the road, I will upgrade the SATA controller to SATA II and place each of the mirrored pairs on separate controllers for a little bit of extra redundancy, but I'm really in no rush to do so.

My main concern at this point is storage space in my case. I have enough room in the case for 9 drives (five external 5.25 bays, two external 3.5 bays, and two internal 3.5 bays). I'm looking at using the two internal 3.5 bays for my root pool, and purchasing something like what you were originally looking at for a 3 in 2 backplane cage (2 of these). This is the model I am currently considering:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817995004

Does this look like something that, space allowing, will work for what I want? I am less concerned with being able to hot swap the drives than I am with easy access and more storage space for drives in my 5.25" bays. One bay will be used for a CD/DVD drive. This would allow me, if it works, 6 easily accessible and potentially hot-swappable drives for my storage zpool and the two internal 3.5" bays for my root zpool. I have no experience with these fancy backplane drive cages, so I am a bit uncertain how they work. Looking at the pictures, there appear to be 2 Molex power connectors, 2 SATA power connectors, 2 SATA cable connectors, and what I am guessing is 1 SAS cable connector. If there are 3 drives, why only 2 connectors? How does all this stuff hook up, and will the system see all 6 drives as separate drives? If I am only required to use 2 SATA ports on this thing, could I get away with using only 4 SATA ports to run 6 drives? I'm a bit confused by all of that. Any clarification would be awesome.

Lastly, in regards to some of the performance benchmarks I have seen, I am nowhere close. I will be upgrading my home network this week to gigabit LAN with CAT-6 cables and wireless-N. I am currently running everything through an aging WRT54G with only 100Mbit lan and wireless-g. With my current setup I am getting write speeds of about 3MB/sec over wireless-g and about 10MB/sec over ethernet for CIFS shares. I recently set up an iSCSI target for doing Time Machine backups of my MacBook with even more abysmal results of 700KB/sec to 1.1 MB/sec write speed, but I think most of that has to do with Time Machine. Hopefully, with some new gear I'll be seeing a lot better speeds. There is certainly a lot of room for improvement.

On the NAS end, everything is shared via OpenSolaris' built-in CIFS server. Very easy to set up. I've been hoping to get mediatomb to compile for UPnP and maybe netatalk for AFP, but I'm taking things kind of slow for now.
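
For anyone else setting it up, the basic steps were roughly along these lines (pool/dataset names are just examples; I'm leaving out the one-time PAM change that's needed so local passwords work for CIFS):

# enable the in-kernel SMB server and its dependencies
svcadm enable -r smb/server

# create a dataset and share it over SMB under the name "media"
zfs create tank/media
zfs set sharesmb=name=media tank/media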

Any thoughts, ideas or input would be great. Thanks again.

Eric
 
Cool build.

I built mine 3 years ago, custom. I got an 8-front-bay server case and put in removable caddies, a 600W power supply, and a Matsonic MSCLE266-F motherboard with 256MB RAM. I added the drives in RAID 1 pairs (4 pairs) attached to an 8-drive HighPoint RocketRAID controller, and used a 128MB DiskOnChip for the boot device, which I have ghosted onto a USB stick for reloading. It's been working flawlessly for 3 years. Every month I swap the odd disks in the array with clean ones and put the disks in a firesafe for added backup. I've only had 1 drive failure so far (just recently) and was able to re-mirror and go on. I had about $250 in the main unit and then got disks on closeout.

I'm going to play with a HN1200 now using a pair of 1TB drives. Hoping to flash FreeBSD onto that motherboard and see what I can do. Should be fun.

Chuck
 
You may want to consider a DiskOnChip over a USB stick. Better performance and seamless integration. I got some 128MB chips off eBay for $5 each, then I use a 512MB USB stick to ghost the boot image so I can reload back to exactly what I have really quickly.

Chuck
 
You may want to consider a DiskOnChip over a USB stick. Better performance and seamless integration. I got some 128MB chips off eBay for $5 each, then I use a 512MB USB stick to ghost the boot image so I can reload back to exactly what I have really quickly.

Chuck

Yeah, I ended up getting a Transcend 128MB module that plugs directly into the PATA connector on the motherboard. Works just fine, although I might revise that later... Makes initial installation a little workaroundy, since I can't pop a CD-ROM drive in there even temporarily, so I have to install first to USB stick and then to the module. Still, not a huge deal, but next time I work on this, I might change around again.
 