
DIY: FreeNAS zfs/raidz 3TB storage server report


mattmoss

Occasional Visitor


I recently put together a report on my recent NAS build.

http://dl-client.getdropbox.com/u/49293/FreeNAS_Build.html
Updated: 2009 Aug 01

It runs FreeNAS 0.7RC1, contains five 750GB drives in a zfs/raidz configuration (for 3TB of storage and enough redundancy to survive one drive failure), has gigabit ethernet, and sits in a somewhat smallish case (read: non-tower).

:D

I can haz comments?

My next task is to benchmark it, because while FTP to the machine gets around 300Mb/s, CIFS and AFP (my daily machine runs Mac OS X 10.5.7) write performance to the NAS is pathetic: 10-15Mb/s.

EDIT: Here are some primitive benchmarks, mostly done by copying a 1GB file of random data back and forth in different ways. The tests from the previous day that showed pathetic 10Mb/s write speeds are now much better. I attribute this to two things. First, I "untweaked" the machine: I undid changes to various parameters I don't understand but had made because someone said they would be faster. That was a stupid idea, so I reverted most of the parameters to their defaults to get a baseline reading. Second, I rebooted both the NAS and the desktop that uses the NAS.

The tests, FreeNAS base build (with only an increase in allowable memory use for ZFS):

  1. ping NAS
    --- 192.168.2.250 ping statistics ---
    8 packets transmitted, 8 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 0.160/0.177/0.189/0.011 ms
  2. Run 'iperf -s' on NAS, 'iperf -c IPADDR' on host. (No manual changes to window size: server says 64KB, client says 129KB)
  3. Same as #2, both sides window size set to 64KB. (Client actually uses 65KB)
  4. FTP from host to NAS; transfer 1GB file of random data. Push and pull rates (bin mode set).
    Push: 346 Mb/s
    Pull: 136 Mb/s

    I think write is faster than read because the NAS can make good use of the RAM as a write cache. Just my uneducated guess.
  5. rsync -av to NAS
    84.4 Mb/s
  6. CIFS: cp sample.bin /Volumes/Media
  7. CIFS: cp /Volumes/Media/sample.bin .
    76 Mb/s

    Interestingly, the first copy took only 42s, but the next three took 1m48s. What happened? (The rejected first copy ran at 194 Mb/s.)
  8. CIFS: copy file via Finder to NAS
  9. CIFS: copy file via Finder from NAS
    87.8 Mb/s

    This had the same "first fast, remainder not so fast" results as #8. (First copy was 221 Mb/s.)
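For reference, the rough shape of these tests can be reproduced with a couple of shell commands. The file path and the share mount point (`/Volumes/Media`, from the tests above) are examples and will differ per setup:

```shell
# Generate the 1GB file of random data used in the transfer tests.
dd if=/dev/urandom of=/tmp/sample.bin bs=1M count=1024

# Raw TCP throughput (tests #2/#3): run 'iperf -s' on the NAS, then
# from the client, with a 64KB window as in test #3:
#   iperf -c 192.168.2.250 -w 64k

# Time a CIFS push (test #6); throughput in Mb/s is roughly
# (1024 MiB * 8) / elapsed seconds:
#   time cp /tmp/sample.bin /Volumes/Media/
```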
 
Last edited:
That's great work. It is great that it does not run Software RAID.

Perhaps install VMware ESXi (freeware) and run FreeNAS in a VM, if your RAID card is supported by ESXi. You could then run other lightweight OSes alongside FreeNAS, like a BSD/Linux system for a MythTV backend or Windows Home Server (as an experiment).

I suggest a VM Server like ESXi so that you can try new File Server OSes like Openfiler or UNRaid as they come out without affecting your FreeNAS system and without buying a new server. If you don't like the new File Server OS, just delete the VM instance.
 
That's great work. It is great that it does not run Software RAID.

You don't consider raidz to be software RAID? Although raidz is perhaps more portable, in theory.

I suggest a VM Server like ESXi so that you can try new File Server OSes like Openfiler or UNRaid as they come out without affecting your FreeNAS system and without buying a new server. If you don't like the new File Server OS, just delete the VM instance.

Interesting idea... I'll have to look into it. It would be cool to test the theory above.
 
I meant to say: if it's not too late, install ESXi on the NAS device you built, if the Atom CPU and RAID card are supported. If you have filled the NAS with data already then there is no point. Use v3.5 for a 32-bit CPU and 4.0 for 64-bit.

Oh wow, I never knew about RAID-Z with ZFS (or that it's ZFS-exclusive). The technology sounds really cool.
 
I meant to say: if it's not too late, install ESXi on the NAS device you built, if the Atom CPU and RAID card are supported. If you have filled the NAS with data already then there is no point. Use v3.5 for a 32-bit CPU and 4.0 for 64-bit.

Considering I'm booting off a USB stick and not the RAID itself (which isn't possible anyway), I should be able to swap the boot device for something else (which I am planning... a 128MB IDE flash drive is on the way). Booting ESXi should be testable without interfering with the data.

Oh wow, I never knew about RAID-Z with ZFS (or that it's ZFS-exclusive). The technology sounds really cool.

Darn right it is! :D
 
Booting ESXi should be testable without interfering with the data.

Yes, the ESXi base OS must be booted from USB just like FreeNAS, but when you install ESXi to USB flash, all the VM OSes will be stored on the RAID array, including FreeNAS.

This may sound strange because the RAID array was initialized in FreeNAS, but ESXi can use an iSCSI array that was created in a running FreeNAS VM (on the ESXi install) as storage within the ESXi server for other VMs, as well as storage for other workstations and servers on the LAN. This may sound pointless, but it gives you the option to test new file-server software without buying more physical hardware. Since ESXi manages the iSCSI target from FreeNAS for storage, you won't need to worry whether new OSes you want to test on the VM server natively support booting or installing from an iSCSI initiator, because the storage is seen as a physical drive inside the VM sandbox.

This may not be what you want, as ESXi was designed to run on a separate server from the SAN rather than having both on the same server as I am suggesting, but it can be done and may be useful since this is a DIY project.
 
Hi Mattmoss,

I think you should give Ubuntu Server 9.04 a try. The result you got from FreeNAS is, to be honest, terrible. I recently built a NAS with six 1TB disks in RAID10; here are my results with a minimal Ubuntu Server 9.04 install, straight out of the box:

iperf: 940 Mbits/s
iperf -F verybigfile : 932 Mbits/s

FTP from vsftpd: 117 MByte/s ( 936Mbit in your unit of measure )

Copying a 10GiB (10x1024^3 bytes) file on a Windows XP client using the old DOS trick

copy z:\verybigfile nul

gives an average of 82MByte/s (656 Mbit/s in your unit).

If I copy such a file from two client machines in parallel, the total throughput is about 100MByte/s.

My hardware:
Intel E7400 2.8GHz (yes, it is a bit of an overkill)
4GB DDR800 RAM
Six 1TB Seagate drives in RAID10, chunk=64k, layout=n2 (should give similar performance to your raidz in theory; in a Bonnie++ test, sequential block input (read) is about 300MB/s, write 220MB/s)
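For the curious, a RAID10 array with the parameters described above would be created with mdadm roughly like this. Device names and the md number are placeholders; this is a sketch, not the poster's exact command:

```shell
# 6-drive RAID10, 64k chunk, near-2 layout (two copies of each
# block on adjacent drives), matching the setup described above
mdadm --create /dev/md0 --level=10 --chunk=64 --layout=n2 \
      --raid-devices=6 /dev/sd[b-g]

# XFS on top; mkfs.xfs reads the md geometry and aligns its
# allocation groups and stripe units to the array automatically
mkfs.xfs /dev/md0
```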

At this moment, I am still working on fine-tuning the Samba server to match the best-performing NAS units, e.g. the NETGEAR ReadyNAS Pro.

The best iozone result I got on an 8GB file size is about 61MB/s write, 62MB/s read. I guess the Samba server in Netgear's new firmware must have been heavily modified to achieve such performance.

Something interesting: when I mount CIFS on my NAS via lo (the local loopback network interface), I get stunning iozone results of 250MB/s read and 200MB/s write on an 8GB file size. It seems smbd hasn't achieved its full performance potential. Maybe there is something in the kernel I can play with. Any ideas are welcome.
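The loopback experiment above can be reproduced roughly as follows. The share name, mount point, and user are hypothetical; the real ones come from your smb.conf:

```shell
# Mount the local Samba share over the loopback interface,
# taking the physical network out of the equation entirely
mkdir -p /mnt/looptest
mount -t cifs //127.0.0.1/media /mnt/looptest -o user=nas

# iozone sequential write (-i 0) and read (-i 1) on an 8GB file,
# as in the test above; 64k record size
iozone -s 8g -r 64k -i 0 -i 1 -f /mnt/looptest/iozone.tmp
```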
 
That's quite impressive, hua_qiu, except it lacks a particular requirement of mine: ZFS. (I wasn't explicit about that requirement in my initial post, but it is one of my requirements.) As far as I know, Linux distributions only implement ZFS via FUSE, which I doubt would do much better.

Actually, if I wanted to do it right, I'd install OpenSolaris, which has a nearly full ZFS implementation that is probably more stable and more efficient than FreeNAS's. But I can't install OpenSolaris at the moment, while I can install FreeNAS. So, there it is.

Anyway, I will be playing with the settings to hopefully improve performance. As it is, it handles most tasks I currently use it for at reasonable speeds. Benchmark numbers are upper bounds that daily activity rarely approaches; a NAS only hits those speeds during sustained reads/writes. So if I copy huge files to the NAS, yeah, it won't be as fast; but all other tasks do just fine.
 
Last edited:
When I was planning my NAS, EXT4, ZFS and XFS were on my list. After heavy reading, I finally picked XFS and software RAID, in favour of performance and data safety.

A 6-drive RAID10 can survive up to three drive failures at a time (in the best case, one drive per mirrored pair), while in theory providing the write performance of a 3-drive RAID0 and the read performance of a 6-drive RAID0.

1. ZFS is impressive, in particular raidz1 and raidz2, but the technology is too new and lacks kernel-level support (except on OpenSolaris), as you said.

2. EXT4 is new; although it has been merged into the mainline kernel, it lacks online defragmentation, which is important for the longevity of a large file system.

3. I have been using XFS for years. The file system has been developed for over a decade, has lots of management tools, can be defragmented online, and mkfs.xfs will align its structures with the underlying RAID device.

While I agree that usage patterns vary from person to person, copying large movie files, thousands of RAW images from a DSLR, and AVCHD video from a high-definition camera, and streaming media files, are very common activities these days; therefore performance and data safety (against hardware failure) are the two most important criteria in my opinion. The pictures and videos you took on your holiday are irreplaceable and worth sacrificing $300 (in my case, three hard disks) to protect.
 
Both OpenSolaris and FreeNAS (i.e. FreeBSD) implement ZFS in kernel space; it's Linux's GPL licensing that keeps it in user space there.

As for the rest, I'm glad you evaluated your options and chose what you did. So did I. I picked ZFS. Right now, that is not negotiable. What is negotiable is choosing between FreeNAS (small and easy) and OpenSolaris (more mature and complete ZFS implementation).

Right now, I am on FreeNAS. Period. I didn't start this thread to go looking for whole new systems to try. That may be an option for me in the future, but it isn't now. Instead, options for tweaking FreeNAS are what I am interested in.
 
Just out of curiosity, what makes ZFS your ultimate selection (not negotiable)? I may have missed some advanced features of ZFS, and am eager to know. Can you explain in detail? Thanks

Actually, after reading this post, I tried FreeNAS/ZFS from the LiveCD. Overall performance is about half of what I can get out of Ubuntu Server, so your input is appreciated. If you have good tips for tuning FreeNAS with ZFS, please share them here. My NAS project hasn't finished yet; I am open to all options.
 
I may have missed some advanced features of ZFS, and am eager to know. ... I tried FreeNAS/ZFS from the LiveCD. Overall performance is about half of what I can get out of Ubuntu Server.

There is overhead when blocks are written because the file system verifies a checksum, but the ZFS driver for FreeNAS is likely based on FUSE and does not offer the same performance as a native driver; therefore, ZFS will likely perform best under an OpenSolaris install.

The file system supports RAID-Z arrays and was designed for performance servers with high disk I/O. The feature set is quite impressive, and if you have 30 minutes or so, listen to this podcast with two Sun developers to get an idea of what distinguishes it from other, more basic file systems.
http://twit.tv/floss58
 
Just out of curiosity, what makes ZFS your ultimate selection (not negotiable)? I may have missed some advanced features of ZFS, and am eager to know. Can you explain in detail? Thanks

Primarily, because I am evaluating ZFS. I am also a fan of technology that does things right, and ZFS gets many things right: atomic operations, fail-safe mechanisms from start to finish, migration, integrated RAID, error correction, instant snapshots, live scrubbing, block compression/encryption, and more. There's a lot of info at http://opensolaris.org/os/community/zfs/.

I am no stranger to benchmarking; my line of work has been the optimization of 3D graphics and engines, at both high and low levels. And while I'd like it to run as fast as reasonably possible, I'm not particularly interested in (nor do I have the time right now for) trying a half-dozen server variations just to get "the fastest."
 
Last edited:
There is overhead when blocks are written because the file system verifies a checksum, but the ZFS driver for FreeNAS is likely based on FUSE and does not offer the same performance as a native driver; therefore, ZFS will likely perform best under an OpenSolaris install.

No. FreeNAS has ZFS implemented in kernel space and does not need FUSE. Linux, on the other hand, currently implements ZFS with FUSE because of licensing issues. But there is the checksum overhead, as you mention.

EDIT: You are also correct that ZFS will perform best under OpenSolaris, in speed, stability, and feature completeness. BSD's ZFS is pretty decent now, but some things are still missing. I would like to test with OpenSolaris, but I didn't realize the default installer requires nearly 4GB on the boot drive. Not much, except I wasn't expecting that and have a 128MB boot drive. Not a big deal; I just need to grab a different boot device for OpenSolaris (or build a stripped-down version of OSol without the windowing system, unneeded drivers, etc). That said, all it takes to migrate the ZFS pool is to run 'zpool export tank' in FreeNAS and 'zpool import tank' in OpenSolaris (where "tank" is the name of the pool). That's it.
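The migration really is just those two commands; a sketch of the round trip, using the pool name "tank" from the post:

```shell
# On FreeNAS, before shutting down: release the pool cleanly
zpool export tank

# On OpenSolaris, after booting: pick the same disks up again
zpool import tank

# Sanity-check the pool and its devices afterwards
zpool status tank
```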
 
Last edited:
Matt,

Thanks for the great howto and other info. It's been a great help with deciding how to build my own NAS. I have gone with a different hardware configuration and plan on using four 1TB drives to create two mirrored pairs in one zpool. As I require more space, I plan to add mirrored pairs to the zpool, up to a max of 4-5 pairs or 8-10 drives. At that point, when/if I require more space, I would start replacing pairs, starting with the smallest drives.

The end goal of my project is to provide one central repository for all my media, backups, and other important files. I feel the mirrored pairs offer me the best redundancy in case of drive failure while allowing me to grow my zpool as required over time.
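The mirrored-pair layout described above translates into zpool commands along these lines (device names are placeholders):

```shell
# Initial pool: two mirrored pairs striped together (RAID10-style)
zpool create tank mirror da1 da2 mirror da3 da4

# Later, when more space is needed: stripe in another mirrored pair
zpool add tank mirror da5 da6

# Eventually, grow a pair in place by swapping each of its disks
# for a larger one, one at a time, letting the mirror resilver
zpool replace tank da1 da7
```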

At this point after using FreeNAS, do you see any real compelling reason to use Open Solaris instead? My thinking was that since my network is going to be the bottleneck for everything I would never see any performance gains using Solaris. Additionally, going with Solaris would add some additional headaches such as an additional root zpool (extra disk in what will eventually be a very full box), compiling/setting up a UPnP server in an unfamiliar environment (might be fun), worrying about hardware/driver support for hardware I've already purchased (not sure if it'd work in Solaris), and I also would like to have AFP support which I am not sure is possible on Solaris (though I would think that it is).

FreeNAS should give me everything I want on a thumb drive. The question is whether or not Open Solaris would give me any real advantages for what I intend to use the system for. I'd rather set things up right from the start, but, like you said, exporting the pool at some point down the road is not too painful. What do you think?

Thanks again for the howto and all the info. It was of great help to me in deciding how to build my system.

Eric
 
Last edited:
Thanks for the great howto and other info. It's been a great help with deciding how to build my own NAS.

I'm glad it has helped. Certainly, my build is a bit raw around the edges, as it's my first time building a DIY NAS and first time dealing with ZFS. I'll be improving things as I go along (and maybe, eventually, find a case I like!).

At this point after using FreeNAS, do you see any real compelling reason to use Open Solaris instead?

ZFS has been around longer in OpenSolaris and so is, in theory, more stable and faster. Both points were certainly true not too long ago, but the latest FreeBSD/FreeNAS implementation of ZFS seems much improved. I don't know of any performance comparisons between FreeNAS and OSol.

What is a definite difference between the ZFS implementations is that OSol's is more feature-complete. If you look at http://wiki.freebsd.org/ZFS and scroll down to the table, you can see that some features of ZFS are missing or incomplete under BSD: zfsboot, ACLs, extattr, iSCSI. For my personal NAS setup, none of these are issues (although it would have been nice to try iSCSI). But you'll want to keep these in mind for your own project...

My thinking was that since my network is going to be the bottleneck for everything I would never see any performance gains using Solaris.

Test it. If you're not using gigabit ethernet, then the network probably will be the bottleneck. With gigabit, though, you might start seeing other bottlenecks. As you can see in the conversation up above, my throughput numbers aren't so great compared to the Ubuntu server/RAID10 setup mentioned. I've done almost no tuning yet, so hopefully I can get some improvements, but I know I'm not bottlenecked on the network at the moment.

Additionally, going with Solaris would add some additional headaches such as...

That is definitely a benefit of FreeNAS: you can get all the basics set up trivially.

FreeNAS should give me everything I want on a thumb drive. The question is whether or not Open Solaris would give me any real advantages for what I intend to use the system for. I'd rather set things up right from the start, but, like you said, exporting the pool at some point down the road is not too painful. What do you think?

I'd say go with FreeNAS for now. You can start using it and play as you go. And trying OSol later is always an option: the zpool export/import just works (I tried it briefly, booting off an OSol USB stick), so you can always go the OSol route later if you want to experiment.
 
When you find a nice case, let us know :). I went with this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16811119192

Aesthetics aside, hopefully everything will roll together smoothly when I get the hardware on Monday. First time for me as well with NAS and ZFS.

With gigabit, though, you might start seeing other bottlenecks.

I think I'll cross that bridge when I come to it. For now, anything I do with the NAS as far as streaming will be done over 802.11g. I will probably upgrade to 802.11n in the near future, but that will still be nowhere near gigabit speeds. As far as backups and copying data go, I don't put nearly as much focus on speed as on storage space. All my computers are on 24/7 for the most part and they can do backup/file transfers at their own leisure. I don't think it will impact me too much if things take a bit longer.

As you can see in the conversation up above, my throughput numbers aren't so great compared to the Ubuntu server/RAID10 setup mentioned.

And like you, I think ZFS, or possibly a Drobo (too much $$$), are the only things out there that really do what I want as far as expandability. It's really hard to add space to a RAID array, and RAID lacks the self-healing and scrubbing features of ZFS (as far as I know; I'm a newb at this). ZFS seems to give me the LVM-style flexibility and redundancy that I want. Throughput isn't much of an issue for me, as I'll just be using this for storage. I'm sure there's a lot of other cool stuff I could do, but this seems like the way to go for what I'm looking for.
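The scrubbing mentioned above is a one-liner in ZFS, with one more command to check the outcome (pool name "tank" assumed, as elsewhere in the thread):

```shell
# Walk the whole pool, verifying every block against its checksum
# and repairing from mirror/raidz redundancy where possible
zpool scrub tank

# Scrub progress, plus any repaired or unrecoverable errors
zpool status -v tank
```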
 
Last edited:
All my computers are on 24/7 for the most part and they can do backup/file transfers at their own leisure

With all the electricity your computers consume in their leisure time, you certainly could afford a Drobo. No offense; maybe electricity is cheap where you live, but be more environmentally responsible.

ZFS does have features that most other file systems can only envy, e.g. self-healing, expandability, scrubbing, etc. However, I can't help sharing my concern about the stability of a relatively very new file system. It takes time to iron out all the glitches and bugs in the different implementations. A code bug in ZFS means a service ticket to the Sun developers and maybe a few minutes of code correction; to a large corporation that can afford expensive backup/restore solutions, it means a couple of hours of restore/downtime and maybe a few thousand dollars of lost business; but to a home-brew NAS user it is everything (unless you don't care about your data, in which case all the discussion of self-healing, on-disk checksums, etc. is meaningless). I am also tempted by its features, but I just could not convince myself to jump into an unproven solution (yet; maybe in a couple of years ZFS will have proven itself reliable).

Secondly, if you do choose ZFS, I strongly recommend against FreeNAS. I tested FreeNAS on the same hardware with a RAID0 setup; the throughput is just half that of an out-of-the-box Ubuntu Server. For such a highly specialized piece of software, that is disappointing, don't you agree?
 