My DIY NAS performance (Ubuntu)


sammynl

Occasional Visitor
Hi there,

Here are the performance figures for my DIY RAID-5 NAS:

Drives = 5 x Samsung Spinpoint F1 1TB - 4GB RAM - Intel Celeron dual core 2GHz, running Ubuntu Server 9.04 with Samba. One dedicated 150GB SATA-II disk for the OS.

Antec NSK6580 case with six 3.5" drive bays (max nine with 5.25"-to-3.5" converters), Intel DG965OT motherboard with 6 SATA ports.

Configured the NAS with a 5-disk Linux RAID-5 array (chunk size 1024KB; not sure if a smaller chunk size is faster?), leaving 4TB of available disk space.
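
For reference, the Samba side of the setup can be as simple as a single share definition in /etc/samba/smb.conf (a minimal sketch; the share name and path are just examples, not my exact config):

  # /etc/samba/smb.conf -- export the RAID array over SMB
  [raid]
      path = /mnt/raid
      read only = no
      guest ok = no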

Reading a 600MB ISO file from the NAS:
write600mb.jpg


Reading a 4.3GB ISO file from the NAS:
read.jpg


Writing a 600MB file to the NAS:
write.jpg



New thread about Ubuntu 9.10 with the ext4 filesystem and performance: http://forums.smallnetbuilder.com/showthread.php?t=2429
 
Have you gotten a chance to test with just a single disk or a RAID 0 array in the server? Also, what OS are you using on the client?

00Roush
 
00Roush said:
"Have you gotten a chance to test with just a single disk or a RAID 0 array in the server? Also, what OS are you using on the client?"

I haven't tested RAID 0, but I guess that would increase write speed. Read speed is already near the limit of what the drives can do. The router is a Netgear WNR3500 gigabit router.

The client is running Vista Ultimate 64-bit SP1 with 8GB of RAM on a RAID 10 array of 4 disks (Intel Matrix ICH7R fake RAID).

I did a test with an XP SP2 64-bit client and found the performance on the client side to be much lower than with a Vista client.

I also did a test with Windows Storage Server 2003 R2, but Ubuntu Server 9.04 with Linux RAID is much, much faster; the Windows Server performance is nowhere near the Ubuntu performance. Linux RAID rules!

4a198b4e.jpg


4a198bc7.jpg


Running two robocopy jobs on the Vista client, downloading about 800GB of data from the NAS, 30% of the job completed:
4a1b1285k.jpg
 
I just wondered as I had done some single-drive and RAID 0 testing with Ubuntu and Win XP on the server, with Vista SP1 on the client. I tried RAID 5, but the numbers were awful. Then again, I'm not sure I ever got the Linux RAID set up properly.

00Roush
 
00Roush said:
"I just wondered as I had done some single-drive and RAID 0 testing with Ubuntu and Win XP on the server, with Vista SP1 on the client. I tried RAID 5, but the numbers were awful. Then again, I'm not sure I ever got the Linux RAID set up properly."

There are a few things I found out about Linux RAID and performance. After adding an extra disk, I reinstalled Ubuntu Server and created a new Linux RAID-5 array, changing the chunk size from 1024K down to 64K. Big surprise: the write speed went up from about 50MB/sec to 70MB/sec, a huge increase! So it looks like a smaller chunk size (I believe this is the amount of data written to one disk before the OS starts writing to the next disk in the array) matters, and/or that a RAID-5 array gets faster with more disks. This array now has 6 disks.
writespeed.jpg
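
For reference, recreating the array with the smaller chunk size boils down to something like this (a sketch; the device names are examples, and mdadm's --chunk is given in KB):

  # Create a 6-disk RAID-5 array with a 64K chunk size
  sudo mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=64 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg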


On the Linux side it is advised to turn off last-access-time updates on files, for both the array and the mount folder:
aulat.jpg

aulat2.jpg


The array is formatted with the Linux ext3 filesystem.
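
In /etc/fstab that advice comes down to mounting with the noatime option, something like this (a sketch; the device and mount point are examples):

  # mount the ext3 array without last-access-time updates
  /dev/md0   /mnt/raid   ext3   defaults,noatime   0   0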

I also tested an XP client; it is slow compared to Vista clients. It seems the SMB/CIFS implementation in Vista is much better.
 
Interesting results with changing your chunk size... those results are damn good for RAID 5.

If you would like, I can explain in detail why XP has lower SMB transfer speeds than Vista when used as the client. Basically it is mostly due to the file copy engine. I actually have been working on a program that copies files over SMB connections just as fast on XP as Vista does. So far I have been able to get read and write performance on my XP client very close to my Vista performance. With XP Pro on my server and Vista SP1 on my client I usually see an average of 80-100MB/sec with files in the 500MB-4GB range. A 20GB test file usually comes in at around 85MB/sec.

00Roush
 
Hi all,

Just joined after being inspired by thiggins' "How To Build A Fast NAS" and "Build Your Own Atom-based NAS" articles and also sammynl's great results. Thanks guys. :)

I'll be ordering a DIY NAS soon and will post about my experiences. I'm a complete Linux noob but ready to learn. I'm looking at building a RAID-5 or RAID-10 array to use as a file/media server. I want that 100MB/s read speed. :cool:

@sammynl - Thanks for posting your build specs too! Your latest results with 64K chunk size in RAID-5 agree with tests I found on the web (see links below). Lower chunk size in RAID-5 gives better block write speed and higher chunk size in RAID-5 gives better block read speed. The best compromise if you want good read and write is 256KB chunk size.

Here's a link to a test of Linux RAID 5, 6, and 10 with varying chunk sizes and I/O schedulers. There's a lot of info and graphs, but if you look at the top of the table and scan down to RAID-5 you will see that write performance falls away rapidly with increasing chunk size while read performance increases. It seems RAID10,f2 (far layout) gives the best performance of the configs he tested.
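
If you want to try that layout, creating a far-layout RAID10 array looks something like this (a sketch with example device names; --layout=f2 selects the two-copy "far" layout):

  sudo mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=256 \
      --raid-devices=4 /dev/sd[b-e]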

Here's another test which compares software Linux RAID under both the ext3 and XFS file systems. That test setup also involved stride-aligning the arrays.

These and other hard-to-read tables can be found here under "Other benchmarks from 2007-2008".

@thiggins - In part 4 of your article you tried to use stride alignment to help improve performance and didn't see any improvements (although you were also limited by XP at this stage in your investigation). However, based on what I've read, you don't need to adjust the stride value based on the number of disks in the array. It's purely a function of block size and chunk size. So for a 4K block size and 256K chunk size you would set stride=64 (256/4); for a 4K block size and 64K chunk it would be stride=16 (64/4), etc.
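
Concretely, the stride value is just passed to mkfs when formatting the array, e.g. (the device name is an example):

  # 4K block size on a 256K-chunk array: stride = 256/4 = 64
  mkfs.ext3 -b 4096 -E stride=64 /dev/md0
  # 4K block size on a 64K-chunk array: stride = 64/4 = 16
  mkfs.ext3 -b 4096 -E stride=16 /dev/md0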

Would be interesting to see if you gain/lose performance with a properly stride-aligned array using a Vista SP1 client.

@00Roush - I would love to test your program with my new NAS as it will be deployed in a mixed-OS environment. Do you know whether Windows 7 RC file copy performance matches Vista SP1?

Moezilla
 
So far I have not tested Windows 7, but to my knowledge it is generally based on Vista, so my guess is the file copy engine would be similar.

I still have a bit of work to do on my program. It works and has good performance, but I'm still looking for a better way to implement things.

00Roush
 
Moezilla said:
"@sammynl - Thanks for posting your build specs too! Your latest results with 64K chunk size in RAID-5 agree with tests I found on the web. Lower chunk size in RAID-5 gives better block write speed and higher chunk size gives better block read speed. The best compromise if you want good read and write is a 256KB chunk size."

Thank you for the links, excellent stuff to read. As for build specs, I forgot to mention that after adding the 6th storage disk to the array, all SATA ports on the motherboard are used for storage, so I had to add a SATA controller for the OS disk. I bought the Promise TX2300 for that purpose; it is fully supported by Ubuntu 9.04 and you don't need to hassle with drivers at all, it is more or less transparent to Ubuntu. It's configured as JBOD with just one OS disk, a small and silent 2.5" SATA laptop drive (Samsung) with low energy needs and little heat production. If this separate OS disk crashes you will not lose data on the RAID array: a reinstall of Ubuntu on a new disk will see the already-built array, because the array info is written to the disks in the array themselves. Just mount it and you're fine (a sketch of what that looks like is below).
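
Re-attaching the array after a reinstall comes down to something like this (a sketch; the mount point is an example, and mdadm finds the members from the metadata on the disks themselves):

  sudo mdadm --assemble --scan    # detect and assemble the existing array
  sudo mount /dev/md0 /mnt/raid   # then mount it as usual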

As far as Linux is concerned, the installation isn't that hard; Webmin is the tool for Linux noobs like me. The hardest thing was getting it to boot after the installation's first reboot, but Super Grub Disk helped me set the right partition as active.

@00Roush: Thanks for your comments. I would sure like to know the SMB and copy-engine changes between XP and Vista, and I'm very interested in your program; it sounds very promising and very useful for scheduled backup jobs that write data to the NAS and vice versa. I use robocopy for that now because it only copies files that are new or changed (a typical job is sketched below).
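
A typical incremental robocopy job of that kind looks something like this (the paths are examples; /E recurses into subdirectories and /XO excludes files where the source is older, so only new or changed files get copied):

  robocopy D:\data \\nas\share\backup /E /XO /R:2 /W:5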
 
@00Roush - Thanks for your input, keep me in mind for testing your program. I'm happy to run pre-alpha/alpha/beta software.

@sammynl - Thanks for keeping us updated on your build. Hopefully I will have my parts here by Wednesday.
 
Here is a link to download my program... http://www.mediafire.com/file/4db71ogi8xwlks5/copytest.zip Please ignore the pop-ups (at least it's free, right?).

Let me know if it works for you and also what kind of performance you see. So far in my testing at home I have found performance on Vista is as fast as, if not faster than, the standard drag and drop. XP seems to be just as fast as Vista, but sometimes I have noticed speeds jumping around when reading from a network drive. Mostly I have been testing networked drives, but it will also copy between local drives.

00Roush
 
Hi Roush,

I'm impressed by the performance: copying a 4.5GB file from local disk D: to RAID-5 NAS network drive W: at an 81.4MB/sec write speed!
test1l.jpg


Robocopy (Vista x64 SP1) manages 78MB/sec with the same file.

So as far as I can see your program is about 5% faster :D!

Regards, Sammy.
 
Sweet. Glad to see it works.

Also, when writing to the network drive you should see CPU usage down quite a bit compared to drag and drop or robocopy.

I would love to know how your read speeds compare. Maybe you could give that XP client a go again.

Thanks for the kind words and feedback.

00Roush
 
Thanks for posting your program 00Roush. As soon as I have my NAS set up I will post results but it looks good based on sammynl's experience. Well done!
 
Sorry I've taken so long to post again.

Some preliminary results with 00Roush's copy engine:

[Client]
OS: XP SP3 x86
OS HDD: Western Digital 400GB WD4000KD
CPU: E8400
RAM: 2GB
LAN: Realtek 8111B - Driver=5.724.423.2009
FS HDD: Western Digital 750GB WD7500AAKS
Controller: Intel ICH9R SATA AHCI

[Server]
OS: Ubuntu 9.04 x64
OS HDD: A-Data PD-7 2GB USB 2.0 Flash Disk
CPU: Intel Celeron E1400
RAM: 4GB
LAN: Realtek 8111B
HDD: 3 x Seagate 500GB ST3500410AS
RAID Parameters: --chunk=256 --level=raid10 --raid-devices=3 --layout=f2
FS Parameters: mkfs.ext3 -b 4096 -E stride=64 // tune2fs -c 50 -m 1
Mount Options: data=writeback

OK, first up we have performance copying a 700MB AVI file to and from the NAS.

[Native copy engine and timed manually]
XP -> NAS = ~70MB/s
NAS -> XP = ~37MB/s

[Robocopy XP0026 to NAS only]
cxpsp3x86sub904x64roboc.gif


[00Roush Alpha]
cxpsp3x86sub904x6400rou.gif


Now a 4700MB archive to and from the NAS

[Native copy engine and timed manually]
XP -> NAS = ~75MB/s
NAS -> XP = ~38MB/s

[Robocopy XP0026 to NAS only]
cxpsp3x86sub904x64robocn.gif


[00Roush Alpha]
cxpsp3x86sub904x6400rouv.gif


[Conclusion]
Write performance to the NAS using robocopy is poor, while performance is excellent using XP's native and 00Roush's copy engines. The latter appears to be 10-20% faster than the former, and I believe it is limited in this setup by the read speed of the source drive (WD7500AAKS) and not the write speed of the RAID array. Some iozone work and/or a faster source drive would help establish if this is true. Regardless, great work 00Roush!

Read performance is generally poor, and I believe either the write speed of the XP client drive or XP itself is at fault. Again, some iozone work, a faster drive in the client, or a test with Vista SP1/W7 RC would help clear things up.

[EDIT] Ran iozone with the following command: iozone -Raz -i 0 -i 1 -f /mnt/raidarr/f1 -r 64k -n 512m -g 8g -+u

File size (KB)   Write (KB/s)   Read (KB/s)   CPU Write (%)   CPU Read (%)
524288           189659         2792701       35.24           100.00
1048576          140572         3006652       45.52           100.00
2097152          147996         2888802       39.95           99.17
4194304          130949         546449        38.59           37.50
8388608          128813         359522        39.31           47.36

The results below a 4GB file size indicate caching is having a major (and expected) impact. The results for the 8GB file size are pretty exciting though! [/EDIT]

Moezilla
 
Thanks for the feedback, Moezilla. Glad you got things set up.

I am wondering about your read results in general. They seem too low based on your setup. As you pointed out, your read and write speeds with iozone are good, so something else must be holding you back. I think your WD 750GB drive can read and write at ~80MB/sec, so it shouldn't be holding you back when you are reading from the server. Is the 750GB drive full?

A Vista SP1 client might help. At least then you can find out if it is just XP or some other problem. Something else you might try is FTP. This could help narrow down whether it is a Samba problem or not. I usually use FileZilla as the FTP client, as the native Windows FTP does not seem to have very good performance in XP or Vista.

I did have a couple of questions about your setup. I wasn't clear on what RAID mode you were using on the server. Also, are your client and server close to each other on the network? Just wondering how much latency you might have between the two.


Again thank you for the information.
00Roush
 
Hi 00Roush,

To answer your questions:
- 750GB drive is less than half full.
- RAID level is Linux RAID10 - Explained here
- Client and server are connected to same gigabit switch. Cables are < 10m and ping times <1ms.

I have found some interesting things while testing which may explain my problem.

My first port of call was to disable NCQ, as I had heard rumblings on the internet about poor implementations at either the controller or drive level. The change made no difference to performance across a number of tests. Here's an HD Tune Pro benchmark. Can you tell which is which? ;)
hdtunewd7500aaksreadcom.gif


Next up was to test the 750GB drive using ATTO. On the left is the test with Direct I/O enabled, so all caching and buffering is disabled. This test showed that the drive is physically capable of read and write speeds of ~84MB/s. On the right is the same test with Direct I/O disabled. Whatever caching and buffering is in action completely decimates write speed at 32K and 64K block sizes. The results are in line with the performance achieved using your test program to copy a file from a network drive, which sets the request size to 32K. A 128K block size shows write performance close to the drive's maximum.
attowd7500aaks512ocompa.gif


Finally, I tested copying a 750MB file from the 750GB drive to the 400GB OS drive in the same client using your test program. The 400GB drive has a physical maximum read and write performance of ~60-65MB/s (shown by ATTO). The results come close to that maximum speed, indicating that the problem I'm having is limited to the 750GB drive's inability to write at high speed using block sizes of 32K or 64K.
cxpsp3x86sxpsp3x8600rou.gif


[Conclusion]
So 00Roush, to confirm my findings, would it be possible for you to give me a copy of your program that can be passed the block size to use on the command line, or could you set it in the executable to 128KB? Or is there a technical limitation that requires you to use 32K block sizes for network drives?

Moezilla
 
Here is a link to my copy program that allows you to set the request size if you want. http://www.mediafire.com/file/da0gjkfojt4/test2.exe The program will ask you what request size you want to use.

Basically, the reason I set the request size to 32K on network drives is that it has given me the best performance. Vista also uses this size when copying files from SMB 1.0 computers. SMB 1.0 is limited to a maximum of 60K reads across the network, but if I use that size, reads and writes end up odd-sized. I believe this odd size causes performance to suffer because the read actually gets broken up into 32K, 16K, 8K or 4K reads by the system. I can explain more in depth if you would like. Also, currently all reads are unbuffered no matter the drive type. Writes are buffered except when transferring between two local disks. I have tried using unbuffered writes when writing to a local drive and reading from a network drive, but performance has always been worse. Still working on it (and still learning). Anyway... feel free to try any sizes you want.

Maybe you should do a copy to your C: drive from a network drive and see what performance you get.

Again thanks for the information.
00Roush

P.S. Do you think we should start a new thread about this? I don't want to hijack sammynl's thread with stuff that is not on the same subject.
 
Thanks 00Roush. Will do some testing and post results in a new thread now that we've wandered far away from DIY NAS setup. Maybe I'll post it in the general NAS discussion forum.
 
Finally: 109MB/sec read speed from the RAID-5 NAS.

Final speeds after tuning NAS and client, Ubuntu 9.04 64-bit with Linux RAID-5:

raid5.jpg


Reading from NAS:
109mbread.jpg


Writing to NAS:
82mbwrite.jpg


Reading a single large 8.1GB HD media file from the NAS:
8gbx.jpg



Disabled all NAS onboard hardware in BIOS except PCI-E LAN.
Ubuntu 9.04 64-bit Server minimal install, with mdadm and Samba only.
Client running Vista x64 SP2 on RAID 0.
Gigabit switched network with CAT6 wiring.
 
