
NAS Max Transfer Speed Questions.


iwod

Regular Contributor
I have had this question for a few years, and yet no one, no site, no forum on the internet seems to have an article on it. I hope someone with deep technical knowledge can answer my questions.

What is the maximum speed we can get from a NAS over CAT6 1 Gigabit Ethernet?

The SNB article on home-built NASes was the closest article relating to the topic. Assume we have an SSD RAID or even a virtual RAM drive, a quad-core CPU, 4 GB or more of RAM, and a dedicated Gigabit Ethernet connection with jumbo frame support. Could we even achieve 80 MB/s?

Looking at the chart on SNB, the fastest is WHS, with a write speed in the 60s of MB/s, which is only about half of the theoretical throughput.

So where is the bottleneck? CPU power is one obvious answer, since we see a clear relationship between throughput and processing power.
But why? For years we have been told that we have excess CPU power for most of our uses, and I cannot believe that the simple operation of transferring a file over the network requires more than 100% of a 500 MHz Pentium M. Why do we need so much CPU power for such a simple operation?

The TCP/IP stack? A lot of people will mention the overhead of using TCP/IP when transferring files. But how much overhead should we expect? Even at 30% overhead we could still get around 70 MB/s.
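
As a rough back-of-envelope check (a sketch only, counting just the standard Ethernet, IPv4 and TCP header sizes and ignoring retransmits, ACK traffic and application-level protocols such as SMB), the protocol overhead alone is nowhere near 30%:

```python
# Back-of-envelope payload rate over gigabit Ethernet after protocol overhead.
# Counts only Ethernet framing + IPv4 + TCP headers; ignores retransmits,
# ACKs and application protocols (SMB, FTP), so real transfers will be lower.

LINK_BYTES_PER_SEC = 1_000_000_000 / 8      # gigabit line rate = 125 MB/s
ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12          # preamble + SFD + header + FCS + inter-frame gap
IP_TCP_HEADERS = 20 + 20                    # IPv4 + TCP headers, no options

def payload_mb_per_sec(mtu):
    payload = mtu - IP_TCP_HEADERS          # TCP payload bytes carried per frame
    on_wire = mtu + ETH_OVERHEAD            # bytes the frame actually occupies on the wire
    return LINK_BYTES_PER_SEC * payload / on_wire / 1_000_000

print(f"MTU 1500 (standard):     {payload_mb_per_sec(1500):.1f} MB/s")  # ~118.7, about 95% of the link
print(f"MTU 9000 (jumbo frames): {payload_mb_per_sec(9000):.1f} MB/s")  # ~123.9, about 99% of the link
```

So raw TCP/IP framing costs roughly 5% of the link with a standard 1500-byte MTU and about 1% with jumbo frames; the gap down to the 25-60 MB/s we actually see has to come from somewhere else.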

OS protocol problem? With WHS we clearly see it is way faster than any other Linux Samba NAS. So is there overhead in the Linux Samba implementation when talking to Windows machines as well?
 
This is not a direct answer to your question, but with my WHS I was able to sustain a 70-75 MB/s write speed. It's logical that I couldn't get any higher, since my HDs aren't faster than that.
 
This is not a direct answer to your question, but with my WHS I was able to sustain a 70-75 MB/s write speed. It's logical that I couldn't get any higher, since my HDs aren't faster than that.

That is another point I want to make: people constantly report transfer speeds much higher than the ones shown in the SNB charts.
 
In a NAS connected by a gigabit Ethernet connection, you SHOULD be limited by the hard drive performance inside it. A modern hard drive (not a Raptor or a SAS drive, just your run-of-the-mill 320 or 500 GB SATA drive) can sustain reads and writes of 60-75 MB/s. That is a single drive, in optimal conditions. But there are many factors that can slow this down.

First is the processor in the NAS device. TCP overhead can be demanding; some high-end server NICs have dedicated TCP offload engines just to reduce this burden.

Next is hard drive fragmentation and the location of the data on the platter. Data near the outer edge can be read faster than data on the inner tracks. A good HDD review will show a transfer-rate curve that drops off as more data is read from or written to the drive.

If the NAS is a RAID NAS, especially a RAID 5 NAS, you could see a performance hit instead of a performance boost, because the controller doesn't have enough processing power to perform the parity calculations. This is one reason many sysadmins insist on using high-end RAID cards, even if the motherboard supports RAID.

Basically, when it comes to NAS devices, there's no real way to know how a device is going to perform until it's tested. There are just too many variables. So keep reading SNB's NAS reviews. While their results won't match yours perfectly, they'll at least help you pick out the good ones and avoid the ones that aren't worth the sheet metal they're (hopefully) made from.

Tam
 
That is another point I want to make: people constantly report transfer speeds much higher than the ones shown in the SNB charts.

Which chart? Remember that the bar charts are an average of multiple file sizes.

And if smaller file sizes are used, speeds will be much higher due to OS and NAS caching. That's why I added the "small file size mode" option to the charts.
 
Don Capps, the creator of iozone and an expert on file systems, was kind enough to have a long chat with me about this the other day. Here are some key takeaways from the conversation:

- You should be able to get close to wire speed from a properly-designed NAS, even with a gigabit connection. 100 MB/s is achievable.

- You can have fast and you can have cheap. Most consumer NAS makers concentrate on keeping cost (and power consumption) down, which is generally the right trade-off for the market.

- SATA drives are an example of focusing on cost. They provide a lot of storage for relatively low money. But they have relatively short lives, and they are much worse at random access than at sequential transfers.

- RAID controllers that can handle a 100 MB/s rate don't come cheap. For example, a 3ware 4 channel 9500 series RAID controller will cost $350 - $400, but will do 100 MB/s RAID 5 writes. The 9650SE 4 channel controller will deliver over 800MB/s RAID 6 reads and 600MB/s RAID 6 writes and sells for about $400.

So, the technology is readily available for gigabit "wire speed" NASes. But that's not the direction that consumer NAS manufacturers are going.
 
If we look at the graph in the new Intel SS4200-E Entry Storage System slideshow, it shows all three, Iomega, Synology and Intel, drop to a mere 25 MB/s at the end, when file sizes are big. Which means they are all limited to about 25 MB/s.

Surely RAID 5 or the HDDs aren't the problem, as 25 MB/s is a very slow speed.
The processor could be a problem, but would a Core 2 Quad improve it by much?
(The Intel SS4200-E Entry Storage System is nothing more than an entry-level computer in a custom case.)

You see, 25 MB/s is very far from 80 MB/s. So where is this bottleneck?
 
In the SS4200-E, I think it's doing software RAID. So probably in the CPU and software.
 
- RAID controllers that can handle a 100 MB/s rate don't come cheap. For example, a 3ware 4 channel 9500 series RAID controller will cost $350 - $400, but will do 100 MB/s RAID 5 writes. The 9650SE 4 channel controller will deliver over 800MB/s RAID 6 reads and 600MB/s RAID 6 writes and sells for about $400.

And what about the HighPoint RocketRAID 2640x4?
It costs about $180 and can do 501 MB/s reads and 846 MB/s writes...
Looks good, what do you think?

------------------------------------------------------------------------

What hardware specification should I use in order to achieve 100 MB/s with a FreeNAS software RAID 5 configuration?
 
In the SS4200-E, I think it's doing software RAID. So probably in the CPU and software.

For the past several years people have been shouting that we have an excess of CPU power. And yet we can't do RAID 5 at more than 25 MB/s in software mode?

Which doesn't sound logical to me at all.
 
I think that it's a software and driver issue.

Even when you're doing massive writes, the CPU doesn't go above 3% usage.
 
The OS and its tuning tend to be the biggest bottleneck once you get into gigabit speeds and have sufficient storage performance. This makes it very non-intuitive: you can have the fastest hardware, with parts that measure very well by themselves, but get bogged down when it comes to network file transfers due to the OS and the Windows file transfer protocol.

Here's a chart for example showing file transfers over the same hardware, with the OS being changed on one side, and tweaking done per OS to the best of my knowledge at that time:

[Chart: smb-transfer-vista.png, SMB file transfer rates to a Vista target over the same hardware, with the source OS varied]


Note however that this represents one specific case. Change the target OS from Vista, and you could see > 100 MB/s pushes from a Vista client for example.

One potential solution to Windows file transfer puzzles is using a decent FTP implementation.
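
As a sketch of how one might compare the two paths, here's a quick timing of an FTP upload using Python's built-in ftplib; the host, credentials and filename are placeholders for your own setup, not anything from this thread. Copy the same file over an SMB share and compare the MB/s figures.

```python
# Time an FTP upload of a large file and report throughput, for comparison
# against a Windows/SMB copy of the same file. Host, credentials and filename
# below are placeholder values.
import os
import time
from ftplib import FTP

HOST, USER, PASSWORD = "nas.local", "user", "password"   # example values
FILENAME = "testfile.bin"                                 # a large local file

size = os.path.getsize(FILENAME)
with FTP(HOST) as ftp, open(FILENAME, "rb") as f:
    ftp.login(USER, PASSWORD)
    start = time.perf_counter()
    ftp.storbinary(f"STOR {FILENAME}", f, blocksize=1024 * 1024)
    elapsed = time.perf_counter() - start

print(f"{size / 2**20:.0f} MiB in {elapsed:.1f} s = {size / 2**20 / elapsed:.1f} MiB/s")
```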
 
For the past several years people have been shouting that we have an excess of CPU power. And yet we can't do RAID 5 at more than 25 MB/s in software mode?

Which doesn't sound logical to me at all.

For a modern desktop, this has very little to do with CPU power these days, and much more to do with RAID 5 write optimization. This optimization typically needs caching to be enabled, but write caching incurs some risk and additional effort, so when designers think of RAID 5, they tend to give simplicity and reliability greater precedence over performance.

The key to all this behavior is in the nature of RAID 5 parity. To write something, you logically have to read the existing data and parity, recalculate the parity, and then write the changed data and adjusted parity. This incurs two sequential drive accesses (one after the other, non-parallel) and doesn't benefit from the number of drives being striped. This implies write performance at around 1/2 the sequential rate of a single drive.

If instead the write is cached or optimized so that the entire stripe plus parity is written in one shot, there's no need to read the old data or parity (just overwrite them with new values), and this also benefits from parallelization across all the drives. Performance is then around (number of drives - 1) times the sequential rate per drive.
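
To put rough numbers on those two paths, here's a toy model only; the 60 MB/s per-drive sequential rate is an assumed figure for illustration, not a measurement:

```python
# Toy model of the two RAID 5 write paths described above.
# The per-drive sequential rate is an assumption for illustration.
SINGLE_DRIVE_MBPS = 60.0     # assumed sustained sequential MB/s per drive

def read_modify_write():
    # Old data/parity must be read back before the new data/parity can be
    # written: two dependent passes, and no benefit from striping.
    return SINGLE_DRIVE_MBPS / 2

def full_stripe_write(drives):
    # Whole stripe plus parity written in one shot: nothing to read back,
    # and the (drives - 1) data drives stream in parallel.
    return SINGLE_DRIVE_MBPS * (drives - 1)

for n in (3, 4, 5):
    print(f"{n} drives: read-modify-write ~{read_modify_write():.0f} MB/s, "
          f"full-stripe write ~{full_stripe_write(n):.0f} MB/s")
```

With those assumptions, the uncached path sits around 30 MB/s regardless of drive count, which is in the same neighborhood as the ~25 MB/s plateau mentioned earlier in the thread, while the full-stripe path scales with the number of drives.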

Here are some specific local file performance measurements for example, using on-board Intel RAID 5, varying the number of drives and write caching on/off.

[Chart: ir5-3vs4-wb-on-vs-off.png, local Intel RAID 5 write performance, 3 vs. 4 drives, write-back caching on vs. off]


nVIDIA RAID 5, on the other hand, has had no write caching (in the versions I've tried), so the only way to get decent RAID 5 write performance with it has been to match the stripe size and number of drives exactly to the access size. E.g., with the typical 64 KB accesses, three drives in RAID 5 with a 32 KB stripe size (sometimes) works well.
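
A quick sanity check on that stripe-size matching, just arithmetic based on the fact that a full RAID 5 stripe holds (drives - 1) data chunks:

```python
# Which drive-count / stripe-size combinations make a full RAID 5 stripe
# exactly match a typical 64 KB access, so no read-modify-write is needed?
ACCESS_KB = 64
for drives in (3, 4, 5):
    for stripe_kb in (16, 32, 64, 128):
        if (drives - 1) * stripe_kb == ACCESS_KB:
            print(f"{drives} drives @ {stripe_kb} KB stripe -> "
                  f"{ACCESS_KB} KB full stripe")
```

This prints the 3-drive / 32 KB case from the post (and 5 drives with 16 KB stripes); other combinations fall back to the slower read-modify-write path.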
 
The other part of the equation - multiple clients

The thing that the charts don't reveal is how the NAS box will perform when it is under load from multiple users. The tests use only one client. It's hard to make a good buying decision based on this, because a box designed with the horsepower to handle multiple clients may not necessarily have the fastest single-user, flat-out transfer rate. It has more to do with NIC performance, hardware offload, and available microprocessor cycles. So the charts only go so far, in my opinion.

roos
 
Thanks for the graph; it shows that performance really is limited by software as well.
But if Linux and FreeBSD are already being used for much larger servers, what is stopping them from performing to their maximum?
Since most current NASes are PowerPC/ARM based, would that have an impact, since most OSes are x86 optimized?
 
There are three factors when designing and implementing any network based service, of which storage is a part.

Cost - factored into any project
Performance - what we all want, and the main topic here
Safety - whether that be redundancy, survivability, fault tolerance, etc.

As you drive any of these factors to infinity, it drives the other two as well. These factors are inextricably intertwined. If you have an infinite amount of money, you can have infinite performance and safety.

If you have a fixed cost factor, the other factors are affected as well, and therefore you compromise everything to balance. Otherwise, with limitations on cost, you can have performance without safety, or safety without performance, all in relation to that cost.

Consumer oriented products are a victim of this rule because to sell a product, it has to be affordable. The more affordable, the more is sold, increasing the profits for the manufacturer, etc., whether it's a good product or not.

As I've written in another forum, performance depends on all the little pieces coming together in the most effective and efficient manner. It's the "weakest link in the chain" issue. There is not one piece, x86, Intel/AMD/PowerPC/ARM, OS, TCP/IP stack, driver, hardware, architecture, network, etc., that can be singled out as the culprit in this quest for performance. The short answer is that it all matters.

Iwod, you're asking the right questions. The problem is there are no absolute answers. That's what makes all of this "stuff" more an art than a science. People are always searching for faster/better/cheaper and we are making phenomenal strides in technology to enable this. Check out http://www.top500.org/ to see where we came from and where we are in terms of absolute computing power and network performance. This is the world that I live in professionally and I've seen derivations of all this stuff filter down into the consumer world.
 
There are also I/O speeds to look at. You have to be able to stream all that data in from the network adapter, perform whatever RAID checksumming you need to do, and stream the data back out to the drives, all simultaneously. Carefully managing IRQ sharing conflicts and the like can have a major impact on performance.

The rule of thumb I came up with from many, many tests of many different servers was that up through the Athlon 64 and P4 era, you were pretty much limited to 25 MB/s, and could possibly push it to 30 MB/s if you applied yourself to stripping all non-essential services from your server.

With the later Athlon 64 X2 and Core Duo chips and their really fast I/O subsystems, you could push it up closer to 50 MB/s.

Most low-cost NASes out there are limited to around 10 to 15 MB/s, with some going as high as 25. With WHS, be careful: the speed may look nice, but you may have no redundancy for your data. Better hope your drives don't fail.

Embarrassingly for me as a Linux guy, Windows Server 2008 blew my rules of thumb away. I managed to hit 96 MB/s with a default install of Windows Server 2008 on a quad Xeon with SAS drives (the server had the same I/O bus speeds as recent Core Duo systems). I immediately formatted the server and put an optimized install of 64-bit Gentoo Linux on it, and was able to manage 70 MB/s using Samba.

It pains me at the present moment to see Windows that far ahead of Linux/Samba. It's likely that WHS gets its high speeds because it is running some kind of stripped-down Windows 2008.
 
The OS and its tuning tend to be the biggest bottleneck once you get into gigabit speeds and have sufficient storage performance. This makes it very non-intuitive: you can have the fastest hardware, with parts that measure very well by themselves, but get bogged down when it comes to network file transfers due to the OS and the Windows file transfer protocol.

This is so true. I have two Athlon 64 X2 Linux/Samba servers. If I run a network throughput test between them using iperf (even if I let iperf run constantly for a couple of hours), they will talk to each other at 999 megabits per second. But using Samba, they only manage 40 MB/s.
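
For anyone who wants to reproduce that kind of split, here's a minimal raw-TCP throughput sketch (a crude stand-in for iperf; the port and transfer size are arbitrary example values). Run it with no arguments on one box to listen, and with the listener's address on the other box to send. If the raw number is near wire speed while an SMB copy between the same machines is far lower, the bottleneck is above the network layer:

```python
# Minimal raw-TCP throughput test: separates network speed from the
# file-sharing protocol. Port and transfer size are arbitrary choices.
import socket
import sys
import time

PORT = 5001                     # arbitrary example port
CHUNK = 1024 * 1024             # 1 MiB send/receive buffer
TOTAL = 1024 * 1024 * 1024      # send 1 GiB in total

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:                      # sender closed its side
                    break
                received += len(data)
            elapsed = time.perf_counter() - start
        print(f"received {received / 2**20:.0f} MiB from {addr[0]} "
              f"at {received / 2**20 / elapsed:.1f} MiB/s")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        start = time.perf_counter()
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)
        conn.shutdown(socket.SHUT_WR)             # tell the receiver we're done
        elapsed = time.perf_counter() - start
    # The receiver's figure is the trustworthy one; the sender may finish
    # slightly early because the last chunks are still in kernel buffers.
    print(f"sent {sent / 2**20:.0f} MiB at {sent / 2**20 / elapsed:.1f} MiB/s")

if __name__ == "__main__":
    server() if len(sys.argv) == 1 else client(sys.argv[1])
```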
 
So the simple answer is that we have been living in marketing hype.

1 TFLOPS from a GeForce GTX? A quad-core processor for $199? So much spare processing power that we donate it to research projects? 4 GB of RAM for less than 80 USD. Gigabit Ethernet, IPv6 solving all of IP's problems...

And yet we can't manage to serve files at anywhere close to 70 MB/s on a recent PC, for such a simple job as file serving over the network. We are nowhere near it.

No wonder I am getting tired of technology.
 
I think it has to do with the fact that a PC is designed to do anything and everything, which means it really over-achieves at nothing. Hard core systems designers always laugh at the PC. "Oh sure, the CPU is a monster, but there's no I/O!" or things like that.

This is why a Cisco or Juniper switch can do sustained multi-gigabit throughput - even with encryption, and run on a CPU that is a fraction of the power of a PC CPU. They have designed the system to excel at network I/O, and simply don't need that huge CPU there, because there would be nothing for it to do.

Consumer NASes have the same problem. They use these little ARM-based embedded systems that are very generic. They get used for NASes, firewalls, security systems, time clocks, cash registers, and all kinds of other stuff, which means once again they aren't all that great at anything.

Take a look at one of the enterprise NASes from NetApp or NexSan. These are high-end, purpose-designed systems, which do one thing, and do it well. Their storage throughput is legendary, compared to a consumer NAS, and I bet their CPUs aren't much more powerful. But you have to pay for that speciality. No one is going to spend $100k for a home-NAS.
 
