Gigabit Ethernet Need-to-Know


a.k.a.

Hello everyone,

This is in ref to the article that appeared on August 19 on Gigabit Ethernet.

THANK YOU for getting an article up that tackles Gig-E speeds. I moved a NAS/DAS storage plan to the back burner because I wasn't finding much on Google about real-world Gig-E speeds compared to the alternative options.

This is my first venture into network solutions, and I'm keeping it simple/stupid, but I'm hitting a roadblock in my understanding of the mechanisms behind network transfer speeds. Specifically, how do buffers fit into this picture? Here's the issue I need help understanding:

One additional comparative benchmark that would make the article even more helpful is against the data transfer rates for a PCIe ExpressCard 1.0 (Type II) <-> eSATA adapter. Why? Well, this is what is available to anyone with a laptop who needs dedicated storage.

I have read that ExpressCard eSATA adapters get real-world sustained read speeds of anywhere between 30 and 75 MB/s (MB/s, not Mbps). Sustained speed is important -- it's the speed after the buffer is full, when the hard drive itself becomes the bottleneck. I don't have figures for sustained write speeds, but I assume they're about the same, since we're talking about data moving between one maxed-out HDD and another.

If I'm following this right, the Gig-E unidirectional speeds mentioned in the article are around 640 Mbps for a PCI card and 900 Mbps for a PCIe card, or about 80 MB/s to 112 MB/s. Sounds a lot faster!
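For anyone double-checking that conversion, here's a quick sketch in Python. The Mbps figures are just the ones quoted above, and dividing by eight ignores protocol overhead, so treat it as a rough upper bound rather than a prediction.

```python
# Rough conversion from link throughput in megabits per second (Mbps)
# to file-transfer throughput in megabytes per second (MB/s).
def mbps_to_mb_per_s(mbps):
    return mbps / 8.0

for label, mbps in [("PCI Gig-E (~640 Mbps)", 640),
                    ("PCIe Gig-E (~900 Mbps)", 900),
                    ("GigE wire speed (1000 Mbps)", 1000)]:
    print(f"{label}: about {mbps_to_mb_per_s(mbps):.0f} MB/s")

# PCI Gig-E (~640 Mbps): about 80 MB/s
# PCIe Gig-E (~900 Mbps): about 112 MB/s
# GigE wire speed (1000 Mbps): about 125 MB/s
```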

But then, when we're dealing with backing up multiple-gigabyte folders or entire HDDs, an HDD-to-HDD backup process over Gig-E still runs into the same speed bottlenecks I'm reading about with the ExpressCard eSATA solution, doesn't it? Wouldn't the two data transfer rates be roughly equivalent in the end? Also, I've heard that the TCP/IP packet overhead is pretty large and eats up bandwidth regardless, so one might even see slower transfer speeds than with an ExpressCard solution.

Or am I missing something pretty basic? Is there something about buffers that I didn't pick up from this article? If we're talking about backing up multiple-gigabyte folders, what kind of NAS setup could one build that gets around the full/empty-buffer bottleneck and shows higher transfer speeds over a sustained data load?

As you can tell, at this point I'm still feeling like quite the novice, but it would be HUGE to get a better grasp of these speed factors.

A little more info about my laptop: it's a ThinkPad T61p Core 2 Duo (2.2GHz). It has an Intel 82566MM Gigabit Adapter, a 667MHz front-side bus, and 3 Mini PCI Express slots, some of which are already taken up, but I can't entirely decipher everything I'm looking at under Vista's Device Manager console. There's also the iSCSI controller. I'm running Vista x64, and I've got a TechNet subscription. That doesn't allow access to Home Server, but when SBS 2008 is finally released, that should be available to tinker with.

Thank you very much in advance for your time and your advice.

a.k.a.
 
I can't speak to eSATA transfer speeds, sorry.

The testing in this article was done with IxChariot and a script that is very bandwidth efficient. The tests performed were not intended to simulate file transfer performance, but to explore what raw TCP/IP transfers could do.
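As a rough illustration of the distinction (definitely not IxChariot), the sketch below is a memory-to-memory TCP throughput test in Python: no disks or file systems are involved, so it only measures what the NIC, driver and TCP stack can do by themselves. The port number and address are placeholders for your own two machines.

```python
import socket, time

PORT = 5001
CHUNK = 64 * 1024          # 64 KB per send call
TOTAL = 512 * 1024 * 1024  # push 512 MB total

def receiver():
    with socket.socket() as srv:
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            got = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                got += len(data)
    print(f"received {got / 1e6:.0f} MB")

def sender(host):
    payload = b"\x00" * CHUNK
    sent = 0
    start = time.time()
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL:
            s.sendall(payload)
            sent += len(payload)
    secs = time.time() - start
    print(f"{sent * 8 / secs / 1e6:.0f} Mbps ({sent / secs / 1e6:.0f} MB/s)")

# Run receiver() on one machine and sender("192.168.1.10") on the other.
```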

File transfers involve plenty of protocol overhead, which differs by OS and by the network file system used. OS buffering and caching also have a huge effect on file transfer performance and are a very complicated subject.

I am exploring NAS performance limits in another series, so you might want to follow that one. So far I have been able to get writes in the 80-90 MB/s range, but reads are limited to 60-70 MB/s. Peeling that onion is taking some time.
 
The file transfers are the killer for me. With 100Mbps your file transfers are limited by the speed of the network. With GigE you are limited by the speed of your hard disks.

If you use a DV camcorder, the files are 10GB per hour. My backups are 27GB.

A related place this performance matters is if you are using network boot and a network filesystem. For example, I have a PXE-boot Ubuntu Linux setup (using an expanded LiveCD). It is almost 10x faster using GigE.

The next big thing is 10GigE (1 million ports shipped in 2007 according to Wikipedia) with 100GigE currently under development.
 
But what good is 10GigE if the limit is the hard drive speed?
 
Tim,
Modern PCIe Gigabit adapters actually net wire speed on TX and RX (945-950 Mbps on Chariot) and around 1860 Mbps bi-directional. The game isn't about throughput now; it's about the CPU cost at which you achieve those results. Modern adapters with support for Large Send Offload (TCP segmentation offloading) net wire speed at less than 10% CPU utilization, while RX (optimized with Receive Side Scaling) nets less than 20% CPU.

This can all be done using standard-size Ethernet frames. Jumbo frames only tend to eat buffers these days.
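To put some numbers behind that, here's a back-of-the-envelope sketch (assuming plain 20-byte IP and 20-byte TCP headers plus the standard per-frame wire overhead) of the theoretical TCP payload rate over gigabit Ethernet with standard versus jumbo frames:

```python
# Theoretical TCP goodput over gigabit Ethernet.  Per-frame wire overhead:
# preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12 = 38 bytes.
LINK_MBPS = 1000
WIRE_OVERHEAD = 8 + 14 + 4 + 12   # bytes per frame on the wire
IP_TCP_HEADERS = 20 + 20          # bytes per frame inside the MTU

def goodput(mtu):
    payload = mtu - IP_TCP_HEADERS
    return LINK_MBPS * payload / (mtu + WIRE_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{goodput(mtu):.0f} Mbps of TCP payload")

# MTU 1500: ~949 Mbps of TCP payload
# MTU 9000: ~991 Mbps of TCP payload
```

In other words, once an adapter already delivers the ~945-950 Mbps measured with 1500-byte frames, jumbo frames can only add a few percent.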

Jason Folk
CCNP
 
Oh, and of course the power required in active and low-power states.

Regards,
Jason Folk
 
But what good is 10GigE if the limit is the hard drive speed?

The 10GigE and 100GigE comment was not in the context of hard drives, just pointing out that GigE is not currently the fastest conventional networking you can get :)

The current fastest hard drives have a 3 Gbps interface, so a single drive could theoretically saturate a GigE connection, although platter speeds aren't that high (my 750GB drives get around 60-70% of a Gbps). However, an array of drives, as well as data being served out of cache to one or more clients, could easily saturate GigE.
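A quick sanity check of that point, using the rough numbers from this thread rather than measurements (the ~950 Mbps figure is the usable TCP payload of GigE, the 70 MB/s a typical sustained platter rate for a 750GB-class drive):

```python
# Interface speed vs. platter speed vs. what GigE can actually carry.
GIGE_PAYLOAD_MBPS = 950   # roughly the usable TCP payload over gigabit Ethernet
SATA_II_MBPS = 3000       # 3 Gbps drive interface (raw line rate)
PLATTER_MB_S = 70         # rough sustained platter rate of a 750GB-class drive

print("interface alone could saturate GigE:", SATA_II_MBPS > GIGE_PAYLOAD_MBPS)          # True
print("one platter can saturate GigE:      ", PLATTER_MB_S * 8 > GIGE_PAYLOAD_MBPS)      # False (560 Mbps)
print("two such drives striped could:      ", 2 * PLATTER_MB_S * 8 > GIGE_PAYLOAD_MBPS)  # True (1120 Mbps)
```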
 
We tested an Asus G2S laptop running Vista SP1 in real, measured tests using a combination of 5GB (total) file transfers, iozone tests, and media encode tests. This laptop has an eSATA port as well as PCIe-connected gigabit, so we tested both.

From Vista SP1 to our test XP SP1 workstation (RAID 0), our best transfer over gigabit was measured at 47.5MB/s.

From the laptop's eSATA port, the best transfer rate was 45.89MB/s. The external drive used was the less-than-speedy WD "green" 1 TB drive.

The transfer rate between this laptop and our Intel 4-drive NAS (SS4200) was just over 45MB/s (read). In other words, with Vista's support of SMB2, and what I'm guessing is the Intel NAS's support of it, you might find using a fast NAS about the same as using the eSATA drive where read performance is concerned. Use RAID0 instead of RAID5 on the NAS and write speeds would likely be on par.

As an aside, we regularly use this laptop to transfer files from SxS ExpressCards containing HD video from a Sony EX1 camera, as well as DVCPRO100 footage from an HPX500 (via a P2 PCMCIA-to-ExpressCard adapter). Transferring directly from the SxS card to our XP editing workstation (RAID0), we typically hover in the 29MB/s area as reported by Vista over an 8GB transfer. Btw, aka, your search for real-world numbers is the same as what we're exploring and what brought me to this site. We've got a comprehensive review coming which will certainly (with Tim's permission) link back to a few of these excellent articles.
 
Tim, Roger, and Dennis,

It's very helpful to hear all this. Thanks for jumping into the discussion.

Maybe I'm misunderstanding how processors handle I/O activity, but where's the NAS storage server/client app that does what the torrents did for P2P? How about a backup app that takes advantage of dual core processing to route the data simultaneously through both the Gig-E and the eSATA ports? You'd need a dual core NAS OS on the other end, presumably, but it would be worth every penny if you could break through the data transfer bottleneck. Maybe it could route data into RAM until the HDD caches were available to write.

Roger, PXE caught my attention just now, in an equally half-baked way. Could you explain what makes it faster? You're saying that file transfer speeds are much higher under PXE, not just boot speed? Is PXE a full-fledged OS distro that, in this case, is installed on a server and boots an Ubuntu Live CD onto a second machine? Or is PXE more an app running under a different Linux distro that handles file transfer (in this case a Live CD) for the OS? Is PXE the kind of user-friendly, workhorse distro that you could use as a primary OS on a workstation/laptop?

Dennis, count me among the people who think your review will be doing everyone a favor. I am definitely looking forward to it. I had two comments on this very worthy project:

1. On the WD external drive, is that inside a simple external enclosure, or a multi-volume RAID rack? If it happens to be in a simple enclosure, is it a dedicated eSATA-only enclosure? Or is it some kind of eSATA + FW800 / IDE / USB enclosure? I've read that dual interface enclosures route data through a chip, whereas in enclosures without the FW800 / IDE / USB, the data routes directly to/from the drive, and can move faster.

2. If by chance you haven't finished the write-up yet, it would be incredibly helpful to see the file transfers checked out over two different brands of ExpressCard eSATA adapter as well. The vast majority of laptop users don't have dedicated eSATA ports. It's been a while since I've looked, but I remember finding only patchy speed comparisons for ExpressCard eSATA adapters, and I'd wager there are significant differences. Obviously, you can't test them all, but you hear through forums which ones are getting all the buzz. (Mainly one hears mentions of Silicon Image (SIIG), Rosewill, Apiotek, Sonnet, Adaptec and CalDigit, roughly in that order.)

Another measure of eSATA ExpressCard differences would be whether you can boot a machine from one. Most, seemingly, cannot, and it doesn't seem to depend strictly on whether your BIOS can put that port in its boot priority. Everyone is asking this question.

Some 2-port eSATA ExpressCards are even touted as good 2-volume RAID controllers, but my assumption is that they aren't dedicated pathways; they're just port multipliers that are splitting bandwidth. Maybe some ExpressCard 54 RAID controllers are legit. (It would be even nicer to have a two-layered RAID card that taps the stacked top+bottom ExpressCard slots that a lot of computers have.)

Just because I'm thanking each of you for your insights, it doesn't mean this thread is closed!

Happy summer weekend to readers of this thread.

a.k.a.
 
One thing we don't have is an ExpressCard eSATA adapter :-( The G2S has an eSATA port standard, as well as a 1920x1200 screen, which is why we use it in our studio in combination with external 1TB drives. The drive enclosure we're using is a single-drive "SMART" brand (about $45) which has both eSATA and USB interfaces. In a test between the same drive/enclosure and an Intel-based workstation (RAID0) running XP SP3, we saw 67MB/s reading from the eSATA port and writing to the workstation's RAID0 array.

USB tests on the same drive have been coming in at about 29MB/s, fairly consistent in both reads and writes...albeit with higher CPU overhead. In other words, this particular drive enclosure claims to have a SATA II interface, and in testing it is performing as such.
 
In other words, with Vista's support of SMB2

I have been developing the SMB/CIFS/SMB2 protocol since the early '90s. SMB2 has nothing to do with performance except the Kool-Aid from Microsoft Marketing. The existing SMB/CIFS stack in pre-Vista operating systems was a huge, hairy, brittle mess. It dated back to the days of Xenix and OS/2 and still had code to deal with all the compatibility issues. (In various information levels the Microsoft developers cunningly did things like transferring raw kernel data structures over the wire. Of course, changing the kernel then meant they had to be compatible with both the old and new variants of the data structures. Do that for 25 years to get an idea of how bad it got.)

SMB2 let developers start with clean code. That is also why there is no SMB2 stack for XP. If it gave that much of a performance boost then there would be massive demand for it. The performance advantage was that Microsoft developers could write the code quicker without having to worry about Windows 95 clients, OS/2, Samba or even XP compatibility. It also clarified intellectual property issues. SMB was originally started by IBM. For SMB2 Microsoft was sole owner of the IP.

When I was at Juniper Networks I did a podcast about SMB2 (as well as Compound TCP). Sadly it was before Vista was released and their EULA didn't allow for any benchmarking. So now I can give real numbers.

For example, displaying a zip file with Explorer involves opening it and then reading the table of contents, which is stored at the end of zip files. For the one I used, the table of contents was about 80kb. SMB1 has a maximum read size of 64kb, so it would require two reads to get the whole table. SMB2 has a larger maximum read size but sent 64kb requests anyway. If you were hand-generating the SMB or SMB2 protocol, you would need 5 requests to get the table of contents (an open, a file information request to get the file size, two reads and a close). Windows XP sent 1,500 requests, waiting for the answer to each one before sending the next, so latency hurts. Vista/SMB2 sent 3,000 requests, of which 1,500 were synchronous (wait for the answer before sending the next) and the other 1,500 were asynchronous (no waiting for answers). You can use Wireshark in any similar kind of test to see exactly how beneficial SMB2 is. (It also shows why WAN optimizers exist and why most customers care about SMB/CIFS for them!)
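To see why those request counts matter, especially over a WAN, here's a tiny sketch using the figures above; the round-trip times are purely illustrative assumptions, and the asynchronous SMB2 requests are treated as adding no extra waiting because they don't block on answers:

```python
# Wall-clock time spent just waiting on round trips, for the zip
# table-of-contents example above.  Only synchronous requests block.
SCENARIOS = {
    "hand-generated protocol (5 requests)":     5,
    "Windows XP / SMB1 (1,500 sync requests)":  1500,
    "Vista / SMB2 (1,500 sync + 1,500 async)":  1500,  # async adds no waits
}

for rtt_ms, link in [(0.2, "LAN"), (30, "WAN")]:
    print(f"--- {link}, round trip {rtt_ms} ms ---")
    for name, sync_requests in SCENARIOS.items():
        print(f"  {name}: ~{sync_requests * rtt_ms / 1000:g} s waiting")
```

Over a LAN the difference is fractions of a second; over a 30 ms WAN link, 1,500 synchronous round trips cost around 45 seconds of pure waiting, which is exactly why WAN optimizers exist.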
 
Roger, PXE caught my attention just now, in an equally half-baked way. Could you explain what makes it faster?

PXE is a standard for network booting - http://en.wikipedia.org/wiki/Preboot_Execution_Environment

In my example all my machines can network boot. I can change network to be first in the BIOS boot order. The first part of the network boot then offers me a menu. I have the following choices:

  • Windows (BartPE)
  • MS-DOS
  • MemoryTest+
  • Various from sysrescue (boot&nuke, freedos, aida, ntpass)
  • Text mode installation for Ubuntu (32 & 64 bit, 4 releases)
  • Ubuntu LiveDVD (32 & 64 bit, 2 releases)

For the last one, the DVD files are served over the network (NFS). There are tens of megabytes of files that are read to boot the operating system, start services, bring up graphics, log in and start desktop programs. Quite simply, tens of megabytes are a lot quicker to retrieve over GigE than over 100Mbps. It also makes applications start quicker, since their shared libraries, programs, icons, text and data often add up to quite a bit. With GigE it is almost like operating off the local hard drive, whereas 100Mbps is noticeably slower due to the network being the limiting factor.
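As a rough sense of scale (a sketch that ignores protocol overhead and server-side disk or cache behaviour, so real times will be somewhat longer):

```python
# Time to pull "tens of megabytes" of boot files over each link.
def seconds(megabytes, link_mbps):
    return megabytes * 8 / link_mbps

for mb in (50, 200):
    print(f"{mb} MB: {seconds(mb, 100):.1f} s at 100 Mbps vs {seconds(mb, 1000):.1f} s at GigE")

# 50 MB: 4.0 s at 100 Mbps vs 0.4 s at GigE
# 200 MB: 16.0 s at 100 Mbps vs 1.6 s at GigE
```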
 
Roger, awesome information. My curiosity was immediately piqued when Steve mentioned SMB2, and I looked at our Vista SP1 results on the test laptop. It has me actually upgrading one of the test machines (about 3 hours as I type this!) to Vista so I can test the exact same hardware on XP SP3 vs Vista SP1. We were seeing a measured 45MB/s (or should I say 45 MiB/second!) to our Vista laptop (single drive), which could only be exceeded by XP machines running RAID 0 arrays. My goal here is to use RAID 0 on both the NAS and the Vista workstation to see if the combination of fast drives and SMB2 will improve performance well beyond the single-drive numbers.

Btw, we used PXE a bit in my past days as an analyst. With gigabit Ethernet becoming the standard, one wonders if network boots/images will make their way back. The thin client thing kind of went by the wayside as workstations got so cheap (at least in the enterprise I worked in).
 
SMB1 vs SMB2

SMB2 is indeed much quicker: we're seeing read and write speeds of 48MB/s between a Vista SP1 RAID 0 workstation and our Vista G2S laptop over a cheap gigabit switch. This is nearly twice the speed the same laptop achieved when writing to the identical workstation running XP.

Our ffmpeg encode test is now at 32MB/s in the same scenario, both reading and writing, which is about 75% quicker again when writing to the RAID 0 array. This laptop's 200GB hard disk is for the first time maxed out in both read and write directions over the LAN.

Next test will be between two RAID0 workstations using SMB2 to see how high we can go.
 
A PXE server was used on one of my prior assignments, but that server was down so much it wasn't productive, and when it was up it was slow and often crashed. Most companies keep images on Windows Server or NAS drives, as no one wants to tie up network bandwidth pushing down images during the day.

Going from 100Mbps to gigabit in a domain enterprise environment where old Cat 3 and Cat 5 cabling is still present causes major problems. The network refresh project for Cisco Catalyst 3750 24/48-port switches is still going on. But Nortel switches do a better job in network closets where there are no cooling systems.

Every company I've been at has had too many network topology and infrastructure environments.
 
SMB2 is indeed much quicker

You have to be very careful what you are actually measuring! (Or to put it another way I ask the marketing folks what results they want and come up with the appropriate benchmark :) )

Be aware that Windows XP and below have very small caches, defaulting to 10MB no matter how much memory you have. This dates back to the early days of Windows NT, which was designed to run on machines with 4MB of RAM. Google "LargeSystemCache" to get more information and change the setting. There were a lot of underlying changes in Vista to improve on the previous miserliness of the kernel and memory consumption, but it didn't really have a cool marketing bullet point.
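For the curious, that value lives in the registry under the Memory Management key. The read-only sketch below (Python's standard winreg module, Windows only) just reports the current setting rather than changing it; the key path is the commonly documented location, so double-check it on your own system before relying on it.

```python
# Report the current LargeSystemCache setting (0 = favour application
# working sets, 1 = favour the system file cache).  Read-only.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    value, _type = winreg.QueryValueEx(key, "LargeSystemCache")

print(f"LargeSystemCache = {value}")
```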

Secondly, Compound TCP in Vista will result in connections getting to maximum speed sooner (it ramps up quicker) and falling back less on packet loss or congestion (it doesn't halve throughput and slowly climb back up).

If you want to measure the difference between SMB1 and SMB2 then you need to run with Vista on both ends and only disable the SMB2 protocol. If you compare against XP then you are also implicitly measuring the performance of the TCP stacks, caching strategies, memory usage, device driver quality and features (NCQ, jumbo frames, TCP offload etc). Having done all this, I'd expect SMB2 to be marginally faster just because the code paths are less tortuous and complicated.
 
With gigabit ethernet becoming the standard, one wonders if network boots/images will make their way back. The thin client thing kind of went to the wayside as workstations got so cheap (at least in the enterprise I worked in).

I was also involved in the Thin Client industry. The fundamental problem there is that Thin Client was perceived as a hardware issue. To my mind it is actually entirely a systems management issue with the hardware being mostly irrelevant.

The simple measure is "If the machine in front of a user caught fire and died, how long would it be before they are up and running again with their applications and data". The shorter the time period, the more "Thin Client" you are. With SunRays (hardware) you can do it in 30 seconds. With the Tarantella product I worked on (software) it was similar - you just needed a browser. I have seen other companies manage heavy weight desktops with the same measure in mind (SMS, roaming profiles etc) although some data would be lost due to roaming profiles synchronization frequency.

GigE certainly makes for a more pleasant experience if the underlying applications are sucked across the network. If only the display is over the network then 100Mbps is fine at the client although you'll want GigE on the server end.
 
You're right, I should be testing again with SMB2 disabled. In this test I ran the Vista upgrade on precisely the same box we previously tested using XP. XP only supports SMB1, so there was no way to change it there. Now that Vista is on both, there's no reason I can't test that. Roger, our testing is actually pretty low-tech. I've written a small batch file that does the following:

1. Copies 5GB of data (about 130 files, largest is 1.3GB) from the client to the server ... and then back again to clean directories. Each process is time stamped and logged.

2. Performs an ffmpeg encode of an HD MP4 file, a 540MB file to 2 streams. Source is on the target, output on the destination...and then back again. Each process is time stamped and logged.

3. Runs the iozone tests similar to Tim's NAS tests which are logged.

The client and server in many of the tests are on the same box, as we also test USB, eSATA and transfers between 2 local RAID arrays.

By calculating actual data transferred over elapsed time, the numbers I'm reporting are measured and based on as real a test as I can figure out. Caching is there (and shows up nicely in the iozone tests), but my goal here is to figure out the best gigabit solution for HD video editing, where 5GB is actually a pretty small number. In that SMB2 test, the 5GB ended up being transferred nearly twice as fast from the Vista laptop to the Vista workstation.
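For readers who want to reproduce the idea without the batch file, here's a minimal Python sketch of the same timed-copy measurement. SRC and DST are placeholders for your own test directory and a mapped network share, and as noted above you want a data set well beyond what the OS cache can hide.

```python
# Copy a test directory tree to a destination and report measured MB/s.
import os, shutil, time

SRC = r"C:\testdata"               # e.g. ~5GB of mixed files
DST = r"\\server\share\testdata"   # destination must not already exist

def tree_bytes(path):
    """Total size in bytes of all files under 'path'."""
    return sum(os.path.getsize(os.path.join(root, name))
               for root, _dirs, files in os.walk(path) for name in files)

total = tree_bytes(SRC)
start = time.time()
shutil.copytree(SRC, DST)
elapsed = time.time() - start

print(f"copied {total / 1e6:.0f} MB in {elapsed:.1f} s = {total / elapsed / 1e6:.1f} MB/s")
```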
 
my goal here is to figure out the best gigabit solution for HD video editing where 5GB is actually a pretty small number.

A little out of the scope of "your small network" :) You may wish to attend the annual CIFS conference. See http://www.snia.org/events/storage-developer2008/ and also look on the Samba mailing lists for when their developers will be around. A lot of the CIFS attendees are video companies since they have the same issues you do. For example you should find the Quantel folks there.

I don't want this to be an operating systems war, but I can assure you that the Samba developers are very confident they easily outperform all current Microsoft solutions. One just told me the other day that they jokingly asked for 100GigE since they can already saturate multiple 10GigE links!
 
Actually had a chat with the Quantel guys at NAB in Las Vegas this year :) It's true, there are plenty of solutions if you have $$ to spend on a SAS, but the target of our investigation is really just the small independent film-maker who is generally on a budget. We're trying to see what the best configuration is for a system under $3000.

If a Linux variant is typically the OS for the inexpensive NAS boxes, one wonders if Samba could be implemented on some of these boxes already?
 