NIC Teaming not worth it?


vlad1966

New Around Here
I think I've got a pretty good setup, but trying to see if I can get faster data transfer rates on my home network. This is my setup:

FreeNAS server (i5-4570S, Asrock Z87E-ITX, 8GB RAM)
2 PCs (one with Intel Gigabit & KillerNIC Gigabit)
Comcast 50Mb down/10Mb up Cable

I usually see a minimum of about 80MB/sec, and more often around 110MB/sec, transferring data between a PC and my FreeNAS server.

I was considering trying teaming, but now I'm not so sure after reading this, since I'm mainly concerned with data transfer rate over my network & not redundancy:

http://www.smallnetbuilder.com/content/view/30556/53/

"A final interesting aspect about a LAG is it doesn't increase throughput for individual data flows. Each data flow is limited to the bandwidth of a single link in the LAG. In a LAG with two or more 1 Gbps links, the best throughput an individual data flow will see is 1 Gbps. The real value of LAG is in increasing total (or aggregate) throughput between devices. Read this brief presentation for a nice and clear explanation.A final interesting aspect about a LAG is it doesn't increase throughput for individual data flows. Each data flow is limited to the bandwidth of a single link in the LAG. In a LAG with two or more 1 Gbps links, the best throughput an individual data flow will see is 1 Gbps. The real value of LAG is in increasing total (or aggregate) throughput between devices."

Unless I'm missing something, isn't teaming kind of pointless performance-wise unless you just want failover?

I know I'd need a managed switch and 2 NICs in each PC if I wanted to try teaming, but is there anything else?

Thoughts on whether it would be worth it?
 
Redundancy is definitely a plus with "teaming" or link aggregation. In a business environment it can help with throughput because you may have many people doing transfers at once. In a home environment it will be of limited use for several reasons. It will help with redundancy and may help if you have several PCs doing transfers at once: though each individual PC is limited to the speed of a single link, multiple PCs can use multiple links at the same time. The problem is that home PCs and NAS equipment often have hard drives that are not fast enough to transfer multiple gigabits of data to different PCs at the same time.

It also depends on your switch and NAS device how the link aggregation is set up and how it decides which link each flow is sent over. Often you end up with links that are not balanced well: one link may be saturated while the other carries little traffic.
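To picture why the links can end up lopsided, here's a toy PowerShell sketch of the idea (not any particular switch's algorithm; real gear hashes on MAC/IP/port fields according to its own policy). Every packet of a given flow hashes to the same member link, so one busy flow stays on one link while the other can sit idle:

Code:
# Toy illustration only: many switches pick the LAG member link by hashing
# packet header fields. Every packet of a flow hashes to the same value,
# so a single flow never spreads across links.
function Select-LagLink {
    param([string]$SrcMac, [string]$DstMac, [int]$LinkCount = 2)
    # XOR the last byte of each MAC and take it modulo the number of links
    $src = [Convert]::ToInt32($SrcMac.Split(':')[-1], 16)
    $dst = [Convert]::ToInt32($DstMac.Split(':')[-1], 16)
    return ($src -bxor $dst) % $LinkCount
}

# Two different PCs talking to the same NAS can still hash to the same link:
Select-LagLink -SrcMac '00:11:22:33:44:55' -DstMac 'AA:BB:CC:DD:EE:01'   # -> link 0
Select-LagLink -SrcMac '00:11:22:33:44:57' -DstMac 'AA:BB:CC:DD:EE:01'   # -> also link 0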
 
It isn't just about redundancy, it is also about aggregate connection speed.

A server with two gigabit NICs can transfer 2x1Gbps as it can handle multiple connections and will load balance over the NICs.

It however CANNOT do 2Gbps to any one client.

What you can do though is load up Windows 8/8.1 or Server 2012/2012 R2 on your server and any/all clients. They ship with SMB 3.0/3.02, which includes a feature (Windows specific!) called SMB Multichannel that allows the LAN manager to open multiple connections between the two machines. This means you CAN use more than one adapter between machines. Do NOT team the adapters though, otherwise you are back to the 2x1Gbps aggregate case where any single transfer is capped at 1Gbps. As of 8/8.1 and Server 2012/2012 R2, Windows handles the load balancing and multiple connections itself and manages the NICs for redundancy, load balancing and throughput. There is also no need to set up link aggregation on the switch (so even a dumb switch will work).
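If you want to check or toggle it, these are the standard SMB PowerShell cmdlets that ship with Windows 8/Server 2012 and later (multichannel is on by default, so this is mostly for verification):

Code:
# Client side: confirm SMB Multichannel is enabled (it is by default)
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
Set-SmbClientConfiguration -EnableMultiChannel $true -Force

# Server side (the machine sharing the files): same check
Get-SmbServerConfiguration | Select-Object EnableMultiChannel

# While a transfer is running, list the connections SMB has opened per NIC pair
Get-SmbMultichannelConnection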

I found this out a while ago. It is what pushed me into upgrading my desktop and server to Windows 8 originally.

I regularly get 235MB/sec between my desktop and server on medium/large file transfers utilizing two Intel Gigabit CT NICs in both machines and a RAID0 array on both machines. By comparison, when I was doing adapter teaming, my server was limited to 117MB/sec between my desktop and server even though both had dual NICs in one team. I could however do 117MB/sec from my server to my desktop and simultaneously do 114MB/sec from my server to my laptop... or do 117MB/sec to the server from my desktop and from my laptop at the same time.
 
"I regularly get 235MB/sec between my desktop and server on medium/large file transfers utilizing two Intel Gigabit CT NICs in both machines and a RAID0 array on both machines."

Did you do anything special to enable it for Windows 8.1?

I tried the PowerShell command:
Code:
Set-NetOffloadGlobalSetting -NetworkDirect Enabled

but it says "This feature is available on servers only." :(

All the documentation I can find says only Server 2012 R2 talking to Server 2012 R2 (both with RDMA adapters, of course) can do SMB Direct.

Windows 8.1 will do multi-threaded TCP and RDMA but not the multichannel part, at least I cannot figure out how to turn it on. Do you have the "Microsoft Network Adapter Multiplexor Protocol" bound in Ethernet Properties?

I have two 10Gbps connections to my server and I usually get ~850MB/s off of it (8 drive RAID6) but that is only over one connection. I have tried ramdrive to ramdrive copies and cannot go above 1.1GB/s and only one link goes active at a time.
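For anyone else chasing this, these cmdlets (same SMB module) show what the client thinks each NIC can do. As far as I can tell, multichannel will only spread traffic across interfaces that report RSS or RDMA capability:

Code:
# Which local interfaces SMB considers usable, with their RSS/RDMA capability and link speed
Get-SmbClientNetworkInterface

# Underlying adapter capability, in case a NIC or driver has RSS/RDMA turned off
Get-NetAdapterRss
Get-NetAdapterRdma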

EDIT:
"called SMB Multichannel"
I ended up re-reading your post and this jumped out at me. A quick Google search for SMB Multichannel instead of SMB NetworkDirect, and my two connections are linked! They keep in step beautifully, 20Gbps. Thanks a lot for the firm statement of terminology. :)

Wow it really does work too; now I just need something that can use 20Gbps over the network. I might need eight more drives. ;)
 
And you are officially on my do not like list.

I only have 2Gbps going on :(

Though...my RAID0 array can't do better than that really.

Maybe one of these days we'll have a convergence of 10GbE and cheap SSD storage and I can have both going at the same time.

I do feel like we are getting close. Some SSD storage is around 40 cents per GB. With NVMe and PCIe/SATA Express based flash storage, as well as 3D flash and smaller process sizes, it might only be a couple more years before we are pushing into sub-20 cents per GB and speeds well over 1000MB/sec reads and over 500MB/sec writes with flash storage.

It just needs to hit around 10 cents per GB and I can probably justify moving my storage to SSD. It's also easier to expand storage with SSDs and JBOD/Storage Spaces IMHO. Fewer worries about needing RAID for performance, or about sequential-transfer penalties on a full/nearly full SSD, for things like basic file storage (I wouldn't want to do it with a DB or other high-write environment). Just plug in SSDs as you need more storage instead of needing to replace an entire array of HDDs when you need more space.
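For what it's worth, a rough Storage Spaces sketch of that grow-as-you-go idea (pool/space names are made up, and the storage subsystem friendly name varies a bit between Windows versions):

Code:
# List disks that are eligible to be pooled, then build a pool from them
Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "BulkPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Carve a simple (non-resilient, JBOD-style) space out of the pool
New-VirtualDisk -StoragePoolFriendlyName "BulkPool" -FriendlyName "BulkSpace" -ResiliencySettingName Simple -UseMaximumSize

# Later, when you add another SSD, just drop it into the same pool
Add-PhysicalDisk -StoragePoolFriendlyName "BulkPool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)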

Probably spinning disks for me again as I need more storage in the next 6 months. Hopefully after that when I am looking at more storage again in 3-4 years SSDs will be cheap enough I can replace at least one system with SSDs for mass storage, if not both. And by then maybe 10GbE will be cheap enough too.
 
