
Dennis Wood

Senior Member
"Affordable" 10G networking - real world 1000MB/s !!

I wasn't expecting to be saying this any time in the next few years, however "affordable" 10G networking is here, mostly due to Netgear's 8-port 10G switches, and the performance is pretty amazing. I've been testing and tweaking here for nearly 3 weeks, researching and benchmarking disk RAID IO on various chipsets. Here's a teaser while I write a rather extensive blog series on what it took to get here. A real-world 1.03 GB/s transfer going on here...albeit between shared RAM drives on two Windows 8.1 workstations. The latest build, a system based on an ASUS Z87-WS motherboard, 32GB of RAM and an i7-4770 running at 4.2 GHz, has taken speeds up from 850 MB/s to over 1000 MB/s.

[Attached screenshot: 10g_ramdisk_windows81.png]
 

Best NASPT score ever?

Not really playing fair here, but this is what the Intel NASPT score would look like with a 10G network NAS, assuming file IO was not a limitation. In this case the target was a remote RAM drive "NAS" hosted on a Windows 8.1 workstation with 32GB of physical RAM. I did this test basically as a baseline for the various RAID tests I'm doing, including the Z87 Intel onboard ports (6 x SATA 6Gb/s, plus 4 x Marvell SATA 6Gb/s) as well as a just-arrived RocketRAID 2720 card.

Samsung's 840 EVO series SSD is not so far off with this ability, as its "RAPID" feature uses system RAM to cache SSD data. Having just installed one tonight, I can say this RAM cache is not marketing hype...the drive performance is impressive with the hybrid setup on a single SSD. With SATA 6Gb/s ports already maxed out by current SSD offerings, SATA Express will be the next jump for affordable high-speed RAID. This limitation is becoming quite obvious in the disk IO tests.
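For anyone curious what a RAM cache like RAPID is doing conceptually, here is a minimal sketch of a RAM-backed read cache in Python. It only illustrates the general technique (an LRU cache in system RAM sitting in front of slower block reads), not Samsung's actual implementation; the block size, cache budget and test file name are arbitrary assumptions.

```python
from collections import OrderedDict

BLOCK_SIZE = 4096           # assumed block size in bytes
CACHE_BLOCKS = 256 * 1024   # assumed cache budget: 256k blocks (~1 GiB of RAM)

class RamReadCache:
    """Minimal LRU read cache held in RAM, in front of a slower file/device."""

    def __init__(self, device, capacity=CACHE_BLOCKS):
        self.device = device          # file-like object opened in binary mode
        self.capacity = capacity
        self.cache = OrderedDict()    # block number -> bytes
        self.hits = 0
        self.misses = 0

    def read_block(self, block_no):
        if block_no in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_no)      # mark as most recently used
            return self.cache[block_no]
        self.misses += 1
        self.device.seek(block_no * BLOCK_SIZE)   # fall through to the slow device
        data = self.device.read(BLOCK_SIZE)
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict least recently used block
        return data

# Example: repeated reads of the same region are served from RAM after the first pass.
if __name__ == "__main__":
    with open("testfile.bin", "rb") as dev:       # hypothetical test file
        cache = RamReadCache(dev, capacity=1024)
        for _ in range(3):
            for blk in range(512):
                cache.read_block(blk)
        print(f"hits={cache.hits} misses={cache.misses}")
```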

[Attached screenshot: 10g_ramdrive_naspt.png]
 
So why haven't you thrown a pair of 10GbE cards in the machines you are testing with? I want to see what SMB3.0 multichannel will do over 10GbE!

On my desktop and server over a pair of GbE links it'll hit 235-238MB/sec, which is almost exactly double the 117-118MB/sec I could get over a single link.

Interestingly I do have an odd issue with SMB Multichannel. If ONE adapter in my desktop is disconnected, my uplink speed to my server drops to 20-30MB/sec. Downlink stays at around 117MB/sec, and if I disable the disconnected adapter, uplink shoots back up to 117MB/sec.

I have NO idea what is going on with that. Not something I need to really solve, but VERY odd behavior IMHO.

So, come on, SHOW ME THE NETWORK SPEED! I wanna see at least dual 10GbE links with SMB3, even if it is just RAM disk to RAM disk.

Now maybe in just a couple more years the total price will be down to the point where I can afford 10GbE (either a 16-port GbE switch with a pair of extra 10GbE ports, or just a 5-8 port 10GbE switch to link in with my existing 16-port GbE switch). My price point is roughly $400 to make all the magic happen between the switch, a NIC for my desktop and a NIC for my server.

I assume that is at least 3-5 years out. Sigh.
 
Actually, those are network speeds...albeit single port only. Both workstations have the Intel X540 10GbE NIC (about $400 each, sadly). The dual-port cards are $600 (if you send me two I'll test them :) ). You do raise a good point on the SMB3 Multichannel enhancements, as clearly it is working for you on dual NICs. My understanding of NIC bonding in the new Windows platforms (from TechNet) was that only Server 2012 could bond NICs for point-to-point increases. Are you doing this via third-party drivers on the workstations? Dual-NIC performance would be something indeed...however a drive array to saturate that pipe would require at least 16 SATA 6Gb/s drives. Now for someone who truly needs this bandwidth, a RAID array with 8 x 1TB SSDs like the 840 EVO could saturate a 20GbE pipe right now. That's a $6400 tag for drives only...
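A rough back-of-the-envelope check on those drive counts, as a sketch. The per-drive sequential throughput figures are assumptions (about 150 MB/s for a 7200rpm spinner, 500 MB/s for a SATA SSD such as the 840 EVO); with those numbers the calculator lands close to the "at least 16" spinners and handful of SSDs described above, and the $6400 figure is simply 8 drives at the roughly $800 each quoted.

```python
def drives_to_saturate(link_gbps, per_drive_mbps):
    """Minimum number of drives (striped, assuming ideal scaling) to fill a link."""
    link_mbps = link_gbps * 1000 / 8             # e.g. 10 GbE ~= 1250 MB/s before overhead
    return int(-(-link_mbps // per_drive_mbps))  # ceiling division

# Assumed per-drive throughput figures, for illustration only:
SPINNER_MBPS = 150   # ~7200rpm SATA hard drive, sequential
SSD_MBPS = 500       # ~SATA 6Gb/s SSD such as an 840 EVO

print("dual 10GbE (20 Gb/s):",
      drives_to_saturate(20, SPINNER_MBPS), "spinners or",
      drives_to_saturate(20, SSD_MBPS), "SSDs to saturate the pipe")
print("8 x 1TB SSD at ~$800 each:", 8 * 800, "USD")   # the figure quoted above
```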

Workstation 1 is based on an Asus Z87-WS motherboard (it has four x16 PCIe slots) with 32GB of RAM. An Intel X540 10GbE NIC in that machine is connected to a Netgear XS708E switch (about $800). Because I'm testing a shared-storage Adobe CC workflow, this machine also has dual GTX 650 Ti Boost cards as a cost-effective "CUDA farm" for rendering.

Workstation 2 is based on an Asus P8Z77 motherboard with 8GB of RAM. It can only run two PCIe cards at a full x8, so one slot hosts the RocketRAID 2720 card, and the other hosts the X540 10GbE NIC, connected as you might expect to the switch.

Assuming zero overhead, a 10GbE network is capable of 1250 MB/s. Testing with NTttcp (similar to iperf) indicates 1181 MB/s between the NICs. So getting two Windows machines to copy/paste like this at 1000 to 1030 MB/s is about as good as it gets :) The goal of my little project here is to get shared storage that affordably exceeds the performance of a single SSD like the 840 EVO (which is quite amazing in RAPID mode) and provides excellent performance in the PPBM5 and PPBM6 Adobe benchmarks. In other words, anything beyond 500 MB/s in terms of shared storage exceeds the goal. After hitting nearly 900 MB/s in RAID 0 with the six Hitachi Deskstar drives, the performance hit from using RAID 50 (still initializing after 24 hours...) is what's next in testing. 16 TB of usable storage would be just fine.
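The arithmetic behind those numbers, in case anyone wants to plug in their own measurements: 10 Gb/s divided by 8 bits per byte gives the 1250 MB/s ceiling, and the measured rates are just fractions of that. A quick sketch using the figures reported above:

```python
LINK_GBPS = 10
line_rate_mbps = LINK_GBPS * 1000 / 8      # 1250 MB/s with zero overhead

measurements = {
    "NTttcp NIC-to-NIC": 1181,                              # MB/s, reported above
    "Windows copy/paste (RAM disk to RAM disk)": 1030,      # MB/s, reported above
}

for name, mbps in measurements.items():
    print(f"{name}: {mbps} MB/s = {mbps / line_rate_mbps:.1%} of the 10GbE line rate")
```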
 
Hmm. Bandwidth aggregation does indeed look to be Windows 8.1 "enabled" with SMB3 Multichannel.

The Z87-WS workstation has dual 1GbE ports onboard, and the TS-470 Pro NAS has 4 ports, 2 of them 1GbE. With QTS 4.1 beta, SMB3 support is there. I'll test this indeed and report back.
 
It's also enabled on Windows 8.

I've been running it for about 9 months between my desktop and server, both originally on just Windows 8; currently my desktop is on 8.1, but the server is still on Windows 8.

It's actually the reason I upgraded from Windows 7: so I could do SMB Multichannel.

I just wish my storage system could keep up with it, but I have ~560GB free of 1.9TB on my 2x1TB RAID0 desktop array (Samsung Spinpoint F1 7200rpm drives). Just too little free space, so it's pushing the inside of the platters and I am down to around 190-210MB/sec on transfers these days (well, for new data on the disks anyway). The server array seems to still be keeping up, for now (1.96TB free of 3.6TB on a pair of Samsung F4EG drives in RAID0).

I am currently looking at a couple of Seagate 3TB 7200rpm drives for both machines, but I'll probably upgrade my desktop first, and the server in 3-9 months. That should be able to saturate 2GbE links with SMB multichannel, at least for several years. It kind of makes me wish I had 3 links up and running, but once I copy over the data, the arrays probably won't be able to do much more than 275-300MB/sec.

SSDs in RAID0 and 10GbE is my dream; no time soon, as my wife would kick me where it counts if I suggested spending $1,500 on networking gear and $2,000+ on storage.
 
It will drop in price, but perhaps not quite as quickly as predicted. Certainly the Intel X540 NIC and Netgear XS708E switch are two of the cost-effective leaders for SMB right now. I was checking out a dual-processor Supermicro server board that has both 10G and the LSI 2208 RAID chip on board, which is "economical" despite an $800 price tag. If you price this stuff out separately, you're at nearly $1500.

Asus has two ATX server boards with 10G on their site, but neither seems to be available anywhere despite a June 2013 release. My guess is that most consumers simply don't need the bandwidth yet, so demand is not that high.

With SSDs now at $550 for a 1TB drive, I'm sure we'll see SSD RAID rapidly becoming practical.
 
Those are pretty interesting results.
I have been using 10GbE at work for one purpose: P2V migrations for clients that can afford 10 minutes of downtime (the time it takes to install an Intel X540 card). We used to do it with a quad-port card with all 4 ports linked.

It usually cuts the time down to about two-thirds of what it was before, but more importantly it is less of a headache to set up.

From what I remember, the original 1GbE card cost more a few years back than what we paid for the 10GbE card.

P.S.
Dual 10GbE server board with Intel HW RAID and a dedicated IPMI port for under $500:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813157399
 
Thanks for the board link. I had not seen the ASRock 10G server offerings.

After seeing power usage on a dual-Xeon setup, I'm having second thoughts about that route in terms of its power usage.

What "looks" good right now are the Avoton (8 core atom) boards from both Asrock and Supermicro. Throw this board in an 8 bay hot swap case like the just released DS380 from Silverstone. (about $160) and you have a very efficient server setup. For about $600 + RAM, you're done as far as the basic server board/chassis goes. All that's left is your drives, and a 10G card.

These boards have only one x8 PCIe slot, so if a 10G card goes in there, you have nothing left for a RAID card. In that case, the RAID solution that likely makes sense is Windows Server 2012 (to retain SMB3 Multichannel performance, Windows 8.1 or Server 2012 is all there is) and Storage Spaces. Storage Spaces with parity, in turn, looks to have terrible write performance, however there is some hope after reading this thread on 2012 Storage Spaces SSD tiers. So one of these Avoton boards with 12 SATA ports, six spinners and three SSDs (one for boot, the other two as Storage Spaces parity tier drives) may be the ticket. A mini-ITX format Avoton board with two PCIe slots would be just perfect: one slot for 10G, and the other for something like the RocketRAID 2720, which is quite decent in my RAID5 write tests here.
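As a quick sanity check on that layout, here is a port and capacity budget sketch. The 4TB drive size is borrowed from the 6 x 4TB RAID 5 array mentioned later in the thread, and the usable capacity uses the usual single-parity (n-1 drives) approximation, so treat the numbers as rough.

```python
SATA_PORTS = 12
SPINNERS, SPINNER_TB = 6, 4        # 4TB drives assumed, matching the 6 x 4TB array below
SSDS = 3                           # 1 boot drive + 2 Storage Spaces tier drives

ports_free = SATA_PORTS - SPINNERS - SSDS
usable_tb = (SPINNERS - 1) * SPINNER_TB   # single parity: capacity of n-1 drives

print(f"SATA ports left over: {ports_free}")
print(f"Usable single-parity capacity: ~{usable_tb} TB "
      f"(the thread's target was 16 TB)")
```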
 
Thanks for the board link. I had not seen the ASRock 10G server offerings.

After seeing power usage on a dual-Xeon setup, I'm having second thoughts about that route in terms of its power usage.

Well, you don't HAVE to run the board maxed out with 2x 150-watt CPUs.

Intel ARK list of LGA2011 CPUs with 80 watts or less of TDP


Then again, you could reasonably run this as the only server on your network:
File server, Router, NVR, etc.



What "looks" good right now are the Avoton (8 core atom) boards from both Asrock and Supermicro. Throw this board in an 8 bay hot swap case like the just released DS380 from Silverstone. (about $160) and you have a very efficient server setup. For about $600 + RAM, you're done as far as the basic server board/chassis goes. All that's left is your drives, and a 10G card.

These boards have only one x8 PCIe slot, so if a 10G card goes in there, you have nothing left for a RAID card. In that case, the RAID solution that likely makes sense is Windows Server 2012 (to retain SMB3 Multichannel performance, Windows 8.1 or Server 2012 is all there is) and Storage Spaces. Storage Spaces with parity, in turn, looks to have terrible write performance, however there is some hope after reading this thread on 2012 Storage Spaces SSD tiers. So one of these Avoton boards with 12 SATA ports, six spinners and three SSDs (one for boot, the other two as Storage Spaces parity tier drives) may be the ticket. A mini-ITX format Avoton board with two PCIe slots would be just perfect: one slot for 10G, and the other for something like the RocketRAID 2720, which is quite decent in my RAID5 write tests here.

The M-ITX spec only allows for a single card slot. You get three choices: PCIe bus extenders, moving up to M-ATX, or getting an M-ITX board with onboard hardware RAID and/or onboard 10GbE (which I have never seen, but would love to).
 
That Avoton option might be the route for me, at least eventually.

I really don't have big network requirements, I am just a speed king, but I am also going for super low power.

It shouldn't, but does bother me that my server uses 21w at idle and 32w streaming movies (about 50w under load).
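For perspective, the idle-power arithmetic is easy to run. The wattages are the ones quoted above; the daily duty cycle and the $0.12/kWh electricity price are assumptions for illustration only.

```python
IDLE_W, STREAM_W = 21, 32          # measured figures quoted above
IDLE_HOURS, STREAM_HOURS = 20, 4   # assumed daily duty cycle
PRICE_PER_KWH = 0.12               # assumed electricity price, USD

daily_kwh = (IDLE_W * IDLE_HOURS + STREAM_W * STREAM_HOURS) / 1000
yearly_cost = daily_kwh * 365 * PRICE_PER_KWH
print(f"~{daily_kwh:.2f} kWh/day, roughly ${yearly_cost:.0f}/year")
```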

Celeron G1620 based, uATX H67 board (Asus, IIRC, but it's been a long time since I've cracked the case), 2x4GB G.Skill Sniper memory at 1.2v, a pair of Samsung F4EG 2TB drives in RAID0 and a Corsair Force 60 as a boot drive, onboard Realtek NIC disabled and running a pair of Intel Gigabit CT NICs. PSU is an Antec Earthwatts 350.

I don't need much processing power, as basically the thing just needs to act as a file server, Calibre server and iTunes server and that is it, but as much disk and network performance as possible, at a reasonable cost and with a low power footprint, are my goals.

I am not unhappy with it right now, but if the option exists to reasonably upgrade to 10GbE speeds and a faster drive array down the road, I'd definitely jump on it. Reasonable, in the end, likely being under $400, which seems like we are probably still at least a couple of years away from, if not more.

I am mildly tempted to re-enable the onboard NIC and attach it to the switch, just to increase my aggregate bandwidth to 3Gbps, but realistically other loads on the server, outside of ME doing something, are on the order of a few MB/sec, and the impact on the RAID array is probably greater than the impact on the network pipe.


I am really not sure what I want to do in the future, if/when 10GbE becomes a realistic possibility for me. I think the most likely answer is to just live with 2-drive RAID0 arrays and let the drives limit the maximum throughput to whatever degree they will. More than 2 spinning disks just seems like too great a point of failure (the data is backed up, at least on my desktop, so it isn't a single point of failure), plus the power consumption just seems inordinate, especially for things like streaming movies, which is what the server gets used for 95% of the time it is in active use.

I am mildly tempted to go with a big honking SSD for non-movies (pictures, music, etc.), as a 1TB drive would cover all of that fine for a long time, but I don't like split storage on my server (I am weird, I admit it).

Probably a stupid question/comment, but it seems a shame that in the CONSUMER space there isn't an easy option for something like a 60GB SSD as a write cache for a hard-drive RAID array. I can live with a bigger RAID array or faster drives on my desktop over the server.

Though, meh. HDD speeds aren't increasing very fast, but by the time 10GbE is affordable for me, I'll probably have moved on from whatever 3TBx2 RAID0 arrays I am likely to invest in over the next 6 or so months for my desktop and server, and probably be on to even faster 4-6TBx2 RAID0 arrays, which just might be able to juice 350-400+MB/sec. It just seems silly that I have to "wait" for a 2GB movie to transfer, let alone 10-20GB of movies. Sigh.
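For what it's worth, the "waiting" is easy to quantify. A quick sketch of transfer times at the link and array speeds mentioned in this thread, for the 2GB and 20GB examples above (the RAID0 rate is the assumed midpoint of the 350-400 MB/s estimate):

```python
def transfer_time(size_gb, rate_mb_s):
    """Seconds to move size_gb gigabytes at rate_mb_s megabytes per second."""
    return size_gb * 1000 / rate_mb_s

rates = {
    "single GbE (~117 MB/s)": 117,
    "dual GbE SMB Multichannel (~235 MB/s)": 235,
    "future 2-drive RAID0 (~375 MB/s, assumed)": 375,
    "10GbE RAM disk copy (~1030 MB/s)": 1030,
}

for size in (2, 20):
    for name, rate in rates.items():
        print(f"{size} GB over {name}: {transfer_time(size, rate):.0f} s")
```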
 
Asus does offer SSD disk caching on many of their boards now. On the Z87-WS board I have here, the SSD is attached to the Marvell chipset and can be set as a cache drive for a mirrored set of spinners. No RAID 5, though.

I built up the test mule with Windows Server 2012 last night, so I am testing SSD-tiered storage/parity Storage Spaces right now. I'll baseline that setup vs a 6 x 4TB RAID 5 setup on the RocketRAID 2720.

The good news is that I only had to figure out disabling VMQ in the Intel NIC driver on Server 2012 (same X540 NIC, but more options appear in the server driver config) to get right back to 1.1 GB/s copy/paste performance. Disabled SMB signing as well. More later.
 