Expanding switch capacity

azazel1024

Very Senior Member
I originally got a TP-Link 16-port web smart switch (SG-2216) when it was on deep sale for $110 or so. It's worked great, I love it, it looks sexy sitting over my server, ad nauseam.

The issue is that as I've been wiring my house, I've realized 16 ports is not going to cut it. Currently I have 13 ports filled, and at a minimum I need 4 more: 1 for my daughter's bedroom, 1 more in my playroom/family room (so it can do double duty as a home office in a few years when my kids are older), 1 in some built-in bookcases for a networked stereo, and 1 for a GBIC+fiber run out to a shed/workshop in a couple of years.

That doesn't count the fact that in about 2-3 years I am tearing down my garage and building a nicer one with a master suite over it, and I have plans to blow out the back for a big living room and expanded kitchen later. All of that will likely need at least 1 more port each in the garage, master bedroom, and new living room.

So, not enough ports, eventually. I am fine for a couple of years, but soon I'll be over my limit. I have an 8-port TRENDnet dumb switch I can hook in to expand my port capacity, which is fine.

However, ideally I'd like to increase the capacity between the switches as well. I currently run two ports each to my server and my desktop. Looking at future needs and uses of the spaces, I could see hitting a limit on inter-switch capacity under some circumstances, even with careful management of which devices are connected to which switch.

So, bonding/SMB3.0+ question for you.

Under Windows 8+ with SMB3, you get SMB multichannel, which is awesome. It's what I use between my server and my desktop to get ~230MB/sec between my RAID arrays, with a couple of Intel CT GbE NICs in each machine (onboard LAN disabled on both machines). With SMB3, and/or in general with dumb switches, can I just connect two ports to two ports between the switches? Or will that create a loop?
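
As an aside, a quick way to sanity-check what the two NIC pairs can move in aggregate, independent of SMB, is to run one iperf stream per NIC in parallel (iperf has Windows builds too). This is only a rough sketch, assuming each pair of NICs sits on its own subnet; the addresses and ports below are placeholders:

Code:
# On the server, one listener per link (ports are arbitrary):
iperf -s -p 5001 &
iperf -s -p 5002 &

# On the desktop, one stream per link, each bound to a different local NIC:
iperf -c 10.0.1.2 -p 5001 -B 10.0.1.1 -t 30 &
iperf -c 10.0.2.2 -p 5002 -B 10.0.2.1 -t 30 &
wait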

To avoid a possible loop, I assume I'd need to do port bonding? For which I'd need a managed switch that supports it (like my SG-2216 does)? Yes?

Just trying to get an idea for the future of what kind of switches I'd need to look at. In the short term when I go over the port limit, I'll just use the 8-port unmanaged switch with a 1-to-1 connection between switches, though if it'd support 2-to-2, sure, why not. With a 2-to-2 bonded setup like that, I doubt I'll have enough ports very long term (only 20 free, and I am thinking I'll be using 22-24 in the end). Maybe just get rid of the SG2216 and go with a 24-port model? I'll want to mix in 10GbE some day. Hmmm.

Anyway, thanks!
 
You need a smart/managed switch with port bonding to do what you want to do.
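
To give an idea of what "port bonding" (link aggregation / LACP) looks like on the host side, here is a minimal sketch for a Linux box using iproute2; eth0/eth1 and the address are placeholders, and the two switch ports they plug into must be configured as a matching LAG (on the SG2216 that's done through the web UI):

Code:
# Create an 802.3ad (LACP) bond and enslave two NICs
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip addr add 192.168.1.10/24 dev bond0
ip link set bond0 up

# Verify the LACP negotiation with the switch
cat /proc/net/bonding/bond0

One thing worth keeping in mind: a LAG between two switches doesn't make any single TCP stream faster; it hashes different conversations onto different links. That's exactly what you want for an inter-switch backbone, but it's not the same thing as SMB multichannel.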
 
Okay, I thought as much. Sounds like a couple of TP-Link SG2216s might cover me (I already have one), or else I might just go big and get a 24- and/or 32-port model down the road and retire the 2216.

Though hopefully, with 10GbE not terribly far down the road, someone will come out with affordable consumer gear kind of like what we have/had for GbE. Just as you can get a 16-port Fast Ethernet switch with a couple of GbE ports on it, maybe someone will make an affordable switch with, say, 8-16 GbE ports and a couple of 10GbE ports.

I'd use it to expand general LAN capacity, use the pair of 10GbE ports to connect the only two machines I really care about having fat pipes to, and still bond a couple of 1GbE ports between the two switches to ensure a decent backbone between them.

Till then I'll just live with the lower backbone capacity of the 8-port switch connected to the 16-port switch over a single link.
 
i have a feeling certain manufacturers intend to drink every last drop of blood from businesses before 10GbE will be reasonably priced :(
 
I wonder when PCs and NASes will be able to run file system read/write code and IP stacks fast enough to benefit from 10 Gigabit Ethernet.

This is in the context of pro-sumer and SOHO networks, not enterprise.
 
I wonder when PCs and NASes will be able to run file system read/write code and IP stacks fast enough to benefit from 10 Gigabit Ethernet.

This is in the context of pro-sumer and SOHO networks, not enterprise.

hi stevech,

are you saying 10GbE is currently only useful at the backbone, etc? i have no experience with it, so i wouldn't know. i'd have expected filesystems to be sufficient, but i'm blissfully unaware of the performance at the network layer. i imagine it could need a hefty cpu, though
 
hi stevech,

are you saying 10GbE is currently only useful at the backbone, etc? i have no experience with it, so i wouldn't know. i'd have expected filesystems to be sufficient, but i'm blissfully unaware of the performance at the network layer. i imagine it could need a hefty cpu, though
Yes, IMO it's applicable to the space between copper and fiber. Maybe cheaper than fiber for switch uplinks.

My fast PCs cannot fill up 1000BASE-T capacity, and my laptops and of course WiFi-based devices even less so, due to file system overhead and network protocol/SMB overhead. Even then, only transfers of very large files can potentially benefit, since transferring lots of small files (not merged into a zip file) carries the high per-file overhead of file creation and deletion.

Doing 2+ concurrent transfers into or out of a PC gives higher aggregate speeds, but that's not often convenient.

And we're talking about intra-LAN transfers of course, not on the WAN.
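
The per-file overhead is easy to demonstrate. A rough sketch from a Unix shell, assuming a NAS share is already mounted at /mnt/nas (the path is a placeholder): copy ~1GB as a single file, then copy roughly the same ~1GB as 10,000 small files, and compare the wall-clock times.

Code:
# Build the test data locally
dd if=/dev/zero of=big.bin bs=1M count=1024
mkdir -p small
for i in $(seq 1 10000); do
    dd if=/dev/zero of=small/f$i bs=100K count=1 status=none
done

# Same payload, very different transfer times
time cp big.bin /mnt/nas/
time cp -r small /mnt/nas/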
 
ahh. a few years back, i had an i7 920 with the ICH10R chipset. that chipset worked very well with software RAID0. with 4x 500GB single-platter hard drives, it could transfer at 500MB/s (not 500 megabits!) when copying to a ramdisk, for example. however, it definitely took a massive hit when transferring directories of small files, and i never had a chance to see how it performed in a network transfer over anything faster than 100Mbit.

i'd love to saturate 10GbE with something like that
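
For anyone who wants to run that kind of local test, a minimal sketch on Linux; the md device name, file path, and sizes are placeholders:

Code:
# Drop the page cache so the read actually hits the disks
sync
echo 3 > /proc/sys/vm/drop_caches

# Raw sequential read from the software RAID0 device
dd if=/dev/md0 of=/dev/null bs=1M count=4096

# Or copy a real file into a RAM-backed filesystem
mkdir -p /mnt/ram
mount -t tmpfs -o size=6g tmpfs /mnt/ram
time cp /data/bigfile.bin /mnt/ram/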
 
i have a feeling certain manufacturers intend to drink every last drop of blood from businesses before 10GbE will be reasonably priced :(
Netgear's XS708E is an 8-port 10GbE copper switch (one port can also be used as a fiber port) for $100 (-ish) per port. The XS712T is a 12-port (2 usable as fiber) switch which adds "real" management (SNMP, etc.) for $140 (-ish) per port.

For folks who want mostly Gigabit ports with a few 10GbE ports, there are a lot more products available at lower prices.

I wonder when PCs and NASes will be able to run file system read/write code and IP stacks fast enough to benefit from 10 Gigabit Ethernet.

Code:
(0:97) here:/tmp# iperf -c there
------------------------------------------------------------
Client connecting to there, TCP port 5001
TCP window size: 32.0 KByte (default)
------------------------------------------------------------
[  3] local 10.20.30.61 port 41101 connected with 10.20.30.40 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  11.5 GBytes  9.85 Gbits/sec

(Copied verbatim except for sanitizing hostnames / IP addresses.)

This is in the context of pro-sumer and SOHO networks, not enterprise.
Well, this is on a network running at my house, entirely for personal use. Even if it is a bit excessive.
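
For anyone who wants to repeat the test above, the recipe is just an iperf server on one end and a client on the other; -P (parallel streams), -w (TCP window), and -t (duration) are the knobs that usually matter at 10GbE. The hostname is a placeholder:

Code:
# On one machine:
iperf -s

# On the other: 4 parallel streams, larger window, 30-second run
iperf -c there -P 4 -w 256k -t 30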
 
good lord, that is awesome

hey, got a kill-a-watt or something hooked up to all that? just curious xD

rather, i see the apc units, do you know how much power it's all drawing at idle?

oh, and what is your total storage?

got a higher res picture? lol
 
good lord, that is awesome
Thanks!

hey, got a kill-a-watt or something hooked up to all that? just curious xD

rather, i see the apc units, do you know how much power it's all drawing at idle?
Total current draw is about 25A at 120V (15A in the left rack, 10A in the right).

The UPS is a Symmetra RM w/ 2 XR packs. It is running at around 50% of capacity, with a runtime of 1.75 hours with the current load.

oh, and what is your total storage?
128TB+ - each of the RAIDzilla II's holds 32TB. There's some more storage in the other boxes, but probably not more than another couple TB.

got a higher res picture? lol
I'll take some better pictures at some point in the future - that pic was taken during construction with a little Fuji digicam and then de-skewed in Photoshop. I'll set up a tripod and use my good camera (EOS 1D) to take better pictures later. The RAIDzilla II link above has hi-res pictures of the actual file servers.

To at least try to make this a little relevant to the Networking / Switches forum, from the top left:

  • Fiber patch panel
  • Cisco 2821 router
  • Dell Powerconnect 8024 switch (24 10GbE ports)
  • PowerDsine PD-9024G PoE injector
  • Cisco Catalyst 4948-10GE switch (48 10/100/1000 ports + 2 10GbE ports)
 
that is quite some power draw for 'soho' networking lol, though i actually thought it would have been worse.

if you don't mind me asking, what are you using the storage for? i assume some kind of project. at the very least, i can definitely see why 10GbE would be a worthwhile investment for you.
 
that is quite some power draw for 'soho' networking lol, though i actually thought it would have been worse.
I've been working to get the power consumption down by upgrading to more energy-efficient servers. The Dell in the left rack is a very powerful system (2 * X5680 Xeon, 48GB RAM, 6 * 300GB 15K SAS disks, etc.) but normally consumes less than 250W. I need to do something to put the 'zillas on a diet.

if you don't mind me asking, what are you using the storage for? i assume some kind of project. at the very least, i can definitely see why 10GbE would be a worthwhile investment for you.
A variety of things - I have a race car w/ multiple cameras which generates about 64GB/day, and all 34,000+ miles are on those servers. [Low-res sample on YouTube.] They also store daily backups from a dozen or so client systems.

I didn't actually need 10GbE - I just decided that I would need it at some future point, and I might as well go ahead and do it when I did the big server rebuild. It helps that I got a great deal on the 24-port 10GbE switch...
 
damn, that's cool. and i thought i loved my 09 mitsu eclipse. the motor in it was killed by a loose oil filter (i wasn't changing my own oil) and i'm sol, so now i'm going to probably get a suzuki sv650s. been wanting to learn to ride, myself. anyway, awesome stuff there.
 
damn, that's cool. and i thought i loved my 09 mitsu eclipse. the motor in it was killed by a loose oil filter (i wasn't changing my own oil) and i'm sol, so now i'm going to probably get a suzuki sv650s. been wanting to learn to ride, myself. anyway, awesome stuff there.
The color on the Atom is a tribute to my late lamented '95 Talon. It was totaled twice, the second time 101 miles after the first, when I was T-boned by a 16-year-old kid with a California learner's permit who drove his 18-wheeler through a red light in NJ. :eek:

There's more on my Atom here.

To try to at least be somewhat relevant to networking, my BMW wagon has its own file server w/ 2500+ CD images on it, which does text-to-speech and uses the stock head unit for control. No point in doing that on the Atom - I wouldn't be able to hear anything.
 
The color on the Atom is a tribute to my late lamented '95 Talon. It was totaled twice, the second time 101 miles after the first, when I was T-boned by a 16-year-old kid with a California learner's permit who drove his 18-wheeler through a red light in NJ. :eek:

There's more on my Atom here.

wow. that's awful luck. looks like fresh paint on that talon. when i get a chance, i'd love to flip a VW Corrado SLC in that exact color with some pretty fibre rims.

To try to at least be somewhat relevant to networking, my BMW wagon has its own file server w/ 2500+ CD images on it, which does text-to-speech and uses the stock head unit for control. No point in doing that on the Atom - I wouldn't be able to hear anything.

:p - That's more or less the project I intended on starting before playing with some forced induction on the mitsu; it was nearly paid off. (that's a story i'm happy to forget; let's just say superlube won't be getting any more business from me.) What did you use for the file server? I assume you have it set up to do things like sync over wifi, etc. (+4G, i'd bet)

Have you been through Monty, CA while Jay Leno was putting on a show? I was in uniform a few years back while he was hosting a show during the motorcycle races. I miss that city in particular. I had a Jetta GLX back then for cruising down the coast. :)

Bookmarked your blog; i miss road tripping; life has kept me pretty busy. My last trip was coast to coast in the Jetta on i10 (kinda fun, kinda boring. 150mph stretches through desert hoping i had enough gas was interesting...) and more city miles in the Mitsu than you can shake a stick at. I must have put ~75,000 miles on it in about 2.5 years. (yep, double checked my math there. on more than a few occasions i was hitting close to 4,000 miles in a month lol.)

After I get some time in with the sv650s, i'd like to get an Audi TT. I have a feeling once i upgrade the car, i'll forget about the bike. There's a few cities around here that I see the TT at great prices. Just waiting to finish up school (~9 months)

(As much as i love the Corrado, i want the TT for a daily driver :p)

[edit/] http://www.youtube.com/watch?v=WaWoo82zNUA this is hilarious
 
:p - That's more or less the project I intended on starting before playing with some forced induction on the mitsu; it was nearly paid off. (that's a story i'm happy to forget; let's just say superlube won't be getting any more business from me.) What did you use for the file server? I assume you have it set up to do things like sync over wifi, etc. (+4G, i'd bet)
It started out as a Phatbox (ARM/Linux) and grew from there. The whole BMW is now a USB peripheral - I can plug it into my PC and program / control all of the car's systems - lights, climate, etc.

Have you been through Monty, CA while Jay Leno was putting on a show? I was in uniform a few years back while he was hosting a show during the motorcycle races. I miss that city in particular. I had a Jetta GLX back then for cruising down the coast. :)
Not one of his shows, though I've met him and chatted (he has an Atom as well, 3 production numbers below mine). I used to do the Crystal Cove show (before it turned into Cars & Coffee and got thrown out of the mall). There's an Atom under this mob of people. More here.
 
Dear sweet fluffy lord! That is an awesome setup.

Overhead on a modern machine is really just not that much for 10GbE. Oh, sure it isn't NOTHING, and TCP offload and stuff is a lot more important for 10GbE than it is for 1GbE, but any modern machine should be able to push through most of the pipe, especially using jumbo frames.

My desktop has no issue at all saturating a pair of GbE links, and I think it rumbles along at something like 4-5% CPU in the process, some of which I suspect is the RAID0 array feeding it. I'd move to 3 GbE links, but my current array can barely saturate 2 (and actually less now that the arrays on both ends are pushing 50% and 80% utilized). That, and I didn't bother laying a 3rd Cat5e run between my desktop and the switch, and I am NOT going to rip open walls I just finished drywalling and painting less than 6 months ago.
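
For the curious, the offload/jumbo-frame knobs are easy to inspect on a Linux box. A rough sketch, with eth0 as a placeholder; jumbo frames only help if every device in the path uses the same MTU, and sar/mpstat come from the sysstat package:

Code:
# Show which offloads the NIC/driver currently have enabled
ethtool -k eth0

# Turn on the big ones (TSO / GSO / GRO) if they're off
ethtool -K eth0 tso on gso on gro on

# Bump to jumbo frames -- the switch and far end must match
ip link set eth0 mtu 9000

# Watch per-interface throughput and CPU while a transfer runs
sar -n DEV 1
mpstat -P ALL 1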

I am a lot more interested in a switch that has, say, a pair of 10GbE ports on it as uplink ports. I assume you could use them to connect a pair of machines instead of using them as a switch uplink? I'd still need them to be RJ-45 copper, though, as again I am not going to rip open walls to lay new wiring (and I am pretty sure the Cat5e I already laid should handle 10GbE, since it isn't a terribly long distance and there isn't a lot of EMI on the runs). Worst comes to worst, I could probably tape Cat6a to one of the current Cat5e runs to my computer and fish it back through the wall and ceiling. It's only about 20ft behind the walls/ceiling before it's in unfinished space for the rest of the run, and I only used a handful of loose coax brad staples to hold it in place.

The only machines I foresee wanting more than a single GbE link anytime soon are my desktop and the server. Maybe sometime in the future, but not any year soon. My wife doesn't care now and won't down the line, and my kids are almost 6, 4 and 2...so they aren't going to have computing needs beyond 10Mbps for at least the next 4-5 years.

Honestly though, for me it mostly matters once SSD prices come down enough that it's feasible to use them as main storage. Which isn't going to be soon. My current minimum requirement is 2TB of storage, or at least that is roughly how much everything aggregated takes up. It's growing at a rate of about 20-25GB per month on average, but the switch to 4K down the line is likely to increase my growth rate, as will things like higher-MP cameras and such (whenever that happens). It should still be a fairly linear growth rate with the occasional jump in requirements (such as if/when the kids start cluttering it with their stuff). Flash still seems to be dropping in price roughly geometrically, so maybe, just maybe, the price will catch up with my requirements.

Right now I really need 4TB to be safe and give me room to grow for at least a couple of years, which is what I have in the server, though my desktop is overwhelmed with a 1+1TB RAID0 array and a 500GB HDD (plus 120GB and 60GB SSDs). I figure 4TB is sufficient for 2-3 years, and out to maybe 6 years something like 6-7TB would be enough. If SSD prices halve even only every 2 years...maybe, just maybe. That makes 6TB around $700 in 6 years. Part of the reason for needing the extra storage is that HDDs start getting painfully slow (for me) once they get much over 50% full, so I need to leave lots of spare area. SSDs can be filled much fuller, so long as the workload is mostly sequential and there isn't much random I/O cluttering them up and requiring garbage collection and TRIM. So for bulk storage I could probably push a flash drive/array to more like 80-90% full before worrying about expanding it. So in 5-6 years I might still be able to get away with only 4TB and have a little room to expand. That might be sub-$500 worth of SSD storage by then.

It's tempting, especially if 10GbE is affordable by that point. I drool at the thought of shoving things over the network at 800-1000MB/sec. That 5GB video, there in 5 seconds. That 20GB archive, don't bother making coffee, it'll be done in less than 30s. A 1.2GB ISO, don't blink.
 
Dear sweet fluffy lord! That is an awesome setup.
Thanks! I think I should start a "Your picture next to the racks for $25" (like "Pictures of your kid with Santa: $25") campaign to help offset the electricity bill. :D

Overhead on a modern machine is really just not that much for 10GbE. Oh, sure it isn't NOTHING, and TCP offload and stuff is a lot more important for 10GbE than it is for 1GbE, but any modern machine should be able to push through most of the pipe, especially using jumbo frames.
Unfortunately, there isn't enough bandwidth in a classic PCI slot (or older bus designs) to put a 10GbE card in there, even if such a card existed. It would be interesting to benchmark the variety of retired systems I have in my "museum", from a PS/2-50 through Pentium 4 systems, to see how networking performance scales up on those older boxes.

I am a lot more interested in a switch that has, say, a pair of 10GbE ports on it as uplink ports. I assume you could use them to connect a pair of machines instead of using them as a switch uplink? I'd still need them to be RJ-45 copper, though, as again I am not going to rip open walls to lay new wiring (and I am pretty sure the Cat5e I already laid should handle 10GbE, since it isn't a terribly long distance and there isn't a lot of EMI on the runs). Worst comes to worst, I could probably tape Cat6a to one of the current Cat5e runs to my computer and fish it back through the wall and ceiling. It's only about 20ft behind the walls/ceiling before it's in unfinished space for the rest of the run, and I only used a handful of loose coax brad staples to hold it in place.
That sort of switch is quite common. In the managed switch space, they are often used as "top of rack" switches in datacenters, where there are lots of 1GbE ports and a much smaller number (2 or 4 is common) of 10GbE uplink ports. That's just the way they're marketed - the uplink ports can be used for anything you want.

The problem with older models of switch is that the uplink ports use some nearly-obsolete transceivers, like XENPAK or X2. Those were the first pluggable transceivers for 10GbE and almost all of the "smarts" is in the transceiver - the interface to the switch fabric is basically a self-clocked parallel port. Since many of those switches were sold to people who never used the 10GbE ports, it made sense to shift the cost into optional modules. Modern switches will use SFP+, which is a simple interface converter - all of the serialize/deserialize, clocking, etc. is on the switch motherboard.

Complicating the issue is the lack of a simple transceiver with an RJ45 port - despite these being common in 1GbE, in both GBIC and SFP formats, I've never seen an SFP+ one (or XENPAK, for that matter).

This means that you need to either use fiber (not a problem for me - in the server rack picture, there's a pair of aqua [50 micron] jumpers between the Cisco switch's 2 10GbE ports and the Dell's fiber ports) or a direct attach cable, which is a pre-made length of cable with permanently attached ends using the SFP+ form factor. Unfortunately, what's on the wire isn't convertible to standard RJ45 wiring.

The only machines I foresee wanting more than a single GbE link anytime soon are my desktop and the server. Maybe sometime in the future, but not any year soon. My wife doesn't care now and won't down the line, and my kids are almost 6, 4 and 2...so they aren't going to have computing needs beyond 10Mbps for at least the next 4-5 years.
Well, the longer you wait, the less expensive things will be. And a lot of the bugs will be worked out.

Honestly though, for me it mostly matters once SSD prices come down enough that it's feasible to use them as main storage. Which isn't going to be soon. My current minimum requirement is 2TB of storage, or at least that is roughly how much everything aggregated takes up. It's growing at a rate of about 20-25GB per month on average, but the switch to 4K down the line is likely to increase my growth rate, as will things like higher-MP cameras and such (whenever that happens). It should still be a fairly linear growth rate with the occasional jump in requirements (such as if/when the kids start cluttering it with their stuff). Flash still seems to be dropping in price roughly geometrically, so maybe, just maybe, the price will catch up with my requirements.
Aside from price, you really need a direct PCIe attachment to get the most performance out of the SSDs, particularly at the higher densities. I'm running OCZ Velodrive SSDs in my 'zillas - they're a PCIe x8 device with onboard SAS RAID (LSI SAS2008 family). So each of the 4 SSD "zones" on the card has its own SAS bus to the RAID controller. Some of the [vastly] more expensive enterprise SSDs dispense with the industry-standard RAID controller altogether - if the card has a proprietary driver, it doesn't matter what the underlying transport is - why pretend to be a SAS device?

All of this SSD stuff seems to have created renewed interest in advanced disk technologies - WD/HGST is betting on helium, Seagate on shingled recording. In addition to improved capacity, we'll see higher transfer rates from these drives.

For SSD to replace disk completely for most applications, we're going to need to come up with a better way to estimate "media" lifetime, as well as provide an economical way for customers to restore the media to "as-new" state even after the warranty ends - many disk drives are still in use long after the warranty is over. This might be provided by swappable flash modules, or via improvements in the flash chips themselves. Right now, the emphasis is on smaller (physical die size) chips and not a lot of thought is being given to what happens on that media 5 or 10 years later.

Eventually disk will be replaced by something. And, like tape today, there will be "disk is dead" predictions by industry pundits, who will then be surprised to find out how much disk storage is still being sold.

I've been in this industry long enough to have heard the "in 5 years we'll be storing all data on..." for bubble memory, sugar cubes, etc. - but the vast majority of data is still on spinning rust.

Right now I really need 4TB to be safe and give me room to grow for at least a couple of years, which is what I have in the server, though my desktop is overwhelmed with a 1+1TB RAID0 array and a 500GB HDD (plus 120GB and 60GB SSDs). I figure 4TB is sufficient for 2-3 years, and out to maybe 6 years something like 6-7TB would be enough...."

I plan on the 'zilla design to be good for 10+ years, with a mid-life refresh. My plan is to install 6TB drives (or larger) once one of the competing 6TB technologies is the clear winner. I will probably switch from SATA to SAS at the same time, mostly to expand the range of drives I can put in the 'zillas.
 
The PCIe bus is rapidly becoming the bandwidth limit in my test boxes here. Other than an X79 chipset board with 40 lanes of PCIe (but no PCIe 3.0 support), you run out of lanes pretty quickly, even on the ASUS Z87-WS board, which adds "fake" lanes using a PLX chip to offer four PCIe 3.0 x16 slots.

The Intel X540 is OK in an x4 slot (PCIe 2.0), but the two-port card would be quite limited unless it's in an x8 slot. Non-server boards (aside from X79) have 16 lanes, and a RAID card + 10GbE NIC uses up all 16 of them. If you are building Adobe workstations (as I am), where two CUDA cards are a cost-effective way to boost rendering performance, you are essentially forced to an X79 chipset or a server board if you want to run dual graphics cards, a RAID card, and a 10GbE NIC (48 PCIe lanes are required to run those four cards at top speed). That means a dual-processor server board, unless you're OK running the cards under their recommended bandwidth.

Once you dig under the hood of these chipsets a bit, you realize that 10GbE is pushing the envelope if you need disk I/O to match.
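
The lane math is quick to sanity-check: PCIe 2.0 is roughly 500MB/s per lane and PCIe 3.0 roughly 1GB/s, while a single 10GbE port is ~1.25GB/s, so one port wants at least a 2.0 x4 link and a dual-port card wants x8 (or 3.0 x4). To see what a card actually negotiated on a Linux box (the 03:00.0 bus address is a placeholder):

Code:
# Find the NIC's bus address, then check negotiated link speed/width
lspci | grep -i ethernet
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'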
 
