Low UL throughput over WiFi on new fiber line... Normal?


Final Update 2 (lol): I think I have it figured out, for anyone else who runs into this with Lumen/Quantum Fiber in the future.

It's just an acceleration issue on the UL end from the ONT. (It seems to offload to the other side of the RJ-45.)

  • I pulled out legacy router hardware with a slower CPU and the ability to disable features like NAT boost: an 800 MHz A9 (BCM4708).
  • As expected, throughput bottlenecks around 350 Mbps over wired, which coincides with the 200-400 Mbps range I was getting over wireless, plus the wired glitching. That roughly emulates what the NPU in a WiFi radio is doing when broadcasting (a sketch of the test is below).
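
For anyone who wants to reproduce the A/B test, here's a minimal sketch of what I mean, assuming iperf3 is installed on both ends; the server address is a placeholder for whatever box sits on the LAN side of the router:

```python
# Minimal sketch of the wired-vs-wireless A/B throughput test described above.
# Assumes iperf3 is installed on both ends; the server IP is a placeholder.
import json
import subprocess

SERVER = "192.168.1.100"  # hypothetical iperf3 server on the LAN side

def iperf3_mbps(server: str, download: bool = False, seconds: int = 10) -> float:
    """Run one iperf3 test and return measured throughput in Mbps."""
    cmd = ["iperf3", "-c", server, "-t", str(seconds), "-J"]
    if download:
        cmd.append("-R")  # reverse mode: server sends, client receives
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    print(f"UL: {iperf3_mbps(SERVER):.0f} Mbps")
    print(f"DL: {iperf3_mbps(SERVER, download=True):.0f} Mbps")
```

Run it once over Ethernet and once over WiFi with NAT boost toggled, and the bottleneck moves exactly as described above.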

The only way to circumvent this is to increase bonding and/or use higher MIMO configurations on the link, though I've had issues with an AC86U (800 MHz radio NPU) to AX86U (1.5 GHz radio NPU) transmission.

It works from the BE800 to the AX86U though (both have 1.5 GHz+ NPUs), at least when you're connected at full rate. Intel AX210 to BE800 is intermittent unless it's at the full 2400 Mbps link rate with flow cache on.
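
For context on why full link rate matters so much, here's my own back-of-the-envelope 802.11ax PHY-rate math (standard HE tone counts and symbol timing assumed):

```python
# Back-of-the-envelope 802.11ax (HE) PHY rate:
# rate = data_subcarriers * QAM_bits * coding_rate * streams / symbol_time
SYMBOL_TIME = 12.8e-6 + 0.8e-6           # 12.8 us OFDM symbol + 0.8 us guard interval
DATA_SUBCARRIERS = {80: 980, 160: 1960}  # per the 802.11ax tone plans

def he_phy_rate_mbps(width_mhz, qam_bits, coding, streams):
    return DATA_SUBCARRIERS[width_mhz] * qam_bits * coding * streams / SYMBOL_TIME / 1e6

# AX210 at full rate: 160 MHz, 2 streams, MCS11 (1024-QAM, 5/6) -> ~2402 Mbps
print(round(he_phy_rate_mbps(160, 10, 5/6, 2)))  # 2402
# Same link negotiated down to MCS7 (64-QAM, 5/6) -> ~1441 Mbps
print(round(he_phy_rate_mbps(160, 6, 5/6, 2)))   # 1441
```

So a link that falls back to a weaker QAM loses around 40% of its ceiling before the NPU even gets involved.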

The main A53 in an AX86U, for example, is a non-factor when it comes to wireless network processing on modern hardware. The BCM4708 is a nice test bed because everything is done on the main CPU on that older platform; there is no separate radio NPU. The bottleneck is purely the CPU in that case, unless NAT boost is enabled.

The downside of NAT boost is that it increases latency and bottlenecks UL further, but with it I can get around 500 Mbps over wireless DL on a 2013 router.

The 5500XK ONT simply offloads throughput to the other side of the RJ-45. The switch/main CPU in the AX86U can process it fine; a laptop with a Raptor Lake i7 is the same situation.

The 5500XK ONT CAN do processing on both ends after an internal speed test or after resetting its internal stats table, as I was able to demonstrate.


Not sure if it's just a bug or an intentional feature, but my brain cancer/OCD is now over.

I would say the ONT is the bottleneck (bugged?), but a slower CPU/NPU can be too, especially if the link rate is at a weaker QAM, which does make sense.

I don't think any other fiber ISPs share this issue. Just weird.
 
I would for sure recommend sending in your findings to Lumen.
 
I've been sending info to Imtakintou, who is forwarding it to a labs team. Not sure if anything will come of it.

I'm also not sure if this is intentional or a bug, considering acceleration works "pre-router" and then bottlenecks shortly after.

Modern PCs can obviously handle offloading, but I have two forms of proof that acceleration can be done inside the 5500XK unit.

I don't think the OLT side has trouble, since it works "on demand." I was also able to confirm I'm on a 1:32-split Calix. There are only 2-3 slots in use per 8-unit MST in my neighborhood, so basically 12 or so users.
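
Rough shared-capacity math for why I rule out the PON side (standard GPON line rates assumed; the user count is my estimate from above):

```python
# Rough GPON shared-capacity check. Standard GPON line rates (assumed):
# ~2.488 Gbps downstream and ~1.244 Gbps upstream, shared behind the splitter.
GPON_DL_MBPS = 2488.32
GPON_UL_MBPS = 1244.16
USERS = 12  # rough estimate of subscribers on this splitter

# Absolute worst case: every user uploading flat-out at the same time.
print(f"UL floor per user: {GPON_UL_MBPS / USERS:.0f} Mbps")  # ~104 Mbps

# Realistic case: residential upstream is bursty, so a single user can
# usually grab most of the 1.24 Gbps, which is why an on-demand speed
# test can still max out a 500-940 Mbps tier.
```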

Unless the issue comes from a HW conflict between the 5500XK and the topology here, it's pretty weird.

I asked for a Calix GigaPoint with Broadcom hardware; they won't allow it for newer deployments. They want to switch everyone to the 5500XK (GPON) or 6500XK (XGS-PON).
 
I sure hope they fix it before they force me to switch to the C5500XK. I'm holding onto my Calix ONT for dear life.
 
I assume this issue is an accidental bug with your topology or a misconfiguration. I wish someone here or on Reddit with a C5500XK and 940 Mbps service would do some tests to see if it's just your location's topology.

Lumen's SmartNIDs seem to be very buggy, with terrible software QA. There was an issue with the C6500XK earlier this year where, if you had the 3 Gbps or 8 Gbps service and tried to max out your connection on the 10 GbE port in bridge mode, the whole unit would lock up until you rebooted it. How the hell does that not get found during development and testing in the lab, when the whole point of the XGS-PON service is to sell expensive multi-gig speeds? Instead it takes a bunch of people on Reddit doing the testing and reaching out to that Reddit user, who forwards it to the engineering lab, for a fix to roll out a couple of months later.
 
I was hoping I could at least try an indoor Calix unit here, but they're saving whatever they have left for legacy replacements and phone line connectivity.

It's possible that there's a hardware conflict with the specific Calix build they're running here, which is why it glitches out, but it definitely has something to do with the 5500XK ONT/ONU and offloading.

I know this because my BE800 is the strongest offering I own. It has a stronger QCA processing section relative to the AX86U's 1.5 GHz Broadcom A7 on the radio itself, which results in faster speeds on the other end. It's clear as day what's happening now.

I'll also mention again that transparent bridge mode ends up bricking itself after 1-2 days, after which I can't get over 280 Mbps throughput even on wired through the same BE800. That could just be a normal HW conflict on my end with the BE800 itself; I haven't tested it enough to figure that part out.

Electrical/RF interference was never an issue for me. I can reset the "stats table" in the GUI and max out throughput whenever I want to. I ended up downgrading to 500/500 because of this issue.

Lumen's own Q9500WK pod also bottlenecks. It uses an ARM-based EcoNet/Airoha (MediaTek subsidiary) chip as the main NPU and will do 900 Mbps max throughput over wireless when the GUI is reset.

The 5500XK/6500XK are also EcoNet/Airoha, as are the future WiFi 7 W1700 and W1701. It seems Axon/Lumen are investing heavily in this.

The previous Calix HW was Broadcom-based on the ONT/ONU end, at least the GigaPoint 800 line.


BTW, speaking of which, there's a new chip that does both GPON and XGS-PON, rather than the separate chip designs in the 5500XK and 6500XK. I wouldn't be surprised if they streamlined it into one unit and Axon scrapped the 5500XK/6500XK in a couple of years.

 
I've got a Calix 716-I R2 that has a hard-coded TCP connection limit, and I'll take that over the current SmartNIDs.

Of the people here locally that have the C5500XK, each one has it in bridge mode and none has ever had any issue with it locking up or anything.
 
I don't doubt that it could be a local topology issue or a faulty setting.

I just know that CPU-side performance is influencing throughput; the 5500XK unit is offloading UL, which makes me question whether this is intentional or just a bug.

You can even see this happening over wired, where it will grab a specific threshold and then randomly speed up as an extension of the default acceleration. It was more noticeable on the 940 Mbps tier.
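
The step-up is easy to catch if you log interface counters once a second during a long upload. A minimal sketch, assuming psutil is installed; the interface name is a placeholder:

```python
# Minimal sketch to watch the "grab a threshold, then speed up" behavior:
# sample the interface TX counter once a second during a long upload.
# Assumes psutil is installed; the interface name is a placeholder.
import time
import psutil

IFACE = "eth0"  # replace with your WAN-facing interface

def tx_bytes() -> int:
    return psutil.net_io_counters(pernic=True)[IFACE].bytes_sent

prev = tx_bytes()
for second in range(60):
    time.sleep(1)
    cur = tx_bytes()
    mbps = (cur - prev) * 8 / 1e6
    print(f"t={second + 1:3d}s  UL {mbps:7.1f} Mbps")
    prev = cur
```

You'll see it sit flat at the threshold for a while, then jump when the extra acceleration kicks in.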

Wired bugging out in transparent bridge mode could just be an issue with my BE800. Not sure; I'll have to keep messing with it.

I believe the NPU portion of a radio is a "bottleneck" since it's variable depending on the hardware used. The BE800 is slightly stronger than the AX86U. If the 5500XK is accelerating on-unit, there's no issue.

As I said, even the Q9500WK pod with Lantiq/Intel/MaxLinear radios will pull a max of 900 over wireless until it bottlenecks or I reset. And that's probably a weaker ARM design; I can't find much info on the specific EcoNet/Airoha design Axon used for the pod.

Whatever the case, the issue involves the 5500XK. I'd rather take defective Calix hardware at this point.
 
I thought about this and it does make sense considering the issues with the 6500XK.

The 5500XK supports 8 different GPON deployment cards:

  • Auto-detection of upstream OLT: works with Calix and Adtran OLTs
    • Calix cards supported: GPON-4 (EXA 3.4.10.35), GPON-4r2 (EXA 3.4.10.35), GPON-8 (EXA 3.4.10.35), GPON-8X (EXA 3.4.10.35), GPON-16X (EXA 3.4.10.35), E3-2 (AXOS R21.1.0)
    • Adtran cards supported: GPON-4 (10.0.1.5), GPON-8 (10.0.1.5)
Source: https://www.axon-networks.com/products/gpon-ont

I'll just assume it's bugged with the specific OLT being used; there are 6 Calix options. It probably never had QA, considering it works out of the box on reset but bottlenecks after 5 minutes.

That, or it's an intentional "feature," given the 5500XK can offload UL to a stronger CPU at the other end. ¯\_(ツ)_/¯



Anyway, my routing went to complete crap today. Pinging Chicago servers from Orlando results in 300-400 ms; it was 40 ms this morning. I'm getting really frustrated with Lumen.

Edit: I can route through Dallas and get a 60 ms ping (still higher than usual). I guess their backend is just completely messed up right now.
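
If anyone wants to compare routes the same way, here's a quick sketch; the targets are placeholder IPs, not actual Lumen servers:

```python
# Quick latency comparison across regions (the test described above).
# Assumes a Unix-like ping; the target hosts are placeholders.
import re
import subprocess

TARGETS = {
    "Chicago": "203.0.113.10",  # placeholder test server
    "Dallas": "203.0.113.20",   # placeholder test server
}

for name, host in TARGETS.items():
    out = subprocess.run(["ping", "-c", "5", host],
                         capture_output=True, text=True)
    # Parse the avg value from the min/avg/max summary line.
    m = re.search(r"= [\d.]+/([\d.]+)/", out.stdout)
    avg = m.group(1) if m else "n/a"
    print(f"{name:8s} avg RTT: {avg} ms")
```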
 
I'm not home, but people in the Twin Cities are also saying that the routing has been messed up for the past few days with high pings, though people at my house and the folks I know haven't experienced it. I'm guessing some BGP issues in certain regions.

It makes sense that it could be the specific OLT you connect to, like a hardware issue or a software misconfiguration.
 


Imtalkintou said he doesn't believe there's an OLT-side config issue, and I have to agree; I have no local congestion here. But firmware (5500XK) interacting badly with a specific GPON deployment card does make sense to me, especially since it fixes itself with the GUI "stats" reset at the click of a button.

I think Axon/Lumen should just cut their losses and make a single newer ONT unit with the Airoha AN7581 (MediaTek subsidiary) "all-in-one" ARM chip I linked. The same model is in the newer Axon-designed W1700 and W1701 WiFi 7 hardware as the main CPU; the FCC NDA expired in late March.

It would streamline XGS-PON and GPON into one unit, and updates would be linear instead of split between two models. It would obviously cost more (EE design and unit cost), but there would be fewer issues long term, especially when converting legacy GPON to XGS; customers would be able to keep their hardware instead of being forced to switch when the transition happens.


As for routing: California ping is fine and local ping is fine.

The Midwest, ATL, and the East Coast in general are completely broken. I never had an issue before today.
 
The UL issue happens a lot more often now on wired, but that GUI reset basically saves the day. It could be a compatibility issue between something at the OLT and the settings, but the crux of the issue is the Axon SmartNID unit.

I assume a weaker 600 MHz MIPS Calix design wouldn't share these issues given a Calix-to-Calix setup, even if the splitters that pass the data happen to be suboptimal; same for the SFP at the other end.

It's a packet bug for sure. Clearing the WAN stats parameters (5500XK) fixes it 100% of the time.
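
Since it keeps recurring, a rough watchdog sketch for catching the bottleneck when it comes back, assuming the speedtest-cli Python package; the threshold is just an example value for my 500/500 tier:

```python
# Sketch of a watchdog for the recurring UL bottleneck: run a periodic
# upload test and flag when throughput falls below the expected tier.
# Assumes the speedtest-cli package (pip install speedtest-cli); the
# threshold is an example value for a 500/500 plan.
import time
import speedtest

THRESHOLD_MBPS = 400  # alert if UL drops well below the 500 Mbps tier

while True:
    st = speedtest.Speedtest()
    st.get_best_server()
    ul_mbps = st.upload() / 1e6  # upload() returns bits per second
    flag = "  <-- bottlenecked, time for the GUI stats reset" \
        if ul_mbps < THRESHOLD_MBPS else ""
    print(f"UL: {ul_mbps:.0f} Mbps{flag}")
    time.sleep(1800)  # re-test every 30 minutes
```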

If there are any engineers who work on this stuff, I'd be happy to try to send info over to Quantum/Lumen through an employee proxy.

Edit: Any external server outside of the local one will bottleneck unless I clear the GUI stats, reseat the SC/APC connector, or run an OLT test. Lovely!
 
