FlexQoS 1.3.2 - Flexible QoS Enhancement Script for Adaptive QoS

Hey, I desperately need your help; something is very wrong with my QoS setup, I think. I've moved from the AX58U to the AX86U (fresh install, of course: installed Merlin, factory reset, then manually set everything up), and when Steam downloads games I see 8% packet loss locally to the router (tested with WinMTR) as well as on TeamSpeak, where it's 20% loss inbound and 6-8% loss outbound.

QoS is set to adaptive, fq_codel, FlexQoS 1.3.1 is active, and I have an AppDB rule redirecting Valve Steam to "file transfers".
I'm using spdMerlin to automatically set my QoS limits every 30 minutes. They're usually between 600 and 800 Mbit/s download and 32-48 Mbit/s upload.

This is my QoS priority list:
[screenshot: QoS priority list]


FlexQoS:
[screenshot: FlexQoS rules]


- 2.5 Gbit LAN to router (1G shows the same behaviour)
- 1000/50 cable WAN connection (DOCSIS3, Germany, Vodafone) using a TC4400

I also had a gaming rule for my PC running Steam; I've removed both the gaming rule and the AppDB redirection, to no avail.

When I disable QoS completely, Steam downloads at full speed (it didn't while QoS was active), but the packet loss "only" reaches 1-2%, which, compared to the 20% loss with QoS enabled, tells me it's not the physical connection that's at fault. I can also fully saturate the connection to/from the router in both directions with 20 iperf3 streams without any packet loss accumulating.


This never happened on my AX58U; what could possibly be wrong? (To be clear, the AX58U never had to handle 1 Gbit downstream; I specifically upgraded to the AX86U when my downstream was increased from 500 Mbit to 1 Gbit, since its SoC should handle it well. The AX58U only managed 550 Mbit.)


Thank you so much - I'll provide any details you need to help debug this.
 
It's probably flowcache. If you SSH into the router and run fc disable, does it improve? Check whether it's on or off with fc status - most higher-end AX routers seem to need flowcache off to perform right.

Flowcache helps hit those very high Mbit rates, but I gather it sort of scrambles incoming data, causing it all to arrive in a jumble. It's a good recipe for ping spikes and packet loss, but also for very high transfer rates. My RT-AX68U improved remarkably with it off - world of difference when VoIP/torrents/games/Remote Desktop were all going on. I also have an RT-AX56U and turned it off there too, for good measure. Having an Xbox downloading games killed the network with flowcache enabled; with it disabled, the Xbox can't cause nearly as much harm. :)
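
For anyone who wants to try this, the whole check is just a couple of commands over SSH. A minimal sketch, assuming the fc flow-cache control utility is present on your model (it should be on HND units like the AX86U); exact output varies by firmware:
Code:
fc status     # report whether the flow cache is currently enabled
fc disable    # turn it off (lasts until the next reboot)
fc enable     # turn it back on if you want to compare behaviour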
 
@dave14305

Trying to update to 1.3.2 (while in dev mode)

I'm getting this returned:

FlexQoS v1.3.1 released 2022-03-15
Development channel

Checking for updates
You have the latest version installed


Do I need to turn off dev mode to update? If so, what is the command to do this?

Thanks as ever, working well in dev mode 1.3.1 anyway.
 
It's probably flowcache. If you SSH into the router and run fc disable, does it improve? Check whether it's on or off with fc status - most higher-end AX routers seem to need flowcache off to perform right.

Flowcache helps hit those very high Mbit rates, but I gather it sort of scrambles incoming data, causing it all to arrive in a jumble. It's a good recipe for ping spikes and packet loss, but also for very high transfer rates. My RT-AX68U improved remarkably with it off - world of difference when VoIP/torrents/games/Remote Desktop were all going on. I also have an RT-AX56U and turned it off there too, for good measure. Having an Xbox downloading games killed the network with flowcache enabled; with it disabled, the Xbox can't cause nearly as much harm. :)
Do you know if this applies to the current stock firmware as well?
 
@dave14305

Trying to update to 1.3.2 (while in dev mode)

I'm getting this returned:

FlexQoS v1.3.1 released 2022-03-15
Development channel

Checking for updates
You have the latest version installed


Do I need to turn off dev mode to update? If so, what is the command to do this?

Thanks as ever, working well in dev mode 1.3.1 anyway.
Try ssh to router: flexqos stable
 
Try ssh to router: flexqos stable
Thanks, it seems the development channel is one release behind stable.

Hoping the new one contains the iptables chains update.

I think it does: CHANGED: Split iptables rules into separate upload and download chains to avoid unnecessary rule traversal
 
Thanks, it seems the development channel is one release behind stable.

Hoping the new one contains the iptables chains update.

I think it does: CHANGED: Split iptables rules into separate upload and download chains to avoid unnecessary rule traversal
If flexqos debug now shows 1.3.2, you should be good to go. You can also verify by running the following command:

iptables -t mangle -nvL
 
It's probably flowcache. If you SSH into the router and run fc disable, does it improve? Check whether it's on or off with fc status - most higher-end AX routers seem to need flowcache off to perform right.

Flowcache helps hit those very high Mbit rates, but I gather it sort of scrambles incoming data, causing it all to arrive in a jumble. It's a good recipe for ping spikes and packet loss, but also for very high transfer rates. My RT-AX68U improved remarkably with it off - world of difference when VoIP/torrents/games/Remote Desktop were all going on. I also have an RT-AX56U and turned it off there too, for good measure. Having an Xbox downloading games killed the network with flowcache enabled; with it disabled, the Xbox can't cause nearly as much harm. :)
Thank you, just tried this. It didn't help, though - packet loss inbound in TS3 went to 10% within about a minute of Steam downloading (it increases by 1% every couple of seconds).
 
qdisc htb 1: dev eth0 root refcnt 2 r2q 10 default 0 direct_packets_stat 401309
qdisc fq_codel 102: dev eth0 parent 1:2 limit 1000p flows 1024 quantum 1514 target 5.0ms interval 100.0ms
qdisc fq_codel 110: dev eth0 parent 1:10 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 111: dev eth0 parent 1:11 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 112: dev eth0 parent 1:12 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 113: dev eth0 parent 1:13 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 114: dev eth0 parent 1:14 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 115: dev eth0 parent 1:15 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 116: dev eth0 parent 1:16 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms
qdisc fq_codel 117: dev eth0 parent 1:17 limit 1000p flows 1024 quantum 300 target 5.0ms interval 100.0ms


I am looking for some help with bufferbloat on an AC3100.

I have flexqos on my router already.

However, when I look at the qdisc for Adaptive QoS, the default is 0. With a default of 0, any unclassified traffic goes into that default rather than into a class, and direct_packets_stat shows how many packets were sent without being assigned to any of the fq_codel queues.

From what I understand of htb, packets that can't be assigned to a class simply go into a FIFO queue that is hardware accelerated, and the way htb is written, that FIFO queue gets the highest priority. Correct me if I'm wrong - I'd love to understand qdiscs better.

Is there any way to change the htb default from 0 to, say, 14 or something in the middle?
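
For reference, the counters can be watched from SSH while a transfer is running - a minimal sketch, assuming the same WAN interface and class handles as in the qdisc dump above:
Code:
tc -s qdisc show dev eth0    # the root htb line includes direct_packets_stat
tc -s class show dev eth0    # per-class packet/byte counters for 1:2 and 1:10-1:17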
 
However, when I look at the qdisc for Adaptive QoS, the default is 0. With a default of 0, any unclassified traffic goes into that default rather than into a class, and direct_packets_stat shows how many packets were sent without being assigned to any of the fq_codel queues.
That many packets might suggest you have upload traffic not being picked up by the Adaptive QoS classifier. What unusual traffic do you have that doesn’t come from br0? Any Guest Network #1? Any other addons running like spdMerlin?
Is there any way to change the htb default from 0 to, say, 14 or something in the middle?
Not really. I don’t believe you can change it once it’s created by Adaptive QoS.
 
FlexQoS Version 1.3.2 - Released 16-Sep-2022

This is just a minor release with an iptables optimization. Nothing exciting or outwardly visible has changed.
  • CHANGED: Split iptables rules into separate upload and download chains to avoid unnecessary rule traversal
  • CHANGED: Tweaked 'debug' command output formatting for better readability
So far the split rule chains are running great! This was a brilliant idea, @dave14305.
 
That many packets might suggest you have upload traffic not being picked up by the Adaptive QoS classifier. What unusual traffic do you have that doesn’t come from br0? Any Guest Network #1? Any other addons running like spdMerlin?

Not really. I don’t believe you can change it once it’s created by Adaptive QoS.

Output of ifconfig
br0 Link encap:Ethernet HWaddr
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0


UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
RX packets:18990 errors:0 dropped:0 overruns:0 frame:0
TX packets:23052 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3589498 (3.4 MiB) TX bytes:18952936 (18.0 MiB)

eth0 Link encap:Ethernet HWaddr 0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:41255 errors:0 dropped:0 overruns:0 frame:0
TX packets:45606 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:19588661 (18.6 MiB) TX bytes:32533807 (31.0 MiB)
Interrupt:181 Base address:0x6000

eth1 Link encap:Ethernet HWaddr 0

UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:10591 errors:0 dropped:28 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:2402914 (2.2 MiB)

eth2 Link encap:Ethernet HWaddr

UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:16245 errors:0 dropped:27 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:5442696 (5.1 MiB)

fwd0 Link encap:Ethernet HWaddr 00:00:00:00:00:00

UP BROADCAST RUNNING PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1
RX packets:10487 errors:0 dropped:0 overruns:0 frame:0
TX packets:9120 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:1334858 (1.2 MiB)
Interrupt:179 Base address:0x4000

fwd1 Link encap:Ethernet HWaddr 00:00:00:00:00:00

UP BROADCAST RUNNING PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1
RX packets:16160 errors:0 dropped:0 overruns:0 frame:0
TX packets:12559 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:2232040 (2.1 MiB)
Interrupt:180 Base address:0x5000

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MULTICAST MTU:16436 Metric:1
RX packets:21609 errors:0 dropped:0 overruns:0 frame:0
TX packets:21609 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4979578 (4.7 MiB) TX bytes:4979578 (4.7 MiB)

lo:0 Link encap:Local Loopback
inet addr:127.0.1.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MULTICAST MTU:16436 Metric:1

vlan1 Link encap:Ethernet HWaddr

UP BROADCAST RUNNING PROMISC ALLMULTI MULTICAST MTU:1500 Metric:1
RX packets:21739 errors:0 dropped:0 overruns:0 frame:0
TX packets:24782 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4226238 (4.0 MiB) TX bytes:20304056 (19.3 MiB)

vlan2 Link encap:Ethernet HWaddr

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:424 (424.0 B)

No other scripts.
No guest wifi
 
Output of ifconfig
This doesn't really show anything helpful.

You can see what happens to the stats after manually adding a catchall filter for the upload direction (WAN interface). You can replace 14 with whatever class you would like (10-17).
Code:
tc filter add dev eth0 protocol all prio 99 matchall flowid 1:14
 
So far the split rule chains are running great! This was a brilliant idea, @dave14305.
Having followed the development of nftables, I've noticed there is a lot of emphasis in nftables on avoiding the processing of unnecessary rules. The recently added optimization option of nft looks for ways to combine similar rules for more efficient packet processing.

So this seemed like an easy way to achieve something similar with iptables. No idea if this has a measurable impact on performance (you'd need a lot of iptables rules before it would make a dent).
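
For readers who haven't looked at the chains, the general pattern is roughly the sketch below (the chain names and the mark rule are made up for illustration; FlexQoS's real chains and marks differ). The point is that an upload packet only walks the upload rules and a download packet only walks the download rules, instead of every packet being tested against every rule:
Code:
# hypothetical chain names, for illustration only
iptables -t mangle -N FLEX_UP
iptables -t mangle -N FLEX_DOWN
# upload: only packets leaving via the WAN interface enter the upload chain
iptables -t mangle -A POSTROUTING -o eth0 -j FLEX_UP
# download: only packets arriving on the WAN interface enter the download chain
iptables -t mangle -A PREROUTING -i eth0 -j FLEX_DOWN
# direction-specific classification rules then live in their own chain,
# e.g. a placeholder mark rule in the upload chain:
iptables -t mangle -A FLEX_UP -p udp --dport 3074 -j MARK --set-mark 0x1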
 
Having followed the development of nftables, I've noticed there is a lot of emphasis in nftables on avoiding the processing of unnecessary rules. The recently added optimization option of nft looks for ways to combine similar rules for more efficient packet processing.

So this seemed like an easy way to achieve something similar with iptables. No idea if this has a measurable impact on performance (you'd need a lot of iptables rules before it would make a dent).
I completely grasp what you mean. A week before you released this, I was actually imagining doing something like this (as odd as it sounds) while playing around with nftables and iptables configurations. The difference in performance and efficiency often comes down to how the chains are organized. After all, before nftables was fully adopted as the fw4 backend, later iptables versions were really just a frontend layered on top of nftables, and that's where this whole concept of restructuring the chains really caught on. I'm glad to see you taking a dive into the concept. On a heavily loaded QoS network, I imagine this adjustment will definitely improve efficiency, and even just for organization it makes it much easier to keep track of things when viewing the chains at a bird's-eye level.
 
This doesn't really show anything helpful.

You can see what happens to the stats after manually adding a catchall filter for the upload direction (WAN interface). You can replace 14 with whatever class you would like (10-17).
Code:
tc filter add dev eth0 protocol all prio 99 matchall flowid 1:14
Thank you dave14305 for your help so far.

This is the output of tc filter add:
Unknown filter "matchall", hence option "flowid" is unparsable
 
Thank you dave14305 for your help so far.

This is the output of tc filter add:
Modified command:
tc filter add dev eth0 protocol all prio 99 u32 match u32 0 0 flowid 1:14

Using the above command, I was able to stop direct_packets_stat from rising.


However, the net control packets are still a constant 1-2 kb/s. Any advice on how to narrow down the misclassified upload and how to properly classify it?
 
However, the net control packets are still a constant 1-2 kb/s. Any advice on how to narrow down the misclassified upload and how to properly classify it?
That would probably be DNS traffic from the router. It won’t show up under Tracked Connections. Watch the counters with:
Code:
iptables -t mangle -nvL OUTPUT
 
I am back on FlexQoS - downgraded my Fiber from gig to 200/200 (cost is less than half) and was having jitter and VoIP problems when Steam updates and Backups kicked in...

Thanks @dave14305 for Version 1.3.2 - it works great! I also noticed that upstream processing seems to use a different Core now than downstream (I see downstream using Core 1 and upstream Core 4 - rebooted a couple of times and seems to be consistent). I can run 200/200 full steam and the router can keep up with no jitter or lost packets....

For routers like the RT-AX86, don't forget to set
Code:
fc disable
otherwise FlexQoS does not work correctly... I had mentioned that in an earlier post but had brain freeze and totally forgot about it... Now I've added it back in
Code:
/jffs/scripts/firewall-start
so it survives reboots...
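
In case it helps anyone else, a minimal sketch of what that script might contain (assuming you're creating /jffs/scripts/firewall-start yourself; remember to make it executable with chmod a+rx):
Code:
#!/bin/sh
# /jffs/scripts/firewall-start - Merlin user script, runs after the firewall is (re)started
# keep the Broadcom flow cache off so traffic actually passes through the QoS shaper
fc disable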

Thanks again - FlexQoS makes the router so much better for me!!!
 
I am back on FlexQoS - downgraded my Fiber from gig to 200/200 (cost is less than half) and was having jitter and VoIP problems when Steam updates and Backups kicked in...

Thanks @dave14305 for Version 1.3.2 - it works great! I also noticed that upstream processing seems to use a different Core now than downstream (I see downstream using Core 1 and upstream Core 4 - rebooted a couple of times and seems to be consistent). I can run 200/200 full steam and the router can keep up with no jitter or lost packets....

For routers like the RT-AX86, don't forget to set
Code:
fc disable
otherwise FlexQoS does not work correctly... I had mentioned that in an earlier post but had brain freeze and totally forgot about it... Now I've added it back in
Code:
/jffs/scripts/firewall-start
so it survives reboots...

Thanks again - FlexQoS makes the router so much better for me!!!
Sadly this doesn't help me. Somehow I still get massive packet loss when Steam downloads, even with fc disable.

Does anyone have any ideas for debugging this? I had to disable QoS for now, as it performs worse on my AX86U (with 1 Gbit down) than it did on my AX58U (500 Mbit down).

I was under the impression that the AX86U would be fast enough to handle Adaptive QoS with fq_codel and FlexQoS on a gigabit cable connection. Is this not the case?
 