FlexQoS 1.2.4 - Flexible QoS Enhancement Script for Adaptive QoS

BETA Version 1.1.1 (develop branch)

NEW

  • Experimental htb+fq_codel qdisc to replace htb+htb+sfq (opt-in)
  • The HTB burst parameter is now based on the "burst by duration" concept borrowed from the SQM scripts at OpenWrt: burst is defined as the number of bytes that can be sent at line speed in 1 millisecond. This is an experimental design change, so feedback is appreciated.
  • Burst is now defined equally for all classes, based on 1 ms of the configured upload or download bandwidth (i.e. line rate), with a minimum of 3200 bytes to match the ASUS standard.
  • Burst used to be defined as 1% of the rate, rounded down to the nearest 1600-byte increment, also with a minimum of 3200 bytes. (Both calculations are sketched at the end of this post.)
CHANGED
  • Custom rate rules are now included in debug output
  • Improved IPv6 device name matching: local IPv6 connections are matched to the corresponding custom names from the Network Map Client List, using the MAC address as the foreign key (see the sketch at the end of this post)
  • Reverse logic for backup file retention during uninstall (@maghuro)
  • Expanded iptables mask bits to include upload/download bits
  • Code style changes regarding readonly variables and local versus global variables
  • Code style changes with variable names as uppercase
  • Reordered custom bandwidth settings in addon settings storage to remove earlier kludge in bandwidth UI
  • Renamed bandwidth custom settings variable name
  • Combined upload/download mark variables into common variable
  • Refactored code into more discrete functions to avoid repetition
  • Flush the FlexQoS iptables chain during start, instead of deleting individual rules before re-adding them
FIXED
  • Cleaned up a temporary file left behind during update downloads
  • 5-minute cron job could fail if Entware coreutils-date was also installed
The fq_codel support requires much testing to ensure it sticks and doesn't get overridden or cause other issues with Adaptive QoS. I have only tested on 384.19, so any 386.1 beta testers are encouraged to try it.
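
A quick way to verify it's active (the same check shown in the posts below):
Code:
tc qdisc ls | grep fq_codel
If fq_codel is in place, you should see fq_codel leaf qdiscs on eth0 and br0 under classes 1:10 through 1:17.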

Switch to the develop branch at the command line with flexqos develop
Switch back to stable branch with flexqos stable
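
For the curious, the burst math described under NEW works out roughly like this (a sketch with example values and variable names, not the script's actual code):
Code:
# Burst-by-duration sketch (illustrative values; not FlexQoS's actual code)
RATE_KBPS=50000                        # configured bandwidth in kbit/s (example)
# New method: bytes transmittable at line rate in 1 ms (kbit/s / 8 = bytes per ms)
BURST=$(( RATE_KBPS / 8 ))
[ "$BURST" -lt 3200 ] && BURST=3200    # clamp to the ASUS minimum of 3200 bytes
# Old method: 1% of the rate in bytes/s, rounded down to the nearest 1600 bytes
OLD_BURST=$(( RATE_KBPS * 125 / 100 / 1600 * 1600 ))
[ "$OLD_BURST" -lt 3200 ] && OLD_BURST=3200
echo "new: $BURST bytes, old: $OLD_BURST bytes"   # new: 6250, old: 62400

And the IPv6 name matching under CHANGED is conceptually a join on the MAC address (again only a sketch; the map file and its layout are hypothetical):
Code:
# IPv6 address -> MAC from the neighbor table, then MAC -> custom name from a
# hypothetical MAC-to-name map derived from the Network Map Client List.
ip -6 neigh show dev br0 | awk '$2 == "lladdr" { print $1, $3 }' |
while read -r IP6 MAC; do
    NAME="$(grep -iF "$MAC" /tmp/clientnames.txt | cut -d' ' -f2-)"  # hypothetical file
    echo "$IP6 -> ${NAME:-unknown}"
done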
 
Switched to develop once again, thank you! :) BTW, I kept an eye on the cron job at 3:30 am, and it never had to reinitialize QoS in the last few weeks. Looks like it's really not needed anymore, as you already mentioned/asked in the past.

Regarding fq_codel testing: how could we check on a regular basis whether it's still active, so we can give feedback?
 
Jan 14 19:19:00 FlexQoS: /jffs/addons/flexqos/flexqos.sh (pid=4852) called in unattended mode with 1 args: -check
Jan 14 19:19:00 FlexQoS: iptables rules already present
Jan 14 19:19:01 FlexQoS: No TC modifications necessary

These still pop up every few hours on mine
 
What’s in the syslog 5 minutes prior?
 
A bunch of device connect and disconnect messages, and a new one about not being able to add an IP because it's already in the UDB.

Edit: I'll let it run for a day before giving feedback. I don't see any performance impact, so I think it's just logging.
Edit 2: @dave14305 I restarted everything, and since then the message doesn't pop up anymore + fq_codel is working perfectly.
 
Don't know if this was happening before, but as I'm focusing on it atm, I saw the following minor scale glitch in the graphs. Sometimes the vertical scale is missing between min and max, depending on the max value.
[graph1.png vs graph2.png: screenshots of the graphs with and without the intermediate vertical scale labels]


But fq_codel is looking good so far (384.19). Will keep an eye on it.
Code:
admin@router:/tmp/home/root# tc qdisc ls | grep fq_codel
qdisc fq_codel 8002: dev eth0 parent 1:10 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8008: dev eth0 parent 1:11 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800a: dev eth0 parent 1:12 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800c: dev eth0 parent 1:13 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800e: dev eth0 parent 1:14 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8010: dev eth0 parent 1:15 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8012: dev eth0 parent 1:16 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8014: dev eth0 parent 1:17 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8001: dev br0 parent 1:10 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8006: dev br0 parent 1:11 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8009: dev br0 parent 1:12 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800b: dev br0 parent 1:13 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800d: dev br0 parent 1:14 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800f: dev br0 parent 1:15 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8011: dev br0 parent 1:16 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8013: dev br0 parent 1:17 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
 
I’ve seen the same glitch with the log scale. Waiting for Jack Yaz to experience it in his addons so that he can let me steal a fix.

Have speed tests or bufferbloat changed at all?
 
Hard to say. I did several tests with the default and fq_codel settings, and the bufferbloat variance seems lower with fq_codel across tests, while it sometimes varies by 100+ ms with the default htb. Jitter also looks more constant with fq_codel, and personally I expect more fairness within the same class when different clients compete for bandwidth (like a VPN handling software image transfers on one client and a video conference on another, both inside the work-from-home class), and that's what counts for me. No gamers here, just home working and schooling... But it's too early to give you hard facts as feedback, since it's evening and there are no fights over bandwidth at the moment. ;)
 
@dave14305 I'm on the RT-AX86U and running Merlin 386.1 beta 4. I've updated to 1.1.1 develop now and will see how gaming runs on it. I'll give it a good run over the weekend and report back.
 
When changing to the develop branch while FlexQoS is installed BUT Adaptive QoS is disabled, it uninstalls the script :p
 
Yes, that is partly intentional due to how amtm detects whether the script is installed successfully or not. I'll have to think about a better way to handle that.
Not a big deal :)

I have a big-deal question:
I have a device, 192.168.1.100, and my custom rules mark all traffic coming out of it as game downloads.
This device also has an IPv6 address, and its traffic is not treated as game downloads when it goes out over IPv6.

I hope you understand :)
How can I solve this?
 
You can't solve it unless you can add your own ip6tables rule. FlexQoS isn't good for ip6tables rules that involve IPv6 addresses, since on the LAN they are likely to change frequently due to IPv6 privacy extensions.
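
For reference, a manually added rule might look roughly like this (a sketch only; the interface, address, and mark/mask values are placeholders, not the marks FlexQoS actually uses):
Code:
# Mark upload traffic from one host's global IPv6 address (all values are placeholders)
ip6tables -t mangle -A POSTROUTING -o eth0 -s 2001:db8::100 -j MARK --set-xmark 0x1/0xf
# Caveat: with privacy extensions the host's outgoing address rotates regularly,
# so a rule pinned to a single address like this goes stale quickly.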
 
Noob speaking: isn't it possible to populate ip6tables the same way you matched custom IPv6 names, by tying the address to the device's MAC address?
 
386.1 beta 4b here.
All appears to be working as advertised!
fq_codel reports as working.
Loving the real device names on IPv6 connections, instead of the former alphabet soup. :cool:

Bufferbloat great too!
 
I don’t believe you can match on MAC address in the mangle POSTROUTING chain. Even if you could, it would do no good for downloads from the internet, because the local MAC is not known yet at that point.
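
For background, netfilter's mac match is only valid in the PREROUTING, FORWARD and INPUT chains, so a rule like this sketch would be rejected (the MAC is a placeholder):
Code:
# Not valid: the mac match isn't allowed in POSTROUTING, so ip6tables
# refuses to load this rule (placeholder MAC address).
ip6tables -t mangle -A POSTROUTING -m mac --mac-source 00:11:22:33:44:55 -j MARK --set-mark 0x1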
 
@dave14305 thanks for the update. I'm trying the develop script. So far it's running great on beta4b without any issues. I did a few dslreports speed tests and got an "A" bufferbloat rating. This is the first time I can get these results with Adaptive QoS and your script. Thanks a lot for this. I'll continue to monitor...

Code:
admin@RT-AX88U-0D80:/tmp/home/root# tc qdisc ls | grep fq_codel
qdisc fq_codel 800b: dev eth0 parent 1:10 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800d: dev eth0 parent 1:11 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800f: dev eth0 parent 1:12 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8015: dev eth0 parent 1:13 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8017: dev eth0 parent 1:14 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8019: dev eth0 parent 1:15 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 801b: dev eth0 parent 1:16 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 801d: dev eth0 parent 1:17 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800a: dev br0 parent 1:10 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800c: dev br0 parent 1:11 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 800e: dev br0 parent 1:12 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8014: dev br0 parent 1:13 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8016: dev br0 parent 1:14 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8018: dev br0 parent 1:15 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 801a: dev br0 parent 1:16 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 801c: dev br0 parent 1:17 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
 
