FlexQoS 1.2.4 - Flexible QoS Enhancement Script for Adaptive QoS

Those with issues on dev: are you using the fq_codel switch? If so, try turning it off and see if things stabilise.
I will try and then report back.

Dev with fq_codel on 384.18 was very good for me; it's only since 386.1 that I've noticed the issues.

I doubt it's anything Dave has changed; it's most likely ASUS changing something.

Did you update spdmerlin to include a firewall restart like Dave did with FlexQoS?
 
@Jack Yaz I ran the stable version of FlexQoS and still have these big problems since updating to 386.1.
 
I have to say, I never had such a bad, jittery internet connection until I upgraded to WRTMerlin!
 
Why is the queue discipline not visible on the QoS page? :(
 
Why is the queue discipline not visible on the QoS page? :(

You have to A) be on the development branch and B) click 'Customize' on the FlexQoS tab. That will let you switch the queue discipline between 'Default' and 'fq_codel'.
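If you want to confirm from the shell which discipline actually took effect, a small sketch like this can tally the qdiscs reported by `tc` (the `summarize_qdiscs` helper name is my own invention, not part of FlexQoS):

```shell
# Count how many qdiscs of each type are attached, reading `tc qdisc`
# output from stdin. Helper name is hypothetical, not part of FlexQoS.
summarize_qdiscs() {
  awk '$1 == "qdisc" { count[$2]++ } END { for (q in count) print count[q], q }' | sort
}

# On the router: tc qdisc | summarize_qdiscs
# With the fq_codel option active you should see fq_codel leaves instead of sfq.
```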
 
Is there any way to check whether the priority list in QoS is actually set to what the GUI shows? I have it set for VoIP, then Work From Home, followed by Gaming, etc., but for some reason Gaming appears to be getting top priority, and when games are updated the network grinds to a halt, with just that download taking all the bandwidth.
I don't think it's a problem with this addon, as it was happening before I installed it; it was actually the reason I installed it.
Any help would be appreciated.
 
@roboots21 What even was the default before we had the options codel, sfq_codel and fq_codel?
 
@roboots21 What even was the default before we had the options codel, sfq_codel and fq_codel?
Default is the SFQ option, which is now the only option for Adaptive QoS on the new firmware. So selecting Default will just use SFQ; selecting fq_codel on the develop branch of FlexQoS will use that instead, although you'd have to ask @dave14305 for the expert answer as to what is implemented and how!
 
AX86U here on 386.1. Did a nuclear reset and started from scratch.

Using the develop branch of FlexQoS (1.1.1) with the fq_codel option selected.

No issues to report. Looking at stats from conmon and spdmerlin, latency and jitter are looking really good, and I haven't had any random big spikes over what I'd normally expect.

Speed tests in spdmerlin show my connection is consistently performing better, particularly at peak times, where so far I'm seeing approx. 100 Mbps better download performance.

Had some separate issues with the add-ons mounting (now fixed), but this firmware setup is hands down the best I've had so far. Huge thanks to all the devs concerned!
 
The Default Queue Discipline option uses whatever is set up on the main QoS page. If you're still running 384 firmware, this could be sfq, codel or fq_codel. If you're on 386, it means only sfq. So if you hate your performance under 386, it could be due to losing fq_codel.

If you choose the experimental fq_codel option within FlexQoS, you replace some of the standard ASUS Adaptive QoS hierarchy (htb + htb + sfq) with htb + fq_codel. If it doesn't work well for gaming, set it back to Default. I don't play games, so I have no experience with how it might impact your jitter or not.
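For readers unfamiliar with that hierarchy, here is a minimal, illustrative tc sketch of the difference. This is not the actual FlexQoS implementation (ask @dave14305 for the real commands); the device name, handles and rates are all assumptions for the sketch:

```shell
# Illustrative only -- NOT the real FlexQoS commands.
# Device name, handles and rates below are made up for the example.
WAN=eth0

# Standard Adaptive QoS shape: an HTB class with an sfq leaf under it.
tc qdisc add dev "$WAN" root handle 1: htb default 10
tc class add dev "$WAN" parent 1: classid 1:10 htb rate 50mbit ceil 100mbit
tc qdisc add dev "$WAN" parent 1:10 handle 10: sfq perturb 10

# The FlexQoS fq_codel option swaps the leaf qdisc for fq_codel instead:
tc qdisc replace dev "$WAN" parent 1:10 fq_codel target 5ms interval 100ms ecn
```

The point of the swap is that fq_codel gives per-flow fairness plus active queue management inside each HTB class, rather than sfq's round-robin alone.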

There are known issues documented in the 386.1 thread about Adaptive QoS not starting properly on certain models. Be sure you're not a victim of that problem. I've not noticed it myself, but if you can't get vanilla Adaptive QoS working on 386.1, FlexQoS isn't going to solve underlying firmware problems, since FlexQoS relies on Adaptive QoS working properly.

If you have real problems and want advice, please provide some data. If it's a script problem, the output of flexqos debug might point it out, or syslogs showing FlexQoS startup.
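As a sketch of how to gather those syslog lines, something like the helper below works; the /tmp/syslog.log path is an assumption (it may differ on your build), and the function name is my own, not part of FlexQoS:

```shell
# Pull recent FlexQoS-related lines from a syslog file.
# Default path is an assumption; pass another file if yours differs.
flexqos_log_lines() {
  grep -i 'flexqos' "${1:-/tmp/syslog.log}" | tail -n 50
}

# Usage on the router:
#   flexqos_log_lines        # scan /tmp/syslog.log for FlexQoS startup lines
#   flexqos debug            # the script's own diagnostic output
```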
 

It doesn't work for me. After I enable fq_codel, it still uses sfq...

AC86U, 386.1 + FlexQoS 1.1.1 dev.
 
What is the output of tc qdisc?
 
admin@RT-AC86U-19F8:/tmp/home/root# tc qdisc
qdisc htb 1: dev eth0 root refcnt 2 r2q 10 default 0 direct_packets_stat 59 direct_qlen 1000
qdisc sfq 2: dev eth0 parent 1:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 10: dev eth0 parent 1:10 r2q 10 default 598 direct_packets_stat 8 direct_qlen 1000
qdisc sfq 1256: dev eth0 parent 10:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 11: dev eth0 parent 1:11 r2q 10 default 598 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 2256: dev eth0 parent 11:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 12: dev eth0 parent 1:12 r2q 10 default 598 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 3256: dev eth0 parent 12:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 13: dev eth0 parent 1:13 r2q 10 default 598 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 4256: dev eth0 parent 13:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 14: dev eth0 parent 1:14 r2q 10 default 598 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 5256: dev eth0 parent 14:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 15: dev eth0 parent 1:15 r2q 10 default 598 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 6256: dev eth0 parent 15:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 16: dev eth0 parent 1:16 r2q 10 default 598 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 7256: dev eth0 parent 16:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 17: dev eth0 parent 1:17 r2q 10 default 598 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 8256: dev eth0 parent 17:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1002: dev eth0 parent 10:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2002: dev eth0 parent 11:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3002: dev eth0 parent 12:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4002: dev eth0 parent 13:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5002: dev eth0 parent 14:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6002: dev eth0 parent 15:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7002: dev eth0 parent 16:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc pfifo_fast 0: dev eth1 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth2 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth3 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth4 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev spu_us_dummy root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev spu_ds_dummy root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth5 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth6 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc htb 1: dev br0 root refcnt 2 r2q 10 default 0 direct_packets_stat 137 direct_qlen 2
qdisc sfq 2: dev br0 parent 1:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 10: dev br0 parent 1:10 r2q 10 default 598 direct_packets_stat 16 direct_qlen 2
qdisc sfq 1256: dev br0 parent 10:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 11: dev br0 parent 1:11 r2q 10 default 598 direct_packets_stat 0 direct_qlen 2
qdisc sfq 2256: dev br0 parent 11:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 12: dev br0 parent 1:12 r2q 10 default 598 direct_packets_stat 0 direct_qlen 2
qdisc sfq 3256: dev br0 parent 12:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 13: dev br0 parent 1:13 r2q 10 default 598 direct_packets_stat 0 direct_qlen 2
qdisc sfq 4256: dev br0 parent 13:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 14: dev br0 parent 1:14 r2q 10 default 598 direct_packets_stat 0 direct_qlen 2
qdisc sfq 5256: dev br0 parent 14:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 15: dev br0 parent 1:15 r2q 10 default 598 direct_packets_stat 0 direct_qlen 2
qdisc sfq 6256: dev br0 parent 15:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 16: dev br0 parent 1:16 r2q 10 default 598 direct_packets_stat 0 direct_qlen 2
qdisc sfq 7256: dev br0 parent 16:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 17: dev br0 parent 1:17 r2q 10 default 598 direct_packets_stat 0 direct_qlen 2
qdisc sfq 8256: dev br0 parent 17:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1002: dev br0 parent 10:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2002: dev br0 parent 11:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3002: dev br0 parent 12:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4002: dev br0 parent 13:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5002: dev br0 parent 14:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6002: dev br0 parent 15:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7002: dev br0 parent 16:2 limit 127p quantum 1514b depth 127 divisor 1024


Edit: Restarted the router; it works now.

qdisc htb 1: dev eth0 root refcnt 2 r2q 10 default 0 direct_packets_stat 48 direct_qlen 1000
qdisc fq_codel 8042: dev eth0 parent 1:2 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8044: dev eth0 parent 1:10 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8046: dev eth0 parent 1:11 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8048: dev eth0 parent 1:12 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 804a: dev eth0 parent 1:13 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 804c: dev eth0 parent 1:14 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 804e: dev eth0 parent 1:15 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8050: dev eth0 parent 1:16 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8052: dev eth0 parent 1:17 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc pfifo_fast 0: dev eth1 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth2 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth3 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth4 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev spu_us_dummy root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev spu_ds_dummy root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth5 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth6 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc htb 1: dev br0 root refcnt 2 r2q 10 default 0 direct_packets_stat 89 direct_qlen 2
qdisc fq_codel 8041: dev br0 parent 1:2 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8043: dev br0 parent 1:10 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8045: dev br0 parent 1:11 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8047: dev br0 parent 1:12 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8049: dev br0 parent 1:13 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 804b: dev br0 parent 1:14 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 804d: dev br0 parent 1:15 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 804f: dev br0 parent 1:16 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
qdisc fq_codel 8051: dev br0 parent 1:17 limit 1024p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
 
386.1 + FlexQoS develop + fq_codel working fine here. :)

Just switched over to FlexQoS v1.1.1 dev with fq_codel on my RT-AX88U, on 386.1 final, with 30+ devices: gaming, streaming, IoT, work from home, multiple users, etc. Will start testing gaming over the next few days.
 
In my setup FlexQoS is working perfectly with 386.1, as it did before with 384.19 on my AC68U. No problems with multiple voice/video collaboration tools running in parallel in our family. Just checked tc qdisc now and fq_codel is still there: A+/A/A+ at the dslreports speed test with bufferbloat below +20 ms, as it was with the old firmware, and jitter of 3 ms with Cloudflare's speed test. No idea why some of you have such problems with QoS in your setup in general and this gorgeous script in particular.
 
Absolutely! :)
Working very well here.
 
