> What does some example Plex connection look like in Tracked Connections?
> Please send a screenshot of the same connection from Merlin’s Classification page.

Like this:
View attachment 34020
It's identified here as streaming, but the graph shows it as a different type of traffic:
View attachment 34021
This is the iptables rule:
View attachment 34022
> It has been a long time since I wanted to report this bug:
> View attachment 34025
> View attachment 34026

It’s not really a bug. AMTM is not designed to handle “development” branches of scripts when checking for updates. If you switch to stable, it will be in sync.
> Could anyone shed some light on what the rule that marks World Wide Web 12003F as the default is intended to do? What is that mark specifically?
> It seems to catch my security camera video and dump it in Web Surfing (as intended).

Without it, 12003F traffic would end up in Net Control.
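For anyone puzzling over ids like 12003F: judging from the iptables rules later in this thread (my reading of the marks, not anything documented by FlexQoS), the fwmark appears to pack direction into the top two bits and the six-hex-digit AppDB id into the low 22 bits. A sketch with a hypothetical helper of my own:

```shell
# decode_mark is my own helper, not part of FlexQoS. Assumed layout:
# bit 31 = download (br0 side), bit 30 = upload (eth4 side),
# low 22 bits = AppDB id shown as six hex digits.
decode_mark() {
  m=$(( $1 ))
  case $(( (m >> 30) & 0x3 )) in
    2) dir=download ;;
    1) dir=upload ;;
    *) dir=unknown ;;
  esac
  printf '%s %06X\n' "$dir" $(( m & 0x3fffff ))
}
decode_mark 0x4012003F   # -> upload 12003F
decode_mark 0x8004FFFF   # -> download 04FFFF
```

Here 12003F matches the World Wide Web default being asked about, and the 04FFFF pattern matches the marks the Plex/Streaming rules set.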
> Please send a screenshot of the same connection from Merlin’s Classification page.
> Also it would be helpful to see if the rule is being hit:
> Code:
> iptables -t mangle -nvL FlexQoS

Here you go:

View attachment 34031

Code:
iptables -t mangle -nvL FlexQoS
Chain FlexQoS (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK udp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 500,4500 MARK xset 0x8006ffff/0xc03fffff
0 0 MARK udp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 500,4500 MARK xset 0x4006ffff/0xc03fffff
1 91 MARK udp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport dports 16384:16415 MARK xset 0x8006ffff/0xc03fffff
0 0 MARK udp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport sports 16384:16415 MARK xset 0x4006ffff/0xc03fffff
0 0 MARK tcp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 119,563 MARK xset 0x8003ffff/0xc03fffff
0 0 MARK tcp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 119,563 MARK xset 0x4003ffff/0xc03fffff
7017 589K MARK tcp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 32400 MARK xset 0x8004ffff/0xc03fffff
0 0 MARK udp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 32400 MARK xset 0x8004ffff/0xc03fffff
4937 329K MARK tcp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 32400 MARK xset 0x4004ffff/0xc03fffff
0 0 MARK udp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 32400 MARK xset 0x4004ffff/0xc03fffff
0 0 MARK tcp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 1194 MARK xset 0x8003ffff/0xc03fffff
0 0 MARK udp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 1194 MARK xset 0x8003ffff/0xc03fffff
0 0 MARK tcp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 1194 MARK xset 0x4003ffff/0xc03fffff
310 25420 MARK udp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 1194 MARK xset 0x4003ffff/0xc03fffff
1095K 55M MARK tcp -- * br0 0.0.0.0/0 192.168.0.2 multiport dports 32400 MARK xset 0x8004ffff/0xc03fffff
0 0 MARK udp -- * br0 0.0.0.0/0 192.168.0.2 multiport dports 32400 MARK xset 0x8004ffff/0xc03fffff
2933K 4201M MARK tcp -- * eth4 192.168.0.2 0.0.0.0/0 multiport sports 32400 MARK xset 0x4004ffff/0xc03fffff
0 0 MARK udp -- * eth4 192.168.0.2 0.0.0.0/0 multiport sports 32400 MARK xset 0x4004ffff/0xc03fffff
1332 72157 MARK tcp -- * br0 0.0.0.0/0 192.168.0.67 multiport sports !80,443,139,445 mark match 0x80000000/0xc03fffff MARK xset 0x8008ffff/0xc03fffff
25 900 MARK udp -- * br0 0.0.0.0/0 192.168.0.67 multiport sports !80,443,139,445 mark match 0x80000000/0xc03fffff MARK xset 0x8008ffff/0xc03fffff
863 56284 MARK tcp -- * eth4 192.168.0.67 0.0.0.0/0 multiport dports !80,443,139,445 mark match 0x40000000/0xc03fffff MARK xset 0x4008ffff/0xc03fffff
25 900 MARK udp -- * eth4 192.168.0.67 0.0.0.0/0 multiport dports !80,443,139,445 mark match 0x40000000/0xc03fffff MARK xset 0x4008ffff/0xc03fffff
1443 792K MARK tcp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 80,443 mark match 0x80080000/0xc03f0000 MARK xset 0x8003ffff/0xc03fffff
1159 464K MARK tcp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 80,443 mark match 0x40080000/0xc03f0000 MARK xset 0x4003ffff/0xc03fffff
0 0 MARK udp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 9987 MARK xset 0x8006ffff/0xc03fffff
0 0 MARK udp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 9987 MARK xset 0x4006ffff/0xc03fffff
0 0 MARK tcp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 1337 MARK xset 0x8004ffff/0xc03fffff
0 0 MARK udp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 1337 MARK xset 0x8004ffff/0xc03fffff
0 0 MARK tcp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 1337 MARK xset 0x4004ffff/0xc03fffff
0 0 MARK udp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 1337 MARK xset 0x4004ffff/0xc03fffff
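Reading that wall of counters is easier if you filter it down to the rules that are actually matching. A generic sketch (fed a two-line excerpt of the listing above rather than live iptables output):

```shell
# Keep only rows whose pkts column (field 1) is nonzero.
excerpt='0 0 MARK udp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 500,4500 MARK xset 0x8006ffff/0xc03fffff
2933K 4201M MARK tcp -- * eth4 192.168.0.2 0.0.0.0/0 multiport sports 32400 MARK xset 0x4004ffff/0xc03fffff'
printf '%s\n' "$excerpt" | awk '$1 != 0'
# Live version (skips the two header lines):
# iptables -t mangle -nvL FlexQoS | awk 'NR > 2 && $1 != 0'
```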
> Here you go:
> View attachment 34031

Traffic is hitting the rule, with about 4 GB of data. But that data shows as Web Surfing in your earlier graph.
> Traffic is hitting the rule, with about 4 GB of data. But that data shows as Web Surfing in your earlier graph.
> Do you have any of the Game/Gear Accelerator features enabled? That messes up the normal QoS category order.
> If not, what is the output of this command?
> Code:
> tc -s class show dev eth4 parent 1: | grep -A4 $(tc filter show dev eth4 | grep -B1 0x4004ffff | head -1 | sed -En 's/.*flowid (1:1[0-7]).*$/\1/p')

I'm pretty sure I don't, where would I find that?

Code:
tc -s class show dev eth4 parent 1: | grep -A4 $(tc filter show dev eth4 | grep -B1 0x4004ffff | head -1 | sed -En 's/.*flowid (1:1[0-7]).*$/\1/p')
class htb 1:13 parent 1:1 leaf 8033: prio 3 rate 4915Kbit overhead 18 ceil 49152Kbit burst 6Kb cburst 60788b
Sent 27324736 bytes 367029 pkt (dropped 0, overlimits 0 requeues 0)
rate 168bit 0pps backlog 0b 0p requeues 0
lended: 366893 borrowed: 136 giants: 0
tokens: 154113 ctokens: 154395
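Unpacking that one-liner (a sketch that uses a filter line of the shape posted later in the thread, plus the Sent counter above, rather than live router output): the sed tail extracts the flowid of the class that the 0x4004ffff filter feeds, and that class's Sent counter is where the "how much data" figure comes from.

```shell
# Extract the flowid (class 1:10-1:17) from a tc filter line, as the
# sed stage of the one-liner does:
line='filter parent 1: protocol all pref 5 u32 fh 827::805 order 2053 key ht 827 bkt 0 flowid 1:13'
printf '%s\n' "$line" | sed -En 's/.*flowid (1:1[0-7]).*$/\1/p'   # -> 1:13
# The Sent counter of that class, converted to MiB:
echo "$(( 27324736 / 1048576 )) MiB"                              # -> 26 MiB
```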
> I'm pretty sure I don't, where would I find that?
> Edit: Adding some more settings screens for you:
> View attachment 34032
> View attachment 34033
> View attachment 34034

What does your Upload screenshot look like now? How much data in Streaming? 26 MB?
> I'm pretty sure I don't, where would I find that?

If your router supports it, it would be on the Game tab: click the "Add" button and see if any devices are already listed in the popup window.
> If your router supports it, it would be on the Game tab: click the "Add" button and see if any devices are already listed in the popup window.

I have no Game tab, so I guess it's not on.
> What does your Upload screenshot look like now? How much data in Streaming? 26 MB?

Yep, roughly 26 MB.
How about all your tc filters?
Code:
tc filter show dev eth4
> How about all your tc filters?

https://pastebin.com/kBe5sEXT
> I have no Game tab, so I guess it's not on.
> Yep, roughly 26 MB.
> https://pastebin.com/kBe5sEXT
> Thanks for taking a look!

It seems like the packets aren't being marked properly and may be hitting the AppDB rule for 14**** (meaning iptables didn't change the mark, or didn't match the traffic).

Please run these commands before starting a Plex stream, let the stream run for a minute, then run the commands again, and post both the before and after results, whenever it's convenient for you. I'll review it tomorrow.

Bash:
tc filter show dev eth4 | grep -EB1 "x4004ffff|x40140000"
tc -s class show dev eth4 parent 1: | grep -EA4 "htb 1:1[34]"
iptables -t mangle -nvL FlexQoS | grep 32400
Before:

Code:
tc filter show dev eth4 | grep -EB1 "x4004ffff|x40140000"
filter parent 1: protocol all pref 5 u32 fh 827::805 order 2053 key ht 827 bkt 0 flowid 1:13
mark 0x4004ffff 0xc03fffff (success 3599)
--
filter parent 1: protocol all pref 23 u32 fh 804::800 order 2048 key ht 804 bkt 0 flowid 1:14
mark 0x40140000 0xc03f0000 (success 30241853)
tc -s class show dev eth4 parent 1: | grep -EA4 "htb 1:1[34]"
class htb 1:13 parent 1:1 leaf 8033: prio 3 rate 4915Kbit overhead 18 ceil 49152Kbit burst 6Kb cburst 60788b
Sent 34996420 bytes 432552 pkt (dropped 0, overlimits 0 requeues 0)
rate 1496bit 0pps backlog 0b 0p requeues 0
lended: 432339 borrowed: 213 giants: 0
tokens: 154113 ctokens: 154395
--
class htb 1:14 parent 1:1 leaf 8035: prio 4 rate 4915Kbit overhead 18 ceil 49152Kbit burst 6Kb cburst 60788b
Sent 2809730995 bytes 30400021 pkt (dropped 0, overlimits 0 requeues 0)
rate 34744bit 34pps backlog 0b 0p requeues 0
lended: 24447007 borrowed: 5953014 giants: 0
tokens: 152155 ctokens: 154199
iptables -t mangle -nvL FlexQoS | grep 32400
23335 1972K MARK tcp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 32400 MARK xset 0x8004ffff/0xc03fffff
0 0 MARK udp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 32400 MARK xset 0x8004ffff/0xc03fffff
15840 1044K MARK tcp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 32400 MARK xset 0x4004ffff/0xc03fffff
0 0 MARK udp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 32400 MARK xset 0x4004ffff/0xc03fffff
1719K 82M MARK tcp -- * br0 0.0.0.0/0 192.168.0.2 multiport dports 32400 MARK xset 0x8004ffff/0xc03fffff
0 0 MARK udp -- * br0 0.0.0.0/0 192.168.0.2 multiport dports 32400 MARK xset 0x8004ffff/0xc03fffff
4187K 5936M MARK tcp -- * eth4 192.168.0.2 0.0.0.0/0 multiport sports 32400 MARK xset 0x4004ffff/0xc03fffff
0 0 MARK udp -- * eth4 192.168.0.2 0.0.0.0/0 multiport sports 32400 MARK xset 0x4004ffff/0xc03fffff
After:

Code:
tc filter show dev eth4 | grep -EB1 "x4004ffff|x40140000"
filter parent 1: protocol all pref 5 u32 fh 827::805 order 2053 key ht 827 bkt 0 flowid 1:13
mark 0x4004ffff 0xc03fffff (success 3686)
--
filter parent 1: protocol all pref 23 u32 fh 804::800 order 2048 key ht 804 bkt 0 flowid 1:14
mark 0x40140000 0xc03f0000 (success 30278003)
tc -s class show dev eth4 parent 1: | grep -EA4 "htb 1:1[34]"
class htb 1:13 parent 1:1 leaf 8033: prio 3 rate 4915Kbit overhead 18 ceil 49152Kbit burst 6Kb cburst 60788b
Sent 35187995 bytes 433181 pkt (dropped 0, overlimits 0 requeues 0)
rate 2288bit 2pps backlog 0b 0p requeues 0
lended: 432954 borrowed: 227 giants: 0
tokens: 154113 ctokens: 154395
--
class htb 1:14 parent 1:1 leaf 8035: prio 4 rate 4915Kbit overhead 18 ceil 49152Kbit burst 6Kb cburst 60788b
Sent 2815494523 bytes 30436809 pkt (dropped 0, overlimits 0 requeues 0)
rate 124760bit 116pps backlog 0b 0p requeues 0
lended: 24483086 borrowed: 5953723 giants: 0
tokens: 154418 ctokens: 154425
iptables -t mangle -nvL FlexQoS | grep 32400
23508 1984K MARK tcp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 32400 MARK xset 0x8004ffff/0xc03fffff
0 0 MARK udp -- * br0 0.0.0.0/0 0.0.0.0/0 multiport sports 32400 MARK xset 0x8004ffff/0xc03fffff
15955 1051K MARK tcp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 32400 MARK xset 0x4004ffff/0xc03fffff
0 0 MARK udp -- * eth4 0.0.0.0/0 0.0.0.0/0 multiport dports 32400 MARK xset 0x4004ffff/0xc03fffff
1738K 83M MARK tcp -- * br0 0.0.0.0/0 192.168.0.2 multiport dports 32400 MARK xset 0x8004ffff/0xc03fffff
0 0 MARK udp -- * br0 0.0.0.0/0 192.168.0.2 multiport dports 32400 MARK xset 0x8004ffff/0xc03fffff
4304K 6110M MARK tcp -- * eth4 192.168.0.2 0.0.0.0/0 multiport sports 32400 MARK xset 0x4004ffff/0xc03fffff
0 0 MARK udp -- * eth4 192.168.0.2 0.0.0.0/0 multiport sports 32400 MARK xset 0x4004ffff/0xc03fffff
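For reference, crunching the deltas between the two runs above (plain arithmetic on the Sent counters): class 1:13, where marked Plex traffic should land, grew by under 200 KB while the stream ran, while the default class 1:14 absorbed several MB.

```shell
# Byte deltas between the "after" and "before" Sent counters above:
echo $(( 35187995 - 34996420 ))       # class 1:13 delta: 191575 bytes
echo $(( 2815494523 - 2809730995 ))   # class 1:14 delta: 5763528 bytes
```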
> (before/after output quoted from the previous post)

There are fewer packets in the class stats than in the iptables counters, but it still seems to me that the iptables mark is not surviving. Or somehow the HW acceleration is causing different behavior. Maybe it's a peculiarity with the AX58U (I assume it's a 58U because of eth4).
> There are fewer packets in the class stats than in the iptables counters, but it still seems to me that the iptables mark is not surviving. Or somehow the HW acceleration is causing different behavior. Maybe it's a peculiarity with the AX58U (I assume it's a 58U because of eth4).
> Are these streaming connections already established when QoS starts? I don't use Plex, so I'm unclear how this behaves from the WAN.

It's an AX58U, yeah.
> It's an AX58U, yeah.
> As for the connections: not sure, how can I find out?

If you restart the Plex server, any pre-existing connections should get reset and classified anew. I don't think it will really help, but I'm not sure what else is going on.
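One hedged way to check for yourself: established flows keep their connmark, and conntrack usually exposes it as a decimal mark= field in /proc/net/nf_conntrack. I haven't verified the exact output format on this firmware, so the sketch below parses an illustrative sample line rather than real router output.

```shell
# Illustrative conntrack-style line (addresses and fields are made up):
sample='tcp 6 299 ESTABLISHED src=192.168.0.2 dst=203.0.113.5 sport=32400 dport=54321 mark=1074069503 use=1'
m=$(printf '%s\n' "$sample" | sed -En 's/.*mark=([0-9]+).*/\1/p')
printf '0x%08x\n' "$m"   # 0x4004ffff would mean the Streaming mark stuck
# Live (if the file exists on this firmware):
# grep 32400 /proc/net/nf_conntrack
```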