FreshJR
Very Senior Member
@Sinner not quite.
QOS is a token based system.
1 token = 1 packet processed (just for example)
Packets to be processed wait in a queue.
Every clock cycle (i.e. a tick), a packet consumes a token if one is available. If none is available, the packet waits and checks again next clock cycle.
In steady-state operation, when you are running at your bandwidth limits, tokens are consumed as soon as they are generated. No issues: everything works as expected, and your speeds are limited as intended with no bufferbloat.
Now let’s say your network is idle. If you submit 10 packets, you would have to wait 10 cycles for 10 tokens to be generated.
To remove this delay, we can allow a small number of tokens to be stored in a bucket for future use, instead of destroying them during the clock cycle they were created.
So at idle, let’s say our bucket has a size of 10. Generated tokens fill up this bucket first whenever it isn’t full; only excess tokens are destroyed.
What this means in practice is that the first 10 packets can pass through without any delay if the network was not saturated. During each clock cycle new tokens are still generated and still added to the bucket while the network remains unsaturated.
Allowing packets to burst while the network is NOT saturated, instead of making them wait for tokens to be generated each clock cycle, improves responsiveness.
Now, if you go to full network load, these saved tokens will cause you to exceed your limits for a brief moment. E.g. the first 10 packets pass through effectively unmetered while the remaining packets consume tokens at the generation rate.
This happens only at the beginning of network saturation. As soon as the tokens in the bucket are depleted, you are back to consuming tokens as soon as they are generated, and within limits.
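The mechanics above can be sketched in a few lines (a minimal illustration with made-up names and a bucket size of 10; not code from any actual QoS implementation):

```python
# Minimal token-bucket sketch (illustrative only). Tokens accrue once per
# "clock cycle" up to a bucket cap; packets either consume a token
# immediately or wait for a future cycle.

class TokenBucket:
    def __init__(self, bucket_size):
        self.bucket_size = bucket_size  # max tokens stored for bursting
        self.tokens = bucket_size       # start full (idle network)

    def tick(self):
        # One token generated per cycle; excess beyond the cap is destroyed.
        self.tokens = min(self.tokens + 1, self.bucket_size)

    def try_send(self):
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False  # packet waits for a future cycle

bucket = TokenBucket(bucket_size=10)

# At idle with a full bucket, the first 10 packets pass with no delay:
print(sum(bucket.try_send() for _ in range(10)))  # 10

# The 11th packet must wait for the next tick's token:
print(bucket.try_send())  # False
bucket.tick()
print(bucket.try_send())  # True
```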
There are two buckets per class in the QoS system: one for the rate (burst) and one for the ceil (cburst). Since the generated tokens are split between the classes, the order in which the buckets fill up and how generated tokens get allotted/shared is a little more complicated.
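A very loose sketch of the two-bucket idea (this mirrors the within-rate / borrowing-to-ceil / throttled states, but glosses over how HTB actually shares and borrows tokens between classes; all names here are mine):

```python
# Loose sketch of the two buckets per class: "tokens" refilled at the rate
# (cap = burst) and "ctokens" refilled at the ceil (cap = cburst).
# Real HTB borrowing/sharing between classes is NOT modeled here.

class QosClass:
    def __init__(self, burst, cburst):
        self.tokens = burst     # rate bucket, starts full at idle
        self.ctokens = cburst   # ceil bucket, starts full at idle

    def state(self):
        if self.tokens > 0:
            return "within rate"        # send freely
        if self.ctokens > 0:
            return "borrowing to ceil"  # may use spare bandwidth
        return "throttled"              # must wait for new tokens

    def send(self):
        # A sent packet is charged against both meters.
        self.tokens -= 1
        self.ctokens -= 1

cls = QosClass(burst=2, cburst=5)
cls.send(); cls.send()
print(cls.state())  # borrowing to ceil
cls.send(); cls.send(); cls.send()
print(cls.state())  # throttled
```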
---
Simply put:
Too small bursts = unresponsiveness.
Too big bursts = you exceed the QoS limits at the beginning of network saturation for too long and create a spike of bufferbloat.
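A rough back-of-envelope for the "too big" case (the numbers below are invented examples, not Asus defaults): a full bucket passes that much extra data on top of the shaped rate, and the excess drains at roughly the shaped rate, so the overshoot lasts about burst size divided by rate:

```python
# How long a full burst bucket lets you run over the shaped rate.
# Both values are made-up illustrations.

rate_bps = 50_000_000     # shaped rate: 50 Mbit/s
burst_bytes = 1_600_000   # burst bucket: ~1.6 MB worth of tokens

# The extra data has to drain downstream at roughly the shaped rate,
# so the potential bufferbloat spike lasts about this long:
overshoot_seconds = (burst_bytes * 8) / rate_bps
print(f"{overshoot_seconds * 1000:.0f} ms")  # 256 ms
```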
Asus generates burst/cburst values as a function of the user-entered speeds. I had their correlation but deleted it.
---
There’s a little more to it, but I’m not trying to write an article. Basically, I limited bursts for bulk traffic, since I don’t care about a few ms of responsiveness on that type of traffic, and this removed the bufferbloat spike at the beginning of network saturation once those classes start consuming data.
---
I have no plans to modify the default burst/cburst values. This is outside the scope of the script, but the script’s framework does allow you to extend its functionality and override bursts for your needs.