Yes. But all that LAN traffic might be impacting the router's performance when it is doing these tests.
It's a tiny bit of work. Look at the CPU usage
Drops are bad. They are what cake is designed to stop happening.
The problem is that the TCP congestion avoidance algorithm relies on packet drops to determine the bandwidth available. A TCP sender increases the rate at which it sends packets until packets start to drop, then decreases the rate. Ideally it speeds up and slows down until it finds an equilibrium equal to the speed of the link. However, for this to work well, the packet drops must occur in a timely manner, so that the sender can select a suitable rate. If a router on the path has a large buffer capacity, the packets can be queued for a long time waiting until the router can send them across a slow link to the ISP. No packets are dropped, so the TCP sender doesn’t receive information that it has exceeded the capacity of the bottleneck link. It doesn’t slow down until it has sent so much beyond the capacity of the link that the buffer fills and drops packets. At this point, the sender has far overestimated the speed of the link.
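To see that probing behaviour concretely, here is a toy Python sketch of the additive-increase/multiplicative-decrease idea described above. It is not real TCP (no slow start, no timeouts, and the capacity number is made up); it only shows that a drop is the sole signal telling the sender it has overshot:

# Toy AIMD loop: the sender only learns the link capacity from drops.
link_capacity = 20   # hypothetical capacity, in packets per round trip
cwnd = 1             # the sender's congestion window, in packets per round trip

for rtt in range(1, 61):
    dropped = cwnd > link_capacity       # the only feedback the sender gets
    if dropped:
        cwnd = max(1, cwnd // 2)         # multiplicative decrease after a loss
    else:
        cwnd += 1                        # additive increase while everything arrives
    print(f"RTT {rtt:2d}: cwnd = {cwnd:2d} packets, drop = {dropped}")

Run long enough, the window saws up and down around the link capacity, which is the equilibrium the post above describes.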
In a network router, packets are often queued before being transmitted. Packets are only dropped if the buffer is full. On older routers, buffers were fairly small, so they filled quickly; packets began to drop shortly after the link became saturated, and the TCP/IP protocol could adjust. On newer routers, buffers have become large enough to hold several megabytes of data, which can be equivalent to 10 seconds or more of data. This means that the TCP/IP protocol can’t adjust to the speed correctly, as it appears to be able to send for 10 seconds without receiving any feedback that packets are being dropped. This creates rapid speedups and slowdowns in transmissions.
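As a back-of-the-envelope check on the "several megabytes is roughly 10 seconds" figure, here is the arithmetic in Python; the 4 MB buffer and 3 Mbit/s uplink are just illustrative assumptions, not measurements from any particular router:

# How long can a full buffer delay packets on a slow uplink?
buffer_bytes = 4 * 1024 * 1024        # assumed 4 MB interface buffer
uplink_bits_per_second = 3_000_000    # assumed 3 Mbit/s upstream link

queue_delay_seconds = buffer_bytes * 8 / uplink_bits_per_second
print(f"A full buffer adds about {queue_delay_seconds:.1f} s of queueing delay")
# Roughly 11 seconds pass before the first drop tells the sender to slow down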
Cake slows a sender by delaying acknowledgments and transmits. Drops are terrible, as a drop requires the entire sliding window to be retransmitted once the timer runs out, which is typically 2 seconds. Drops are what cause bufferbloat.

Hello,
For what it's worth, my understanding is that some dropped packets are a good thing and actually a way the sender and receiver can determine the adequate throughput/speed. Cake is then about choosing the right packet to drop, and dropping early. For instance, it drops more packets in my lower-priority tins.
I find the following very informative: https://www.bufferbloat.net/projects/bloat/wiki/TechnicalIntro/
Regards
W.
I’ve not read this before. Is there a source for this behavior?
Maybe the video around this post: https://www.snbforums.com/threads/cakeqos-merlin.64800/post-638424

It was in the video by one of the authors that I posted months ago.
Yes, that's the one. It starts out simple, explaining the principles, and then gets down to the nitty-gritty. I was a Systems Programmer, today's Software Engineer, in the '70s and '80s, and one of my specialties was drivers and communications systems, so I ate this up. That was before I began designing and implementing networks.
Thank you, Morris, for your feedback. It is not how I had interpreted the quote above from the bufferbloat site, but the 2 seconds that you mention for the retransmit puts things in perspective. I'll try to watch that video.
Note: that is AQM, the way it works without SQM Cake. He leaves out another feature that I did as well, and that's an ICMP Source Quench, which tells a host to stop transmitting briefly. If you don't know TCP/IP, look up "TCP Sliding Window" and "TCP Slow Start".

I still need to listen to it all. At around 5:33 in that Cake presentation, Jonathan Morton says:
"So AQM keeps the queue length short by choosing when to mark packets, or drop them, in order to tell the endpoints that in fact the link is congested and that they should slow down a little bit..."
FWIW, I'm perfectly OK if experts devise an algorithm that wisely drops a few packets for the greater good of the link.
The alternative to dropping, when facing a temporarily congested link, is probably to keep packets in the queue for a while, which might ultimately induce more latency because it might mean less feedback to the endpoints.
Regards
W.
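For readers who want the "mark or drop early, based on how long packets have been waiting" idea from the quote above in code form, here is a heavily simplified Python sketch. The 5 ms target and 100 ms interval are CoDel's well-known defaults; everything else is a made-up toy, not cake's or CoDel's actual algorithm:

import time
from collections import deque

TARGET = 0.005     # 5 ms: queue delay we are willing to tolerate
INTERVAL = 0.100   # 100 ms: how long delay must persist before we start dropping

queue = deque()            # holds (enqueue_time, packet) pairs
above_target_since = None  # when the standing queue delay first exceeded TARGET

def enqueue(packet):
    queue.append((time.monotonic(), packet))

def dequeue():
    """Return the next packet, dropping head-of-line packets if a standing queue has built up."""
    global above_target_since
    while queue:
        enqueued_at, packet = queue.popleft()
        sojourn = time.monotonic() - enqueued_at        # how long this packet sat in the queue
        if sojourn < TARGET:
            above_target_since = None                   # queue is draining fine
            return packet
        if above_target_since is None:
            above_target_since = time.monotonic()       # start a grace period
            return packet
        if time.monotonic() - above_target_since < INTERVAL:
            return packet                               # still within the grace period
        # Delay has stayed above target for a whole interval: drop this packet,
        # so the sender gets its "slow down" signal long before the buffer is full.
        print("dropping a packet to signal congestion")
    return None

The point of the sketch is exactly the trade-off W. describes: a few early drops (or ECN marks) instead of a long standing queue.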
Not all servers have ECN support enabled. A SYN-ACK without the ECE bit set indicates it does not. The connection then proceeds as Not-ECT.
I'm reasonably sure Akamai has specifically enabled ECN support. A lot of smaller webservers are probably running with the default passive-mode ECN support as well (ie. will negotiate inbound but not initiate outbound).
- Jonathan Morton
# cat /proc/sys/net/ipv4/tcp_ecn
2
2 – (default) enable ECN when requested by incoming connections, but do not request ECN on outgoing connections
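For anyone who wants to check their own Linux box, here is a small Python sketch that reads that sysctl and interprets the value; the meanings match the kernel documentation excerpt quoted above:

# Read and interpret net.ipv4.tcp_ecn on a Linux host.
MEANINGS = {
    "0": "ECN disabled",
    "1": "request ECN on outgoing connections and accept it on incoming ones",
    "2": "accept ECN when requested by incoming connections only (default)",
}

with open("/proc/sys/net/ipv4/tcp_ecn") as f:
    value = f.read().strip()

print(f"net.ipv4.tcp_ecn = {value}: {MEANINGS.get(value, 'unknown value')}")

Switching to full ECN would mean setting the value to 1 (for example with sysctl -w net.ipv4.tcp_ecn=1 as root), though it may make little practical difference, as the following posts suggest.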
Those congestion control protocols are mostly useless if they aren't used by both ends of the connection. That's why all those router hacks involving Vegas and such are more placebo than anything.
When I experimented with it, setting incoming+outgoing either made no difference or made things slightly worse.
"The lack of information communicated in Source Quench messages makes them a rather crude tool for managing congestion. In general terms, the process of regulating the sending of messages between two devices is called flow control, and is usually a function of the transport layer. The Transmission Control Protocol (TCP) actually has a flow control mechanism that is far superior to the use of ICMP Source Quench messages.
Another issue with Source Quench messages is that they can be abused. Transmission of these messages by a malicious user can cause a host to be slowed down when there is no valid reason. This security issue, combined with the superiority of the TCP method for flow control, has caused Source Quench messages to largely fall out of favor."
Puzzled... With a score of B on my 100/100 fiber, should I even bother with QoS? When I run it I get an A+ but can't tell any difference. And I'm not having any problems without it. Is there something better about A+ that I'm missing?
I tried it just to try it. Just curious really, but when I saw A+ I became interested because of the "improvement".
If you don't notice a real life difference, indeed why bother... Or maybe just for the fun of experimenting as long as it brings you joy/excitement.
Personally, if not noticing a difference, I would go for the settings that give me an A+ ;-) , or whatever settings are configured right now on your equipment.
Regards
W.
I would expect the difference to be for applications that are being slowed by cake, for example a download. Realtime applications are going to find available buffer and will work fine.
I have cake set with options very similar to the defaults in the new beta of merlin (ie: overhead+speed changes only). I've always set the speeds based on maximum saturation of the network and observing ping.
After reading this thread, I also noticed cake reporting drops. Continuously reducing the speed only resulted in the throughput being appreciably adjusted; I noticed no change in the drops. Curiously, and without scientific testing, there didn't even appear to be an appreciable difference in the number of drops, even though the throughput was considerably slower.

My understanding is that bufferbloat is essentially the saturation of a link by one or more sources, at the expense of the quality of other or all sources. So it makes sense, in lay terms, that when cake notices a link has the potential to cause disruption, it takes measures, including dropping packets on that link, to ensure the smooth overall running of the entire network.
It's actually the interface queue running out of space. Here is a simple explanation of how cake works: If we boarded aircraft like this it would be much faster. Amazing that just the opposite is done: people with disabilities and young children go first. The buffer is immediately full!
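To make the boarding analogy a bit more concrete, here is a toy Python sketch of just the flow-isolation part of the idea: per-flow queues served round-robin, so one bulk transfer cannot monopolise the buffer. It is nowhere near cake's real implementation (no DRR quantum accounting, no tins, no AQM), only the shape of the idea:

from collections import OrderedDict, deque

flows = OrderedDict()   # flow id -> queue of packets waiting for that flow

def enqueue(flow_id, packet):
    flows.setdefault(flow_id, deque()).append(packet)

def dequeue_round_robin():
    """Serve one packet from the next flow in the rotation."""
    while flows:
        flow_id, q = next(iter(flows.items()))
        flows.move_to_end(flow_id)          # this flow goes to the back of the rotation
        if q:
            return flow_id, q.popleft()
        del flows[flow_id]                  # an empty flow leaves the rotation
    return None

# A bulk download queues many packets, a video call only a few...
for i in range(8):
    enqueue("bulk-download", f"bulk-{i}")
enqueue("video-call", "voice-0")
enqueue("video-call", "voice-1")

# ...yet the scheduler keeps alternating, so the sparse flow is never stuck
# behind the bulk flow's backlog, which is the "boarding" behaviour above.
for _ in range(6):
    print(dequeue_round_robin())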