I want to point out that I found another scenario where this happens. For some weird reason, Google's built-in speed test causes issues as well. Other speed tests don't seem to have an issue, but the Google one causes latency spikes and hiccups with QoS, in addition to Steam. This only happens on the download, not the upload.
Just tossing this out there as an idea:
I noticed on my RT-AC3200 that certain traffic would spike both CPU cores to 100%. At my new place I upgraded to an RT-AX56U, which handles weird loads much better - on the AC3200, individual connections would slow down, even though I could mostly still utilize my whole connection.
AC3200 - BCM4709 @ 1 GHz, dual core
AX56U - BCM6755 @ 1.5 GHz, quad core
AC86U - BCM4906 @ 1.8 GHz, dual core (newest arch)
In the back of my mind, I was pondering the TrendMicro filtering engine. It's a binary blob, and it does weird things like classifying the same traffic into several different categories. We have no idea what the source code is like, but what if something that started out as one engineer's lean and mean dream project has morphed into a monstrous codebase that's impossible to maintain, with new coders slowly strapping things onto it as they figure it out? What if these odd lag-inducing moments are incredibly inefficient parts of the code hogging one or two of the router's cores for an undetermined amount of time? Is there a way to log how much CPU time that part of the router is taking up?
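One way to answer my own question, at least partially: the router runs Linux, so per-process CPU time is readable from /proc/<pid>/stat. Here's a rough sketch of the idea in Python (on the router itself you'd probably port this to a busybox shell loop, but the parsing is the same). The process name you'd pass in is an assumption - I don't know for certain what the TrendMicro daemons are called on your firmware, so check `ps w` first.

```python
import os
import time

def cpu_ticks(pid):
    """Return cumulative (user + system) jiffies for a PID from /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        # split after the ")" so a process name containing spaces can't shift fields
        fields = f.read().rsplit(")", 1)[1].split()
    # fields[11] is utime, fields[12] is stime (counting from the field after comm)
    return int(fields[11]) + int(fields[12])

def pids_by_name(name):
    """Find PIDs whose comm matches `name` - e.g. whatever the TM engine runs as."""
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                if f.read().strip() == name:
                    pids.append(int(entry))
        except OSError:
            pass  # process exited while we were scanning
    return pids

def sample(name, interval=0.25, count=20):
    """Print jiffies consumed per interval; a sustained jump means a busy core."""
    last = {p: cpu_ticks(p) for p in pids_by_name(name)}
    for _ in range(count):
        time.sleep(interval)
        for p in last:
            now = cpu_ticks(p)
            print(f"{time.time():.2f} pid={p} ticks={now - last[p]}")
            last[p] = now
```

Run `sample("dcd")` (or whatever name `ps w` shows) while triggering a Steam download, and the per-interval tick counts should show whether the engine is the thing eating a core.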
Back in the WinXP days, it was pretty common for some program to hog 100% of a core for no reason, back when programmers were new to multi-threading. Even if they have the skills, programmers frequently make mistakes, especially when they don't have a clear picture of the whole codebase or all the target devices.
Add in that Origin might use a defined IP range and ports for downloads, while we know Steam mixes it up a lot (pretty much the whole IP range can carry game traffic, bulk downloads, community features, communications, etc. - and individual games may even use HTTP or HTTPS for game data rather than their own proprietary UDP protocols, so you can't even separate downloads from game data by watching ports and counting traffic), and it seems plausible that Steam simply requires more intensive code to sort the connections and traffic into the desired categories.
Just a theory, with no evidence to back it up, but some coding experience. What do you think? Is there a way to test/monitor/log resource usage at the sub-second level to find out what is going on? I am not the biggest Linux guru, but I know there's a pile of people here with much greater experience.
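For the sub-second part specifically: /proc/stat has per-core counters, so taking two snapshots 100 ms apart shows whether one core briefly pegs while the others idle - which is exactly the pattern I'd expect if a single-threaded blob is the bottleneck. A minimal sketch (again in Python for readability; jiffy bookkeeping would be the same in a shell script on the router):

```python
import time

def per_core_busy(interval=0.1):
    """Fraction of time each core was busy over `interval` seconds,
    computed from two snapshots of /proc/stat."""
    def snapshot():
        cores = []
        with open("/proc/stat") as f:
            for line in f:
                # per-core lines look like "cpu0 ...", the aggregate is "cpu ..."
                if line.startswith("cpu") and line[3].isdigit():
                    vals = list(map(int, line.split()[1:]))
                    idle = vals[3] + vals[4]  # idle + iowait
                    cores.append((sum(vals), idle))
        return cores
    a = snapshot()
    time.sleep(interval)
    b = snapshot()
    busy = []
    for (total0, idle0), (total1, idle1) in zip(a, b):
        total = total1 - total0
        busy.append(1.0 - (idle1 - idle0) / total if total else 0.0)
    return busy
```

Logging `per_core_busy()` in a loop during a Steam download and lining the timestamps up with when the latency hiccups happen would at least confirm or kill the "one core is getting hogged" theory.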
This really shows the benefit of projects like Cake - lots of open source code to look at.
More eyes often spot more bugs.