My setups are probably simpler than yours. We registered speed increases of 200-500% and the elimination of every throughput bottleneck, including for devices at the very edge of our service range, at both sites: our two-story site, with the router near the center of the bottom level about 6' above ground on an extended ground plane, and our four-story site, with the router at one side of the building on the third level, near the ceiling atop a tall wooden cabinet.
The two-story is in a low-rise suburban environment about 300' above sea level in a large valley (> 20,000 sq mi). This network is mixed Ethernet and Wi-Fi running 1 Gb/s into and throughout the building, on both 2.4 and 5 GHz, serving iOS, Windows, and Linux desktops, laptops, tablets, and phones, plus miscellaneous IoT gear including security and media devices. The users are active market traders along with some light DevOps and programming, with most of the heavy lifting done on the first floor.

The four-story is at high altitude (7,500') in a steep urban-wildland granite mountain environment. This network is also mixed, but the traders generally operate on the fourth floor while operations and programming are on the second floor. This site runs even slower than the two-story, with only a 200 Mb/s feed into the building, and it is nearly all Wi-Fi, with only the router and server on Ethernet.

Both sites suffer heavy Wi-Fi interference from neighboring networks and from admins who do not understand the benefits of dialing back their signals (and the problems of not doing so). We do not run any mesh networks, nor is bandwidth split between the router and any bridges or "range extenders" (although we did try that at one point). The routers at both sites handle DHCP and serve fewer than 100 devices, with only a handful of active users plus 30 or so IoT devices active at any given time. Users are very rarely outside the buildings during access, although some of the IoT devices are.
We, too, tested (and to a limited degree benefited from) nearly all of the measures recommended earlier in this and other threads. Splitting bandwidth with even a single range extender had negative consequences that took time to surface because our monitoring was nonexistent. Testing, adjusting, and monitoring for channel interference was helpful in stabilizing connections.
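If it helps anyone replicate the channel survey, here is a minimal sketch of the kind of check we ran, not our actual monitoring. It assumes a Linux client with NetworkManager (nmcli) and simply tallies how many neighboring networks sit on each 2.4 GHz channel so you can pick the least-crowded of the non-overlapping ones.

```python
#!/usr/bin/env python3
"""Rough channel-congestion survey using nmcli (assumes NetworkManager)."""
import subprocess
from collections import Counter

def scan_channels():
    # -t: terse colon-separated output; fields come back as SSID:CHAN:SIGNAL
    out = subprocess.run(
        ["nmcli", "-t", "-f", "SSID,CHAN,SIGNAL", "dev", "wifi", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in out.splitlines():
        if not line.strip():
            continue
        # rsplit so SSIDs containing escaped colons don't break parsing
        ssid, chan, signal = line.rsplit(":", 2)
        chan = int(chan)
        if chan <= 13:                 # 2.4 GHz band only
            counts[chan] += 1
    return counts

if __name__ == "__main__":
    counts = scan_channels()
    for chan in sorted(counts):
        print(f"channel {chan:>2}: {counts[chan]} neighboring networks")
    quietest = min((1, 6, 11), key=lambda c: counts.get(c, 0))
    print(f"least-crowded non-overlapping channel: {quietest}")
```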
Shutting off QoS and allowing the router and NICs to handle bandwidth allocation on the fly was by far the most successful change for us. It opened things up very nicely, especially at the two-story site where we had brought in gigabit internet over cable (neither site has fiber available). This was where we discovered that Asus' implementation of QoS on our old routers includes bandwidth capping along with their AI, a "feature" we were (or at least I was) unaware of prior to upgrading the incoming internet speed. This capping was impossible for us to detect at slower speeds but very definitely had an impact on users' throughput, as we saw substantial, albeit smaller, improvement at the slower four-story site.
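For anyone wanting to reproduce the before/after comparison, a sketch along these lines is enough. It assumes iperf3 on the client and a reachable iperf3 server upstream; the server address below is a placeholder, not one of our hosts. Run it with QoS enabled, toggle QoS off, and run it again; a jump toward the line rate suggests the QoS engine was capping flows.

```python
#!/usr/bin/env python3
"""Quick before/after throughput check around a QoS change (assumes iperf3)."""
import json
import subprocess

IPERF_SERVER = "192.0.2.10"   # placeholder address; substitute your own iperf3 server

def measure_mbps(seconds: int = 10) -> float:
    """Run a single TCP iperf3 test and return received throughput in Mb/s."""
    result = subprocess.run(
        ["iperf3", "-c", IPERF_SERVER, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    bps = report["end"]["sum_received"]["bits_per_second"]
    return bps / 1e6

if __name__ == "__main__":
    print(f"measured throughput: {measure_mbps():.0f} Mb/s")
```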
As to whether a different or more modern implementation of QoS would be of any use, I cannot say. I can say that everything I am seeing from users and admins on the various boards says no. Any tiering implemented for load balancing is done mathematically, allowing or limiting flows in groups spread across the needs of the organization; within those groups, free negotiation usually takes place, with extremely limited exceptions.
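To make that grouping idea concrete, here is a toy illustration of the arithmetic only; it is not any vendor's QoS engine, and the group names and weights are hypothetical. Each group gets a share of the link proportional to its weight, and the flows within a group split that share freely with no per-flow caps.

```python
#!/usr/bin/env python3
"""Toy illustration of group-level tiering: weighted shares per group, free split within."""

LINK_MBPS = 1000  # incoming line rate, e.g. a gigabit feed

# Hypothetical groups and weights; a real deployment would map these to
# VLANs, subnets, or device classes.
groups = {
    "traders":    {"weight": 5, "active_flows": 8},
    "operations": {"weight": 3, "active_flows": 12},
    "iot":        {"weight": 1, "active_flows": 30},
}

total_weight = sum(g["weight"] for g in groups.values())
for name, g in groups.items():
    group_share = LINK_MBPS * g["weight"] / total_weight   # allocation across groups
    per_flow = group_share / g["active_flows"]             # even split within the group
    print(f"{name:>10}: {group_share:6.1f} Mb/s total, ~{per_flow:5.1f} Mb/s per flow")
```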