Potential problem RT-AC68U (originally TM-AC1900) and WiFi driver 6.37.14.126


I never use 2.4GHz, but I was intrigued, so I ran some tests on the 1900P with the new driver (v27). The client is at the far end of the house; iperf 3.1.3.
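(The exact flags aren't quoted, but the flat 1.25 MBytes / 1.05 Mbits/sec results below are what a plain UDP run produces, since iperf3 targets 1 Mbit/s in UDP mode unless you raise it with -b; that comes up a few posts down. A minimal sketch, with <server ip> as a placeholder:)

Code:
# UDP run with no -b flag: iperf3 defaults to a 1 Mbit/s target rate
iperf3 -c <server ip> -u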

2.4GHz, 200 mW
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.25 MBytes  1.05 Mbits/sec  3.481 ms  4/158 (2.5%)

2.4GHz, 100 mW
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.01  sec  1.25 MBytes  1.05 Mbits/sec  51.814 ms  7/157 (4.5%)

5GHz, 200mW
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.25 MBytes  1.05 Mbits/sec  0.642 ms  0/159 (0%)

If I get a chance, I'll try it on the older equipment.

Did you run those tests with WMM APSD enabled or disabled? Also, run your tests with -b 100M or some equally high value if you aren't already. Seems like you might be REALLY far from the router if not. I have been running iperf3 -c <host ip> -u -b 200M (someone correct me if I am using iperf wrong, this is not something I have much experience with).
 
Did you run those tests with WMM APSD enabled or disabled?

Just got done reading through the thread. It's enabled by default, so that's what I ran. I'll reconfigure it and retest.

Edit: with WMM APSD disabled...

2.4GHz, 200 mW
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.25 MBytes  1.05 Mbits/sec  4.842 ms  8/159 (5%)

2.4GHz, 100mW
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.25 MBytes  1.05 Mbits/sec  4.619 ms  10/159 (6.3%)

5GHz, 200mW
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.25 MBytes  1.05 Mbits/sec  1.125 ms  0/159 (0%)
 
Mine on LTS v26:

View attachment 10388

I see you changed DTIM and beacon intervals, or are those defaults on 380.68?

I read about these a long time ago:

A lower beacon interval increases use of airtime (and decreases throughput because of increased competition for the signal). Increasing this interval can have a great impact on Wi-Fi performance.
https://routerguide.net/beacon-interval-best-optimal-setting-improve-wireless-speed/

We highly recommend using an interval setting between 250 ms to 400 ms for the majority of deployments. Anything lower than this will quickly drain your battery, and anything higher may have performance issues due to signal instability. Remember that we use a default beacon Interval of 650 ms, because we find it offers the best balance of performance and battery life in most cases.
https://kontakt.io/blog/beacon-configuration-strategy-guide-interval/

A lower DTIM interval results in higher throughput but shorter battery life. The DTIM interval is the interval at which the DTIM appears in beacons. For example, if the DTIM interval is set to 2, every second beacon will have the map. If the beacon period is 100 ms (with the DTIM interval set at 2), the DTIM will transmit every second beacon or five times per second.
http://www.summitdata.com/Documents/Glossary/knowledge_center_d.html#dtim_interval
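(A quick shell check of that arithmetic, for anyone following along:)

Code:
# DTIMs per second with a 100 ms beacon period and a DTIM interval of 2
echo $(( 1000 / (100 * 2) ))   # prints 5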
 
No, I modified those values. I seem to recall reading somewhere once that higher DTIM intervals can negatively affect VoIP (though I have no evidence to back such a claim up, and yes, I know how DTIM works before someone tries to explain it to me). I have been rolling with these values for a long time without issue and tend to just repeat them with each firmware version. I also like to have a lower beacon interval for 5 GHz (100) on the theory that it will cause devices that are 5 GHz capable to discover and connect to that band first (again, anecdotal, I am no expert).

What you are saying makes sense, though with WiFi, common sense seems to need to be thrown out the window in many cases.
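(If you want to confirm what the radios are actually running with, the values can be read from the router's shell. A hedged sketch assuming Asuswrt-style nvram variable names, which can differ between firmware builds:)

Code:
# wl0 = 2.4 GHz radio, wl1 = 5 GHz radio (assumed Asuswrt-style names)
nvram get wl0_bcn    # beacon interval, ms
nvram get wl0_dtim   # DTIM interval
nvram get wl1_bcn
nvram get wl1_dtim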
 
Also, run your tests with -b 100M or some equally high value if you aren't already. Seems like you might be REALLY far from the router if not. I have been running iperf3 -c <host ip> -u -b 200M (someone correct me if I am using iperf wrong, this is not something I have much experience with).

Definitely changes the result.

2.4GHz, 200 mW, APSD disabled
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   222 MBytes   186 Mbits/sec  5.708 ms  25299/28371 (89%)
 
Definitely changes the result.

2.4GHz, 200 mW, APSD disabled
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   222 MBytes   186 Mbits/sec  5.708 ms  25299/28371 (89%)

Yikes! That is not a good result.
 
I have to join everyone else in not being able to detect a difference. Ran against a 2.4GHz-only Ralink USB adapter and an Atheros dual-band Mini-PCIe card (on 2.4GHz). All substantially looked like this, with the USB adapter showing slightly higher jitter/variation.
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.01   sec  7.03 MBytes  58.2 Mbits/sec  2.210 ms  0/900 (0%)
[  5]   1.01-2.01   sec  9.49 MBytes  79.8 Mbits/sec  1.318 ms  0/1215 (0%)
[  5]   2.01-3.01   sec  8.62 MBytes  72.5 Mbits/sec  2.238 ms  0/1104 (0%)
[  5]   3.01-4.01   sec  8.42 MBytes  70.8 Mbits/sec  1.532 ms  0/1078 (0%)
[  5]   4.01-5.01   sec  9.55 MBytes  80.3 Mbits/sec  1.501 ms  0/1223 (0%)
[  5]   5.01-6.01   sec  9.38 MBytes  78.8 Mbits/sec  1.331 ms  0/1200 (0%)
[  5]   6.01-7.00   sec  9.09 MBytes  76.4 Mbits/sec  1.328 ms  0/1164 (0%)
[  5]   7.00-8.00   sec  9.03 MBytes  75.9 Mbits/sec  1.282 ms  0/1156 (0%)
[  5]   8.00-9.00   sec  9.09 MBytes  76.3 Mbits/sec  0.932 ms  0/1163 (0%)
[  5]   9.00-10.02  sec  9.30 MBytes  76.9 Mbits/sec  1.405 ms  0/1190 (0%)
[  5]  10.02-10.22  sec  1.87 MBytes  77.2 Mbits/sec  1.496 ms  0/239 (0%)

Where did you download iperf? I googled and saw so many download sites. And how do you run the test? Thank you!
 
Yikes! That is not a good result.
With it enabled, for good measure:

2.4GHz, APSD enabled
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   232 MBytes   194 Mbits/sec  8.943 ms  28482/29459 (97%)

5GHz, APSD enabled
Code:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   236 MBytes   198 Mbits/sec  0.336 ms  5964/30265 (20%)

And yes, I'm quite far from the AP at this location.
 
Where did you download iperf? I googled and saw so many download sites. And how do you run the test? Thank you!

iperf.fr is where I downloaded from, and you need one wired PC to act as the server, on which you run iperf with the -s option. Then on a wireless client you run iperf with the options -c <host ip> -u -b 200M. I don't think 200M is necessary, but definitely increase it above the default. I think 100M or higher should be a good value for a decent test (someone with more knowledge may have better advice).
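In command form (using exactly the options mentioned above; substitute your server's LAN IP for the placeholder):

Code:
# on the wired PC (server side)
iperf3 -s
# on the wireless client: UDP at a 200 Mbit/s target rate
iperf3 -c <server ip> -u -b 200M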
 
If you force -b too high then you will experience packet loss all the time. You want to keep the value to something reasonable that the AP should be able to meet without packet loss. So if you see 186Mbps in your test above, try something lower (180, 170, 150, ...) until you get no or very little packet loss over multiple runs.
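One way to bracket that is to script a few runs at decreasing target rates and watch the loss column; a rough sketch (the rate list and <server ip> placeholder are just examples, adjust to your setup):

Code:
# step the offered UDP rate down until the loss percentage stays near zero
for rate in 180M 170M 150M 130M 100M; do
    echo "--- offered rate: $rate ---"
    iperf3 -c <server ip> -u -b $rate
done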

Now a question I've been meaning to ask: my Mac's data rate is 217Mbps, but I seem to be able to get actual sustained throughput of 151Mbps, and sometimes a bit higher, without any packet loss. I thought the expectation was to get about 50% of the link speed, or is that just a rough approximation? Does anyone know how to compute the actual usable throughput from the data rate?
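(For what it's worth, the numbers quoted here work out to roughly 70% of the link rate rather than 50%:)

Code:
# sustained throughput as a fraction of the reported link rate
echo "scale=2; 151/217" | bc   # prints .69, i.e. about 70%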
 
If you force -b too high then you will experience packet loss all the time. You want to keep the value to something reasonable that the AP should be able to meet without packet loss. So if you see 186Mbps in your test above, try something lower (180, 170, 150, ...) until you get no or very little packet loss over multiple runs.

Now a question I've been meaning to ask: my Mac's data rate is 217Mbps, but I seem to be able to get actual sustained throughput of 151Mbps, and sometimes a bit higher, without any packet loss. I thought the expectation was to get about 50% of the link speed, or is that just a rough approximation? Does anyone know how to compute the actual usable throughput from the data rate?

Good to know. I get the same results regardless of whether I put 100M or 200M for the -b value, which led me to assume any decently high value would work.
 
The other way to get an idea of your throughput is to run a TCP test first, rather than UDP. With UDP you should be able to go a bit higher than TCP, but TCP gives you an idea of where you are:
Server: iperf -s -i 1
Client: iperf -c server -i 1
 
The other way to get an idea of your throughput is to run a TCP test first, rather than UDP. With UDP you should be able to go a bit higher than TCP, but TCP gives you an idea of where you are:
Server: iperf -s -i 1
Client: iperf -c server -i 1

Running a TCP test doesn't give me any information about lost/transmitted packets, which is why I haven't been using it.
 
iperf.fr is where I downloaded from, and you need one wired PC to act as the server, on which you run iperf with the -s option. Then on a wireless client you run iperf with the options -c <host ip> -u -b 200M. I don't think 200M is necessary, but definitely increase it above the default. I think 100M or higher should be a good value for a decent test (someone with more knowledge may have better advice).

The other way to get an idea of your throughput is to run a TCP test first, rather than UDP, though with UDP you should be able to go a bit higher than TCP but it gives you an idea where you are:
Server: iperf -s -i 1
Client: iperf -c server -i 1

Thank you guys!
 
Running a TCP test doesn't give me any information about lost/transmitted packets, which is why I haven't been using it.

Because there is no packet loss in TCP. Well, there can be at the physical layer, but by its nature TCP will retransmit lost packets. iperf can't report what happened in the TCP stack; it's transparent to the application. However, if you lose a lot of packets you will see low TCP throughput.
With UDP, iperf can do its own packet tracking and detect lost and out-of-order packets.

TCP also has extra overhead vs. UDP, which is why you will see lower throughput than with UDP, and it's the reason why time-critical apps (e.g. VoIP) use UDP. If a packet is lost, so be it; it doesn't do VoIP any good to resend it later and delay even more traffic. It's better to just move on, hear a glitch, and try to keep up with the current traffic.
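One side note, assuming iperf3 rather than the old iperf2: on a Linux sender, iperf3's TCP output does include a "Retr" column with the sender's retransmit count, so a TCP run isn't completely blind to loss even though lost packets never show up as missing data:

Code:
# TCP run with per-second intervals; on a Linux sender the "Retr" column counts retransmitted segments
iperf3 -c <server ip> -i 1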
 
Oops. Pay attention to other traffic when you run tests... I was just running more tests and getting poor results. I thought v26 had somehow broken on me too, and I started looking at what setting I might have changed by accident. Then I noticed that my Time Machine backup had kicked in (which happens over WiFi). Phew. Mystery solved.
 
Another tip I can offer: make sure that the client you are testing with, if it's mobile/laptop, is plugged into a power outlet, not on battery. WiFi power save mode may kick in on battery power (depending on your OS/driver settings) and you may see lower performance.
 
Another tip I can offer: make sure that the client you are testing with, if it's mobile/laptop, is plugged into a power outlet, not on battery. WiFi power save mode may kick in on battery power (depending on your OS/driver settings) and you may see lower performance.

Yeah, I have noticed that as well, though never to the levels of low performance some in this thread have seen. I also edited my power profiles long ago to use high performance on WiFi regardless.
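For reference, on Windows the quick way to do that from PowerShell is to switch to the High performance plan (the per-adapter "Wireless Adapter Settings > Power Saving Mode" entry under the plan's advanced settings is the more targeted knob; the scheme alias below is the standard one, but double-check it on your build):

Code:
# switch to the built-in High performance power plan
powercfg /setactive SCHEME_MIN
# confirm which plan is active
powercfg /getactivescheme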
 
Down near the bottom of the Professional tab (added in V27). I set it to default to off, but it's worth a double-check.

People failing to do a factory default reset after installing a newer firmware would still have the "old" default setting of being enabled, so it's always best to have people check that setting anyway. It's also the case with my firmware.
 
