Asuswrt-Merlin 386.2 Beta is now available


Status
Not open for further replies.
I switched to CAKE, deleted FlexQoS, then switched back to Adaptive QoS. That produced the log below.

Code:
qdisc htb 1: dev eth0 root refcnt 2 r2q 10 default 0 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 2: dev eth0 parent 1:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 10: dev eth0 parent 1:10 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 1256: dev eth0 parent 10:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 11: dev eth0 parent 1:11 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 2256: dev eth0 parent 11:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 12: dev eth0 parent 1:12 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 3256: dev eth0 parent 12:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 13: dev eth0 parent 1:13 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 4256: dev eth0 parent 13:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 14: dev eth0 parent 1:14 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 5256: dev eth0 parent 14:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 15: dev eth0 parent 1:15 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 6256: dev eth0 parent 15:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 16: dev eth0 parent 1:16 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 7256: dev eth0 parent 16:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 17: dev eth0 parent 1:17 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 1000
qdisc sfq 8256: dev eth0 parent 17:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1002: dev eth0 parent 10:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2002: dev eth0 parent 11:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3002: dev eth0 parent 12:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4002: dev eth0 parent 13:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5002: dev eth0 parent 14:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6002: dev eth0 parent 15:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7002: dev eth0 parent 16:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8002: dev eth0 parent 17:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1003: dev eth0 parent 10:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2003: dev eth0 parent 11:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3003: dev eth0 parent 12:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4003: dev eth0 parent 13:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5003: dev eth0 parent 14:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6003: dev eth0 parent 15:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7003: dev eth0 parent 16:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8003: dev eth0 parent 17:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1004: dev eth0 parent 10:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2004: dev eth0 parent 11:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3004: dev eth0 parent 12:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4004: dev eth0 parent 13:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5004: dev eth0 parent 14:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6004: dev eth0 parent 15:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7004: dev eth0 parent 16:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8004: dev eth0 parent 17:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1005: dev eth0 parent 10:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2005: dev eth0 parent 11:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3005: dev eth0 parent 12:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4005: dev eth0 parent 13:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5005: dev eth0 parent 14:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6005: dev eth0 parent 15:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7005: dev eth0 parent 16:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8005: dev eth0 parent 17:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc pfifo_fast 0: dev eth1 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth2 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth3 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth4 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth5 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev spu_us_dummy root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev spu_ds_dummy root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth6 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth7 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc htb 1: dev br0 root refcnt 2 r2q 10 default 0 direct_packets_stat 0 direct_qlen 2
qdisc sfq 2: dev br0 parent 1:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 10: dev br0 parent 1:10 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 1256: dev br0 parent 10:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 11: dev br0 parent 1:11 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 2256: dev br0 parent 11:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 12: dev br0 parent 1:12 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 3256: dev br0 parent 12:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 13: dev br0 parent 1:13 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 4256: dev br0 parent 13:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 14: dev br0 parent 1:14 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 5256: dev br0 parent 14:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 15: dev br0 parent 1:15 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 6256: dev br0 parent 15:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 16: dev br0 parent 1:16 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 7256: dev br0 parent 16:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 17: dev br0 parent 1:17 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 8256: dev br0 parent 17:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1002: dev br0 parent 10:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2002: dev br0 parent 11:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3002: dev br0 parent 12:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4002: dev br0 parent 13:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5002: dev br0 parent 14:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6002: dev br0 parent 15:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7002: dev br0 parent 16:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8002: dev br0 parent 17:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1003: dev br0 parent 10:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2003: dev br0 parent 11:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3003: dev br0 parent 12:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4003: dev br0 parent 13:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5003: dev br0 parent 14:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6003: dev br0 parent 15:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7003: dev br0 parent 16:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8003: dev br0 parent 17:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1004: dev br0 parent 10:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2004: dev br0 parent 11:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3004: dev br0 parent 12:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4004: dev br0 parent 13:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5004: dev br0 parent 14:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6004: dev br0 parent 15:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7004: dev br0 parent 16:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8004: dev br0 parent 17:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1005: dev br0 parent 10:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2005: dev br0 parent 11:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3005: dev br0 parent 12:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4005: dev br0 parent 13:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5005: dev br0 parent 14:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6005: dev br0 parent 15:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7005: dev br0 parent 16:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8005: dev br0 parent 17:5 limit 127p quantum 1514b depth 127 divisor 1024
 
Checked 10 minutes later and it had changed. The only event I could find in the syslog during that window was a scheduled spdMerlin test.
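Rather than eyeballing the whole dump each time, the root qdisc per interface can be pulled out of a saved `tc qdisc show` with a one-liner. A minimal sketch against lines shaped like the output above (the sample lines are copied from the dump, and the awk simply keys on the word `root`):

```shell
# Extract "interface -> root qdisc type" from tc output.
# $2 is the qdisc type, $5 the device name on a "root" line.
printf '%s\n' \
  'qdisc htb 1: dev eth0 root refcnt 2 r2q 10 default 0' \
  'qdisc cake 8003: dev eth0 root refcnt 2 bandwidth 32768Kbit' |
awk '/ root /{print $5, $2}'
```

Running the same filter on snapshots taken before and after a spdMerlin run makes a flip from `htb` to `cake` obvious at a glance.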

Code:
qdisc cake 8003: dev eth0 root refcnt 2 bandwidth 32768Kbit besteffort dual-srchost nat nowash ack-filter split-gso rtt 50ms noatm overhead 18 mpu 64
qdisc ingress ffff: dev eth0 parent ffff:fff1 ----------------
qdisc pfifo_fast 0: dev eth1 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth2 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth3 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth4 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth5 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev spu_us_dummy root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev spu_ds_dummy root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth6 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth7 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc htb 1: dev br0 root refcnt 2 r2q 10 default 0 direct_packets_stat 0 direct_qlen 2
qdisc sfq 2: dev br0 parent 1:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 10: dev br0 parent 1:10 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 1256: dev br0 parent 10:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 11: dev br0 parent 1:11 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 2256: dev br0 parent 11:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 12: dev br0 parent 1:12 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 3256: dev br0 parent 12:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 13: dev br0 parent 1:13 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 4256: dev br0 parent 13:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 14: dev br0 parent 1:14 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 5256: dev br0 parent 14:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 15: dev br0 parent 1:15 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 6256: dev br0 parent 15:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 16: dev br0 parent 1:16 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 7256: dev br0 parent 16:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc htb 17: dev br0 parent 1:17 r2q 10 default 0x256 direct_packets_stat 0 direct_qlen 2
qdisc sfq 8256: dev br0 parent 17:256 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1002: dev br0 parent 10:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2002: dev br0 parent 11:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3002: dev br0 parent 12:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4002: dev br0 parent 13:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5002: dev br0 parent 14:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6002: dev br0 parent 15:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7002: dev br0 parent 16:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8002: dev br0 parent 17:2 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1003: dev br0 parent 10:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2003: dev br0 parent 11:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3003: dev br0 parent 12:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4003: dev br0 parent 13:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5003: dev br0 parent 14:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6003: dev br0 parent 15:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7003: dev br0 parent 16:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8003: dev br0 parent 17:3 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1004: dev br0 parent 10:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2004: dev br0 parent 11:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3004: dev br0 parent 12:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4004: dev br0 parent 13:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5004: dev br0 parent 14:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6004: dev br0 parent 15:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7004: dev br0 parent 16:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8004: dev br0 parent 17:4 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 1005: dev br0 parent 10:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 2005: dev br0 parent 11:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 3005: dev br0 parent 12:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 4005: dev br0 parent 13:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 5005: dev br0 parent 14:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 6005: dev br0 parent 15:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 7005: dev br0 parent 16:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc sfq 8005: dev br0 parent 17:5 limit 127p quantum 1514b depth 127 divisor 1024
qdisc cake 8004: dev ifb4eth0 root refcnt 2 bandwidth 501760Kbit besteffort dual-dsthost nat wash ingress no-ack-filter split-gso rtt 50ms noatm overhead 18 mpu 64
 
Checked 10 minutes later and it had changed. The only event I could find in the syslog during that window was a scheduled spdMerlin test.
Maybe pick this up in the spdMerlin thread and suggest that @Jack Yaz check which qos_type is in use before restarting the CAKE qdiscs after a test.
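The suggested guard could look something like the sketch below. The numeric value standing for CAKE here is an assumption for illustration only; check the real mapping on your own router with `nvram get qos_type`. The `nvram()` function is a stub standing in for the router's actual nvram binary so the sketch is self-contained:

```shell
# Guard sketch: only restart the CAKE qdiscs if CAKE is the active QoS mode.
nvram() { echo 1; }     # stub: pretend the router reports qos_type=1

QOS_TYPE_CAKE=1         # assumed value meaning "CAKE QoS" (verify on your router)
if [ "$(nvram get qos_type)" = "$QOS_TYPE_CAKE" ]; then
    echo "CAKE active: safe to restart cake qdiscs"
else
    echo "CAKE not active: leave qdiscs alone"
fi
```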
 
Much like DNS, DHCP servers are distributed and queries are not a big deal; they can surely handle thousands per second. Still, 15 minutes is very low. My FIOS lease is 2 hours, and that's the shortest I've ever seen from an ISP.
I have CenturyLink 100/100 fiber and the DHCP lease time is 30 minutes.
 
I have CenturyLink 100/100 fiber and the DHCP lease time is 30 minutes.

Maybe ISPs are starting to run into IPv4 crunches in certain areas and need to free up IPs quickly. With FIOS, the only time I saw a 15-minute lease was when they put me onto some special "test segment" to troubleshoot issues during the initial install.
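For context, a short lease doesn't mean the client waits for expiry: RFC 2131's default timers have the client unicast a renewal at half the lease time (T1) and start rebinding at 87.5% of it (T2), so a 15-minute lease means a renewal attempt roughly every 7.5 minutes. A quick check of the numbers:

```shell
# DHCP renewal timers per RFC 2131 defaults: T1 = 0.5 x lease, T2 = 0.875 x lease.
# For a 900-second (15-minute) lease:
awk 'BEGIN {
    lease = 900                                   # seconds
    printf "T1 (renew)  at %.1f s\n", lease * 0.5
    printf "T2 (rebind) at %.1f s\n", lease * 0.875
}'
```

So the server load is one renewal exchange per client every 7.5 minutes, which is trivial, but it does mean addresses return to the pool quickly if a client goes quiet.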
 
I am using a LAN-based DNS server (Synology NAS) which I have used with no issues for years; only in this recent 386.2 release has it stopped working.
Curious - I have a Synology too. What are the benefits of running DNS, and what are you using as your upstream? I really like the granular info I get with NextDNS.
 
Curious - I have a Synology too. What are the benefits of running DNS, and what are you using as your upstream? I really like the granular info I get with NextDNS.
Not OP, but Synology supports running Pi-hole as your local DNS on their NAS. Fantastic Ad Blocking. Upstream could then be any of the usual suspects or even a local instance of unbound on a Pi.
 
Off-topic - RT-AC86U, QoS off: is its CPU capable of using the full 1 Gbps down / 220 Mbps up of my connection?

Doing speed tests from my computer connected via RJ-45, I can "only" get around 900 Mbps down.
I want to check whether it is a router limitation or the ISP itself.
I'm on a '1 Gbps' GPON connection here, which with the overhead of PPPoE (required by my ISP) means the best I ever see is 930-940 Mbps down. The AC86U does that easily with QoS off (<10% CPU load) and still managed it with the stock QoS on (it doesn't quite max out one core).
 
On a Comcast 1 gig connection, I see a max of 930 Mbps with my AC86U. I've only provisioned for 40 Mbps outbound, and the max I see in speed tests is about 44 Mbps. Comcast overprovisions the slower data rates, but it's hard to say what's happening for the 1 Gbps tier. In the SNB AC86U test, the router's WAN-to-LAN throughput topped out at 940 Mbps. You might need a multigig modem and a multigig router to see the full throughput of a 1 Gbps connection.
 
I have CenturyLink 100/100 fiber and the DHCP lease time is 30 minutes.
Is it always like that? My ISP usually dishes out 2-day leases, but if there is infrastructure work going on in the area they'll hand out short leases, like an hour, to force everyone to "refresh" quickly.
 
I'm on a '1 Gbps' GPON connection here, which with the overhead of PPPoE (required by my ISP) means the best I ever see is 930-940 Mbps down. The AC86U does that easily with QoS off (<10% CPU load) and still managed it with the stock QoS on (it doesn't quite max out one core).
Exactly the same here :)
 
Last edited:
It's just a test for people who had traffic spikes shown in Traffic Monitor, to see if a patch from Asus resolves the issue.

Thanks for the quick turnaround/explanation @RMerlin :) Appreciate it.
 
This is not the case if switching to Router mode. Not sure if this is for a reason or if it is a bug.
It was deliberately done by Asus; however, I have no idea why. I'll re-allow it in AP mode and we'll see if it causes any issues later on.
 
On a Comcast 1 gig connection, I see a max of 930 Mbps with my AC86U. I've only provisioned for 40 Mbps outbound, and the max I see in speed tests is about 44 Mbps. Comcast overprovisions the slower data rates, but it's hard to say what's happening for the 1 Gbps tier. In the SNB AC86U test, the router's WAN-to-LAN throughput topped out at 940 Mbps. You might need a multigig modem and a multigig router to see the full throughput of a 1 Gbps connection.

A 1 gig physical connection will never show 1000 Mbps; there is overhead that consumes a portion of the line. 930 to 940 is right around the max you'll ever see.
 
Maybe ISPs are starting to run into IPv4 crunches in certain areas and need to free up IPs quickly. With FIOS, the only time I saw a 15-minute lease was when they put me onto some special "test segment" to troubleshoot issues during the initial install.
Nope, I've been with CenturyLink since they rolled out DSL in my area, and the DHCP lease time has always been 15 minutes. Now on 100/100 fiber I still get a 15-minute DHCP lease. Granted, when I went from 15/1 DSL to fiber at the new house I initially only got 15/1; it seems someone had forgotten to update the bandwidth provisioning. Maybe they forgot to change the DHCP lease time, too.
 
A 1 gig physical connection will never show 1000 Mbps; there is overhead that consumes a portion of the line. 930 to 940 is right around the max you'll ever see.
On 500 Mbps I got around 520 real-world, but on 1 Gbps I get the 940.

What's overhead?
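The overhead is per-frame framing that never shows up in a speed test. Assuming a standard 1500-byte MTU, each frame on the wire carries 14 bytes of Ethernet header, 4 of FCS, an 8-byte preamble, and a 12-byte inter-frame gap, and inside the payload the IP and TCP headers (plus the common 12-byte timestamp option) eat another 52 bytes. A quick back-of-envelope:

```shell
# Max TCP goodput on gigabit Ethernet with a 1500-byte MTU.
# On-wire frame: 1500 + 14 (Eth hdr) + 4 (FCS) + 8 (preamble) + 12 (IFG) = 1538 B
# TCP payload:   1500 - 20 (IP) - 20 (TCP) - 12 (TCP timestamps)         = 1448 B
awk 'BEGIN { printf "%.1f Mbps\n", 1000 * 1448 / 1538 }'
```

That works out to roughly 941 Mbps, which matches the 930-940 figures people report; PPPoE adds another 8 bytes per packet, which is why PPPoE lines top out slightly lower still.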
 
