SNB's Router Test Gets Tougher - A Preview


PPPoE uses a form of encoding rather than encryption. As with some media such as Wi-Fi, encoding is the basis of the protocol and encryption is secondary. Encryption can be used with PPPoE.

PPPoE testing should be included because a lot of ISPs use it, even on FTTP.
 
I'm looking into a SIMPLE bufferbloat test.

I won't be doing a full duplex test. The HTTP test is good enough to show router performance when pushed to its limits, and it's a more realistic scenario.
 

Thank you for taking up my suggestion to conduct a bufferbloat test, however simple the implementation of this test may be.

A 'pure' unidirectional test is probably not a very realistic scenario, because I'd imagine most home routers serve multiple users at the same time: there are 3-way TCP handshakes as users surf the internet, constant upstream and downstream flows when someone is playing a game, and possibly a steady stream of upstream data from IP cameras, all while the router handles download requests from every user.

If you're reluctant to attempt to saturate both directions, I'd suggest transmitting at least 10 Mbps in one direction while you attempt to saturate the other direction.
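
Here's a rough sketch of how that could be scripted with iperf3 (the server address and the second listening port are my assumptions; the far end would need two iperf3 server instances, e.g. one started with -p 5202):

Code:
# Saturate the download direction while holding ~10 Mbps of upstream UDP.
iperf3 -c 192.168.1.2 -t 60 -R &                 # -R: server sends to client (download)
iperf3 -c 192.168.1.2 -p 5202 -u -b 10M -t 60 &  # constant 10 Mbps upstream
wait                                             # let both 60-second runs finish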
 
I’d like to draw your attention to this seminal article by Scott Wasson from The Tech Report on (a) the importance of measuring frame-rendering times when evaluating graphics card performance and (b) the inadequacy of just measuring the average frames per second (FPS): http://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking.

Just as wildly fluctuating frame render times can give rise to ‘judder’ that severely degrades the gameplay experience, so can inconsistent network performance degrade the user experience. In other words, being able to achieve brief spurts of high throughput is not sufficient.

High throughput must be combined with consistent, reliable and repeatable performance. This requires an analysis of how performance varies over time, and, to this end, it is important to adopt a methodology similar to the one Jim Salter used in his second router test (see https://arstechnica.co.uk/gadgets/2016/09/diy-homebrew-router-speed-testing/).

I do not see a need to include these data in your actual reviews, for it would be too time-consuming to present them all in graphs. However, I urge you to at least log the data so that you can notice any abnormalities in a router’s performance.
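
Even something as crude as the loop below (hypothetical server IP and file name) would produce a time series you could eyeball for anomalies:

Code:
# Log a timestamp and the total fetch time for one test file, ~5 times a second.
while true; do
    printf '%s ' "$(date +%s.%N)" >> fetch_times.log
    curl -s -o /dev/null -w '%{time_total}\n' http://192.168.1.2/128kb.bin >> fetch_times.log
    sleep 0.2
done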
 
After re-reading the proposed methodology for testing routers, I noticed a couple of interesting results.

1. The baseline performance even for 128 concurrent connections is about 60% of gigabit speed.

2. The number of HTTP requests/sec can rise well above 100% of the baseline performance. You explained it thus: ‘The rise above 100% is due to the difference in relatively small number of requests for the large file sizes and, I think, variation due to the increase in errors that usually come along with the largest (and sometimes smallest) filesize tests.’

A 166% increase (as seen from the ASUS RT-88 upload test) cannot, however, plausibly be accounted for solely on the basis of random errors. This seems to indicate some other underlying issue.

Given these abnormalities, I hypothesize that the switch might be a bottleneck. But even if I am incorrect – even if the bottleneck lies instead with the nginx web server, or ApacheBench test client, or both – using a more powerful switch allows you to safely exclude the switch as a potential source of bottleneck.
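
For what it's worth, a baseline probe along these lines (hypothetical server IP and file name, mirroring the 128-connection case) would make it easy to re-run the numbers with and without the switch in the path:

Code:
# 10,000 requests at 128 concurrent connections against a 2 KB test file.
ab -n 10000 -c 128 http://192.168.1.2/2kb.bin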

If you’re open to the suggestion of using a more powerful switch, perhaps you may wish to contact Ubiquiti to obtain their newly released 10G Switch (https://www.ubnt.com/edgemax/edgeswitch-16-xg/). But if they are unwilling to provide you with a free sample, it is not prohibitively expensive to purchase a retail version of this 10G Switch (about USD 550).
 
The main performance factor is actually packets per second rather than bandwidth. This is also something that affects switches too but switches can do line rate with smaller packets too.

Doesnt matter what switch you use, all switch chips have a rating in packets per second as well and that is what matters. Adding a 10G switch may not solve the issue.
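
To put rough numbers on that, the theoretical packet rate at gigabit line speed depends heavily on frame size (the extra 20 bytes are the preamble plus inter-frame gap on the wire):

Code:
# Max frames/sec on GbE = 1e9 bits / ((frame size + 20 bytes) * 8 bits per byte)
for size in 64 512 1518; do
    awk -v s="$size" 'BEGIN { printf "%4d-byte frames: %.0f pps\n", s, 1e9 / ((s + 20) * 8) }'
done
# 64-byte frames work out to ~1.49 Mpps vs ~81 kpps for 1518-byte frames.

A device rated below ~1.49 Mpps will fall short of line rate on small packets even if it forwards full-size frames at 1 Gbps without breaking a sweat.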
 
netem can do some of this modeling... and this can be run on the WAN host simulator...

https://wiki.linuxfoundation.org/networking/netem

Latency handling is a big deal for some of the forum members here, and it can stress lower-end devices more so than higher-end devices... especially when Quality of Service comes into play...

I'll roll this back into the thread...

Because there is value there... in a LAN env, latency is very low, so to simulate the WAN side, one should consider adding latency, and netem does a nice job there - and this addresses some of the concerns about bufferbloat...
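
A minimal sketch, assuming eth0 is the WAN simulator's test-facing interface (the delay and loss figures are just placeholders):

Code:
# Add ~40 ms of normally-distributed delay and a touch of loss to egress traffic.
tc qdisc add dev eth0 root netem delay 40ms 5ms distribution normal loss 0.1%
tc qdisc show dev eth0        # verify the qdisc took effect
tc qdisc del dev eth0 root    # remove it when the run is done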

And on the server/host side.. I stand by this...

Code:
# Raise the socket buffer ceilings so TCP windows can grow on high-BDP paths
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# TCP receive/send buffers: min, default, max (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Allow a deeper queue of half-open connections under heavy concurrent load
net.ipv4.tcp_max_syn_backlog = 4096

This again goes back to the host/client latency over the WAN - the host is going to adjust its RWINs here based on traffic... It's one of the issues I had with Ars Technica's testing there... setting the min/max size of the window is not representative of real-world expectations...
 
it is not prohibitively expensive to purchase a retail version of this 10G Switch (about USD 550).

Not going into business issues or whatever, but forking out $550 USD is not trivial...
 
The main performance factor is actually packets per second rather than bandwidth. This is also something that affects switches too but switches can do line rate with smaller packets too.

Doesnt matter what switch you use, all switch chips have a rating in packets per second as well and that is what matters. Adding a 10G switch may not solve the issue.

I'm not exactly sure I understand you, but I believe we have a common understanding here. In saying that 'switches can do line rate with smaller packets too', rather than 'will do line rate', you also recognise that some switches are incapable of achieving line speed when forwarding smaller packets. But we have no way of knowing whether the switch deployed by SNB is such a switch, because the type of switch was not disclosed. Using the switch I suggested will pretty much guarantee that the switch won't be a bottleneck, not so much because it is a 10G switch, but because it is a more powerful switch.

However, the foregoing interpretation seems inconsistent with your second paragraph, specifically the part when you said '[d]oesnt [sic] matter what switch you use'. So I'm not sure whether we are really disagreeing. Hopefully I have made myself clearer.

And in response to sfx2000:
  1. As System Error Message said, 'The main performance factor is actually packets per second rather than bandwidth.' I agree most switches can do 1 Gbps, but only when the Ethernet frames are at the maximum size of 1518 bytes.
  2. Also, there is a significant difference between 'not prohibitively expensive' and 'not trivial'. Again, I'm not sure whether we are really disagreeing.
I hope I don't come across as being abrasive. Ultimately, I think we are all in the same boat: we want the most demanding and future-proof router test possible. :)

Cheers!
 
When using just 2 ports of a 5-port dumb switch, there's really no performance issue, since the switch fabric clearly has far more capacity than two ports need. Some test cases just work better with a switch in between than without, but I put that down to the NIC configuration and not the switch. However, it is likely that @thiggins uses a switch because of his network config involving his servers/test beds and the test device itself. The test server could simply be performing some other task when not testing.

Ethernet frames aren't a constant size; they only have a maximum allowed size, and this is true of all networks, be it Wi-Fi, WAN, fibre optic, or Ethernet.
 
I hope I don't come across as being abrasive. Ultimately, I think we are all in the same boat: we want the most demanding and future-proof router test possible. :)

It's fine ;)

And it's good discussion...
 
Sorry for being late to the show; I forgot to keep up with this thread after the initial posts.

PPPoE isn't encrypted. The encapsulation adds some overhead, and it's not always compatible with hardware manufacturers' NAT acceleration (Broadcom added support for it circa their 5.110 SDK if I recall - that came out around 2012 or 2013). You also have to take care of properly handling the larger packet size - an incorrect MTU will cause fragmentation.
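
For reference, the usual mitigations on a Linux box look something like this (treat it as a sketch; ppp0 is the PPPoE session interface):

Code:
# PPPoE encapsulation costs 8 bytes, so the tunnel MTU drops from 1500 to 1492.
ip link set dev ppp0 mtu 1492
# Clamp TCP MSS to the path MTU so forwarded flows don't get fragmented.
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu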

Configuring a PPPoE server in a local Linux VM isn't that bad; I've done it in the past to test something. If you want something easier though, I believe DD-WRT comes with a PPPoE server. You could potentially use some DD-WRT-based router to host your PPPoE concentrator and connect your test client to it. If the idea is to test bufferbloat and such though, then the concentrator/router might affect your test results.
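
If anyone wants to try the VM route, rp-pppoe's pppoe-server is about all it takes (the interface name and address pool here are made up):

Code:
# Answer PPPoE discovery on eth1, use 10.67.15.1 as the server-side address,
# and hand out addresses from 10.67.15.10 to at most 5 concurrent sessions.
pppoe-server -I eth1 -L 10.67.15.1 -R 10.67.15.10 -N 5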

Might be worth checking if Streisand supports a PPPoE server. If it does, then either running your own VM, or getting a cheap VPS from Linode/DigitalOcean to deploy Streisand, might be good. It would even be usable to test other tunnel protocols (PPTP/L2TP/IPsec/OpenVPN/SOCKS proxy/etc.).

Streisand would probably be quite useful there if the goal is to test various tunnel/proxy/VPN technologies.
 
Encryption with PPPoE is optional and most ISPs don't use it. But the overhead of PPPoE does use some processing, and it is very widely used, so it should be included for the tests to be more accurate.

If server load is a concern, one way to test this is to have 2 servers and the router on the same switch. PPPoE works at layer 2.

For example, 1 server needs 2 ports, both connected to the switch on different IP subnets. The PPPoE server will give the router an IP in the same subnet as the 2nd interface, and the server running PPPoE only needs to route/bridge at layer 3. This is usually more representative of real-world configurations.
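
A rough sketch of that layout on the server, with made-up subnets:

Code:
# eth0 carries the plain test traffic; eth1 backs the PPPoE sessions.
ip addr add 192.168.1.2/24 dev eth0
ip addr add 10.0.0.1/24 dev eth1
sysctl -w net.ipv4.ip_forward=1   # let the server route between the two subnets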
 
Thanks for all the advice on PPPoE guys. It's not going to be in this round of the testing.
 
Ok, folks. I've done further testing to investigate the switch question. I did three test runs between the testbed systems with the NICs directly connected and three connected via a Gigabit switch.

I did find a difference with and without the switch, much to my surprise. Granted that three runs is a damn small sample set, but the standard deviation of the three runs was significantly higher with the switch. Biggest variance was with the 2KB file size.

Since I don't really need the switch, I'm dropping it from the testbed configuration. I'll just set the WAN to a static IP for the test.

Additionally, I've decided there isn't a lot of value in varying the number of concurrent sessions. The most significant throughput changes come with file size; concurrent sessions don't change the numbers that much. I suspect the oddities I'm seeing at the extremes, i.e. the smallest file sizes and session counts and the largest file sizes and session counts, are more about testbed limitations than the device under test.
 
I did find a difference with and without the switch, much to my surprise. Granted that three runs is a damn small sample set, but the standard deviation of the three runs was significantly higher with the switch. Biggest variance was with the 2KB file size.

What was the switch vendor and model?
 
