Is this review dubious?


I think you should use the reviews and charts already provided on the site:


https://www.smallnetbuilder.com/wir...ghthawk-x4s-smart-wifi-gaming-router-reviewed

https://www.smallnetbuilder.com/tools/charts/router/view

The tester can only go by the results they obtain from the unit provided. You only have to look at the comments here on any make or model to find people who, for whatever reason, are not happy with their WiFi results.

Every home is different and every user experience is different. I have a nephew who changed routers (and ISPs) more often than many folk change their socks. He wouldn't listen to anyone; he always knew better. His router sat on top of a 42-inch Toshiba CRT TV, with the leads hanging down the back perfectly in line with the CRT... :rolleyes: o_O
 
David Murphy has done router and other networking reviews for thewirecutter, too. So he has been around.

Interesting that Reviewed (owned by USA Today) is copying thewirecutter's (owned by NY Times) format.

Without seeing the actual test results, there is no basis for comparison. I will say the main throughput tool he used, LAN Speed Test, wasn't particularly accurate when we reviewed it. It seemed to have speed limitations. He would be better off using iperf3.

His latency test is an adaptation of the method Jim Salter has developed and is now using for his router and mesh WiFi testing. Jim developed NetBurn, an open-source tool that I'm also looking at for WiFi capacity testing.

The flaw I see in David's method is that he is using YouTube to generate background traffic. Streaming services don't offer a constant load. Instead they burst traffic to fill the local buffer, then do nothing until the next buffer reload. Jim runs all traffic from a local server in his testing, using different fetch rates and file sizes to simulate different traffic types.
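
To make the difference concrete, here is a toy Python sketch (my own illustration, not Jim's harness or NetBurn) of how a streaming client bursts to refill its playback buffer and then goes quiet, which is why it makes a poor background load compared to a steady, server-driven fetch:

```python
# Toy illustration: a streaming client bursts to refill its playback buffer,
# then idles while playback drains it, so the network load is on/off, not constant.
import time

SEGMENT_SECONDS = 4        # seconds of video delivered per fetched segment
BUFFER_TARGET = 30         # keep roughly 30 s of video buffered
PLAYBACK_RATE = 1.0        # buffer drains in real time

def fetch_segment():
    """Stand-in for an HTTP GET of one video segment (hypothetical)."""
    time.sleep(0.2)        # pretend the segment downloads in 200 ms

def bursty_streaming_load(duration=20):
    buffered = 0.0
    start = last = time.time()
    while time.time() - start < duration:
        now = time.time()
        buffered = max(0.0, buffered - (now - last) * PLAYBACK_RATE)
        last = now
        if buffered < BUFFER_TARGET:
            fetch_segment()              # burst: the network is busy
            buffered += SEGMENT_SECONDS
        else:
            time.sleep(0.5)              # idle: the network sees no load at all

if __name__ == "__main__":
    bursty_streaming_load()
```

Jim's server-driven approach replaces that bursty fetch with fixed-rate, fixed-size requests, so the background load stays predictable for the whole run.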
 
His latency test is an adaptation of the method Jim Salter has developed and is now using for his router and mesh WiFi testing. Jim developed NetBurn, an open-source tool that I'm also looking at for WiFi capacity testing.

I started seeing some problems here... smart guy, but his testing methodology is questionable at best for WiFi clients.

I installed GalliumOS Linux on four Chromebooks, set them up with Linksys WUSB-6300 USB3 802.11ac 2×2 NICs, and got to testing against a reference Archer C7 wifi router. For this first round of very-much-beta testing, the Chromebooks aren’t really properly distributed around the house – the “4kstream” Chromebook is a pretty reasonable 20-ish feet away in the next room, but the other three were just sitting on the workbench right next to the router.

The Archer C7 got default settings overall, with a single SSID for both 5 GHz and 2.4 GHz bands. There was clearly no band-steering in play on the C7, as all four Chromebooks associated with the 5 GHz radio. This led to some unsurprisingly crappy results for our simultaneous tests:
 
I started seeing some problems here... smart guy, but his testing methodology is questionable at best for WiFi clients.
What problem(s) do you see?
I'm investigating using Netburn for capacity testing. I've hit a dead end looking at latency using UDP.
 
What problem(s) do you see?
I'm investigating using Netburn for capacity testing. I've hit a dead end looking at latency using UDP.

His methodology tests the application layer, not the network layer... web client to server - and then we have the rest of the OS/Application stack to worry about...

netperf is a good tool here for TCP/UDP latency - and it can cover iperf/iperf3 data...
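
For reference, netperf's TCP_RR/UDP_RR tests time single request/response transactions. Here's a rough Python stand-in for the UDP case, just to show the idea - the echo server, port, and sample count are all made up for illustration, and it is not netperf itself:

```python
# Rough sketch of a netperf-style UDP request/response (RR) latency test.
# Loopback by default; point HOST at a remote echo responder to measure a real link.
import socket, statistics, threading, time

HOST, PORT, SAMPLES = "127.0.0.1", 5555, 200   # all values arbitrary, for illustration

def echo_server():
    """Tiny UDP echo responder so the sketch is self-contained."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((HOST, PORT))
    while True:
        data, addr = srv.recvfrom(64)
        srv.sendto(data, addr)

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(1.0)
rtts = []
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    cli.sendto(b"ping", (HOST, PORT))
    cli.recvfrom(64)                           # one request, one response = one RR transaction
    rtts.append((time.perf_counter() - t0) * 1000)

print(f"median RTT {statistics.median(rtts):.3f} ms, "
      f"p99 {sorted(rtts)[int(0.99 * len(rtts))]:.3f} ms")
```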
 
His methodology tests the application layer, not the network layer... web client to server - and then we have the rest of the OS/Application stack to worry about...
That's true. But Linux is pretty good, better than Windows at least, at application repeatability.

I already use Apachebench for wired throughput testing and think it shows performance weaknesses I don't see using iperf3. I've baselined Netburn and its latency and throughput capabilities are more than adequate for WiFi testing, at least the relatively simple benchmarks I run.
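
Roughly, the kind of application-layer measurement both tools make looks like this - a crude Python sketch, not the actual Apachebench or Netburn code, with a placeholder URL standing in for whatever file the local test server exposes:

```python
# Crude sketch of an application-layer fetch-latency measurement:
# repeated HTTP GETs against a local server, recording per-request completion time.
import time
import urllib.request

URL = "http://192.168.1.50/128KB.bin"   # hypothetical file on the local test server
REQUESTS = 100

times_ms = []
for _ in range(REQUESTS):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()                     # pull the whole body, as a real client would
    times_ms.append((time.perf_counter() - t0) * 1000)

times_ms.sort()
print(f"mean {sum(times_ms)/len(times_ms):.1f} ms, "
      f"p90 {times_ms[int(0.9 * len(times_ms))]:.1f} ms")
```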

My tests with UDP and iperf3 ran into performance limitations on different platforms. And, besides some IPTV providers, what uses UDP any more?
 
what uses UDP any more?

DNS. Although TCP is also supported, most clients (AFAIK) still use UDP.

VPNs also do, including IPsec and OpenVPN. OpenVPN supports TCP, but it generally causes issues when tunneling SIP (that's what I was told recently by an engineer; I never tested it personally).

And there's also torrenting.
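
Just to show how lightweight that first case is, a normal DNS lookup is literally one UDP datagram out and one back - here's a minimal hand-rolled A-record query against a public resolver, purely for illustration:

```python
# Minimal illustration that an everyday DNS lookup is a single UDP exchange.
import socket
import struct

def build_query(name, qid=0x1234):
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)   # RD bit set, 1 question
    question = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + question + struct.pack(">HH", 1, 1)         # QTYPE=A, QCLASS=IN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(build_query("example.com"), ("8.8.8.8", 53))
reply, _ = sock.recvfrom(512)
print(f"got {len(reply)}-byte DNS reply over UDP")
```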
 
That's true. But Linux is pretty good, better than Windows at least, at application repeatability.

I already use Apachebench for wired throughput testing and think it shows performance weaknesses I don't see using iperf3. I've baselined Netburn and its latency and throughput capabilities are more than adequate for WiFi testing, at least the relatively simple benchmarks I run.

My tests with UDP and iperf3 ran into performance limitations on different platforms. And, besides some IPTV providers, what uses UDP any more?

Linux is great for benchmarking - once it's tuned, it does pretty well - same goes for the BSDs, although they're perhaps a bit more esoteric...

ApacheBench is good - it goes without saying that, again, tuning comes into play - the desktop Linux flavors have some trade-offs that a server-tuned install doesn't have, and vice versa...

Regarding UDP - it's pretty common, especially with the OpenVPN crowd, as that is the default and performance is better there - also all the media streaming stuff, and VoIP. SIP is TCP for the control plane, but the data plane is generally UDP via RTP, so performance there is still relevant.
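
To illustrate why that UDP media plane still matters for testing, here's a quick self-contained Python sketch - not a real SIP/RTP stack, and the port, packet count, and pacing are arbitrary - that sends a constant-rate UDP stream to itself and reports inter-arrival jitter, the number a VoIP call actually cares about:

```python
# Hedged sketch: constant-rate UDP stream plus inter-arrival jitter measurement.
import socket, statistics, struct, threading, time

PORT, PACKETS, INTERVAL = 6000, 200, 0.020    # 20 ms spacing, like typical RTP audio frames

arrivals = []

def receiver():
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", PORT))
    rx.settimeout(2.0)
    try:
        for _ in range(PACKETS):
            rx.recvfrom(64)
            arrivals.append(time.perf_counter())
    except socket.timeout:
        pass                                   # tolerate loss; we only need most packets

rx_thread = threading.Thread(target=receiver)
rx_thread.start()
time.sleep(0.2)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(PACKETS):
    tx.sendto(struct.pack(">I", seq), ("127.0.0.1", PORT))
    time.sleep(INTERVAL)
rx_thread.join()

gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
print(f"mean gap {statistics.mean(gaps)*1000:.2f} ms, "
      f"jitter (stdev) {statistics.stdev(gaps)*1000:.2f} ms")
```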

Going back to netburn - anything can change the results: the application layer itself, kernel/library updates (especially now with the Meltdown/Spectre patches rolling out) - so over time it can become less relevant when comparing different devices. That's why it's critical to keep the client and host servers in a configuration-managed environment - locked down, so to speak...

You were looking for something that tests latency at the network level - and netperf is a pretty good tool - it's one of the tools that our cloud guys use for our appliance performance testing. They've done some extra work to visualize the results in charts that are easy to understand...
 
What problem(s) do you see?

I'll start here...

"I installed GalliumOS Linux on four Chromebooks, set them up with Linksys WUSB-6300 USB3 802.11ac 2×2 NICs"

Which pretty much means he's running a ChromeBook in Developer Mode, using Crouton to run GalliumOS in a chroot over ChromeOS, and using a driver that may or may not be optimal for ChromeOS to begin with. He doesn't specify which model of ChromeBook, and this is relevant as there are many, and there are at least three different kernels that I know of in play there - and even within a platform, there are patches and updates depending on where ChromeOS is at... it's not a consistent platform, and it's one that is hard for others to even reproduce.

Need I say more?
 
I'll start here...

"I installed GalliumOS Linux on four Chromebooks, set them up with Linksys WUSB-6300 USB3 802.11ac 2×2 NICs"

Which pretty much means he's running a ChromeBook in Developer Mode, using Crouton to run GalliumOS in a chroot over ChromeOS, and using a driver that may or may not be optimal for ChromeOS to begin with.

Nope. It's a direct Linux boot from a separate partition, no ChromeOS involved. Standard Ubuntu kernels. Model of chromebook is largely irrelevant, as long as we're talking Intel and not ARM, since the NIC in question is a USB3 external anyway. The tests are extremely gentle in terms of CPU firepower, so the difference between one Celeron CPU or the other isn't enough to show up in results even for wired gigabit tests, let alone wifi (and yes, I do test baselines to be certain).

The point about testing that hits the application layer is valid, but I think it highlights a fundamental misunderstanding about why I'm doing it that way. I'm not trying to expose abstract technical differences on layer 1 or 2, I'm trying to expose differences in actual quality of service in ways relevant to real world use. You can't do that if you ignore the application layer.

It's also a valid point that changes in the kernel - and more importantly, in the NIC driver - can and will eventually alter the baselines and make it difficult to directly compare old reviews to newer ones. But, again, I'm not doing abstract reviews of hardware at a software-agnostic level. I'm doing reviews that test the *user* experience - and that means, however ephemeral it may be, using current software and drivers.
 
Nope. It's a direct Linux boot from a separate partition, no ChromeOS involved. Standard Ubuntu kernels. Model of chromebook is largely irrelevant, as long as we're talking Intel and not ARM, since the NIC in question is a USB3 external anyway. The tests are extremely gentle in terms of CPU firepower, so the difference between one Celeron CPU or the other isn't enough to show up in results even for wired gigabit tests, let alone wifi (and yes, I do test baselines to be certain).

The point about testing that hits the application layer is valid, but I think it highlights a fundamental misunderstanding about why I'm doing it that way. I'm not trying to expose abstract technical differences on layer 1 or 2, I'm trying to expose differences in actual quality of service in ways relevant to real world use. You can't do that if you ignore the application layer.

It's also a valid point that changes in the kernel - and more importantly, in the NIC driver - can and will eventually alter the baselines and make it difficult to directly compare old reviews to newer ones. But, again, I'm not doing abstract reviews of hardware at a software-agnostic level. I'm doing reviews that test the *user* experience - and that means, however ephemeral it may be, using current software and drivers.
USB3 NICs can sometimes be an issue. Not only do they have more overhead and higher power use, but they also have higher latency than PCIe. For instance, earlier USB3 implementations ran at 2.5Gb/s rather than 5Gb/s, and then there's the issue of the USB controller itself: USB3 may be rated for 5Gb/s, but the controller may be the bottleneck. It's much like the LinusTechTips video where he reviewed a DIY SSD built from multiple SD cards and got poor performance (the controller was the limit).

Rather than 4 laptops with the same NICs, 4 laptops with different PCIe wireless AC NICs would be a far better test. There are cases where USB NICs are better than internal ones, but not always.

When I stress-tested wireless AC's achievable speeds myself, I used the internal NIC, a 2x2 Intel AC WiFi card that I had bought to upgrade my laptop.
 
USB3 NICs can sometimes be an issue. Not only do they have more overhead and higher power use, but they also have higher latency than PCIe. For instance, earlier USB3 implementations ran at 2.5Gb/s rather than 5Gb/s, and then there's the issue of the USB controller itself: USB3 may be rated for 5Gb/s, but the controller may be the bottleneck. It's much like the LinusTechTips video where he reviewed a DIY SSD built from multiple SD cards and got poor performance (the controller was the limit).

Rather than 4 laptops with the same NICs, 4 laptops with different PCIe wireless AC NICs would be a far better test. There are cases where USB NICs are better than internal ones, but not always.

When I stress-tested wireless AC's achievable speeds myself, I used the internal NIC, a 2x2 Intel AC WiFi card that I had bought to upgrade my laptop.

About 18 months ago, I tested every NIC I could get my hands on, from old Qualcomm and Centrino 802.11n to Intel 7265 (and newer), along with the majority of the USB3 NICs on the market. I selected the WUSB6300 because it had the most consistent results and the highest long-range TX throughput of everything I tested.

It's no good for testing roaming, though, since the Linux drivers don't support 802.11 r/k/v.

With that said, *those* tests were iperf3 (in both directions) only. It would be interesting to do thorough multi-client tests of the various NICs using a reference router, rather than the other way around.
 
Nope. It's a direct Linux boot from a separate partition, no ChromeOS involved. Standard Ubuntu kernels. Model of chromebook is largely irrelevant, as long as we're talking Intel and not ARM, since the NIC in question is a USB3 external anyway. The tests are extremely gentle in terms of CPU firepower, so the difference between one Celeron CPU or the other isn't enough to show up in results even for wired gigabit tests, let alone wifi (and yes, I do test baselines to be certain).

Hi Jim! Welcome to the forums here on SNB - your efforts have been noticed :)

I worry a bit about the testing, and the methods - but let's touch on one item...

With a ChromeBook, why replace a tightly optimized kernel with a community science project? Google has spent a huge amount of effort on development and QA for their OS and drivers, so it would make more sense to use the kernel in place, and the driver support for the WiFi NICs on the Chromebook.*

Easy enough to do by going into Developer Mode on the Chromebook and installing a Debian chroot - one gets a Debian userland, but still runs on the Google kernel.

Speaking of the NICs - the Realtek RTL8812AU has been problematic at best. With newer kernels post-3.10 it's been a big problem - folks have been sorting it out independently, outside of Realtek, but there's no official support from the chipset vendor in later kernels - so it's not very well optimized. Good enough works, I suppose, but this would not be my first choice for benchmarking.

The point about testing that hits the application layer is valid, but I think it highlights a fundamental misunderstanding about why I'm doing it that way. I'm not trying to expose abstract technical differences on layer 1 or 2, I'm trying to expose differences in actual quality of service in ways relevant to real world use. You can't do that if you ignore the application layer.

Agreed - for relative testing, it's a darn good start - but like you mention below, things can and do change.

Going back to the network layer though - testing there is very relevant, and there are some great tools to be had there. Adding NetBurn validates that testing - if there are problems at lower layers of the stack, they will be evident at the application layer.

It's also a valid point that changes in the kernel - and more importantly, in the NIC driver - can and will eventually alter the baselines and make it difficult to directly compare old reviews to newer ones. But, again, I'm not doing abstract reviews of hardware at a software-agnostic level. I'm doing reviews that test the *user* experience - and that means, however ephemeral it may be, using current software and drivers.

Config management can help there - once one has a good baseline, leave it alone. It becomes the "golden image" so that things can be reproduced over time and, more importantly, with detailed configuration items, reproduced independently.
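
As a purely illustrative example of what I mean by locking things down, even something as simple as snapshotting the client's kernel, driver, and tool versions alongside every run goes a long way (the module name below assumes the out-of-tree 8812au driver - adjust to whatever is actually loaded):

```python
# Illustrative only: snapshot the client "golden image" so a WiFi test run can be
# tied to an exact kernel / driver / tool combination.
import json, platform, subprocess

def cmd(args):
    """Capture a command's output, or note that it isn't available."""
    try:
        return subprocess.run(args, capture_output=True, text=True, check=True).stdout.strip()
    except Exception as exc:
        return f"unavailable: {exc}"

baseline = {
    "kernel": platform.release(),
    "distro": platform.platform(),
    "wifi_driver": cmd(["modinfo", "-F", "version", "rtl8812au"]),  # assumes the out-of-tree 8812au module name
    "iperf3": cmd(["iperf3", "--version"]),
}

with open("test-baseline.json", "w") as fh:
    json.dump(baseline, fh, indent=2)
print(json.dumps(baseline, indent=2))
```

Diff two of those snapshots and you know immediately whether an old result and a new one are even comparable.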

Anyways - my comments are meant to be constructive - I'm hoping you continue to refine and clarify the testing. It's good work, and there are opportunities to make it even better.

* Side comment - Chrome is a great browser, and I feel that Chrome on a Chromebook is probably the best-case solution for a given hardware platform, and there Google chooses very well - even on a cheap sub-$200 USD Lenovo N22 (Intel Celeron N3060) it performs very well, and the Intel 7265 WiFi NIC included is pretty impressive for a low-end unit.
 
