
Linksys Announces Velop Mesh Wi-Fi System


What I'm curious about is what Tim mentioned about true band steering working by de-authing clients on 2.4GHz in the hope that they reconnect on 5GHz, which seems to indicate this is something that can come into play after the initial connection. For instance, if I'm outside my phone connects to 2.4GHz, but when I come inside it won't switch to 5GHz because it's still getting a good signal. In practice this is not necessarily a big deal, since I think when an iPhone has been asleep for a little while and you wake it, it rescans Wi-Fi and connects to the best network.

Still, I'm curious... is that how band steering works? Not necessarily just on the initial connection, but also for already-connected devices that the router thinks would be better off being forced over to 5GHz?

As I understand it, Velop doesn't deauth clients; Tim was talking about Google WiFi, which uses RSSI-based deauth band steering. Velop uses 802.11k and v steering to communicate with compatible clients, urging them to switch BSSIDs. Pretty much no common consumer client supports this except iOS devices (yet), as far as I know. And if it's working properly, it'll still be limited to switching away from bad Wi-Fi, not from fast Wi-Fi to faster Wi-Fi. So your iPhone situation sounds about normal. As for your 5GHz situation, you might have great signal strength but lots of other BSSIDs on your channel(s) in the 2.4GHz band. 2.4GHz does reach rather far, unfortunately; I'm running into a huge problem with that myself. So all of your clients going to 5GHz and largely ignoring 2.4GHz might be the best scenario, despite being a bit confusing.
 
Will we see a day when this is reversed, and the clients just advertise that they are there while the nodes or APs make the decision based on location, signal strength, and client capability? Or is that a step too far?
This would require an entire re-do of how Wi-Fi works. 802.11 was never designed to put intelligence in the network vs. the device. Cellular systems were.
 
As I understand it, Velop doesn't deauth clients,
Correct. I was referring to a general method of encouraging devices to move off an AP.

From the Wi-Fi Nigel article linked in the Sticky client sticky:
One of the mechanisms provided in 802.11v is ‘BSS Transition Management’. This mechanism allows an access point to request that a client transitions to a specific AP, or to supply a set of preferred APs. This mechanism can again provide our client with improved roaming decision data to facilitate better roaming decisions.

So APs supporting 11v can request devices supporting 11v to move. Since it uses BSSID lists, it can be used both for band steering within the same AP and for AP-to-AP steering.

The most reliable methods for band steering are pre-association and include withholding responses to probe and association requests. But again, some devices don't take the hint and will keep trying to connect to the strongest signal. Well-designed APs detect uncooperative STAs and will eventually connect the stubborn client.
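
Just to illustrate the idea, here is a minimal sketch of what that pre-association logic might look like. This is not any vendor's actual implementation; the thresholds and helper names are invented for the example:

```python
# Hypothetical pre-association band-steering sketch -- NOT any vendor's actual code.
# Idea: withhold 2.4 GHz probe/association responses from dual-band-capable
# clients for a while, hoping they associate on 5 GHz, but give up after a
# few ignored attempts so stubborn clients can still connect.

from collections import defaultdict
import time

MAX_IGNORED_ATTEMPTS = 3      # assumed: how many 2.4 GHz attempts to ignore
ATTEMPT_WINDOW_SEC = 30       # assumed: forget attempts older than this

attempts = defaultdict(list)  # client MAC -> timestamps of ignored 2.4 GHz attempts

def should_respond_on_2g(mac: str, client_is_5g_capable: bool) -> bool:
    """Decide whether the 2.4 GHz radio should answer this probe/assoc request."""
    if not client_is_5g_capable:
        return True                       # single-band client: always answer
    now = time.time()
    recent = [t for t in attempts[mac] if now - t < ATTEMPT_WINDOW_SEC]
    attempts[mac] = recent
    if len(recent) >= MAX_IGNORED_ATTEMPTS:
        return True                       # stubborn client: let it in on 2.4 GHz
    attempts[mac].append(now)
    return False                          # withhold response, nudge toward 5 GHz
```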
 
This would require an entire re-do of how Wi-Fi works. 802.11 was never designed to put intelligence in the network vs. the device. Cellular systems were.

Indeed... and the relative simplicity of the WiFi air interface has enabled things at a much faster pace than what we see in Wireless WAN.

From my perspective, the challenges of mesh/multipoint WiFi are pretty significant, and the OEMs, along with the chipset vendors, have put in some very good engineering effort here.

Looking at the features that have rolled out in 2016, the Mesh stuff is probably the most "useful" for customers compared to MU-MIMO or 160MHz channels.

Good stuff!
 
So team, sanity-check me here. I'm just poking around in the web admin and making sure I've got this right:

The 2.4GHz is a standard radio, but the 5GHz is split into two radios:
one with the lower channels (36/40/44/48) and one with the higher channels (149/153/157/161/165).
Now if you run the 5GHz at 20MHz channel width, obviously all of these are available.
But if you are leveraging 40MHz channels, then you're limited to two channel pairs per radio (assuming you never run the same channel on different nodes).

So for Node1, 5GHz Radio1 at 40MHz you can only have 36+40 or 44+48.
On the same Node1, 5GHz Radio2 at 40MHz you can only have 149+153 or 157+161
(it doesn't appear to support DFS).

So this works fine for a 2-node system: you could have 36+40 on Node1 and 44+48 on Node2, with either 149+153 or 157+161 for the backhaul.
But when you get into a 3-node system, how can you avoid having the same channel on multiple nodes for Radio1/Radio2 (unless you drop the 5GHz back to 20MHz, which might explain the performance Tim saw)?

Obviously with an Ethernet backhaul this goes away: you can disable either the higher or lower 5GHz radio on one of the nodes and manually set the channels. But I'm scratching my head over how you can actually leverage the 40MHz potential of the 5GHz radios when they are split like this.
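
For what it's worth, here's a quick brute-force sketch of that channel-pair math (the pair lists are just the non-DFS options the web admin shows; the node count is whatever you plug in). It only confirms the suspicion above: with two 40MHz pairs per radio, three nodes can't all get unique pairs.

```python
# Quick check: can N nodes each get a distinct 40 MHz pair per 5 GHz radio,
# given only the non-DFS channels this unit exposes?
from itertools import permutations

RADIO1_PAIRS = [(36, 40), (44, 48)]        # low-band 5 GHz radio
RADIO2_PAIRS = [(149, 153), (157, 161)]    # high-band 5 GHz radio (165 has no 40 MHz partner)

def distinct_assignments(pairs, num_nodes):
    """All ways to give each node its own 40 MHz pair (no reuse)."""
    return list(permutations(pairs, num_nodes))

for nodes in (2, 3):
    print(f"{nodes} nodes, Radio1: {len(distinct_assignments(RADIO1_PAIRS, nodes))} plans")
    print(f"{nodes} nodes, Radio2: {len(distinct_assignments(RADIO2_PAIRS, nodes))} plans")
# With only two pairs per radio, three nodes -> 0 plans: some channel reuse
# (or a drop to 20 MHz, or DFS support) is unavoidable.
```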
 
While you wait for a response from someone more knowledgeable than me, I THINK how it works is that multiple nodes can coexist on the same channels, but that doesn't change the fact that there's only so much data that can be piped through any given channel in the same household. Real-world example:

I have 3 nodes using the same two 80MHz-wide channels. If there's just one active device, say my iPad, this does not appear to affect throughput. I get ~500Mbps on my hardwired nodes, and ~400Mbps on my wireless node. Similarly, I ran an iPerf test with a device connected through one node, and did a regular internet speed test with another device connected through a different node (but both of these connections were on the same 80MHz channel). In this scenario, the internet speed test is limited by my 60Mbps cable connection, but still, it didn't appear to have a huge impact on the simultaneous iPerf test (which fully saturates the connection). The iPerf test may have dipped a bit, but it definitely does not come to a screeching halt just because there's another node in the household on that same channel. I believe the Wi-Fi protocol has mechanisms to allow multiple devices (that are on the same channel) to share the available bandwidth by taking turns transmitting data, rather than just blindly interfering with each other.

But on the other hand, if I were to run full-speed iPerf tests on both devices (which I can't easily do at the moment because I only have the iPerf server running on one computer), again on two different nodes but on the same channel, I'd be willing to bet that the throughput would be approximately halved on each.

So, I think what this means in practice is that if you have multiple clients in the household that are all constantly transferring a lot of data at the same time, you'd see the speed hit. But if the usage among devices is more sporadic in nature (which it certainly is in my household), it's probably imperceptible.

Another observation... does divvying up the band with 20MHz or 40MHz channel widths really accomplish anything? Take a hypothetical situation with two different devices trying to transfer at full speed to two different nodes: if they are sharing an 80MHz channel, presumably their speeds would be halved. But if they were instead on separate 40MHz channels, wouldn't their speeds also be halved (because of the narrower channels)? Maybe eliminating the overhead of having to cooperate on the same channel would give two 40MHz channels a bit better speed in this scenario compared to sharing one 80MHz channel, but then again you'd be giving up the ability of clients to have the full use of the 80MHz channel during the times when only one of them was talking at full speed.
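
To put rough numbers on that hunch, here's a tiny back-of-the-envelope sketch. It assumes throughput scales linearly with channel width and that two busy clients split airtime evenly, which glosses over contention overhead; the 500Mbps figure is just my single-client number used for illustration.

```python
# Back-of-the-envelope comparison, assuming throughput scales with channel
# width and airtime splits evenly between two busy clients. Numbers are
# illustrative, not measurements.

FULL_80MHZ_RATE = 500  # Mbps, roughly what one client sees alone on 80 MHz (assumed)

# Scenario A: both clients share one 80 MHz channel, taking turns.
shared_80 = FULL_80MHZ_RATE / 2           # ~250 Mbps each while both are busy

# Scenario B: each client gets its own 40 MHz channel (half the width).
dedicated_40 = FULL_80MHZ_RATE / 2        # ~250 Mbps each, busy or not

print(f"Shared 80 MHz:    ~{shared_80:.0f} Mbps per client when both are active")
print(f"Dedicated 40 MHz: ~{dedicated_40:.0f} Mbps per client")
# The difference shows up when only one client is active: on the shared
# 80 MHz channel it can burst to ~500 Mbps, on its own 40 MHz it stays ~250.
```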
 
Wide 80MHz and 160MHz channels improve throughput only when the full channel bandwidth is free from interfering transmissions.
You can read more about it here.
 
Yes, all three (or more) nodes could use the same channel, either 2.4 or 5.

5 GHz APs vary transmission bandwidth only to serve STAs with those widths. In other words, the same AP can serve 20, 40 or 80 MHz B/W STAs. But the AP won't dynamically change its operating channel bandwidth.

It's all about having more effective bandwidth use. It is common practice in large Wi-Fi installs to limit 5 GHz channels to 20 or at most 40 MHz bandwidth. Although maximum bandwidth per channel is lower, effective system-wide total bandwidth can be higher, because devices can be spread among more channels in more APs.

The case for having separate backhaul radios is easily demonstrated by loading up the root node with traffic, then checking bandwidth on the next closest node. In a shared radio system, there is nothing the root node can do, except to limit STA bandwidth, to keep a large pipe open to other nodes. In a properly designed system with separate backhaul, bandwidth available to other nodes should not be affected.
 
It's all about having more effective bandwidth use. It is common practice in large Wi-Fi installs to limit 5 GHz channels to 20 or at most 40 MHz bandwidth. Although maximum bandwidth per channel is lower, effective system-wide total bandwidth can be higher, because devices can be spread among more channels in more APs.

Yes, that's true, but for a home network, especially if you live in a house, you can experiment with 80/80+80/160 MHz.
At 80MHz you can have up to six APs on non-overlapping channels, but only two at 160MHz, and only if you can use DFS channels.
So it depends on how big an area or house you have, how many clients you want to connect to the network, and where you live.
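
For reference, these are the 5GHz channel blocks behind those counts (assuming US/FCC allocation with DFS; what's actually usable varies by region and by what a given AP exposes):

```python
# 5 GHz channel blocks behind the "6 at 80 MHz, 2 at 160 MHz" count
# (US/FCC allocation, DFS included; regional availability varies).

BLOCKS_80MHZ = {
    "36-48":   [36, 40, 44, 48],          # non-DFS
    "52-64":   [52, 56, 60, 64],          # DFS
    "100-112": [100, 104, 108, 112],      # DFS
    "116-128": [116, 120, 124, 128],      # DFS
    "132-144": [132, 136, 140, 144],      # DFS
    "149-161": [149, 153, 157, 161],      # non-DFS
}
BLOCKS_160MHZ = {
    "36-64":   [36, 40, 44, 48, 52, 56, 60, 64],          # upper half needs DFS
    "100-128": [100, 104, 108, 112, 116, 120, 124, 128],  # DFS only
}

print(f"Non-overlapping 80 MHz blocks:  {len(BLOCKS_80MHZ)}")   # 6
print(f"Non-overlapping 160 MHz blocks: {len(BLOCKS_160MHZ)}")  # 2
```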
 
Installed iPerf on a second hardwired computer so I could run two full-speed tests from different wireless clients. Here's what I found...

1) With only one client running the test, I naturally got the expected full 400-500Mbps.

2) With both clients testing on the same node and same channel, the speed of each was pretty much exactly cut in half... 200-250Mbps.

3) With both clients testing on different nodes using the same channel, speed was significantly less than half... 100-150Mbps (though, again, last night's similar test of a full-speed iPerf test running on one device with a lower-speed internet test running on the other did not have a huge impact on either).

4) Manually changing the 5GHz channel width to 40MHz gave speeds of 200-250Mbps.
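
For anyone wanting to reproduce this: the setup is just an iPerf server on each wired machine and one client per wireless device, started at the same time. A rough scripted equivalent (placeholder addresses, and assuming iperf3 is what's installed everywhere) looks something like this:

```python
# Rough scripted equivalent of the simultaneous tests -- placeholder addresses,
# assuming iperf3 is installed and "iperf3 -s" is already running on the wired
# machine you point it at. Run one copy on each wireless client, each aimed at
# a different wired server, and start them at the same time.
import subprocess
import sys

def run_full_speed_test(server: str, seconds: int = 30) -> None:
    """Fire a full-speed iperf3 client test at the given server and print its summary."""
    subprocess.run(["iperf3", "-c", server, "-t", str(seconds)], check=True)

if __name__ == "__main__":
    # e.g.  python iperf_test.py 192.168.1.10   (placeholder server address)
    run_full_speed_test(sys.argv[1])
```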


Now, one interesting note (in this test, Node A is a hardwired remote, while Node B is a wireless remote). I noticed that when the test was running on Node A (on the same 5GHz channel as Node B), I could not get my other device to connect to Node B... it insisted on either connecting to the other (free) channel of Node A or that same (free) channel on the main node even though both were further away than Node B. As soon as the test concluded on Node A, the second device readily connected to Node B. I don't know if the device itself was able to determine that there was a lot of traffic/noise (from that same channel being heavily used by a different node) so it opted for the further-away but clear channel, or if the Velop system was directing it to do so. Anyway, once I was able to get the two idle devices on the same channel/different nodes to run the test, they did remain on these channels, so whatever load balancing was taking place apparently only occurs when roaming, cycling WiFi off/on, etc., not after the connection has been established.

Anyway, I suppose if your network situation frequently includes multiple wireless clients simultaneously doing large file transfers on the LAN (or perhaps over gigabit internet), using smaller channel widths (where each client gets lower throughput, but more than it would get if multiple nodes were trying to use the same channel) might yield better overall performance. But I suspect for most households, leaving it at the default is the best bet, especially considering that a change to narrower channels would probably also adversely affect the backhaul.

Indeed, when I asked Linksys about manually setting my 2.4GHz channels a week or so ago, the tech said that was fine but strongly suggested that users not fiddle with the 5GHz channels (which I hadn't planned on doing anyway... "you're meddling with powers you cannot begin to comprehend"). So, once I concluded that test, all my 5GHz settings went back to auto.

And again, I'm thoroughly pleased with my wireless network's performance at these default settings. At any given time, scattered throughout our house, there will be various combinations of the living room TV and three AppleTVs streaming movies from Netflix or Plex, my wife watching videos on her iMac, and kids playing games and watching videos on iPads and iPhones, and I still get excellent speed. It's only when I deliberately try to overwork the network with what, for my household, is an unrealistic scenario that speed suffers noticeably.
 
Linksys told me Velop does AP steering (load balancing among APs), which could account for the inability of your STA to connect to a busy node.
 
Linksys told me Velop does AP steering (load balancing among APs), which could account for the inability of your STA to connect to a busy node.

Makes sense: although you might be in 5GHz range, if the AP is busy you might end up connecting via 2.4GHz to a more distant node.

It really is a nightmare trying to figure out what is happening when. If only they'd show you which band and AP the device is currently connected to in the admin app (ironically, it is shown in the admin guide screenshots, but has yet to appear in the product itself).


Sent from my iPhone using Tapatalk
 
Linksys told me Velop does AP steering (load balancing among APs), which could account for the inability of your STA to connect to a busy node.
Actually, in this instance it wasn't even the busy node I was trying to connect to... it was another node that was on the same channel as the busy node. No complaints about that, just mildly curious as to whether it was the client that made that call, or the Linksys.
 
Actually, in this instance it wasn't even the busy node I was trying to connect to... it was another node that was on the same channel as the busy node. No complaints about that, just mildly curious as to whether it was the client that made that call, or the Linksys.
It doesn't matter which node you were trying to connect to. The busy channel connects the two.
 
Question about the Velop. The list of features on the second page of the review has UPnP enable/disable as not available. Is it enabled or disabled by default?
 
Question about the Velop. The list of features on the second page of the review has UPnP enable/disable as not available. Is it enabled or disabled by default?

It's enabled by default.
If you go into the back-end web admin page, you can disable it in there (but not through the regular admin app)


Sent from my iPhone using Tapatalk
 
You know how sometimes you want to ask a question, but don't want to look like an idiot? Hopefully Tim and y'all are feeling benevolent ;-)

In my mind, I'm pretty sure that 802.11ac is a 5GHz-only technology. With that being the case, how come the Velop advertises 802.11ac physical capabilities, and clients appear to connect as 802.11ac, even at 2.4GHz (see attached)? Unless this is of course just OS X Wi-Fi info at its worst ;-) inSSIDer also reports the 2.4GHz radio as being 802.11ac capable (?)

[attached: screenshot of Wi-Fi connection details]

Also, I just did some testing with wired vs wireless backhaul connections for the nodes. Ethernet-connected works as expected, but I can't get any iPerf testing (to a wired client) via the wireless node to get over 150Mbps, regardless of how good the connection is. I'm guessing this means that the backhaul is only using a single channel (rather than bonded), which might explain some of the performance issues.
 
In my mind, I'm pretty sure that 802.11ac is a 5GHz-only technology. With that being the case, how come the Velop advertises 802.11ac physical capabilities, and clients appear to connect as 802.11ac, even at 2.4GHz (see attached)? Unless this is of course just OS X Wi-Fi info at its worst ;-) inSSIDer also reports the 2.4GHz radio as being 802.11ac capable (?)

This goes into Spec vs. non-Spec implementation... as you mentioned above, 11ac is a 5GHz only tech...

802.11n has very specific guidance - and some of the vendors have gone beyond this by implementing 802.11ac in the 2.4GHz band - easy enough as the WiFi chipsets are basically the same, so chipset guys are in on this...

For the most part, it shouldn't cause problems with most modern 2.4GHz chipsets, but some can and will have problems with things they are not programmed to understand (the VHT items in the beacon).

There are a few interop issues - e.g. Realtek chips might have issues with Marvell, and QCA might have issues with Broadcom there - so worst case, set the router to B/G/N mixed, disable Turbo/Nitro, and most clients will be ok - this guidance here is general, so it's up to the vendor on how these things are set in their settings UI...
 
