
How We Test Wireless Products - Revision 7


In congested environments, a too-sensitive receiver can be a disadvantage, because it hears more signals than your neighbor's insensitive receiver... with the consequence that the sensitive receiver bravely waits for the channel to become free, while the insensitive receiver doesn't recognize that the channel is occupied and thus has more opportunities to transmit = better throughput (but less coverage in non-congested environments).
Excellent point... 802.11 is CSMA/CA (carrier sense multiple access with collision avoidance). This is fundamental to good sharing of the unlicensed spectrum. Excessive transmitter power or excessive receiver sensitivity causes clear channel assessment (CCA) faults and delays before transmitting, leading to lower net throughput at the network layer despite an apparently high bit rate on the WiFi link. NOTE that the use of high gain antennas (like 6dBi or more) in urban/suburban areas can exacerbate all of this.
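Here's a toy illustration of the CCA point (a hedged sketch - the thresholds and the simulated traffic levels are made up for illustration, not taken from any real radio): a lower (more sensitive) energy-detect threshold directly translates into fewer transmit opportunities in a congested band.

```python
import random

def transmit_opportunities(cca_threshold_dbm, ambient_dbm_samples):
    """Count time slots where the channel is judged idle
    (ambient energy below the CCA threshold)."""
    return sum(1 for s in ambient_dbm_samples if s < cca_threshold_dbm)

random.seed(0)
# Simulated ambient energy in a congested band: neighbor traffic around -75 dBm
samples = [random.gauss(-75, 5) for _ in range(10_000)]

sensitive = transmit_opportunities(-82, samples)    # "hears" distant traffic, defers
insensitive = transmit_opportunities(-62, samples)  # deaf to weak signals, transmits

print(insensitive > sensitive)  # the insensitive radio defers far less often
```

Same spectrum, same traffic; only the assumed detection threshold differs.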

Also, a downside to low-cost transmitting amplifiers is that most don't have the high linearity required for the higher rates of OFDM - leading to distorted transmitted signals - and that causes reduced network layer throughput.
 
The issue with congestion is that it does not seem like many router makers test for it properly.

Here is one of those OMGWTF moments

WNR3500Lv2 on channel 6:
[screenshot: throughput test on channel 6]


same router on channel 1:

(The difference is that there are about 10 more access points on it, but they are pretty far away (around -80 dBm).)


This is all at a distance of about 13 feet (both using a single-stream transfer)
[screenshot: throughput test on channel 1]


(On Tomato, the speeds are even lower and the test fails repeatedly due to it not responding in a timely manner, making the wireless completely useless on other channels.)

None of my other routers have this issue; all others can use any WiFi channel with no problem (with only small differences in throughput)
 
It would be interesting to know how different routers calculate the automatic channel suggestions. I'm guessing they calculate them from relatively instantaneous measurements.

It would be really neat if clients could report back to an AP periodically and say "ok, this is the kind of spurious traffic I see on these channels at these signal strengths from where I am physically on the WLAN, add this report into your subjective recommendation of a channel we should use."
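A sketch of how such client reports could be folded into a channel recommendation (purely hypothetical - `recommend_channel` and the report format are my invention, not any real AP API): sum each client's observed interference per channel in linear power, so strong interferers dominate, then pick the quietest channel.

```python
from collections import defaultdict

def recommend_channel(reports, channels=(1, 6, 11)):
    """Pick the channel with the lowest aggregate interference across
    all client reports. Each report maps channel -> observed RSSI (dBm)
    of spurious traffic; stronger (less negative) RSSI = worse."""
    score = defaultdict(float)
    for client_report in reports:
        for ch, rssi_dbm in client_report.items():
            # convert dBm to linear mW so strong interferers dominate the sum
            score[ch] += 10 ** (rssi_dbm / 10)
    return min(channels, key=lambda ch: score[ch])

reports = [
    {1: -80, 6: -55, 11: -90},   # client A: strong interferer on ch 6
    {1: -85, 6: -60, 11: -70},   # client B
]
print(recommend_channel(reports))  # -> 1
```

A real implementation would also have to weight reports by client position and age them out over time, which is where the "periodically" part matters.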
 
@ Jemz0r
I fully agree with your statements, except that 3 vertical antennas are the "best" setup. ;)
There are WLAN products on the market which have horizontally polarized antennas only. Many routers designed to lie flat rather than stand up often use horizontally polarized antennas, because they don't have the room for vertical ones.
The point is: in a "real" reflective environment there are enough obstacles which partly flip the polarization, such that horizontal vs. vertical doesn't play such a big role. Also because the clients anyhow have about a 50:50 distribution between horizontally and vertically polarized antennas.

But in anechoic measurements at "line of sight", the polarization plays a significant role.

"Cross" polarized MIMO (used by Ubiquiti or Mikrotik) is the only way to do MIMO in directional links, because it's the only way to achieve the necessary de-correlation between the signals.
B

I think maybe you misunderstood me. ;)

In the current Smallnetbuilder test environment, the three receiving antennas inside the shielding box are vertically polarized.

So what I mean is that a router which also has three vertically polarized antennas, "matching" the test antennas, should in theory produce better performance in the current Smallnetbuilder test scenario.
Well, that's what I think... :D
 
Good discussion!

From my experience, we are attempting to deal with what we used to designate as two separate testing phases in a combined fashion. We (myself and those I used to work with "back in the day") used to test for path loss (RF attenuation) in a "perfect channel" with no reflections and/or phase changes (easy, in a basic direct connected conductive test scenario) and also test using a "channel simulator" wherein various forms of fading environments could be simulated (Rayleigh, Rician, etc.). Both were direct coupled. Granted, this was to simulate a mobile environment but there may be some correlation in terms of how the short WLAN 2.4GHz and 5GHz wavelengths reflect and refract even in a relatively static environment.

In this case, given the overall complexity and cost of including a full blown channel simulator, I believe the desire is to reduce the test approach to a primarily "perfect channel" RF attenuation only data set (in other words, we are primarily testing strictly for "coverage area" or "distance", given all other factors are set to nominal - such as interference level and dynamic fading constructive and destructive elements). Theoretically quite reasonable, so long as you note the limitations. The problem is that the "luxury" of direct coupling is no longer there, both because of the design of the DUTs in question and because of the good point raised by bonsai earlier concerning testing the complete DUT system, including self-generated in-band noise issues. Without direct coupling, some "imperfect channel elements" creep in. This includes effects caused by antenna polarization mismatches.

I see only two ways, besides the 45 degree test set antennas compromise approach (or maybe using circularly polarized antennas [directional, though - pattern etc. would have to be accounted for, as far as I know]), to deal with cross polarization problems in a free space air coupling DUT-to-test set enclosed RF link:

1) "Pre-randomize" the two-way RF signals in polarization without significantly affecting the signal levels. I don't know how one would do this in the real world but I can visualize a rough form of "Dyson Sphere" arrangement around the DUT with some means of two way transfer of "aggregate RF energy" to and from the test set.

2) Include some form of test set mechanical attachment such that the DUT can be rotated around a set of coordinates approximating a full 360 degrees in all axes (not just along one axis as is currently done - more like a spherical sense); ideally, test software would automate the process such that the DUT would be mechanically rotated until "best results are obtained" at the test set after which the DUT's "location in space" relative to the test set antennas is fixed (presumably in the "optimum position") and the test, as we know it now, can be run. Of course, leaving the DUT fixed and mechanically rotating the antennas should work just as well. Which is easiest depends on the test set construction (for example, rotating the DUT may seem more complex but entails primarily taking account of the mechanical and electrical integrity of data and power lines which are more "forgiving" than accounting for the more complex integrity of RF coaxial lines and connectors attached to the test set antennas.).

Obviously #2 seems the more practical and also obviously, given the current test setup and limitations, the 45 degree antenna approach is probably the best compromise for now.

-Mike
 
Mike,

Note the current test configuration includes octoScope's MPE, which implements IEEE 802.11n/ac Model B.

Yep! I sit corrected! I did read that description some time ago and somehow that slipped my mind. Anyway, I need to study it more.

So the only real test set issue for debate remains the free space coupling antenna (potential) cross polarization mismatch losses. The multipath emulator is further down the line so the free space coupling losses (including those from mismatched antenna polarization) cannot be compensated for by its actions.

In the short term and for simplicity/practicality, I will throw my hat in the 45 degree test set antenna tilt ring. I know you are loaded up enough with stuff to do, but maybe you could run a few comparisons by re-running a small subset of previous runs of various devices using the 45 degree tilted test set antennas. You did so with the ASUS, so maybe do so with a "disputed unit" like the discussed Linksys unit? Given the current hypothesis being examined, we should see the latter benefit slightly, in roughly the same amount as the ASUS unit degraded.

In thinking about my "mechanical rotation" suggestion of my last post - a simplified method which could at least be tried for some spot checks concerning the validity of the cross polarization mismatch issues being discussed might be to add one additional axis of rotation to the current one you already do - that is, simply tilt the DUT 90 degrees and then rotate as you currently do and pick the best mode. Some kind of simple plastic or mostly plastic portable grip meant for work bench use or some such might suffice to hold the DUT in this fashion. So, in other words, you would do as you normally do and rotate the DUT along one axis in its normal standing/sitting position BUT you would also add another rotation wherein the unit is tilted 90 degrees (and, presumably, gripped in some reasonably non-RF reflective mechanical way). I think this should cover the most likely cross polarization loss issues.

-Mike
 
wondering how you decide what fading models to use in the channel emulator? When Stanford Univ. and I did this for IEEE 802.16e, we spent big $$ (one of the major cellular carriers' money) to devise a few channel models based on expensive field data collection with a channel sounder, to get the multipath delay statistics for various subscriber side antenna heights and spacings (2.5GHz, the old MMDS band). The results were accepted by IEEE, but were controversial - because of the many ways the empirical data can be collected. This was all outdoor stuff, for eaves-mounted and chimney-mounted subscriber devices/antennas of that era.

Now, LTE does a fine job of building penetration without need for outdoor antennas in most cases - thanks to the use of 700MHz.
 
wondering how you decide what fading models to use in the channel emulator?
Only the B model is implemented. It's commonly used since it models a typical home / small office environment.
 
Note the current test configuration includes octoScope's MPE, which implements IEEE 802.11n/ac Model B.


The problem is that a propagation model is usually meant to simulate the antennas plus the propagation medium, which means the interface of the propagation model is usually a (physical) RF connection (= conducted input/output).

But by picking up the DUT RF signals "wirelessly" via probe antennas, effectively a second model is put in series with the Octobox propagation model, which increases the (unhealthy) correlation between the MIMO paths.

Meaning: DUT antenna 1 will couple to all 3 probe antennas, DUT antenna 2 will couple to all 3 probe antennas, and so on... while in a conducted setup this additional "mix-up" of the signals doesn't happen.

The air loss inside the test box cannot be seen only as a simple attenuator, but as a combination of attenuators and signal combiners.
The signal combining property will add additional correlation to the spatial streams which, depending on how strong the combining effect is, will hurt the MIMO decoding more or less, such that the link will switch earlier from 3-stream to 2-stream and finally to single-stream transmission.

(In my simplified audio metaphor: if the left and right channels of a stereo signal are connected via a resistor (in the extreme case 0 Ohms), then the stereo effect is reduced; in the worst case both channels carry the same "mono" signal and less information can be transferred accordingly.)

As I wrote in my first post: I experimented quite a bit with a similar setup, and by adjusting the DUT position in the box it's easy to create cases where full throughput isn't possible at all, because of the additional mix-up and attenuation differences in the paths (too much correlation). Then you move the probe antennas by just a few degrees or a few inches in position and suddenly ("digitally") the throughput is back to full rate, but it can still be close to the "cliff" where little changes like increasing the attenuation immediately result in switching back to a lower number of spatial streams.

Therefore it's important to make the correlation (= signal combining inside the box) as small as possible, i.e. give the probe antennas as much spacing as possible.
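The correlation argument can be made concrete with the standard MIMO capacity formula (a sketch with idealized channel matrices, not measured data): when every DUT antenna couples equally to every probe antenna, the effective channel matrix collapses toward rank 1 and capacity falls toward the single-stream ("mono") case.

```python
import numpy as np

def mimo_capacity_bps_hz(H, snr_linear):
    """Shannon capacity of a MIMO channel with equal power per TX antenna:
    log2 det(I + (SNR/Ntx) * H H^H)."""
    n_tx = H.shape[1]
    HH = H @ H.conj().T
    return float(np.log2(np.linalg.det(np.eye(H.shape[0]) + (snr_linear / n_tx) * HH)).real)

snr = 10 ** (20 / 10)  # 20 dB SNR

H_ideal = np.eye(3)                        # fully de-correlated paths
H_mixed = np.full((3, 3), 1 / np.sqrt(3))  # every DUT antenna couples equally to
                                           # every probe antenna: rank-1 "mono" case

print(mimo_capacity_bps_hz(H_ideal, snr))  # ~3 independent streams' worth
print(mimo_capacity_bps_hz(H_mixed, snr))  # collapses toward a single stream
```

The real coupling inside a small box lies somewhere between these two extremes, which is exactly why probe spacing matters.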

http://www.microwavejournal.com/articles/10835-mimo-ota-device-testing-with-anechoic-chambers

Each chamber is equipped with a number (usually 4, 8 or 16) of cross-polarized antenna pairs, all of which are fed signals via the channel emulator. Figure 2 illustrates the distribution of power across 8 probes of 6 multipath delays. Each probe has a vertical (top) and horizontal (bottom) element.

http://www.hindawi.com/journals/ijap/2012/615954.fig.001.jpg

As those systems are very costly, I have also been experimenting with another method, where the DUT is placed in the anechoic box and (tiny) position-variable probe antennas are located as close as possible to the individual DUT antennas (fixed with adhesive tape on the housing).
This allows the lowest possible correlation because of operating in the near field of the DUT antennas.

By observing the RF power on each probe antenna, it's possible to estimate the coupling loss as an absolute value and also to make it equal for all streams by fine-tuning the probe position.
If you have no power meter, a simple diode rectifier head connected to an oscilloscope will at least allow relative measurements within a 15 dB range.

This method is close to conducted testing, but has the advantage that it's not required to open the DUT, and also any platform noise leaking into the DUT antennas is still fully taken into account.

Of course it's extra work to do this "probe calibration" once for each product, but on the other hand you save all the rotations and repetitions, and the level accuracy will be much better than with the current method.

What this method does not cover are the differences in antenna propagation properties.
However in such small chambers it's not possible at all to judge MIMO propagation properties, so in my eyes it's not a big disadvantage.

bonsai
 
...
What this method does not cover are the differences in antenna propagation properties.
However in such small chambers it's not possible at all to judge MIMO propagation properties, so in my eyes it's not a big disadvantage.

bonsai

If the channel simulator does good fading, and variable delay too, then "IF" you have a valid set of statistics for the expected time-variant fades/delays, then you can begin to assess the benefit of some forms of MIMO. The problem is doing this such that it is at all relevant to the real world of residences vs. block wall buildings vs. concrete-on-steel pan floors, and so on. It quickly moves into black magic and randomness.

As said earlier, this is where extensive ($$$$) empirical measurements are needed. Lots of that has been done for the ITU for cellular systems (except for 700MHz/LTE), but very little for indoor systems. The only ones I've ever seen are for 802.16e and the ITU model with 50ns RMS delay spread. But these are too sparse to be highly useful.
 
If the channel simulator does good fading, and variable delay too, then "IF" you have a valid set of statistics for the expected time variant fades/delays,...

As said earlier, this is where extensive ($$$$) empirical measurements are needed. Lots of that has been done for the ITU for cellular systems (except for 700MHz/LTE), but very little for indoor systems. The only ones I've ever seen are for 802.16e and the ITU model with 50ns RMS delay spread. But these are too sparse to be highly useful.

You are talking about mobile standards where time variance is of high relevance and any dB of link budget or throughput improvement directly transfers into big money savings for the operators.

These sorts of simulations are usually done during development of the standard, and perhaps also by chipset suppliers, but usually not by product developers who use off-the-shelf chipsets.

I think in WLAN operated in houses, the fading situation is rather "static" and the situations less complex. The short cyclic prefix in WLAN limits the allowed delay spread to a few hundred ns, and the few pilot tones won't allow for mobile applications anyhow.

=> If the WLAN antenna correlation (crosstalk) is sufficiently low and the antenna efficiency sufficiently high, then the differences between antenna concepts will be hard to notice for usual customers in usual homes. Unless you want to cover a garden or have a paper house - then (omnidirectional) antenna gain from rubber-stick antennas can give a benefit in coverage, but only if the max EIRP TX power defined by the regulatory bodies isn't already reached. Physical TX power (TRP) is always preferable over antenna gain. In fact, antenna gain has little to no benefit inside usual brick-wall homes.
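To put numbers on the EIRP point (a sketch - the 20 dBm cap is the ETSI 2.4GHz example of 100 mW EIRP; other regions and bands have different limits): at the regulatory limit, every dB of antenna gain must be paid for with a dB of conducted TX power.

```python
EIRP_LIMIT_DBM = 20.0  # e.g. the ETSI 2.4 GHz limit of 100 mW EIRP

def allowed_tx_power_dbm(antenna_gain_dbi, limit_dbm=EIRP_LIMIT_DBM):
    """Conducted TX power you may legally feed into the antenna,
    since EIRP (dBm) = TX power (dBm) + antenna gain (dBi)."""
    return limit_dbm - antenna_gain_dbi

# Swapping a 2 dBi stock antenna for a 6 dBi "high gain" one buys nothing
# at the regulatory limit: the radio must back off by the same 4 dB.
for gain in (2, 6, 9):
    print(gain, allowed_tx_power_dbm(gain))  # EIRP stays pinned at 20 dBm
```

The gain still reshapes the pattern (flatter donut, better reach on the horizon), which is the garden/paper-house case above; indoors, through walls and reflections, that shaping mostly washes out.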


bonsai
 
I agree - mostly. The challenge with emulating indoor multipath is that the delay paths are short but are a large percentage of the duration of the modulated symbol periods. That causes either constructive or destructive multipath. Emulating this is nearly impossible.
 
I agree - mostly. The challenge with emulating indoor multipath is that the delay paths are short but are a large percentage of the duration of the modulated symbol periods. That causes either constructive or destructive multipath. Emulating this is nearly impossible.

This is a somewhat interesting paper I came across that did tests (yes, there are lots of such papers) on 2.4GHz & 5GHz attenuation and reflection (those #s were especially interesting) in common building environments.

http://www.ko4bb.com/Manuals/05)_GPS_Timing/E10589_Propagation_Losses_2_and_5GHz.pdf

The data is interesting but still more indicative of lab test conditions than the real world (your WiFi devices will be various distances from various types of building materials, and they used just a single-pane window to measure "glass" as a building material - obviously most homes have multiple panes, some with inert gases between the panes, etc.). It is also too narrow in scope to get down to things like actual link & transfer speed on multi-stream/multi-beam antenna configurations that may or may not use signal reflectivity to their advantage, etc.

Nevertheless, interesting paper.
 
I agree - mostly. The challenge with emulating indoor multipath is that the delay paths are short but are a large percentage of the duration of the modulated symbol periods. That causes either constructive or destructive multipath. Emulating this is nearly impossible.

These problems are the reason why OFDM modulation became so successful and why WLAN was defined with so many modulation schemes.
The data is split into 48 data streams (per 20MHz) and each stream is modulated onto its own (sub)carrier frequency. Because the subcarrier bandwidth is much lower than the coherence bandwidth, each subcarrier has to deal only with flat fading. The cyclic prefix is long enough to compensate for echoes when the symbol changes.
As long as the signal to noise ratio is still good enough and the fading notch is not deeper than about 10 to 15dB, the subcarrier can still be decoded. Even if some subcarriers are lost, WLAN can compensate for quite a large number of lost subcarriers by forward error correction (FEC = 5/6, 3/4, 2/3, 1/2), and if too many subcarriers suffer from poor SNR, the modulation can be changed stepwise to less SNR-demanding modulations (64QAM, 16QAM, QPSK, BPSK).
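Those modulation and FEC steps map directly onto the familiar 802.11a/g rate ladder; here's a quick sketch of the arithmetic (48 data subcarriers, 4 µs OFDM symbol including the 0.8 µs cyclic prefix):

```python
def ofdm_phy_rate_mbps(bits_per_subcarrier, coding_rate,
                       data_subcarriers=48, symbol_time_us=4.0):
    """802.11a/g-style PHY rate: 48 data subcarriers, 4 us OFDM symbol
    (3.2 us useful part + 0.8 us cyclic prefix)."""
    return data_subcarriers * bits_per_subcarrier * coding_rate / symbol_time_us

# Stepping the modulation/FEC down as SNR degrades:
print(ofdm_phy_rate_mbps(6, 3/4))  # 64QAM, r=3/4 -> 54.0 Mbps
print(ofdm_phy_rate_mbps(4, 1/2))  # 16QAM, r=1/2 -> 24.0 Mbps
print(ofdm_phy_rate_mbps(1, 1/2))  # BPSK,  r=1/2 ->  6.0 Mbps
```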
 
For indoor path lengths, rather than satellite/ionospheric paths, I've never felt that OFDM is any REAL benefit over a single carrier receiver with a dynamic equalizer. The premise of OFDM is that fading in time is frequency dependent. My work for a year on a $M R&D project concluded that indoors, the fading isn't frequency dependent within merely 20MHz, on short paths. Long wireless paths, out to 800+ mile orbits - that's a different story.

Cable modems/DOCSIS use dynamic equalizers rather than OFDM because in such wired systems, the fades/multipath are much like indoor short-path wireless.
 
I think we are getting off-topic :rolleyes:

For indoor path lengths, rather than satellite/ionospheric paths, I've never felt that OFDM is any REAL benefit over a single carrier receiver with a dynamic equalizer.

Some years ago I read a paper saying that one of the reasons for preferring OFDM at high data rates is that OFDM is significantly more economical from a receiver implementation point of view (energy consumption and chip size). In single carrier systems, the symbol time gets shorter with increasing bandwidth, while the delay spread of the propagation channel stays constant. So not only does everything have to operate at a higher clock speed, but additionally the equalizer has to cover more symbols, and more equalizer calculations are necessary per output bit. In OFDM, even when increasing the bandwidth, the symbol clock stays constant and is only a very small fraction of that of SC systems.
In short: the complexity increase for the wider FFT is less than the complexity increase for the equalizer in a single carrier system.
In OFDM the equalizer is very simple and operated at a low clock frequency (of course, many in parallel); in SC systems the equalizer is more complex and has to operate at a much higher clock frequency.
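A back-of-the-envelope sketch of that scaling argument (the delay spread, tap count model and FFT cost are rough assumptions, not a real receiver design): the single-carrier equalizer cost grows roughly with bandwidth squared, while the OFDM cost grows only as N log N, so the SC/OFDM cost ratio climbs as the channel gets wider.

```python
import math

DELAY_SPREAD_S = 200e-9  # assumed indoor delay spread

def sc_equalizer_ops_per_sec(bandwidth_hz):
    """Single carrier: taps ~ delay spread measured in symbols,
    applied once per symbol at the symbol rate."""
    taps = max(1, int(DELAY_SPREAD_S * bandwidth_hz))
    return taps * bandwidth_hz  # multiply-accumulates per second

def ofdm_ops_per_sec(bandwidth_hz, subcarrier_spacing_hz=312.5e3):
    """OFDM: one N log2 N FFT per symbol plus one complex tap per subcarrier,
    at the (constant) OFDM symbol rate."""
    n = int(bandwidth_hz / subcarrier_spacing_hz)
    symbol_rate = subcarrier_spacing_hz  # 1 / useful symbol time, CP ignored
    return (n * math.log2(n) + n) * symbol_rate

for bw in (20e6, 40e6, 80e6, 160e6):
    print(bw / 1e6, sc_equalizer_ops_per_sec(bw) / ofdm_ops_per_sec(bw))
```

At 20MHz the two are in the same ballpark, which fits the "no REAL benefit indoors at 20MHz" point above; the gap opens up at 80/160MHz channel widths.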

Cable modems/DOCSIS use dynamic equalizers rather than OFDM because the in such wired systems, the fades/multipath are much like indoor short path wireless.

DOCSIS uses impedance-matched coax cables as the medium, and the channel bandwidths are rather low (0.2MHz to 3.2/6.4MHz) compared to WLAN. The bandwidth and max throughput of even the latest DOCSIS 2.0 are still less than the very old 802.11a.
DOCSIS devices are not battery operated, so there is no need for extra energy-efficient processing.

ADSL/VDSL uses DMT, which is also a multi-carrier system similar to OFDM. Powerline modems also use OFDM. Digital terrestrial TV uses OFDM. LTE uses OFDM.
I know of no MIMO standard using a modulation other than OFDM.

OFDM has the potential for dynamically splitting and merging frequency spectrum (LTE, 802.11ac).
OFDMA (WiMAX, LTE) enables dynamic use of subchannels on selected subcarriers, which enables avoiding the fading holes and dedicating only the "sweet" portions of the channel to the mobile station.

... looks like OFDM is meanwhile dominating most (wideband) standards.

@PrivateJoker
the document contains really useful stuff. Thank you for sharing the link.

bonsai
 
While I agree with bonsai in that we are getting off topic, I must say that it has been quite interesting! The RF stuff is "juicy goodness" to me so the discussion is fascinating from my point of view.

However, having said that, back to the topic of how best to deal with the cross polarization losses in the free space coupling of the test set to the DUT...

Firstly, just to get it out of the way, my idea of tilting the DUT physically up by 90 degrees obviously has a major flaw - it is unrealistic to expect all DUT antennas to be lined up in exactly the same polarization relative to each other and across both bands. DUH! I realized this fairly quickly after posting but was busy with "life stuff" and couldn't get back to apologize - I'm sure everyone involved in this discussion saw that flaw right away, sorry for my idiocy. That "life stuff" is partly the reason my brain wasn't firing on all cylinders anyway, I guess.

So I really can't think of any solution better than what bonsai has already stated. The "probe antennas" intrigue me - can you (bonsai) describe them to me? I am assuming you are referring to passive RF probes consisting of a terminating resistive 50 ohm load and a short radiating/pickup "tip" or "loop" (physically very small relative to the operating wavelengths in question and non-resonant). You would be measuring voltage across the resistor and converting to power for "reception" and vice versa for "transmission". As the pickup element ("tip") of the probes is so small and non-resonant at the test frequencies, there would be no polarization concerns (and, as you said, you are operating in the near field where all polarizations - vertical, horizontal, circular, and elliptical - are, theoretically, present and numerous), but the probe feed should be effectively shielded to prevent coupling and issues such as loading down the DUT transmitters (due to reactive near-field antenna coupling). You would also need to know at least approximately where each antenna is located within the DUT and deal with DUT designs that use separate band-specific antennas (i.e. a three antenna design would be a six antenna design when mono-band antennas are used for a dual band radio; the antennas for each band in such a case may not be co-located). Anyway, I'd be interested in what kind of probe designs you are using and how you determine placement.

Otherwise, some form of directional dual polarized or, as you say, circularly polarized antennas may have to be used similar to how EMC test facilities operate (but more simplified and limited to the operating bands only). Also, I found this article in a quick search that may be of interest: http://mwrf.com/passive-components/slot-antenna-uses-dual-polarization.

-Mike
 
However, having said that, back to the topic of how best to deal with the cross polarization losses in the free space coupling of the test set to the DUT...
It's not only the cross polarization loss which hurts, but also the unequal distances between the shielded box antennas and the DUT antennas.
The current criterion is to place the front of the DUT 8 inches from the pickup antennas. However, if any antenna is further away than 8 inches, the path loss attenuation increases rapidly (6dB when doubling the distance).
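The 6 dB-per-doubling figure follows directly from the Friis free-space path loss formula; a quick check (pure geometry, and assuming far-field conditions, which is itself a stretch at only 8 inches from the antenna):

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Friis free-space path loss: 20*log10(4*pi*d*f / c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d1 = free_space_path_loss_db(0.2032, 2.44e9)  # 8 inches at 2.44 GHz
d2 = free_space_path_loss_db(0.4064, 2.44e9)  # 16 inches

print(round(d2 - d1, 2))  # doubling the distance costs ~6.02 dB
```

So an antenna at the back of a deep chassis can easily sit several dB below one at the front, on top of any polarization mismatch.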


The "probe antennas" intrigue me - can you (bonsai) describe them to me? I am assuming you are referring to passive RF probes consisting of a terminating resistive 50 ohm load and short radiating/pickup "tip" or "loop" (physically very small relative to the operating wavelengths in question and non-resonant). ...
...
.. but the probe feed should be effectively shielded to prevent coupling and issues such as loading down the DUT transmitters (due to reactive near-field antenna coupling).

I first experimented with off-the-shelf WLAN antennas having pigtail cables.
This is a simple approach, and already a big step forward compared to the undefined coupling losses when using fixed installed probe antennas in the shielded box.
But can further be optimized to remove some disadvantages:
When thinking of an antenna as a resonator (bandpass), bringing 2 WLAN antennas close together will cause (bandpass) resonance coupling effects which (if applied in the near field) shift the original resonances of the DUT antennas and can also add attenuation slopes over the band.
Another disadvantage is that resonant probe antennas do what they are made for: they also pick up a significant amount of energy from "the air" (= the other DUT antennas), and this increases the unwanted correlation.

In the end I used simple self-made RF current probes, also known as EMI loops. Simply take an open coax cable end, form a 15mm diameter loop and solder the center conductor to the cable shield.
http://m.eet.com/media/1161339/fig4_loop_probes.png
The advantage is, if the loops are small enough, they are wideband and have less "reactive" impact on the DUT antenna properties.
The "disadvantage" at first look is that the coupling loss is much higher than with resonant probe antennas, but at second look this is not really a problem, because a minimum attenuation of at least 10..20 dB is desirable anyhow to prevent overloading/damage of the WLAN receivers.

You would also need to know at least approximately where each antenna is located within the DUT and deal with DUT designs that use separate band specific antennas (i.e. a three antenna design would be a six antenna design when mono-band antennas are used for a dual band radio; the antennas for each band in such a case may not be co-located). Anyway, I'd be interested in what kind of probe designs you are using and how you determine placement.
Yes, this is a little disadvantage, but in most reviews I have already seen "inside" photos (e.g. self-made or copied from FCC approval reports) where the antenna positions can be seen.

Another disadvantage is that operating in the near field requires "tuning" and good fixation of the probe antenna position, because little movements can cause dBs of coupling loss variation.

Finally, the "tuning" requires "measuring" the coupling loss. This can be done by initiating a data transfer in 11a/g mode and measuring the probe power.
E.g. with a time domain power meter, or if not available, with a WLAN receiver equipped with an input attenuator (30..40dB) and calibrated RSSI readout (I sometimes used e.g. a Ubiquiti Bullet).
But a time domain power meter or an RF Schottky diode rectifier head (attached to an oscilloscope) has a faster response and allows finding the best position more quickly.

If you can live with reduced accuracy, then simply place resonant WLAN probe antennas at a "controlled" distance to the DUT antennas (keep around 2 inches distance to the DUT). This solution requires less precise mechanical "tuning". However, for some DUT antenna designs the polarization is not easy to judge, so a relative power measurement for finding the maximum while changing the probe antenna polarization is recommended.

In all cases, the coax cables of the probe antennas should be routed "orthogonally" to the field to minimize incident field pickup, and the probe cables should keep maximum distance from the other antennas to minimize cross coupling via RF currents induced in the coax shield.

bonsai
 
