Contention Window Algorithm

  • Question
  • Updated 3 years ago
Does the latest firmware perform automatic adjustment of the contention window size based upon demand for airtime?
Dawn Douglass

  • 67 Posts
  • 3 Reply Likes

Posted 3 years ago

Mike Kouri, Official Rep

  • 1030 Posts
  • 271 Reply Likes
Dawn,
Could you restate the question using different words? I think I am following you, and if so, I think our "Dynamic Airtime Scheduling" feature is what you are looking for (and it's been in HiveOS longer than I have).
Nick Lowe, Official Rep

  • 2491 Posts
  • 451 Reply Likes
Aerohive have a whitepaper on the feature Mike mentions:

http://www.aerohive.com/pdfs/Aerohive-Whitepaper-Dynamic_Airtime_Scheduling.pdf
Roberto Casula, Champ

  • 231 Posts
  • 111 Reply Likes
I think Dawn MAY be asking about adaptive algorithms for dynamically adjusting the EDCA contention window, which affects contended medium access at the MAC layer.

When backoff occurs (after the medium has been sensed busy, or after a transmission fails and a collision is inferred from the missing ACK), the backoff count is randomly chosen between 0 and the current contention window, which starts at the CWmin value for the traffic's access category and doubles on each retry up to CWmax. The value of AIFS is also relevant and provides a fixed minimum offset before the backoff countdown begins.

Ordinarily, the CWmin for each WMM queue is a static piece of configuration (in the radio profile). For time-sensitive applications, we generally want a lower CWmin to keep jitter and latency low and to prioritise medium access for these traffic classes, so the VI and VO ACs have lower CWmin (and AIFS) values than BK and BE.
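To make the backoff behaviour described above concrete, here is a minimal sketch in Python. The CWmin/CWmax/AIFSN values are the standard WMM defaults for an OFDM PHY (aCWmin = 15, aCWmax = 1023); an AP can of course advertise different values in its radio profile, so treat the table as illustrative, not as what any particular firmware ships with.

```python
import random

# Standard WMM EDCA defaults for an OFDM PHY (illustrative; an AP's
# radio profile can advertise different values in its beacons).
EDCA = {
    "BK": {"cwmin": 15, "cwmax": 1023, "aifsn": 7},
    "BE": {"cwmin": 15, "cwmax": 1023, "aifsn": 3},
    "VI": {"cwmin": 7,  "cwmax": 15,   "aifsn": 2},
    "VO": {"cwmin": 3,  "cwmax": 7,    "aifsn": 2},
}

SLOT_US = 9   # OFDM slot time in microseconds
SIFS_US = 16  # OFDM SIFS in microseconds

def backoff_delay_us(ac, retries=0):
    """Pick one random backoff delay for an access category.

    The contention window starts at CWmin and doubles (up to CWmax)
    on each retry; AIFS (= SIFS + AIFSN slots) adds a fixed per-AC
    offset before the randomly chosen backoff slots are counted down.
    """
    p = EDCA[ac]
    cw = min((p["cwmin"] + 1) * 2 ** retries - 1, p["cwmax"])
    slots = random.randint(0, cw)
    aifs_us = SIFS_US + p["aifsn"] * SLOT_US
    return aifs_us + slots * SLOT_US
```

You can see the prioritisation directly: a first-attempt VO frame waits at most 34 + 3×9 = 61 µs, while a BK frame can wait well over 200 µs even before any retries.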

However, a lower CWmin increases latency and jitter when traffic levels and station counts are high because there is a higher probability of multiple collisions. Conversely, a higher CWmin increases latency and jitter when traffic levels are low because stations will often be backing off for longer than is really necessary as the chances of multiple collisions occurring are much lower.

Because of this, there have been several proposals (mostly academic research papers I think) for algorithms which dynamically adjust the CWmin parameters of the various access classes to adapt to prevailing conditions, i.e. to reduce the CWmin value when traffic levels are low and increase it when traffic levels are high. As the EDCA parameters are transmitted to clients via beacon frames, the AP can in principle force clients to adapt their backoff timing based on this dynamic adaptation algorithm.
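The adaptation rules in those proposals vary, but many boil down to something like the following hypothetical sketch (this is my own illustration of the general idea, not any vendor's or paper's actual algorithm): shrink CWmin when collisions are rare and grow it when they are frequent, keeping CW values of the form 2^n − 1.

```python
def adapt_cwmin(cwmin, collision_rate, low=0.05, high=0.20,
                floor=3, ceil=255):
    """Hypothetical adaptive-CWmin rule for illustration only.

    Halve CWmin when collisions are rare (a quiet medium rewards
    shorter waits) and double it when collisions are frequent (a
    busy medium needs stations spread over more slots). The
    thresholds and bounds are arbitrary illustrative choices.
    """
    if collision_rate > high:
        cwmin = min((cwmin + 1) * 2 - 1, ceil)
    elif collision_rate < low:
        cwmin = max((cwmin + 1) // 2 - 1, floor)
    return cwmin
```

An AP running something like this would re-advertise the adjusted EDCA parameters in its beacons, which is how the clients' backoff behaviour would be steered.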

How effective these algorithms are is highly dependent on many factors. In some situations, dynamic algorithms will result in worse behaviour. There are also complications if there are non-WMM-enabled stations in the environment.

As far as I am aware, and I could be completely wrong here as the detailed operation of DAS is protected Aerohive IP, DAS doesn't operate at this level of dynamically adjusting MAC parameters, but rather at the level of packet scheduling and per-user queuing. DAS operates primarily by servicing per-user traffic queues at a rate that is proportional to the real-time transmit data rate to each client. Clients that can receive at a higher data rate have their queues serviced more frequently than clients that can receive at a lower data rate. This primarily controls AP -> client transmission, though client -> AP data rates can also be coarsely controlled, for TCP traffic at least, because DAS will delay the transmission of TCP ACKs, which triggers congestion-window adaptations in the client's TCP stack.
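The general idea of rate-proportional (airtime-fair) scheduling can be sketched like this. To be clear, this is a toy model of the technique as I understand it, not Aerohive's actual DAS implementation, which is proprietary: each client gets an equal slice of airtime per round, so a faster client drains more frames in the same airtime than a slower one.

```python
from collections import deque

def schedule_round(queues, rates_mbps, frame_bytes=1500, budget_us=10000):
    """Toy airtime-fair scheduler (not Aerohive's actual DAS).

    Every client receives an equal airtime share per round; because
    a frame costs less airtime at a higher PHY rate, fast clients
    get more frames through without slow clients starving them.
    """
    sent = {client: 0 for client in queues}
    per_client_us = budget_us / len(queues)
    for client, q in queues.items():
        airtime_left = per_client_us
        tx_us = frame_bytes * 8 / rates_mbps[client]  # airtime per frame
        while q and airtime_left >= tx_us:
            q.popleft()
            sent[client] += 1
            airtime_left -= tx_us
    return sent
```

With one client at 300 Mbps and one at 6 Mbps, the fast client moves roughly 60x more frames per round, yet the slow client still gets its guaranteed half of the airtime, which is exactly the "low-rate client can no longer hog the medium" effect.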

Mike, please correct me if I'm wrong on the above! Dawn - is this what you are asking about? If so, what's the background behind your question? I can certainly say from extensive testing in real customer environments that DAS dramatically improves overall user experience and performance in the real world and works well in environments that mix voice and video and high-bandwidth applications on the same radio.
Mike Kouri, Official Rep

  • 1030 Posts
  • 271 Reply Likes
Roberto,
You constantly impress me. Thank you for the education. 

You are right, I apparently misunderstood Dawn's question, and I think you answered it - We don't do that. You are correct, DAS operates at the packet scheduling and per-user based queuing level which will very coarsely accomplish similar things, but it's not exactly what she was looking for.

You've got more direct experience using DAS than I do, so I'll take this opportunity to pick your brains, if I may.

I've heard from our field folks that with the gradual replacement of old 11b/g clients, and the increase in 11n clients and clients that preferentially try to associate to the 5GHz band before the 2.4GHz band, DAS has become less effective than in the past. Do you agree?

However, I believe that with the explosion of 2.4GHz internet-of-things devices, DAS may become more effective again in the near future. Do you agree?
Roberto Casula, Champ

  • 231 Posts
  • 111 Reply Likes
Hi Mike,

To an extent I'd agree, but it very much depends on the client mix, the deployment density, etc. I've certainly got customers where enabling/disabling DAS has little or no apparent impact on real user experience (and never did), but others where it still does (though maybe less so than when we started with this all those years ago). So it's less relevant/important maybe, but certainly not entirely useless.

It is still very common to have clients at the extremes of the coverage area using low MCS rates and hanging on to the 2.4GHz radio, hogging significant amounts of airtime, for example; and there are still plenty of devices which are 802.11n but 2.4GHz-only, or that don't (at least by default) prefer the 5GHz band.

Where DAS is still useful I think is when there are different classes of user (e.g. VIPs, normal staff and guests) as we can use the user profile SLA to at least give a bit of a premier service to the more important users at the expense of the less important ones.

The other thing that has changed in the last couple of years is the amount of station -> AP traffic and DAS is a lot less useful here as I say. A few years ago, the vast bulk of traffic was going down to the clients, but cloud sync now means there can be a lot of traffic in the other direction. You can take one photo on your phone these days and suddenly seven different applications automatically try to simultaneously background sync it to cloud storage (DropBox, Facebook, Google+, iCloud etc.).

In general, the primary issue in terms of predictability of performance is still badly-behaved clients and buggy drivers. Quite often, the more wireless vendors try to do "clever" things to overcome the fundamental problems of a shared and highly contended medium, and the more people like me tinker with advanced settings to try to optimise things, the more likely it is that a couple of dodgy clients can trigger significant problems for the network. And customers find this difficult to get their heads around because they believe that wireless networks should behave as consistently and deterministically as wired networks...largely because the industry is setting those expectations.
Dawn Douglass

  • 67 Posts
  • 3 Reply Likes
Roberto,

Thanks for the great answer to what I now realize was a vague question. I had heard this topic discussed in a webinar a while ago, and I certainly didn't have a good technical grasp on how it worked until I read your answer.