All Things 802.11ac - Question 7: How Do the New Modulation and Coding Schemes Work?

I think it’s fair to say that 802.11ac has simplified the MCS world, at least when compared to 802.11n. Can you give us a brief overview of the MCS specification of .11ac, and how decisions are made as to which MCS to use at any given moment in time? And are there any other PHY-layer elements of interest that provide meaningful benefits to end-users?

Craig Mathias


Matthew Gast

As the question states, MCS is the modulation and coding scheme: a combination of the modulation (the number of bits per symbol, a measure of the raw bit-carrying capacity of the channel) and a forward-error correction (FEC) code that protects against transmission errors.

802.11n had 77 different options, while 802.11ac has only 10. That's due to two major changes in 11ac. First, the channel width is no longer part of the MCS. In 11ac, MCS 7 is always 64-QAM with a rate-5/6 code, and that MCS can be applied to any channel width.

Second, 802.11ac eliminated the unequal-modulation options from its MCS set. In 11n, there were several options where each spatial stream could be modulated independently: if one stream had a high error rate, it could be modulated more conservatively than the others. (The reason for including this was that beamforming changes the channel characteristics, and applying the steering matrix might reduce the SNR on one particular stream.)
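
To put numbers on that, here's a minimal Python sketch of the ten 802.11ac MCS values and the standard rate arithmetic (data subcarriers x bits per subcarrier x code rate x streams / symbol time). The table values are from the spec; the function and variable names are just illustrative.

```python
# Illustrative sketch: the ten 802.11ac MCS values and the PHY rate
# arithmetic. Table values are from the spec; names are my own.

# MCS index -> (modulation, bits per subcarrier, FEC code rate)
VHT_MCS = {
    0: ("BPSK",    1, 1/2),
    1: ("QPSK",    2, 1/2),
    2: ("QPSK",    2, 3/4),
    3: ("16-QAM",  4, 1/2),
    4: ("16-QAM",  4, 3/4),
    5: ("64-QAM",  6, 2/3),
    6: ("64-QAM",  6, 3/4),
    7: ("64-QAM",  6, 5/6),   # MCS 7: always 64-QAM, rate 5/6, any width
    8: ("256-QAM", 8, 3/4),
    9: ("256-QAM", 8, 5/6),
}

# Channel width in MHz -> number of data subcarriers
DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}

def vht_rate_mbps(mcs, width_mhz, n_streams, short_gi=False):
    """PHY data rate in Mbps (bits per OFDM symbol / symbol duration)."""
    _, bits, code_rate = VHT_MCS[mcs]
    symbol_us = 3.6 if short_gi else 4.0  # 3.2 us symbol + 400/800 ns guard
    bits_per_symbol = DATA_SUBCARRIERS[width_mhz] * bits * code_rate * n_streams
    return bits_per_symbol / symbol_us

# MCS 9, 80 MHz, one stream, short GI -> the familiar 433.3 Mbps
# (note: a few combinations, e.g. MCS 9 at 20 MHz with one stream,
# are not defined in the spec)
print(round(vht_rate_mbps(9, 80, 1, short_gi=True), 1))
```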

Matthew Gast

The decision of what data rate (MCS) to use is always implementation dependent in 802.11ac (or any 802.11 PHY). Typically, a product will use a cutoff SNR for each rate, set at the receiver's maximum performance plus a few dB of margin. For example, if a chip was designed around the idea of a -95 dBm noise floor, the spec's minimum performance requirement of a signal received at -82 dBm for MCS 0 on a 20 MHz channel corresponds to 13 dB of SNR.

And yes, at higher MCSes and wider channels, the minimum sensitivity requirement gets pretty high. For MCS 9, the spec's requirement goes from -57 dBm on a 20 MHz channel to -48 dBm on a 160 MHz channel. That requires some pretty short distances to achieve.
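
As a rough sketch of where those numbers come from: the spec sets a per-MCS sensitivity baseline for 20 MHz channels, and each doubling of channel width doubles the noise bandwidth, adding 3 dB to both the noise floor and the required receive level. The -95 dBm figure below is the hypothetical design point from above, and the helper names are mine.

```python
from math import log2

# Sketch of the spec's minimum-sensitivity scaling. Baseline values are
# the 20 MHz requirements per MCS; each doubling of channel width adds
# 3 dB of noise bandwidth, so the required receive level rises by 3 dB
# while the SNR requirement itself stays constant.

SENSITIVITY_20MHZ_DBM = [-82, -79, -77, -74, -70, -66, -65, -64, -59, -57]
NOISE_FLOOR_20MHZ_DBM = -95   # hypothetical design point from above

def min_sensitivity_dbm(mcs, width_mhz):
    """Spec minimum receive level for an MCS at a given channel width."""
    return SENSITIVITY_20MHZ_DBM[mcs] + 3 * log2(width_mhz / 20)

for width in (20, 40, 80, 160):
    level = min_sensitivity_dbm(9, width)
    noise = NOISE_FLOOR_20MHZ_DBM + 3 * log2(width / 20)
    print(f"MCS 9 @ {width:>3} MHz: {level:.0f} dBm (SNR {level - noise:.0f} dB)")
# -57 dBm at 20 MHz through -48 dBm at 160 MHz, matching the figures above
```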

Matthew Gast

The basic algorithm used by most 802.11 devices is to transmit at the highest speed that's stable, and fall back to lower speeds when required. There's been a great deal of research into rate adaptation, and often there's a hysteresis component that tracks performance over time so the algorithm doesn't overreact to momentary blips. (When there are both open- and closed-source drivers for the same chip, sometimes the closed-source driver has the better rate adaptation algorithm.)
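
As a purely illustrative toy (real drivers are considerably more sophisticated, and every name and threshold here is invented), a hysteresis-based adapter might look like this:

```python
# A toy rate adapter (invented, far simpler than any real driver). It keeps
# a smoothed per-frame success rate and only changes MCS when that average
# crosses a threshold -- the hysteresis that damps momentary blips.

class ToyRateAdapter:
    UP_THRESHOLD = 0.90     # climb when >=90% of recent frames succeed
    DOWN_THRESHOLD = 0.60   # fall back when success dips below 60%
    ALPHA = 0.1             # smoothing factor for the moving average

    def __init__(self, max_mcs=9):
        self.mcs = 0
        self.max_mcs = max_mcs
        self.success_avg = 1.0

    def on_frame_result(self, acked: bool) -> int:
        # Exponentially weighted moving average of frame success
        sample = 1.0 if acked else 0.0
        self.success_avg += self.ALPHA * (sample - self.success_avg)
        if self.success_avg >= self.UP_THRESHOLD and self.mcs < self.max_mcs:
            self.mcs += 1
            self.success_avg = self.DOWN_THRESHOLD  # re-earn trust at new rate
        elif self.success_avg < self.DOWN_THRESHOLD and self.mcs > 0:
            self.mcs -= 1
            self.success_avg = self.UP_THRESHOLD    # don't fall straight through
        return self.mcs
```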

Craig Mathias

So have the developers of the standard perhaps gone a bit overboard in terms of practicality, or are they anticipating continuing advances in implementation technologies?

Matthew Gast

The standard's rules on multi-rate support are about compatibility: ensuring that new devices transmit whatever older devices need to hear at the older data rates. Whether you choose to transmit at MCS 9 or MCS 7 is totally up to a vendor, though -- one implementation may try MCS 9 three times before falling back, while others might make the number of attempted transmissions larger or smaller.
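
As a sketch of that per-frame retry policy (the chain and the retry count below are invented examples, not any real driver's values):

```python
# Sketch of a per-frame retry policy: try the current rate a few times,
# then walk down a fallback chain. How many attempts to make per rate is
# exactly the kind of knob the standard leaves to each vendor.

FALLBACK_CHAIN = [9, 7, 4, 0]   # MCS indices to try, fastest first
ATTEMPTS_PER_RATE = 3           # e.g. "try MCS 9 three times"

def transmit_with_fallback(send_frame):
    """send_frame(mcs) -> True if ACKed; returns the MCS that worked, or None."""
    for mcs in FALLBACK_CHAIN:
        for _ in range(ATTEMPTS_PER_RATE):
            if send_frame(mcs):
                return mcs
    return None  # every retry failed; the frame is dropped
```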

Overall, I'd consider rate selection to be an area where the standard doesn't say much because it enables innovation. As a chip vendor improves at sustaining higher data rates at lower signal strengths, they can just modify their software to use higher data rates more often.

The downside is that it's possible to write some really awful rate adaptation algorithms, too. It would be possible to write the Dumb As Any Nitwit (DAAN) algorithm that said "whenever there is packet loss, go to the minimum data rate and stay there until finding a new AP."

Matthew Gast

There's one other small component in the PHY of 802.11ac: the Low-Density Parity Check (LDPC) code. It's a forward-error correction code, but it performs a bit better than the convolutional codes that have been used in 802.11 up to now. You might get the equivalent of 1-2 dB by using LDPC instead of convolutional codes! (And given how close beamforming takes you to the wire, you want to be using LDPC with beamforming...)
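
As a back-of-the-envelope illustration of what 1-2 dB buys (assuming a simple power-law path-loss model; the indoor-ish exponent of 3.0 is my assumption, not the post's):

```python
# Back-of-the-envelope: under a power-law path-loss model with exponent n,
# an extra G dB of link budget stretches range by a factor of 10**(G/(10*n)).
# The exponent of 3.0 below is a rough indoor assumption, not a spec value.

def range_gain(gain_db, path_loss_exponent=3.0):
    """Multiplicative range increase bought by gain_db of link budget."""
    return 10 ** (gain_db / (10 * path_loss_exponent))

for g_db in (1, 2):
    print(f"{g_db} dB -> about {100 * (range_gain(g_db) - 1):.0f}% more range")
# 1 dB -> ~8%, 2 dB -> ~17% more range at the same data rate
```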

John Smith

I understand that LDPC is mostly preferred for high MCSes (higher-order QAM such as 256-QAM). Does that mean that during retransmission, on switching to a lower MCS, the FEC scheme can be changed to convolutional coding instead of LDPC? In general, my question is: can the FEC of a retransmitted packet differ from the FEC of the original packet (considering that 802.11ac offers two FEC mechanisms, convolutional coding and LDPC)? Also, is the MCS generally varied on retransmission?

Thanks!
John