GRE tunnel performance for guest users

  • Question
  • Updated 5 years ago
  • Answered
Does anyone know the maximum number of GRE tunnels available for a single termination point (AP121), and the maximum throughput when several remote offices are tunneling guest users through GRE? Also, is any HA available (e.g. 2x AP121 termination points)?
HippenLive

Posted 5 years ago

Mike Kouri, Official Rep
According to my records, the AP121 supports up to 63 GRE tunnels, each carrying up to 100 clients. In practice, I don't know of anyone who uses more than two GRE tunnels; I believe the vast majority of our customers run only one tunnel back to a single tunnel terminator in the DMZ for their guests.

GRE is a very lightweight protocol. Assuming the AP121 had -only- guests connected and there was no blocking on intermediate equipment or at the DMZ tunnel terminator, you should see at least 80% of the performance you would see with wireless-to-wired forwarding.
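A quick back-of-the-envelope check of why GRE itself is lightweight (the byte counts below assume plain GRE over IPv4 with no optional header fields; vendor implementations may add more):

```python
# Rough GRE-over-IPv4 encapsulation overhead estimate.
# Assumption: basic GRE header with no key/sequence/checksum fields.
OUTER_IP_HDR = 20   # outer IPv4 header, bytes
GRE_HDR = 4         # minimal GRE header, bytes

def gre_byte_overhead(payload_bytes: int) -> float:
    """Fraction of extra bytes GRE encapsulation adds to one packet."""
    extra = OUTER_IP_HDR + GRE_HDR
    return extra / payload_bytes

# For a full 1500-byte frame, the byte overhead is under 2%,
# so any ~20% throughput loss is mostly CPU cost, not headers.
print(f"{gre_byte_overhead(1500):.3f}")  # → 0.016
```

In other words, the wire overhead of the encapsulation is almost negligible; the performance cost comes from where the packets are processed, as discussed further down the thread.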

Did this answer your questions, or did I just dance around them?
HippenLive
Thx Mike. If we have a remote office with 50x AP121 and want to tunnel guest traffic back to the main office, where there is one AP121 acting as a GRE termination point, wouldn't that be 50 GRE tunnels towards that one AP121? Will the one AP121 be able to forward traffic out its Ethernet port at wire speed minus the ~20% overhead?
Mike Kouri, Official Rep
HippenLive,
I think so, but to be perfectly honest with you, if I were in your shoes I wouldn't use an AP121 as my termination gateway. Given the number of other APs that may be tunneling back to your DMZ, I would consider deploying a CVG as a tunnel terminator. It has a bigger CPU and more RAM, and it won't also be trying to act as an AP, so it will handle the load more gracefully.

Also, given your comments about remote offices, unless you have an extremely high-speed internet connection, I think the link between the remote offices and the HQ will be the bottleneck and you are unlikely to get wire-speed forwarding onto the DMZ network.

Does this help at all?
Roberto Casula, Champ
I would definitely look at using HiveOS VAs (CVGs) as tunnel terminators as Mike suggests. Also, you can configure more than one terminator and the tunnels will be balanced between them (just as happens with automatic tunnels for L3 roaming).

My experience with GRE tunnelling across our various customers is that it generally works well. For most there are no problems and no user complaints about performance. At a few customers, however, there have been some specific issues worth bearing in mind, especially if you are making very extensive use of GRE.

Every GRE packet has to be processed by the AP's CPU rather than being forwarded in hardware as non-tunnelled traffic is (this applies to both tunnel initiators and tunnel terminators). This can generate a lot of software interrupt requests and a significant increase in the AP's CPU utilisation. If the AP is also doing a lot of other CPU-intensive tasks (e.g. WIPS in an environment with many rogue APs, or L7 application reporting) on top of heavy GRE processing, it can be too much for the AP to handle, and in extreme circumstances this causes problems (for example, the AP may "drop off" from HiveManager due to CAPWAP timeouts, or new users may not reliably connect due to delays in EAP processing). This is especially a problem on older APs with slower processors (like the AP120). The problem is much less pronounced on newer models such as the AP121. AP330s are very unlikely to experience any issues, even when acting as tunnel terminators, due to their higher-spec processors (though I'd still use HiveOS VAs for this).

Be particularly aware of excessive broadcast/multicast traffic in the VLAN where the tunnel terminates as broadcast/multicast packets have to be replicated and sent separately down each tunnel. There is an option now in the interface Management Options screen to filter multicast traffic sent down GRE tunnels. If you do not need multicast, consider filtering it completely via this mechanism.
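To illustrate why broadcast/multicast replication matters at scale (the packet rates below are made-up illustrative figures, not measurements):

```python
# Illustrative only: broadcast/multicast packets in the terminating VLAN
# must be encapsulated and sent separately down each GRE tunnel, so the
# terminator's software-forwarding load multiplies with the tunnel count.
def replicated_pps(broadcast_pps: int, tunnels: int) -> int:
    """Extra packets per second the tunnel terminator must encapsulate."""
    return broadcast_pps * tunnels

# A modest 100 pps of broadcast in the guest VLAN, fanned out over
# 50 remote-office tunnels, becomes 5000 extra CPU-forwarded pps.
print(replicated_pps(100, 50))  # → 5000
```

This is why filtering multicast on the tunnels is worthwhile whenever you don't actually need it.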

You also need to be careful of the impact of RF issues. If you have a bad RF environment that is causing a lot of packets to require multiple retransmission attempts, this places further load on the CPU. In the worst case, this can cause the CPU utilisation of the AP to hit 100%. Once the RF environment is rectified, this problem goes away.

It is definitely the case that GRE tunnelling will limit your bandwidth, and this is primarily due to the software packet processing required, not the overhead of the GRE encapsulation. A smaller number of large packets will be far less limited than a large number of small packets, so it's not just about the amount of data, but also how it's sent.
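This packet-rate effect can be sketched with a toy model (the packets-per-second ceiling is a hypothetical number, not an Aerohive spec): if the CPU caps out at a fixed encapsulation rate, achievable throughput scales directly with packet size:

```python
# Toy model: a software forwarder limited by packets per second, not bytes.
CPU_PPS_LIMIT = 10_000  # hypothetical per-device encapsulation ceiling

def max_throughput_mbps(packet_bytes: int, pps_limit: int = CPU_PPS_LIMIT) -> float:
    """Achievable throughput in Mbit/s at a fixed packet-rate ceiling."""
    return pps_limit * packet_bytes * 8 / 1_000_000

# The same pps budget yields very different goodput:
print(max_throughput_mbps(1500))  # → 120.0 (full-size packets)
print(max_throughput_mbps(128))   # → 10.24 (small packets, e.g. VoIP)
```

So two traffic mixes with the same aggregate bit rate can load the tunnel endpoint very differently.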
HippenLive
We did some testing with two laptops (wireless link rate 300 Mbps): one laptop connected to an AP120 and the other to an AP141. Both APs had GRE tunnels to a central AP141, which had a Windows server running an iPerf server connected to it. All APs and servers were gigabit-connected. With both laptops running 10x iPerf client sessions towards the Windows server over a period of 200 seconds, we could not get better than 65 Mbps with default iPerf settings. It seems that having one AP141 as the only GRE termination point will always limit the bandwidth to around 65 Mbps. You could then use several APs as termination points (round robin) or, as recommended by Roberto and Mike, use a CVG. Thx - Jarle
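For anyone wanting to reproduce a similar test, the invocations would look roughly like this (iPerf2 syntax; the server address is a placeholder, and your flags may differ):

```shell
# On the Windows server behind the tunnel terminator: run an iPerf server.
iperf -s

# On each wireless laptop: 10 parallel TCP streams for 200 seconds
# towards the server (192.0.2.10 is a placeholder address).
iperf -c 192.0.2.10 -P 10 -t 200
```

Comparing the same run with the laptops on a non-tunnelled SSID would isolate how much of the limit comes from the GRE termination point itself.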