How to choose game server locations for the best player experience

27 April 2021

Gaming has truly gone global. You can no longer focus on just the United States, Europe, or Asia and expect to be successful: gaming is now a global playing field, with global competition in every market.

Therefore, you need to be present in as many locations as you feasibly can, or as many as your game and matchmaker can support. The price of ignoring regions is that you'll find competing games moving into those markets, capturing players who might be forever out of your reach.


Global coverage for gaming is a puzzle

This brings immediate logistical challenges: not every platform or cloud provider is present everywhere in the world. Regions like Russia, Latin America, the Middle East, Africa, and parts of Asia are grossly underserved by the large hyperscalers such as Google, AWS, Azure, and Alibaba/Tencent. For game developers it becomes a puzzle in which you need to combine providers to get global coverage for your game. That is why one of the take-aways of the previous topic was that you really want to ensure your game can run on multiple clouds, or that you work with a platform which does that for you.
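As a rough illustration of that puzzle (the region names, provider names, and coverage map below are all made up for the example), the selection logic can be as simple as a per-region coverage list plus a preference order, falling back down the list where the first-choice provider has no presence:

```python
# Illustrative sketch only: which provider to deploy game servers with, per region,
# given that no single provider covers every region you care about.

COVERAGE = {                       # hypothetical coverage map
    "europe-west":   ["aws", "gcp", "azure", "bare-metal"],
    "middle-east":   ["bare-metal", "azure"],
    "south-america": ["bare-metal", "aws"],
    "south-asia":    ["bare-metal"],
}

PREFERENCE = ["gcp", "aws", "azure", "bare-metal"]   # assumed preference order

def pick_provider(region: str) -> str:
    """Return the most preferred provider with presence in the given region."""
    available = COVERAGE.get(region, [])
    for provider in PREFERENCE:
        if provider in available:
            return provider
    raise ValueError(f"no coverage for region {region!r}")

for region in COVERAGE:
    print(region, "->", pick_provider(region))
```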

“The price of ignoring regions is that you'll find competing games moving into those markets, capturing players who might be forever out of your reach.”

Stefan Ideler, CTO i3D.net

Optimizing for low latency

You can't really discuss player experience and global coverage without talking a little bit about latency as well. So how much does latency matter in gaming? I often have discussions with vendors and suppliers who still think gaming can be served the same way as, for example, financial services: from 2-3 hubs around the world.

Let's dispel that thought right away. The reality is that low latency in gaming is not about squeezing every millisecond out of a single London-to-Tokyo route. It's about optimizing 50+ hubs around the world, in all major population areas, to ensure your game is hosted as close as possible to as many players as possible, using the best network path available.
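As a minimal sketch of what that means in practice, hub selection on the client side can be as simple as probing a handful of candidate hubs and connecting to the one with the lowest median round-trip time. The hostnames below are placeholders, and a real client would probe the game's own UDP ping endpoint rather than shelling out to the system `ping`:

```python
# Sketch: pick the connection hub with the lowest median RTT from the player's machine.
import statistics
import subprocess

CANDIDATE_HUBS = {                 # hypothetical ping-beacon hostnames
    "dubai":     "pingbeacon-dxb.example.net",
    "frankfurt": "pingbeacon-fra.example.net",
    "singapore": "pingbeacon-sin.example.net",
}

def sample_rtt_ms(host: str, samples: int = 5) -> float:
    """Median ICMP round-trip time in milliseconds, via the system ping (Linux/macOS)."""
    rtts = []
    for _ in range(samples):
        try:
            out = subprocess.run(["ping", "-c", "1", host],
                                 capture_output=True, text=True, timeout=5).stdout
        except subprocess.TimeoutExpired:
            continue
        for token in out.split():
            if token.startswith("time="):        # e.g. "time=12.3"
                rtts.append(float(token[len("time="):]))
    return statistics.median(rtts) if rtts else float("inf")

best = min(CANDIDATE_HUBS, key=lambda name: sample_rtt_ms(CANDIDATE_HUBS[name]))
print("connect to hub:", best)
```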

Since this is the case, it's unlikely that you'll see gaming companies at the forefront of acquiring highly specialized infrastructure equipment for sub-1ms latency gains. Please do not fall into that trap.

This might change with cloud gaming or hybrid streaming models, which I'll touch upon in a later blog: since total latency becomes the network path plus your compute and encoding times, gains of a few milliseconds start to matter. For now, though, that business case is far-fetched for everyone except a few hyperscalers with enough money to burn.
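To make that concrete, here is a back-of-the-envelope latency budget for a streamed frame, with purely illustrative numbers; the point is that once encode, decode, and display time are added, a few milliseconds of network gain suddenly represent a meaningful share of the total:

```python
# Rough glass-to-glass latency budget for a cloud-gaming frame (assumed numbers).
network_rtt_ms = 30.0   # player <-> streaming host, round trip
game_frame_ms  = 16.7   # one simulated/rendered frame at 60 fps
encode_ms      = 5.0    # video encode on the host (assumed)
decode_ms      = 4.0    # decode on the client device (assumed)
display_ms     = 8.0    # buffering plus display scan-out (assumed)

total_ms = network_rtt_ms + game_frame_ms + encode_ms + decode_ms + display_ms
print(f"glass-to-glass estimate: {total_ms:.1f} ms")   # ~63.7 ms
```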

Games as a service

Let’s go back to the game as a live service model, where retaining your players is key. You want to make sure they play your game daily and are exposed as often as possible to rewards, skins, season passes and other seasonal actions (your microtransactions) to keep your game running. That experience needs to be flawless; sub-par latency and packet-loss can make or break that whole proposition.

So what does a successful GaaS (Game as a Service) title actually need?

Sadly, a large network of partners and a bunch of transit connections won't get you there. You (if in-house) or your infrastructure/platform partner need to build, invest in, and retain relationships with all the major internet players at each location where you plan to operate your game. That also means jumping through the various hoops and barriers these parties impose along the way.

To clarify: in many places around the world the internet infrastructure does a poor job of keeping traffic local (which gaming requires), so manual intervention is needed to make things work.

Bad practices

Take the Middle East, for example: many incumbents there do not exchange traffic locally, but instead exchange it in Frankfurt or Marseille. This can happen even within the same country. From Dubai, to reach another ISP, your traffic may first travel to Frankfurt before coming back to your neighbor.

The same happens in Latin America, where a lot of exchanges take place in Miami. You can imagine this is a disaster for latency-sensitive applications like online gaming. In general, these ISPs' first suggestion is that you come to their own data center and build or buy a local cluster just for your users.

This functions as a so-called 'gaming CDN', in the way a CDN (Content Distribution Network) works for a service like YouTube. The issue is that in online gaming, where people play together regardless of ISP, this is simply not an option. Gaming is not static content; it's dynamic by nature.

Another example is how gaming traffic is often misidentified and shaped. We've hosted games with voice-chat capabilities that got mistakenly labeled as WebRTC and blocked by ISPs trying to stop their customers from bypassing their voice revenue model. Besides that, UDP traffic is sadly still policed, or blocked entirely, across various parts of the internet, with a mindset that only TCP/HTTP/HTTPS are valid services.
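A simple way to catch this from the client side is a UDP reachability probe against your own backend before committing to a transport. The sketch below assumes a hypothetical echo endpoint; the hostname and port are placeholders:

```python
# Sketch: detect whether UDP to the game backend is blocked or policed on this path.
import socket

def udp_echo_works(host: str, port: int, timeout_s: float = 2.0) -> bool:
    """Send a small UDP payload and expect it echoed back within the timeout."""
    payload = b"probe"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout_s)
        try:
            sock.sendto(payload, (host, port))
            data, _ = sock.recvfrom(1024)
            return data == payload
        except (socket.timeout, OSError):
            return False

if not udp_echo_works("udp-echo.example.net", 27015):   # hypothetical endpoint
    print("UDP appears blocked or policed on this path; consider a TCP fallback")
```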

Then suddenly you find yourself having to educate and explain why gaming traffic really does need to be served locally. You also need to implement technical solutions to keep both inbound and outbound traffic in the region, since just signing up a local transit party or peering agreement is often not enough. You really need to look at it from the user's perspective and verify that traffic is properly routed both ways.
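One crude way to check this from the player's side is to run a traceroute towards the game server and flag hops whose hostnames hint at an out-of-region exchange. The marker strings and target hostname below are illustrative only, and the reverse path needs the same check run from the server side, because a healthy forward route says nothing about the return:

```python
# Sketch: spot "local" traffic detouring through far-away exchanges (Linux/macOS traceroute).
import subprocess

OUT_OF_REGION_HINTS = ("fra", "decix", "ams-ix", "mrs", "mia")   # illustrative, not exhaustive

def suspicious_hops(target: str) -> list[str]:
    """Return traceroute lines whose hostnames contain an out-of-region marker."""
    out = subprocess.run(["traceroute", "-q", "1", target],
                         capture_output=True, text=True, timeout=120).stdout
    return [line.strip() for line in out.splitlines()
            if any(hint in line.lower() for hint in OUT_OF_REGION_HINTS)]

for hop in suspicious_hops("game-server-dxb.example.net"):   # hypothetical server
    print("possible detour:", hop)
```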

Examples: optimizing latency in the Middle East

Two examples we had to deal with involved helping players of a popular football game in the Middle East: one between Kuwait and Dubai, the other between Dubai and Pakistan.

In one case, a peering agreement was in place, but it only worked one way, so from the user's perspective players were still seeing 130 ms of latency while trying to play a football game with their neighbors. If traffic were routed locally, this should be around 16 ms.

Or take the Pakistan-Dubai case: a route that should sit around 22-25 ms was showing about 150 ms due to transit policies on the ISP side, because traffic defaulted to the 'cheaper' route via Europe instead of staying local.
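Some rough propagation math shows why these detours hurt so much. Light in fiber covers roughly 200 km per millisecond, so with approximate great-circle distances (ballpark figures, not measured route lengths) the Kuwait-Dubai numbers above fall out naturally:

```python
# Back-of-the-envelope RTT from distance; distances below are approximate great-circle figures.
FIBER_KM_PER_MS = 200.0            # light in fiber: roughly 200 km per millisecond

def rtt_ms(one_way_km: float, path_inflation: float = 1.4) -> float:
    """Estimated round-trip time: distance there and back, inflated for non-ideal fiber paths."""
    return 2 * one_way_km * path_inflation / FIBER_KM_PER_MS

detour_km = 4840 + 4030            # Dubai -> Frankfurt -> Kuwait, approx.
print(f"via Frankfurt: ~{rtt_ms(detour_km):.0f} ms")   # ~124 ms
print(f"kept local:    ~{rtt_ms(860):.0f} ms")          # ~12 ms (Dubai-Kuwait direct)
```

The remaining gap to the measured 130 ms and 16 ms comes from router hops, queuing, and real fiber paths being longer than the great-circle distance.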

The first step is to be aware of the issue in the first place. That requires good monitoring of your feedback channels: social networks, in-game reports, and client-to-server network tracking. This helps you figure out what is going wrong and where, because let's face it, the internet breaks all the time. With that information you can then try to get it fixed, yourself or with your partners. This can be a lengthy process, however: it involves working with suppliers, leaning on relationships, and sometimes even asking your end users to make their ISPs aware of the problem.
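As a minimal sketch of that client-side tracking (the endpoint and payload shape are hypothetical), the client can sample RTT and packet loss against the connected server at a low rate and ship it to a stats endpoint, so regressions like the routing cases above show up in dashboards instead of only in angry forum posts:

```python
# Sketch: periodic network-quality sample reported from the game client.
import json
import statistics
import time
import urllib.request

def report_network_sample(server: str, rtts_ms: list[float], lost: int, sent: int) -> None:
    """Post one aggregated RTT/packet-loss sample to a (hypothetical) telemetry endpoint."""
    payload = {
        "server": server,
        "median_rtt_ms": statistics.median(rtts_ms) if rtts_ms else None,
        "p95_rtt_ms": sorted(rtts_ms)[int(0.95 * len(rtts_ms))] if rtts_ms else None,
        "packet_loss": lost / sent if sent else None,
        "ts": int(time.time()),
    }
    req = urllib.request.Request(
        "https://telemetry.example.net/v1/netsample",    # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

# e.g. report_network_sample("dxb-01", rtts_ms=[14.2, 15.1, 131.0], lost=2, sent=120)
```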

Detection, visibility of the impact, and transparency about the steps taken towards a solution are key. They also help set and manage player expectations; nothing is worse than false promises, right? Either way, the player's experience is what matters, so keep that in mind while developing your game.

Would you like to know how i3D.net reduced latency in the Middle East by 80%?
➜ Download Case Study
Main Take-Aways

Providing the best player experience is not about squeezing every millisecond out of a single route. It's about optimizing dozens of hubs around the world, in high-population areas, to ensure your game is hosted as close as possible to as many players as possible, using the best network path available.