Why edge compute doesn’t always mean lower latency
9 May 2023
Since its advent, ‘the cloud’ has been a buzzword for virtually limitless, scalable compute resources, although this might be considered false advertising. Cloud services are convenient precisely because they let users and customers forget about the infrastructure behind the service they use (until it breaks down, of course), but there are real server costs involved, and they can get pretty steep as the number of users on a service rises.

The myth of ‘limitless’

The popularity of the cloud has also led to other, more niche products being explored, one of them being ‘edge cloud.’ Edge cloud refers to a distributed computing architecture that brings cloud resources closer to the end user, minimizing latency and improving performance (in theory). Deploying compute resources at the network’s edge should allow for more real-time data processing and analysis, reduced bandwidth costs, and more. Strong use cases include autonomous vehicles, Augmented and Virtual Reality (AR/VR), and the Internet of Things (IoT).

The cloud, and even the edge cloud, still racks servers in data centers. So while the compute resources might be physically closer to the end user, they are not always in close, effective proximity to the interconnection edge of the end user’s local Internet Service Provider (ISP).


Introducing latency

Latency refers to the delay between a data request and its response, typically measured in milliseconds. It determines the speed and efficiency of data transmission, affecting user experience and application performance. That makes it a crucial aspect to consider, especially for real-time communication (RTC) applications and multiplayer video games.
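As a rough illustration of what those milliseconds mean in practice, here is a minimal Python sketch that times a TCP handshake as a proxy for round-trip latency. The host and port are placeholders, and a TCP connect only approximates the latency a game client would actually experience.

```python
# Minimal sketch: estimate round-trip latency by timing a TCP handshake.
# 'example.com' and port 443 are placeholders, not real game endpoints.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a TCP connect as a rough proxy for network round-trip latency."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about the handshake time
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    for _ in range(3):  # take a few samples, single measurements are noisy
        print(f"RTT ~ {tcp_rtt_ms('example.com'):.1f} ms")
```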

So it seems logical to put your compute resources as close to your users as possible, since closer proximity means lower latency, right?

Well, yes, in theory. But “closer to the user” does not have to mean physically closer to the user; it’s a bit more complicated than that. To see why, we need to look at how the internet, and more specifically peering, works.

Peering is a voluntary interconnection agreement between two or more network providers to exchange traffic directly, enabling faster and more efficient data routing. This arrangement allows each network to reach the other’s end users, improving connectivity, reducing latency, and lowering overall bandwidth costs. Aside from direct peering connections between network providers, peering can also be established at Internet Exchange Points (IXPs), where multiple networks converge and can connect and exchange traffic more easily.

The objective here is to avoid “scenic routing”: data packets taking a longer route than necessary because the network provider of the compute resource doesn’t have a direct connection to an end user’s ISP. This, in turn, increases latency.
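You can spot scenic routing yourself by comparing the hop-by-hop paths your packets take to different endpoints. The sketch below simply wraps the system traceroute tool, so it assumes a Unix-like machine with traceroute installed; the endpoint hostnames are invented placeholders.

```python
# Minimal sketch: print the network path to each endpoint using the system
# traceroute. A scenic route shows up as extra hops, often in another country.
# The hostnames below are placeholders for illustration.
import subprocess

def trace(host: str) -> None:
    print(f"--- route to {host} ---")
    # '-n' skips reverse DNS lookups so hop IPs print quickly
    subprocess.run(["traceroute", "-n", host], check=False)

for endpoint in ("compute-uae.example.com", "compute-eu.example.com"):
    trace(endpoint)
```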


Connecting players in the Middle East

Let’s say you’re hosting a video game in the Middle East, and for this you opt for cloud resources in the United Arab Emirates. Great, you’d say: my game instance is running on a compute resource in the United Arab Emirates, close to my end users.

One player is playing on ISP A, the other on ISP B. ISP A is connected directly with the provider of the cloud resource, whereas ISP B isn’t (peering arrangements are not always easy to get). This means that player one is routed directly to your network, while the other is sent on a scenic route, because the network provider lacks a direct connection to ISP B. The second player may be physically close to the server, yet their data travels much further, which detrimentally affects their player experience.

Even if your server is in the same city as your players, that doesn’t always translate to direct connections. This is especially the case in smaller cities and regions, where IXPs and carrier hotels aren’t always available for interconnection, sometimes forcing data packets to travel to another city or country before circling back to reach the end user.


Why edge is not always necessary (or even preferable)

Even if we ignore the costs involved in setting up compute resources wherever your users are (this would mean racks of servers in every city your users live in, a virtually impossible task), there are reasons why this would not be the best option for your video game or RTC application. Take video games, for instance. It is important for players to have low latency, but there is no point in having an excellent connection to a match if it has no other players. Matchmaking is a very sensitive process that is predicated on variables such as the number of users in a region, their skill levels, and other factors such as game modes.
It is also becoming increasingly commonplace for players to play with their friends across regions. You might want to jump into a match of Fortnite with your friends in Asia while sitting in Europe, and the only region with tolerable latency for everyone in the squad could be somewhere in between, like the Middle East (the sketch below illustrates the idea).
For RTC use cases, the whole idea is to connect users from all kinds of different places, so the likelihood of data packets traveling across regions is very high. If the peering agreements and the routes the network takes are not carefully thought out with the objective of finding the fastest path between locations, you are setting your service up for failure.
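As a hypothetical sketch of the cross-region squad scenario above, the snippet below picks the hosting region with the lowest worst-case latency across a squad. The region names and RTT figures are made up for illustration.

```python
# Hypothetical sketch: choose the hosting region that minimizes the worst
# round-trip time across a squad. All names and numbers are invented.
squad_latency_ms = {
    "player_eu":   {"eu-west": 15,  "middle-east": 70, "asia-east": 160},
    "player_asia": {"eu-west": 170, "middle-east": 80, "asia-east": 20},
}

def best_region(latencies: dict[str, dict[str, int]]) -> str:
    """Return the region with the lowest worst-case RTT for the group."""
    regions = next(iter(latencies.values())).keys()
    return min(regions, key=lambda r: max(p[r] for p in latencies.values()))

print(best_region(squad_latency_ms))  # -> 'middle-east': tolerable for both
```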


Bringing users together without edge cloud

It is also important to take note of how fast traffic can travel between locations before deploying costly resources in a bid to be as close to your users as possible. Take the example of traffic for London and UK video game players on i3D.net’s network: i3D.net can serve players in the UK from our Rotterdam data center with 6-10 ms round-trip latency, which means the connection offers a good level of service for players in both the UK and continental Europe. In a game like PUBG, which requires 100 players per match, it is advisable to host the game in fewer locations, so that matches fill quickly and easily for your player base without relying on bots to fill empty slots. This way, i3D.net can bring your community together at a minimal latency impact, so everyone in the match can enjoy a good overall user experience.
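As a back-of-the-envelope illustration of why fewer hosting locations help matchmaking, the snippet below models how long a 100-player match takes to fill when a fixed inflow of players is split evenly across more regions. The inflow rate is invented for illustration.

```python
# Illustrative only: splitting a fixed player inflow across more regions
# makes a 100-player match fill more slowly. Numbers are invented.
MATCH_SIZE = 100
players_per_minute = 300  # hypothetical total inflow of queueing players

for regions in (1, 2, 5, 10):
    fill_minutes = MATCH_SIZE / (players_per_minute / regions)
    print(f"{regions:>2} region(s): ~{fill_minutes:.1f} min to fill a match")
```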


Other issues with edge cloud

There are several other problems with edge cloud as well. Here are a few examples:


1. Cost
The most obvious drawback of deploying and managing a large number of devices is the cost involved. On top of that, the cost of network connectivity and data storage adds up over time.


2. Bandwidth
In theory, bandwidth costs should go down when traffic is transported over a much shorter distance. In practice, edge deployments often mean breaking open the network topology in places where it wasn’t designed for that, and deviating from a high-scale design plan can be a costly adventure.


3. Limited resources
Edge cloud generally relies on small computing devices with limited processing power, storage, and memory. Such modest hardware is chosen over high-end servers to keep costs to a minimum, given the large number of servers needed to be close to all users. This can create challenges when running complex applications or managing large amounts of data.


4. Network connectivity
A reliable and fast network connection is required to transmit data between devices and the cloud. If the network is congested or suffers from latency issues, it impacts the performance and reliability of edge cloud applications. And if you are not paying extra for dedicated infrastructure, you may face issues such as congestion in overloaded locations.


5. Security
Deploying edge cloud might lead to decreased physical security. It is much simpler to deploy guards and security mechanisms at one location than across every PoP in the edge cloud, and that gap can create vulnerabilities in your network.


6. Scalability
Designed to operate on a small scale, edge cloud can be challenging to scale up as the number of devices and users grows. This can limit the ability to support large-scale applications.

Main takeaways

Edge cloud can certainly have its benefits, but there is a whole host of issues to consider before committing a large number of compute resources to reducing latency, especially when other, more feasible and effective methods exist.

Reach out to i3D.net today to see how we can help in setting up resources for your project.