How does a Content Delivery Network (CDN) work?

A Content Delivery Network (CDN) is a geographically distributed network of servers that work together to deliver web content, such as images, videos, scripts, and other assets, to users from locations closer to them. By storing copies of content on multiple servers worldwide, CDNs reduce latency and enhance website performance, providing faster loading times and a more reliable user experience. CDNs also mitigate traffic spikes and distribute load efficiently across their server network, ensuring dependable delivery even during high-traffic periods.

The next sections discuss the elements associated with a CDN.

What is an origin?

A CDN origin, also known as the origin server, is the original server where the web content is stored and hosted. It acts as the central source of the content that will be distributed across the CDN. When a user requests a specific file or resource, the CDN retrieves it from the origin server and caches it on its edge servers for faster delivery to users in different geographical locations. The CDN origin plays a crucial role in ensuring that the content remains up-to-date and consistent across the CDN network.

What are edge servers?

A CDN edge, or simply "edge," is a critical component of CDN infrastructure: a network of strategically distributed servers located in various geographical locations, often closer to end-users or clients than the origin server. These servers are positioned at the "edge" of the internet, hence the name. The primary purpose of CDN edge servers is to improve the delivery speed, reliability, and performance of web content. Here is how CDN edges work:

  • Caching: CDN edge servers cache (store) copies of static and dynamic content from the origin server. This caching reduces the load on the origin server and minimizes the need for repeated requests to the same content, resulting in faster page loading times.
  • Proximity: Edge servers are strategically located in data centers or points of presence (PoPs) around the world. They are positioned closer to end-users, reducing the physical distance data needs to travel. This proximity minimizes latency, which is crucial for delivering content quickly and responsively.

Access Control Lists (ACLs)

CDN ACLs enable content owners to restrict access to their content, protect sensitive data, prevent unauthorized downloads, and implement various security measures. Some common use cases for CDN ACLs include:

  • Restricting access based on IP addresses: Content owners can allow or deny access to specific IP addresses or ranges, controlling which users or locations can access their content.
  • Authentication and authorization: CDNs can integrate with various authentication mechanisms, such as API keys, OAuth tokens, or custom tokens, to ensure that only authorized users or applications can access certain resources.
  • Geolocation-based access: Content owners can use ACLs to serve different content to users based on their geographic location, tailoring the experience for specific regions.
  • Rate limiting: ACLs can be used to restrict the number of requests a user or IP address can make within a certain period, protecting the CDN and the origin server from excessive traffic or abuse.
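The checks above can be sketched as a single request filter. This is a minimal illustration, not any particular CDN's API: the allowlisted network, token set, and rate limits are hypothetical values chosen for the example.

```python
import ipaddress
import time

def check_acl(client_ip, token, request_times, *,
              allowed_networks=("203.0.113.0/24",),      # hypothetical allowlist
              valid_tokens=frozenset({"secret-token"}),  # hypothetical token store
              max_requests=100, window_seconds=60):
    """Return True only if the request passes IP, token, and rate-limit checks."""
    ip = ipaddress.ip_address(client_ip)
    # 1. IP-based restriction: the client must fall inside a permitted range.
    if not any(ip in ipaddress.ip_network(net) for net in allowed_networks):
        return False
    # 2. Authentication/authorization: a recognized token must accompany the request.
    if token not in valid_tokens:
        return False
    # 3. Rate limiting: count this client's prior requests inside the sliding window.
    now = time.time()
    recent = [t for t in request_times if now - t < window_seconds]
    return len(recent) < max_requests

# Usage: an allowlisted IP with a valid token and no prior requests is admitted.
print(check_acl("203.0.113.7", "secret-token", []))   # True
print(check_acl("198.51.100.9", "secret-token", []))  # False (IP not allowlisted)
```

In a real CDN these rules are evaluated at the edge, before the request can reach the cache or the origin.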


Caching

Caching is a fundamental and critical aspect of CDNs. It involves the temporary storage of website content, such as images, videos, scripts, and other assets, on CDN edge servers distributed across various geographical locations. The main purpose of caching in CDNs is to reduce latency and improve website performance by delivering content to users from servers closer to their physical location.

When a user requests a resource from a website served through a CDN, the CDN edge server first checks if it has a cached copy of that resource. If the resource is present in the cache and is still valid (not expired), the CDN serves it directly to the user without needing to access the origin server. This process significantly reduces the time it takes to deliver the content, resulting in faster page load times and a more responsive user experience.
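The lookup just described, including the freshness check, can be sketched as a small edge cache with a time-to-live (TTL). This is a simplified model, not a production cache: real CDNs honor HTTP `Cache-Control` headers rather than a single fixed TTL.

```python
import time

class EdgeCache:
    """A single edge server's cache: URL -> (content, expiry timestamp)."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry is not None:
            content, expires_at = entry
            if time.time() < expires_at:
                return content, "HIT"        # fresh copy served from the edge
            del self.store[url]              # expired: evict, then refetch below
        content = fetch_from_origin(url)     # cache miss: go back to the origin
        self.store[url] = (content, time.time() + self.ttl)
        return content, "MISS"

# Usage: the first request misses and hits the origin; the repeat is a cache hit.
cache = EdgeCache(ttl_seconds=60)
origin = lambda url: f"<body of {url}>"      # stand-in for the origin server
print(cache.get("/logo.png", origin))        # ('<body of /logo.png>', 'MISS')
print(cache.get("/logo.png", origin))        # ('<body of /logo.png>', 'HIT')
```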

Key benefits of caching in CDNs:

  • Faster content delivery: Caching allows CDNs to serve content from edge servers geographically closer to the user, reducing the round-trip time and minimizing network latency.
  • Offloading origin server: By serving content from edge caches, CDNs reduce the load on the origin server, preventing it from getting overwhelmed during traffic spikes.
  • Scalability: Caching enables CDNs to efficiently handle high volumes of traffic and distribute the load across their server network, ensuring consistent performance even during peak usage.
  • Cost-effectiveness: With content cached and served from edge locations, CDNs can minimize the need for additional infrastructure and bandwidth resources.


Purging

Purging in a CDN refers to the process of intentionally and selectively removing cached content from the CDN's edge servers. When content needs to be updated, removed, or when there are changes to the website's assets, purging ensures that the CDN serves the most up-to-date version of the content to users. Purging in CDNs is crucial for maintaining content accuracy, especially for frequently changing resources like dynamic content, real-time data, or user-generated content. It helps to avoid serving stale or obsolete content to users and ensures that the CDN cache remains synchronized with the origin server's content.
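Purges are typically triggered through the CDN provider's API. The sketch below only constructs such a request; the endpoint, `PURGE` method, and bearer token are hypothetical stand-ins, since each CDN defines its own purge API (some use `POST` to a purge endpoint instead).

```python
import urllib.request

def build_purge_request(cdn_api="https://cdn.example.com/purge",  # hypothetical endpoint
                        path="/assets/app.js",
                        api_key="YOUR_API_KEY"):
    """Construct (but do not send) a request asking the CDN to evict one cached path."""
    return urllib.request.Request(
        url=cdn_api,
        data=path.encode(),                              # which cached object to evict
        method="PURGE",                                  # provider-specific; often POST
        headers={"Authorization": f"Bearer {api_key}"},  # purges must be authenticated
    )

# Usage: inspect the request that would be sent after deploying a new app.js.
req = build_purge_request(path="/assets/app.js")
print(req.method, req.full_url)   # PURGE https://cdn.example.com/purge
```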

Edge-cache warming

CDN edge cache warming, also known as cache preloading or cache priming, is a technique used by CDNs to proactively load or populate their edge server caches with content before actual user requests are made. The goal of cache warming is to improve the efficiency and responsiveness of the CDN by reducing cache misses when users start accessing the website or web application.

By preloading content into the cache, cache warming helps to ensure that a higher proportion of user requests can be served directly from the CDN's edge servers, minimizing the need to fetch content from the origin server. This results in reduced latency, faster page load times, and a more seamless user experience.
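Cache warming amounts to requesting each asset once before real users do. A minimal sketch, with the asset list and fetch function as illustrative stand-ins:

```python
def warm_cache(urls, fetch_from_origin):
    """Prime an edge cache: fetch each URL once so later user requests hit the cache."""
    edge_cache = {}
    for url in urls:
        if url not in edge_cache:                 # skip anything already cached
            edge_cache[url] = fetch_from_origin(url)
    return edge_cache

# Usage: preload a site's heaviest assets ahead of an expected traffic spike.
assets = ["/index.html", "/app.js", "/hero.jpg"]
cache = warm_cache(assets, lambda url: f"<contents of {url}>")
print(sorted(cache))   # ['/app.js', '/hero.jpg', '/index.html']
```

In practice, warming is often driven by a sitemap or access logs, and the fetches are issued against each edge location rather than a single cache.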

Hit-miss ratio

CDN hit-miss ratio, also known as cache hit ratio, is a metric used to measure the efficiency and effectiveness of a CDN in serving content from its cache. It indicates the proportion of user requests that are successfully served from the CDN's cache (cache hits) compared to the requests that need to be fetched from the origin server (cache misses).

A higher hit-miss ratio indicates that a larger proportion of user requests are being fulfilled from the cache, leading to reduced load on the origin server and improved content delivery performance. On the other hand, a lower hit-miss ratio suggests that more requests are resulting in cache misses, which may lead to increased latency and higher resource utilization on the origin server.
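The metric itself is simple arithmetic: hits divided by total requests.

```python
def cache_hit_ratio(hits, misses):
    """Fraction of requests served from the edge cache, between 0.0 and 1.0."""
    total = hits + misses
    if total == 0:
        raise ValueError("no requests recorded")
    return hits / total

# Usage: 9,500 hits and 500 misses over some window is a 95% hit ratio.
print(cache_hit_ratio(9500, 500))   # 0.95
```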