Why should you have a highly distributed platform?

Since the dawn of the CDN market, delivering content to users from nearby servers has been the key to providing the best possible performance. Close proximity to the end user – in both the geographical and network-topological senses – minimizes latency and avoids congested peering points, Internet routing problems, and other middle-mile bottlenecks. Consequently, a highly distributed platform has always been the single most important architectural attribute for CDN performance, scale, and reliability.


This holds true now more than ever, as users, devices, and networks become more distributed and content becomes more dynamic. Many so-called “next-generation CDN providers” fail to meet the baseline requirement of a highly distributed architecture, instead deploying a centralized platform with perhaps only 10-30 points of presence (POPs) from which to deliver content. This is largely because a highly distributed platform takes a tremendous investment of time, expertise, and capital to deploy – it requires relationships with thousands of network providers as well as highly sophisticated software to run the platform efficiently. Unfortunately, centralized architectures are a subpar shortcut: their performance and capabilities simply do not measure up.


Better Caching Performance


A highly distributed CDN architecture is critical to getting as close as possible to as many end users as possible. Today, no single network carries more than 6% of (non-cellular) Internet access traffic, and the top 30 networks combined account for only 46%; it takes more than 600 networks to cover 90% of Internet access traffic. This means even the largest centralized CDNs, with several dozen POPs around the world, are still not within a single network hop of the majority of Internet users. Their “edge servers” actually sit in the centralized backbones of the Internet, not at the Internet’s edge; as a result, delivering content to users often requires crossing congested peering points and relying on BGP (Border Gateway Protocol) routing. Because BGP is not a performance-based protocol, it does not always provide the lowest-latency routes, nor can it respond quickly to outages, errors, or congestion.

Physical distance to end users matters as well, since the farther data has to travel, the more latency is introduced. Because of the way TCP is affected by latency and packet loss – with its connection setup overhead, slow start, and lost-packet retransmission – latency can have an unexpectedly severe effect on performance, particularly for “chatty” web applications and high-quality video. Thus, a highly distributed platform, along with the ability to accurately map users to nearby servers, is essential to achieving high levels of performance.
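
To see why latency compounds so sharply under TCP slow start, consider the rough back-of-the-envelope sketch below. It is illustrative only: the initial congestion window of 10 segments, the 1,460-byte segment size, the lossless network, and the one-round-trip connection handshake are all assumptions made for this example, not figures from the article.

```python
# Rough illustration of how round-trip time (RTT) dominates small-object
# transfer time under TCP slow start. Assumptions (not from the article):
# an initial congestion window of 10 segments, 1460-byte segments,
# no packet loss, and a fresh connection (one RTT for the handshake, no TLS).

SEGMENT_BYTES = 1460
INIT_CWND = 10  # initial congestion window, in segments (assumed)

def transfer_time_ms(object_bytes: int, rtt_ms: float) -> float:
    """Estimate the time to deliver an object over a new TCP connection."""
    rounds = 1            # the three-way handshake costs one RTT before data flows
    delivered = 0
    cwnd = INIT_CWND
    while delivered < object_bytes:
        rounds += 1                       # one RTT per congestion-window burst
        delivered += cwnd * SEGMENT_BYTES
        cwnd *= 2                         # slow start doubles the window each RTT
    return rounds * rtt_ms

# A 100 KB response: latency, not bandwidth, sets the floor.
for rtt in (10, 50, 100):                 # e.g. nearby edge server vs. distant POP
    print(f"RTT {rtt:>3} ms -> ~{transfer_time_ms(100_000, rtt):.0f} ms")
```

Under these assumptions, a 100 KB object needs the same four round trips regardless of distance, so its delivery time scales almost linearly with RTT – roughly 40 ms at a 10 ms RTT versus roughly 400 ms at a 100 ms RTT – which is why moving servers closer to users pays off so directly.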


Better Dynamic Content Performance


The performance benefits of a highly distributed architecture hold not only for cacheable content that the CDN can deliver directly but also for uncacheable content that requires a full round trip back to the origin. In fact, a highly distributed platform is also essential for accelerating dynamic content. CDNs can speed server-to-server communications within their platforms using various routing and transport-protocol enhancements – optimizing TCP parameters, multiplexing connections, or routing around BGP inefficiencies, for example. These optimizations only apply within the CDN platform, however, not to the data as it travels between the CDN and the end user, so having servers close to end users remains critical.
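
To make the idea of routing around BGP inefficiencies concrete, here is a minimal sketch of latency-based overlay path selection: compare the default (direct) route with one-hop routes relayed through intermediate CDN nodes and choose whichever is currently fastest. The function name, the relay names, and every latency figure below are hypothetical, and real systems also weigh packet loss, throughput, and load rather than latency alone.

```python
# Illustrative sketch of latency-based overlay routing: compare the direct
# (BGP-chosen) path to the origin with one-hop paths relayed through
# intermediate CDN nodes, and pick whichever is currently fastest.
# All names and numbers below are hypothetical placeholders.

from typing import Dict, Tuple

def best_path(direct_ms: float,
              to_relay_ms: Dict[str, float],
              relay_to_origin_ms: Dict[str, float]) -> Tuple[str, float]:
    """Return the lowest-latency path: 'direct' or 'via <relay>'."""
    best_label, best_latency = "direct", direct_ms
    for relay, first_leg in to_relay_ms.items():
        total = first_leg + relay_to_origin_ms[relay]
        if total < best_latency:
            best_label, best_latency = f"via {relay}", total
    return best_label, best_latency

# Hypothetical measurements from an edge server to an origin.
label, latency = best_path(
    direct_ms=180.0,                               # congested default route
    to_relay_ms={"relay-a": 40.0, "relay-b": 55.0},
    relay_to_origin_ms={"relay-a": 95.0, "relay-b": 150.0},
)
print(label, latency)   # -> via relay-a 135.0
```

In practice such measurements would be refreshed continuously so the platform can shift traffic as congestion or outages appear – exactly the responsiveness that BGP alone lacks.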

The importance of this is revealed when we examine real-world last-mile performance – in contrast to the backbone-centric measurements that third-party performance testing platforms often employ. The figure below shows North American download times for a dynamic (uncacheable) page served by Akamai compared with the same page served by a competitor with POPs in fewer than 10 North American cities. Akamai showed a modest 6% edge over the competitor when looking only at testing agents deployed within backbone networks. But when the measurements were broadened to include agents distributed across many networks – where users actually are – Akamai’s advantage grew to 63%, reducing page load time from over 7 seconds to under 4.5 seconds. Moreover, these results are for North America only – a relatively well-connected region. Internationally, we would typically see an even greater performance differential between a centralized platform and a highly distributed one.
