Remember the Information Superhighway? It’s what some folks used to call the Internet back in the 1990s. Those of us lucky enough to have access from home were using dial-up modems that were over 1,000 times slower than the cable modem I’m using right now. Nothing very Super about that, and the Internet has never remotely resembled a Highway. For example, suppose you wanted to drive from Chicago to New York as quickly as possible. Would you ever go via Texas? Or suppose you only wanted to travel from one part of Singapore to another. Would you first get on a boat to California, only to immediately sail back to a different part of the city? No, you wouldn’t, but this sort of thing happens all the time with your Internet traffic on the Information Superhighway we use today. (See this presentation or this blog for a few more examples.)
The highway analogy immediately breaks down because the Internet consists entirely of private roads (i.e., cables). Yes, there are cables between Chicago and New York, but you might not be able to use them if your provider and its various partners don’t have the appropriate access. So maybe you take the scenic route via Texas or Los Angeles. Yet unless you happen to live in a country that restricts Internet access, you can pretty much get to anywhere you want to go, which is a very good thing. However, your traffic might flow via very long and circuitous paths, a fact that has serious implications for both performance and security. In this blog, we focus on the performance issues around having your traffic travel halfway around the world just to get down the street. In a subsequent post, we’ll look at the issue of security and how errant paths can be exploited.
Why does “local” Internet traffic bounce around the world?
Keep in mind that moving traffic around on the Internet is a business, and an extremely competitive low-margin one at that. In this environment, Internet service providers are always seeking to reduce their costs, which can mean trying to keep traffic on their own networks (rather than handing it off to another provider) or avoiding some other provider entirely (by not interconnecting with them, and so avoiding the need to negotiate a business relationship). It’s all about money, and that doesn’t necessarily translate into the best user experience. In addition, configuring routing on the Internet is a complex, error-prone process. Since ISPs are not necessarily paying attention to performance, suboptimal paths can persist for months or years before anyone notices.
However, content providers do care about performance and will often host their content in data centers around the world in an attempt to reduce latencies to their customers. One common technique is anycast, in which the same IP addresses (prefixes) are hosted and announced from multiple locations, with the hope that users end up at the closest data center. But it really is just a hope. Despite the hype around this and other approaches, all they really do is improve your odds of getting better performance by increasing the number of choices. Whether you actually do depends on your provider, the providers used by the data centers involved, the relationships between these providers (if any), and how they have chosen to route their traffic.
In other words, it’s the paths between you and the content you are trying to reach that matter in the end, both for performance and security.
So my traffic can really leave the country when the content is hosted down the street?
Absolutely. Let’s look at a couple of examples. Readers of this blog are certainly familiar with DNS and the importance of the root name servers. While there are 13 IP addresses associated with the root name servers, they are actually located in hundreds of locations around the world. The most prolific is the L-root run by ICANN, with 146 locations, two of which are in Los Angeles. The following graphic shows recent paths from each of our Los Angeles data centers to the L-root. From one LA data center, we get to an L-root instance in LA in under 1ms. From the other, queries to the L-root take over 150ms and are answered from Dundee, Scotland!
What is going on here? In the first case, our local provider elects to hand off our request to another carrier, one that serves an L-root location in Los Angeles. DNS requests via this path are handled quickly and efficiently. In the second case, our local provider (a different one) directly services an L-root location, but this one happens to be Dundee, Scotland. Rather than pass our traffic to a competitor, this carrier keeps our request local to its own network. In general, this is not a bad idea as interconnects between providers can often be congested. However, in this case, our DNS queries are serviced more than 8,000 kilometers away, introducing needless delay. If the Internet had been designed for performance, this would never happen.
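You can see this effect yourself: most root servers reveal which anycast instance answered via the special CHAOS-class TXT record hostname.bind (the same trick `dig` uses). Below is a minimal Python sketch, standard library only, that times such a query against L-root (199.7.83.42). The response parsing is deliberately naive (single answer, compressed name) and a production tool would do far more; treat it as an illustration, not a reference resolver.

```python
import socket
import struct
import time

def build_chaos_txt_query(name: str, query_id: int = 0x1234) -> bytes:
    """Build a minimal DNS query for a TXT record in the CHAOS class."""
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    # QTYPE=16 (TXT), QCLASS=3 (CHAOS)
    return header + qname + struct.pack(">HH", 16, 3)

def parse_first_txt(query: bytes, resp: bytes) -> str:
    """Naive parse: assumes the question section is echoed verbatim and the
    first answer uses a compressed (2-byte pointer) name."""
    offset = len(query)       # skip echoed header + question
    offset += 2 + 2 + 2 + 4   # name pointer, TYPE, CLASS, TTL
    (rdlength,) = struct.unpack_from(">H", resp, offset)
    offset += 2
    txt_len = resp[offset]    # first character-string inside RDATA
    return resp[offset + 1 : offset + 1 + txt_len].decode("ascii", "replace")

def probe_instance(server: str, timeout: float = 2.0) -> tuple[str, float]:
    """Return (instance identity, round-trip time in ms) for one server."""
    query = build_chaos_txt_query("hostname.bind")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        start = time.monotonic()
        sock.sendto(query, (server, 53))
        resp, _ = sock.recvfrom(512)
        rtt_ms = (time.monotonic() - start) * 1000.0
    return parse_first_txt(query, resp), rtt_ms

# Example (requires network):
# instance, rtt = probe_instance("199.7.83.42")  # L-root
# print(f"answered by {instance} in {rtt:.1f} ms")
```

Run from two networks in the same city and you may well see two different instance names and wildly different round-trip times, exactly as in the graphic above.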
Since DNS performance is critical to user experience, let’s consider one more example. This time we’ll look at Google’s public DNS servers from South America, globally accessible at 8.8.8.8 and 8.8.4.4. While Google claims to have a strong presence in the region, starting in mid-September we noticed quite poor performance to both IP addresses from a number of our locations on the continent. We traced that back to an internal routing change at Google that appears to now be redirecting South American DNS queries to the US, at least from our locations and via our providers. Obviously, this is less than ideal from a performance standpoint, as shown below.
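Resolver latency like this is easy to spot yourself. Here is a minimal sketch (standard library only) that times a single A-record lookup against each of Google’s public resolvers; a real measurement would repeat the query many times and take the minimum, since any one sample can be inflated by transient congestion.

```python
import socket
import struct
import time

def build_a_query(hostname: str, query_id: int = 0x4242) -> bytes:
    """Build a minimal DNS query for an A record (IN class)."""
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def time_query(resolver: str, hostname: str, timeout: float = 2.0) -> float:
    """Return the round-trip time in ms for one UDP query to a resolver."""
    query = build_a_query(hostname)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        start = time.monotonic()
        sock.sendto(query, (resolver, 53))
        sock.recvfrom(512)
        return (time.monotonic() - start) * 1000.0

# Example (requires network):
# for resolver in ("8.8.8.8", "8.8.4.4"):
#     print(resolver, f"{time_query(resolver, 'example.com'):.1f} ms")
```

If the number you get back is an order of magnitude larger than a ping to your own ISP’s resolver, your queries are probably taking one of those scenic routes.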
So can anyone host content effectively in this environment?
Yes, with the correct tools. The first thing to realize is that no one — not even Google — can afford to have data centers everywhere, buying Internet transit from everyone. If you are hosting content on the Internet, you have to first decide on your target markets and then locate the major providers in those markets. If you already have customers, your task is somewhat easier as you can mine your own logs for their IP addresses and then figure out where they are. With knowledge of your existing or potential customers, the structure of the Internet and the business relationships that exist between providers, you can plan and implement your hosting strategy. But as the previous example shows, the Internet is a very dynamic place and there is no substitute for regular monitoring to ensure that your services are meeting your performance expectations from your target markets.
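The “mine your own logs” step above can be sketched in a few lines. The snippet below extracts client IPs from (hypothetical) Apache/Nginx-style access-log lines and tallies them by /24 prefix; in practice you would then map each prefix to a region and provider with a GeoIP or BGP database rather than stopping at the prefix.

```python
import re
from collections import Counter

# Matches a dotted-quad IPv4 address at the start of a log line
# (the usual position in common/combined log formats).
IP_RE = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})\b")

def client_prefixes(log_lines):
    """Tally client IPs by /24 prefix -- a crude first cut at locating
    your customers before mapping prefixes to regions and providers."""
    counts = Counter()
    for line in log_lines:
        m = IP_RE.match(line)
        if m:
            net = m.group(1).rsplit(".", 1)[0]  # drop the last octet
            counts[net + ".0/24"] += 1
    return counts

# Example with made-up log lines (documentation addresses):
sample = [
    '203.0.113.7 - - [12/Oct/2013:06:25:24 +0000] "GET / HTTP/1.1" 200 512',
    '203.0.113.9 - - [12/Oct/2013:06:25:25 +0000] "GET /a HTTP/1.1" 200 99',
    '198.51.100.4 - - [12/Oct/2013:06:25:26 +0000] "GET /b HTTP/1.1" 404 0',
]
top = client_prefixes(sample).most_common()
# top == [('203.0.113.0/24', 2), ('198.51.100.0/24', 1)]
```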
As a final example, let’s look at global latency measurements into two networks, one hosting a Google public DNS server (8.8.8.0/24) and the other hosting various subdomains of Microsoft’s Bing (204.79.197.0/24). In the following graphic (click on maps to enlarge), we display the minimum latency observed from over a hundred Renesys data centers to these networks over one recent 24-hour period. As would be expected, both Google and Microsoft perform quite well from Europe and the US, but there are stark differences in other parts of the world. From Chennai, India, we recorded a minimum latency of 4ms for Google, whereas Microsoft clocked in at 290ms. The situation was reversed in São Paulo, Brazil, with Google at 200ms and Microsoft at 2ms.
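The minimum-latency figure in that graphic is a simple reduction over raw samples. A sketch of the aggregation, with illustrative numbers loosely echoing the text rather than our actual measurements:

```python
from collections import defaultdict

def min_latency(samples):
    """Reduce raw RTT samples to the minimum observed per (location, network).
    Taking the minimum filters out transient congestion and queuing delay,
    leaving something close to the propagation delay of the best path."""
    best = defaultdict(lambda: float("inf"))
    for location, network, rtt_ms in samples:
        key = (location, network)
        best[key] = min(best[key], rtt_ms)
    return dict(best)

# Illustrative (location, network, rtt_ms) samples:
samples = [
    ("Chennai", "google-dns", 4.2), ("Chennai", "google-dns", 9.8),
    ("Chennai", "bing", 290.0), ("Chennai", "bing", 311.5),
    ("Sao Paulo", "google-dns", 200.1), ("Sao Paulo", "bing", 2.3),
]
result = min_latency(samples)
# result[("Chennai", "google-dns")] == 4.2
```

Using the minimum rather than the mean is a deliberate choice here: we want to characterize the path, not the load on it at any given moment.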
Keep in mind that these measurements were taken from Renesys data centers in these cities. As with the L-root example, measurements taken from another data center in the same city using different providers could yield very different results. The point of this example is simply to show that even the largest players in the market can have poor performance in some locations via some providers. No one can be everywhere. However, if you identify your target market (or have us assist you in the process), we can certainly help you monitor performance, as well as make informed provider and routing choices. Better yet, we can even put our own probes into your network for additional insights. This is part of what makes us a Gartner 2013 Cool Vendor.
About the Author
Earl leads a peerless team of data scientists who are committed to analyzing Dyn’s vast Internet Performance data resources and applying their expertise to continually improve upon Dyn’s products and services.