As end-users, we expect to connect to an Internet Service Provider that takes our packets and delivers them to destinations all around the world, and carries packets from all those places back to us. But how does this work when the destinations we communicate with are served by different ISPs?
Smaller ISPs are customers of larger ISPs, just like end-users buy service from an ISP. But a dozen or so of the world's largest ISPs handle all of the traffic towards customers of other ISPs exclusively through peering; these are called the tier-1 networks. Smaller ISPs also interconnect and exchange traffic with each other without money changing hands, but the interesting thing about the tier-1s is that they by definition depend on peering to handle all of their off-net traffic, and they peer in many locations. We’ve talked about the how and why of peering before. Today, we’ll look at where networks interconnect.
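The "by definition" part can be made concrete with a toy model. In the sketch below, the topology and network names are invented for illustration; the rule it encodes is the standard one that peers announce only their own customers' routes to each other. A tier-1, having no transit provider, must therefore reach every destination through its own customer cone plus those of its peers:

```python
# Toy model of customer/provider/peer relationships. All names and
# relationships here are invented; "customers" links point downward.
customers = {
    "Tier1-A": {"ISP-C"},
    "Tier1-B": {"ISP-D"},
    "ISP-C": {"Edge-E"},
    "ISP-D": set(),
    "Edge-E": set(),
}
peers = {
    "Tier1-A": {"Tier1-B"},
    "Tier1-B": {"Tier1-A"},
    "ISP-C": {"ISP-D"},  # smaller ISPs peer too
    "ISP-D": {"ISP-C"},
    "Edge-E": set(),
}

def customer_cone(net):
    """Everything reachable by only following customer links downward."""
    cone = {net}
    for c in customers[net]:
        cone |= customer_cone(c)
    return cone

def reachable_without_transit(net):
    """What a network reaches from peering alone: its own customer cone
    plus its peers' customer cones (peers only announce customer routes)."""
    reach = customer_cone(net)
    for p in peers[net]:
        reach |= customer_cone(p)
    return reach

all_nets = set(customers)
print(reachable_without_transit("Tier1-A") == all_nets)  # True: full reach via peering
print(reachable_without_transit("ISP-C") == all_nets)    # False: still needs a transit provider
```

The tier-1s reach the whole (toy) internet without buying transit from anyone; the smaller ISP cannot, even though it peers, which is exactly why it remains someone's customer.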
First a little history. In the beginning, there was the ARPANET, built for the US Department of Defense. The ARPANET functioned as the “backbone” of the internet: you simply connected to the ARPANET and the ARPANET would transport your traffic to other connected networks. This function was taken over by the National Science Foundation’s NSFNET Backbone in the late 1980s, with Federal Internet Exchanges on the East and West Coasts to interconnect the ARPANET and the NSFNET Backbone during the transition. The NSFNET Backbone served research and government users well, but as it didn’t allow for “for-profit activities”, its use by commercial ISPs was somewhat problematic. Also, traffic kept growing. By the mid-1990s the US government exited the internet backbone business and the NSFNET Backbone was decommissioned. It was replaced by commercial backbones, operated by large telecoms operators such as AT&T, MCI and Sprint. Four “network access points” (NAPs) were created in Virginia, New Jersey, Chicago and San Jose so these commercial networks could interconnect and exchange traffic.
When the NAPs were built around 1995, Ethernet speed was just about to make the jump to 100 Mbps. So the NAPs were limited to either 10 Mbps Ethernet or more complex and costly technologies such as ATM and FDDI. As a result, larger networks started to connect directly rather than depend on the NAPs to exchange all their peering traffic. Elsewhere in the world, traffic volumes were lower, and internet exchanges (IXes) were able to move to faster Ethernet variants as those became available. So in the US, private peering is more common, while elsewhere there is more of a tradition of peering over internet exchanges. However, as traffic volumes have increased, private peering over a direct connection between two networks has also become more common outside the US.
Today, the three largest internet exchanges in the world are DE-CIX in Frankfurt at almost 5 terabits per second of peak traffic, AMS-IX Amsterdam at more than 4 Tbps, and LINX London at 3 Tbps. Of the top 20 exchanges on the Wikipedia list of internet exchange points by size, 13 exclusively or mainly operate in Europe, two in Asia and one in Brazil. The remaining four have a strong presence in the US: Equinix, Coresite Any2, the Seattle Internet Exchange (SIX) and the New York International Internet Exchange (NYIIX). However, note that the Packet Clearing House ranking differs.
Equinix runs datacenters throughout the world and also offers internet exchange services in Ashburn, Atlanta, Chicago, Dallas, Los Angeles, Miami, New York City, Palo Alto, San Jose, Seattle, Toronto and Vienna in North America. In Europe, their IXes are in Geneva, Paris and Zurich; in Asia, in Hong Kong, Osaka, Singapore and Tokyo, as well as Melbourne and Sydney in Australia.
Coresite runs datacenters in the US; their Any2 IX service is available in each datacenter. Those are located in Los Angeles, the Silicon Valley area, Chicago, Denver, Boston, Miami, New York and the Northern Virginia/DC area. The SIX in Seattle reached 500 Gbps peak this month, and the NYIIX in the New York City area exchanges 350 Gbps at peak hours.
Direct, private peering between two networks can happen in any location where both of them are present. That will typically include many of the cities mentioned above, as internet exchanges tend to be created in places where many networks are present. This of course includes very large cities and capitals, but also larger coastal cities where submarine cables land.
For instance, many transatlantic fibers terminate in New York/New Jersey, the Washington, DC area, and to a lesser degree Boston. Transpacific cables favor Los Angeles, the Bay Area and Seattle. And Miami is the gateway to the Caribbean and South America. As a result, it’s not uncommon for large Asian networks that don’t really have a US presence to be present in LA or the Bay Area in order to connect to one or more internet exchanges and perform private peering. The same goes for Caribbean and South American networks in Miami, and European networks in New York or the DC area. So presence in one of these cities, or a similar one in another region, creates the opportunity to peer with some networks from other regions without having to invest in intercontinental fiber capacity yourself.
Most internet exchanges have a presence in multiple datacenters in their city or region. It’s also not uncommon for existing internet exchanges to expand to new cities. Typically, this means that a separate IX with its own infrastructure is set up in one or more datacenters in the new city. So if you connect to AMS-IX New York, you can peer with other AMS-IX New York members, not with AMS-IX Amsterdam members. However, there are internet exchanges that allow members in one city to connect to members in another city, such as Coresite Any2 in the US and the Neutral Internet Exchange (NL-ix) in Europe.
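In practice, working out where two networks could peer publicly boils down to intersecting their lists of IX presences (PeeringDB publishes this kind of data for real networks). A minimal sketch, with invented presence lists:

```python
# Invented example data; real presence lists can be looked up in PeeringDB.
# Presence is tracked per exchange *and* city, since AMS-IX New York and
# AMS-IX Amsterdam are separate peering fabrics.
ix_presence = {
    "NetA": {"AMS-IX Amsterdam", "DE-CIX Frankfurt", "Any2 Los Angeles"},
    "NetB": {"DE-CIX Frankfurt", "LINX London", "Any2 Los Angeles"},
    "NetC": {"SIX Seattle", "NYIIX New York"},
}

def common_ixes(a, b):
    """Candidate public-peering locations: exchanges where both have a port."""
    return ix_presence[a] & ix_presence[b]

print(sorted(common_ixes("NetA", "NetB")))  # ['Any2 Los Angeles', 'DE-CIX Frankfurt']
print(sorted(common_ixes("NetA", "NetC")))  # [] — no shared exchange, no public peering
```

The same intersection logic applies to private peering, just over lists of datacenters rather than exchanges.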
So if you want to increase your peering footprint, you have a lot of options to consider. Even without taking the quality of the local food into consideration.