The latest NSA revelations appear to have done something quite interesting. More and more people are looking at the level of surveillance, and they are beginning to wonder how it is possible for one government organisation to have such access to information. If the Internet is open and free, how is it possible for the NSA to have this level of control? Whatever happened to our dream of an open Internet?

Back in December 2012, we were bombarded with a campaign to save the free Internet. We were told that the ITU, the UN’s telecommunications body, was planning to take control of the Web during an intergovernmental meeting. A large coalition of private enterprises, activists and some governments (including the US government) came out in strong opposition to this move.

The story went something like this: the Internet is free and nobody owns it. Repressive regimes (Russia, China and Saudi Arabia were often mentioned) want to take control to make it easier for governments to keep an eye on their citizens. Any change to the current open state of affairs is bad.

The ITU-12 conference came and went, and it became evident that as far as evil takeovers went, this one had been a rather poorly organised one. Nothing changed. The Internet had been saved. As you were.

One of the things that always struck me about this campaign was the assumption that the Internet is free. While it is true that in theory anyone can create their own network and join the Internet, the idea that this makes the Web a free and open space seems to be an illusion.

The problem is that we tend to think of Internet governance in the wrong terms. We concentrate on the existing multi-stakeholder institutions with decision-making power over domain names and protocols as the governing bodies that exercise some level of control over the Internet. But we seldom think of the reality: the Web is more centralised than we would like to believe, and a few countries and a handful of private companies hold a disproportionate amount of power over the existing architecture. This is where the real power lies.

The Internet as we know it is a network of networks that relies on standard communication protocols and a shared backbone infrastructure to get information from one point to another. ISPs, educational institutions, workplaces, households, governments and organisations each have their own network of computers that can communicate with one another. For such a network to talk to the world, it needs a connecting infrastructure into the wider Internet in the shape of network access points. So the real Internet is the basic infrastructure that allows one network to reach the external world: the domain name system, Internet routers, the optic-cable backbone, and Internet exchange points (IXPs). At home, we pay an ISP for Internet access. The ISP then has to run its servers, but it also has to pay the companies that provide bandwidth, most commonly by connecting to one of the world's 188 IXPs. Most of the world's traffic passes through these points, and it is here that an increasing level of centralisation is possible.
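The "network of networks" idea above can be sketched as a small toy model. The networks and exchange points below are invented for illustration; the point is simply that any path from one edge network to another must cross shared infrastructure such as IXPs and the backbone, which is where the centralisation lies.

```python
from collections import deque

# Toy model: each node is a network (ISP, university, IXP, backbone)
# and edges are links between them. All names are hypothetical.
links = {
    "home-isp": ["ixp-london"],
    "university": ["ixp-london"],
    "ixp-london": ["home-isp", "university", "backbone"],
    "backbone": ["ixp-london", "ixp-frankfurt"],
    "ixp-frankfurt": ["backbone", "foreign-isp"],
    "foreign-isp": ["ixp-frankfurt"],
}

def route(src, dst):
    """Breadth-first search for the shortest path a packet could take."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("home-isp", "foreign-isp"))
# -> ['home-isp', 'ixp-london', 'backbone', 'ixp-frankfurt', 'foreign-isp']
```

Note that in this model there is no way for the home ISP to reach the foreign ISP without transiting the exchange points: whoever sits at those points sees the traffic.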

The distributed and open Internet is a worthy cause to support. Information wants to be free, but somebody has to pay for it. So besides the fear of governments prevalent in online communities, we need to take a hard look at the way the Internet has become a sizeable business, and at how a few companies command a disproportionate amount of power. These companies no longer answer to self-imposed promises not to be evil; their reason for existing is to make a profit. The NSA revelations have uncovered a public-private conglomerate of gigantic proportions, with the US government and many US-based companies at the centre. Each new revelation has uncovered layers of collaboration that many suspected, but the reality seems to surpass even the worst conspiracy theories.

The PRISM program unveiled collaboration at the service level. Most of the largest Internet services are based in the United States, and PRISM exploits that fact by co-opting these companies into allowing surveillance of their users. One PRISM slide boasts that most communications pass through the US, while another chronicles the dates on which companies like Microsoft, Google, Yahoo, Facebook and Skype were added to the program.

However, to me the most surprising (and chilling) revelation of all is XKeyscore, which implies a level of collaboration at the basic infrastructure level that few suspected. XKeyscore is an NSA program that allows intelligence agents to retrieve metadata and content about anything a user does online simply by providing an email address. Unlike PRISM, which relies on the service providers, there are strong implications in the XKeyscore presentation that lead me to believe the US intelligence services are able to snoop on Internet traffic almost at the basic level. First, there is the fact that XKeyscore is not centralised: it consists of some 500 Linux servers located around the world.

Then there is the fact that XKeyscore can be used to obtain an amount of data that cannot come from service collaborations. XKeyscore boasts that it can give an analyst access to “nearly everything a user does on the internet”. Moreover, it can provide information at a national level that implies deep connections: for example, “Show me all the encrypted documents in Iran”, or “Give me all the VPN startups in country X, and give me the data so I can decrypt and discover the users”. These claims are only possible if the NSA has access to most communications going through the Web, which can only happen if it has taps at the highest level. Maybe I am missing something obvious, but I cannot think of any way this is possible other than by having access to all traffic.
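The idea of selector-based retrieval described in the slides can be illustrated with a trivial sketch. Nothing here reflects the actual XKeyscore system; the records, field names and `query` function are all invented to show what "give me everything tied to one email address" means once you sit on the metadata of whole sessions.

```python
# Hypothetical session metadata records, purely for illustration.
sessions = [
    {"selector": "alice@example.com", "kind": "http",  "host": "news.example.org"},
    {"selector": "bob@example.com",   "kind": "vpn",   "host": "10.0.0.1"},
    {"selector": "alice@example.com", "kind": "email", "host": "mail.example.com"},
]

def query(records, selector):
    """Return every record associated with a single selector (an email address)."""
    return [r for r in records if r["selector"] == selector]

for r in query(sessions, "alice@example.com"):
    print(r["kind"], r["host"])
```

The uncomfortable part is not the filter, which is a one-liner, but the premise: that the full stream of session records exists somewhere to be filtered in the first place.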

There are a few possible ways in which the NSA is able to pull this off, and none is palatable:

1. Deals with backbone providers. The US has made a deal with companies providing backbone services that allows them to snoop on Internet traffic. This might be the easiest to achieve, but the hardest to manage, as it implies a level of international collaboration that seems difficult to encounter under normal circumstances.

2. Backbone taps. All Internet traffic goes through optic cables at one point or another, and the US has built taps into those cables. Very likely (I would once have said highly unlikely).

3. Hardware backdoors. Router and/or server manufacturers have built in hardware or vulnerabilities that allow intelligence agencies access to traffic. Somewhat likely, but problematic, as a lot of manufacturing takes place internationally.

4. Software/protocol backdoors. A strong indicator for this is the claim in the XKeyscore slides that the NSA is able to easily decrypt VPN traffic, which leads me to believe that they may have access to a key that unlocks virtual private communications. All you need to do is somehow rig standard-setting bodies by bribing and/or employing a few key people in the decision-making process. The problem with this theory is that many standards are open, so it could be easy for someone to find the backdoor.

5. Sham servers that hoover and store data packets. The Internet operates by redirecting data packets throughout the entire system, so if someone managed routers, servers or DNS servers in key locations, they could in theory begin to collect and store information: the decentralised nature of the Internet turned against it. I am not entirely sure this is even technically possible, but I find it the most likely explanation because it is the hypothesis that requires the fewest assumptions, and I am a strong believer in Occam's razor.
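The last hypothesis, a forwarding node that quietly keeps a copy of everything it passes along, is easy to sketch. This is a toy model, not a real protocol implementation: the `TapRouter` and `Endpoint` classes are invented, and the point is only that interception at a routing node is invisible to the endpoints, since delivery still succeeds.

```python
# Sketch of the "sham server" hypothesis: a node that forwards packets
# normally while retaining a covert copy of each one.
class TapRouter:
    def __init__(self):
        self.store = []  # the covert copy of everything seen

    def forward(self, packet, next_hop):
        self.store.append(dict(packet))  # hoover: record before forwarding
        next_hop.receive(packet)         # normal routing behaviour

class Endpoint:
    def __init__(self):
        self.inbox = []

    def receive(self, packet):
        self.inbox.append(packet)

router = TapRouter()
dst = Endpoint()
router.forward({"src": "a", "dst": "b", "payload": "hello"}, dst)
print(len(dst.inbox), len(router.store))  # prints "1 1": delivered, and copied
```

From the destination's point of view nothing is wrong: the packet arrives exactly as sent. Only the router's private store reveals the collection.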

One thing is clear: the Internet governance debate is over. The assumption that the Internet is free and open has not survived the NSA revelations. If a country is able to have such a level of access to every communication, then that country effectively controls the Internet. Everything else is just bickering over minor details.