What’s next for the internet? (Image: Jasper James/Getty)

Editorial: “The rise of the splinternet”

Openness is the internet’s great strength – and weakness. With powerful forces carving it up, is its golden age coming to an end?

How quickly the world changes. In August 1991 Tim Berners-Lee, a researcher at CERN near Geneva, Switzerland, posted a message to a discussion forum detailing a new method for sharing information between networked computers. To make his idea a reality, he also set up a server running on one of CERN’s computers. A mere two decades later, some 2 billion of us are hooked up to Berners-Lee’s invention, and a UN report last month declared access to it a fundamental human right. It is, of course, the World Wide Web.


Today, most of us take the internet for granted. But should we? The way it works and the way we engage with it are still defined by characteristics it has inherited from its easy-going early days, and this has left it under threat – from criminals, controlling authorities and commercial interests. “The days of the internet as we used to think of it are ending,” says Craig Labovitz of Arbor Networks, a security software company in Chelmsford, Massachusetts. Could we now be living in the golden age of the internet?

Though it was the World Wide Web that opened the internet to the world, the underlying structure dates back much further. That architecture took shape in the early 1960s, when the US air force asked Paul Baran at the RAND Corporation in Santa Monica, California, to come up with a military communications network that could withstand a nuclear attack. Baran proposed a network with no central hub; instead, information would pass from any point in the network to any other through many decentralised switching stations, or routers.

For Baran’s plan to work, every message would be broken up into small packets of digital information, each of which would be relayed from router to router, handed over like hot potatoes. Dividing the message into packets instead of sending it whole meant that communication links would only be busy during the instant they were called upon to carry those packets. The links could be shared from moment to moment. “That’s a big win in terms of efficiency,” says Jon Crowcroft, a computer scientist at the University of Cambridge. It also made the network fast and robust: there was no central gatekeeper or single point of failure. Destroy any one link, and the remaining routers could work out a new path between origin and destination.
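The robustness Baran was after can be sketched in a few lines of Python. This is a toy illustration with made-up router names, not any real protocol: packets hop from router to router along the shortest available path, and when a link is destroyed, the remaining routers simply find another way through.

```python
from collections import deque

# A toy mesh of routers, as in Baran's design: no central hub,
# and more than one route between most pairs of nodes.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def find_path(links, origin, destination):
    """Breadth-first search: hand the packet from router to router
    along the fewest possible hops."""
    queue = deque([[origin]])
    seen = {origin}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # network partitioned: no route exists

def destroy_link(links, a, b):
    """Simulate an attack taking out a single link."""
    links[a].remove(b)
    links[b].remove(a)

print(find_path(links, "A", "E"))   # ['A', 'B', 'D', 'E']
destroy_link(links, "B", "D")
print(find_path(links, "A", "E"))   # rerouted: ['A', 'C', 'D', 'E']
```

Destroying the B–D link does not silence A and E: the surviving routers route around the damage, which is exactly the property that made Baran's design attractive to the military.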

Baran’s work paved the way for the Advanced Research Projects Agency Network (see “Internet evolution”), which then led to the internet and the “anything goes” culture that remains its signature. From then on, the internet was open to anyone who wanted to join the party, from individual users to entire local networks. “There was a level of trust that worked in the early days,” says Crowcroft. No one particularly cared who anyone was, and if you wanted to remain anonymous, you could. “We just connected and assumed everyone else was a nice guy.” Even the hackers who almost immediately began to play with the new network’s potential for mischief were largely harmless, showing up security weaknesses for the sheer technical joy of it.

These basic ingredients – openness, trust and decentralisation – were baked into the internet at its inception. It was these qualities, which allowed diverse groups of people from far-flung corners of the world to connect, experiment and invent, that were arguably the key elements of the explosive technological growth of the past two decades. That culture gave us the likes of Skype, Google, YouTube, Facebook and Twitter.

The internet’s decentralised structure also makes it difficult for even the most controlling regime to seal off its citizens from the rest of the world. China and North Korea are perhaps the most successful in this respect: by providing only a few tightly controlled points of entry, these governments can censor the data their people can access. But less restrictive countries, such as South Korea, also splinter their citizens’ experience of the web by restricting “socially harmful” sites. Savvy netizens routinely circumvent such attempts, using social media and the web’s cloak of anonymity to embarrass and even topple their governments. The overthrow of the Egyptian regime in February has been called by some the first social media revolution. Though debatable, this assertion is supported in the book Tweets From Tahrir, an account told entirely through Twitter messages from the centre of the nation’s capital.

It is tempting to think that things can only get better – that the internet can only evolve more openness, more democracy, more innovation, more freedom. Unfortunately, things might not be that simple.

It is tempting to think that the internet can only lead to more openness, more democracy, more freedom

There’s a problem on the horizon, and it comes from an unexpected quarter – in fact from some of the very names we have come to associate most strongly with the internet’s success. The likes of Apple, Google and Amazon are starting to fragment the web to support their own technologies, products and corporate strategy. Is there anything that can be done to stop them?

If Apple, Google and Amazon start to fragment the web, can anything be done to stop them?

Some authorities are certainly trying. Google, for instance, has attracted the scrutiny of the US Federal Trade Commission, which last month launched an antitrust investigation to determine whether the company’s search results skew towards businesses with which it is aligned and away from its competitors. And as millions of people buy into Apple’s world of iPads and iPhones, they are also buying into Apple’s restricted vision of the internet. The company tightly controls the technologies users are allowed to put on those devices.

Take, for instance, Adobe’s Flash software, which most PCs support and most websites use to run graphics and other multimedia, and even entire apps. Flash is prohibited in all Apple apps, for security reasons – which means that the iPhone browser cannot display a large portion of the internet. That creates a private, Apple-only ecosystem within the larger internet. A similar kind of balkanisation is evident in Google’s Android mobile-phone operating system, Amazon’s Kindle e-reader, and Facebook’s networks, which are largely walled off from the rest of the internet.

Should we care? On the one hand, these companies have grown so big precisely because they make products and provide services that we want to use.

On the other hand, this concentration of power in the hands of a few creates problems for resilience and availability. “From an engineering standpoint, the downsides to this are the same things you get with monoculture in agriculture,” says Labovitz. Crops without genetic variation are the most vulnerable to being wiped out by a single virus. Similarly, as more of us depend on ever fewer sources for content, and get locked into proprietary technologies, we will become more susceptible to potentially catastrophic single points of failure.

That problem will only intensify with the ascendancy of the cloud, one of the biggest internet innovations of the past few years. The cloud is the nebulous collection of servers in distant locations that increasingly store our data and provide crucial services. It started with web mail services like Hotmail, which let you store your email on central servers rather than on the computer in front of you. The concept quickly spread. Last month, Apple announced iCloud, a free service that will store all your music, photos, email, books and other data – and even apps – for seamless access via any Apple device, be that an iPhone, iPad or MacBook laptop.

Some companies have moved their entire IT departments into the cloud. Indeed, there are companies that barely exist outside the cloud: in addition to backing up data, Amazon lets internet-based companies rent space on its servers.

The cloud could generate exactly the single points of failure that the internet’s robust architecture was supposed to prevent. And when those points fail, they may fail spectacularly. When Amazon’s cloud service suffered an outage in April and the company’s servers went dark, entire companies briefly blinked out of existence. Cloud services also raise security concerns. “One big issue with being connected to the cloud is that a lot of information is in different places and shared,” says Labovitz. “You no longer have one castle to protect. It’s a much more distributed architecture, and a connected one. You just need one weak link.”

The cloud could generate the single points of failure the internet’s architecture was supposed to prevent

Labovitz’s worries are substantiated by a recent rise in real-world attacks. In March, an unknown group hacked RSA, a company whose electronic tokens generate the one-time passwords behind supposedly impregnable logins. Two months later, hackers used what they had gleaned from that attack to infiltrate computers belonging to the defence contractor Lockheed Martin, which relied on those tokens for its security. In May, Sony Online Entertainment’s servers were hacked, compromising the personal information of about 25 million users.

The vulnerability is worrying enough if it’s our email or personal data being hacked, but soon it could be more intimate and dangerous than that. Imagine being a heart patient and having your pacemaker hacked, or someone with diabetes whose insulin supply is suddenly cut off. That is a real prospect, as the next big internet innovation, the “internet of things”, gets under way. In the utopian vision, sensors embedded in all kinds of everyday objects will continuously communicate with the cloud.

Objects that participate in this internet of things might be as mundane as a sensor in your refrigerator that tells the nearest supermarket when you’re out of milk. Or they could be as intimate as a medical sensor that taps into a cloud-based controller: a monitor, say, that transmits a diabetic person’s glucose levels to a data centre every five minutes. This information could instantly be used to calculate an optimal insulin dosage, which is transmitted back to an insulin pump.
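The glucose example can be sketched in a few lines. To be clear, the dosing numbers and the idea of a cloud controller here are purely illustrative, not a real medical protocol: the code just shows the kind of calculation such a controller might run on each reading before sending a result back to the pump.

```python
# Illustrative sketch only: the target level, correction factor and
# cloud-controller framing are hypothetical, not medical guidance.

TARGET_MG_DL = 110        # desired blood glucose level, mg/dL
CORRECTION_FACTOR = 50    # mg/dL that one unit of insulin lowers glucose

def correction_dose(glucose_mg_dl, target=TARGET_MG_DL,
                    factor=CORRECTION_FACTOR):
    """The textbook 'correction dose' rule of thumb: units of insulin
    needed to bring a high reading back down to the target level."""
    if glucose_mg_dl <= target:
        return 0.0  # at or below target: no correction needed
    return round((glucose_mg_dl - target) / factor, 1)

# A cloud controller would run this on each five-minute reading
# and transmit the computed dose back to the insulin pump.
for reading in (95, 160, 260):
    print(reading, "mg/dL ->", correction_dose(reading), "units")
```

The point of the sketch is the dependency it creates: if the link to the data centre fails, or the reading is tampered with, the calculation downstream fails with it, which is exactly the vulnerability the article goes on to describe.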

As we begin to interact with the internet in this way, without ever touching a keyboard or a screen, we will become increasingly vulnerable to threats, such as hacking and network instability, that were once only relevant for a small and relatively insignificant part of our lives. And that might necessitate a fundamental rethink of how the internet works.

Take anonymity. “It is the internet’s greatest strength and its greatest weakness,” says Marc Goodman, a computer security consultant who blogs at futurecrimes.com. For every popular uprising it facilitates, anonymity allows a slew of criminals far more dangerous than those early hackers to cover their online tracks. And these anonymous criminals can reach right into your computer if it’s not well protected. There is no security built into the internet.

There have been several proposals to address this weakness. In 2009, Eugene Kaspersky, who runs the internet security firm Kaspersky Lab, based in Moscow, Russia, suggested that the internet would be better off if people were required to have a kind of licence to get online. To access Kaspersky’s vision of the internet, for example, the processor in your computer might need to be verified. An authentication requirement like this would fragment the internet in many ways. Despite the idea’s significant technical obstacles and the objections it raises among privacy advocates, similar proposals pop up from time to time.

There might be another way. The US National Science Foundation is investing $32 million in a project it calls the Future Internet Architectures programme. Under its auspices, four groups, each spread across numerous institutions, have been set up to investigate options for a more evolved internet. The groups will cover mobile internet access, identity verification schemes, data safety and cloud computing – the last the focus of a project called Nebula, after the Latin for “cloud”. Given the promise of the internet of things, securing the cloud might be a good place to start. That means revamping the internet to ensure that it is highly resilient and constantly available: for example, by finding new ways of transmitting packets of information.

“Internet routing algorithms were designed in an era where people were really excited about finding the best path through a network,” says Jonathan Smith of the University of Pennsylvania in Philadelphia, who heads the Nebula team. “It’s a beautiful algorithm. If you can find the best path, you should take it.”

But what if that “best” path breaks down, in an attack, say? Choosing a new path can introduce delays that might be trivial for checking your email but crippling for applications that rely on real-time instructions from the cloud, such as the controller of an insulin pump. So Smith and his colleagues are developing algorithms that will establish many predetermined paths from endpoint to endpoint, something that is not possible in today’s internet. Such an advance could increase the network’s resilience to hacking.
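The idea of predetermined paths can be sketched as follows. This is toy code under my own assumptions, not Nebula’s actual algorithms: compute several link-disjoint routes in advance, then, when a link dies, fail over instantly to the first surviving backup, with no on-the-fly recomputation and hence no delay.

```python
from collections import deque

def bfs_path(links, src, dst, dead=frozenset()):
    """Shortest path by hop count, avoiding any links in `dead`."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen and frozenset((path[-1], nxt)) not in dead:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def precompute_paths(links, src, dst):
    """Find link-disjoint paths ahead of time by removing the links
    each discovered path uses, then searching again."""
    paths, used = [], set()
    while (p := bfs_path(links, src, dst, used)) is not None:
        paths.append(p)
        used |= {frozenset(edge) for edge in zip(p, p[1:])}
    return paths

def route(paths, dead):
    """Fail over instantly to the first precomputed path that avoids
    every dead link - no recomputation step at failure time."""
    for p in paths:
        if all(frozenset(edge) not in dead for edge in zip(p, p[1:])):
            return p
    return None

# A diamond network: two disjoint routes from S to T.
links = {"S": ["A", "B"], "A": ["S", "T"], "B": ["S", "T"], "T": ["A", "B"]}
paths = precompute_paths(links, "S", "T")
print(paths)                                  # [['S', 'A', 'T'], ['S', 'B', 'T']]
print(route(paths, {frozenset(("A", "T"))}))  # link A-T down: ['S', 'B', 'T']
```

Because both routes exist before anything fails, switching between them is a table lookup rather than a fresh path search, which is what matters for real-time traffic.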

Nebula’s creators also envisage giving senders more control over the path their packets take, just as offline businesses can opt to hire safe couriers for a particularly important package, rather than entrusting it to the general mail. Similarly, receivers could dictate who can send them packets and routers could verify that a packet is indeed taking the intended path. This could solve the problem of trusting your network without resorting to a national or global internet identity programme.
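A minimal sketch of that trust model, with entirely hypothetical names and classes rather than anything from the real Nebula project: the sender stamps an intended path onto each packet, every router verifies it is the expected next hop before forwarding, and the receiver accepts packets only from approved senders that actually travelled the stamped route.

```python
# Toy illustration of sender-chosen, router-verified paths.
# All names here (Packet, forward, deliver) are invented for this sketch.

class Packet:
    def __init__(self, sender, payload, intended_path):
        self.sender = sender
        self.payload = payload
        self.intended_path = intended_path  # the "safe courier" route
        self.hops_taken = []

def forward(packet, router):
    """A router handles a packet only if it is the next hop on the
    packet's stamped path, in the stamped order."""
    expected = packet.intended_path[len(packet.hops_taken)]
    if router != expected:
        raise ValueError(router + " is not the expected hop " + expected)
    packet.hops_taken.append(router)

def deliver(packet, approved_senders):
    """The receiver checks both who sent the packet and whether it
    really took the intended path."""
    if packet.sender not in approved_senders:
        return False
    return packet.hops_taken == packet.intended_path

pkt = Packet("alice", "dose update", ["R1", "R2", "R3"])
for hop in ["R1", "R2", "R3"]:
    forward(pkt, hop)
print(deliver(pkt, approved_senders={"alice"}))  # True
```

A packet from an unapproved sender, or one that strayed from its stamped route, is simply refused at delivery, so trust is enforced hop by hop without any global identity scheme.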

There are some hard choices ahead. Had the internet been built with bulletproof security in mind, we might never have reaped the rewards of breakneck innovation. Yet as our dependence on the internet grows, we are more vulnerable to those who seek to disrupt – whether they are hackers exposing the internet’s weaknesses, governments intent on keeping their citizens under control, or corporations driven by profits.

So how many of the internet’s fundamental properties do we want to change? The nature of our future online lives will depend on answering this question, on how we walk the tightrope between total security and innovation-friendly openness. It is a question that will require widespread and vocal debate, says William Lehr of the Massachusetts Institute of Technology. “We cannot just assume that everything will work out fine,” he says. But with some careful thought about how we want the next phase of the internet to look, we might prolong its golden age.