
In March 2001, futurist Ray Kurzweil published an essay arguing that humans found it hard to comprehend their own future. It was clear from history, he argued, that technological change is exponential — even though most of us are unable to see it — and that in a few decades, the world would be unrecognizably different. “We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate),” he wrote, in ‘The Law of Accelerating Returns’.
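The arithmetic behind such a figure can be sketched in a few lines. Assuming, purely for illustration, that the rate of progress doubles each decade (a simplification of Kurzweil's model), a century of calendar time delivers on the order of 10,000 years of progress at today's rate:

```python
# A minimal sketch of "accelerating returns" arithmetic. Assumption (ours,
# simplifying Kurzweil's model): the rate of progress doubles each decade,
# starting at 1 year-of-progress per calendar year today.

def progress_years(decades: int = 10) -> float:
    """Cumulative progress over `decades`, in years-at-today's-rate."""
    total = 0.0
    for d in range(decades):
        rate = 2.0 ** d     # rate during decade d (doubles each decade)
        total += 10 * rate  # 10 calendar years at that rate
    return total

linear_view = 100                    # naive linear extrapolation: 100 years
exponential_view = progress_years()  # 10,230 years under this assumption
```

Finer-grained compounding, or a doubling time that itself shrinks as Kurzweil assumed, pushes the total higher still, towards his figure of roughly 20,000 years.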

Fifteen years on, Kurzweil is a director of engineering at Google and his essay has acquired a cult following among futurists. Some of its predictions are outlandish or over-hyped — but technology experts say that its basic tenets often hold. The evidence, they say, lies in the exponential advances in a suite of enabling technologies ranging from computing power to data storage, to the scale and performance of the Internet (see ‘Onwards and upwards’). These advances are creating tipping points — moments at which technologies such as robotics, artificial intelligence (AI), biology, nanotechnology and 3D printing cross a threshold and trigger sudden and significant change. “We live in a mind-blowingly different world than our grandparents,” says Fei-Fei Li, head of the Stanford Artificial Intelligence Laboratory in California, and this will be all the more true for our children and grandchildren (see 'Future focus').

Kurzweil and others have argued that people find this pace of change almost impossible to grasp, because it is human nature to perceive rates of progress as linear, not exponential — much as when one zooms in on a small part of a circle and it appears as an almost straight line. People tend to focus on the past few years, but pulling back reveals a much more dramatic change. Many things that society now takes for granted would have seemed like futuristic nonsense just a few decades ago. We can search across billions of pages, images and videos on the web; mobile phones have become ubiquitous; billions of connected smart sensors monitor in real time everything from the state of the planet to our heartbeats, sleep and steps; and drones and satellites the size of shoeboxes roam the skies.

Onwards and upwards

Exponential advances in enabling technologies have reached the point at which they could trigger disruptive change in sectors from artificial intelligence to robotics to medicine. As a result, many experts argue that tomorrow’s world will be unrecognizable compared with today’s.

ENABLERS

1. Computing power The exponential growth in supercomputing performance is one indicator of dizzying advances across computing. Supercomputers in 2020 are likely to be 30 times more powerful than those of today.

2. Really big data The amount of data worldwide is predicted to reach a whopping 44 zettabytes (10²¹ bytes) by 2020 — nearly as many digital bits as there are stars in the Universe. This gives more raw material for artificial-intelligence machines to learn from.

3. Communication speed Meanwhile, the performance and scale of the Internet continue to improve. Broadband and Wi-Fi speeds are increasing, and Internet data traffic will exceed a zettabyte this year and double by 2019.

DRIVERS

4. Talking devices By 2020, the number of connected sensors and devices in buildings, cities and farms — the ‘Internet of Things’ — will be twice that of the human population.

5. Biology booms Conceptual and technological advances are driving progress in biology. DNA sequencing costs have fallen at an exponential rate and the number of sequences has soared since 1985. Similar advances are happening in neuroscience and biological nanotechnology.

6. Like it, print it 3D printing is becoming cheaper and quicker — one factor that could disrupt manufacturing and allow once-pricey robotics to be mass produced.

7. Rise of robots Purchases of robots are set to rocket as their capabilities increase and costs fall, a trend driven by massive investments in artificial intelligence and robotics by the military and by computing giants such as Google.

All these factors are now converging to push seemingly futuristic technologies out of the lab, and set them on the same path taken by personal computing and consumer electronics.

Illustrations by Greygouar; Design by Wes Fernandes/Nature; Sources: 1. TOP500; 2. IDC Digital Universe Study, 2012; 3. Cisco Visual Network Index (VNI), 2015; 4. Cisco VNI Global IP Traffic Forecast, 2014–2019; 5. NCBI; 6. EPSRC; Direct Manufacturing Research Center; Roland Berger; 7. International Federation of Robotics; Japan Robot Association; Japan Ministry of Economy, Trade & Industry; euRobotics; BCG

If the pace of change is exponentially speeding up, all those advances could begin to look trivial within a few years. Take ‘deep learning’, a form of artificial intelligence that uses powerful microprocessor chips and algorithms to simulate neural networks that train and learn through experience, using massive data sets. Last month, the Google-owned AI company DeepMind used deep learning to enable a computer to beat a human professional for the first time at the game of Go, long considered one of the grand challenges of AI. Researchers told Nature that they foresee a future just 20 years from now — or even sooner — in which robots with AI are as common as cars or phones and are integrated into families, offices and factories. The “disruptive exponentials” of technological change will create “a world where everybody can have a robot and robots are pervasively integrated in the fabric of life”, says Daniela Rus, head of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology in Cambridge.

After decades in development, applications of AI are moving into the real world, says Li, with the arrival of self-driving cars, virtual reality and more. Progress in AI and robotics is likely to accelerate rapidly as deep-pocketed companies such as Google, Apple, Facebook and Microsoft pour billions of dollars into these fields. Gill Pratt, former head of the US Defense Advanced Research Projects Agency’s Robotics Challenge, asked last year whether robotics is about to undergo a ‘Cambrian explosion’ — a period of rapid machine diversification (G. A. Pratt J. Econ. Perspect. 29, 51–60; 2015). Although a single robot cannot yet match the learning ability of a toddler, Pratt pointed out that robots have one huge advantage: humans can communicate with each other at only 10 bits per second — whereas robots can communicate through the Internet at speeds 100 million times faster. This could, he said, result in multitudes of robots building on each other’s learning experiences at lightning speed. Pratt was hired last September to head the Toyota Research Institute, a new US$1-billion AI and robotics research venture headquartered in Palo Alto, California.
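Pratt’s bandwidth comparison is easy to check with a back-of-envelope calculation. The 100-megabyte model size below is our illustrative assumption, not his:

```python
# Back-of-envelope for Pratt's comparison. The 10 bits/s figure is his;
# the 100-megabyte "learned model" payload is our hypothetical example.
HUMAN_BPS = 10                        # human-to-human bandwidth, bits/s
ROBOT_BPS = HUMAN_BPS * 100_000_000   # "100 million times faster" = 1 Gbit/s

payload_bits = 100 * 8 * 10**6        # sharing a 100-megabyte learned model

human_years = payload_bits / HUMAN_BPS / (3600 * 24 * 365)  # ~2.5 years
robot_seconds = payload_bits / ROBOT_BPS                    # ~0.8 seconds
```

Under these assumptions, experience that would take one human years to relay by talking could be copied between robots in under a second.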

Many researchers say that it is important to prepare for this new world. “We need to become much more responsible in terms of designing and operating these robots as they become more powerful,” says Li. In January 2015, a group including Elon Musk, Bill Gates and Stephen Hawking penned an open letter calling for extensive research to maximize the benefits of AI and avoid its potential pitfalls. The letter has now been signed by more than 8,000 people.

Yet predicting the future can be a fool’s game — and not everyone is convinced that technological change will hit humanity quite so fast. Ken Goldberg, an engineer at the University of California, Berkeley, questions the idea that technologies advance exponentially across the board, or that those that do will continue indefinitely. “The danger of overly optimistic exuberance is that it could set unrealistic expectations and trigger the next AI winter,” he says, alluding to periods in AI’s history when hype gave way to disappointment followed by steep cuts in funding. Goldberg says that recent warnings that AI and robots risk surpassing human intelligence are “greatly exaggerated”.

And Stuart Russell, a computer scientist at the University of California, Berkeley, questions the notion that exponential advances in technology necessarily lead to transformative leaps. “If we had computers a trillion times faster we wouldn’t have human-level AI; half in jest, one might say we’d just get wrong answers a trillion times sooner,” he says. “What matters are real conceptual and algorithmic breakthroughs, which are very hard to predict.”

Russell did sign the Hawking letter — and says it is important not to ignore the ways that technologies could be taken in potentially harmful directions with profound results. “We made this mistake with fossil-fuel technologies 100 years ago — now it’s probably too late.”

Future focus

Expert predictions

“A possible ‘Cambrian explosion’ in robotics with a rapid period of incredible machine diversification. Robots communicating with each other at speeds that are 100 million times faster than humans might allow swarms of robots to build on each other’s learning experiences at lightning speed.” Gill Pratt, Head of the Toyota Research Institute, Palo Alto, California

“A full brain-activity map and connectome by 2020, and by 2040 it will be routine to read and write data to billions of neurons. By 2040, 1 billion people will have their whole genome sequenced and get constant updates of their immunomes and microbiomes.” George Church, Geneticist at Harvard Medical School, Boston, Massachusetts

“The promise for the future is a world where robots are as common as cars and phones, a world where everybody can have a robot and robots are pervasively integrated in the fabric of life.” Daniela Rus, Head of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, Cambridge

“In the next couple of generations, we will see the first phase of true personal, assistive robots in the home and other human environments. There will be a huge opportunity to better the quality of life, for example by freeing up people from work.” Fei-Fei Li, Head of the Stanford Artificial Intelligence Laboratory, California

“Tomorrow’s scientists will have armies of virtual graduate students, doing lab work, statistical analysis, literature search and even paper-writing for them.” Pedro Domingos, Machine-learning researcher, University of Washington, Seattle