If you think the pace of technology change has been speeding up, prepare for a shock — you ain't seen nothing yet. After the transistor gave us pervasive computing and the Internet made us all digitally connected, we've started to discover how much our world can change in just a few years. Now that advances in artificial intelligence are starting to kick in, that rate of change is set to surge to unprecedented new heights. According to Scott Rickard, VP of data science at business software vendor Salesforce:

We are at the next inflection point in humanity. The rate of change is now going to be measured in years or months.

This week's landmark AI triumph by Google DeepMind's AlphaGo at the ancient and challenging game of Go is just the beginning. Speaking at a Salesforce event for tech industry analysts earlier in the week, Rickard discussed the factors that are now in place to boost AI-fueled innovation, while presenting an optimistic view of what that means for the future of humanity.

While yesterday's 3-0 match defeat of human Go champion Lee Sedol invokes memories of the equally shocking 1997 defeat of chess grandmaster Garry Kasparov by IBM's Deep Blue, it's worth reflecting on the history of world-class chess since that time (especially in light of Lee Sedol's victory in today's fourth game of the five-game match). As Wired's Kevin Kelly describes in an October 2014 article on AI breakthroughs:

The advent of AI didn’t diminish the performance of purely human chess players. Quite the opposite. Cheap, supersmart chess programs inspired more people than ever to play chess, at more tournaments than ever, and the players got better than ever. There are more than twice as many grand masters now as there were when Deep Blue first beat Kasparov. The top-ranked human chess player today, Magnus Carlsen, trained with AIs and has been deemed the most computer-like of all human chess players. He also has the highest human grand master rating of all time.

We can now expect a similar blossoming in the ranks of the world's leading Go players. That's the thing about humans. As Kevin Ashton, often described as the father of the Internet of Things, likes to say:

The thing that changes faster than our technology is our ability to get used to it.

Robo-enhanced humanity

Those who fear the moment when generalized artificial intelligence overtakes and accelerates away from the brainpower of humans — the so-called singularity — may well be underestimating the enormous adaptability of our species and the millions of years we've already spent learning how to augment our natural capabilities with the tools we create. Rickard predicts:

The average person is going to get smarter at a faster rate than the average AI.

The era of AI-augmented, robo-enhanced humanity has already begun. AI historians may cite 2006 as the dawn of a new era, when Geoffrey Hinton revealed his deep learning techniques for training neural networks. But to my mind, it was the launch of the Apple iPhone in 2007 that, through its always-on connectivity and touchscreen interface, put the pooled knowledge and computing power of the Internet literally at the fingertips of the world's population.

Less than 10 years later, for many of us the touchscreen smartphone (since augmented with voice recognition) has become an extension of our beings. I routinely find myself reaching for my iPhone to check a fact or verify a supposition, often on topics that, in my youth, would have required a trip to a university library to answer.

Rickard talks about the virtuous cycle of discovery and innovation that our digitally connected world enables at an unprecedented rate. We live in a society where individuals can contribute new knowledge and ideas, connect with others, and consume everything else that's in the network, which leads to more contributions, and so on. This, of course, is how human society has always evolved its augmented capabilities, beginning with the discovery of tools, and then agriculture and writing, and so on.

There was an explosion of innovation when the commercialization of the printing press suddenly and dramatically enlarged the number of people who could disseminate, share and consume knowledge and ideas. The advent of the Internet has brought an even greater leap forward. AI compounds the magnitude of that leap by providing the tools to predict, find, filter and summarize the knowledge we need, accelerating the speed at which we can assimilate new information and create new findings that we can feed back to the connected community.

Virtuous AI cycle

This virtuous cycle of knowledge sharing and discovery combines with a second virtuous cycle in the evolution of artificial intelligence. Kelly's Wired article describes its three components as the breakthroughs that have finally unleashed AI on the world: cheap parallel computation, big data and better algorithms. Rickard highlights how they feed each other.

First of all there's an explosion of data that's easily available as raw material to analyze. In the past, the difficulty of collecting data meant that data scientists had to carefully select and clean their data, and devise clever algorithms to extrapolate from these small samples. Today, AI researchers can achieve so much more just by grabbing huge chunks of raw, unfiltered data and throwing lots of dumb computing power at it. As Rickard explains:

Lots and lots of messy data is much better than coding in what we believe is human intelligence — what we think is going on.

Then there's the continued ramping of raw compute capacity, which gets not only cheaper but also demands less and less power to run. Moore's law continues to deliver a doubling of capacity every 18 months or so, which means that each new doubling adds as much computing capability as was accumulated over the entire preceding half-century of advances. Each new advance is in itself a massive surge.
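To see why each doubling dwarfs everything that came before it, here is a quick back-of-the-envelope sketch. The starting capacity and ten-generation horizon are purely illustrative numbers, not real hardware figures; the point is the arithmetic of a geometric series:

```python
# Illustrative sketch: if capacity doubles every generation (roughly every
# 18 months under Moore's law), each new doubling adds as much capacity
# as all previous growth combined.
capacity = 1.0  # arbitrary starting unit of compute capacity
for generation in range(1, 11):
    previous_total = capacity
    capacity *= 2  # one Moore's-law doubling
    added = capacity - previous_total
    # The increment from this single doubling equals the entire
    # accumulated capacity that existed before it.
    assert added == previous_total

print(capacity)  # 1024.0 — a thousandfold gain in ten doublings (~15 years)
```

Ten doublings at an 18-month cadence span about 15 years and yield a roughly thousandfold increase, which is why even a "steady" exponential cadence feels like an ever-larger surge in absolute terms.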

Feed all that data into this ever-more plentiful computing and you can now start to refine your algorithms (or, using AI, they can refine themselves) at a frenetic rate, drawing on the virtuous cycle of connected knowledge-sharing and discovery to accelerate the pace of refinement. That is why Rickard projects a new era of advances that will accelerate away even from the astonishing achievements of the past few decades.

Alien future?

There are elements of this future that seem alien, or appear to stretch credulity. Rickard talked about machines that can read brainwaves and thus know what we are thinking, leading to the concept of augmented imagination, in which a human operator interacts with an AI collaborator just by thought. There's clearly some way to go before we understand enough about interpreting brainwaves for this to produce any substantive communication. And yet, given the pace of progress that now becomes possible by applying modern AI and brute computing power, it may be closer than we expect.

Meanwhile, we are already adapting to interact with the smart devices around us, whether by touch, voice or movement. My children take it for granted that they can access all of human knowledge by typing a few search terms on an iPad. Is it so unlikely that their children will expect answers to appear before they even realize what the question was going to be?

This familiar axiom of the computing era is often attributed to Microsoft founder Bill Gates:

Most people overestimate what they can do in one year and underestimate what they can do in ten years.

We are about to enter an era of AI-augmented human innovation in which we may well find ourselves blown away by what transpires each and every year.