For Peter Norvig, director of research at Google and a pioneer of machine learning, the data-driven AI technique behind so many of its recent successes, the key issue is working out how to ensure that these new systems improve society as a whole – and not just those who control it. “Artificial intelligence has proven to be quite effective at practical tasks — from labeling photos, to understanding speech and written natural language, to helping identify diseases,” he says. “The challenge now is to make sure everyone benefits from this technology.”

The big problem is that the complexity of the software often makes it impossible to work out exactly why an AI system does what it does. Because today’s AI is built on machine learning, you can’t lift the lid and inspect its workings; we simply take it on trust. The challenge, then, is to come up with new ways of monitoring or auditing the many areas in which AI now plays such a big role.

For Jonathan Zittrain, a professor of internet law at Harvard Law School, there is a danger that the increasing complexity of computer systems might prevent them from getting the scrutiny they need. “I'm concerned about the reduction of human autonomy as our systems — aided by technology — become more complex and tightly coupled,” he says. “If we ‘set it and forget it’, we may rue how a system evolves – and that there is no clear place for an ethical dimension to be considered.”