In the early 1970s, a British grad student named Geoff Hinton began to make simple mathematical models of how neurons in the human brain visually understand the world. Artificial neural networks, as they are called, remained an impractical technology for decades. But in 2012, Hinton and two of his grad students at the University of Toronto used them to deliver a big jump in the accuracy with which computers could recognize objects in photos. Within six months, Google had acquired a startup founded by the three researchers. Previously obscure, artificial neural networks were the talk of Silicon Valley. All large tech companies now place the technology that Hinton and a small community of others painstakingly coaxed into usefulness at the heart of their plans for the future—and our lives.

WIRED caught up with Hinton last week at the first G7 conference on artificial intelligence, where delegates from the world’s leading industrialized economies discussed how to encourage the benefits of AI while minimizing downsides such as job losses and algorithms that learn to discriminate. An edited transcript of the interview follows.

WIRED: Canada’s prime minister, Justin Trudeau, told the G7 conference that more work is needed on the ethical challenges raised by artificial intelligence. What do you think?

Geoff Hinton: I’ve always been worried about potential misuses in lethal autonomous weapons. I think there should be something like a Geneva Convention banning them, like there is for chemical weapons. Even if not everyone signs on to it, the fact it’s there will act as a sort of moral flag post. You’ll notice who doesn’t sign it.

WIRED: More than 4,500 of your Google colleagues signed a letter protesting a Pentagon contract that involved applying machine learning to drone imagery. Google says it was not for offensive uses. Did you sign the letter?

GH: As a Google executive, I didn't think it was my place to complain in public about it, so I complained in private about it. Rather than signing the letter I talked to [Google cofounder] Sergey Brin. He said he was a bit upset about it, too. And so they're not pursuing it.

WIRED: Google’s leaders decided to complete but not renew the contract. And they released guidelines on the use of AI that include a pledge not to use the technology for weapons.

GH: I think Google's made the right decision. There are going to be all sorts of things that need cloud computation, and it's very hard to know where to draw a line, and in a sense it's going to be arbitrary. I'm happy where Google drew the line. The principles made a lot of sense to me.

WIRED: Artificial intelligence can raise ethical questions in everyday situations, too, such as when software is used to make decisions in social services or health care. What should we look out for?

GH: I’m an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that’s relevant is [whether] regulators should insist that you can explain how your AI system works. I think that would be a complete disaster.

For most of the things they do, people can’t explain how they work. When you hire somebody, the decision is based on all sorts of things you can quantify, and then all sorts of gut feelings. People have no idea how they do that. If you ask them to explain their decision, you are forcing them to make up a story.