How big of a technology shift is this for businesses?

It’s like electrification. And it took about two or three decades for electrification to pretty much change the way the world worked. Sometimes I meet very senior people with big responsibilities who have been led to believe that artificial intelligence is some kind of “magic dust” that you sprinkle on an organization and it just gets smarter. In fact, implementing artificial intelligence successfully is a slog.

When people come in and say “How do I actually implement this artificial-intelligence project?” we immediately start breaking the problems down in our brains into the traditional components of AI: perception, decision making, and action. Decision making is a critical part of it now; you can use machine learning to make decisions much more effectively. Then we map those components onto different parts of the business. One of the things Google Cloud has in place is these building blocks that you can slot together.

Solving artificial-intelligence problems involves a lot of tough engineering and math and linear algebra and all that stuff. It very much isn’t the magic-dust type of solution.

What mistakes do companies make in adopting AI?

There are a couple of mistakes I see being made over and over again. When people come and say “I’ve got this massive amount of data—surely there’s some value I can get out of it,” I sit them down and have a strong talk with them.

What you really need to be doing is working with a problem your customers or your workers have. Just write down the solution you’d like to have; then work backwards and figure out what kind of automation might support this goal; then work back to whether you have the data you need, and how you would collect it.

What makes a good AI practitioner?

The problem is, it’s a kind of artisanal skill. There’s no real playbook for it. But the big push is to find the problem and work backwards from it. And it’s actually fun, because there’s creativity in thinking about how the business could change, and creativity in understanding which pieces of technology are really feasible as opposed to a blue-sky crazy science project. But it’s really rare to find people who are able to use both parts of their brain at once.

AI is about using math to make machines make really good decisions. At the moment it has nothing to do with simulating real human intelligence. Once you understand that, it kind of gives you permission to think about how a set of data tools—things like deep learning and automated machine learning and, say, natural language translation—can be put into situations where you can solve problems. Rather than just saying “Wouldn’t it be good if the computer replaced the brains of all my employees so that they could run my company automatically?”

What do you think of MIT’s plan to build a new college for AI?

I was really pleased to see what MIT is doing. At Carnegie Mellon, when we created our big AI initiative two years ago, more than 50% of everyone involved in it was outside the school of computer science.

AI by itself is an abstract concept that, to me personally, isn’t that exciting. It’s when you say, “How are we going to make cosmology vastly more effective through massive automation?” or “How are we going to make it so that kids studying literature can have tools to find if something was written by a person in the same state of mind as someone else?”

What MIT is doing is very sensible. It’s my personal belief that there won’t be many opportunities for students who want to avoid AI.

At CMU you organized an AI conference with the Obama administration. Does the current US government need to pay more attention to AI?

There are huge sectors of the commercial world [that will be affected], but there are also things in the public sector, from education to effectively managing health care for veterans to automation for controlling massive fires. I would be horrified at any country that decided it was not going to be bringing artificial intelligence into the public sector—we have opportunities in so many verticals to save lives and improve lives.

Google Cloud was at the center of a recent controversy over a contract with the US Air Force. Will you continue to work with the US military?

We will continue our work with governments and the military in many areas, as we have for years. These include cybersecurity, training, military recruitment, veterans’ health care, and search and rescue. We will also work to provide tools to enhance government efficiency.

Collaboration in these areas is important, and we’ll actively look for more ways to augment the critical work of these organizations. One recent example is our partnership with the Drug Enforcement Administration to fight opioid addiction.

Google’s plans regarding China have also been controversial, and there is a Google AI research lab in Beijing. How important is China to Google’s AI plans?

Google’s really serious when it says “AI first,” and that’s what has attracted so many of us to Google in the first place. There is AI happening in pretty much every Google engineering office around the world, including China.

How do you plan to integrate employees’ concerns over how AI is used into plans for the future?

Sundar [Pichai, Google’s CEO] wrote a blog post about AI principles in June, and we also just published a post about working with people to steer the right path for AI. Sundar set us on this path because it’s the right thing to do, but I also think that it makes very sound business sense.

I want to see organizations choosing to work with Google specifically because we are so systematically organized around making sure AI projects avoid the many ethical pitfalls that new AI practitioners may fall into.