Sundar Pichai says the future of Google is AI. But can he fix the algorithm?

Unbeknownst to me, at the very moment on Monday morning when I was asking Google CEO Sundar Pichai about the biggest ethical concern for AI today, Google's algorithms were promoting misinformation about the Las Vegas shooting.

I was asking in the context of the aftermath of the 2016 election and the misinformation that companies like Facebook, Twitter, and Google were found to have spread. Pichai, I found out later, had a rough idea that something was going wrong with one of his algorithms as we were speaking. So his answer, I think it's fair to say, also serves as a response to the widespread criticisms the company faced in the days after the shooting.

"I view it as a big responsibility to get it right," he says. "I think we'll be able to do these things better over time. But I think the answer to your question, the short answer and the only answer, is we feel huge responsibility." Later, he added, "Today, we overwhelmingly get it right. But I think every single time we stumble, I feel the pain, and I think we should be held accountable."

Learning about Google's "stumble" after we talked put some of our conversation in a different light. I was there to talk about how Pichai’s project to realign the entire company to an "AI-first" footing was going in the lead-up to Google's massive hardware event. Google often seems like the leader in weaving AI into its products; that’s certainly Pichai’s relentless focus. But it’s worth questioning whether Google’s systems are making the right decisions, even as they make some decisions much easier.

When the subject isn't the failure of its news algorithms, Pichai is enthusiastic about AI. There’s not much difference between an enthusiastic Sundar Pichai and a quiet, thoughtful Sundar Pichai, but you get a sense of it when he names, off the top of his head, a half-dozen Google products that have been improved by its deep learning systems.

Google's lead in doing clever, innovative things with AI is impressive, and the examples Pichai cites can sometimes even verge on inspiring — but there's clearly still work to do.

Most executives talk about AI like it's just another thing that's included in the box or in its cloud; it's a buzzword, a tick box on a spec sheet slotted in right after the processor. But Pichai is intent on pressing Google's advantage in AI — not just by integrating AI features into every product it makes, but by making products that are themselves inspired by AI, products that wouldn't be conceivable without it.

There's no better example of that than Google Clips, a tiny little camera that automatically captures seven-second moving photos of things it finds "interesting." It's a new way to think about photography, one that leverages Google's ability to do lots of different AI tasks: recognize faces, recognize "bad" photos, recognize "interesting" content. It's simply applied to your own pictures instead of content on the internet.

Clips does all this locally: nothing is sent to the cloud, and nothing integrates with whatever Google Photos knows about you. As much as Google is known for doing its AI in the cloud, many of the devices it's releasing are doing AI locally. Pichai says that's by design, and that both kinds of AI are necessary. "A hybrid approach absolutely makes sense," he says. "We will thoughtfully invest in both. Depending on the context, depending on what you're dealing with, it'll make sense to deploy it differently."

Clips is the kind of thing Pichai wants Google to do more of. "I made a deliberate decision to name the hardware product with [a] software name," he says. "The reason we named it Clips is that the more exciting part of it is … the machine learning, the computer vision work we do underneath the scenes."

For Google, making hardware is about selling products, but it's also about learning how hardware can better integrate AI. "It's really tough to drive the future of computing forward if you're not able to think about these things together," Pichai says. Fundamentally, his question about every hardware product is "how do we apply AI to rethink our products?" He doesn't want to make AI just another feature, he wants AI to fundamentally alter what each device is.

The concept of AI is as fuzzy and complicated as the actual mechanics behind it. Since so many products and features we use now purport to be powered by it, you can get a sense that it's just a bullshit marketing term. And given Google's lead in the field, I expected Pichai to have strong opinions about the difference between artificial intelligence, machine learning, and deep learning. He could certainly play that game if he wanted to. "To be pedantic about it, a lot of us distinguish [between] AI and AGI as an 'artificial general intelligence,'" he says. But he is also not especially worried about the various differences in terminology. "It's good that we use them interchangeably," he says, "because it's been good to see the excitement around it, and it's good to attract people to the field. I've gotten comfortable with it."

Some of those half-dozen AI examples Pichai cites are solutions to problems you might not realize could be solved with AI. Recently, Google Maps added the ability to find parking near your destination. What you might not know is that Google isn't just canvassing local parking garages; it's using AI.

"It's fascinating," Pichai says. The Maps team applied AI to see whether Google Maps users were finding parking easily when they arrived at their destinations. "They have to distinguish between people who have just shown up in a Lyft and gotten out, versus actually driving the car and getting parking quickly."

We've gotten used to lots of online services quietly getting better thanks to AI, but Pichai wants to drive that even more aggressively into the devices we're using. In short, he wants to have AI change the user interface of our phones.

"The product can learn and adapt over time," Pichai says. "You see very little of that today. My favorite [example] is I open Google Fit [every day] to a certain view, and I navigate to a different view." One wonders why he doesn't just wander over to the Google Fit team and ask them to change it. Instead, apparently, he would like AI to realize what you're doing with your phone "300 times a year" and make it simpler.

"Multitouch was a huge progress," he says. "But I think we will all interact with [our devices] in more conversational, sensory ways: using voice, vision, and other things. That's important to me." Google Lens, for example, is launching on the Pixel today. Like Samsung’s Bixby (presumably Google's solution will work better), it can identify real-world objects and search for them.

The most surprising example of AI changing interfaces comes up when I ask another question about privacy. People are already skittish about how much Google knows about them, and they are unclear on how to manage their privacy settings. Pichai thinks that's another one of those problems that AI could fix, "heuristically."

"Down the line, the system can be much more sophisticated about understanding what is sensitive for users, because it understands context better," Pichai says. "[It should be] treating health-related information very differently from looking for restaurants to eat with friends." Instead of asking users to sift through a "giant list of checkboxes," a user interface driven by AI could make it easier to manage.

Of course, what’s good for users versus what’s good for Google versus what’s good for the other businesses that rely on Google’s data is a tricky question. And it’s one that AI alone can’t solve. Google is responsible for those choices, whether they’re made by people or robots.

Here's Pichai's answer to the question of whether Google has a specific strategy for interacting with the Trump administration: "We don't get involved with politics, but the way we approach these things is: we have values. So for issues where we feel like we have a role to play, it affects the concerns of our employees, or it affects society related to some core values, we take a strong stance. The counter is also true. If there are issues on which we feel like the right things are getting done, we want to be a real proactive voice in helping. But we have to be thoughtful. The role of a company in a democratic context, I think it's important that we also respect democratic institutions, democratic outcomes, and engage constructively. So that's the framework with which I think about how to do it." Of course, Google does get involved in politics — it runs one of the largest lobbying shops in DC.

Again and again, our conversation seems to come back to Google's responsibility as such a large, multinational company. Here's an obvious example: it's easy to forget that Android is now the largest computing platform on the planet, and Google is the steward of that platform.

"First and foremost, I think of Android as an open platform,” says Pichai. "Android is not a Google product or service." That's true insofar as Android is open source: anybody can use it to power a device, with or without Google. Many Chinese phone makers (and, of course, Amazon) do just that. But the reality is that most people associate Android with Google, and so Google has a responsibility for it.

Pichai does believe that Google "can use Android thoughtfully to help get the right things to happen." One of those things is improving privacy for the 2 billion-plus people who use it monthly. "I think Android giving [developers] an open framework for on-device machine learning is an important thing to drive," Pichai says.

He also believes that the day is fast coming when a $50 phone with 4G LTE could be really good. "In India, it would mean in the next two years, you would talk about hundreds of millions of people getting access to computing."

As ambitious as Google is with its own hardware, it's still a tiny drop in the bucket compared to the company's online business. Pichai won't say when we can expect to see hardware sales become a big, broken-out part of its financial calls, outside of saying it'll definitely happen in the next five years.

So while Google will be hyping up its hardware products — and some of them do seem pretty great — the larger spotlight will continue to be on the content it shows to users online. The amount of scrutiny companies like Facebook and Google — and Google’s YouTube division — face over presenting inaccurate or outright manipulative information is growing every day, and for good reason.

During our conversation about getting things right in search, I press Pichai on the fact that Google is beginning to offer feeds of content in the Google app. It's a little like Facebook's News Feed, but it’s using your search history instead of your friends to populate it.

Pichai thinks that Google's basic approach for search can also be used for surfacing good, trustworthy content in the feed. "We can still use the same core principles we use in ranking around authoritativeness, trust, reputation. The principles can apply equally. Whether you type in the query and we are surfacing [an answer] or whether we are proactively surfacing it, you shouldn't change that. I feel comfortable that the same set of things work."

What he's less sure about, however, is what to do beyond the realm of factual information — with genuine opinion: "I think the issue we all grapple with is how do you deal with the areas where people don't agree or the subject areas get tougher?"

When it comes to presenting opinions on its feed, Pichai wonders if Google could "bring a better perspective, rather than just ranking alone. … Those are early areas of exploration for us, but I think we could do better there."

You can be sure that whatever the results of that exploration are, they will involve AI. Google's ability to do innovative things with AI in both hardware and online consumer products is unmatched. But as we ask Google to make more and more decisions for us, it’s clear that Pichai will have to show us that his AI has judgment, not just algorithms.