Apple shook up the world of logins last week, offering a new single sign-on (or SSO) tool aimed at collecting and sharing as little data as possible. It was a deliberate shot at Facebook and Google, which currently operate the two major SSO services. But while Google wasn’t happy about the veiled privacy jabs, the company’s login chief is surprisingly sunny about having a new button to compete with. While the login buttons are relatively simple, they’re much more resistant to common attacks like phishing, making them much stronger than the average password — provided you trust the network offering them.

As Google expands its own Android two-factor system, I talked with product management director Mark Risher about why Apple’s new sign-in button might not be as scary as it seems.

This interview has been lightly edited for clarity.

It’s hard to put a finger on the benefit of all of these different login tools, but it does feel like things are getting better? In my personal experience, I’m not being asked for a password nearly as often as I was five years ago.

Right, and it’s way, way better. Usually with passwords they recommend the capital letters and symbols and all of that, which the majority of the planet believes is the best thing that they should do to improve their security. But it actually has no bearing on phishing, no bearing on password breaches, no bearing on password reuse. We think that it’s much more important to reduce the total number of passwords out there. Once you start federating accounts, it means that maybe you still have a few passwords, but some new service you’re just trying out doesn’t need a 750-person engineering team dedicated to security. It doesn’t need to build its own password database, and then deal with all the liability and all the risk that comes with that.

“We think it’s much more important to reduce the total number of passwords”

You also handle Google’s SSO tool, which got some competition from Apple last week at WWDC. Part of the pitch seemed to be that Apple’s SSO system will collect less data and respect privacy more. Do you feel like that’s a fair criticism?

I will take the blame that we have not really articulated what happens when you press that “sign in with Google” button. A lot of people don’t understand, and some competitors have dragged it in the wrong direction. Maybe you think that when you click that button, it notifies all your friends that you’ve just signed into some embarrassing site. So getting someone out there to reinvigorate the space and to make it clear what this means and what happens, that is really beneficial.

But there was a bunch of innuendo wrapped around the release that suggested that only one of them is pure, and the rest of them are kind of corrupt, and obviously I don’t like that. We only log the moment of authentication. It’s not used for any sort of re-targeting. It’s not used for any sort of advertising. It’s not distributed anywhere. And it’s partly there for user control so that they can go back and see what’s happened. We have a page, part of our security checkup, that says, “here’s all the connected apps, and you can go and break that connection.” This current product, I haven’t seen how it will be built, but it sounds like they will log that moment as well, and then also every email that’s ever sent by that company, which sounds a lot more invasive. But we’ll see how the details work out.

I honestly do think this technology will be better for the internet and will make people much, much safer. Even if they’re clicking our competitor’s button when they’re logging into sites, that’s still way better than typing in a bespoke username and password, or more commonly, a recycled username and password.

The basic premise of this kind of login is that you can log in once to Google (or Apple or Facebook) and then extend that login to everything else. But does that model still make sense? Why not have different levels of security for different services instead of putting all our eggs in one basket?

“We’re being much more opinionated because our users are asking us to be much more opinionated”

Part of your premise is that I have high-security and low-security services. But the problem is that things don’t stay in that low-security bucket. We evolve over time. When I first signed up for Facebook in 2006, I didn’t have anything useful there. Nowadays, it’s much more important. And how many people go back and upgrade? It’s quite rare. The other problem is we see lots and lots of these lateral attacks, where someone doesn’t go directly after your bank. They go after your friend or your assistant, and they use that account to send a message that’s convincingly from them, asking for a wire transfer or asking for the answer to your secret question, which they can then go and play back into the site. So the more of these accounts that you leave loosely protected, the more exposed you are to that.

People often push back against the federated model, saying we’re putting all our eggs into one basket. It sort of rolls off the tongue, but I think it’s the wrong metaphor. A better metaphor might be a bank. There are two ways to store your hundred dollars: you could spread it around the house, putting one dollar in each drawer, and some under your mattress and all of that. Or you could put it in a bank, which is one basket, but it’s a basket that is protected by 12-inch thick steel doors. That seems like the better option!

You also ran into some security concerns around the Titan Security Key last year. Some security experts were worried that any key made in China was potentially vulnerable. How much do you worry about supply chain interference?

“some of the innuendo from Apple was a little annoying”

It’s definitely part of the threat model. It’s something that we engineered for, all the way down to the protocol. I do think some of the response to the Titan key was unnecessarily alarmist, for a few reasons. One is that those concerns had always been part of our mindset. So we said, we won’t trust people, regardless of what country they’re in. That’s why the chip is sealed. The chip has an attestation that’s available for it. The chip is not field-upgradeable. In fact, that’s why we just did all these replacements, because by design, we can’t push code out there to change it. There were many reasons why I didn’t think that was the real threat people should be concerned about.

Over the past few years, there’s been a major shift in the way people think about tech privacy — not just trusting companies less, but also being aware of all the different ways things can go bad once all this data is in the open, getting shared and combined in different ways. How have you responded to that?

We’ve really gone through a paradigm shift. We used to say, it’s your data, we’ll just let you make a decision and then that’s on you. Now we’re being much more opinionated because our users are asking us to be much more opinionated. You can see that manifest in the security checkup, which now actually gives you a personalized set of recommendations based on your own patterns. It used to just say, “You have 16 different devices,” and leave it to you to see if anything was suspicious. And users said, “No, why don’t you tell me what looks suspicious?” So now we say, “You have 16 devices. These four we haven’t seen in 90 days. Are you sure you didn’t give one to a friend and forget to sign out or, you know, sell it on eBay?” There’s this delicate balance: how do you nag someone just the right amount, but also give them that sort of editorial level of protection that they’re expecting?

There is this concern with the Apple sign-in that even if it’s a positive product, they’re being too heavy-handed in forcing it on developers. You could say the same thing about a lot of the Google projects you’re talking about. Do you worry about nudging users too hard?

I worry about it. That’s the problem with cynicism. Cynicism is when people don’t trust your motives. You say, “Here’s a product that will keep you more safe,” and people say, “Hey, what are you going to do with it?” I think it’s an ecosystem problem. We have a competitor who was collecting phone numbers as a security challenge, but then allegedly also using them to build up a graph for advertising re-targeting. That’s bad for the whole ecosystem because it makes people not trust us.

We try to set a very high bar. And we continue looking for places where we can refocus and re-audit our best practices and keep raising that bar. But to some degree, it’s an ecosystem problem. The worst behavior in the market is the one that everyone sees. And that’s why some of the innuendo from Apple was a little annoying, from our standpoint. Because we’re trying to really hold ourselves to a high standard.