This week on the interview episode of The Vergecast, Nilay Patel chats with Alex Stamos, director of Stanford’s Internet Observatory and former chief security officer for Facebook.

Stamos was at Facebook during the Cambridge Analytica scandal, so the discussion covers a lot of what was going on at Facebook and how the scandal has changed Facebook since. They also get into the trade-offs big platforms have to make between issues like end-to-end encryption, working with law enforcement, keeping users secure from bad actors, and what the threats are for platforms in the upcoming election. Below is a lightly edited excerpt of the conversation.

Nilay Patel: Obviously I want to talk to you about Facebook. But we just had Michael Bennet, senator from Colorado, on the show, and he wrote a book about election security. When you think about Facebook, it was the center of election interference in 2016 — basically Russians posting memes on Facebook. Do you think we’re ready for 2020 right now? Because Bennet really did not think that we were ready.

Alex Stamos: Yeah, so I’ve met with Senator Bennet and we’ve talked a lot about this kind of stuff. We just put out a report from our group at Stanford. You can go to electionreport.stanford.edu if you want to see it, but we have around 40 recommendations for how Congress, tech companies, and individuals can prepare for 2020. If we look at 2016, there were actually four different kinds of interference by the Russians. You have what you refer to, which is the online meme wars, mostly on Twitter and Facebook. You have the campaign of breaking into Podesta’s email, breaking into the DNC, and then leaking out information in a way that changed the overall information environment to the detriment of Hillary Clinton. There was the overt propaganda campaign, so Russia Today and Sputnik and such. And then there were the direct attacks against election infrastructure.

So I think our response as a society has been different for those four different lanes. The kind of meme lord stuff, I think, is actually where we’ve been best prepared, in that the responsibility there falls pretty cleanly to the tech platforms, and they have done things. The big difference between now and 2016 is that organized government propaganda was not anybody’s job at the tech companies in 2016. So I kind of inherited this as an issue — I got to lead the team that worked on it — because we had an intelligence team whose job it was to look for governments doing bad things online. That was based upon a very traditional idea of what government interference online looks like: malware, account takeovers, suppression of dissidents. It did not include, you know, hot takes and edge-lording by people pretending to be Black Lives Matter activists who were actually in St. Petersburg. So a lot has changed ... That is now an entire subfield of trust and safety being invented right now at places like Google, Twitter, and Facebook, and there are people whose entire job it is to do that.

And then the government has reacted as well, in that there are people on the government side whose job it is to work on these issues. I think if you took a big look at 2016 as a society, we had this big blind spot because it was really nobody’s job to be tracking the Internet Research Agency and the other kinds of online propaganda outlets — it wasn’t considered a traditional part of cybersecurity. And so now it’s the NSA and Cyber Command. There are people working on this, there’s a foreign influence task force at the FBI, there are people working on this at DHS, and they’re working with the folks at the companies. So whether or not the precautions are great, at least this is now a field that people are focusing on, and that was not true at all this time three years ago.

How is Facebook doing and how prepared are they for the election?

So I think Facebook is pretty well prepared for what happened in 2016. It has the broadest set of advertising transparency. Unlike Google, Facebook considers issue ads to be political ads. I think that’s a really important step, because under our assessment at Facebook during our investigation, something like 80 percent of the Russian ads that ran were not illegal under US law because they were not electioneering. So Facebook actually takes a much broader definition than Google does of what counts as a disallowable political ad. I think Facebook has the largest team.

I think the hardest thing for Facebook is going to be trying to predict how the non-Facebook products are going to be used. Instagram has some of the same problems Twitter has, in that you can have a pseudonymous identity on Instagram. The fact that Instagram is mostly images gives some benefit, but not a ton. As you know, the Russian troll factories have professional meme farms — they have graphic designers using Illustrator all day to create memes. So “is Instagram ready?” is actually a big question. I’m guessing that Instagram is well behind what’s happened on Facebook.com. And then there’s the use of WhatsApp, which is the number one source of disinformation in Southeast Asia. Will WhatsApp, with its end-to-end encryption, be used in the same way in the United States? It seems unlikely in 2020, but after 2020, as people move to those platforms, I think they’ll become an issue.