It seemed easy enough at the time for Facebook to minimize the size and scope of the mess it made. The company first downplayed the problem by focusing on the $100,000 of ads the IRA purchased from Facebook, a nominal amount compared to the nearly $13 billion in ad revenue Facebook made in the fourth quarter of 2017 alone. But the numbers only grew from there. In his testimony, Stretch revealed that 126 million people had been exposed to Russian propaganda on Facebook. Asked how many people were reached on Instagram, Stretch added another 20 million to the figure. As recently as March, the company still had not calculated how many people followed Russian trolls on Instagram. And just last week, it announced that it had found and suspended nearly 300 more accounts and pages linked to the IRA across Facebook and Instagram.

Facebook's public shaming continued shortly after the hearings, when the House Intelligence Committee published some of the ads and other content the Russian trolls shared on both Facebook and Twitter. For most people, it was the first concrete look at both the divisiveness and ugliness of the content that rocked the election.

The hits kept coming. In January, WIRED reported that special counsel Robert Mueller had interviewed at least one Facebook employee as part of his ongoing inquiry into Russian interference in the 2016 election. Just a month later, Mueller filed a 37-page indictment of 13 individuals associated with the IRA, which laid out exactly how they "conducted operations on social media platforms such as YouTube, Facebook, Instagram, and Twitter." Not only did they create phony Facebook Pages like Blacktivist and Heart of Texas, but they also sent Facebook messages to then-candidate Donald Trump's Florida staff, asking for help organizing pro-Trump flash mobs throughout the swing state.

As the news broke, Facebook announced a spate of changes to its political advertising policies, including plans to label political ads as such and create an archive where people can see the ads, who paid for them, and information about how much they cost and who they reached. In early March, at least, it seemed Facebook had the public relations crisis under control.

The Cambridge Analytica Mess

Over St. Patrick's Day weekend, The New York Times, alongside The Guardian and The Observer, published simultaneous stories reporting that Cambridge Analytica and its British parent company SCL had accessed 50 million Facebook users' data without their knowledge or permission. What's more, Facebook acknowledged that it had known about the violation since 2015. The company tried to preempt the story by suspending both companies, as well as a former SCL employee turned whistleblower named Christopher Wylie and a researcher named Aleksandr Kogan, who gave Cambridge and SCL the data to begin with.

By that Monday, the company's stock price began to free fall. Suddenly, Facebook was forced to answer not just for its political advertising policies from 2016, but for its entire history of data privacy policies, focusing especially on its Social Graph API, which allowed developers to build apps on top of Facebook—and scrape up reams of data from users' unwitting friends while they were at it. Facebook phased those capabilities out in 2015, but the Cambridge Analytica scandal revealed that the company had no mechanism in place to ensure developers weren't sharing and misusing that data.

Right on cue, lawmakers began calling Facebook back to Congress. This time, they wanted Zuckerberg. And yet, for five days after the story broke, the camera-shy billionaire was remarkably silent. In the meantime, more dirt about Cambridge Analytica rose to the surface, thanks to an undercover video from the UK's Channel 4 that showed Cambridge Analytica's CEO Alexander Nix discussing the use of extortion and bribery on behalf of clients. Facebook hadn't just leaked user data to any old data miners; it had leaked it to apparently ignoble ones.