“I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos” wrote Facebook Chief Security Officer Alex Stamos on Saturday in a reeling tweetstorm. He claims journalists misunderstand the complexity of fighting fake news, that they deride Facebook for thinking algorithms are neutral when the company knows they aren’t, and he encourages reporters to talk to engineers who actually deal with these problems and their consequences.

Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks. — Alex Stamos (@alexstamos) October 7, 2017

Yet this argument minimizes many of Facebook’s troubles. The issue isn’t that Facebook doesn’t know algorithms can be biased, or that people don’t realize these are tough problems. It’s that the company didn’t anticipate abuses of its platform and work harder to build algorithms or human moderation processes that could block fake news and fraudulent ad buys before they impacted the 2016 U.S. presidential election, rather than scrambling to address them now. And his tweetstorm completely glosses over the fact that Facebook will fire employees who talk to the press without authorization.

[Update, 3:30pm PT: I commend Stamos for speaking so candidly to the public about an issue where more transparency is appreciated. But at the same time, Facebook holds the information and context he says journalists, and by extension the public, lack, and the company is free to bring in reporters for the necessary briefings. I’d certainly attend a “Whiteboard” session like the ones Facebook has often held for reporters in the past on topics like News Feed sorting or privacy controls.]

Stamos’ comments hold weight because he’s leading Facebook’s investigation into Russian election tampering. He was the Chief Information Security Officer at Yahoo before taking the CSO role at Facebook in mid-2015.

The sprawling response to recent backlash comes right as Facebook starts making the changes it should have implemented before the election. Today, Axios reports that Facebook just emailed advertisers to inform them that ads targeted by “politics, religion, ethnicity or social issues” will have to be manually approved before they’re sold and distributed.

And yesterday, Facebook updated an October 2nd blog post about disclosing Russian-bought election interference ads to Congress to note that “Of the more than 3,000 ads that we have shared with Congress, 5% appeared on Instagram. About $6,700 was spent on these ads,” implicating Facebook’s photo-sharing acquisition in the scandal for the first time.

Stamos’ tweetstorm was set off by Lawfare associate editor and Washington Post contributor Quinta Jurecic, who commented that Facebook’s shift towards human editors implies that saying “the algorithm is bad now, we’re going to have people do this” actually “just entrenches The Algorithm as a mythic entity beyond understanding rather than something that was designed poorly and irresponsibly and which could have been designed better.”

Here’s my tweet-by-tweet interpretation of Stamos’ perspective:

I appreciate Quinta's work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV. https://t.co/CWulZrFaso — Alex Stamos (@alexstamos) October 7, 2017

He starts by saying journalists and academics don’t get what it’s like to actually implement solutions to hard problems, though clearly no one has the right answers yet.

I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos. — Alex Stamos (@alexstamos) October 7, 2017

Facebook’s team has supposedly been pigeonholed as naive about real-life consequences, or as too technical to see the human impact of its platform. But the outcomes speak for themselves about the team’s failure to proactively protect against election abuse.

Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks. — Alex Stamos (@alexstamos) October 7, 2017

Facebook gets that people code their biases into algorithms, and works to stop that. But censorship that results from overzealous algorithms hasn’t been the real problem. Algorithmic negligence of worst-case scenarios for malicious usage of Facebook products is.

In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues. — Alex Stamos (@alexstamos) October 7, 2017

Understanding of the risks of algorithms is what’s kept Facebook from over-aggressively implementing them in ways that could have led to censorship, which is responsible but doesn’t solve the urgent problem of abuse at hand.

For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda. — Alex Stamos (@alexstamos) October 7, 2017

Now Facebook’s CSO is calling journalists’ demands for better algorithms fake news, because these algorithms are hard to build without becoming a dragnet that attacks innocent content too.

Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data. — Alex Stamos (@alexstamos) October 7, 2017

What is totally false might be somewhat easy to spot, but the polarizing, exaggerated, opinionated content many see as “fake” is tough to train AI to catch, because only nuance separates it from legitimate news. That’s a valid point.

A bunch of the public research really comes down to the feedback loop of "we believe this viewpoint is being pushed by bots" -> ML — Alex Stamos (@alexstamos) October 7, 2017

Stamos says it’s not as simple as fighting bots with algorithms because…

So if you don't worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it's easy! — Alex Stamos (@alexstamos) October 7, 2017

…Facebook would end up becoming the truth police. That might lead to criticism from conservatives if their content is targeted for removal, which is why Facebook outsourced fact-checking to third-party organizations and reportedly delayed News Feed changes to address clickbait before the election.

Likewise all the stories about "The Algorithm". In any situation where millions/billions/tens of Bs of items need to be sorted, need algos — Alex Stamos (@alexstamos) October 7, 2017

Even though Facebook prints money, some datasets are still too big to hire enough people to review manually, so Stamos believes algorithms are an unavoidable tool.

My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences. — Alex Stamos (@alexstamos) October 7, 2017

Sure, journalists should do more of their homework, but Facebook employees or those at other tech companies can be fired for discussing work with reporters if they don’t have PR approval.

And to be careful of their own biases when making leaps of judgment between facts. — Alex Stamos (@alexstamos) October 7, 2017

It’s true that as journalists fight for the public good, they may overstep the bounds of their knowledge. Still, Facebook’s best strategy here is likely to be more thick-skinned about criticism and keep making progress on the necessary work, rather than complaining about the company’s treatment.

If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased — Alex Stamos (@alexstamos) October 7, 2017

Journalists do sometimes tie events up in a neat bow when they’re really messier than that, but that doesn’t mean we’re not at the start of a cultural shift in Silicon Valley about platform responsibility.

If your piece assumes that a problem hasn't been addressed because everybody at these companies is a nerd, you are incorrect. — Alex Stamos (@alexstamos) October 7, 2017

Stamos says it’s not a lack of empathy or understanding of the non-engineering elements to blame, though Facebook’s idealistic leadership did certainly fail to anticipate how significantly its products could be abused to interfere with elections, hence all the reactive changes happening now.

If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common. — Alex Stamos (@alexstamos) October 7, 2017

Another fair point, as we often want aggressive protection against views we disagree with while fearing censorship of our own perspective when those things go hand in hand. But no one is calling for Facebook to be haphazard with the creation of these algorithms. We’re just saying it’s an urgent problem.

If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad — Alex Stamos (@alexstamos) October 7, 2017

This is true, but so is the inverse. Facebook needed to think long and hard about how its systems could be abused if speech wasn’t controlled in any way and fake news or ads were used to sway elections. Giving everyone a voice is a double-edged sword.

Likewise if your call for data to be protected from governments is based upon who the person being protected is. — Alex Stamos (@alexstamos) October 7, 2017

Yes, people should take a holistic view of free speech and censorship, knowing any policy must apply fairly to both sides of the aisle to be coherent and enforceable.

A lot of people aren't thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers. — Alex Stamos (@alexstamos) October 7, 2017

This is a highly dramatic way of saying be careful what you wish for, since censorship of those you disagree with could balloon into censorship of those you support. But it also positions Facebook as “the gods.” Yes, we want better protection, but no, that doesn’t mean we want overly aggressive censorship. It’s on Facebook, the platform owner, to strike this balance.

Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. FIN — Alex Stamos (@alexstamos) October 7, 2017

Not sure if this was meant to lighten the mood, but it made it sound like his whole tweetstorm was flippantly produced on a whim, which seems like an odd way for the world’s largest social network to discuss its most pressing scandal ever.

Overall, everyone needs to approach this discussion with more nuance. The public should know these are tough problems with potential unintended consequences for rash moves, and that Facebook is aware of the gravity now. Facebook employees should know that the public wants progress urgently, and while it might not understand all the complexities and sometimes makes its criticism personal, it’s still warranted to call for improvement.