This month, two magnificently embarrassing public-relations disasters rocked the Facebook money machine like nothing else in its history.

First, Facebook revealed that shady Russian operators purchased political ads via Facebook in the 2016 election. That’s right, Moscow decided to play a role in American democracy and targeted what are presumed to have been fake news, memes, and/or various bits of slander (Facebook refuses to disclose the ad creative, though it has shared it with special counsel Robert Mueller) at American voters in an attempt to influence the electoral course of our 241-year-old republic. And all that on what used to be a Harvard hook-up app.

WIRED OPINION: Antonio García Martínez (@antoniogm) was the first ads targeting product manager on the Facebook Ads team, and author of the memoir Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley. He wrote about the internet in Cuba in WIRED's July issue.

Second, reporters at ProPublica discovered that, via Facebook's publicly available advertising interface, advertisers could easily target users who had expressed interest in bigoted topics like "how to burn Jews." In the current political climate, the optics just couldn't be worse.

For me, reading the coverage from the usual tech-journalist peanut gallery was like being a father watching his son get bullied on a playground for the first time: How can this perfect, innocent creature be assailed by such ugliness?

You’re likely thinking: How can the sterile machinery of the Facebook cash machine inspire such emotional protectiveness? Because I helped create it.

In 2011, I parlayed the sale of my failing startup to Twitter into a seat on Facebook’s nascent advertising team (for the longer version, read the first half of my Facebook memoir, Chaos Monkeys). Improbably, I was tasked with managing the ads targeting team, an important product that had until then dithered in the directionless spontaneity of smart engineers writing whatever code suited their fancy.

"Targeting" is polite ads-speak for the data levers that Facebook exposes to advertisers, allowing that predatory lot to dissect the user base—that would be you—like a biology lab frog, drawing and quartering it into various components, and seeing which clicked most on its ads.

My first real task as Facebook product manager was stewarding the launch of the very system that was the focus of the recent scandal: Code-named KITTEN, it ingested all manner of user data—Likes, posts, Newsfeed shares—and disgorged that meal as a large set of targetable "keywords" that advertisers would choose from, and which presumably marked some user affinity for that thing (e.g. "golf," "BMW," and definitely nothing about burning humans).
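The mechanics of a KITTEN-style pipeline can be sketched in a few lines. The toy below is purely illustrative: the data, the `VOCAB` whitelist, and the `extract_keywords` helper are all invented for this sketch and bear no relation to Facebook's actual implementation.

```python
from collections import Counter

# Hypothetical raw activity (Likes, posts, shares) for a few users.
user_activity = {
    "alice": ["Liked: BMW fan page", "Posted: great round of golf today"],
    "bob":   ["Shared: BMW M3 review", "Liked: Golf Digest"],
}

# A naive affinity extractor: count candidate terms found in activity text.
VOCAB = {"bmw", "golf"}

def extract_keywords(events):
    terms = Counter()
    for event in events:
        for word in event.lower().replace(":", " ").split():
            if word in VOCAB:
                terms[word] += 1
    return terms

# Invert into advertiser-facing segments: keyword -> set of users
# an advertiser could then target.
segments = {}
for user, events in user_activity.items():
    for term in extract_keywords(events):
        segments.setdefault(term, set()).add(user)

print({k: sorted(v) for k, v in segments.items()})
# {'bmw': ['alice', 'bob'], 'golf': ['alice', 'bob']}
```

The point of the inversion step is the crux: once any phrase a user engages with can become a segment key, nothing in the mechanism distinguishes "golf" from something monstrous; that filtering has to be bolted on afterward.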

Later that year, in another improbable turn of events that was routine in those chaotic, pre-IPO days, I was tasked with managing the cryptically named Ads Quality team. In practice, we were the ads police, a hastily assembled crew of engineers, operations people, and one grudging product manager (me), charged with the thankless task of ads law enforcement. It was us defending the tiny, postage-stamp-sized ads (remember the days before Newsfeed ads?) from the depredations of Moldovan iPad offer scammers, Israeli beauty salons uploading images of shaved vulvas (really), and every manner of small-time fraudster looking to hoodwink Facebook’s 800 million users (now, it’s almost three times that number).

So now you’ll perhaps understand how the twin scandals—each in a product that I helped bring to fruition—evoked such parental alarm.

What can Facebook do about all this?

Let’s set aside the ProPublica report. Any system that programmatically parses the data effluvia of gajillions of users and outputs them into targeting segments will necessarily produce some embarrassing howlers. As BuzzFeed and others highlighted in their coverage of the scandal, Google allows the very same offensive targeting. The question is how quickly and thoroughly those terms can be deleted. It’s a whack-a-mole problem, one among many Facebook has.

Also, there’s zero evidence that any actual ads targeting was done on these segments (beyond the $30 that ProPublica spent). Actual ad spend on the million-plus keywords that Facebook offers follows what’s called a long-tail distribution: obscure terms get near-zero spend, and Facebook’s own tools show the reach for the offensive terms was minimal. Keyword targeting itself isn’t very popular anymore. Its lack of efficacy is precisely why we shipped far scarier versions of targeting around the time of the IPO; for example, targeting that’s aware of what you’ve browsed for online (and purchased in physical stores) nowadays attracts more smart ad spend than any keywords.
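What a long-tail distribution means in practice can be made concrete with a toy Zipf-style model. The numbers here are invented for illustration, not Facebook data: the sketch only shows the shape of the claim, that a small head of popular keywords absorbs most spend while the vast tail gets essentially none.

```python
# Toy Zipf model: the keyword at rank r gets spend proportional to 1/r.
N = 1_000_000  # order of magnitude of keywords on offer, per the text

total = sum(1 / r for r in range(1, N + 1))  # normalizer (harmonic number)

def share(rank):
    """Fraction of total spend going to the keyword at this rank."""
    return (1 / rank) / total

# The head: the 100 most popular keywords.
top_100 = sum(share(r) for r in range(1, 101))
print(f"top 100 keywords: {top_100:.0%} of spend")

# The tail: the 100,000 least popular keywords.
tail = sum(share(r) for r in range(900_001, N + 1))
print(f"bottom 100,000 keywords: {tail:.2%} of spend")
```

Under these assumed parameters, the top 100 terms capture roughly a third of all spend, while the bottom 100,000 combined capture well under one percent, which is why an obscure, offensive term can exist as a segment yet attract essentially no real advertiser money.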