Earlier this month, Facebook admitted that its algorithm approved $100,000 worth of ads pointing to fake news pages between June 2015 and May 2017. After an internal investigation, the company found that both the accounts that purchased the ads and the pages they advertised were based in Russia, suggesting that a fake news operation is running out of the country. In response, Facebook trained its algorithm to be better at blocking ads that point to fake news, but whatever improvements it implemented clearly weren't enough.

The social network removed the anti-Semitic categories ProPublica found after the publication told the company about them. Rob Leathern, the company's product management director, said in a statement:

"We don't allow hate speech on Facebook. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes. However, there are times where content is surfaced on our platform that violates our standards. In this case, we've removed the associated targeting fields in question. We know we have more work to do, so we're also building new guardrails in our product and review processes to prevent other issues like this from happening in the future."

Facebook indeed has a lot "more work to do," because a follow-up investigation by Slate shows that the ad network also recognizes "Kill Muslimic Radicals" and "Ku-Klux-Klan" as valid ad categories.

Update: A Facebook rep has reached out and clarified that Facebook's ad network takes categories from how people describe themselves in their profiles, with no algorithm involved. The company has removed advertisers' ability to target people based on "self-reported targeting fields" -- its term for the sections of your profile where you can enter your education, employer and the like -- until it finds a solution to the issue. Here's the company's updated statement: