Facebook seems to have it backward: Faulted for not rejecting false, racist political ads, it rejects an anti-racist one

Rekha Basu | The Des Moines Register


Facebook is under fire again over its ad policies, this time by its own staff. But as my recent experience with the social media giant suggests, the company still doesn't seem to get it.

Last month hundreds of Facebook employees signed a letter calling on CEO Mark Zuckerberg to hold political ads to the same standard as other ads. They complained the social media network doesn’t check the accuracy of political ads before running them, enabling false claims to be spread as fact. Current policy, the employees said, "doesn't protect voices but instead allows politicians to weaponize our platform."

Facebook has contended its approach is "grounded in Facebook’s fundamental belief in free expression." But as the employees declared in their letter, "Free speech and paid speech are not the same thing."

And if my case is any example, Facebook isn't really committed to free expression either.

Facebook solicits users to boost the visibility of their posts on business/professional pages for a fee. I'd done that once or twice before with columns I'd linked on my column page Rekha Basu (Des Moines Register). But on Oct. 17 I tried to boost one objecting to a gun show vendor's promotion of racism, white supremacy and Nazism in signs, posters and flags. Facebook rejected it.

The lead-in I'd written above the photo and column link said, "A gun show that markets explosives, real and simulated, using Nazi and white supremacist imagery, should have no place on Iowa State Fair grounds."

"Boosted post not approved," was all I heard back from Facebook. .

I called and emailed the press contacts listed for Facebook to find out specifics on its ad policy and what provision I might have violated. After all, the column had been vetted by editors at a mainstream daily newspaper and published on the Register website. I got back an email asking for a screenshot of my rejected post, which I sent. Hearing nothing back, I made another attempt to boost the post. It was rejected again.

A statement on Facebook Business says advertisers are prohibited from using ads to discriminate against individuals or groups of people based on race, ethnicity, national origin, religion, sex and other legally protected status. You have to sign an agreement to the policy, which I did.

So why would Facebook have a problem with a post denouncing such discrimination, as mine did? My piece condemned hateful materials being displayed amid the sales of explosives and weapons, appearing to entice violence against particular groups.

Civil rights activists last year complained that Russia’s Internet Research Agency had run Facebook ads around the 2016 elections encouraging racial and religious hatred and attempting to suppress minority voter turnout. Facebook acknowledged the problem and said it was taking steps to address the targeting of racial and ethnic minorities. But take a deeper dive into its policies on the social-issue or political ads that require prior authorization, and some of the examples it offers seem right in line with Facebook's stated principles against discrimination and for free speech. Such as:

"Free speech is under attack across college campuses — excessive censorship needs to stop.”

“It's time for us all to stand up and demand equal rights for women.”

A few more examples:

"Is socialism on the rise?"

"Stand up and defend gun rights."

"U.S. tax dollars should not fund socialized medicine."

And even:

“Elections next month will reveal a lot about voters' views on where the country is headed.”

Why would opinions like those not be allowed in political ads? Where Facebook has been rightfully criticized is for not verifying the facts or taking down posts after they were proven untrue. One example was a Trump campaign ad declaring Joe Biden had offered Ukraine $1 billion in aid in exchange for its forcing out a man investigating a company tied to Biden’s son, Hunter. Instead, Biden and other members of the Obama administration and foreign leaders had pushed for the prosecutor's removal for allegedly ignoring corruption, the Times reported.

Now the Republican-led Senate Intelligence Committee has issued a report warning of new signals that Russia and other countries may try to interfere in the 2020 elections. How will Facebook handle a potential repeat of 2016 disinformation ads?

Also, the FBI last week arrested a self-proclaimed white supremacist who had bought what he believed were pipe bombs and dynamite from undercover agents to blow up Temple Emanuel synagogue in Pueblo, Colorado. He allegedly had shown up wearing a Nazi armband and carrying a copy of “Mein Kampf,” and is said to have planned to poison members of the congregation as part of a “racial holy war.”

Yet Facebook finds it inappropriate to run boosted posts against just such propaganda. It seems the company has things backward. The question is why.