On September 26, a handful of Facebook executives joined a large group of civil rights advocates in Atlanta for a forum addressing discrimination in technology. Sheryl Sandberg, the company’s chief operating officer, sat in the front row. The event signaled a new level of collaboration between Facebook and activists who had long been at odds.

But two days before the gathering, Facebook made a decision that hit the civil rights community like a gut punch: it announced it would allow politicians to publish false information in Facebook posts, arguing that politicians’ statements are newsworthy and deserve protections that ordinary users’ posts do not. The company also said it would not factcheck politicians’ paid ads.

After years of trying to get Facebook to remove hate speech and voter suppression from its platform, the civil rights advocates saw Facebook’s move as establishing a truck-sized loophole in its content rules just to avoid taking down President Donald Trump’s campaign ads. Ahead of 2020, they fear the new guidelines will allow the free flow of racist content that puts people at risk of harm and suppresses votes.

“We were taken a bit aback that not only would this news come out, but they would break this news right before the forum without giving us a heads up or having a discussion,” says Rashad Robinson, the president of the civil rights group Color of Change, which organized the event. Robinson notes that the decision was made despite previous calls from his organization and others for Facebook to assess the impact of its policies on minorities before implementing them.

The lack of communication over the policy led to tense moments and anger at the gathering, just as advocates felt historic progress was being made. Henry Fernandez, a fellow at the Center for American Progress, accused Facebook from the stage of inventing the policy to avoid a confrontation with Trump. “Because Facebook does not want to say this is really about Trump, it has extended this protection to all elected officials,” Fernandez said. “So now if you are a school board member or sheriff in a small town, you can say the most vile things on Facebook because 300 people voted for you? That’s bad policy, made in fear that conservatives will say Facebook is silencing Trump.”

The reaction from Facebook representatives in Atlanta was that feedback was welcome and that the policy was still evolving, according to advocates who attended. But in the weeks since the forum, Zuckerberg has doubled down, defending the new rule multiple times even as its contours remain hazy and lawmakers and some Facebook employees urge him to reconsider.

Civil rights advocates believe the new policy, which Zuckerberg says is intended to preserve free speech, has dangerous implications. At a hearing before the House Financial Services Committee last week, Rep. Sean Casten (D-Ill.) asked Facebook CEO Mark Zuckerberg whether a member of the American Nazi Party running for office could post hate speech that would be taken down if posted by someone who wasn’t a candidate. Zuckerberg dodged the question, but Casten, who pointed out that a Nazi actually ran for Congress in his home state last year, had highlighted how the rule could become a loophole for fascist propaganda. Rep. Alexandria Ocasio-Cortez (D-N.Y.) then asked Zuckerberg if she could purchase Facebook ads falsely accusing her colleagues of taking certain votes. Zuckerberg said “probably.”

Facebook has set some limits to this hands-off policy. In a major speech at Georgetown University defending the rule, Zuckerberg said “there are exceptions, and even for politicians we don’t allow content that incites violence or risks imminent harm—and of course we don’t allow voter suppression.” Those exceptions mean that Facebook has put itself in the position of deciding whether any given post or ad from a politician incites violence, risks imminent harm, or suppresses votes.

Just last week, a state senator in North Dakota posted a picture of what he claimed was Rep. Ilhan Omar (D-Minn.) hoisting a gun at an Al Qaeda training camp. “She is trying to get this picture blocked,” Republican Oley Larsen wrote on Facebook. “Share it everywhere.” In another post, Larsen called Omar a “terrorist.” While the image has been widely tied to Omar, it actually depicts a 1978 Somali military training exercise. (Omar was born in 1982.) Larsen came under pressure from his own party and ultimately removed the posts after acknowledging they contained falsehoods, but the posts were permissible under Facebook’s rule. A Facebook spokeswoman, Ruchika Budhraja, confirmed that had the posts stayed up, the company would not have deleted them; instead, it would have factchecked them, labeled them false, and demoted them so that fewer people would see them. Budhraja says the company’s newsworthiness policy requires a case-by-case balancing of what it perceives is important for the public to see against the risk of harm.

Facebook did not deem Larsen’s post an imminent threat, but Omar disagreed. She tweeted that Larsen’s post was “designed to stir up hate and violence” and was “threatening my life.” Omar, one of the first two Muslim women elected to Congress, is a frequent target of caustic attacks from President Trump and the far right. That invective has come with multiple death threats, and she has been forced to travel with an increased security detail.

Trump has targeted the representative not just in speeches and tweets, but also in Facebook ads in which Trump’s campaign says that Omar is anti-Semitic and suggests she is sympathetic to terrorists. “The Democratic Party has sunk so low that they’re embracing Anti Semite Ilhan Omar, who recently minimized the terrorist attacks of 9/11 as ‘some people did something,’” a Trump campaign ad this spring said, referring to an out-of-context line from a speech Omar gave about being Muslim in America after 9/11.

“False ads, false speech, misinformation by politicians is not just about a lie: It’s endangering people’s lives,” says Madihha Ahussain of Muslim Advocates, a civil rights group that has been pushing Facebook for years to remove anti-Muslim groups and content. “They’re really missing the boat.”

Advocates have also been critical of Facebook for allowing Trump’s campaign to publish more than 2,000 ads using the word “invasion” to paint a picture of menacing immigrants storming across the Southern border. “That language is used very specifically by the El Paso shooter, and it’s also used by the synagogue shooter in Pittsburgh,” says Fernandez, who co-chairs Change the Terms, a coalition pushing tech companies to fight hate speech. According to Fernandez, Facebook’s decision to allow the language means “the standard that they’re using seems to go beyond the standard that’s actually used by mass killers who are anti-Semitic, anti-Muslim, or white nationalist, or anti-Latino.”

“This policy allows politicians to use dangerous speech that history has shown leads to mass killings,” warns Fernandez, who points out that leaders from Hitler to Rwanda’s Léon Mugesera have used such language “to dehumanize minorities before mass killings.”

On a practical level, Zuckerberg and top company officials don’t yet appear to fully grasp the difficulty of carrying out the new policy. In 2011, political scientist Jennifer Lawless estimated that there are more than 500,000 elected officials in the United States, from Congress to county sheriffs to local water boards. Add in the candidates who run against them in both general elections and primaries, and there could easily be over a million Facebook users now entrusted with a special ability to spread lies.

On a press call last week, BuzzFeed’s Alex Kantrowitz asked Zuckerberg about this very point: “Can anyone just start running for their local school board and then start running ads with misinformation?” The answer was basically yes: as long as a person registers as a candidate, Facebook will treat them as one under the policy.

“Facebook is saying: If you are a politician who wishes to peddle in lies, distortion and not-so-subtle racial appeals, welcome to our platform,” Vanita Gupta, who runs the Leadership Conference on Civil and Human Rights, wrote in a Politico op-ed criticizing the policy. This policy, she argued, sets the stage for politicians to stir up disinformation and racial division as part of their 2020 election strategies. “Free expression is a core principle of our democracy. But so are fair elections.”

For years, Facebook has had a fraught relationship with civil rights groups, which have sued the company for violating federal civil rights laws and lobbied an intransigent Facebook to take discrimination seriously on the platform. Last November, the dialogue deteriorated when the New York Times reported that Facebook had hired a Republican public relations firm to undermine Color of Change. Amid a cascade of bad press, Facebook signaled a renewed dedication to working with civil rights groups. Sandberg took a more visible role in this effort, overseeing an audit of the company’s impact on civil rights, which is scheduled to wrap up in the next few months.

Over the last year, advocates have supported new efforts by the company to tackle voter suppression and hate speech, while cautioning that much more needs to be done. Color of Change has continued to push for Zuckerberg to leave Facebook’s board, where his outsized voting power gives him virtually total control over a company whose user base is larger than the world’s Christian population. Still, with the ice melting, Color of Change and Facebook set out to jointly host the Atlanta forum. “For so many years, all these conversations would happen behind closed doors, they would be off the record, we would be signing NDAs,” says Ahussain, referring to Facebook’s practice of having visitors to its offices sign non-disclosure agreements. “This is the first of its kind with the civil rights community and Facebook coming together in this way in a public setting.”

The decision to allow politicians to publish false information wasn’t the only step by Facebook that threw cold water on the forum. On October 14, Politico revealed that Zuckerberg hosted a series of dinners at his home for conservative personalities, including Sen. Lindsey Graham (R-S.C.), Fox News host Tucker Carlson, and podcaster Ben Shapiro. The effort, according to Politico, is an attempt to appease Trump, who believes Facebook censors conservatives—an accusation without evidence—and ward off regulation from his Justice Department. The story revealed that while civil rights advocates were enjoying access to Sandberg, the right was going straight to Zuckerberg, and being wined and dined.

“As far as we know, he hasn’t ever had meetings like this with advocates representing impacted communities,” says Ahussain, adding that at “the town hall, Sheryl Sandberg was there, but Mark Zuckerberg was not.”

Zuckerberg has framed his decision not to factcheck politicians’ ads as consistent with the free speech tradition exemplified by Martin Luther King Jr. and Frederick Douglass, both name-checked in his Georgetown speech. But it’s unclear whether he discussed the policy with anyone with civil rights expertise. Budhraja, the Facebook spokeswoman, would not say whether Zuckerberg has ever met in person with civil rights advocates. She said the company had weighed civil rights when designing the newsworthiness exception and that Zuckerberg had consulted civil rights advocates on the Georgetown speech, but she would not elaborate on either claim. According to the Washington Post, Gupta of the Leadership Conference on Civil and Human Rights expressed concerns about the policies in a call with Zuckerberg before the speech, arguing the company lacked the civil rights expertise to make responsible decisions. Zuckerberg reportedly responded that he had “several people from the Obama White House.” It’s unclear if either this conversation with Gupta or the presence of former Obama employees is what Facebook is referring to when it says it has involved the civil rights community in the creation of these policies.

“Regardless of what Mark Zuckerberg will say about the First Amendment, he is willing to allow politicians to lie, to cheat, and to suppress votes because he’s afraid of the Trump Justice Department and potential regulation,” argues Robinson. “And he’s willing to go down in history that way.”