A collection of leaked documents appears to show Facebook redefining rules around hate speech and re-educating content moderators following the Charlottesville protest in 2017.

According to leaked internal documents obtained by Motherboard, the 2017 Charlottesville protests were a moment of intense introspection for Facebook, which scrambled to redefine what it considers “hate speech” and to educate content moderators about American white nationalists. One training document obtained shortly after the protests reads: “Recent incidents in the United States (i.e. Charlottesville) have shown that there is potentially confusion about our hate org policies and the specific hate orgs in specific markets.”

A log of updates to hate speech policy documents shows some of the new phrases and sentiments that were defined as hate speech following the Charlottesville protest. In November 2017, trainers added the comparison of Mexican people to worms as an example of hate speech; in December they added the comparison of Muslims to pigs; and in February, trainers added that referring to transgender people as “it” rather than by their preferred pronouns was hate speech.

Five months after the Charlottesville protests, Facebook added slides explaining the social media firm’s position on white nationalism, supremacy, and separatism. Interestingly, the slides stated that the company does not “allow praise, support, or representation of white supremacy as an ideology” but does allow positions on white nationalism and separatism to be praised or discussed.

Facebook notes that nationalism as an ideology is not specifically racist, stating that it is an “extreme right movement and ideology, but it doesn’t seem to be always associated with racism (at least not explicitly).” Facebook then notes that “In fact, some white nationalists carefully avoid the term supremacy because it has negative connotations.”

However, Facebook notes that the difference between nationalism and supremacy, as expressed by some users, can be hard to distinguish. “Overlaps with white nationalism/separatism, even orgs and individuals define themselves inconsistently,” says one slide in a section titled “challenges” for white supremacy. Another slide asks: “Can you say you’re a racist on Facebook?” Facebook’s official response to this is “No. By definition, as a racist, you hate on at least one of our characteristics that are protected.”

High-profile users, individuals, and organizations are classified as hate groups based on “strong, medium, and weak signals,” according to another slide. A strong signal would be a user who is a founder of a prominent “h8 org,” as Facebook refers to them; a medium signal would include using a logo or symbol from a banned hate group or repeatedly using dehumanizing language towards certain groups.

Facebook told Motherboard in a statement that they evaluate “whether an individual or group should be designated as a hate figure or organization based on a number of different signals, such as whether they carried out or have called for violence against people based on race, religion or other protected categories.”

Facebook says that it does not classify every organization listed as a hate group by the Anti-Defamation League as a hate group on its platform, but states that: “Online extremism can only be tackled with strong partnerships which is why we continue to work closely with academics and organisations, including the Anti-Defamation League, to further develop and refine this process.”

Facebook commented on its attempts to crack down on certain groups on the platform, saying: “Our policies against organized hate groups and individuals are longstanding and explicit — we don’t allow these groups to maintain a presence on Facebook because we don’t want to be a platform for hate. Using a combination of technology and people we work aggressively to root out extremist content and hate organizations from our platform.”