White nationalist and neo-Nazi trolls have found a home on Google’s social media platform, Google Plus.

Many groups espousing racist rhetoric and hate speech were kicked off Facebook and Twitter after violence erupted at the “Unite the Right” rally last summer in Charlottesville, Va., where a woman was killed by a car that was driven into a crowd of protesters.

But those same voices have not been purged from Google Plus.


Groups openly posting explicitly racist and anti-Semitic content have established dozens of Google Plus communities, the equivalent of Facebook groups. The communities have follower counts that range from the hundreds to the thousands.

Some of the communities reviewed by The Hill are still active. Others appear to be abandoned but still serve as repositories of hate content with links directing users to hate speech and white nationalist communities on other platforms and websites.

Google Plus’s user policy stipulates that much of the content posted by such groups is not welcome on its platform. But many posts with racist or anti-Semitic content have remained on the social media platform for months and even years.

The groups are often easily found through searches for known neo-Nazi and white nationalist organizations, and their posts run the gamut of hateful speech and imagery, including swastikas.

One meme shows a black woman holding up a sign at a rally that says “They can’t kill us all #BlackLivesMatter,” accompanied by an image of a Klansman holding a shotgun underneath with text superimposed on it that reads “Challenge accepted.”

Another shows a blurry image of what appears to be a white man pointing a gun at a black toddler.

Reached for comment about the posts and communities, Google said that it takes “these issues incredibly seriously.”

“We have clear policies against violent content as well as content from known terrorist organizations and when we find violations, we take swift action,” a Google spokesperson said in an emailed statement.

“We have a team dedicated to keeping violent content and hate speech off our platforms, including Google+. And while we recognize we have more to do, we’re committed to getting this right.”

It’s unclear if Google was aware of the content before The Hill inquired about the communities and posts. The company declined to directly answer The Hill’s questions about when it became aware of the content.

Oren Segal, director of the Anti-Defamation League’s Center on Extremism, noted that while Google Plus isn’t the most popular platform for white nationalists, the content on it should concern people.

“The community and the recruitment happens there. Whether it’s Google Plus, Twitter or other platforms, it’s significant,” he said. “We’ve seen how online activity leads to real-world consequences.”

Jonas Kaiser, a researcher at Harvard’s Berkman Klein Center, noted that at least two of the links The Hill provided him were not accessible via a German IP address. Many types of anti-Semitic content are illegal in Germany, and Kaiser said that if Google is blocking such content there, it suggests that the company is aware of at least some of it.

Google declined to comment when asked about Kaiser’s analysis.

Hate speech from Nazi and white supremacist groups is not unique to Google Plus. Experts say such content exists across the web, on lesser-known platforms and in harder-to-find corners of Facebook and Twitter.

Kaiser noted that while the amount of such content on Google Plus is smaller than on sites with more laissez-faire hate speech policies, the platforms are all part of a single network.

“If there is good news, it’s that the member size is relatively small in comparison to what you see on YouTube in regard to views [of white nationalist videos],” he said.

“Impactwise, it’s hard to specify on specific platforms. They are all connected to each other in one way or another. Google Plus links to YouTube videos and LiveJournal links,” he added. “They’re all just one piece of the bigger map of the far-right ecosystem.”

The content posted by the groups directly violates Google Plus’s user policy, which forbids “content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics.”

“This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line,” Google Plus’s user policy continues.

Pro-Islamic State in Iraq and Syria (ISIS) groups have also found places to operate on Google Plus, drawing criticism that the company is not doing enough to root out such content.

The Hill previously found dozens of easily accessible pro-ISIS communities and sympathizers on Google Plus’s platform that groups say Google ignored even when they were flagged directly to the company.

Part of Google’s difficulty in addressing dangerous content may stem from its reliance on its own community to report posts that violate its policies.

In a 2014 post titled “Fighting Online Hate Speech,” Google noted that it depended on its users to help police such content.

“These reporting systems operate much like an online neighborhood watch. We ask your help in maintaining a community that provides a positive and respectful experience for everyone,” the post said.

As Google Plus’s user base has declined, so has one of the company’s main enforcement mechanisms for hate speech. If ordinary users aren’t on the platform to report racist and hateful content, it can easily go unchecked.

“On Facebook, you always have two forms of control: oversight from the platform itself, but there are also so many more users who are sensitive to that content and will flag that immediately,” Kaiser said.

“Fringe groups are being drawn to platforms like Google Plus because they know that they can post their content there,” he added.