It's no secret that Facebook (along with other social media giants) is under the microscope of governments all around the world, as their platforms continue to be used to promote hateful, abusive and divisive content. Fishman said that while Facebook is now being more transparent about these issues, and is also creating tools to counter terrorism, harassment, trolls and bots with AI and human sleuths, the company still needs to do more.

"As good as algorithms can be at surfacing potentially dangerous content, there's a lot of nuance here," Fishman said. "One of the risks is that we want to be very aggressive at taking down content, but we want to protect the ability of our users to speak about controversial issues, to speak sometimes in contentious ways, and so to make some of those nuanced calls you really need human beings."

Facebook Lead Policy Manager of Counterterrorism, Brian Fishman (second from left), speaks at SXSW 2018.

The effort to have actual people, not just automated systems, monitoring and reviewing potentially abusive content is key for Facebook and others like Twitter and YouTube, especially as their algorithms have proven to be flawed. That's one of the reasons governments are calling for these sites to be more closely regulated. Last year, Germany introduced a law that will impose hefty fines on social media platforms, Facebook included, if they fail to remove harmful content such as hate speech within 24 hours.

Many, like London Mayor Sadiq Khan, fear that government regulation of companies like Facebook is a bad path to go down, as it could hinder innovation. But, as he said during a keynote at SXSW, these companies also can't be above the law, and with the resources and skills they have, Facebook, Twitter and YouTube should be doing more to tackle these issues. "We have to do a better job to reassure users and governments," Fishman said, "and that's something that I'm thinking about and working on."
