"Our stance is simple: There's no place on Facebook for terrorism," Monika Bickert, Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, wrote. "We believe technology, and Facebook, can be part of the solution."

To that end, the company has leveraged a mix of artificial intelligence systems and human expertise to combat extremist threats posted on its site. AI is a fairly new addition to Facebook's arsenal but is already being used for automated image matching, which recognizes known extremist images and prevents them from being re-uploaded. The company is also reportedly training a neural network to recognize and remove written text that praises or supports terrorist organizations like ISIS.
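The image-matching step boils down to fingerprinting an upload and checking it against a list of known-bad fingerprints. Here is a minimal sketch of that idea; the blocklist entries, function names, and the use of SHA-256 are all illustrative assumptions. Production systems use perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas a plain cryptographic hash only catches byte-for-byte copies.

```python
import hashlib

# Hypothetical blocklist of hashes of known extremist images.
# (This entry is the SHA-256 of the placeholder bytes b"test".)
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_hash(data: bytes) -> str:
    """Exact-match fingerprint of the raw image bytes."""
    return hashlib.sha256(data).hexdigest()

def allow_upload(data: bytes) -> bool:
    """Reject the upload if its fingerprint matches a known-bad entry."""
    return image_hash(data) not in BLOCKED_HASHES
```

The appeal of this design is that the platform never needs to store the offending images themselves, only their hashes, which is also what makes a cross-company shared database practical.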

Facebook's AI is also capable enough to search through related "clusters" of posts and pages to find other offending materials as well as recognize when previously banned users attempt to create new accounts. The company hopes to expand these features to its other apps, like Instagram, in the future.
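Fanning out from one flagged page to its "cluster" of related pages is essentially a graph traversal. The sketch below shows the idea with a breadth-first search over a toy "related-to" graph; the page names and the structure of the graph are invented for illustration, not Facebook's actual data model.

```python
from collections import deque

# Hypothetical graph: page -> pages it links to or shares admins/content with.
RELATED = {
    "page_a": ["page_b", "page_c"],
    "page_b": ["page_a", "page_d"],
    "page_c": [],
    "page_d": [],
    "page_e": ["page_f"],  # unrelated cluster, should not be swept in
    "page_f": [],
}

def expand_cluster(seed: str) -> set[str]:
    """Breadth-first walk from a flagged page to every reachable related
    page, so each one can be queued for review."""
    seen = {seed}
    queue = deque([seed])
    while queue:
        page = queue.popleft()
        for neighbor in RELATED.get(page, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen
```

Starting from `page_a`, the walk reaches `page_b`, `page_c`, and `page_d` but leaves the disconnected `page_e` cluster alone, which is the behavior you want from a review queue: sweep in what's connected to the flag, nothing more.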

As for the company's human-based moderation, Facebook still depends heavily on its users to self-police and report one another. However, it is expanding its Community Operations teams by 3,000 employees over the next year to help address reports faster. What's more, Facebook now employs a 150-member "strike team" of sorts. These specialists -- academics, former prosecutors and law enforcement -- are focused either primarily or exclusively on counterterrorism-related tasks.

Of course, Facebook isn't going it alone. The company has partnered with others in the tech industry, such as Microsoft and Twitter, to create a common database of "hashes" identifying terrorist material and propaganda. Facebook is also working with governments, turning over to law enforcement whatever information it can about end-to-end encrypted messages that pass through its network, since the content of those messages itself is unreadable to the company.