Facebook taps artificial intelligence in new push to block terrorist propaganda

Jessica Guynn | USA TODAY

SAN FRANCISCO — With attacks on Western targets increasing pressure on Facebook, the giant social network says it's making a new push to crack down on terrorist activity. Sophisticated algorithms now mine words, images and videos to root out and remove extremists' propaganda and messages.

Artificial intelligence can't do the job alone, so Facebook says it has amassed a team of 150, including counterterrorism experts, who are dedicated to tracking and taking down propaganda and other materials.

It's also collaborating with fellow technology companies and consulting with researchers to keep up with the ever-changing social media tactics of the Islamic State and other terror groups.

"Just as terrorist propaganda has changed over the years, so have our enforcement efforts. We are now really focused on using technology to find this content so that we can remove it before people are seeing it," says Monika Bickert, a former federal prosecutor who runs global policy management, the team that decides what can be posted on Facebook. "We want Facebook to be a very hostile environment for terrorists and we are doing everything we can to keep terror propaganda off Facebook."

Sharp criticism from European officials, advertiser boycotts and lawsuits from family members of people killed in terrorist attacks are pushing Facebook, Google, Microsoft and Twitter to find more effective ways to banish terrorist activity.

New digital fingerprinting technology, which tags videos with unique identifiers called "hashes," is helping flag and intercept extremist videos before they are posted. But these new tools can't yet keep terrorists from gathering on Facebook to recruit and communicate with followers.
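The basic idea behind hash matching can be sketched in a few lines. This is a minimal illustration, not Facebook's actual system: production systems use perceptual or "robust" hashes so that re-encoded or slightly altered copies still match, whereas this sketch uses a simple cryptographic hash that only catches exact re-uploads. All names and byte strings here are hypothetical placeholders.

```python
import hashlib

# Hypothetical database of fingerprints for videos already identified
# and removed as terrorist propaganda (placeholder content).
KNOWN_PROPAGANDA_HASHES = {
    hashlib.sha256(b"previously-removed-video-bytes").hexdigest(),
}

def fingerprint(video_bytes: bytes) -> str:
    """Compute a fingerprint ("hash") of an uploaded file."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block(video_bytes: bytes) -> bool:
    """Intercept the upload if its fingerprint matches a known-bad hash."""
    return fingerprint(video_bytes) in KNOWN_PROPAGANDA_HASHES

# An exact re-upload of a known video is intercepted...
print(should_block(b"previously-removed-video-bytes"))  # True
# ...while unrelated uploads pass through.
print(should_block(b"family-birthday-video"))           # False
```

Because the fingerprint is computed before the file is published, a matching video can be stopped at upload time rather than taken down after the fact.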

In the wake of the London attacks, British Prime Minister Theresa May has accused Facebook and other companies of not doing enough to crack down on terrorist activity. This week May said she and French President Emmanuel Macron were working on a plan that would make Internet companies legally liable for extremist materials on their services.

"They want to hear that social media companies are taking this seriously. We are taking it seriously," Bickert said. "The measures they are talking about, we are already doing."

For years Facebook balanced the threat to free speech with its ongoing efforts to eradicate terrorist propaganda. About a year ago Facebook intensified efforts to combat terrorism, resulting in the removal of a great deal of that activity from its platform, says Seamus Hughes, deputy director of the program on extremism at George Washington University.

"Facebook at some point in the last year planted a flag in the ground and said: Not on our platform," Hughes said.

WhatsApp, Telegram

Even as Facebook makes progress on one terrain, new battlefields emerge. Researchers like Hughes say much of the terrorist activity that has left Facebook has migrated to encrypted messaging services such as Telegram and Facebook-owned WhatsApp. Facebook Live, the real-time streaming service, also presents a new challenge. And terrorists are still lurking out of sight on Facebook in private groups.

Artificial intelligence is already improving Facebook's ability to stop the spread of terrorist content, for example by flagging and intercepting known terrorist videos before they can be uploaded, says Brian Fishman, lead policy manager for counterterrorism at Facebook and the author of The Master Plan: ISIS, al-Qaeda, and the Jihadi Strategy for Final Victory. Artificial intelligence is also being used to analyze text that was removed for supporting or praising terrorist organizations such as the Islamic State and Al Qaeda and their affiliates, in order to detect other content that may be terrorist propaganda. The same technology is being used to ferret out private groups that support terrorism.
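The text-analysis approach Fishman describes, learning from removed posts to spot similar new ones, can be illustrated with a toy similarity check. This is a bare-bones sketch under stated assumptions: real systems use trained machine-learning classifiers, and the "removed" snippets below are invented placeholders, not actual examples.

```python
import math
from collections import Counter

# Hypothetical snippets of text previously removed for praising
# terrorist organizations (placeholder wording for illustration only).
REMOVED_TEXTS = [
    "join the fighters and support the cause of the caliphate",
    "praise the martyrs who struck the enemy",
]

def bag_of_words(text: str) -> Counter:
    """Represent text as word counts, ignoring order and case."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def flag_for_review(post: str, threshold: float = 0.5) -> bool:
    """Flag a post if it closely resembles previously removed text."""
    post_vec = bag_of_words(post)
    return any(cosine_similarity(post_vec, bag_of_words(t)) >= threshold
               for t in REMOVED_TEXTS)

print(flag_for_review("support the fighters and join the cause"))  # True
print(flag_for_review("lovely weather for the picnic today"))      # False
```

Flagged posts would still go to human reviewers; as the article notes, similarity alone can't distinguish propaganda from, say, a news report quoting it.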

Facebook, which relies on its nearly 2 billion users to alert the company to content that violates its rules, says it now finds, on its own, more than half of the accounts it removes for terrorist activity. But artificial intelligence has its limits, making human intervention necessary, for example to distinguish between an image in a news article about terrorism and terrorist propaganda.

"There is no switch you can flip. There is no find the terrorist button," Fishman said.

As in the offline world, terrorists tend to operate in clusters, so when Facebook identifies pages, groups, posts or profiles supporting terrorism, it uses them to find other accounts and content that do the same. Facebook is also getting better at keeping terrorists and their sympathizers from setting up new fake accounts, so that it is not trapped in an endless game of whack-a-mole, deleting accounts only as fast as terrorists can create them, he said.
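Cluster-based discovery of this kind can be sketched as a graph traversal: start from confirmed accounts and surface their near connections for review. This is a simplified illustration, every account name and edge in the graph below is hypothetical, and a real system would weigh many signals rather than raw connectivity.

```python
from collections import deque

# Hypothetical connection graph: account -> accounts it is linked to
# (friendships, group memberships, shared pages).
GRAPH = {
    "known_recruiter": ["sympathizer_a", "sympathizer_b"],
    "sympathizer_a": ["known_recruiter", "sympathizer_c"],
    "sympathizer_b": ["known_recruiter"],
    "sympathizer_c": ["sympathizer_a"],
    "unrelated_user": ["friend_1"],
    "friend_1": ["unrelated_user"],
}

def accounts_for_review(seeds, graph, max_hops=2):
    """Breadth-first expansion from confirmed accounts, returning
    nearby accounts (within max_hops links) for human review."""
    seen = set(seeds)
    queue = deque((s, 0) for s in seeds)
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return seen - set(seeds)

print(sorted(accounts_for_review({"known_recruiter"}, GRAPH)))
# ['sympathizer_a', 'sympathizer_b', 'sympathizer_c']
```

Accounts with no path to a known cluster, like "unrelated_user" above, are never surfaced, which keeps the review queue focused on likely clusters rather than the whole network.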


Facebook CEO Mark Zuckerberg wrote about the use of artificial intelligence to police content in his nearly 6,000-word community letter in February.

"Artificial intelligence can help provide a better approach," he wrote. "This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community."

Zuckerberg also underscored the importance of "protecting individual security and liberty." "We are strong advocates of encryption and have built it into the largest messaging platforms in the world — WhatsApp and Messenger," he wrote.

Facebook and other companies have drawn a hard line on end-to-end encryption, saying the technology has legitimate uses such as for human rights activists and journalists who need to know their communications can only be read by the sender and the recipient. That has caused friction with national security and law enforcement officials because it makes it more difficult for them to access digital data.

When asked about messaging services, Fishman said Facebook is taking what it has learned from fighting terrorist activity and applying it across its family of apps, including WhatsApp.

"We don't want this material to be on Facebook and we want to keep the Internet as a whole free from this kind of material, too," Fishman said.

New challenges: Facebook Live, Groups

Facebook groups are another trouble spot. James Hodgkinson, the 66-year-old Illinois man who was killed by police after he opened fire on Republican members of Congress this week, belonged to a Facebook group with the name "Terminate the Republican Party."

A new challenge for counterterrorism efforts at Facebook: Facebook Live, which has seen a sharp uptick in all kinds of violence.

Last year, Facebook shut down the account of a suspected terrorist who live-streamed threats to the Euro 2016 soccer tournament after killing a French police captain and his partner and taking the couple's child hostage in their home outside Paris.

"With Facebook Live, as with Facebook in general, the vast majority of people who are using that product are using it for good reasons," Bickert said. "But, as with any product, there will always be some people who use it for bad purposes and so we are trying to learn the best ways of finding that early."