
U.K. lawmakers urged advertisers to boycott internet firms that fail to remove or control the publication of extremist content.

In a report published Thursday evening, the U.K. Parliament's Intelligence and Security Committee concluded that security agencies needed help from the likes of Facebook, Twitter and Google to curb the "enormous growth" in online extremist material. The committee said online communications service providers (CSPs) had made "little tangible progress over the last four years" in tackling the publication of this content.

"Action that affects the CSPs' profits clearly hits home harder than any sense of 'doing the right thing'," the report said. "Encouraging companies who advertise on the CSPs' platforms to put pressure on the CSPs to remove extremist content — with the threat of pulling their adverts if they do not — will have more impact on the CSPs."

Unilever boycott

The committee said the U.K. government should seek to lobby the business community to take action, "following the Unilever example." In February, Unilever threatened to boycott Facebook and Google if they failed to police extremist and illegal content. At the time, a Facebook spokesperson told the BBC: "We fully support Unilever's commitments and are working closely with them."

The committee also pointed to Google subsidiary YouTube, which experienced an exodus of advertisers earlier this year over commercials appearing alongside extremist and illegal content.

A Google spokesperson told CNBC by phone that 98 percent of the videos it had removed for violent extremism were flagged by the company's machine-learning algorithms. Since the introduction of this technology in June 2017, the share of those videos removed before exceeding 10 views had grown from 8 percent to more than 50 percent, the spokesperson said. They added that Google had hit its goal of having 10,000 people working to address content that violated its policies, and noted that the company had invested $5 million in supporting non-profit organizations focused on tackling hate and extremism.

A Twitter spokesperson said the company was committed to "improving the health of the conversation on Twitter." "Safety is a key part of this goal. In relation to terrorist content, 95% of it is now being removed proactively through our technology – 75% before their first Tweet," they told CNBC via email. "We will also continue to work collaboratively with the Home Office, law enforcement, and our peer companies through the Global Internet Forum to Counter Terrorism, with a view to making further progress."

Facebook has taken several steps to reduce extremist content on its platform. The company says 99 percent of content relating to the terror groups Islamic State and al-Qaeda is taken down before being flagged by users. The team responsible for enforcing Facebook's policies comprises around 30,000 people, with 200 dedicated specifically to counter-terrorism. A spokesperson for Facebook declined to comment directly on the government report.

Business leverage