A committee of UK lawmakers this week said that Facebook, Google, and Twitter are "consciously failing" to combat terrorist propaganda and recruitment on their platforms, escalating an ongoing debate over the role of social media companies in curtailing online extremism. In a wide-ranging report on radicalization published Thursday, the UK Parliament's Home Affairs Committee said that social media platforms have become "the vehicle of choice in spreading propaganda and the recruiting platforms for terrorism." The Wall Street Journal first reported on the committee's findings on Wednesday.

Lawmakers in the US and Europe have called on social media companies to crack down on propaganda spread by ISIS and other extremist groups, following a spate of recent attacks. The Obama administration has been working with tech companies to create counter-messaging campaigns, and some social networks have publicly touted an increase in suspended accounts linked to extremist groups. But some rights groups have said that the crackdown could curtail free speech, expressing concerns over governments delegating too much power to private tech firms.

"a drop in the ocean."

This week's parliamentary report comes after Twitter announced that it has suspended 235,000 accounts linked to terrorism since February, bringing its total to 360,000 suspensions since mid-2015. The report also notes that in 2014, Google removed more than 14 million videos related to a wide range of abuses, but the committee claims that these efforts still amount to "a drop in the ocean."

"Huge corporations like Google, Facebook and Twitter, with their billion dollar incomes, are consciously failing to tackle this threat and passing the buck by hiding behind their supranational legal status, despite knowing that their sites are being used by the instigators of terror," said Keith Vaz, chair of the Home Affairs Committee, in a statement.

The report was based on testimony gathered from intelligence officers, Muslim community organizations, IT companies, and counter-terrorism experts. The committee called on social media companies to adopt a "zero tolerance approach to online extremism," and urged the UK government to introduce regulations that would compel the companies to swiftly remove extremist content and cooperate with investigations. Earlier this year, the EU adopted rules requiring tech companies to review extremist content within 24 hours of being notified and to remove it if necessary.

The committee also found it "alarming" that major tech companies have "teams of only a few hundred employees" tasked with monitoring content, and that Twitter "does not even proactively report extremist content to law enforcement agencies." In announcing its account suspensions earlier this month, Twitter said that it has begun using automated technologies, including "proprietary spam-fighting tools," to support its abuse reporting system.

Facebook, Twitter, and Google-owned YouTube defended their efforts in statements provided to The Wall Street Journal. YouTube and Facebook also noted that they have encouraged users to create counter-narratives, which aim to refute propaganda from ISIS and other extremist groups. A recent Google-funded study from a London-based think tank found that such campaigns can be effective in sparking online debate.