Lawmakers said Wednesday that they continue to have questions after Google, Facebook, Twitter and Microsoft briefed a House panel on their efforts to take down extremist content online.

Members of the House Homeland Security Committee questioned representatives from some of Silicon Valley's largest companies in a closed-door briefing about how they deal with white supremacist and bigoted content online.


"While I'm encouraged by their answers, we still have a long way to go," Rep. Val Demings (D-Fla.), who sits on the committee, told The Hill after the briefing.

Homeland Security Chairman Bennie Thompson (D-Miss.) invited the tech companies to come to Capitol Hill and discuss their efforts to crack down on violent extremists following the mass shooting at two New Zealand mosques earlier this month, an attack that was live-streamed online.

"While I appreciate the tech sector’s cooperation in coming to Congress to brief us on their efforts to stop the propagation of terrorist content on their platforms, we still need more information," Thompson said in a statement after the briefing Wednesday.

Thompson had told the companies in a letter last week to "do better" after the violent footage spread at record speeds across each of their platforms. The companies scrambled to remove the video and hateful content applauding the killer's actions, but it had already gone viral.

Thompson said that over the next few months the Homeland Security panel will continue to engage with other tech and social media companies, as well as nonprofit groups with expertise in combating extremists online.

Demings told The Hill that lawmakers asked the tech representatives how often they engage with law enforcement, as well as how much money they put toward moderating hateful posts.

While Republicans are focused on how the tech companies can do a better job policing themselves, Democrats are pledging to leverage their oversight powers in the House to ensure that tech companies take specific actions.

Rep. Mike Rogers (R-Ala.) said in a statement to The Hill that he believes the tech industry has "made serious improvements at immediately flagging and removing violent extremist content" since 2017.

"To further this conversation, we need to ensure that more, smaller tech firms join this effort to prevent the spread of terrorism-related content," Rogers said.

Rep. Don Payne (D-N.J.) added that the committee will focus on getting tech companies to deal with content from "the lone wolf, the white supremacist," noting international terrorism has not been the largest issue facing the United States in over "a decade."

"We appreciated the opportunity to share Microsoft's perspective on how we can learn from — and take new actions — after the horrific attack in New Zealand," a Microsoft spokesperson said in a statement to The Hill. "We are ready to work with the committee and with others to address these issues."

Critics have pointed out that tech companies have been largely successful in rooting out content promoting ISIS and al-Qaeda, but domestic terrorist content has not seen the same widespread removals.

Microsoft, Google, Facebook and Twitter in 2017 formed the Global Internet Forum to Counter Terrorism (GIFCT), an initiative aimed at curbing the spread of terrorist content online. Now, some lawmakers say they should use a similar model to remove neo-Nazi, white supremacist and extremist content, which has flourished on the largest social media platforms.

Microsoft President Brad Smith in a blog post this week called for an "industrywide approach" to violent extremism.

All of the platforms have emphasized that their guidelines prohibit violent content of any kind, but their policies are murkier when it comes to hate-mongering or bigotry due to freedom of expression concerns.

Facebook on Wednesday announced that it will begin banning white nationalist or white separatist content on its platform starting next week, a move applauded by Thompson in a statement.

Facebook declined to comment on the briefing. Twitter and YouTube did not immediately respond to The Hill's request for comment.

Lawmakers told The Hill that they plan to continue pressing the companies over their policies.

"This needs to become a priority for them, and I’m not sure it is," Payne said.