Facebook, Google and Twitter told Congress Wednesday that they've gone beyond screening and removing extremist content and are creating more anti-terror propaganda to pre-empt violent messages at the source.

Representatives from the three companies told the Senate Committee on Commerce, Science and Transportation that they are, among other things, targeting people likely to be swayed by extremist messages and pushing content aimed at countering that message. Several senators criticized their past efforts as not going far enough.

"We believe that a key part of combating extremism is preventing recruitment by disrupting the underlying ideologies that drive people to commit acts of violence. That's why we support a variety of counterspeech efforts," said Monika Bickert, Facebook's head of global policy management, according to an advance copy of her testimony obtained by CNBC.

Bickert said that in addition to using image matching and language analysis to identify terror content before it's posted, the company is ramping up what it calls "counterspeech."

Facebook is also working with universities, nongovernmental organizations and community groups around the world "to empower positive and moderate voices," Bickert said.

Google's YouTube, meanwhile, says it will continue to use what it calls the "Redirect Method," developed by Google's Jigsaw research group, to send anti-terror messages to people likely to seek out extremist content through what is essentially targeted advertising. If YouTube determines that a person may be headed toward extremism based on their search history, it will serve them ads that subtly contradict the propaganda they might see from ISIS or other such groups. YouTube also supports "Creators for Change," a group of people who use their channels to counteract hate.