The big social media companies think what went wrong during the 2016 campaign was that Kremlin proxies passed themselves off as Americans on their platforms. Senators interrogating executives from those firms on Tuesday said that wasn’t enough, identifying the problem as the substance of the propaganda itself, not just that it came from Russia.

And that seemed to make Facebook, Google and Twitter uneasy. Because if social media companies have to get into the business of what is truth and what is agitprop, it might mean a radical restructuring of their operations, their views of themselves, and their relationships with both users and Washington.

That divide became apparent during the first of a gauntlet of high-profile congressional hearings this week examining Russian propaganda on social media and its impact on the 2016 presidential election. It suggests that when the social media giants go before the Senate and House intelligence committees tomorrow, they’ll be in for rough treatment.

Representatives of Facebook, Google, and Twitter told members of the Senate judiciary committee Tuesday afternoon that they deplored the “vile” anti-immigrant, Islamophobic and other bigoted messages spread by a propaganda factory linked to the Kremlin. Facebook’s general counsel, Colin Stretch—likely reading the political moment after two months of relentless criticism of his company—repeatedly said how “painful” it was to see Facebook hijacked with “inflammatory” messages pushed by Russia.

But Stretch and his colleagues were more comfortable defining the problem in terms of Russians posing as Americans to post content aimed at Americans, rather than contending with the substance of the messages themselves.

“It’s particularly exploitative insofar as it was directed at groups that have every reason to expect us to protect the authenticity of debate on Facebook,” Stretch said, when asked about a deepening concern from civil rights groups that Facebook is a vector of hate speech against the vulnerable.

That makes sense for companies that see themselves as neutral platforms for communication, rather than arbiters of speech—let alone truth. And when Sen. Ted Cruz, a Texas Republican, laid into Google for what he considered a liberal bias in its algorithm, the company representatives appeared reluctant to act as editors.

But the civil rights groups aren’t worried about authenticity so much as they’re worried about “hate groups on your platform,” as a recent coalition letter put it, and the bigotry they spread. The civil rights groups do not find the companies’ policies against inappropriate or harmful content sufficient.

Stretch’s position, echoed by his colleagues, also elides an uncomfortable fact of the Russian propaganda efforts: much of what the Russians posted on inauthentic accounts was material culled from across the internet from American communities themselves, for camouflage. What’s a social media company concerned with inauthenticity to do when a Russian group squatting on a real American organization’s Facebook page posts both benign and inflammatory information?

Twitter, Facebook, and Google didn’t have a lot of answers to offer incredulous senators. Mostly they came in the form of assurances that the companies will add more humans to their review teams—though all three expressed faith in the sophistication of their algorithms to identify suspicious behavior. And in some cases, the regulation-wary social media industry, now experiencing its first major wave of sustained congressional dissatisfaction, was quick to point to self-policing as an answer.

Two committee Democrats, Sen. Dianne Feinstein of California and Sen. Richard Blumenthal of Connecticut, brought up examples shortly before the 2016 election of Twitter-borne memes falsely informing users that they could cast their votes by text. Sean Edgett, Twitter’s general counsel, said Twitter took the tweets down—but considered it more significant that “eight times” more users tweeted and retweeted “a complete counternarrative” calling out the disinformation. (It was left to Sen. Amy Klobuchar, a Minnesota Democrat, to point out that the message in question represented illegal voter suppression.)

For months, all three companies have come under fire from legislators for neither effectively policing their influential platforms nor sufficiently disclosing the extent of Russian propaganda their networks host.

Twitter, for instance, has only given the Senate intelligence committee a thumb drive of 1,800 paid promotional tweets from the Kremlin’s English-language Russia Today broadcaster. It has offered little explanation of the widespread presence of bots and imposter accounts linked to Russia, outside of identifying 201 accounts believed to be Kremlin-backed imposters in a late September blog post. Those Twitter accounts were piggybacking on some 470 Facebook accounts themselves linked to Russian propaganda that Facebook began revealing in early September.

Facebook has thus far given the most extensive presentation of any social-media company. It has identified some 3,000 ads costing $100,000 as examples of Russian propaganda. But it has released little to the public, prompting reporters to trace the extent of the imposter messages. Among them were Facebook accounts pretending to be U.S. Muslim groups, hosting Islamophobic and anti-refugee messages, and—with associated accounts on Google’s YouTube—posing as Black Lives Matter activists, all to push narratives congenial to Russian foreign policy interests to unsuspecting American audiences. Some promoted real-life rallies in support of Donald Trump.

Daily Beast sources familiar with Facebook’s ad platform believed in early September that as many as 70 million Americans might have been exposed to the Russian propaganda. But late on Monday, with the hearings looming, the companies provided new exposure estimates. Through Facebook, St. Petersburg propaganda factory the Internet Research Agency (IRA) reached an estimated 126 million people. Stretch explained that the far greater exposure was the result not of the paid content Facebook identified earlier, but of organic content produced by the IRA and others that authentic users share themselves—essentially, the dark side of virality.

Twitter has now found and suspended more than 2,700 accounts linked to the IRA—an order of magnitude beyond what it disclosed last month—that posted more than 131,000 tweets in just two months during fall 2016. That same period, the lead-up to the election, saw 36,000 Twitter bots tweeting 1.4 million election-relevant tweets tied to the Russian effort, garnering some 288 million impressions.

And despite RT’s massive presence on YouTube, Google says it has found only 18 YouTube channels posting more than 1,100 videos featuring Russian propaganda—but with minimal reach: just 309,000 views between mid-2015 and late 2016.

While Facebook, Google and Twitter updated senators on the scale of Russian propaganda exposure, they didn’t address pertinent unresolved questions about their role in the 2016 election. It remains unknown, for instance, if and how the Kremlin geo-targeted key states, particularly the upper Midwestern states that Hillary Clinton took for granted and Trump unexpectedly won by narrow margins. Stretch said 75 percent of the identified Facebook Russian propaganda was “targeted to the U.S. as a whole,” but did not elaborate on the “more granular” 25 percent that targeted specific states.

Sen. Patrick Leahy, a Vermont Democrat, brought out a chart of imagery from still-active Facebook accounts that bore substantial similarities to banned Russian-connected accounts. Some included the phrase “Infidel”—a prominent keyword for anti-Islam communities—and “Bernie-Trump” pseudo-bumper stickers.

“These strongly resemble pages you’ve already linked to Russia. At a minimum, these pages are inflammatory,” Leahy said. “Can you tell me with certainty that none of these were created by Russian-linked organizations?” Stretch could not, and sidestepped the question.

“I can tell you with absolute certainty that none of them were linked to the accounts that we identified as coordinated inauthentic activity, because we’ve removed all of those accounts from our site,” he replied.

“They’re pretty similar,” Leahy continued.

“A core problem with the accounts we identified was a lack of authenticity. So it wasn’t so much the content, although to be clear, much of that content was offensive and has no place on Facebook,” Stretch said, while the content—Russian-generated or not—remained on Facebook.