While Mark Zuckerberg was Halloween trick-or-treating with his family, representatives from Google, Facebook, and Twitter spent hours being grilled by lawmakers over their companies’ roles in Russian state-sponsored disinformation efforts. In public hearings with the Senate Judiciary Committee, as well as the House and Senate Select Committees on Intelligence, lawyers for the three tech giants presented their findings on their respective platforms’ roles in the chaos of the 2016 election.

Their appearance, which was announced in early October, came on the heels of the arrest of former Trump campaign chairman Paul Manafort for conspiracy, money laundering, and several other charges. Yet, while the back-to-back hearings shed a great deal of light on the nature and extent of Russian-led disinformation efforts on social media, they exposed a number of fault lines and shortcomings in how the conversation over Russian interference has played out. The struggles of both lawmakers and tech leaders to grapple with the question of what made the American public susceptible to these sorts of disinformation efforts were worrisome, to say the least. So, too, was the committees’ tendency to dance around the question of social media’s growing role in the democratic process. Many — like Senator John Kennedy (R-LA), who observed that while these corporations “do enormous good ... [their] power sometimes scares [him]” — stopped far short of calling out the immense power granted to these companies.

Prior to the hearing, a flurry of new information about the extent of Russian-backed trolling operations was revealed, including more details about the number of accounts involved and the reach — the number of people who could potentially have seen the post — of their content. On Monday, Google published some of its findings regarding the Internet Research Agency (IRA), a St. Petersburg-based, pro-Kremlin troll farm, including its ad spending and use of various Google products, such as YouTube, where the company’s researchers said they identified at least 18 channels, with a total of 1,108 videos associated with the agency. Facebook, meanwhile, reported that nearly 126 million people were served content from IRA-backed accounts, some of which was sponsored content (i.e., ads). And Twitter, which had previously attempted to downplay the presence of Russian-backed trolls and bots on its platform, noted that some 36,746 accounts it associates with Russian disinformation operations had generated content during the 2016 election cycle. That said, these accounts — which Twitter was quick to emphasize represented “1/100th of a percent (0.012%) of the total accounts on Twitter at the time” — were not necessarily tied to a specific Russian troll farm but were said to share characteristics that analysts believed would indicate they were part of a broader disinformation campaign.

While these findings are, on the face of it, alarming, lawmakers and industry representatives tussled over the implications of these large numbers and the actual palpable effects on the American public. As Senator Mark Warner (D-VA) stated Wednesday, paid ads “are just the tip of a very large iceberg.” Instead, he continued, “the real story is the amount of misinformation and divisive content that was pushed for free on Russian-backed pages.” Twitter, Warner also contended, appeared to be underestimating the number of problematic accounts on its hands as well. As he noted in his opening statement, citing a study from analysts at the University of Southern California and Indiana University, “independent researchers have estimated that up to 15 percent of Twitter accounts — or potentially 48 million accounts — are fake or automated.” Of course, whether those accounts are associated with a state-sanctioned or state-run disinformation effort is another matter entirely.

As any disgruntled social media manager can tell you, broad reach is nice and demonstrates an ad campaign has been effective, but only in part. Reach without engagement is useless. Redistribution of content by high-powered accounts, especially those associated with political campaigns, was the best indicator that a disinformation campaign had been successful. As The Daily Beast reported in October, accounts like @Ten_GOP — a now-suspended IRA-run account that masqueraded as an official mouthpiece of Tennessee's Republican Party and gained 100,000-plus followers before being deactivated in August — were amplified by Trump campaign staff and affiliates, including Kellyanne Conway, Donald Trump, Jr., and Michael Flynn. @Ten_GOP in particular found willing and eager partners, of sorts, in media outlets like the Gateway Pundit, Breitbart News, InfoWars, and even Fox News throughout the election cycle, all of which quoted the account extensively. (Admittedly, it’s hard to say this resulted in a significant change in these websites’ and networks’ editorial quality.) Right-wing conspiracy nuts weren’t the only ones listening. As ThinkProgress noted, in October 2016 the Daily Dot picked up an unverified story about a black Comcast worker receiving a ticket for fixing wiring at a house. The post came from an account known as “Crystal Johnson” that has since been uncovered as a Russian troll.
It’s easy — as both committees were eager to point out — to feel like Facebook, Twitter, and Google were all asleep at the wheel. “Why has it taken Facebook 11 months to come forward and help us understand the scope of this problem, see it for the problem it is, and begin to work in a responsible way to address it?” wondered Senator Chris Coons (D-DE) during an exchange with the company’s general counsel, Colin Stretch. Despite several overwrought appeals to the dire “national security” issues that tech companies ignored, the foremost issue appeared to be a woeful lack of a regulatory framework. In particular, although many boosted posts from Russian accounts were not directly related to the election, lawmakers cited as one of their central concerns the fact that the transparency requirements surrounding election-related advertising don’t apply to social media platforms. It is also one of the few areas where improved cooperation between technology companies and the federal government is not only far from nefarious, but actually necessary. The Honest Ads Act, a bill sponsored by Senator Amy Klobuchar (D-MN) that aims to “enhance transparency and accountability for online political advertisements” by requiring advertisers to disclose their identity, is one example of an attempt to grapple with this concern.

Others vented their frustration differently, wondering why data behemoths like Facebook failed to put two and two together. On more than one occasion, lawmakers veered into the absurd. In one of Tuesday’s more bizarre exchanges, Senator Al Franken (D-MN) chastised Facebook, asking: “You can’t put together ‘rubles’ with ‘political ad’ and point out, ‘Hm, those two data points spell out something bad’?” If there was ever a question worth shrugging off during congressional testimony, that was certainly one.

Like many of the conversations surrounding Russia’s electoral interference, Tuesday’s and Wednesday’s hearings managed to avoid the question of why such a disinformation campaign could carry on for so long. Although numerous lawmakers referred to Russia’s campaign as effective and “sophisticated,” the actual ads and content that were produced appear to be anything but. Jacked multicolor Bernie? Satan Hillary fighting Jesus? Twitter accounts with handles such as @PeeOnHillary or @WokeFromDay1? RT’s and Sputnik’s often ham-fisted coverage of U.S. politics seems downright professional by comparison.

Who would fall for this — and why? There are, of course, those who have; last year, two Russian-backed Facebook groups — one promoting Texas secession and another aimed at American Muslims — managed to kick off an armed protest at Houston’s Islamic Da’wah Center. Instead of pondering why American Facebook users would participate in sharing and broadcasting “fake news” campaigns originating in Russia, some lawmakers and analysts opted to look outward. Warmongering rhetoric helps. As Senator Dianne Feinstein (D-CA) contended in Wednesday’s Senate hearing, the main lesson we should glean from the 2016 election is that “we are at the beginning of what could be cyberwar” — a bold claim given that if Russia has waged actual cyberwar, it’s been in places like Estonia, where a 2007 Russian-led cyber attack took down the websites for numerous banks, media outlets, and government agencies. Others, like frequent expert cybersecurity witness Clint Watts, assert that matters could just as easily get worse; after all, he noted on Tuesday, “Russian influence is rife.”

All these comments imply Americans have become passive bystanders in an information war of supposedly epic proportions, taken in wholly by the irresistible power of targeted Facebook marketing. Facebook, Twitter, and Google — while still accountable for being unprepared for such an attack — are simply caught in the middle of the United States’ revivified adversarial relationship with Russia. Failure to abide by the whims of U.S. foreign policy takes on new meaning; as Senator Tom Cotton (R-AR) pondered in a back-and-forth with Twitter’s general counsel about the site’s inability and/or refusal to boot off Julian Assange and Wikileaks, “is it biased to side with America over our adversaries?”
By no means does that let these monstrous tech conglomerates off the hook. That didn’t stop Stretch from trying, though; he emphasized that “our goal is to bring people together. These foreign actors sought to tear people apart.” Still, Facebook’s incessant appeal to unity sounds like a lame, PR-friendly excuse for shrugging off the more nefarious uses of its platform. Meanwhile, Twitter’s solution of banning RT and Sputnik from advertising, ostensibly to preserve the integrity of the platform, seems half-assed at best, especially given a recent BuzzFeed scoop expounding on the company’s efforts to court RT in the early days of the 2016 election. (There is also the company’s refusal to, say, ban the American Nazi Party, which has probably done more to divide Americans than Sputnik ever will.) Maybe these companies should have come to the U.S. government to negotiate restrictions on election-related advertising; even so, their platforms are doing, in many ways, what they were designed to do.

Meanwhile, a cottage industry of self-appointed disinformation experts has arisen to help social media companies solve their problems. Some, like Molly McKew — who, in a testimony before the U.S. Helsinki Commission in September, called for the execution of “rapid response operations” (what exactly this entails is unclear) carried out by U.S. special forces and counterintelligence assets — have eagerly offered their services. Yet these simplistic narratives about Russian influence in particular are dangerous. For one, they ignore the fact that “fake news” isn’t just a Russia problem — disinformation is peddled by media organizations both in the United States and beyond, both in the interest of maximizing profits and for political purposes.

Although numerous lawmakers were avidly pushing the idea that these social media disinformation campaigns are a legitimate national security issue, efforts that prioritize security-focused remedies tend to miss the point: What we need is a more forward-thinking civic education and a more robust safety net. As Nina Jankowicz, a scholar at the Kennan Institute, wrote in The New York Times in late September, “disinformation can be defeated without the establishment of a shiny new initiative cased in the language of Cold War 2.0.” Referencing McKew, she wrote that instead of “‘rapid information operations,’ the United States should work to systematically rebuild analytical skills across the American population and invest in the media to ensure that it is driven by truth, not clicks.” Americans need to embrace their social obligation to prevent the spread of false information, though ensuring they do so also means rebuilding public confidence. After all, as she told me in an email, “the growing trust gap between people and their governments exists not only because of the deteriorating media environment, but because people feel left behind for one reason or another, and it's government’s job to bring them back into the discussion.”

Whether the almost singularly Russia-focused discussions of disinformation will allow space for that conversation is unclear. Alas, if the circus on Tuesday and Wednesday is any indication, it’s hard to hold out hope that they will.