When it comes to shaping our national discourse, there may be no institution with more influence than social media. There may be no institution as vulnerable to covert manipulation either.

The concern about our inability to have an informed, truthful national discussion has reached Congress, which is investigating how our most popular sites became unwitting vehicles for Russian propaganda.

On Wednesday, representatives from Facebook, Twitter and YouTube are scheduled to testify before members of the Senate and House intelligence committees investigating Russia’s interference in our election.

Much of the recent commentary has focused on how willing the companies are to close their platforms' vulnerabilities to malevolent actors or to share data with investigators. But those are no longer the critical questions. Now that this is a national security issue, it is not going to be up to them.

The more likely outcome is that they're going to be subject to new rules to prevent this all from happening again. And the social media transparency bill garnering support in the U.S. Senate, which would compel social media companies to disclose who is funding political advertisements, could be just the first regulatory salvo. The companies' biggest fear should be laws forbidding certain algorithms altogether.

The extent to which social media was weaponized against us in 2016 is becoming clearer. Countless news stories have documented how Russian-state trolls gamed Facebook, YouTube and Twitter algorithms with fake accounts and advertisements to sow division and help bolster the Kremlin’s preferred candidate, Donald Trump.

We now know that companies that pride themselves on meaningfully organizing the world’s digital information were easily used for information warfare by a foreign power. They weren’t on guard against it. They were overwhelmed by content. And they’ve largely eliminated human gatekeepers.

They still hold on to the notion that they are neutral technology companies, not media organizations with the responsibilities shouldered by newspaper editors to police content, even as they’ve become a primary place Americans get their news. But they may not be able to hold on to that idea much longer.

“The companies that create the cloud have to figure out how to crack their own code to protect democratic voting, defend against terrorism and enable Americans to separate truth from falsehood,” former Federal Communications Commission Chairman Reed Hundt told me. “No important medium can escape these three responsibilities, and digital media soon will be the most important ever.”

The reach of social media to disseminate propaganda and fake news is astounding, according to a study done by Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University. Albright looked at just six out of the 470 Russian-bought pages and accounts and found that the content had been shared 340 million times.

Equally troubling was Facebook’s response.

According to the Washington Post, a day after Albright published his analysis showing Russian content was seen by hundreds of millions more people than Facebook originally said, Facebook “scrubbed from the Internet nearly everything — thousands of Facebook posts and the related data — that had made the work possible.”

Now the companies are turning to crisis communications firms and lobbyists for help. Facebook took out a full-page ad in the New York Times promising to strengthen its ad policy. Still, even a year after the election, there continues to be evidence of how easy it is to circulate fake news via ad buys on the platform.

With the 2018 midterm elections looming, American officials predict Russia will repeat its attacks, a warning our president either dismisses or ignores. And Russian troll farms are still creating fake Twitter hashtags to turn Americans against each other.

Facebook’s chief operating officer, Sheryl Sandberg, told the Axios news site: “What we really owe the American people is determination” to do “everything we can” to defend against threats and foreign interference.

Determination to try to fix the problem is welcome. But there remain signs the companies don’t quite appreciate the magnitude of this problem.

On Oct. 6, academics, concerned citizens and representatives of YouTube, Facebook and Twitter gathered for the launch of Stanford University’s new Global Digital Policy Incubator, an organization that will address new threats posed by digital technology.

The day turned into an extended debate over responsibility. There was a lot of fretting about government oversight and regulation. Several attendees pointed to a new German law that fines social media companies for failing to remove hate speech as an example of what the United States could not, or should not, do because of our First Amendment.

Facebook’s representative touted its hiring of 1,000 people to review ads and new restrictions on ad content. YouTube’s explained why the platforms couldn’t be responsible for determining content veracity. And Twitter’s said “the information ecosystem is much bigger” than social media. They were on the defensive.

The incubator’s new executive director, Eileen Donahoe, noted that the tech companies “may be getting policy wrong.” Mike Brown, the former CEO of the cybersecurity firm Symantec, said he had no doubt that government would get involved and said there was no way the private sector could self-regulate because, “You get a very hodgepodge answer.”

Nicole Wong, the former U.S. deputy chief technology officer and former legal counsel for Google and Twitter, implored the technology companies to fix their vulnerabilities before the government imposed answers. “If you don’t build it, you are going to get regulated into it. It is incumbent upon the companies to figure out the right solutions here because the panic is real.”

Even if they don’t fully embrace that traditional editorial role, social media companies can devote the same resources to solving the engineering problems posed by foreign propaganda and fake news that they have devoted to preventing spam and pornography, which, by contrast, have been deemed a threat to their business model. They can elevate content that is real and credible over what is most clicked-on and lucrative. They have in recent days announced new policies that should make it easier for users to figure out who is buying political advertisements — a move toward transparency that should continue.

The hijacking of social media is now primarily a national security story, and one that should matter to any American who cares about the integrity of our democracy. Congress is right to demand answers from Silicon Valley executives and then, perhaps, yes, subject them to regulations just like any other industry from food to medicine to banking. If the companies find this all too onerous, then perhaps there will be a market for new social networking sites that are up to the task.

Janine Zacharia, a former Washington Post reporter, teaches journalism at Stanford University. To comment, submit your letter to the editor at SFChronicle.com/letters.