In 2007, Admiral Mike McConnell, the wonky former head of the National Security Agency, became the director of national intelligence, and soon discovered that many senior American officials were not remotely prepared for the advent of digital warfare. (Less than a year earlier, Senator Ted Stevens, of Alaska, who chaired the main Senate committee that regulates the Internet, had described the Web as a “series of tubes.”) To grab his peers’ attention, McConnell adopted the intelligence community’s version of a party trick: when visiting a Cabinet officer, he would pull out a copy of a memo that had been written by his host and then stolen. The Chinese, he might explain, hacked this from you—and we hacked them to get it back.

A decade later, nobody in Washington remains ignorant of such risks. The hacking that took place during the 2016 election, including attacks that exposed the inner workings of the Democratic National Committee and of Hillary Clinton’s campaign, has opened a new chapter in the long-predicted rise of cyber conflict. If the first fifteen years of the twenty-first century were dominated by the war on terror, we are now entering a period when the war on cyber—and war by cyber—will very likely loom just as large in our discussions of national security. Last week, WikiLeaks released an archive of cyber tools stolen from the C.I.A.; it was hardly a surprise that the C.I.A. spies on phones and computers, even if it was news that the agency might hijack a Samsung television and use it as an eavesdropping tool. President Donald Trump’s aide Kellyanne Conway took advantage of that news to promote the myth that Barack Obama might have “wiretapped” Trump through household electronics. Surveillance can be conducted with “microwaves that turn into cameras,” she said on Sunday. “We know this is a fact of modern life.” (Faced with ridicule, she later said her microwave-Obama-Trump scenario was taken out of context.)

When the risks of cyber attacks and surveillance are politicized and exploited, it’s easy to overlook genuine hazards. In an Op-Ed published on Tuesday in the Times, Bruce G. Blair, a former missile-launch control officer and now a research scholar in the Program on Science and Global Security, at Princeton, warned of the risks that hacking poses to America’s nuclear arsenal. In recent years, he noted, the U.S. has discovered vulnerabilities in its own systems, including a glitch that “could have allowed hackers to cause the missiles’ flight guidance systems to shut down, putting them out of commission and requiring days or weeks to repair.” He asked: “Could a foreign agent launch another country’s missiles against a third country? We don’t know.”

One of the most persistent challenges in this new era is, to put it bluntly, deciding how much to freak out. The temptation to overreact to a sudden threat—by passing hasty laws, intruding on civil liberties, or spending money on the wrong defenses—is profound. Reflecting the profusion of recent headlines about the risk of hacking, a joke making the rounds in Washington these days is that the best way to guarantee funding for your project is to add “cyber” to the title. In January, the Department of Energy declared that the U.S. power grid “faces imminent danger” of cyberattack, though critics argue that the risks of a total shutdown of American power are often overstated, given that it would require the physical destruction of multiple substations. (Cris Thomas, a strategist at Tenable, a cybersecurity firm, has sought to counter some paranoia by pointing out non-cyber risks: his Web site, CyberSquirrel1, has collected thousands of reports of attacks on the U.S. power grid perpetrated by squirrels, birds, and other animals.)

Still, a decade after McConnell had to shame his colleagues into paying attention, there remains, in political circles, a certain dubious mind-set regarding hacking, in part because many senior members of government remain digital amateurs. As recently as 2013, most members of the United States Supreme Court, the very jurists who weigh legal questions about technology and privacy, had yet to start using e-mail.

Almost always, journalists and analysts describe the latest cyber attack as a “sophisticated” operation, even when technical experts describe them as ordinary and preventable. Ben Buchanan, a Harvard researcher and the author of a new book called “The Cybersecurity Dilemma,” wrote this week on the Cipher Brief, a security blog, that “when every case is described as unprecedented and every threat actor billed as nearly unstoppable, it fuels what I call ‘the legend of sophistication.’ The effect of such a legend is to paint a picture of a world with so many talented adversaries that practical cybersecurity is out of reach.”

In some cases, the costliest attacks are relatively low-tech. Hackers accused of working for Russian intelligence breached the Gmail account of John Podesta, the chairman of Hillary Clinton’s campaign, using an old-fashioned technique called “spear-phishing”: sending an e-mail under false pretenses to garner personal information, such as a password. Thomas Rid, a scholar at King’s College, in London, told me, “It’s like an I.E.D. In the nineties, leading up to Afghanistan, you had this expectation that the future of warfare would be very high tech, and that America would be leading because the American Armed Forces were spending so much money on network-centric platforms. But then what happened is the I.E.D. improvisation. If you drive with a vehicle that has wheels, it can be attacked. If you have an e-mail account, it can be hacked.”

Given the risks, there is growing pressure to engage in a cyber arms race, a generational effort to meet force with force that would, no doubt, enrich some members of the national-security industry. But there may be smart ways to defuse the risks instead of heightening them. Michael Sulmeyer, a senior Pentagon official in charge of cyber policy during the Obama Administration, said that it would be a mistake to simply reawaken the mind-set of the Cold War arms race. “It’s easy to think, How do we impose costs? How do we threaten pain?” Sulmeyer told me. “But deterrence was a mentality from the Cold War that came about as our strategy only because defense is not possible against nuclear arms; there is no defending against a thousand incoming warheads. But here, we have to make ourselves harder to hit. What we’re talking about is: why don’t accounts like Podesta’s have two-factor authentication by default?” Sulmeyer, who now directs the Belfer Center’s Cyber Security Project, at the Harvard Kennedy School, wants politicians and technology companies to adopt stronger defenses in part by encouraging them to share data on the threats they are facing.
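The two-factor codes Sulmeyer alludes to are, under the hood, surprisingly simple mathematics. A minimal sketch of the standard scheme, TOTP (RFC 6238), is below; the key used here is the published test key from the RFC, not a real credential, and a real authenticator app would differ only in packaging, not in the arithmetic.

```python
# Minimal sketch of a time-based one-time password (TOTP, RFC 6238),
# the mechanism behind the six-digit two-factor codes on a phone.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, step=30, digits=6):
    """Return the code for the 30-second window containing at_time."""
    key = base64.b32decode(secret_b32)
    now = time.time() if at_time is None else at_time
    counter = int(now // step)
    # HMAC-SHA1 over the big-endian 64-bit time counter (RFC 4226 / 6238).
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # "Dynamic truncation": low nibble of the last byte picks an offset,
    # from which four bytes are read and reduced to `digits` decimal digits.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The RFC's test key ("12345678901234567890" in base32), at a fixed time
# so the output is reproducible.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at_time=59))  # → 287082
```

Because server and phone derive the same code independently from a shared secret and the clock, a stolen password alone, as in the Podesta case, is no longer enough.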

In his book “Dark Territory,” a compelling history of cyberwarfare, Fred Kaplan notes that just a few months after the bombings of Hiroshima and Nagasaki, the military strategist Bernard Brodie, the architect of American nuclear deterrence, wrote, “Thus far the chief purpose of our military establishment has been to win wars. From now on its chief purpose must be to avert them.” The book containing that passage was called “The Absolute Weapon.” There have been additions to the arsenal since the beginning of the Cold War, but, as is still true of nuclear weapons, the American public, and the politicians who serve in our name, should be less interested in how to win a cyber war than in how to prevent it.