In 1946, the United States organized a committee to investigate whether nuclear weapons should become a central military technology, or whether to abjure the weapons and, through self-restraint, avoid a costly and potentially deadly nuclear arms race. Led by Under Secretary of State Dean Acheson and Chairman of the Tennessee Valley Authority David Lilienthal, the committee produced the eponymous Acheson-Lilienthal Report, which, after it failed to gather sufficient support, marked a turning point in the Cold War and signaled the beginning of the nuclear arms race. Almost 70 years later, we find ourselves at a similar juncture with cyberwarfare. Cyberweapons do not appear to be capable of mass destruction in the way nuclear weapons clearly are, but they hold at risk some of the most precious assets of our time: the information storage and control mechanisms on which modern society has been built. It is not difficult to imagine catastrophic scenarios such as the destruction of a banking sector, the elimination of a stock market, the flooding of a dam, or the poisoning of a water supply — all initiated by malfunctions induced by malicious software. The United States rushed into the nuclear age eager to cement its technical superiority, causing a decades-long nuclear arms race that threatened global extinction. Before policymakers go too far, they should now take a moment to consider the implications — both intended and unintended — of cyberweapons.

While digital spying has taken place for decades, the era of computer-mediated destruction has only recently begun. Early this month, The New York Times published an investigative feature that explored Olympic Games, a cyberweapons program designed to sabotage an element of another country’s infrastructure. Started during the Bush administration, it is the first known program of its kind. In embarking on Olympic Games, the United States and Israel stepped boldly, but naively, into uncharted territory.

The first battle of Olympic Games reached the public eye in July 2010, when news broke of Stuxnet, an inventive worm designed to destroy Iran’s uranium-enrichment centrifuges by using software to alter their operating parameters. On its heels came Duqu, Wiper, and Flame, a set of multipurpose tools that collected intelligence, identified vulnerabilities, and sabotaged information systems.

In some small way, the strategic vision of Olympic Games is commendable. Cyberattacks might have reduced Israeli pressure for conventional military strikes that could have led to a deadly and protracted war with Iran and prompted Iran to race for the bomb. The cyberstrategy might also have been rationalized as providing more opportunity for diplomacy — but as with most experimental programs, events did not go according to plan, and unforeseen consequences soon emerged.

Consider Stuxnet as a case study: First injected into Iran’s computers in June 2009, the worm appears to have destroyed more than 1,000 of Iran’s 5,000 gas centrifuges, according to data reported by the International Atomic Energy Agency (IAEA). However, by drawing on its centrifuge reserves, Iran was able to quickly replace the destroyed centrifuges and compensate for the losses, even while the Stuxnet attack was ongoing.

Indeed, if the measure of Iran’s progress toward a nuclear weapon is its inventory of enriched uranium, then Iran came out ahead. IAEA data indicates that Iran was able to boost output enough to reverse all Stuxnet-induced production losses by March 2010, about eight months after the attack first began to have an effect. After the successful eradication of Stuxnet in the summer of 2010, Iran sustained its heightened level of production, expanding its low-enriched uranium stockpile at rates exceeding the pre-Stuxnet trend. If, without Stuxnet, Iran would have expanded production according to its historical trajectory, then one would conclude that the cyberattack wound up enhancing Iran’s ability to make nuclear weapons instead of setting the program back.

What went wrong? Stuxnet was designed to operate on an ongoing basis without being detected: a strategy of steady attrition in the pursuit of time. The worm was not supposed to leave Iran or be discovered — but it soon spread beyond the confines of Iran’s nuclear facilities until, ultimately, members of the computer-security community identified it. Stuxnet both failed to operate according to plan and failed to have a long-term benefit. Perhaps, then, the lesson for the authors of future cyberweapons is to recognize the short-lived and unpredictable nature of cyberattacks and aim for more acute, immediate destruction, rather than persistent manipulation of another nation’s assets — a worrisome conclusion suggesting that cyberweapons may be better suited for terror than for strategic affairs.

After Stuxnet, other components of the cyber offensive were quickly exposed and removed, and Iran’s uranium-enrichment capabilities grew faster than ever. The American and Israeli leaders who launched Olympic Games suddenly found themselves in a state of panic. Their ability to influence Iran’s nuclear program had dropped precipitously, yet no diplomatic progress had been made to ensure a soft landing. Perhaps leaders had grown too narrowly focused on the play-by-play excitement of a new cyberattack and too comfortable with relative inaction on the diplomatic front. Or perhaps leaders began to feel that a technical fix was potentially within reach, or at least that cyberattacks could hold Iran’s nuclear program at bay until its leaders capitulated to the pressure of sanctions. Whatever the reasons, the current reality is that the United States finds the diplomatic challenge harder than ever before: After Stuxnet, Iran, with even larger centrifuge reserves, has more to sacrifice, but now trusts the United States even less. Furthermore, Israeli threats of armed conflict have reached a new high. The situation has become unstable, and Olympic Games has yet to realize any enduring benefits.

Despite their questionable utility, the cyberattacks have not been without consequence. Immediately after Iran admitted to being a victim of Stuxnet, it created a new Cyber Command of its own. Brig. Gen. Gholamreza Jalali, the head of Iran’s Passive Defense Organization, said that the Iranian military was prepared “to fight our enemies” in “cyberspace and Internet warfare,” a formula that may imply aspirations to go on the offensive. The US Defense Department responded by announcing a new policy in which cyberattacks against US assets are considered to be acts of war. More bold steps into the darkness.

In the world of armaments, cyberweapons may require the fewest national resources to build. That is not to say that highly developed nations lack advantages in the early stages. Countries like Israel and the United States may have more money and more talented hackers. Their software engineers may be more skilled and exhibit more creativity and critical thinking owing to better training and education. However, each new cyberattack becomes a template for other nations — or sub-national actors — looking for ideas. Stuxnet revealed numerous clever solutions that are now part of a standard playbook. A Stuxnet-like attack can now be replicated by merely competent programmers, rather than requiring innovative hacker elites. It is as if with every bomb dropped, the blueprints for how to make it immediately followed. In time, the strategic advantage will fade, and once-esoteric cyberweapons will become weapons of the weak.

Whatever the greater nature of cyberwarfare, it is clear that individual cyberweapons are inherently fragile. They work because they exploit previously unknown vulnerabilities. Stuxnet, for example, exploited four “zero day” vulnerabilities in the Windows operating system. As soon as Stuxnet’s discovery made them public, they were patched and thus were no longer available as vectors for future attacks or intelligence gathering; such vulnerabilities are also closed through routine software updates. Powerful hacker entities like the US National Security Agency must continue to discover new weaknesses in an attempt to stay ahead, and probably maintain a sizable list of unpublished vulnerabilities for future exploitation — but to what end? These security gaps apply to all computer systems of a given type, regardless of national borders. Every vulnerability kept secret for the purpose of enabling a future cyberattack is also a decision to let that vulnerability remain open in one’s own national infrastructure, where it can be exploited by an enemy state or even a terrorist hacker. This raises a basic philosophical question about how states should approach cyberwarfare: Should countries try to accrue offensive capabilities in what amounts to a secret arms race and, in doing so, hold their own publics at risk? Or should states take a different tack, releasing knowledge about vulnerabilities in a controlled way to create patches that shore up their own digital frontiers?

We are at a key turning point — the Acheson and Lilienthal moment of the digital age, in which a nation must decide what role cyberweapons will play in its national defense. As nations begin to build out cyberwarfare organizations, they run the risk of creating bureaucratic entities that will seek to protect offensive cyber capabilities and, in doing so, will necessarily subject their own publics to cyber vulnerabilities. For states that have little to lose on the cyber front, an offensive approach may be appealing. But for the United States and other highly developed nations whose societies are critically and deeply reliant on computers, the safe approach is to direct cyber research at purely defensive applications. Fortunately, unlike in the Acheson and Lilienthal moment of the nuclear age, the United States can make this choice unilaterally. The alternative approach, to continue launching ambitious cyberattacks, is to cross the Rubicon with an unpracticed weapon, naked to the attacks of enemies and terrorists alike.

Editor’s note: This article was updated on June 7, 2012.