Virtual Nuclear Weapons Design and the Blur of Reality

With explosions taking place virtually, how much harder will it be for weapons scientists to confront the destructive power of their work and its ethical implications?

The Castle Bravo nuclear test, the most powerful thermonuclear detonation ever carried out by the U.S. Source image: Wikimedia Commons

By: Sherry Turkle

Thirty years ago, designers and scientists talked about simulations as though they faced a choice about using them. These days there is no pretense of choice. Theories are tested in simulation; the design of research laboratories takes shape around simulation and visualization technologies. This is true of all fields, but the case of nuclear weapons design is dramatic because here scientists are actually prohibited from testing weapons in the physical realm.

This article is adapted from Sherry Turkle’s book “Simulation and Its Discontents.”

In 1992, the United States instituted a ban on nuclear testing. In the years before the ban, frequent physical tests, first above ground and then underground at the Nevada Test Site, provided weapons designers with a place to do basic research. Through testing they developed their scientific intuitions even as they reassured themselves that their weapons worked. More than this, the tests compelled a respect for the awesome power of nuclear detonations. Many testified to the transformative power of such witnessing.

In the years after the 1992 ban, newcomers to the field of nuclear weapons design would see explosions only on computer screens and in virtual reality chambers. At Lawrence Livermore and Los Alamos National Laboratories, some of the most powerful computer systems in the world are used to simulate nuclear explosions. Until recently, these simulations took place in two dimensions; now, simulations are moving into three dimensions. In a virtual reality chamber at Los Alamos known as a CAVE (an acronym for Cave Automatic Virtual Environment), one dons 3D goggles and stands “inside” a nuclear explosion in order to observe it, one is tempted to say, “peacefully.” The CAVE simulation is there to “demo” an explosion; those who work there become accustomed to experiencing in the virtual what could never be survived in the real.

When nuclear testing moved underground, it became easier for weapons designers to distance themselves from the potential consequences of their art. Hidden, the bomb became more abstract. But even underground testing left craters and seismic convulsions. It scarred the landscape. Now, with explosions taking place on hard drives and in virtual reality chambers, how much harder will it be for weapons scientists to confront the destructive power of their work and its ethical implications? One weapons designer at Livermore, speaking with me in 2003 at a workshop on simulation and visualization, lamented that only once had he experienced “physical verification” after a nuclear test. He had “paced off the crater” produced by the blast. It changed him forever. His younger colleagues will not have that.

This senior scientist is concerned about the moral effects of moving nuclear weapons research to virtual space, but he and his colleagues are also troubled about the effects of virtuality on their science itself. They argue that “physical intuition is a skill you want to keep,” as one told me, and worry that the enthusiastic reactions of young designers to new, flashy virtual reality demonstrations are naïve. One says: “The young designers look at anything new and they say, ‘This is so much better than what we had before. We can throw out everything we did before!’” Senior scientists at the national laboratories describe young designers immersed in simulation as “drunk drivers.” Within simulation, the happily inebriated show less judgment but think they are doing fine.

Dr. Adam Luft, a senior weapons designer at Los Alamos, shows sympathy for the young designers: The new rules compel them to fly blind. They cannot test their weapons because they must work in the virtual, and they are given computer systems whose underlying programs are hard to access. Luft himself feels confident only when he is able to access underlying code. He is frustrated by the increasingly opaque simulations of his work environment. When something goes wrong in a simulation, he wants to “dig in” and test aspects of the system against others. Only a transparent system “lets [me] wander around the guts of [a] simulation.” He is wary of making any change to a weapon without personally writing its code. Luft worries that when scientists no longer understand the inner workings of their tools, they have lost the basis for trust in their scientific findings, a concern that mirrors those of the MIT designers and scientists of 30 years before.

Across professions, successful simulation gives the sense that digital objects are ready-to-hand. Some users find these interfaces satisfying. Others, like Luft, focused on transparency, are not so happy. They look askance at younger designers who are not concerned about whether they wrote or have even seen underlying code. One of Luft’s colleagues at Los Alamos describes his “fear” of young designers: “[They are] good at using these codes, but they know the guts a lot less than they should. The older generation… all did write a code from scratch. The younger generation didn’t write their code. They grabbed it from somebody else and they made some modifications, but they didn’t understand every piece of the code.” He speaks with respect of “legacy codes,” the old programs on which the new programs are built. “You can’t throw away things too early,” he says. “There is something you can get from [the legacy codes] that will help you understand the new codes.”

At Livermore, in 2005, a legendary senior weapons designer — Seymour Sack — was preparing to retire. At an MIT workshop, his colleagues discussed this retirement and referred to it as “a blow.” They were anxious about more than the loss of one man’s ability to make individual scientific contributions. He had irreplaceable knowledge about the programming that supported current practice, one weapons designer told anthropologist Hugh Gusterson, who published a paper on scientific involution across three generations of nuclear science. His colleagues fretted: “He has such a great memory that he hasn’t written down lots of important stuff. How will people know it?”

The response to this scientist’s imminent retirement was a movement to videotape him and all the other scientists who were about to leave service. This was no ordinary oral history. It was infused with anxiety. Those who know only the top layer of programs feel powerful because they can do amazing things. But they are dependent on those who can go deeper. So those who feel most powerful also feel most vulnerable.

Nuclear weapons design is divided by dramatic generational markers: Some designers grew up with routine underground testing, some glimpsed it, some have only experienced virtual explosions. Some designers were trained to program their own simulations; some simply “grab code” from other people and are unfazed by the opaque. Yet when Luft sums up attitudes toward simulation in his field, he makes it clear that the wide range of opinion does not reduce to simple generational criteria. The cultures of weapons laboratories are also in play. For example, at Livermore, older weapons scientists who were very hostile to simulation became far more positive when the laboratory adopted a new metaphor for weapons design. Livermore began to liken weapons design to bridge building. According to this way of thinking, engineers do not need to “test” a bridge before building it: They are confident in its design algorithms and in how these can be represented in the virtual.

At Livermore, the change of metaphor made simulation seem a reasonable venue for weapons testing. And at Los Alamos, there are younger scientists who find themselves eloquent critics of immersive virtual reality displays. One says: “I was so attuned to making plots on my computer screen. I was surprised at how little new I learned from [the RAVE].” (The RAVE is the nickname for Los Alamos’s virtual CAVE technology.) This designer complains about not being able to work analytically in the RAVE; others say that it gives them a feeling of disorientation that they cannot shake. In the RAVE, scientists work in a closed world with rigorous internal consistency, where it is not always easy to determine what is most relevant to the real. For some younger scientists, even those who grew up in the world of immersive video games, the RAVE seems too much its own reality.

Across fields, scientists, engineers, and designers have described the gains that simulation has offered — from buildings that would never have been dared to drugs that would never have been developed. And they also describe the anxiety of reality blur, that “breaking point” where the observer loses a sense of moorings, bereft of real-world referents and precedents. And the very complexity of simulations can make it nearly impossible to test their veracity: “You just can’t check every differential equation,” says Luft. He pauses, and says again, “You just can’t, there are just too many.” In nuclear weapons design you can make sure that you have solved equations correctly and that your system has internal consistency. In other words, you can “verify.” But he adds, “validation is the hard part. That is, are you solving the right equations?” In the end, says Luft, “Proof is not an option.”

NOTE: All participants in the several studies that led to “Simulation and Its Discontents,” from which this article is excerpted, are granted anonymity, usually by simply identifying them as professor or student, or as a practicing scientist, engineer, or designer. When particular individuals take ongoing roles in my narrative, I provide them with pseudonyms for clarity.

Sherry Turkle is the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at the Massachusetts Institute of Technology. She is the author and editor of several books, including “Reclaiming Conversation,” “Alone Together,” “Evocative Objects,” and “Simulation and Its Discontents,” from which this article is adapted.