The nuclear industry has claimed that a Fukushima-type event is unlikely to happen in the United States, because few US nuclear power plants are vulnerable to tsunamis. But to some degree, every nuclear plant is vulnerable to natural disaster or deliberate attack, and no nuclear plant can be assumed to withstand an event more severe than the “design-basis accidents” it was engineered to survive. Many US nuclear plants appear to be subject to greater risks than they were designed to handle, particularly in regard to earthquakes. The author suggests that the US Nuclear Regulatory Commission should expand the universe of events that new and existing nuclear plants must be designed to survive and require reactors to be upgraded accordingly.

The mammoth wave that struck Japan on March 11, 2011, not only caused a profound human tragedy and an unprecedented nuclear plant crisis but also threw cold water on the prospects for a “nuclear renaissance” any time soon. The spectacle of four reactors in a row blowing up, along with the crude and desperate measures plant personnel employed to contain the disaster, belied the reassuring platitudes the industry had served up for decades about the inherent safety and cleanliness of nuclear power and the competence of its overseers. Public trust in nuclear power, which had grown steadily as the years passed after Chernobyl without another serious nuclear accident, seems to have plummeted overnight, with polls showing, quite understandably, that a majority not only of the Japanese public but of people around the world now oppose nuclear power (Layne, 2011; Reaney, 2011). Fukushima has pushed nations that were teetering on the edge of major decisions on nuclear power, like Germany, off the cliff. Potential new entrants into the nuclear power enterprise, including Italy and Thailand, have gotten cold feet. It seems unlikely that the nuclear industry and its regulators will regain public support in many nations without a dramatic change in the way they do business: Fundamentally, they must be more honest about what is known, and what is not known, about the safety of nuclear power.

Unfortunately, early signs do not suggest the industry is going to transform the way it deals with the public. Soon after Fukushima, the Nuclear Energy Institute, the chief lobbying organization of the US nuclear industry, began to run advertisements defending the status quo by lauding the “state-of-the-art technology that layers precaution on top of precaution” at US nuclear plants. The ads did not note that many US nuclear plants are 1970s-vintage boiling water reactors nearly identical in design to those at Fukushima Daiichi.

Certain vendors of new nuclear reactors took a different tack, opportunistically claiming that their designs were superior to the current generation of reactors and could have withstood a catastrophic event like the one that struck Fukushima. These claims were also fundamentally misleading.

The truth of the matter is that no nuclear plant, old or new, can be assumed to be able to survive any event more severe than the “design-basis accidents” that it was designed to withstand. This is little different from the design process for any engineered facility. The scope of the “design basis” of a nuclear plant is set by regulators, who determine the necessary level of safety by choosing factors such as the type, severity, and likelihood of the events that the plant must be able to survive. In addition, since the analyses that plant designers must perform to demonstrate compliance with the design basis are sometimes quite uncertain, another major consideration is the “safety margin” between the results of these analyses and the safety goals. Greater margins mean larger buffers against uncertainties that may cause outcomes to be worse than designers predict.

If a nuclear plant experiences an event that is beyond its design basis, however, then all bets are off. This is what happened at the Fukushima Daiichi Nuclear Power Station, which was subjected to a huge earthquake and a series of enormous tsunami waves less than an hour later. According to the preliminary report of the International Atomic Energy Agency (IAEA), only three of the six reactors experienced a level of shaking greater than their design bases. But the peak water level of the ensuing tsunami was about 46 feet, whereas the plant was designed to withstand a level of only 33 feet (IAEA, 2011). The resulting flood caused the failure of all but one of 12 available emergency diesel generators and damaged the plant’s electrical circuitry and other vital equipment. Coupled with the loss of external power caused by the initial earthquake, this left Units 1–5 without any AC electrical power, a condition known as station blackout. Without eventual restoration of a power source, a current-generation nuclear plant will lose the ability to provide sufficient water to keep the reactor core cool, resulting in core overheating and meltdown. This sequence of events ultimately occurred in Units 1, 2, and 3.
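The hazard behind station blackout is decay heat: even after a reactor shuts down, fission-product decay keeps generating heat for days, which is why cooling must be maintained. The scale of the problem can be sketched with the standard Way-Wigner approximation for decay heat; the 3,000-megawatt thermal power level and one year of prior operation used below are illustrative assumptions, not Fukushima-specific data.

```python
# Rough estimate of reactor decay heat after shutdown, using the
# Way-Wigner approximation: P/P0 = 0.066 * (t^-0.2 - (t + T)^-0.2),
# where t is seconds since shutdown and T is seconds of prior operation.
# The power level and operating period below are illustrative assumptions.

def decay_heat_fraction(t_s, operating_s=3.15e7):
    """Fraction of full thermal power still produced t_s seconds after shutdown."""
    return 0.066 * (t_s ** -0.2 - (t_s + operating_s) ** -0.2)

P0_MW = 3000.0  # assumed full thermal power, megawatts
for label, t in [("1 hour", 3600), ("1 day", 86400), ("10 days", 864000)]:
    frac = decay_heat_fraction(t)
    print(f"{label:>7}: {100 * frac:.2f}% of full power, about {P0_MW * frac:.0f} MW")
```

Even ten days after shutdown, this estimate yields several megawatts of heat per reactor — enough to boil off cooling water and melt fuel if no heat-removal path is available, which is why blackout duration matters so much.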

Fukushima has already revealed a number of issues that regulators around the world should have been aware of but apparently weren’t. At Fukushima, current regulatory policies failed in the following ways:

Station blackouts lasted far longer than regulators assumed.

Strategies to prevent core damage or hydrogen explosions were far less successful than expected.

Lack of accurate or functional instrumentation posed far greater challenges than projected.

Restoration of stable core cooling was far more difficult and took far longer than assumed.

Management of contaminated cooling water was a much more serious issue than expected.

Significant levels of radiation exposure occurred much farther from the release site than projected.

Current designs: Calculating the likelihood of another Fukushima

After the accident, US industry spokespeople claimed that a Fukushima-type event was very unlikely to happen in the United States because few US plants are vulnerable to tsunamis. This claim misses a vital point: Every nuclear plant is vulnerable to some degree to natural disasters like earthquakes, floods, and high winds or to deliberate disasters (including terrorist attacks), and the possibility always exists that an unexpectedly severe event will occur. The risk to the public from such occurrences depends on the likelihood of such extreme events and on how plants would respond should such events occur. Significant uncertainties exist in regard to both these factors.

For example, the Nuclear Regulatory Commission (NRC) requires that US plants identify what is known as the “safe shutdown earthquake” (SSE) and ensure that certain systems would function after such an earthquake occurs. SSEs are determined for each plant at the time of licensing, and the NRC requires plants to have an “adequate margin” to survive one (US NRC, 2011a). The NRC Fukushima near-term task force, however, concluded that “significant differences may exist between plants in the way they protect against design-basis natural phenomena and the safety margin provided” (US NRC, 2011b: 29). Not knowing the size of the safety margin makes it difficult to predict how vulnerable these plants would be to natural disasters like earthquakes that exceed their SSE. This is a major concern now, because new information on seismic hazards indicates that many nuclear plants may be subject to greater earthquake risks than they were designed to handle. According to a recent NRC assessment, there is about a 3 percent chance each year that one of the 104 US nuclear reactors will experience an earthquake that exceeds its safe shutdown earthquake.
Many of the plants at risk are in the eastern and southern United States, but the plant with the highest chance of experiencing an earthquake exceeding its SSE—nearly 0.4 percent per year—is Diablo Canyon in California. If this plant receives the 20-year license renewal it has requested from the NRC, it will have about a 13 percent chance of being subjected to an earthquake more severe than its SSE before the end of its extended operating lifetime in 2045.

At first glance, it would appear that regulators could address this problem by expanding the universe of events that nuclear plants must be designed to survive and requiring reactors to be upgraded accordingly. Both the NRC’s Fukushima near-term task force and the Union of Concerned Scientists have recommended changes along these lines. But this is easier said than done. Regulators would have to decide how far to raise the safety bar. The last time the NRC went through such an effort was after the September 11 terrorist attacks, when it determined that the level of security at nuclear plants was inadequate. The process of setting the new level of required protection, by upgrading the “design-basis threat,” was a tortuous two-year negotiation with industry that ended with a result far below the terrorist threat level actually faced by US infrastructure.

The NRC has always had difficulty processing new information suggesting that the design basis was not adequate. The 1979 Three Mile Island accident, which involved multiple system failures and operator errors leading to core damage and a hydrogen explosion, was a beyond-design-basis accident. Although the NRC subsequently did enact some new regulatory requirements addressing specific problems that came to light during the accident, it declined to strengthen requirements that would have reduced the risk of severe accidents across the board.
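The cumulative figure cited above for Diablo Canyon follows from elementary probability: an event with independent annual probability p has a 1 − (1 − p)^n chance of occurring at least once in n years. A minimal check, using the 0.4 percent annual figure from the NRC assessment and the roughly 34 years between 2011 and the end of an extended license in 2045:

```python
# Cumulative probability of at least one occurrence of an event with
# independent annual probability p over n years: 1 - (1 - p)^n.
# The 0.4%/year figure and the ~34-year horizon come from the text.

def cumulative_prob(annual_p, years):
    """Probability of at least one occurrence over the given number of years."""
    return 1.0 - (1.0 - annual_p) ** years

p = cumulative_prob(0.004, 34)
print(f"Diablo Canyon: {100 * p:.1f}% chance of exceeding its SSE by 2045")
```

The same formula applied to the fleet-wide 3 percent annual figure shows how quickly such risks compound: over a 20-year period, the chance that at least one US reactor experiences a beyond-SSE earthquake approaches one in two.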
In its 1985 policy statement on severe accidents, the NRC declared by fiat that “existing plants do not pose an undue level of risk to the public” and that “operating nuclear power plants require no further regulatory action to deal with severe accident issues unless significant new safety information arises to question whether there is adequate assurance of no undue risk” (US NRC, 1985). The policy was sharply criticized by NRC Commissioner James Asselstine, who voted against it, saying: “The commission’s action today fails to provide even the most rudimentary explanation of, or justification for, these sweeping conclusions. As a basis for rational decision-making, the commission’s severe accident policy statement is a complete failure” (US NRC, 1985).

This policy created a very high barrier to new regulations addressing severe accident risks. Because the NRC declined to expand the scope of what it designates as “adequate protection,” it cannot impose a new requirement on nuclear plants (what is known as “backfitting”) unless it finds that “there is a substantial increase in the overall protection of the public health and safety or the common defense and security to be derived from the backfit and that the direct and indirect costs of implementation for that facility are justified in view of this increased protection.” In other words, such regulations must pass a cost-benefit test, in which the benefits are interpreted by the NRC as the reduction in the number of cancer deaths that would result from the safety improvement. This rule was developed to conform to a 1981 executive order by President Ronald Reagan that blocked regulations whose costs exceeded their projected benefits. Asselstine criticized this heavy reliance on cost-benefit analysis because it was based on average values of calculated safety risks and did not take uncertainties into account (US NRC, 1985).
“Factoring into the decision the uncertainties in estimating the level of core meltdown risk would lead to a decision to search for ways to reduce the risks,” Asselstine wrote. “However, given the current political climate, there is little sympathy for backfitting existing plants. Thus, the Commission chooses to rely on a faulty number which supports the outcome they prefer and to ignore the uncertainties.”

The NRC’s reluctance to expand the somewhat arbitrary historical list of design-basis accidents has led to gaps in the way severe accidents are treated, even when new information reveals serious safety concerns. For instance, the NRC recognized decades ago that a station blackout could pose a grave danger to a nuclear plant and decided that new requirements were needed. Because such events were considered highly improbable, however, the standards the NRC imposed were weak. The NRC required that plants be able to cope with a blackout only for a short period, based on an assessment of how long it would take for power to be restored. As a result, most US plants have only four to eight hours of electric power—provided by batteries and additional generators—to cope with a blackout. Even worse, the equipment needed to cope with a station blackout does not have to be what the NRC calls “safety-related”—that is, it does not have to meet the high availability, reliability, and quality assurance standards required of equipment that mitigates design-basis accidents, such as earthquakes and floods. As a result, no US nuclear plant would have been in a position to cope with an event like Fukushima, which caused a station blackout that lasted on the order of 10 days and, in any event, would likely have destroyed the equipment in place to cope with the blackout.
A similar situation exists with regard to the equipment that the NRC required nuclear plants to acquire to be able to mitigate a 9/11-style aircraft attack that could cause loss of large areas of the plant from explosions and fire. Because the NRC determined that type of attack to be “beyond-design-basis,” the equipment and procedures were not required to be highly reliable, and NRC’s post-Fukushima inspections indeed revealed that much of this equipment would probably not be able to withstand a large seismic or flooding event either.

New reactors, old disasters, and lessons to learn

One might think it would be easier to address Fukushima-related issues in reactors that are still on the drawing board than in operating reactors, since design changes could be implemented without the need for backfitting existing structures. Because of the NRC’s reactive approach to reactor safety, however, the opportunity to implement design enhancements in next-generation reactors could be lost. The NRC’s policy on advanced reactors holds that they do not have to be safer than operating reactors, because operating reactors are already safe enough. As a result, the current crop of new reactor designs is not clearly safer than what is in use today. New reactor vendors have advertised that their reactors are significantly safer—but this turns out to be true only if the threat of extreme natural phenomena, such as large earthquakes, is not taken into account. In the absence of regulatory requirements, new reactors simply will not be designed with a sufficiently robust capacity to withstand events beyond the current design basis, because if they were, they would likely be too expensive to compete with reactors that meet only minimum standards.

For example, Westinghouse has claimed that its AP1000 reactor would be able to withstand a station blackout for 72 hours. The AP1000 is a light water reactor with passive safety features, meaning that its design-basis cooling functions do not require active systems like motor-driven pumps, relying instead on gravity-driven flow and natural convection. The plant is able to maintain core cooling without electrical power because it has a large tank of water above the reactor vessel and other systems that passively provide coolant flow for 72 hours. After 72 hours, however, the tank needs to be replenished—a task that requires electricity and operator actions.
The AP1000 would therefore not have been in a better position to withstand a 10-day station blackout than the Mark I boiling water reactors at Fukushima Daiichi. Also, Westinghouse was required to show only that the passive cooling systems would work in design-basis events, so there is no basis for assuming they would work after a beyond-design-basis natural disaster. And the NRC does not require the active equipment needed after the 72-hour period to be safety-related, so there is no guarantee it would be available and reliable after either design-basis or beyond-design-basis events. The AP1000—or any other new design—is only as robust as the set of requirements it must meet.

Some vendors of small modular reactors (SMRs) have argued that their designs also have inherent capabilities to protect against Fukushima-type accidents. SMRs are defined as reactors that have a power level of less than 400 megawatts-electric and are compatible with assembly-line manufacture. One of the main advantages of SMRs is that utilities could use them to add nuclear power in smaller increments, better matched to gradual increases in demand. The vendors claim that small reactors would be easier to cool passively than large reactors because of the lower amount of heat they generate. The vendors also say that the smaller reactors could be built underground, providing additional protection against certain natural events.

While there is a grain of truth in these claims, once again they do not tell the whole story. For instance, although underground siting could enhance protection against aircraft attacks and earthquakes, it could also have disadvantages in other circumstances. Emergency diesel generators and electrical switchgear at Fukushima Daiichi were installed below grade to reduce their vulnerability to seismic events, but this increased their susceptibility to flooding.
And in the event of a serious accident, emergency crews could have greater difficulty accessing underground reactors. Moreover, accidents affecting multiple small units at a site may cause complications that could outweigh the advantages of having lower heat-removal requirements per unit. Fukushima has demonstrated the additional challenges presented at nuclear plant sites when multiple reactors are affected. In its June 2011 report to the IAEA, the Nuclear and Industrial Safety Agency of Japan wrote, “The accident occurred at more than one reactor at the same time, and the resources needed for accident response had to be dispersed. Moreover, as two reactors shared the facilities, the physical distance between the reactors was small.… The development of an accident occurring at one reactor affected the emergency responses at nearby reactors” (Nuclear Emergency Response Headquarters, 2011: XII-5).
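The 72-hour passive-cooling window discussed above can be given a rough sanity check with an energy balance: how much water would evaporative cooling consume over three days of decay heat? The sketch below again uses the Way-Wigner decay-heat approximation and assumes an AP1000-class thermal power of 3,400 megawatts and one year of prior operation; both numbers are illustrative assumptions, not vendor data.

```python
# Back-of-envelope check on a 72-hour passive-cooling window: integrate
# decay heat over 72 hours and ask how much water would boil away if the
# heat were removed entirely by evaporation. Power level and prior
# operating period are assumed values, not AP1000 design data.

P0_W = 3.4e9    # assumed full thermal power, watts
T_OP = 3.15e7   # assumed prior operation, seconds (~1 year)
H_VAP = 2.26e6  # latent heat of vaporization of water, J/kg

def decay_heat_w(t_s):
    """Way-Wigner decay heat (watts) t_s seconds after shutdown."""
    return 0.066 * P0_W * (t_s ** -0.2 - (t_s + T_OP) ** -0.2)

# Numerically integrate decay heat over 72 hours. Start at t = 10 s to
# avoid the singularity at t = 0; the energy missed is negligible.
dt = 10.0
energy_j = 0.0
t = 10.0
while t < 72 * 3600:
    energy_j += decay_heat_w(t) * dt
    t += dt

water_m3 = energy_j / H_VAP / 1000.0  # 1,000 kg of water per cubic meter
print(f"Energy over 72 h: {energy_j:.2e} J, boiling off roughly {water_m3:.0f} m^3 of water")
```

The result lands in the thousands of cubic meters, which illustrates why the design needs a very large elevated tank for even a 72-hour window, and why extending that window to 10 days is not a trivial matter of adding a slightly bigger tank.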

A safer future

It is highly unlikely that a technological magic bullet will inoculate nuclear power against the eventuality of another Fukushima. Regulators and the public worldwide should work together toward a consensus on the level of nuclear power risk that is acceptable, and nuclear energy will have to adjust to this new, higher design basis or face obsolescence. One should heed the words of the Kemeny Commission, which was convened to examine the Three Mile Island (TMI) accident: “[T]his accident was too serious. Accidents as serious as TMI should not be allowed to occur in the future” (The President’s Commission on the Accident at Three Mile Island, 1979).

Since these words were written, four nuclear reactors have experienced accidents far more serious than the one at Three Mile Island. The world’s response to that accident was clearly inadequate to fulfill the Kemeny Commission’s mandate. If history repeats itself and regulators now take steps too timid to address the root causes of the Fukushima accident, they must bear full responsibility when the next nuclear disaster occurs. The NRC should keep this in mind as it considers its next steps in response to Fukushima.

Acknowledgements

This article is part of a special issue on the disaster that occurred at the Fukushima Daiichi Nuclear Power Station in March 2011. Additional editorial and translation services for this issue were made possible by a grant from Rockefeller Financial Services.

References