Devastating—and, in some sense, unforeseen—earthquakes in Nepal, Japan, New Zealand, Haiti, and elsewhere have triggered a heated debate about the legitimacy and limitations of probabilistic seismic hazard assessment (PSHA) [see Frankel, 2013; Stein and Stein, 2014]. PSHA attempts to capture the likelihood of exceeding a specific level of shaking over any time period of interest, explicitly incorporating data uncertainty and lack of knowledge.
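The core PSHA quantity can be illustrated with the usual Poisson occurrence assumption: the probability of exceeding a given shaking level at least once in t years is 1 − exp(−λt), where λ is the annual rate of exceedance. A minimal sketch (the rate and time values are illustrative, not drawn from any published model):

```python
import math

def prob_exceedance(annual_rate: float, years: float) -> float:
    """Poisson probability of at least one exceedance in `years`,
    given an annual rate `annual_rate` of exceeding the shaking level."""
    return 1.0 - math.exp(-annual_rate * years)

# The common "2% in 50 years" design level corresponds to a return
# period of roughly 2,475 years, i.e., an annual rate of ~1/2475.
print(round(prob_exceedance(1.0 / 2475.0, 50.0), 4))  # ~0.02
```

The data uncertainty and lack of knowledge the article mentions enter through λ, which is itself built from many uncertain inputs.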

To address this debate, four workshops held in 2013–2014 at the U.S. Geological Survey (USGS) John Wesley Powell Center for Analysis and Synthesis brought together university, government, and insurance industry scientists from countries that straddle plate boundaries and those in plate interiors. Participants were invited with two goals in mind: developing tests of PSHA and other earthquake hazard assessment strategies and seeking viable alternatives that overcome the weaknesses in these strategies.

Workshop coordinators adopted a novel approach to ensure civil debate: roundtable discussions, sometimes over a group activity such as hiking. This approach could be adapted by anyone seeking to resolve scientific debates within a discipline. Three aspects of the approach proved key to running the workshops.

First, we asked each invitee to take what we called the “Powell Blood Oath”: each was welcome to argue passionately for personal views but had to present and acknowledge the weaknesses in that position as well. The oath kept everyone humble; no one grandstanded or dismissed others, because no one had all the answers. Those who could not abide by the oath turned our invitation down.

Second, we sat around the table, each participant with a laptop plugged into the projector, so that anyone could interject with figures or images from their computer by clicking a switch. No lectern, no uninterrupted talks, no fealty to the clock; everything was conversational, open, informed, and fluid. The minutes were written into an Etherpad that all could access and modify on the fly, so no single person shaped the record.

Third, we took a hike in the Rockies during the middle day of each workshop, during which the scientific conversations only deepened. Talking on a hike is less confrontational than talking around a table, so delicate issues got discussed in depth. On the climb, when short of breath, you talk less and listen better. People who were quiet around the table found themselves in deep discussions, and their views carried greater impact. At the top, even hikers with divergent views had shared an accomplishment, which brought everyone together.

Critics of PSHA and critics of the cohosts—the USGS and the Global Earthquake Model (GEM) Foundation—were invited and listened to. Those who lead the PSHA modeling for their nations saw how others are tackling similar problems with different approaches. Together, we identified the tests that are most needed to assess the value of PSHA and what tools are most needed to improve it.

Two major efforts grew out of the Powell meetings: the global earthquake activity rate (GEAR) model and a retrospective test of the U.S. National Seismic Hazard Mapping Project models.

Global Earthquake Activity Rate Model

GEAR gives the rate of earthquakes of all sizes everywhere on Earth [Bird et al., 2015]. It was constructed through a blend of the GEM strain rate model, which reflects the forces that drive fault slip, and the Global Central Moment Tensor Catalog of seismicity (the frequency of earthquakes in a given region), which records the results of fault slip.
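A hybrid of this kind can be sketched as a multiplicative (log-linear) blend of the two parent rate maps, one from seismicity and one from strain. The exponent and the toy per-cell rates below are illustrative placeholders, not the calibrated values of the published GEAR model:

```python
import numpy as np

def loglinear_blend(seis_rates, strain_rates, d=0.6):
    """Multiplicative (log-linear) blend of two earthquake rate maps:
    rate = seismicity**d * tectonics**(1 - d).  The exponent d here is
    an illustrative placeholder, not a calibrated value."""
    s = np.asarray(seis_rates, dtype=float)
    t = np.asarray(strain_rates, dtype=float)
    return s**d * t**(1.0 - d)

# Toy per-cell rates: with d = 0.5 the blend is the geometric mean.
print(loglinear_blend([1.0, 4.0], [4.0, 1.0], d=0.5))  # [2. 2.]
```

A multiplicative blend lets each parent veto the other: a cell forecast as quiet by either input ends up with a low blended rate.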

This model was built by a uniform, open, and reproducible process. It has been submitted for independent testing at the Collaboratory for the Study of Earthquake Predictability (CSEP). Because GEAR can be applied uniformly over the globe, it can serve as a reference model for regional efforts that use local fault and seismic data (see Figure 1).

Testing Successive U.S. Seismic Hazard Models Against Observed Shaking

Because PSHAs are provided to the public, a second goal to emerge from the discussion was to demonstrate their utility. One outgrowth of this goal is a retrospective test of the 1996–2014 U.S. National Seismic Hazard Mapping Project (NSHMP) models.

After the workshops, all of the strong motion and “Did You Feel It?” observations from California were pooled to create a single hazard curve, giving the probability of exceedance (shaking above a given level) as a function of ground motion. This curve was then compared with a pooled prediction curve from the models. The results are sensitive to how the data are binned and counted, but for high values of shaking (greater than 10% of the acceleration of gravity), each successive NSHMP model does a better job of matching the data.
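The observational side of such a test can be sketched as a simple empirical exceedance curve: the fraction of pooled observations whose shaking exceeds each level. The observation values below are invented for illustration; the actual study's binning and counting choices are exactly what the results are sensitive to:

```python
import numpy as np

def empirical_exceedance(observations, levels):
    """Fraction of pooled shaking observations exceeding each level.
    Both arguments are in the same units (here, fractions of g)."""
    obs = np.asarray(observations, dtype=float)
    return np.array([(obs > lvl).mean() for lvl in levels])

# Five invented peak-ground-acceleration observations, in fractions of g.
obs = [0.02, 0.05, 0.08, 0.12, 0.30]
curve = empirical_exceedance(obs, [0.01, 0.10])
# All five observations exceed 0.01 g; two of five (0.4) exceed 0.10 g.
```

Comparing such a curve against a model requires converting the model's annual rates to expected exceedance fractions over the same observation window, which is where the counting choices enter.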

Unresolved Problems of PSHA

According to the Powell workshop participants, the maximum earthquake magnitudes assigned to faults, meaning the largest events they will ever produce, are some of the least defensible elements of PSHA. We know only that the longer the observation period, the higher the observed maximum magnitude.
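This dependence on observation period follows directly from the Gutenberg–Richter magnitude distribution: under an unbounded exponential law, the expected maximum of a sample keeps growing with sample size, so a longer catalog alone cannot pin down a true ceiling. A hedged simulation (the b-value and minimum magnitude are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def gr_sample(n, b=1.0, m_min=4.0):
    """Draw n magnitudes from an unbounded Gutenberg-Richter law:
    an exponential distribution with rate b*ln(10) above m_min."""
    return m_min + rng.exponential(1.0 / (b * np.log(10.0)), size=n)

# The observed maximum tends to grow with catalog size (i.e., with
# observation period), showing no sign of an intrinsic ceiling.
for n in (10, 100, 1000):
    print(n, round(float(gr_sample(n).max()), 2))
```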

PSHA modeling typically seeks to strip out aftershocks, foreshocks, and swarms to isolate main shocks. This “declustering” is highly uncertain, leaving anywhere from 20% to 80% of the earthquakes classified as “main shocks,” depending on the algorithm. The gathered participants agreed that there should be standardized declustering algorithms and tests of whether the declustered catalog exhibits Poissonian behavior (in other words, whether earthquakes are independent of each other).
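One simple check of Poissonian behavior is the variance-to-mean ratio of event counts in fixed time windows, which should be near 1 for an independent catalog and well above 1 for a clustered one. The sketch below is a generic statistical test, not any standardized algorithm the participants endorsed:

```python
import numpy as np

def dispersion_index(event_times, window):
    """Variance-to-mean ratio of event counts in fixed time windows.
    Near 1 for an independent (Poissonian) catalog; clustering from
    aftershock sequences pushes it well above 1."""
    t = np.sort(np.asarray(event_times, dtype=float))
    edges = np.arange(t[0], t[-1] + window, window)
    counts, _ = np.histogram(t, bins=edges)
    return counts.var() / counts.mean()

# A synthetic Poisson catalog should score close to 1.
rng = np.random.default_rng(1)
times = np.cumsum(rng.exponential(1.0, size=2000))
print(round(dispersion_index(times, 20.0), 2))
```

Running the same statistic on a declustered real catalog would indicate whether the declustering actually left independent events behind.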

Regions far from the edges of tectonic plates present some of the most difficult conditions for PSHA. Since little is typically known of the faults or their slip or strain rates, the historical record of quakes is often used: the distribution of small shocks is smoothed and scaled to estimate the rate and distribution of large shocks. But do recent small shocks forecast large ones? Even if this strategy is justified, the appropriate amount of smoothing is unknown, attendees agreed.
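The smoothing step itself can be sketched as Gaussian kernel smoothing of gridded epicenter counts; the kernel width is precisely the free parameter the attendees flagged as unknown. The grid size and width below are illustrative:

```python
import numpy as np

def smooth_rates(counts, sigma):
    """Smooth gridded epicenter counts with a separable Gaussian kernel,
    the basic 'smoothed seismicity' step.  The width `sigma` (in grid
    cells) is the poorly constrained free parameter."""
    counts = np.asarray(counts, dtype=float)
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    # The 2-D Gaussian is separable: convolve rows, then columns.
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, counts)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, out)

# Ten quakes in the central cell of a 9x9 grid spread into nearby cells;
# the total rate is preserved when the kernel fits inside the grid.
grid = np.zeros((9, 9))
grid[4, 4] = 10.0
smoothed = smooth_rates(grid, sigma=1.0)
```

A wider `sigma` spreads the forecast rate over more cells; choosing it is the judgment call the paragraph describes.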

The Powell process generated new models, tests, and problems. But perhaps more important, it brought people with opposing views together to work around a table and on a trail to find common ground.

Acknowledgments

We are grateful for support from the USGS Powell Center and the GEM Foundation and for the outstanding efforts of the Powell Center’s codirector and host, Jill Baron.