During the 13-month trial, the prospect of being found guilty struck the men as preposterous. They didn’t dismiss the situation as unserious: People had died, after all. But the notion that they were guilty of negligence — of manslaughter — simply defied reason. “Nobody really believes that I would be so stupid as to go to Italy’s most seismically active region to say to the people, ‘Don’t worry’,” Boschi told me.

But Picuti deftly used the scientists’ own work against them. He showed the court a seismic hazard map of Italy produced by INGV, where Boschi and Selvaggi worked. Using a color-coded system from deep red (highest risk) to pale green (lowest), the map shows the probability of a major earthquake over the next 50 years.

At first glance, two locations look downright dangerous: one near the southernmost part of the country, the other a wedge-shaped area that runs directly over L’Aquila. Picuti contrasted the level of risk portrayed by the map with Boschi’s “improbable” assessment.

When I asked Picuti about the map, its 50-year time horizon, and the fact that day-to-day risk of a major earthquake remains low even in the areas colored deepest red, he was dismissive. “The people hear ‘low’ or ‘improbable,’ and then think: low.” He told the judge that what the public should have heard is what it says on the map: highest.
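The gap Picuti waved away is simple arithmetic. Assuming, purely for illustration, that major quakes arrive as a Poisson process with a constant rate (a simplification relative to real hazard models), a 50-year probability collapses when converted to a single day. The 0.5 input below is an invented figure, not one taken from the INGV map:

```python
import math

def fifty_year_to_daily(p_50yr: float, days_per_year: float = 365.25) -> float:
    """Convert a 50-year exceedance probability to a single-day probability,
    assuming quakes arrive as a Poisson process with a constant annual rate."""
    # Find the annual rate lam such that P(at least one quake in 50 yr) = p_50yr:
    # 1 - exp(-50 * lam) = p_50yr  =>  lam = -ln(1 - p_50yr) / 50
    lam = -math.log(1.0 - p_50yr) / 50.0
    # Probability of at least one quake on any given day.
    return 1.0 - math.exp(-lam / days_per_year)

# Even a 50 percent chance over 50 years works out to less than
# 0.004 percent on any given day.
daily = fifty_year_to_daily(0.5)
```

Under these assumptions, “deepest red” on a 50-year map and “low risk today” are entirely compatible statements.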

But Picuti’s masterstroke was wielding a 1995 academic paper called “Forecasting When Larger Crustal Earthquakes Are Likely to Occur in Italy in the Near Future.” Using historical records, geological evidence, and the best seismic data available at the time, seismologists tried to predict earthquakes for different areas of Italy over time scales of 5, 20, and 100 years. According to the model, the probability that L’Aquila would get hit with a major earthquake within all of those ranges was 1. That is not a typo. The model predicted a major earthquake in L’Aquila with 100 percent certainty.

The lead author of the paper? Enzo Boschi.

To hammer home this point, Picuti put another INGV seismologist on the stand. The witness summarized what Boschi and his colleagues had written, but then surprised Picuti by explaining that the model was simply wrong.

A formula that produced a 100 percent chance of an earthquake occurring in the next five years would obviously give the same forecast for the next 20 or 100 years, he said. Yet no major seismic event had occurred during that initial five-year window. The quake’s non-arrival doesn’t mean that in year six it’s overdue and that in year seven it’s even more so. All it means is that the model itself is wrong. Boschi and his coauthors had even flagged their conclusion as suspect in the paper itself.
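The witness’s lesson can be sketched as a Bayesian update. The “null” alternative model and the numbers below are illustrative assumptions, not anything presented in court; the point is only that a forecast of certainty is wiped out entirely by a single quiet window:

```python
def posterior_after_quiet_window(prior: float, p_quake: float) -> float:
    """Bayesian update on a forecasting model after observing NO major quake
    in its forecast window.

    prior:   prior belief that the model is right
    p_quake: the quake probability the model forecast for the window
    The alternative "null" model (an assumption here) says the window
    is quiet for sure.
    """
    like_model = 1.0 - p_quake   # model's probability of a quiet window
    like_null = 1.0              # null model's probability of a quiet window
    evidence = prior * like_model + (1.0 - prior) * like_null
    return prior * like_model / evidence

# However strongly you believed the certainty model going in, one quiet
# five-year window drives belief in it to exactly zero: the quake is not
# "overdue," the model is dead.
posterior_after_quiet_window(0.99, 1.0)  # -> 0.0

# A hedged forecast (say 0.8) survives the same observation,
# merely losing credibility.
posterior_after_quiet_window(0.5, 0.8)   # -> about 0.17
```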

As Selvaggi watched this testimony unfold, he couldn’t help feeling hopeful. Picuti had trotted out evidence that earthquake prediction isn’t possible, and then let a highly credentialed scientist deliver a lesson in probability and the scientific method to an audience that evidently needed one.

But Selvaggi was too optimistic. On October 22, 2012, Judge Marco Billi, an athletic 43-year-old with short-cropped black hair, walked to the front of the makeshift courtroom. Italy does not use juries: The decision was for Billi alone. Eyes down, he read his verdict in a barely audible monotone. For delivering “inexact, incomplete, and contradictory information,” the scientists and engineers were found guilty of involuntary manslaughter. They each received a six-year prison sentence, pending appeal.

As Billi saw it, the attendees of that 2009 meeting were responsible for the deaths. Not of all the victims—only those who Picuti could show had a habit of fleeing their houses when there was a tremor. The science underlying Boschi’s 1995 paper was of no interest to Billi. As he later told me: “We didn’t look at the details of the model. We only looked at what he [Boschi] wrote — that is, that there was a probability of 1 that L’Aquila will have a major earthquake. That’s all. It’s Boschi’s words!”

Boschi is furious over this mind-set. It’s not only that the earlier model was faulty, that Picuti can’t understand probabilities to save his life, or even Billi’s staggering contortion of logic. It’s that we shouldn’t be here in the first place, talking about research and scientific papers, the whole point of which is to share so that others can disprove or refine what you’ve come up with. “I am willing to go to jail for this point,” he thunders. “A scientist can write whatever opinions he wants in a scientific paper and it is off limits to a judge.”

Even in the land of Berlusconi and the judicial circus of cases like Amanda Knox’s, convicting a bunch of geoscientists in the wake of a natural disaster marks a new low. What would Galileo say? But what happened in L’Aquila is a window onto how we think about, communicate, and live with risk, and about impediments to clear thinking that afflict us all.

In the winter of 1951, a group of CIA analysts filed report NIE 29–51. Its aim: to examine whether the Soviets would invade Yugoslavia. And the bottom line? “Although it is impossible to determine which course the Kremlin is likely to adopt, we believe… that an attack on Yugoslavia in 1951 should be considered a serious possibility.” Once finalized, the report made its way into the bureaucratic machine.

A few days later, a State Department official met up with the intelligence whiz whose team had composed the report. What did serious possibility mean? The CIA man, Sherman Kent, said he thought maybe there was a 65 percent chance of an invasion. But the question itself troubled him. He knew what serious possibility meant to him, but it clearly meant different things to different people. He decided to survey his colleagues.

The result was shocking. Some thought it meant there was an 80 percent chance of invasion; others put it as low as 20 percent.

Years later, Kent published an article in Studies in Intelligence that used the Yugoslavia report to illustrate the problem of ambiguity, particularly when talking about uncertainty. He even proposed a standardized approach to the language used for risk analysis — “probable” to indicate 75 percent confidence, give or take about 12 percent; “probably not” for 30 percent confidence, give or take about 10 percent; and so on.
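Kent’s proposal amounts to a lookup table from words to probability bands. Only the two bands quoted above come from his article as described here; reading “give or take” as hard interval edges, and the helper function itself, are assumptions for illustration:

```python
# Only these two bands are cited in the text; treating the "give or take"
# figures as sharp (low, high) interval edges is an assumption.
KENT_BANDS = {
    "probable": (0.63, 0.87),      # 75 percent, give or take about 12
    "probably not": (0.20, 0.40),  # 30 percent, give or take about 10
}

def words_for(p: float) -> list[str]:
    """Return every Kent band a numeric probability estimate falls into."""
    return [word for word, (lo, hi) in KENT_BANDS.items() if lo <= p <= hi]

# Kent's own 65 percent estimate for a Yugoslavia invasion lands just
# inside his later "probable" band.
words_for(0.65)  # -> ["probable"]
```

Had such a lexicon existed in 1951, the State Department official would not have needed to ask what “serious possibility” meant.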