Imagine two convex shapes, such as a rectangle and a circle, centered on a point that serves as the target. Darts thrown at the target will land in a bell curve, or “Gaussian distribution,” of positions around the center point. The Gaussian correlation inequality says that the probability that a dart will land inside both the rectangle and the circle is always at least as high as the individual probability of its landing inside the rectangle multiplied by the individual probability of its landing inside the circle. In plainer terms, because the two shapes overlap, striking one increases your chances of also striking the other. The same inequality was conjectured to hold for any two convex shapes, in any number of dimensions, that are symmetric about a common center point.
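In symbols (a standard way of stating the conjecture, not the article's own notation), the claim reads:

```latex
% For any centered Gaussian measure \mu on \mathbb{R}^n and any convex sets
% K, L \subseteq \mathbb{R}^n that are symmetric about the origin:
\mu(K \cap L) \;\ge\; \mu(K)\,\mu(L)
```

Here $K$ and $L$ play the roles of the rectangle and the circle, and $\mu$ measures the probability that a Gaussian-distributed dart lands in a given region.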

Special cases of the GCI have been proved — in 1977, for instance, Loren Pitt of the University of Virginia established it as true for two-dimensional convex shapes — but the general case eluded all mathematicians who tried to prove it. Pitt had been trying since 1973, when he first heard about the inequality over lunch with colleagues at a meeting in Albuquerque, New Mexico. “Being an arrogant young mathematician … I was shocked that grown men who were putting themselves off as respectable math and science people didn’t know the answer to this,” he said. He locked himself in his motel room, sure that he would prove or disprove the conjecture before coming out. “Fifty years or so later I still didn’t know the answer,” he said.

Despite hundreds of pages of calculations leading nowhere, Pitt and other mathematicians felt certain — and took his 2-D proof as evidence — that the convex geometry framing of the GCI would lead to the general proof. “I had developed a conceptual way of thinking about this that perhaps I was overly wedded to,” Pitt said. “And what Royen did was kind of diametrically opposed to what I had in mind.”

Royen’s proof harkened back to his roots in the pharmaceutical industry, and to the obscure origin of the Gaussian correlation inequality itself. Before it was a statement about convex symmetrical shapes, the GCI was conjectured in 1959 by the American statistician Olive Dunn as a formula for calculating “simultaneous confidence intervals,” or ranges that multiple variables are all estimated to fall in.

Suppose you want to estimate the weight and height ranges that 95 percent of a given population fall in, based on a sample of measurements. If you plot people’s weights and heights on an x–y plot, the weights will form a Gaussian bell-curve distribution along the x-axis, and heights will form a bell curve along the y-axis. Together, the weights and heights follow a two-dimensional bell curve. You can then ask, what are the weight and height ranges — call them −w < x < w and −h < y < h — such that 95 percent of the population will fall inside the rectangle formed by these ranges?
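As a concrete baseline, here is a sketch in Python of how one could compute the half-width w if weight and height were independent standard normal variables — an assumption made only for this illustration; as the article goes on to explain, real weights and heights are correlated:

```python
import math
from statistics import NormalDist

std_normal = NormalDist()  # weight and height standardized to mean 0, sd 1 (illustrative)

# If the two variables were independent, the rectangle (-w, w) x (-w, w)
# would cover 95 percent of the population exactly when each marginal
# interval covers sqrt(0.95) of its own one-dimensional bell curve.
target = 0.95
per_axis = math.sqrt(target)                  # ≈ 0.9747 per axis
w = std_normal.inv_cdf((1 + per_axis) / 2)    # half-width in standard deviations
print(f"half-width w = {w:.3f} standard deviations")  # ≈ 2.24
```

The same calculation extends to any number of independent variables by taking the n-th root of the target coverage.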

If weight and height were independent, you could just calculate the individual odds of a given weight falling inside −w < x < w and a given height falling inside −h < y < h, then multiply them to get the odds that both conditions are satisfied. But weight and height are correlated. As with darts and overlapping shapes, if someone’s weight lands in the normal range, that person is more likely to have a normal height. Dunn, generalizing an inequality posed three years earlier, conjectured the following: The probability that both Gaussian random variables will simultaneously fall inside the rectangular region is always greater than or equal to the product of the individual probabilities of each variable falling in its own specified range. (This can be generalized to any number of variables.) If the variables are independent, then the joint probability equals the product of the individual probabilities. But any correlation between the variables causes the joint probability to increase.
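Dunn's inequality is easy to probe numerically. Below is a minimal Monte Carlo sketch in Python, using an assumed correlation of 0.6 and unit half-widths — illustrative values, not figures from the text:

```python
import math
import random

random.seed(42)

rho = 0.6          # assumed correlation between the two variables
w = h = 1.0        # half-widths of the rectangular region (illustrative)
n = 200_000        # Monte Carlo sample size

inside_both = inside_x = inside_y = 0
s = math.sqrt(1 - rho ** 2)
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    y = rho * x + s * random.gauss(0.0, 1.0)  # standard normal, correlated with x
    in_x = abs(x) < w
    in_y = abs(y) < h
    inside_x += in_x
    inside_y += in_y
    inside_both += in_x and in_y

joint = inside_both / n
product = (inside_x / n) * (inside_y / n)
print(f"joint = {joint:.4f}, product of marginals = {product:.4f}")
# The GCI says joint >= product; with rho = 0.6 the gap is clearly visible.
```

Setting rho to 0.0 makes the two estimates agree up to sampling noise, matching the independent case described above.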

Royen found that he could generalize the GCI to apply not just to Gaussian distributions of random variables but to more general statistical spreads related to the squares of Gaussian distributions, called gamma distributions, which are used in certain statistical tests. “In mathematics, it occurs frequently that a seemingly difficult special problem can be solved by answering a more general question,” he said.
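The gamma connection rests on a standard background fact: the square of a standard Gaussian variable follows a gamma distribution (shape 1/2, scale 2 — the chi-squared distribution with one degree of freedom). A quick numerical check of that fact, with illustrative sample sizes:

```python
import random

random.seed(7)
n = 100_000

# Square each draw from a standard normal, and draw directly from the
# matching gamma distribution (shape 1/2, scale 2).
squared_normals = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]
gamma_draws = [random.gammavariate(0.5, 2.0) for _ in range(n)]

frac_sq = sum(v < 1.0 for v in squared_normals) / n
frac_gamma = sum(v < 1.0 for v in gamma_draws) / n
print(f"P(Z^2 < 1) = {frac_sq:.3f}, P(Gamma(1/2, 2) < 1) = {frac_gamma:.3f}")
# Both fractions should be close to 0.683, the probability that |Z| < 1.
```

This is only the entry point to Royen's argument, not the argument itself; his proof works with multivariate gamma distributions generalizing this one-dimensional correspondence.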