In August 2017, Eddie Tipton was convicted of masterminding one of the biggest lottery scams in U.S. history. As the Multi-State Lottery Association’s former information security director, Tipton had access to the software that generated random numbers for the lottery. He inserted extra lines of code so that when certain conditions were met, the algorithm would follow a different path, producing a smaller, more predictable set of winning numbers. Tipton rigged six winning drawings across five U.S. states, amounting to more than $24 million in prize money.

From lotteries and elections to cryptography and quantum mechanics, randomness plays a critical role in ensuring fairness — but only if the source of randomness can be trusted at all times. This may not always be the case for a single source, which could have its own bias or could be influenced by outside forces, as seen in Tipton’s lottery rigging. And this is where decentralization comes in, providing distributed sources of unbiased, unpredictable, and verifiable randomness, which could improve reliability and make the randomness harder to compromise.

Generating randomness dates back centuries. Researchers from the University of California, Davis and the American Museum of Natural History found that ancient people would roll large, irregularly shaped items made of bone, clay, ivory, metal, or stone to determine their fate. This later evolved into games of chance involving the rolling of cubic dice — a classic example of randomness. With each roll, a die has an equal likelihood of landing on any of its six sides, but which side it will land on is unpredictable. As the researchers wrote in their study, “Gamblers may have seen dice throws as no longer determined by fate, but instead as randomizing objects governed by chance.”

Randomness transcended games of chance, and in 1951, the world’s first commercially available general-purpose computer, the Ferranti Mark 1, was born. With it came a random number generator designed by renowned computer scientist Alan Turing. One of the first uses of Ferranti Mark 1’s built-in random number generator was to write random love letters.

Yet even complex machines find it difficult to produce a seemingly simple random number. Computers thrive on predictability, following the instructions of the algorithms programmed into them. Programming randomness into a computer requires simulating it through a pseudorandom number generator, which uses a small input value known as a “seed” and then stretches that seed into a long sequence using mathematical transformations.
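The seed-stretching idea can be sketched with a linear congruential generator, a deliberately simple (and insecure) pseudorandom design. The constants below are illustrative; real systems use cryptographic generators.

```python
def lcg(seed, n):
    """Expand a small seed into n pseudorandom values."""
    state = seed
    values = []
    for _ in range(n):
        # The same mathematical transformation is applied at every step,
        # so the entire sequence is determined by the seed alone.
        state = (1103515245 * state + 12345) % 2**31
        values.append(state)
    return values

print(lcg(42, 5))  # a short "random-looking" sequence, fixed by the seed
```

The output looks random, but feeding in the same seed always reproduces it exactly.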


But the values produced by pseudorandom number generators aren’t truly unpredictable. “Each time you take the same seed, you’ll get exactly the same sequence,” says Ewa Syta, an assistant professor of computer science at Trinity College in Hartford, Connecticut. “So it can’t be perfectly random because something random isn’t supposed to be predictable or reproducible.”
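Syta’s point is easy to see with Python’s built-in generator (a Mersenne Twister, not a cryptographic one): seeding it twice with the same value reproduces the output byte for byte.

```python
import random

# Seed the generator, draw three values, then repeat with the same seed.
random.seed(7)
first_run = [random.random() for _ in range(3)]

random.seed(7)
second_run = [random.random() for _ in range(3)]

assert first_run == second_run  # same seed, exactly the same sequence
```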

To get a truly random number, you’ll need a high-entropy source. Entropy is the measure of the unpredictability of a random number. “If something has low entropy, there’s a lot of repetition in it. Repetition gives you patterns, and if you have a pattern, you can make some guesses as to what the next value in the sequence will be,” says Syta. “If I have a thousand random numbers and I’m seeing some pattern in the numbers, I could use that knowledge to predict future values.”

This makes high-entropy sources — such as natural phenomena or physical events — valuable in random number generation. The higher the entropy, the more unpredictable and the more random a value is.
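Repetition and predictability can be quantified with Shannon entropy, a standard measure of unpredictability in bits per symbol; this small sketch compares a repetitive sequence with a varied one.

```python
from collections import Counter
from math import log2

def shannon_entropy(data):
    """Shannon entropy in bits per symbol; higher means harder to predict."""
    total = len(data)
    return -sum((count / total) * log2(count / total)
                for count in Counter(data).values())

low = shannon_entropy("aaaaaaab")   # heavy repetition, low entropy
high = shannon_entropy("abcdefgh")  # 8 equally likely symbols: 3.0 bits
print(low, high)
```

The repetitive string scores well under one bit per symbol, while the varied one reaches the maximum for eight symbols.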

Some examples of random number generation using high-entropy sources include HotBits, Random.org, and the randomness beacon project at the National Institute of Standards and Technology (NIST). HotBits uses radioactive decay as its entropy source, while Random.org uses atmospheric noise. NIST’s randomness beacon — a service that generates sequences of numbers in an unpredictable pattern at regular intervals — combines entropy from at least two independent random number generators.

These projects either derive their randomness from a single source or are run by a single entity. That’s a problem because either could be exploited to yield biased results. The source or entity could generate randomness to its advantage, or it could be coerced into producing randomness that would benefit an outside body. “It becomes a single point of failure in the entire protocol,” says Syta. “If we have public randomness produced by a single source, we have to trust that source to behave well.”

To solve the issues brought about by a single source, a coalition of technologists and businesses called the League of Entropy is decentralizing randomness through a global network of beacons that generates random numbers every 60 seconds, along with the mathematical signatures to prove they haven’t been tampered with. “The randomness we’re presenting is much more difficult to bias because it requires a lot of different participants with very different structures to agree,” says Nick Sullivan, head of cryptography at the internet security company Cloudflare, one of the organizations making up the League of Entropy, which was formed in June 2019.

Given that the random numbers generated by the league are public, they can’t be used for anything that must always be kept secret, such as secure cryptographic keys. They can, however, be used for applications that require the randomness to be known publicly, such as in election audits, lotteries, scientific simulation methods, clinical or drug trial selections, and distributed ledger platforms (including cryptocurrencies and blockchain-based platforms).

The League of Entropy’s underlying cryptographic architecture is based on the open-source drand project, a distributed randomness beacon developed by Protocol Labs researcher Nicolas Gailly with the help of crypto-security researchers Philipp Jovanovic and Mathilde Raynal. What started as an intellectual challenge for these collaborators turned into a public service backed by Cloudflare and other members.

The project has two core phases: a distributed key generation phase and a threshold signature scheme. When a random value is requested from the league, its network of beacons goes through these phases before generating the value. An election board, for example, could use this value as a basis for choosing which voter districts or precincts to audit, thereby limiting any potential manipulation of the auditing process to reach a desired result.

The distributed key generation phase serves as a setup phase, wherein a private key and its associated public key are generated. No one participant owns the private key; instead, each owns a share of it. Gailly uses Cloudflare’s pizza slice analogy to demonstrate how distributed key generation works. “If every participant has their own pizza, they take a slice of this pizza and distribute it so every other participant gets a share,” he says. “And if every participant does this, then they can create a new pizza from their own slice and the different slices from the other participants.”
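The pizza analogy corresponds to additive secret sharing. The sketch below is a simplification (drand’s actual distributed key generation uses verifiable threshold sharing, and the modulus here is an arbitrary illustrative choice), but it shows how every participant ends up with a share of a joint secret that no one ever held in full.

```python
import secrets

PRIME = 2**127 - 1  # modulus for share arithmetic (illustrative choice)

def slice_up(secret, n):
    """Split one participant's secret ('pizza') into n additive slices."""
    slices = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    slices.append((secret - sum(slices)) % PRIME)  # slices sum to the secret
    return slices

n = 3
pizzas = [secrets.randbelow(PRIME) for _ in range(n)]  # one secret each
dealt = [slice_up(p, n) for p in pizzas]

# Participant i collects slice i of every pizza and sums them into a new share.
new_shares = [sum(dealt[j][i] for j in range(n)) % PRIME for i in range(n)]

# No single participant learns the joint secret, yet the shares encode it:
assert sum(new_shares) % PRIME == sum(pizzas) % PRIME
```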


This phase is followed by the threshold signature scheme, which guarantees that at least a certain number (the threshold) of entities participate in generating randomness. “Each participant generates a partial signature, and these partial signatures are assembled to create a full signature, and this signature is the randomness,” says Gailly.
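drand’s threshold phase builds on pairing-based BLS signatures, which are beyond a short sketch, but the threshold property itself can be illustrated with Shamir secret sharing over a small prime field: any `threshold` participants can jointly reconstruct a value, while fewer cannot.

```python
import secrets

PRIME = 2**31 - 1  # a small prime field, purely for illustration

def make_shares(secret, threshold, n):
    """Shamir sharing: any `threshold` of the n shares recover the secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    # Share for participant x is the polynomial evaluated at x.
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the shared secret."""
    result = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % PRIME
                den = den * (x_i - x_j) % PRIME
        result = (result + y_i * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return result

secret = 123456789
shares = make_shares(secret, threshold=3, n=5)
assert combine(shares[:3]) == secret  # any three of the five shares suffice
assert combine(shares[2:5]) == secret
```

In drand the combined object is a signature rather than the key itself, but the same idea applies: partial contributions only become useful once a threshold of them is assembled.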

Put simply, every member of the League of Entropy runs their own instance of drand, and when a random value is requested from the league’s website, the partial contributions from each member are aggregated into a single random value, which can be verified with the public key generated during the setup phase. Lotteries, for instance, could use this randomness generator to pick the day’s winning numbers. And because this randomness is publicly verifiable and not controlled by a single entity, the process and its outcomes can be trusted.

“All the historical pieces of randomness that have ever been served from the League of Entropy are public,” says Sullivan. “But you can never predict ahead of time what the next random value is going to be.”

Other members of the league include Kudelski Security, Protocol Labs, the Swiss Federal Institute of Technology (École polytechnique fédérale de Lausanne or EPFL), and the University of Chile, with contributions from Gailly and EPFL researchers Ludovic Barman and Jovanovic. Combining each member’s sources of entropy — Cloudflare’s lava lamps, EPFL’s keyboard presses and mouse clicks, the University of Chile’s seismic measurements of earthquakes, Kudelski Security’s cryptographic random number generator, and Protocol Labs’ environmental noise — leads to highly unpredictable values. When it comes to randomness, more is definitely better than one.

Looking to the future, Sullivan expects more participants to contribute to the league and more applications to use the league’s randomness beacon. For Gailly, it’s about improving the scalability of the system and creating implementations in different programming languages so it can be incorporated into a broader range of applications. “We want to have this randomness as a service,” Gailly says. “And we want it to be usable by everybody.”