Say the words “quantum supremacy” at a gathering of computer scientists, and eyes will likely roll. The phrase refers to the idea that quantum computers will soon cross a threshold where they’ll perform with relative ease tasks that are extremely hard for classical computers. Until recently, these tasks were thought to have little real-world use, hence the eye rolls.

But now that Google’s quantum processor is rumored to be close to reaching this goal, imminent quantum supremacy may turn out to have an important application after all: generating pure randomness.

Randomness is crucial for almost everything we do with our computational and communications infrastructure. In particular, it’s used to encrypt data, protecting everything from mundane conversations to financial transactions to state secrets.

Genuine, verifiable randomness — think of it as the property possessed by a sequence of numbers that makes it impossible to predict the next number in the sequence — is extremely hard to come by.

That could change once quantum computers demonstrate their superiority. Those first tasks, initially intended to simply show off the technology’s prowess, could also produce true, certified randomness. “We are really excited about it,” said John Martinis, a physicist at the University of California, Santa Barbara, who heads Google’s quantum computing efforts. “We are hoping that this is the first application of a quantum computer.”

Randomness and Entropy

Randomness and quantum theory go together like thunder and lightning. In both cases, the former is an unavoidable consequence of the latter. In the quantum world, systems are often said to be in a combination of states — in a so-called “superposition.” When you measure the system, it will “collapse” into just one of those states. And while quantum theory allows you to calculate probabilities for what you’ll find when you do your measurement, the particular result is always fundamentally random.

Physicists have been exploiting this connection to create random-number generators. These all rely on measurements of some kind of quantum superposition. And while these systems are more than sufficient for most people’s randomness needs, they can be hard to work with. In addition, it’s extremely difficult to prove to a skeptic that these random-number generators really are random. And finally, some of the most effective methods for generating verifiable randomness require finicky setups with multiple devices separated by great distances.

One recent proposal for how to pull randomness out of a single device — a quantum computer — exploits a so-called sampling task, which will be among the first tests of quantum supremacy. To understand the task, imagine you are given a box filled with tiles. Each tile has a few 1s and 0s etched onto it — 000, 010, 101 and so on.

If there are just three bits, there are eight possible options. But there can be multiple copies of each labeled tile in the box. There might be 50 tiles labeled 010 and 25 labeled 001. This distribution of tiles determines the likelihood that you’ll randomly pull out a certain tile. In this case, you’re twice as likely to pull out a tile labeled 010 as you are to pull out a tile labeled 001.
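The box of tiles is easy to simulate classically at this small scale. Here is a minimal Python sketch; the counts for 010 and 001 come from the example above, while the other tile counts are invented for illustration:

```python
import random
from collections import Counter

# A toy "box of tiles": each 3-bit label with its multiplicity.
# The 010 and 001 counts match the example; the rest are made up.
tiles = {"010": 50, "001": 25, "000": 15, "111": 10}

labels = list(tiles)
weights = [tiles[label] for label in labels]

# Draw 100,000 tiles at random, weighted by how many copies are in the box.
draws = Counter(random.choices(labels, weights=weights, k=100_000))
# "010" comes up roughly twice as often as "001"
```

The weights are what make this a sampling task rather than a uniform draw: the distribution of tiles, not the number of distinct labels, fixes each outcome's probability.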

A sampling task involves a computer algorithm that does the equivalent of reaching into a box with a certain distribution of tiles and randomly extracting one of them. The higher the probability specified for any tile in the distribution, the more likely it is that the algorithm will output that tile.

Of course, an algorithm isn’t going to reach into a literal box and pull out tiles. Instead, it will randomly output a binary number that’s, say, 50 bits long, after being given a distribution that specifies the desired probability for each possible 50-bit output string.

For a classical computer, the task becomes exponentially harder as the number of bits in the string gets larger. But for a quantum computer, the task is expected to remain relatively straightforward, whether it involves five bits or 50.

The quantum computer starts with all its quantum bits — qubits — in a certain state. Let’s say they all start at 0. Just as classical computers act on classical bits using so-called logic gates, quantum computers manipulate qubits using the quantum equivalent, known as quantum gates.

But quantum gates can put qubits into strange states. For example, one kind of gate can put a qubit that starts with an initial value 0 into a superposition of 0 and 1. If you were to then measure the state of the qubit, it would collapse randomly into either 0 or 1 with equal probability.
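That gate (the Hadamard gate) can itself be simulated classically for a single qubit by tracking the two amplitudes directly. A minimal sketch, with the random collapse played by an ordinary random-number call:

```python
import random

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (amp0, amp1)."""
    amp0, amp1 = state
    r = 2 ** -0.5
    return (r * (amp0 + amp1), r * (amp0 - amp1))

def measure(state):
    """Collapse the superposition: 0 or 1 with probability |amplitude|^2."""
    amp0, _ = state
    return 0 if random.random() < abs(amp0) ** 2 else 1

qubit = (1.0, 0.0)                 # starts with the value 0
qubit = hadamard(qubit)            # now an equal superposition of 0 and 1
ones = sum(measure(qubit) for _ in range(10_000))
# ones lands near 5,000: each measurement is an unbiased coin flip
```

On a real device the collapse is genuinely random; here a pseudorandom generator merely stands in for it, which is exactly the gap the protocols in this article are meant to close.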

Even more bizarrely, quantum gates that act on two or more qubits at once can cause the qubits to become “entangled” with each other. In this case, the states of the qubits become intertwined, so that the qubits can now only be described using a single quantum state.

If you put a bunch of quantum gates together, then have them act on a set of qubits in some specified sequence, you’ve created a quantum circuit. In our case, to randomly output a 50-bit string, you can build a quantum circuit that puts 50 qubits, taken together, into a superposition of states that captures the distribution you’d like to re-create.

When the qubits are measured, the entire superposition will collapse randomly to one 50-bit string. The probability that it’ll collapse to any given string is dictated by the distribution that is specified by the quantum circuit. Measuring the qubits is akin to reaching blindfolded into the box and randomly sampling one string from the distribution.
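A two-qubit version of this picture can be simulated exactly. The sketch below (a scaled-down stand-in for the 50-qubit circuit, using a Hadamard gate followed by a CNOT gate) builds an entangled superposition, then samples measurement outcomes from it:

```python
import random
from collections import Counter

# Two-qubit statevector over the basis states 00, 01, 10, 11.
state = [1.0, 0.0, 0.0, 0.0]                    # both qubits start at 0

def hadamard_first(s):
    """Hadamard on the first qubit: mixes the pairs of amplitudes
    whose basis states differ in that bit."""
    r = 2 ** -0.5
    return [r * (s[0] + s[2]), r * (s[1] + s[3]),
            r * (s[0] - s[2]), r * (s[1] - s[3])]

def cnot(s):
    """CNOT, first qubit controlling: flips the second qubit when the
    first is 1, i.e. swaps the 10 and 11 amplitudes."""
    return [s[0], s[1], s[3], s[2]]

def sample(s):
    """Measure both qubits: one basis string, probability |amplitude|^2."""
    probs = [abs(a) ** 2 for a in s]
    return random.choices(["00", "01", "10", "11"], weights=probs)[0]

state = cnot(hadamard_first(state))             # an entangled Bell state
counts = Counter(sample(state) for _ in range(10_000))
# Only "00" and "11" ever appear, each about half the time: the
# entangled qubits collapse together, and the string is random.
```

The catch, as the article notes, is that this amplitude-tracking approach needs exponentially more memory as qubits are added; at 50 qubits the list would have about 10^15 entries, which is why the quantum device is doing something a classical machine cannot easily imitate.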

How does this get us to random numbers? Crucially, the 50-bit string sampled by the quantum computer will have a lot of entropy, a measure of disorder or unpredictability, and hence randomness. “This might actually be kind of a big deal,” said Scott Aaronson, a computer scientist at the University of Texas, Austin, who came up with the new protocol. “Not because it’s the most important application of quantum computers — I think it’s far from that — rather, because it looks like probably the first application of quantum computers that will be technologically feasible to implement.”

Aaronson’s protocol to generate randomness is fairly straightforward. A classical computer first gathers a few bits of randomness from some trusted source and uses this “seed randomness” to generate the description of a quantum circuit. The random bits determine the types of quantum gates and the sequence in which they should act on the qubits. The classical computer sends the description to the quantum computer, which implements the quantum circuit, measures the qubits, and sends back the 50-bit output string. In doing so, it has randomly sampled from the distribution specified by the circuit.

Now repeat the process over and over — for example, 10 times for each quantum circuit. The classical computer uses statistical tests to ensure that the output strings have a lot of entropy. Aaronson has shown, partly in work published with Lijie Chen and partly in work yet to be published, that under plausible assumptions about the computational hardness of such sampling tasks, no classical computer can generate that much entropy in anywhere near the time it would take a quantum computer to randomly sample from a distribution. After the checks, the classical computer pastes together all the 50-bit output strings and feeds them to a well-known classical algorithm. “It produces a long string that is nearly perfectly random,” Aaronson said.
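The shape of that loop can be sketched in a few lines of Python. Everything here is a hedged stand-in: the function names are hypothetical, a hash plays the role of deriving a circuit from the seed, ordinary pseudorandomness fakes the quantum sampler, and SHA-256 fills in for the well-known classical post-processing algorithm:

```python
import hashlib
import random
import secrets

def circuit_description(seed, round_index):
    """Hypothetical stand-in: derive a circuit description from the seed.
    In the real protocol the seed bits pick the gates and their order."""
    return hashlib.sha256(seed + round_index.to_bytes(4, "big")).hexdigest()

def quantum_sample(description):
    """Stand-in for the quantum computer: one 50-bit output string.
    A real device would sample the distribution its circuit defines;
    here a classical pseudorandom generator merely fakes the shape."""
    return format(random.getrandbits(50), "050b")

seed = secrets.token_bytes(16)              # a few trusted seed bits
samples = []
for round_index in range(8):                # a fresh circuit each round...
    description = circuit_description(seed, round_index)
    for _ in range(10):                     # ...sampled, say, 10 times
        samples.append(quantum_sample(description))

# (The statistical entropy checks on the samples would happen here.)
# SHA-256 stands in for the final classical algorithm that condenses
# the raw samples into a nearly perfectly random string.
output = hashlib.sha256("".join(samples).encode()).hexdigest()
```

The point of the real protocol is precisely what this sketch cannot deliver: because the stand-in sampler is classical, its output carries no certified entropy, whereas a genuine quantum sampler provably would.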

The Quantum Trapdoor

Aaronson’s protocol is best suited for quantum computers with about 50 to 100 qubits. As the number of qubits in a quantum computer passes this threshold, it becomes computationally intractable for even classical supercomputers to use the protocol. This is where another scheme for generating verifiable randomness using quantum computers enters the picture. It uses an existing mathematical technique with a forbidding name: a trapdoor claw-free function. “It sounds much worse than it is,” said Umesh Vazirani, a computer scientist at the University of California, Berkeley, who devised the new strategy along with Zvika Brakerski, Paul Christiano, Urmila Mahadev and Thomas Vidick.

Imagine a box again. Instead of reaching in and extracting a string, this time you drop in an n-bit string, call it x, and out pops another n-bit string. The box is somehow mapping an input string to an output string. But the box has a special property: For every x, there is another input string y that generates the same output string.

In other words, there exist two distinct input strings — x and y — for which the box returns the same output string, z. This triplet of x, y and z is called a claw. The box, in computer science speak, is a function. The function is easy to compute, meaning that given x or y, it’s easy to calculate z. But if you are only given x and z, finding y — and hence the claw — is effectively impossible, even for a quantum computer.
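The claw structure itself is easy to illustrate with a toy function. The one below, which simply drops the leading bit, is trivially invertible and so has none of the cryptographic hardness of a real trapdoor claw-free function, but it shows what a claw is:

```python
def f(bits):
    """Toy 2-to-1 map: drop the leading bit of an n-bit string.
    Every output z has exactly two preimages, "0" + z and "1" + z.
    Unlike a real trapdoor claw-free function, it is trivial to invert."""
    return bits[1:]

x = "0101101"
y = "1101101"
z = f(x)
assert f(y) == z    # two distinct inputs, one output: the claw (x, y, z)
```

In a genuine trapdoor claw-free function, the preimage pairs are scrambled so that finding y from x and z is infeasible without the secret trapdoor, which is what makes the construction useful.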

Update June 21, 2019: This article has been updated to include a reference to Aaronson’s unpublished work.