The goal, with Google’s quantum supremacy experiment, was to perform a contrived calculation involving 53 qubits that computer scientists could be as confident as possible really would take something like 9 quadrillion steps to simulate with a conventional computer. The qubits in Sycamore are laid out in a roughly rectangular grid, with each qubit able to interact with its neighbors. Control signals, sent by wire from classical computers outside the dilution refrigerator, tell each qubit how to behave, including which of its neighbors to interact with and when.

In other words, the device is fully programmable — that’s why it’s called a “computer.” At the end of the computation, the qubits are all measured, yielding a random string of 53 bits. Whatever sequence of interactions was used to produce that string — in the case of Google’s experiment, the interactions were simply picked at random — you can then rerun the exact same sequence again, to sample another random 53-bit string in exactly the same way, and so on as often as desired.
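That program–run–measure–repeat loop can be sketched at toy scale with a brute-force state-vector simulation. Everything below is an illustrative assumption — four qubits instead of 53, a made-up gate set rather than Sycamore’s actual gates — but the structure (random single-qubit gates, interactions between neighbors, then a measurement that samples a bitstring) mirrors the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                      # toy size; Sycamore has 53 qubits
dim = 2 ** n

# Start in the all-zeros state |0000>.
state = np.zeros(dim, dtype=complex)
state[0] = 1.0

def apply_1q(state, gate, q):
    """Apply a 2x2 unitary to qubit q of the state vector."""
    s = state.reshape([2] * n)
    s = np.moveaxis(s, q, 0)
    s = np.tensordot(gate, s, axes=1)
    s = np.moveaxis(s, 0, q)
    return s.reshape(dim)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z interaction between neighboring qubits."""
    s = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1] = 1
    idx[q2] = 1
    s[tuple(idx)] *= -1        # flip the sign where both qubits are 1
    return s.reshape(dim)

# A few layers of randomly chosen single-qubit gates plus neighbor couplings.
for layer in range(8):
    for q in range(n):
        theta, phi = rng.uniform(0, 2 * np.pi, 2)
        gate = np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
                         [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])
        state = apply_1q(state, gate, q)
    for q in range(layer % 2, n - 1, 2):   # alternate the neighbor pairings
        state = apply_cz(state, q, q + 1)

# Measuring every qubit samples a bitstring with probability |amplitude|^2.
probs = np.abs(state) ** 2
probs /= probs.sum()
samples = rng.choice(dim, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])
```

Rerunning the sampling line draws fresh bitstrings from the exact same distribution, just as rerunning the circuit on the hardware would.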

In its Nature paper, Google estimated that its sampling calculation — the one that takes 3 minutes and 20 seconds on Sycamore — would take 10,000 years for 100,000 conventional computers, running the fastest algorithms currently known. Indeed, the task was so hard, Google said, that even directly verifying the full range of the results on classical computers was out of reach for its team. Thus, to check the quantum computer’s work in the hardest cases, Google relied on plausible extrapolations from easier cases.
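The statistical check Google used in the easier cases is known as linear cross-entropy benchmarking: compute the classically simulated ideal probabilities of the bitstrings the device actually produced, and verify that those probabilities are larger, on average, than uniform chance would predict. Here is a toy sketch of the idea; the exponentially distributed (“Porter–Thomas”) stand-in for the ideal output distribution is an assumption for illustration, not a simulation of any real circuit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                     # toy size, small enough to enumerate all 2^n outcomes
dim = 2 ** n

# Stand-in for an ideal random circuit's output distribution: probabilities
# drawn from an exponential (Porter-Thomas-like) distribution, normalized.
p_ideal = rng.exponential(size=dim)
p_ideal /= p_ideal.sum()

def linear_xeb(samples, p_ideal):
    """2^n times the mean ideal probability of the observed bitstrings,
    minus 1: roughly 1 for a faithful sampler, roughly 0 for pure noise."""
    return len(p_ideal) * p_ideal[samples].mean() - 1

faithful = rng.choice(dim, size=20_000, p=p_ideal)   # ideal sampler
noisy = rng.integers(0, dim, size=20_000)            # uniform random noise
print(round(linear_xeb(faithful, p_ideal), 2))   # close to 1
print(round(linear_xeb(noisy, p_ideal), 2))      # close to 0
```

The catch, of course, is that computing `p_ideal` for a 53-qubit circuit is itself the hard simulation problem — which is why, in the hardest cases, Google had to extrapolate rather than check directly.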

IBM, which has built its own 53-qubit processor, posted a rebuttal. The company estimated that it could simulate Google’s device in a mere 2.5 days, a millionfold improvement over Google’s 10,000 years. To do so, it said, it would only need to commandeer the Oak Ridge Summit, the largest supercomputer that currently exists on earth — which IBM installed last year at Oak Ridge National Laboratory, filling an area the size of two basketball courts. (And which Google used for some of its simulations in verifying the Sycamore results.) Using this supercomputer’s eye-popping 250 petabytes of hard disk space, IBM says it could explicitly write down all 9 quadrillion of the amplitudes. Tellingly, not even IBM thinks the simulation would be especially easy — nor, as of this writing, has IBM actually carried it out. (The Oak Ridge supercomputer isn’t just sitting around waiting for jobs.)

We’re now in an era where, with heroic effort, the biggest supercomputers on earth can still maybe, almost simulate quantum computers doing their thing. But the very fact that the race is close today suggests that it won’t remain close for long. If Google’s chip had used 60 qubits rather than 53, then simulating its results with IBM’s approach would require 30 Oak Ridge supercomputers. With 70 qubits, it would require enough supercomputers to fill a city. And so on.
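The arithmetic behind that scaling is simple enough to spell out: writing down all the amplitudes of an n-qubit state means storing 2^n complex numbers, so every added qubit doubles the requirement. The 8 bytes per amplitude below (single-precision complex) is an assumption for illustration; IBM’s actual accounting may differ slightly:

```python
def amplitude_storage_petabytes(n_qubits, bytes_per_amplitude=8):
    """Disk needed to explicitly write down all 2^n amplitudes."""
    return 2 ** n_qubits * bytes_per_amplitude / 1e15

for n in (53, 60, 70):
    print(f"{n} qubits: {amplitude_storage_petabytes(n):,.0f} PB")
# 53 qubits: 72 PB
# 60 qubits: 9,223 PB
# 70 qubits: 9,444,733 PB
```

At 53 qubits the state still fits within Summit’s 250 petabytes of disk; at 60 it takes dozens of Summit-sized machines, and at 70 the numbers stop being about hardware at all.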

Is there real science behind the spectacle of these two tech titans locking antlers? Is “quantum supremacy,” divorced from practical applications, an important milestone at all? When should we expect those practical applications, anyway? Assuming Google has achieved quantum supremacy, what exactly has it proved — and is it something anyone doubted in the first place?

Let’s start with applications. A protocol that I came up with a couple years ago uses a sampling process, just like in Google’s quantum supremacy experiment, to generate random bits. While by itself that’s unimpressive, the key is that these bits can be demonstrated to be random even to a faraway skeptic, by using the telltale biases that come from quantum interference. Trusted random bits are needed for various cryptographic applications, such as proof-of-stake cryptocurrencies (environmentally friendlier alternatives to Bitcoin). Google is now working toward demonstrating my protocol; it bought the non-exclusive intellectual property rights last year.