When it comes to quantum computing, I mostly get excited about experimental results rather than ideas for new hardware. New devices, or new ways to implement old devices, may end up being useful, but we won't know for sure until the results are in. If we grade existing ideas by their usefulness, then adiabatic quantum computing has to be right up there, since you can use it to perform some computations now. And at this point, adiabatic quantum computing has the best chance of scaling up the number of qubits.

But qubits aren't everything—you also need speed. So how, exactly, do you compare speeds between quantum computers? If you begin looking into this issue, you'll quickly learn it's far more complicated than anyone really wanted it to be. Even when you can compare speeds today, you also want to be able to estimate how much better you could do with an improved version of the same hardware. This, it seems, often proves even more difficult.

It's fast, honest

Unlike in classical computing, speed itself is not so easy to define for a quantum computer. Take something like D-Wave's quantum annealer as an example: it has no system clock, and it doesn't use gates that perform specific operations. Instead, the whole computer goes through a continuous evolution from the state in which it was initialized to the state that, hopefully, contains the solution. The time that takes is called the annealing time.

At this point, you can all say, "Chris ur dumb, clearly the time from initialization to solution is what counts." Except, I used the word hopefully in that sentence above for good reason. No matter how a quantum computer is designed and operated, the readout process involves measuring the states of the qubits. That means there is a non-zero probability of getting the wrong answer.

This does not mean that a quantum computer is useless. First, for some calculations, it is possible to check a solution very efficiently. Finding prime factors is a good example: I simply multiply the factors together, and if the product isn't the number I initialized the computer with, I know it got it wrong. If the answer is wrong, I repeat the computation. When you can't efficiently check the solution, you can rely on statistics: the correct answer is the most probable outcome of any measurement of the final state. So I can run the same computation multiple times and determine the correct answer from the statistical distribution of the results.
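To make the two strategies concrete, here's a minimal Python sketch. The `run_annealer()` function is a purely hypothetical stand-in for the hardware: it returns a correct answer with some fixed probability, which is not how any real annealer API works, and all the numbers are made up for illustration.

```python
import random
from collections import Counter

N = 221  # = 13 * 17, the number we "initialized the computer with"

def run_annealer(p_correct=0.6):
    """Hypothetical stand-in for one annealing run: returns a correct
    prime factor with probability p_correct, otherwise a wrong guess."""
    if random.random() < p_correct:
        return 13
    return random.choice([11, 19, 23, 29])

# Strategy 1: the solution is cheap to check (factoring), so
# verify by multiplication/division and rerun until it checks out.
factor = run_annealer()
while N % factor != 0:
    factor = run_annealer()

# Strategy 2: no cheap check, so lean on statistics. The correct
# answer is the most probable outcome, i.e. the mode of many runs.
counts = Counter(run_annealer() for _ in range(1000))
answer, _ = counts.most_common(1)[0]
print(answer)  # almost surely 13 at these odds
```

Note that strategy 2 only works because each wrong answer is individually less likely than the right one; the mode of the distribution is what carries the signal.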

So for an adiabatic quantum computer, this means speed is the annealing time multiplied by the number of runs required to determine the most probable outcome. While not the most satisfactory answer, it's still better than nothing.

Unfortunately, these two factors are not independent of each other. During annealing, the computation requires that all the qubits stay in the ground state. However, fast changes are more likely to disturb the qubits out of the ground state—so decreasing the annealing time increases the probability of getting an incorrect result. Do the work faster, and you may need to perform the computation more times to correctly determine the most probable outcome. And as you decrease the annealing time, wrong answers will eventually become so probable that they are indistinguishable from correct answers.
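You can see this trade-off in a toy model. Suppose the success probability rises with annealing time (a made-up curve, loosely inspired by the physics) and every run also pays a fixed overhead for initialization and readout. The expected total time is then minimized at an intermediate annealing time, with both very fast and very slow runs losing out. Every number here is illustrative, not a property of any real machine.

```python
import math

TAU = 1.0         # timescale of the toy success curve (made up)
T_OVERHEAD = 1.0  # fixed per-run cost for initialization and readout (made up)

def p_success(t):
    """Toy model: slow anneals (t >> TAU) almost always stay in the
    ground state; fast ones usually get kicked out of it."""
    return 1.0 - math.exp(-t / TAU)

def expected_total_time(t):
    """Each run succeeds with probability p, so on average we need
    1/p runs, each costing the anneal time plus the fixed overhead."""
    return (t + T_OVERHEAD) / p_success(t)

for t in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"anneal time {t:4.1f} -> expected total time {expected_total_time(t):6.2f}")
```

Running this shows the sweet spot sitting near the middle of the range: anneal too fast and you waste runs, too slow and each run wastes time.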

So determining the annealing time of an adiabatic quantum computer involves a degree of trial and error. The underlying logic is that slower is probably better, but we'll go as fast as we dare. A new paper published in Physical Review Letters shows that, under the right conditions, it might actually be better to throw caution to the wind and speed up even more. However, that speed comes at the cost of high peak power consumption.

Adiabatic quantum computers ignore speed limits... or do they?

To recap, in an adiabatic quantum computer, the qubits are all placed in the ground state of some simple global environment. That environment is then modified such that the ground state is the solution to some problem that you want to solve. Now, provided that the qubits remain in the ground state as you change the environment, you will then obtain the correct solution.

The key lies in how fast you are allowed to modify the environment. If you do it very slowly, someone with a slide rule might beat you to the answer. If you do it very fast, your computation is likely to go wrong because the qubits leave the ground state. Fast modifications also require high peak power, so there is a trade-off between speed, power, and accuracy.

To understand the trade-off, let's use an example. Imagine the equivalent of a quantum ball and spring, otherwise known as the harmonic oscillator. In its lowest energy state, the oscillator is bouncing up and down with some natural frequency, which is given by the stiffness of the spring and the mass of the oscillator. In this case, changing the environment would mean increasing or decreasing the stiffness of the spring. To complete the analogy, the jumps between different quantum states increase and decrease the amplitude of oscillation, but those jumps don't change the frequency.
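In symbols, for the textbook harmonic oscillator: the natural frequency is ω = sqrt(k/m), set only by the stiffness k and the mass m, while a jump to the n-th quantum level changes the energy, E_n = ħω(n + 1/2), and hence the amplitude, without touching the frequency. A quick numerical check:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

def omega(k, m):
    """Natural frequency omega = sqrt(k / m), in rad/s."""
    return math.sqrt(k / m)

def energy(n, k, m):
    """Energy of the n-th quantum level: E_n = hbar * omega * (n + 1/2)."""
    return HBAR * omega(k, m) * (n + 0.5)

k, m = 1.0, 1.0  # arbitrary illustrative units
# A jump between levels changes the energy (and so the amplitude)...
spacing = energy(1, k, m) - energy(0, k, m)   # equals hbar * omega
# ...but the frequency depends only on stiffness and mass:
# quartering the stiffness halves the frequency.
print(omega(k, m), omega(k / 4.0, m))
```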

Next, imagine that we reduce the stiffness of the spring, making the system a bit floppier. The oscillation frequency slows, and the amplitude should also drop, but it will take a little time. If the pace of reduction is too fast, then the amplitude remains high for a moment, corresponding more closely to an excited state. As a result, the oscillator might leave the ground state.

To avoid this, we have to change the spring stiffness at a rate that is slow enough for the oscillator to bleed off the excess energy. Likewise, if we tighten the spring, the process gives energy to the oscillator. If we dump all that energy in one big lump, it may be enough to kick the oscillator into an excited state, if only briefly.

You can also think of this in terms of power. Although we might change the stiffness of the spring between two values, and therefore expend a fixed amount of energy, the power required depends on how fast we make that change. A short, sharp change requires high power, while a long, slow change requires low power. So there are three parameters to optimize: the speed of the change, the power consumed to complete the change, and the chance that the change drives the qubit out of the ground state.
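The power argument is just energy divided by time: deliver the same retuning energy over a shorter window, and the power goes up in proportion. A two-line sketch with made-up numbers:

```python
def average_power(delta_e, delta_t):
    """Same energy delivered over a window of length delta_t: P = E / t."""
    return delta_e / delta_t

DELTA_E = 1.0e-3  # joules needed to retune the spring (illustrative)

slow = average_power(DELTA_E, 1.0)     # over 1 second: about a milliwatt
fast = average_power(DELTA_E, 1.0e-6)  # over 1 microsecond: about a kilowatt
print(slow, fast)
```

The energy bill is the same either way; only the rate at which you have to pay it changes, and it's that rate the hardware has to survive.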