In 2007, D-Wave announced with great fanfare that it had developed the world's first commercial quantum computer. Unfortunately, details were rather scarce, and it was hard to confirm that anything quantum was going on in the company's device. In the intervening time, D-Wave has backed away from its initial claims somewhat, now calling its device a quantum optimizer, and claiming that, while its device doesn't meet all the criteria to be called a quantum computer, it still offers benefits over a classical computer.

In a recent publication, researchers from D-Wave and Harvard University teamed up to use D-Wave's quantum optimizer to solve a protein folding problem. That demonstration, combined with a simulation of the device's performance, goes a long way to convincing me that D-Wave's optimizer may indeed be a quantum optimizer after all.

Folding proteins

The protein folding problem is a very difficult and very important one. Proteins are strings of amino acids that, as they are joined up, can flop around and fold up in a huge number of ways. But—and this is the kicker—the final folded shape of the protein is what allows it to perform its function. Proteins that end up folded the wrong way work less well than correctly folded ones, or don't work at all, and can even be harmful. At first glance, it seems highly improbable that a protein with a virtually infinite number of potential configurations should, with near-certainty, fold itself correctly every time.

Current thinking is that the correct configuration for a functional protein is the one that requires the least energy to hold it in place. This seems an eminently sensible idea: every time a protein is knocked around by its environment, it's likely to refold into shapes that allow it to give up energy. Over the long run, any functional protein that could be knocked out of shape and not return to its functional form would be replaced.

To test this idea, and to learn more about protein shapes generally, researchers spend a lot of time calculating protein shapes, searching for the lowest energy form. But this is a long and tedious process, requiring many computer cycles per protein.

Folding proteins using magnets

One way to solve a protein folding problem is to place the amino acids randomly on a 3D grid and let them jump around. Each jump requires a certain amount of energy to get started, but that energy, and more, may be given up if the new location requires less energy—that is, if an amino acid interacts more strongly with those that have folded up next to it. The probability of a jump into any particular configuration depends on these energy calculations.

If you compute the energy for a large number of random jumps, you can find where the protein will have settled into a low energy configuration. But, is it the lowest energy solution? Maybe, maybe not. So, you start again, but with the amino acids in new starting positions.
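To make the jump-and-accept procedure concrete, here is a minimal Python sketch. It is not the paper's actual model: the grid is 2D rather than 3D, the contact energies are made-up values for a toy two-letter alphabet, and chain connectivity between neighboring amino acids is ignored for brevity. Only the Metropolis accept/reject logic and the cooling schedule are the point.

```python
import math
import random

# Hypothetical pairwise contact energies (negative = attractive).
# These are illustrative numbers, not the paper's parameters.
CONTACT = {("H", "H"): -2.0, ("H", "P"): 0.0, ("P", "P"): -0.5}

def contact_energy(a, b):
    return CONTACT.get((a, b), CONTACT.get((b, a), 0.0))

def total_energy(positions, types):
    """Sum contact energies over pairs sitting on adjacent grid sites."""
    E = 0.0
    items = list(positions.items())
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (ia, (xa, ya)), (ib, (xb, yb)) = items[i], items[j]
            if abs(xa - xb) + abs(ya - yb) == 1:  # nearest neighbors
                E += contact_energy(types[ia], types[ib])
    return E

def metropolis_step(positions, types, T):
    """Try moving one amino acid to a random empty neighbor site,
    accepting with the Metropolis probability min(1, exp(-dE/T))."""
    i = random.randrange(len(types))
    x, y = positions[i]
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    new = (x + dx, y + dy)
    if new in positions.values():
        return  # site occupied: reject the jump outright
    old_E = total_energy(positions, types)
    positions[i] = new
    dE = total_energy(positions, types) - old_E
    if dE > 0 and random.random() >= math.exp(-dE / T):
        positions[i] = (x, y)  # uphill move rejected: restore position

def anneal(types, steps=5000, T_start=2.0, T_end=0.05):
    random.seed(1)
    positions = {i: (i, 0) for i in range(len(types))}  # stretched-out start
    for step in range(steps):
        # Geometric cooling: slowly remove the energy available for jumps.
        T = T_start * (T_end / T_start) ** (step / steps)
        metropolis_step(positions, types, T)
    return positions, total_energy(positions, types)

positions, E = anneal(["H", "P", "H", "H", "P", "H"])
```

Because the final answer depends on the random starting point and jump sequence, a real calculation repeats this whole anneal many times from fresh starting positions, exactly as described above, and keeps the lowest energy found.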

This way of calculating protein folding is very similar to how magnets arrange their orientations on a 2D grid. If you control how strongly the magnets feel each other, then you can mimic the different bonding strengths between different amino acids, and the 3D nature of the protein. Once you have the magnets set up—setting this up is not easy, and it's a remarkable technical achievement on its own—you use it to find a low energy state.

When the magnets are hot, they have a lot of energy, and can flip their orientation. As they flip, they change the magnetic field around the other magnets, causing some of them to flip. This causes more magnets to flip, and so it carries on. However, you can slowly cool the magnets so there is less and less energy available to allow them to flip. With enough cooling, they tend to get locked into a configuration. If you have cooled slowly enough, then that configuration is likely to be the lowest energy configuration. Read that configuration out, and you have the lowest energy 3D configuration of the protein that the magnets were modeling.
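The magnet picture is an Ising model, and the cooling procedure just described is simulated annealing. Here's a minimal sketch, with a generic coupling matrix standing in for the programmable magnet-to-magnet interactions (the couplings passed in below are hypothetical, not D-Wave's):

```python
import math
import random

def ising_energy(spins, J, h):
    """E = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i, spins in {-1, +1}."""
    n = len(spins)
    E = -sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            E -= J[i][j] * spins[i] * spins[j]
    return E

def simulated_annealing(J, h, steps=20000, T_start=5.0, T_end=0.01, seed=0):
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice([-1, 1]) for _ in range(n)]  # random hot start
    E = ising_energy(spins, J, h)
    for step in range(steps):
        # Geometric cooling schedule, from "hot" down to nearly frozen.
        T = T_start * (T_end / T_start) ** (step / steps)
        i = rng.randrange(n)
        spins[i] = -spins[i]  # trial flip of one "magnet"
        new_E = ising_energy(spins, J, h)
        dE = new_E - E
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            E = new_E  # accept the flip
        else:
            spins[i] = -spins[i]  # reject: flip back
    return spins, E
```

With couplings chosen to mimic the bonding strengths between particular amino acids, the frozen-in spin pattern read out at the end encodes a low energy fold.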

Now, in real life, the magnets are superconducting rings (superconducting quantum interference devices, or SQUIDs). The direction of each magnet is set by the direction in which the current circulates in the ring, and the coupling between the different magnets is not directly through their magnetic fields, but indirectly through capacitors, inductors, and other SQUIDs. This intervening hardware allows the coupling to be controlled. This method of calculating is called simulated annealing, and it works extremely well. It is, however, no faster than any other way of calculating the configuration of a protein: it is still a classical computer.

Riding a quantum horse to the rescue

So how does the quantum nature of a SQUID help? The trick is in the coupling between the different magnets. In the description I gave above, each magnet experiences the average of all the surrounding fields, so individual flips have an almost negligible effect on any other magnet. In a fully quantum description, the currents are added up, taking their phase into account, so interference between different SQUIDs can lead to cancellation or addition of their contributions, or anything in between.

Normally, we would discount phase, because the currents in each SQUID would have no fixed relationship to one another. In other words, there is no coherence. But if the SQUID array is coherent, then the interference between the different SQUIDs drives them toward the overall lowest energy solution faster than you would expect from a classical description.
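In the quantum annealing literature, this coherent picture is usually written as a time-dependent Hamiltonian (this notation is standard in the field, though it does not appear in the article itself, and sign and normalization conventions vary):

```latex
H(t) = -A(t)\sum_i \sigma_i^x \;-\; B(t)\left(\sum_{i<j} J_{ij}\,\sigma_i^z \sigma_j^z \;+\; \sum_i h_i\,\sigma_i^z\right)
```

The $\sigma^z$ terms encode the programmable couplings between the SQUID "magnets," while the $\sigma^x$ term lets each SQUID sit in a superposition of both current directions. The schedule slowly turns $A$ off and $B$ on; if coherence survives, the $\sigma^x$ term lets the array tunnel through energy barriers that a purely classical, thermally driven flip would have to climb over.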

And that brings us back to D-Wave, which has produced hardware that does simulated annealing, and claims that it is a quantum system. But it has been difficult to verify that claim.

The remarkable thing about this latest bit of work is not the protein folding—it was a rather small demonstration—but that they could use it to show that there may well be something quantum going on. The SQUID array was too small to directly simulate the six-amino acid protein; instead, the researchers broke the problem up into pieces and combined them at the end. One of those pieces was small enough that the SQUID array could be fully simulated, including its quantum behavior, on a classical computer. The researchers found that the SQUID array behaved exactly as expected if the quantum aspects were contributing, solving the problem in the time expected. Experiment and theory agree, and all is well in the world.

But, as with all things, the picture is still incomplete. What I had hoped the paper would contain was a comparison between a full quantum simulation and a classical simulation. Let's imagine for a moment that the operating device loses coherence within a few nanoseconds, and after that, everything is classical. If the quantum simulation is accurate, it will reflect the loss of coherence, give the classical results, and agree with experimental results. A simulation that intrinsically assumes a lack of coherence (in other words, a classical model) will only agree if coherence is lost.

By comparing these two simulations with experiments, we would be able to be sure that the quantum part of the quantum optimizer was important. More interestingly, we would be able to make estimates of how the coherence of the array was decaying with time and distance.

Nevertheless, I have to say that I am largely convinced that D-Wave has produced evidence that its SQUID arrays might behave as a quantum optimizer. In the past, I have been less than convinced, and rather critical of D-Wave. I still think there is more to be learned about the degree of coherence in the SQUID arrays. This demonstration shows that there is much more to be done in practical terms before the optimizer is ready for larger problems. But progress has been rapid.

Scientific Reports, 2012, DOI: 10.1038/srep00571