Last week I was in Vancouver, to give talks at the University of British Columbia and at the American Association for the Advancement of Science annual meeting. As part of that visit, on Friday afternoon, John Preskill, John Martinis, Michael Freedman and I accepted a gracious invitation to tour the headquarters of D-Wave Systems in Burnaby (a suburb of Vancouver). We started out in a conference room, where they served us cookies and sodas. Being the mature person that I am, the possibility of the cookies being poisoned at no point crossed my mind.

Then we started the tour of D-Wave’s labs. We looked under a microscope at the superconducting chips; we saw the cooling systems used to get the chips down to 20 millikelvin. In an experience that harked back to the mainframe era, we actually walked inside the giant black cubes that D-Wave was preparing for shipment. (The machines are so large partly because of the need for cooling, and partly to let engineers go in and fix things.) Afterwards, D-Wave CTO Geordie Rose gave a 2-hour presentation about their latest experimental results. Then we all went out to dinner. The D-Wave folks were extremely cordial to us and fielded all of our questions.

In spite of my announcement almost a year ago that I was retiring as Chief D-Wave Skeptic, I thought it would be fitting to give Shtetl-Optimized readers an update on what I learned from this visit. I’ll start with three factual points before moving on to larger issues.

Point #1: D-Wave now has a 128-(qu)bit machine that can output approximate solutions to a particular NP-hard minimization problem—namely, the problem of minimizing the energy of 90-100 Ising spins with pairwise interactions along a certain fixed graph (the “input” to the machine being the tunable interaction strengths). So I hereby retire my notorious comment from 2007, about the 16-bit machine that D-Wave used for its Sudoku demonstration being no more computationally useful than a roast-beef sandwich. D-Wave does have something today that’s more computationally useful than a roast-beef sandwich; the question is “merely” whether it’s ever more useful than your laptop. Geordie presented graphs that showed D-Wave’s quantum annealer solving its Ising spin problem “faster” than classical simulated annealing and tabu search (where “faster” means ignoring the time for cooling the annealer down, which seemed fair to me). Unfortunately, the data didn’t go up to large input sizes, while the data that did go up to large input sizes only compared against complete classical algorithms rather than heuristic ones. (Of course, all this is leaving aside the large blowups that would likely be incurred in practice, from reducing practical optimization problems to D-Wave’s fixed Ising spin problem.) In summary, while the observed speedup is certainly interesting, it remains unclear exactly what to make of it, and especially, whether or not quantum coherence is playing a role.
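For readers who want to see exactly what the classical baseline is doing, here’s a minimal sketch of simulated annealing applied to the same kind of problem: minimizing E(s) = Σ J<sub>ij</sub> s<sub>i</sub> s<sub>j</sub> over spins s<sub>i</sub> ∈ {−1,+1}, with couplings J<sub>ij</sub> on a fixed graph. The graph, couplings, and cooling schedule below are illustrative toys of my own choosing—not D-Wave’s actual hardware graph, benchmark instances, or the tuned annealers used in the comparisons.

```python
import math
import random

def anneal_ising(J, n, sweeps=2000, t_hot=3.0, t_cold=0.05, seed=0):
    """Classical simulated annealing for an Ising minimization problem:
    minimize E(s) = sum_{(i,j)} J_ij * s_i * s_j, with s_i in {-1, +1}.
    J maps edges (i, j) to coupling strengths on some fixed graph."""
    rng = random.Random(seed)
    # Adjacency lists, so each trial spin flip only consults its neighbors.
    nbrs = [[] for _ in range(n)]
    for (i, j), Jij in J.items():
        nbrs[i].append((j, Jij))
        nbrs[j].append((i, Jij))
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    energy = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    best_spins, best_energy = list(spins), energy
    for sweep in range(sweeps):
        # Geometric cooling schedule from t_hot down to t_cold.
        t = t_hot * (t_cold / t_hot) ** (sweep / max(1, sweeps - 1))
        for i in range(n):
            # Local field on spin i; flipping it changes the energy by dE.
            field = sum(Jij * spins[j] for j, Jij in nbrs[i])
            dE = -2 * spins[i] * field
            # Metropolis rule: always accept downhill, sometimes uphill.
            if dE <= 0 or rng.random() < math.exp(-dE / t):
                spins[i] = -spins[i]
                energy += dE
                if energy < best_energy:
                    best_spins, best_energy = list(spins), energy
    return best_spins, best_energy

# Toy instance: a 4-cycle with antiferromagnetic couplings J_ij = +1.
# The ground state alternates signs around the cycle, with energy -4.
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
spins, e = anneal_ising(J, 4)
print(spins, e)
```

The point of the benchmarking question in the text is then simply: does the hardware annealer beat a well-implemented version of the loop above (and smarter heuristics like tabu search) at sizes where the comparison is meaningful?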

Which brings me to Point #2. It remains true, as I’ve reiterated here for years, that we have no direct evidence that quantum coherence is playing a role in the observed speedup, or indeed that entanglement between qubits is ever present in the system. (Note that, if there’s no entanglement, then it becomes extremely implausible that quantum coherence could be playing a role in a speedup. For while separable-mixed-state quantum computers are not yet known to be efficiently simulable classically, we certainly don’t have any examples where they give a speedup.) Last year, as reported on this blog, D-Wave had a nice Nature paper that reported quantum tunneling behavior in an 8-qubit system. However, when I asked D-Wave scientist Mohammad Amin, he said he didn’t think that experiment provided any evidence for entanglement between qubits.

The “obvious” way to demonstrate entanglement between qubits would be to show a Bell inequality violation. (We know that this can be done in superconducting qubits, as the Schoelkopf group at Yale among others reported it a couple years ago.) Meanwhile, the “obvious” way to demonstrate a role for quantum coherence in the apparent speedup would be gradually to “turn down” the system’s coherence (for example, by adding an interaction that constantly measured the qubits in the computational basis), and check that the annealer’s performance degraded to that of classical simulated annealing. Unfortunately, the D-Wave folks told us that neither experiment seems feasible with their current setup, basically because they don’t have arbitrary local unitary transformations and measurements available. They said they want to try to demonstrate 2-qubit entanglement, but in the meantime, are open to other ideas for how to demonstrate a quantum role in the apparent speedup with their existing setup.
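For readers who haven’t seen a CHSH-type Bell test worked out, here’s a small self-contained calculation of what such a demonstration would certify. We measure each qubit along an angle in the x-z plane and form the CHSH combination of correlations: an entangled Bell pair reaches 2√2, while no unentangled (product) state—and indeed no classical local model—can exceed 2. The state vectors and measurement angles below are the textbook optimal settings, and have nothing to do with D-Wave’s hardware.

```python
import math

def obs(theta):
    """Spin measurement along angle theta in the x-z plane:
    A(theta) = cos(theta)*Z + sin(theta)*X, with eigenvalues +/-1."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [s, -c]]

def kron(A, B):
    """Kronecker product of two 2x2 matrices -> one 4x4 matrix."""
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def expval(psi, M):
    """<psi| M |psi> for a real 4-component state vector psi."""
    Mpsi = [sum(M[i][j] * psi[j] for j in range(4)) for i in range(4)]
    return sum(psi[i] * Mpsi[i] for i in range(4))

def chsh(psi, a, a2, b, b2):
    """CHSH value S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    E = lambda ta, tb: expval(psi, kron(obs(ta), obs(tb)))
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

s = 1 / math.sqrt(2)
bell = [s, 0, 0, s]        # (|00> + |11>)/sqrt(2): maximally entangled
product = [1, 0, 0, 0]     # |00>: no entanglement

# Standard optimal CHSH settings: a=0, a'=pi/2, b=pi/4, b'=3pi/4.
angles = (0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(chsh(bell, *angles))     # 2*sqrt(2) ~ 2.828: violates the bound of 2
print(chsh(product, *angles))  # sqrt(2) ~ 1.414: within the classical bound
```

This is exactly why a Bell violation is the “smoking gun”: crossing 2 rules out every local classical explanation, regardless of what’s inside the box. It’s also why the experiment needs arbitrary local measurement settings, which is precisely what D-Wave said its current setup lacks.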

Point #3: D-Wave was finally able to clarify a conceptual point that had been bugging me for years. I—and apparently many others!—thought D-Wave was claiming that their qubits decohere almost immediately (so that, in particular, entanglement would almost certainly never be present during the computation), but that the lack of entanglement didn’t matter, for some complicated reason having to do with energy gaps. I was far from alone in regarding such a claim as incredible: as mentioned earlier, there’s no evidence that a quantum computer without entanglement can solve any problem asymptotically faster than a classical computer. However, that isn’t D-Wave’s claim. What they think is that their system decoheres almost immediately in the energy eigenbasis, but that it doesn’t decohere in the computational basis—so that, in particular, there would be entanglement at intermediate stages. If so, that would be perfectly fine from the standpoint of the adiabatic algorithm, which doesn’t need coherence in the energy eigenbasis anyway (after all, the whole point is that, throughout the computation, you want to stay as close to the system’s ground state as possible!). I understand that, given their knowledge of decoherence mechanisms, some physicists are extremely skeptical that you could have rapid decoherence in the energy basis without getting decoherence in the computational basis also. So certainly the burden is on D-Wave to demonstrate that they maintain coherence “where it counts.” But at least I now understand what they’re claiming, and how it would be compatible (if true) with a quantum speedup.
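A toy two-qubit calculation makes the distinction concrete. Take the (made-up, purely illustrative) Hamiltonian H = −X⊗X − Z⊗Z, whose eigenbasis is the Bell basis, and prepare its entangled ground state. Fully dephasing that state in the energy eigenbasis leaves it untouched—the entanglement survives—whereas fully dephasing it in the computational basis collapses it to a classical mixture. Nothing below models D-Wave’s actual device or decoherence mechanisms; it only illustrates why the two claims are so different.

```python
import math

s = 1 / math.sqrt(2)
# Ground state of H = -X(x)X - Z(x)Z: the Bell state (|00> + |11>)/sqrt(2),
# written as a density matrix in the computational basis |00>,|01>,|10>,|11>.
psi = [s, 0, 0, s]
rho = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]

def dephase(rho, basis):
    """Fully dephase rho in the given orthonormal (real) basis:
    keep only the diagonal of rho in that basis, i.e. sum_v <v|rho|v> |v><v|."""
    out = [[0.0] * 4 for _ in range(4)]
    for v in basis:
        p = sum(v[i] * rho[i][j] * v[j] for i in range(4) for j in range(4))
        for i in range(4):
            for j in range(4):
                out[i][j] += p * v[i] * v[j]
    return out

comp_basis = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# One eigenbasis of H: the four Bell states (H's middle eigenvalue is
# degenerate, so this is a valid choice of eigenbasis).
energy_basis = [[s, 0, 0, s], [s, 0, 0, -s], [0, s, s, 0], [0, s, -s, 0]]

def coherence(rho):
    """Largest off-diagonal element in the computational basis; for this
    state, nonzero off-diagonals are exactly what carries the entanglement."""
    return max(abs(rho[i][j]) for i in range(4) for j in range(4) if i != j)

print(coherence(dephase(rho, energy_basis)))  # 0.5: state unchanged
print(coherence(dephase(rho, comp_basis)))    # 0.0: reduced to a classical mixture
```

Since the adiabatic algorithm tries to track the instantaneous ground state, dephasing among energy eigenstates is relatively benign; it’s computational-basis dephasing, as in the second line of output, that would reduce the machine to a classical annealer.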

Let me now move on to three broader questions raised by the above points.

The first is: rather than constantly adding more qubits and issuing more hard-to-evaluate announcements, while leaving the scientific characterization of its devices in a state of limbo, why doesn’t D-Wave just focus all its efforts on demonstrating entanglement, or otherwise getting stronger evidence for a quantum role in the apparent speedup? When I put this question to Mohammad Amin, he said that, if D-Wave had followed my suggestion, it would have published some interesting research papers and then gone out of business—since the fundraising pressure is always for more qubits and more dramatic announcements, not for clearer understanding of its systems. So, let me try to get a message out to the pointy-haired bosses of the world: a single qubit that you understand is better than a thousand qubits that you don’t. There’s a reason why academic quantum computing groups focus on pushing down decoherence and demonstrating entanglement in 2, 3, or 4 qubits: because that way, at least you know that the qubits are qubits! Once you’ve shown that the foundation is solid, then you try to scale up. So, please support D-Wave if it wants to spend money to show Bell inequality violations, or other “smoking-gun” evidence that its qubits are working together coherently. You’re welcome, D-Wave!

The second question is one that I’ve encountered many times on the blogosphere: who cares how D-Wave’s system works, and whether it does or doesn’t exploit quantum coherence, as long as it solves practical problems faster? Sure, maybe what D-Wave is building is really a series of interesting, useful, but still basically “classical” annealing devices. Maybe the word “quantum” is functioning here as the stone in a stone soup: attracting money, interest, and talented people to build something that, while neat, ultimately doesn’t much depend on quantum mechanics at all. As long as D-Wave’s (literal!) black box solves the problem instances in such-and-such amount of time, why does it matter what’s inside?

To see the obtuseness of this question, consider a simple thought experiment: suppose D-Wave were marketing a classical, special-purpose, $10-million computer designed to perform simulated annealing, for 90-bit Ising spin glass problems with a certain fixed topology, somewhat better than an off-the-shelf computing cluster. Would there be even 5% of the public interest that there is now? I think D-Wave itself would be the first to admit the answer is no. Indeed, Geordie Rose spoke explicitly in his presentation about the compelling nature of (as he put it) “the quantum computing story,” and how it was key to attracting investment. People don’t care about this stuff because they want to find the ground states of Ising spin systems a bit faster; they care because they want to know whether or not the human race has finally achieved a new form of computing. So characterizing the device matters, goddammit! I pride myself on being willing to adjust my opinions on just about anything in response to new data (as I’ve certainly done in D-Wave’s case), but the insistence that black boxes must be opened and explanations provided is something I’ll carry to the grave.

Finally, given the skeptical-yet-positive tone of this post, some people will wonder whether I now regret my earlier, more unmitigated D-Wave skepticism. The answer is no! Asking questions is my job. I’ll give D-Wave credit whenever it answers some of the questions—as it did on this visit!—and will shift my views accordingly. But I’ll also neither stop asking nor apologize for asking, until the evidence for a quantum speedup becomes clear and indisputable (as it certainly hasn’t yet). On the other hand, I do regret the snowballing nastiness that developed as a combined result of my and other skeptics’ statements, D-Wave’s and its supporters’ statements, and the adversarial nature of the blogosphere. For the first time, I find myself really, genuinely hoping—with all my heart—that D-Wave will succeed in proving that it can do some (not necessarily universal) form of scalable quantum computation. For, if nothing else, such a success would prove to the world that my $100,000 is safe, and decisively refute the QC skeptics who, right now, are getting even further under my skin than the uncritical D-Wave boosters ever did.