Google’s Quantum Artificial Intelligence team, not content with merely sharing a D-Wave kinda-quantum computer with NASA, has announced that it will now be designing and building its own quantum computer chips. Rather than start from scratch, Google will absorb UC Santa Barbara’s quantum computing group, which recently created a superconducting five-qubit array that shows promise for scaling up to larger, commercial systems. Google, probably just behind IBM, now appears to be one of quantum computing’s largest commercial interests.

As you may know, Google has been researching potential applications of quantum computing since at least May 2013, when it bought a D-Wave quantum annealing computer with NASA. The Vesuvius chip inside the D-Wave system is quantum in some respects, but not truly quantum in the sense that most physicists would use to describe a quantum computer. Benchmarks have shown that the D-Wave system only provides small speed-ups under very specific workloads, and in some cases a standard desktop PC can actually be faster. We're not saying that Google was hoodwinked, but it's probably not a coincidence that it's now investing in a very different area of quantum computing.

Enter John Martinis who, in the words of Google's Hartmut Neven, is "the world's authority on superconducting qubits." Martinis used to be at UC Santa Barbara, but it seems he and his entire research team are joining Google's Quantum AI laboratory. Way back in October 2013, Martinis gave a talk at Google about his work on superconducting qubits (embedded below), and then in April he and his team published their latest research in Nature. At some point, Neven (who runs the Quantum AI lab) was evidently impressed enough with the research to pick up the entire team. Presumably some money was involved; I wonder what kind of compensation UCSB gets.

The latest work by Martinis’ team, which will presumably be inherited by Google as it works towards realizing a computer capable of quantum AI, consists of a reliable five-qubit array. In the image at the top of the story, the five crosses are the qubits (called Xmons internally), and the squiggly lines are the readout resonators (for checking what value is stored in each qubit). The whole thing is superconducting, which means it must be kept at cryogenic temperatures, but that isn't really unusual: qubits are finicky beasts that rapidly lose coherence at higher temperatures.


The main breakthrough of this recent work seems to be reliability. By its very nature, hardware that operates at the quantum level is prone to errors, which leads to untrustworthy results and to having to run a calculation hundreds of times to make sure you have the right answer. The superconducting five-qubit array has a fidelity of over 99%, which is good, but to make it "commercially viable" the team says it will need to push the error rate down to just "1 in 1,000."
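To see why that tenfold improvement matters, here's a back-of-envelope sketch (my own illustration, not the team's error model): if each gate operation succeeds with fidelity f and errors are independent, a circuit of n gates runs error-free with probability roughly f^n, which tells you how many times you'd have to repeat the calculation.

```python
import math

def success_probability(fidelity: float, n_gates: int) -> float:
    """Probability that every gate in an n-gate circuit succeeds,
    assuming independent per-gate errors."""
    return fidelity ** n_gates

def runs_needed(fidelity: float, n_gates: int, target: float = 0.99) -> int:
    """Repetitions needed so at least one run is error-free
    with probability `target`."""
    p = success_probability(fidelity, n_gates)
    if p >= target:
        return 1
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# A 100-gate circuit at 99% gate fidelity succeeds only ~37% of the time...
print(success_probability(0.99, 100))   # ~0.366
# ...but at 99.9% fidelity (a "1 in 1,000" error rate) it's ~90%.
print(success_probability(0.999, 100))  # ~0.905
```

Under these (simplified) assumptions, the 99% chip needs around 11 repetitions of a 100-gate circuit to be 99% sure of one clean run, while the "1 in 1,000" target drops that to 2, which is the kind of gap that separates a lab demo from something commercially viable.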

If you’re looking for more details on UCSB’s Xmons, there’s a slide deck by Martinis [PDF] that goes into the structure of the qubits and how the team made them so reliable.

For more information on why Google is even investing in quantum computing in the first place, the video below is pretty good. It focuses on the D-Wave (the video was made last year), but all the general ideas are the same. In short, though, Google just wants to make sure it’s ready for the future, when classical computers simply might not have enough oomph to handle all of the data and calculations required by advanced AI, self-driving vehicles, robots, and so on.