IBM, Google, and D-Wave tend to garner the headlines about quantum computing, but aside from a brief hubbub around the Tangle Lake quantum chip announcement earlier this year, Intel's quantum strategy has received far less attention.

This is almost certainly because there is an immediate commercial push behind IBM's Q Experience and D-Wave's machines, which have already been sold to several companies and research labs. Intel is less concerned with productizing than with refining how quantum tech can be spun up via its existing fabrication capabilities, which makes sense, even if some of those technologies could change in the next decade.

While Tangle Lake was Intel's answer to the gate-model superconducting circuits familiar from the approaches of IBM, Google, Rigetti, and others, the company is actually developing and comparing both superconducting circuits and spin qubits, with a unique emphasis on the problem of dense control logic that comes with existing devices.

While this control issue might sound like a side problem, managing all of these connections is incredibly complex and limits both the qubit count and the functional scalability of current early-stage quantum systems. Intel is designing an integrated circuit that generates the signals for the quantum devices and places all of the controls for them on the same device for better performance. As with standard silicon, fewer connections to the outside mean lower latency, better performance, and improved scalability.

Intel thinks that by moving more of the control electronics into the cryogenic units, latencies can be dramatically reduced. They can also explore more options to reduce the connectivity issues of running all the signals into and out of the fridges, with the goal of more cleanly scaling to higher qubit counts. “It is not feasible to run the two to four different signals all the way from the qubit plane up and out. We want to generate the RF controls and do the readout sensing and amplification right in the fridge to reduce the signaling in and out of the fridge from the electronics,” explains Jim Held, who heads the emerging tech group at Intel that focuses on quantum computing.

In addition to potentially lower latency, untangling the control mess is also critical for future scaling. This means more qubits, but also more stable qubits with faster control response. Since quantum systems scale exponentially in compute power (adding a single qubit effectively doubles the capacity of a system), the capability goes up, but so does the control complexity, since each qubit is individually controlled.
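The doubling argument can be made concrete with a toy calculation (a sketch, not Intel code; the function name is my own): classically describing an n-qubit state requires 2^n complex amplitudes, so every added qubit doubles the state space while the control lines for that qubit only add up linearly.

```python
def statevector_amplitudes(n_qubits):
    """Number of complex amplitudes needed to describe an n-qubit state."""
    return 2 ** n_qubits

# Each added qubit doubles the state space...
for n in (1, 2, 3, 49):  # 49 is Tangle Lake's qubit count
    print(n, statevector_amplitudes(n))

# ...while per-qubit control wiring grows only linearly -- the mismatch
# Held describes: capability doubles, control complexity just adds up.
```

Running this shows the gulf quickly: by 49 qubits the state space is already over 500 trillion amplitudes, while the control lines number only in the dozens.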

To put this in some context, consider a Skylake chip with around seven billion transistors yet only around 2,000 connections to the outside world, mostly power and ground. This paradigm of many devices but relatively few control wires is critical for silicon and is important for quantum as well. “It’s unreasonable to think we are going to get a huge number of qubits and have multiple control wires for each; it won’t scale that way. So the big challenges are qubit design, the on-chip controls, and off-chip controls to help scale qubit counts without the interconnects and wires getting out of hand. That’s a distinguishing feature of our quantum program.”
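Held's "two to four signals" per qubit makes the scaling concern easy to quantify; a rough sketch (the numbers are illustrative, assuming a naive scheme with dedicated lines per qubit):

```python
def wires_needed(n_qubits, signals_per_qubit):
    """Naive control scheme: every qubit gets its own dedicated signal lines."""
    return n_qubits * signals_per_qubit

# At two to four signals per qubit, wire count grows linearly with qubit
# count and quickly dwarfs the ~2,000 external connections of a
# Skylake-class chip -- hence Intel's push for in-fridge control electronics.
for n in (49, 1_000, 1_000_000):
    print(n, wires_needed(n, 2), wires_needed(n, 4))
```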

Intel’s own transistor and fabrication technology expertise is another quantum differentiator, especially when it comes to the second type of quantum tech it is developing: silicon qubits, or spin qubits. This is different from the work the company is already doing on the superconducting side (similar to what IBM and Google are doing quantum-wise). Spin qubits are relatively small and easy to fabricate. At its barest, a spin qubit is based on a single electron that can spin up or down, a simple quantum property of all electrons. This type of qubit looks more like a transistor in one of Intel’s own advanced technology nodes, something that puts a path to quantum manufacturability in closer reach.

This may sound more rooted in the real chip world than other quantum computing approaches, but Held says the qubits are operated in a quantum regime, meaning with either one microwave photon or a single electron. The manufacturing capabilities are the same used in other Intel chips, and the same that were used for the Tangle Lake quantum chip Intel debuted at CES earlier this year. That chip is palm-sized, with 49 qubits and 108 connections to the outside, a large device by any standard. The spin qubit approach, however, offers some interesting capabilities in terms of form factors for quantum devices. A spin qubit is (literally) one million times smaller than Tangle Lake and, if it can eventually be designed and operated at scale, would offer the opportunity to put millions of such devices in a very small space, all with existing manufacturing tech.

But there’s no need to watch the horizon for Intel’s quantum chip play. Held says any of these technologies are at least eight to ten years away from a product because the focus now is far more on research questions and on understanding how to make a better device, with the full software stack in tow.

Even with a production quantum device, Held says the quantum system will remain a co-processor in areas like HPC. He also says that other architectural options, including neuromorphic computing, are promising for solving other problems in existing systems, namely in terms of memory and data movement.

“Architecture continues to be important and the evolution of the industry and workloads means that we are exploring some architectural changes and opportunities that are perhaps a little more different and disruptive to the way programming will be done than the incremental approaches that have sufficed for a long time and across many things. Architecture will remain important and has been more visible to the software community than it has historically.”

In addition to exploring all of this in the emerging technologies group at Intel, Held says they are also tackling some of the other features of quantum processors that will make them viable as supercomputing coprocessors. These areas include connectivity, a subtle but important differentiator between the various quantum processors. Intel’s current quantum architecture uses a nearest-neighbor approach: an array of qubits with each qubit connected to four others, which is a key to error correction. The holy grail of quantum computing lies in this concept of error correction, because qubits decohere, or lose their state, over time, and Intel’s topology aims to provide correction via the feedback loop of four closely connected qubits.
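The nearest-neighbor layout described above can be sketched as a simple 2D grid adjacency (an illustration of the topology, not Intel's actual layout; the grid dimensions are arbitrary):

```python
def grid_neighbors(rows, cols):
    """Map each qubit (r, c) in a rows x cols array to its nearest neighbors.

    Interior qubits get exactly four neighbors -- the connectivity the
    article describes as key to Intel's error-correction feedback loop.
    """
    adj = {}
    for r in range(rows):
        for c in range(cols):
            adj[(r, c)] = [(r + dr, c + dc)
                           for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                           if 0 <= r + dr < rows and 0 <= c + dc < cols]
    return adj

adj = grid_neighbors(7, 7)
print(len(adj[(3, 3)]))  # interior qubit: 4 neighbors
print(len(adj[(0, 0)]))  # corner qubit: only 2
```

Note that edge and corner qubits have fewer than four neighbors, one reason topology choices matter as qubit arrays grow.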

Further, as we will discuss in more depth in a companion article on Intel’s quantum software stack, co-design is key for Intel Labs as it brings quantum theory to experiment. Instead of focusing on just the hardware, Held says building the entire stack is the best way to guide the tradeoffs and direct overall strategy. “We can understand the implications of choices as well as the opportunities to capitalize on those tradeoffs if we have the work, expertise, simulations, and other tools set up to work on everything together rather than just zeroing in on one piece of the quantum puzzle.”