Many Ars readers love to argue the details of different computer architectures. Cache implementations, pipelines, and other minutiae are all put under the microscope and declared wanting by someone (and excellent by others). From my perspective, all commercial computer architectures are the same, and you have to leave the world of silicon to find radically different computers.

And radical is what we have received from groups of Japanese and American researchers. They have used light pulses, circulating in a fiber optic racetrack, to create a computer that is very scalable—and seemingly pretty fast.

Isolation is bad

The computers we play with every day use logic gates. These gates set every bit in memory by performing a series of logic operations. To solve a problem, we first have to design an algorithm that will generate a solution. Then, that algorithm has to be translated into a series of logic operations that can be fed to the computer. There are, of course, numerous ways to optimize—dividing the problem across multiple CPUs, for instance—but at heart, it's all the same logic. It has the great benefit of being universal. Any computation is possible; you just might still be waiting for the solution when the Universe dies.

But this is not the only way to compute. Another way is through the use of Ising models. Picture a bunch of tiny magnets that are arranged in a regular two-dimensional grid; this is a two-dimensional Ising model. Each magnet can be oriented with the north pole pointing up or down, but not in between. The system is named after Ernst Ising because he asked a key question: what is the lowest energy configuration of these magnets?

In the case of two magnets, the answer is simple. There are four possible configurations: both magnets point up, both point down, or one of the two arrangements in which they point in opposite directions. The minimum energy is when they are oppositely aligned. For larger 2D arrays, however, answering this question is very difficult. The 2D array has an interesting feature that makes solving this problem more than academic. If all the magnets are coupled—meaning that each is influenced by the magnetic field generated by all of the others—and the coupling between individual magnets can be tuned, then the lowest energy configuration can be the solution to a computational problem.
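For a concrete sense of what "lowest energy configuration" means, here is a minimal Python sketch (my illustration, not from the papers) that enumerates the two-magnet case. It uses the standard Ising energy, E = −Σ J_ij·s_i·s_j, with an assumed antiferromagnetic coupling (J < 0), which favors opposite alignment:

```python
from itertools import product

def ising_energy(spins, J):
    """Energy of a fully coupled Ising system: E = -sum_{i<j} J[i][j] * s_i * s_j."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

# Two magnets with an antiferromagnetic coupling (J < 0 favors opposite spins).
J = [[0, -1], [-1, 0]]

# Enumerate all four configurations and keep the lowest-energy ones.
configs = list(product([-1, 1], repeat=2))
energies = {c: ising_energy(c, J) for c in configs}
ground = min(energies.values())
print([c for c, e in energies.items() if e == ground])
# → [(-1, 1), (1, -1)]
```

The two oppositely aligned configurations tie for the minimum, exactly as described above.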

We still have to create an algorithm that generates a solution to our problem. But now, instead of translating the algorithm into logic operations, we have to translate it into couplings between magnets. This is not necessarily straightforward, but it is usually easy compared to actually solving the problem of interest, provided we're interested in problems that take too much time to solve on a normal computer.
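To make "translating a problem into couplings" concrete, here is a toy Python sketch (my own construction, not anything from the papers) encoding number partitioning: split a list of weights into two equal piles. Assigning each weight a spin of +1 or −1 and setting J_ij = −2·w_i·w_j makes the Ising energy equal the square of the signed sum minus a constant, so the ground state is the most balanced split:

```python
from itertools import product

def couplings_for_partition(weights):
    """Number partitioning as an Ising problem: spin +1 or -1 puts a weight
    in one pile or the other. With J_ij = -2 * w_i * w_j, the energy equals
    (signed sum of weights)^2 minus a constant, so the ground state is the
    most balanced split."""
    n = len(weights)
    return [[-2 * weights[i] * weights[j] if i != j else 0 for j in range(n)]
            for i in range(n)]

def energy(spins, J):
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

weights = [3, 1, 1, 2, 2, 1]   # total is 10, so a perfect 5/5 split exists
J = couplings_for_partition(weights)

# Brute-force the ground state to check the encoding works.
best = min(product([-1, 1], repeat=len(weights)), key=lambda s: energy(s, J))
print([w for w, s in zip(weights, best) if s == 1])
```

Finding the couplings took one line of algebra; finding the ground state is the hard part, which is exactly the division of labor described above.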

Now, instead of carefully calculating each logical step, we set the couplings and shake the magnets vigorously so that they take on a random configuration. Then we gently reduce the intensity of the shaking. As the shaking slows, the magnets arrange themselves into their lowest energy configuration. At the end, the solution is contained in the orientation of each magnet. Note that there are no operations performed directly on any specific magnet, nor is there any timing signal that sets all the bits marching in lock-step. Everything happens at a pace that is governed by the nature of the magnets: big heavy magnets flip slowly, while light ones are twitchy and flip easily.
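The "shake hard, then ease off" procedure is known as annealing, and it is easy to simulate. Here is a minimal Python sketch (my own illustration, with an assumed exponential cooling schedule; the real hardware does this physically, not in software):

```python
import math
import random

def anneal(J, n, steps=20000, t_start=5.0, t_end=0.01):
    """Shake a fully coupled Ising system hard, then ease off: at high
    temperature spin flips are accepted almost freely; as the temperature
    drops, only energy-lowering flips survive."""
    spins = [random.choice([-1, 1]) for _ in range(n)]
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cooling schedule
        i = random.randrange(n)
        # Energy change from flipping spin i (with E = -sum_{i<j} J_ij s_i s_j):
        field = sum(J[i][j] * spins[j] for j in range(n) if j != i)
        dE = 2 * spins[i] * field
        if dE <= 0 or random.random() < math.exp(-dE / t):
            spins[i] = -spins[i]
    return spins

# With every pair coupled ferromagnetically (J_ij = 1), the ground state has
# all spins aligned; annealing should find it.
random.seed(0)
n = 4
J = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
print(anneal(J, n))
```

Note that no step operates on a particular "answer bit"; a random spin is nudged each time, and the schedule, not a clock, sets the pace.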

Beautiful but complex

On the face of it, computing with an Ising model seems pretty simple. But there is an embuggerance hidden in the process. Every magnet must be coupled to every other magnet in a manner that we can control. This interconnectivity is what makes the Ising model a universal computer, so you can't avoid it. Now think about how this system scales: two bits require one connection, four bits require six connections, 2,048 bits require 2,096,128 connections. This is not my definition of favorable scaling.
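The arithmetic behind those numbers is just "n choose 2": each of the n bits pairs with each of the other n − 1, and every pair is counted once. A one-liner makes the blow-up obvious:

```python
def num_couplings(n):
    """A fully interconnected n-bit Ising model needs n * (n - 1) / 2 couplings."""
    return n * (n - 1) // 2

print(num_couplings(2), num_couplings(4), num_couplings(2048))
# → 1 6 2096128
```

The number of couplings grows with the square of the number of bits, which is why building them as physical wires gets painful so quickly.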

Indeed, this problem is so difficult that in the physical arrays produced by D-Wave, complete interconnectivity was abandoned. Instead, the company chose hardware with limited connectivity that could be scaled to larger numbers of bits and still solve some useful problems. In more recent research, scientists have resorted to using large arrays of quantum logic to control the coupling between just a few quantum bits.

So the choice seemed to be either lots of bits with weak coupling or strong coupling with only a few bits. That makes two publications detailing a fully interconnected and programmable Ising model with up to 2,048 bits all the more impressive.

Using time instead of space

To overcome the problem of arranging interconnectivity in space, researchers from Japan and the US have turned to time instead. The magnetic spins are replaced by pulses of light that are circulating in a loop of fiber optic cable. The light pulses are far enough apart that they don't really know about each other. In other words, left to themselves, there is no coupling between the different pulses. The coupling is introduced artificially by measuring the state (phase, in this case) of each pulse and calculating the amplitude and phase of feedback that should be applied to each pulse. So for each lap of the circuit, the light pulses are amplified in a manner that depends on the phases of all the other pulses.

Practically speaking, this means that the phase of each pulse is measured, and a classical computer calculates how that should change the feedback used to drive all the other pulses. The computer then sets the feedback that is sent to each pulse before the pulse gets amplified. The amplification is phase sensitive, so depending on the phase and amplitude of the feedback, the phase of the pulse can flip between the equivalent of spin-up and spin-down.
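Here is a heavily simplified sketch of one lap of the loop in Python. Everything in it is my illustration rather than the authors' actual scheme: each pulse is reduced to a single signed amplitude, the feedback is a coupling-weighted sum of the measured amplitudes of the other pulses, and the phase-sensitive amplifier is modeled as a saturating tanh gain:

```python
import math

def round_trip(amplitudes, J, gain=1.1, fb_strength=0.1):
    """One lap: measure every pulse, compute coupling-weighted feedback from
    all the others, inject it, then apply a saturating phase-sensitive gain
    (modeled here as tanh)."""
    n = len(amplitudes)
    measured = list(amplitudes)  # the classical computer reads every pulse
    new = []
    for i in range(n):
        fb = fb_strength * sum(J[i][j] * measured[j]
                               for j in range(n) if j != i)
        new.append(math.tanh(gain * (measured[i] + fb)))
    return new

# Two pulses with an "antiferromagnetic" coupling: lap after lap, the
# feedback pushes their phases (signs) toward opposite values.
J = [[0, -1], [-1, 0]]
pulses = [0.01, 0.02]  # weak, nearly random starting amplitudes
for _ in range(50):
    pulses = round_trip(pulses, J)
print([1 if a > 0 else -1 for a in pulses])
```

After enough laps, the two pulses settle with opposite signs: the equivalent of two oppositely aligned magnets.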

After a certain number of laps, the feedback to each pulse stabilizes, and the pulse amplitudes and phases stabilize to fixed values; with some luck, this will correspond to the ground state. Read the state of the pulses and you've got your result.

This means that as long as the fiber loop is large enough and the driving lasers are stable enough, you can put in as many bits as you like. The researchers demonstrated up to 2,048 bits, with over 2 million connections among them. They used this system to solve a problem called MAX-CUT, which involves dividing a network (a graph) into two parts such that the number of connections between the two parts is maximized.
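For scale: MAX-CUT is trivial to state and to brute-force on a tiny graph, but the number of possible splits doubles with every added node, which is what makes 2,048 bits interesting. A small Python sketch (illustrative only):

```python
from itertools import product

def max_cut(n, edges):
    """Brute-force MAX-CUT: try every split of the n nodes into two groups
    and count the edges crossing between them."""
    best_cut, best_split = -1, None
    for split in product([0, 1], repeat=n):
        cut = sum(1 for u, v in edges if split[u] != split[v])
        if cut > best_cut:
            best_cut, best_split = cut, split
    return best_cut, best_split

# A square (nodes 0 through 3) plus one diagonal.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(max_cut(4, edges))
# → (4, (0, 1, 0, 1))
```

Four nodes means 16 splits to check; 2,048 nodes would mean 2^2048 of them, which is why exhaustive search is off the table and heuristic machines like this one are worth building.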

The researchers showed that their Ising device could find the correct solution most of the time. If you were willing to accept solutions that are good enough rather than the one optimal answer, then it performed very well. This is actually the nature of this sort of computation: you never know if you have the best possible solution, so you have to re-run the computation multiple times and compare different answers.

To directly compare the performance of their computer with a classical computer, the researchers fed the same problems to a classical computer. On average, the light-based computer came to a solution that was about as good as the classical computer's and close to the best reported for the problems they were solving. Oh, and the light-based Ising model obtained these solutions about 10 times faster than the classical computer.

Where’s the quantum?

In articles like this, I tend to spit out the word quantum like a demented Gatling gun. So what's gone wrong here? Well, the system in question is not a quantum computer. Indeed, you could practically feel the frustration of the authors. They want to claim quantum-ness, but that requires showing that the light pulses are in superposition states, coherent, and entangled with their neighbors. I won't go into what each of these means, but suffice it to say that only one—coherence—is certainly present. The way that they amplify the light pulses ensures that they remain coherent. But the others are a matter of inference, for which there is not enough data at the moment.

Certainly, there is no fundamental impediment to turning this into a quantum computer, but as the researchers acknowledge, they have some way to go to do that.

Don't think that this means they haven't achieved something special. The researchers have created a highly flexible and extensible design that can compute very hard problems faster than current computers and can probably tackle much larger problems more efficiently as well. It should be possible to increase the number of pulses in the loop, or use a longer loop and gradually increase the number of pulses. This will provide a way to measure how the quality of solutions scales with time and problem size without changing too many variables.

Science, 2016, DOI: 10.1126/science.aah4243

Science, 2016, DOI: 10.1126/science.aah5178