Editor's note: I realize that I do not correctly calculate the Bragg transmission in either the classical or the quantum case; however, it is close enough to get an idea of the differences between programming a classical and a quantum computer.

Time: non-specific 2018. Location: a slightly decrepit Slack channel.

"You know Python?"

Questions from John Timmer, the Ars Technica Ruler of All Things Science, are sometimes unexpected. If Slack could infuse letters with caution, my "Yes" would have dripped with it.

It turns out that D-Wave was unleashing its quantum optimizer (the company had just announced a new version) on the world via an application programming interface (API). Ars was being invited to try it out, but you needed to know some Python. I was up for that.

I had envisioned D-Wave producing some fantastic API that could take my trademark code-that-makes-coders-weep and turn it into quantum-optimizer-ready lines. Alas, when I got my hands on the quantum optimizer, that was not the case. I quickly found myself buried in documentation trying to figure out what exactly I was supposed to do.

I think D-Wave's press officer had in mind someone who knew enough to be able to run the pre-written examples. But I'm kind of stubborn. I had come up with three or four possible problems that I wanted to test out. I wanted to find out: could I master the process of solving those problems on the D-Wave computer? How easy is it to make the conceptual leap from classical programming to working with a quantum annealer? Were any of my problems even suitable for the machine?

To give away the stunning conclusion, the answers are: maybe not quite "master," difficult, and #notallproblems.

Choosing something to code

Despite what you may or may not think of me, I'm what you might call a practical programmer. Essentially, anyone skilled in the art of programming would wince (and quite possibly commit murder) at the sight of my Python.

But I can come up with problems that require code to solve. What I want, for instance, is something that calculates the electric fields due to a set of electrodes. Something that finds the ground state of a helium atom. Or something that calculates the growth of light intensity as a laser starts up. These are the sorts of tasks that interest me most. Going in, I had no idea if the D-Wave architecture could solve these problems.

I chose two problems that I thought might work: finding members of the Mandelbrot set and calculating the potential contours due to a set of electrodes. These also had the benefit of being problems that I could quickly solve using classical code to compare answers. But I quickly ran into trouble trying to figure out how to run these on the D-Wave machine. You need a huge shift in the way you think about problems, and I am a very straightforward thinker.

For instance, one issue I struggled with is that you are really dealing with raw binary numbers (even if they are expressed as qubits rather than bits). That means that there are, effectively, no types. Almost all of my programming experience is in solving physics problems that rely on readily available floating-point numerical types.

This forces you to think about the problem in a different way: the answer should be expressible as a binary number (preferably true or false), while all the physics (e.g., all the floating-point numbers) should be held in the coupling between qubits. I could not for the life of me figure out how to do that for either of my problems. While buried in teaching, I let the problem simmer (or possibly curdle).
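To make that shift concrete, here is a toy sketch in plain Python (not the D-Wave API, and the bias and coupling values are made up for illustration). The answer is whichever binary assignment minimizes an energy function, while all the real-valued "physics" lives in the per-qubit biases and the couplings between pairs of qubits:

```python
from itertools import product

# Hypothetical toy problem: the biases h and couplings J hold the
# real-valued "physics"; the answer is the binary assignment that
# minimizes E(x) = sum_i h[i]*x[i] + sum_(i,j) J[i,j]*x[i]*x[j].
h = {0: 0.5, 1: -1.0, 2: 0.25}    # per-qubit biases (made-up values)
J = {(0, 1): -2.0, (1, 2): 1.5}   # pairwise couplings (made-up values)

def energy(x):
    return (sum(h[i] * x[i] for i in h)
            + sum(J[i, j] * x[i] * x[j] for (i, j) in J))

# A real annealer samples low-energy states in hardware; here we
# simply brute-force all 2**3 binary assignments.
best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))
```

The point is the division of labor: the floating-point numbers go into h and J up front, and the machine hands back nothing but bits.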

After about six months, I finally hit on a problem that I was familiar with and that I might be able to solve using D-Wave's computer. Light transmission through a Bragg grating can be expressed as a binary problem: does the photon exit the filter or not? All the physics is in the coupling between qubits, while the answer is read out from the energy of the solution.

Bragg gratings

A 1D Bragg grating is a layered material. Each interface between two layers reflects a small amount of light. The total transmission through the whole structure is determined by the spacing between the interfaces. To get light through, we need the waves from different interfaces to add up in phase. The transmission spectrum of a perfect Bragg grating with 50 layers, each interface having 0.1 percent reflectivity, is shown below.

Here is the code to generate the data for that graph.

```python
import numpy as np

# Assumes ld_center (center wavelength), num_ld (number of wavelength
# samples), num_layers, layer_sep, and A (interface reflectivity) are
# already defined.
ld = np.linspace(ld_center - 3e-9, ld_center + 3e-9, num_ld)  # wavelengths
k = 2 * np.pi / ld                                            # wavenumbers
T = np.ones(ld.shape)                                         # transmission
for j in range(num_layers):
    T = T * (1 - A) * np.cos(j * k * layer_sep) ** 2
```

Here, we explicitly calculate the relative contribution of each interface in terms of the optical power that it passes on to the next interface. Constructive and destructive interference are taken into account by increasing or reducing each interface's contribution, depending on how close the layer spacing is to a half wavelength.
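As a sanity check on that approach (a sketch under the same assumptions, with a hypothetical center wavelength): when the layer spacing is exactly a half wavelength, k*layer_sep is a multiple of pi, every cos**2 factor is 1, and the only loss left is the (1 - A) factor per interface. Detune the wavelength, and the cos**2 factors start eating into the transmission:

```python
import numpy as np

A = 0.001             # 0.1 percent interface reflectivity (from the text)
num_layers = 50       # 50 layers (from the text)
ld0 = 1.55e-6         # hypothetical center wavelength
layer_sep = ld0 / 2   # half-wave spacing: reflections add in phase

def transmission(ld):
    T = 1.0
    k = 2 * np.pi / ld
    for j in range(num_layers):
        T *= (1 - A) * np.cos(j * k * layer_sep) ** 2
    return T

on_peak = transmission(ld0)          # every cos**2 factor is 1 here
off_peak = transmission(1.01 * ld0)  # detuned: cos**2 factors bite
print(on_peak, (1 - A) ** num_layers)  # these two should match
```

On the peak, the model reduces to (1 - A)**num_layers, i.e., pure interface loss with no interference penalty, which is the behavior the hack is meant to capture.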

This is a necessary hack, because the couplings between qubits can only be real-valued numbers, not complex numbers (the physics is best expressed as a complex number that captures both the amplitude and the phase of the light). Nevertheless, the output of the classical code "looks" approximately correct. The lack of sidebands is concerning and shows the model is incomplete, but that's not important for now.

The missing part of the model in the classical code is that there is no test for self-consistency. I've calculated the result based on an assumption about the way the wave will propagate. Even if that assumption is wrong, the result of the calculation ends up being the same—while the equations are based on physics, there's no way within the code to ensure they get the physics right.