Contrary to current trends, the CPU may get bigger in the future. Yes, CPUs are physically larger today than they were in the past, but they also pack in more transistors. The future may involve larger CPUs but with a much lower density of transistors. Why? Because of optics.

The idea of purely optical computers—and hybrid electronic-optical computers—is not new. But a set of recent advances is the first time I’ve thought we might be entering an era where some functions beyond long-distance communication will be handled optically.

Have you seen the light?

There are two properties of optical computers that make them attractive. The first is that they are naturally fast: light pulses travel at (yes) the speed of light. The second is that when light switches light—the optical equivalent of a transistor—the switch happens very quickly (think femtoseconds, which are 10^-15 seconds). These two properties combine to make optical computers much faster than electronic computers.

The downsides are related directly to the upside. Using light to switch light is generally inefficient, meaning that you spend a lot of energy to compute. Likewise, light travels fast, but it also spreads out, meaning that components have to be separated by large distances.

The middle ground is a hybrid device. Light carries the information, but switching is performed electronically. Essentially, the light has to be absorbed to generate a current. The generated current is then used to modulate another optical signal to create an optical transistor.

Materials capable of absorbing the light (and creating electrons) would normally be quite large, so that, from the point of view of the electrons, the absorber behaves like a large capacitor. The electronic response is limited by the time it takes to charge and discharge that capacitor. The same story repeats itself when it comes to modulating the flow of light: a block of material has to charge and discharge.
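To see why capacitance is the bottleneck, a quick back-of-envelope sketch helps. The numbers below are illustrative assumptions (a typical 50-ohm load and a plausible capacitance for a conventional detector), not figures from the paper:

```python
# Illustrative RC time-constant estimate (values are assumptions, not from the paper).
# A photodetector's electrical bandwidth is roughly f = 1 / (2 * pi * R * C).
import math

R = 50        # load resistance in ohms (a common assumption)
C = 100e-15   # 100 fF, a plausible capacitance for a bulky conventional detector

tau = R * C                      # RC charge/discharge time constant, in seconds
f_3db = 1 / (2 * math.pi * tau)  # approximate 3 dB bandwidth, in Hz

print(f"tau = {tau*1e12:.1f} ps, bandwidth ~ {f_3db/1e9:.0f} GHz")
```

Shrinking the capacitance by an order of magnitude raises the bandwidth by the same factor, which is why the tiny devices described below matter.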

Not only does charging and discharging the capacitor take time, it also costs energy. While an on-chip transistor might use about a femtojoule (10^-15 J) of energy per bit, an optical system might use a thousand times more. That is why optical interconnects usually only make sense between computers in a data center (or at larger scales). But when high performance overrules energy efficiency, optical interconnects can make sense at the scale of a motherboard. That is the absolute limit, though.
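To make that thousandfold gap concrete, here is a sketch using the round figures above (the 10 Gb/s link rate is an assumed example, not from the paper):

```python
# Energy-per-bit comparison using the round numbers quoted in the text.
E_transistor = 1e-15             # ~1 fJ per bit for an on-chip transistor
E_optical = 1000 * E_transistor  # "a thousand times more" -> ~1 pJ per bit

# Power drawn by a single link scales as (energy per bit) x (bit rate).
rate = 10e9                         # assumed 10 Gb/s link
P_transistor = E_transistor * rate  # watts
P_optical = E_optical * rate        # watts

print(f"on-chip: {P_transistor*1e6:.0f} uW, optical: {P_optical*1e3:.0f} mW per link")
```

Tens of milliwatts per link is tolerable between racks, but multiplied across the millions of connections inside a chip it becomes a non-starter.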

No more capacitors

This latest bit of research has successfully solved the capacitance problem and may bring the benefits of high speed optics to the chip level.

To obtain such remarkable performance, the project's researchers made use of photonic crystal technology. A photonic crystal in this case is basically a slab of silicon with a lot of holes drilled in it. Light that is trying to travel through the slab hits these holes and scatters. But the spacing and size of the holes means that no matter which direction a light wave goes, it will encounter a similar wave that is exactly out of phase with it. The result is no light at all. In other words, the hole-filled slab is a perfect mirror.

If a single line of holes is removed from the slab, light is guided down the track of missing holes. The researchers placed a tiny bit of light-absorbing material at the end of the waveguide. When light hits the absorbing material, it generates lots of electrons. This turns the waveguide into a high-speed photodiode.

The researchers managed to transmit data at 40Gb/s, which is about standard for a high-capacity multiwavelength link. What's noteworthy is that this was done with a single wavelength, and the speed was made possible by the tiny capacitance of the absorbing material.

Optical transistor

However, the researchers did something more clever than just transmitting data. They created a device with their fast photodiode. Next to that, they placed an active material completely surrounded by holes, creating a (poor) laser. When powered, the laser leaks light into a second waveguide track. On the other hand, when the active region of the laser has no electrons in it, it will act as an absorber and suck light out of the waveguide.

The active region was electrically connected to the photodiode (on chip, no wires required). As the photodiode absorbs light, it sends electrons to the active region, where they allow it to amplify any signal traveling down the second waveguide. When no light hits the photodiode, light is absorbed from the second waveguide instead. The combined photodiode and modulator act as an optical transistor.

The researchers showed that they could modulate a signal at 10Gb/s, which is about the standard rate for optical communications. They also showed that the combined capacitance of the photodiode and modulator was under 2fF, which is about an order of magnitude smaller than anything else so far. However, it looks like optimizing the electrical loading might allow the data rate to be increased considerably.

Even better, it's all highly energy efficient. The researchers show that their technology consumes less than 0.1fJ/bit, bringing it into contention for on-chip use.
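As a consistency check (a sketch, not the authors' calculation), the quoted capacitance and energy figures imply the scale of the voltage swing via the capacitor energy formula E = 1/2 * C * V^2:

```python
# Implied voltage swing from the reported figures (a back-of-envelope sketch).
import math

C = 2e-15    # 2 fF, reported upper bound for photodiode + modulator capacitance
E = 0.1e-15  # 0.1 fJ/bit, reported energy consumption

# E = 1/2 * C * V^2  =>  V = sqrt(2 * E / C)
V = math.sqrt(2 * E / C)
print(f"implied voltage swing ~ {V:.2f} V")
```

A swing of a few tenths of a volt is comfortably within the range of ordinary on-chip electronics, which is part of what makes the on-chip claim plausible.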

That doesn't mean you're going to get optical computers soon; the next step will be a hybrid. The researchers suggest that their optical transistor would be very useful in maintaining coherence between caches in multicore CPUs. I suspect that it might be useful for distributing clock signals as well.

This also means that chips are going to get larger. In the not-so-distant future, it will make sense to move certain functions to computational units based on these hybrid optical transistors. But not all of them—if a Core i7 processor (1.9 billion transistors) were to be implemented optically, the chip would have an area of 48 m^2. The balance between speed, power, and size will need to be carefully considered when combining optics and electronics.

Nature Photonics, 2019, DOI: 10.1038/s41566-019-0397-3 (About DOIs)