Ever since laptops and smartphones became a large part of our lives, power consumption has been an important factor in chip design. Gone are the days when chipmakers gloated about the raw power of their chips; instead, efficiency and smart ways to step chip performance up and down have come to the fore.

Separately, chipmakers are working on improvements in fundamental chip design. Every drop in gate switching voltage is a big step: since switching power scales with the square of the voltage, halving the voltage cuts power consumption by a factor of four. A seemingly simple way to get rid of the voltage entirely would be to replace electrons with light, although that has proven impossible to date. New research performed by Japanese scientists demonstrates that light-powered interconnects may not be too far away.
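To see where that factor of four comes from, here is a minimal sketch of the standard simplified model of CMOS switching power, which scales with capacitance, clock frequency, and the square of the supply voltage. The capacitance and frequency values below are purely illustrative, not taken from any real chip:

```python
# Simplified CMOS switching-power model: P = C * V^2 * f
# (activity factor omitted; all values below are illustrative)

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Dynamic switching power in watts."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

p_full = dynamic_power(1e-9, 1.0, 1e9)  # 1 nF switched at 1 GHz, 1.0 V
p_half = dynamic_power(1e-9, 0.5, 1e9)  # same circuit at half the voltage

print(p_full / p_half)  # 4.0: halving the voltage quarters the power
```

Because the voltage enters squared, voltage reductions pay off much faster than capacitance or frequency reductions, which is why chipmakers chase them so hard.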

Why would light be valuable? One factor determining how much power a chip consumes is how far a signal has to travel. The resistance of a wire between two gates is proportional to the wire's length. Once a wire gets long enough, it becomes more efficient to convert the electronic signal into an optical one and transmit photons instead of electrons. Once you're talking about communication between chips, it makes even more sense, since optical signals have a much higher bandwidth. Yet despite the appeal, your computer and cell phone are bereft of optical interconnects.
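As a back-of-the-envelope illustration of how resistance grows with distance, here is the textbook formula R = ρL/A applied to a copper trace. The cross-section dimensions are made up for illustration and don't correspond to any particular process node:

```python
# Wire resistance: R = rho * L / A. Longer wires dissipate more power,
# which is what eventually favors optical links. Numbers are illustrative.
RHO_COPPER = 1.68e-8  # resistivity of copper, ohm * m

def wire_resistance(length_m, width_m, height_m, rho=RHO_COPPER):
    """Resistance of a rectangular wire, in ohms."""
    return rho * length_m / (width_m * height_m)

# A 100 nm x 100 nm copper trace: resistance scales linearly with length
r_short = wire_resistance(10e-6, 100e-9, 100e-9)  # 10 micrometers long
r_long = wire_resistance(1e-3, 100e-9, 100e-9)    # 1 millimeter long
print(r_short, r_long)  # ~16.8 ohms vs. ~1680 ohms
```

A hundredfold longer wire means a hundredfold more resistance, while an optical link's loss per unit length is comparatively tiny.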

Silicon: The enemy of good lasers

Optical interconnects are absent because silicon is, fundamentally, a really crappy optical material. To give you an idea of how outrageously bad its behavior is, let me enumerate the many ways that I hate silicon.

First, silicon has an indirect band gap—the band gap is the amount of energy required to shovel an electron from a state where it is bound to a single silicon atom to a state where it is free to move through the silicon crystal. In other, better-behaved semiconductors, one can simply apply a voltage larger than the band gap voltage. This generates free electrons that lose energy by decaying across the band gap, emitting a photon as they go. Laser diodes are all based on this idea, and they are very efficient (30 percent is a typical number).

But silicon has an indirect band gap, which means that before a free electron can decay and emit a photon, it has to bounce off a silicon nucleus and excite a physical vibration in the crystal. The probability of that happening is so low that silicon laser diodes do not exist.

The second problem is something called two-photon absorption. Think of it like this: a photon hits a bit of silicon, and if it has enough energy, it lifts an electron across the band gap so that it is free to move around. In this case, one photon provides one free electron. But if there are enough photons around, two can be absorbed at the same time to free up one electron. It doesn't matter if each photon has insufficient energy as long as the two of them combined provide enough energy to cross the band gap.
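The arithmetic behind this is simple: a photon's energy is inversely proportional to its wavelength, and silicon's band gap is roughly 1.12 eV. Here is a sketch of the two-photon condition; the 1550 nm wavelength is just a convenient telecom-band example, not a number from the paper:

```python
# Photon energy from wavelength: E [eV] ~ 1239.84 / wavelength [nm]
SILICON_BAND_GAP_EV = 1.12  # approximate room-temperature value

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron-volts for a wavelength in nanometers."""
    return 1239.84 / wavelength_nm

e_single = photon_energy_ev(1550)  # a telecom-band photon, ~0.80 eV
print(e_single < SILICON_BAND_GAP_EV)      # True: one photon falls short
print(2 * e_single > SILICON_BAND_GAP_EV)  # True: two together clear the gap
```

So light that silicon should be transparent to becomes absorbable the moment photons arrive densely enough to be absorbed in pairs.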

That means you might fill your silicon crystal with light that, at low intensity, isn't absorbed because the photon energy is too low. As soon as the intensity ramps up, though, the silicon suddenly starts absorbing. Then the real problem kicks in.

Once two-photon absorption starts, there are all these unemployed electrons hanging around heckling passersby, shoplifting, and generally being a nuisance. In particular, they absorb a lot of light. Since they're free to move, they don't require that a photon have a particular energy to make them move—any energy will do. So lots of photons get spent giving the free electrons a short joy ride.

The short summary of this: even when a silicon laser switches on, it functions like I do before my first coffee hits—mostly whining and generally not achieving a lot.

The way to make silicon shine

Even if you got silicon to lase, it wouldn't be very good. But how do you get it to lase at all? The answer is Raman scattering, the process whereby an atom can absorb light of the wrong frequency and get away with it. Put simply, the silicon atoms in a crystal like to physically vibrate at a particular frequency. For light, this corresponds to a very long wavelength: a typical vibrational frequency in silicon corresponds to an optical wavelength of about 19 micrometers, far outside the visible range (400-700 nanometers).

Photons with a shorter wavelength can still set a silicon atom vibrating, but instead of being absorbed, they just give up a little of their energy. Put in blue light, and the silicon starts vibrating and emitting slightly redder light. This is Raman scattering. Like the normal process of light emission, Raman scattering can be stimulated by the presence of photons at the right frequency. Having both the pump light (the source of photons) and the Raman-generated light present in the silicon crystal makes Raman scattering more efficient. This can be arranged by creating an optical cavity in the silicon crystal so that both the pump and the generated light stay together for a long time, building up the intensity of the generated light.
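As a rough sketch, the Raman-shifted (Stokes) wavelength can be computed by subtracting silicon's vibrational frequency, roughly 15.6 THz, from the pump frequency. The pump wavelength below is illustrative rather than taken from the paper:

```python
# Stokes-shifted wavelength: the scattered photon's frequency drops by
# the crystal's vibrational (Raman) frequency, ~15.6 THz for silicon.
C = 2.998e8                  # speed of light, m/s
SI_RAMAN_SHIFT_HZ = 15.6e12  # silicon optical-phonon frequency, approx.

def stokes_wavelength_m(pump_wavelength_m):
    """Wavelength of Raman-scattered light for a given pump wavelength."""
    pump_freq_hz = C / pump_wavelength_m
    return C / (pump_freq_hz - SI_RAMAN_SHIFT_HZ)

# A pump near 1.43 micrometers comes out shifted to about 1.54 micrometers
print(stokes_wavelength_m(1.425e-6))
```

The shift is fixed by the crystal, so choosing the pump wavelength chooses the output wavelength, which is handy for targeting the telecom band.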

Until now, even when this has worked, it hasn't worked very well. The problem is that shortly after the laser starts going, two-photon absorption sets in, beginning a cascade of inefficiencies. But even in the absence of two-photon absorption, these lasers just weren't that good. They took a lot of energy to get started, and they were simply too big to be useful in integrated circuits.

Holy silicon, Batman: that's the answer

Many of these problems have been overcome in a paper recently published in Nature. The key to this work lies in the miniaturization of the laser. Instead of relying on mirrors or on a traditional waveguide to confine the light, the researchers used a photonic crystal: a regular array of holes in the silicon that confines the light very tightly within a small space. I won't discuss this in detail, but the researchers spent a lot of time and effort analyzing the shape of the light fields (known as cavity modes) in the confined space, and they used this information to choose which mode would be best excited by the pump and which mode the Raman laser would run on. This allowed them to design the waveguides and hole arrays so that those two modes were the ones they could excite and measure.

Once light makes it into the confined region, it tends to stay around for a long time, so the pump and Raman light intensities build up to very high levels within the confinement area. A Raman laser's efficiency and threshold depend on this build-up—by confining the light more tightly, the researchers managed to get the laser started with just 1 microwatt of power, which is a dramatic improvement over the milliwatt power levels required in previous work.

The downside is that silicon is still silicon. The high intensity of the chosen cavity modes means that two-photon absorption kicks in really early, with the laser showing signs of it at just 100 nanowatts of emitted power. In general, this is a bad thing, but most applications don't need a lot of power; we just need enough photons for low-error-rate data transmission. All told, it looks like a decent trade-off.

The laser is still fairly large (a few micrometers) and cannot be shrunk much further. Given the competition for silicon real estate, I suspect we are looking at something that could power interconnects between chips rather than within a chip. Several other steps remain, too: a Raman laser requires a pump laser to operate, and coupling the two efficiently is really challenging. We also need good modulators to encode data into the light stream. Integrated optical modulators exist, but they are rather large devices and require a similar reduction in size and power consumption. Finally, the light-detection side will likely need some work. Even so, I think we're not too far off from seeing light put to use in high-performance systems.

Nature, 2013, DOI: 10.1038/nature12237