For the longest time, imaging was probably the most boring subject imaginable. Unless you were excited about comparing various mass-produced, brand-name lenses, there wasn't much to talk about. That changed briefly with the invention of the laser, but the actual imaging technology was still... yeah, boring.

In the last decade or so, though, things really have changed, in part because of new ways of thinking about what an image actually is. Among the many fascinating variations on traditional imaging is something called ghost imaging. The idea behind ghost imaging is to use the quantum nature of light to image an object by detecting photons that never actually encountered it. This mind-blowing idea has now been developed to the point where it might actually be practical in some circumstances—especially now that you can acquire about 1,000 ghost images per second.

Am I seeing ghosts, or using ghosts to see?

The original idea behind ghost imaging made use of something called quantum entanglement. Imagine that I have a single photon that I slice into two photons. Because the Universe doesn't create or destroy things like energy, momentum, or angular momentum, the energy contained in the two photons has to sum to the value of the energy contained by the first photon.

The total energy, however, can be divided up any way we like. If this were classical physics, that would be the end of the story: two photons each with an energy, the total of which is a fixed value. In quantum mechanics, however, we cannot know which photon has which energy. The result is that both photons behave as if they have all possible energies at the same time. The same is true of momentum and angular momentum.

The two are also entangled, which means that if I measure the energy of one photon, then I get a single number, and the second photon then immediately takes on the appropriate energy. From that moment on, it behaves like a photon with a single energy. That is what makes quantum entanglement special.

We can use two photons with this sort of highly correlated property to make images. One photon goes directly to a camera, while the other bounces off the object and is registered by a photodetector. The experimenter then applies a simple rule: whenever the camera records a photon (remember, these are not the photons that struck the object) at the same moment the photodetector goes bing, that camera image is kept; all other camera images are thrown away. The saved images are summed to create a complete image of the object, all based on light that never went anywhere near the object.
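If that procedure sounds abstract, here's a toy simulation of the coincidence rule. The setup and all numbers are mine, not from any actual experiment: camera-arm photons land at random positions, and a camera event is kept only when the partner photon bounces off the object and triggers the detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy object: a binary reflectivity mask (a small bright rectangle).
H, W = 8, 8
obj = np.zeros((H, W))
obj[2:6, 3:5] = 1.0

image = np.zeros((H, W))
for _ in range(20000):
    # Camera-arm photon lands on a random pixel; its entangled partner
    # hits the correlated point on the object.
    y, x = rng.integers(0, H), rng.integers(0, W)
    # The detector goes "bing" only if that object point reflects.
    bing = rng.random() < obj[y, x]
    if bing:
        # Coincidence: keep this camera event; otherwise discard it.
        image[y, x] += 1

# The accumulated coincidences reproduce the object, using only
# camera photons that never touched it.
recovered = (image > 0).astype(float)
print(np.array_equal(recovered, obj))
```

The camera never sees the object directly; the object's shape emerges purely from which camera events survive the coincidence filter.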

You might think that this is a rather slow process, and you'd be right. Imagine that our entangled photon source emits about a million photon pairs per second (this would be an excellent entangled photon source). Of the photons sent to the object, about one percent bounce off (the rest are lost); of that one percent, maybe one in a thousand bounces off on a path that sends it to the photodetector. So we get about 10 usable camera images per second, each consisting of a single photon registering on a single pixel of the camera's sensor. If the camera has one million pixels, then we expect to need about 30 hours to obtain enough data to combine into a single image.
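The arithmetic behind those numbers, for anyone who wants to check it (rates taken from the article's hypothetical source):

```python
# Back-of-envelope for the single-photon scheme.
photons_per_s = 1_000_000      # entangled-pair source rate
reflectivity = 0.01            # ~1% of photons bounce off the object
toward_detector = 1 / 1000     # fraction of reflected photons reaching the detector

coincidences_per_s = photons_per_s * reflectivity * toward_detector
print(coincidences_per_s)      # 10 usable camera frames per second

pixels = 1_000_000             # one detected photon lights one pixel
hours = pixels / coincidences_per_s / 3600
print(round(hours, 1))         # ~27.8 hours, i.e. "about 30 hours"
```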

That kind of sucks.

What's in a name?

Later, researchers realized that you didn't really need to do this sort of imaging with single photons. The next idea is a little abstract, but it is central to the work. Photons always come in something called a mode. In this case, a mode just describes the spatial shape of the light—where the bright and dark patches are. Any image can be described as a sum of modes.

What does this mean? Instead of sending out pairs of photons, you can use an intense light source. That light should be in a single spatial mode, which is split so that it travels down two paths. In one path, the mode is detected directly by a photodetector. In the second path, the light bounces off the object, and another photodetector registers how bright the reflected mode is—a measurement that requires only a single pixel.

A computer can then take the two signals and use them to determine how big a contribution that mode makes to the image. To create an image, you simply cycle through as many modes as you desire and sum their contributions up. Now, frankly, I don't think this is really ghost imaging, because you already know the mode (since you control the light source), so you don't need the detector that measures it.
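To make the mode-summing concrete, here is a sketch of the standard correlation estimator used in this kind of single-pixel imaging. The toy object, the random binary patterns, and all parameters are mine; the paper's exact processing may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy object: a reflectivity map with a bright rectangle.
H, W = 16, 16
obj = np.zeros((H, W))
obj[4:12, 6:10] = 1.0

# Each "mode" is a known random binary illumination pattern. The
# single-pixel (bucket) detector only records the total reflected light
# under each pattern.
n_modes = 4096
patterns = rng.integers(0, 2, size=(n_modes, H, W)).astype(float)
signals = np.einsum("nij,ij->n", patterns, obj)   # bucket readings

# Weight each known pattern by its mean-subtracted bucket signal and sum:
# modes that overlap the object contribute more, and the object emerges.
image = np.einsum("n,nij->ij", signals - signals.mean(), patterns) / n_modes

# Object pixels should come out brighter than the background.
print(image[obj > 0].mean() > image[obj == 0].mean())
```

The key point is that the spatial information comes entirely from knowing which pattern was projected, not from the detector, which never resolves any spatial detail at all.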

That's why the researchers removed that detector entirely and call the technique computational ghost imaging. They take their knowledge of the mode sent by the light source and use the intensity measured by the single-pixel photodetector to determine how much that mode contributes to the image.

I still don't think you can call this ghost imaging, regardless of how many adjectives you tack on. The image is created directly from the photons that have bounced off the object, plus a computation based on the spatial mode of the light incident on the sample. Regardless of what you call it, though, it's pretty cool.

Bright flashy lights

The advantage of using modes is that each mode can be very bright. That means there is no need to wait long periods of time as each photon bounces off the object. However, you still have to cycle through lots of modes individually to build up the image. While this slows things down, it's still a huge improvement, providing speeds reaching about 10 frames per second (fps).

The slowdown comes from the need to create each mode separately, which is usually done with the digital micromirror device (DMD) found in a projector. The projector mirror can create about 22,000 modes per second, while a 1,024-pixel image requires about 2,048 modes to ensure accuracy—which works out to roughly 10fps.

To get to 1,000fps, the researchers abandoned the mirror from a projector system and decided to just use an array of 1,024 lights (LEDs). Each LED could be switched in a few nanoseconds, giving a potentially much higher frame rate. The grid of lights was controlled using a customized controller that could produce 500,000 modes per second, which gives the researchers a basic frame rate of 250fps.
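Using the article's numbers, that basic frame rate falls straight out of the arithmetic:

```python
# Frame-rate arithmetic from the article's figures.
modes_per_s = 500_000           # LED-array pattern rate
modes_per_frame = 2_048         # patterns needed for a reliable 1,024-pixel image

print(round(modes_per_s / modes_per_frame))   # ~244, close to the quoted 250fps
```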

But once you know a bit about the object you are imaging, you can figure out which modes are important and which are not. The researchers implement this using an evolutionary algorithm that takes the modes that were most dominant in the previous image and adds in a random sampling of other modes to quickly converge to an image. This allowed them to reduce the number of modes for a 1,024 pixel image from 2,048 to 512, increasing the frame rate to an impressive 1,000fps.
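Here's a loose sketch of that selection step—my simplification, not the researchers' actual evolutionary algorithm (the function name, split between kept and random modes, and all parameters are mine): keep the modes that contributed most to the previous frame and mix in randomly chosen ones to catch changes in the scene.

```python
import numpy as np

rng = np.random.default_rng(2)

def select_modes(prev_weights, n_keep=384, n_random=128):
    """prev_weights[i] = |contribution| of mode i to the previous frame."""
    n_total = len(prev_weights)
    # Exploit: the strongest modes from the last frame.
    dominant = np.argsort(prev_weights)[-n_keep:]
    # Explore: a random sample of the remaining modes.
    rest = np.setdiff1d(np.arange(n_total), dominant)
    explore = rng.choice(rest, size=n_random, replace=False)
    return np.concatenate([dominant, explore])   # 512 modes in total

weights = rng.random(2048)        # pretend contributions from the previous frame
chosen = select_modes(weights)
print(len(chosen))                # 512 modes instead of 2,048
print(round(500_000 / len(chosen)))   # ~977 frames/s, close to the quoted 1,000fps
```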

For static images, of course, this isn't very impressive. So the researchers also imaged moving scenes. There, the 1,000fps camera outperformed the slower frame rate settings quite significantly (as expected).

The researchers also did a pretty bogus comparison with a normal camera. It's a poor comparison because the normal camera was not capable of operating at 1,000fps, and at its normal frame rate (50fps), it could not operate at a shutter speed equivalent to 1,000fps. So, of course the images it obtained are well and truly blurred.

But that doesn't detract from the overall results. Yes, there are cameras out there that have faster frame rates and cameras with higher resolution. This sort of imaging system, however, could reach higher frame rates. And it's particularly suitable for certain types of microscopy that currently have quite slow frame rates and would benefit from this sort of technique. So, yeah, this is the sort of imaging system that will have its place in the pantheon of cameras—even though it's not ghost imaging any more.

Optics Express, 2018, DOI: 10.1364/OE.26.002427