An imaging technique has been developed that can record non-repetitive ultrafast phenomena without strobe or flash illumination. The approach could find applications in biomedicine and security technologies. See Letter p.74

In this issue, Gao and colleagues1 (page 74) describe a new imaging technology that can capture time-evolving events at up to 100 billion frames per second without the need for specialized illumination. The technique combines a unique hardware design with spatial encoding of the image to achieve something previously impossible: imaging ultrafast phenomena that need happen only once, from a single camera snapshot. By contrast, today's ultrafast imaging techniques rely either on imaging the same subject over and over again, which requires the event to be repeatable, or on strobe or flash illumination.

Imaging fast phenomena has been a research field in itself ever since the golden age of film photography in the nineteenth century, which used controllable mechanical shutters to limit the film's exposure time. High-speed photography using moving prisms or mirrors was developed in the twentieth century, and has been used to take spectacular photographs of bullets passing through apples and water balloons, to capture suspended water droplets, explosion clouds and sonic booms, or simply to record sporting events. The development of digital electronic sensors in the semiconductor era of the 1970s and '80s completely changed high-speed photography, allowing fast events to be imaged electronically (Fig. 1). Since then, fundamental changes to the design of high-speed exposure methods have allowed ever faster imaging, to the point that ultrafast imaging can now observe the movement of light on the millimetre distance scale.

Figure 1: Fast cameras over time. Imaging at slow acquisition speeds has its origins in the nineteenth century, with fast imaging using rotating prisms, mirrors or strobe cameras arriving in the twentieth century. Modern digital cameras based on charge-coupled devices (CCDs) and complementary metal-oxide-semiconductor (CMOS) sensors were developed in the 1980s. These sensors then enabled electronic technologies for ultrafast imaging, such as gated streak cameras and in situ storage CCD (IS-CCD) devices. Gao et al.1 have developed a technique called compressed ultrafast photography (CUP), which outperforms previous ultrafast imaging technologies without the need for active illumination.

Mid-twentieth-century approaches typically captured phenomena on the millisecond to microsecond timescale. They required highly efficient sensors (such as charge-coupled devices and complementary metal-oxide-semiconductor sensors), ample lighting or strobe-flash photography. The scientific investigation of fast phenomena has benefited enormously from the development of such high-speed cameras, and the technology has found applications in areas as diverse as commerce, health care and defence. The key technological leap to imaging at timescales below one microsecond came with the invention of advanced rotating-prism and rotating-mirror cameras.

However, imaging ultrafast phenomena requires something more, and its applications can be even more compelling than those of fast imaging. The main breakthrough capitalized on a proven idea: by moving the sensor laterally with respect to the direction of imaging, a temporal signal can be captured spatially, translating variations in time into variations in position on the sensor. This conversion from time to space could be done with a moving mirror, a film sequence or a moving sensor. The major advance, however, came through converting the temporal signal into electrons at a photocathode inside a vacuum tube, amplifying the converted signal and sweeping it laterally using electronics. This architecture became known as a streak camera, for its ability to turn a time-varying signal into a horizontal 'streak of light' on an electronic image sensor. For decades, this technique has been commercially available for ultra-high-speed imaging, with the images read out by a standard electronic camera.
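
The time-to-space conversion at the heart of the streak camera is easy to simulate. The following is a minimal numerical sketch of the principle only; the scene, sweep rate and sensor size are invented for illustration and do not describe any real device:

```python
import numpy as np

# All numbers below are illustrative assumptions, not the
# specifications of a real streak camera.
n_x = 64          # pixels along the one-dimensional entrance slit
n_t = 64          # time steps within a single sweep

# A 1D scene whose intensity changes in time: a bright spot that
# moves along the slit, one pixel per time step.
def scene(t):
    line = np.zeros(n_x)
    line[t % n_x] = 1.0
    return line

# The sensor integrates everything that lands on it during the sweep.
# Idealized deflection electronics shift the slit image down by one
# sensor row per time step, so each instant is stored at its own row.
sensor = np.zeros((n_t, n_x))
for t in range(n_t):
    sensor[t, :] += scene(t)

# 'sensor' now displays horizontal position versus time: the moving
# spot reads out as a diagonal streak, with row index encoding time.
print(np.argmax(sensor, axis=1)[:5])   # -> [0 1 2 3 4]
```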

Since the 1980s, electronic-amplification vacuum tubes known as gated microchannel-plate photomultipliers have been the core technology in streak cameras, allowing imaging with picosecond (10⁻¹² s) temporal resolution and sub-nanosecond shutter times. However, their design typically limits them to a single, one-dimensional data set, with the second dimension of the sensor used to spread the data temporally before amplification in the photomultiplier. With this design, the output image displays horizontal position versus time, but image-acquisition rates are below 1 billion frames per second.

Techniques that reach billions of frames per second typically require a radically new design. One such advance, which emerged2 in 2009, involved encoding each pixel of an image in a different spectral wavelength of light, and then transforming this spectrum into a time sequence of data. This allowed the image pixels to be serially amplified by a fibre laser and read out, pixel by pixel, by a single detector. The imaging infrastructure does not look like a camera at all, but the image could be created pixel by pixel at a frame rate of around 6 million frames per second, with an effective light-exposure time of less than 0.5 ns. The major benefit of this approach was that amplifying the signal by a factor of more than 300 provided superior sensitivity, so the technique could be applied to low-intensity phenomena. However, the infrastructure required to create the images was substantial.
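
In spirit, that pixel-serial readout maps spatial position to wavelength, and wavelength to arrival time at a single detector. Below is a minimal sketch of just the serialization step; the pixel values, delays and sampling rate are invented for illustration and are not the optical system of reference 2:

```python
import numpy as np

# One image row, to be read out pixel by pixel (values assumed).
row = np.array([0.2, 0.9, 0.1, 0.7, 0.4])
n = row.size

dt = 0.5e-9                      # arrival spacing at the detector (assumed)
fs = 40e9                        # digitizer sampling rate (assumed)
samples_per_pixel = int(dt * fs) # 20 samples per pixel slot

# Build the single-detector time trace: each pixel's (amplified)
# brightness occupies its own time slot, because its wavelength was
# delayed by a different amount in dispersive fibre.
gain = 300.0                     # amplification factor quoted in the text
trace = np.repeat(gain * row, samples_per_pixel)

# Reconstruction: average each time slot and undo the gain.
recovered = trace.reshape(n, samples_per_pixel).mean(axis=1) / gain
print(np.allclose(recovered, row))   # -> True
```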

By comparison, Gao and colleagues' work shows that a more conventional camera set-up allows imaging at the extremely high rate of 100 billion frames per second. The authors' method, called compressed ultrafast photography, uses the imaging optics and image geometry of streak cameras, but takes advantage of the ability of compressed-sensing tools to recover images from sparse spatial data. The information in the image is encoded laterally, and pseudo-randomly, across the field of view of the streak camera; this encoded data set allows the system to read out images with full-frame capability at extremely high speed.
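
Compressed sensing is the mathematical engine here: a signal that is sparse in some basis can be recovered from far fewer pseudo-random measurements than it has unknowns. The sketch below shows the kind of sparse recovery involved, using a random sensing matrix and a simple iterative soft-thresholding solver; the sizes, operators and solver are illustrative assumptions, not the reconstruction pipeline used by Gao et al.:

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse 'scene': 400 unknowns, only 15 of them non-zero.
# All sizes here are illustrative assumptions.
n, k, m = 400, 15, 150
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

# Pseudo-random encoding: each of the m measurements mixes all n
# unknowns -- a stand-in for the physical mask-plus-shear operator
# of a real compressed-sensing camera.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# Iterative soft-thresholding (ISTA), a basic sparsity-promoting solver.
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
lam = 0.005                            # sparsity weight (assumed)
x = np.zeros(n)
for _ in range(1000):
    z = x - A.T @ (A @ x - y) / L      # gradient step on the data fit
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```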

So, what could be done with an ultrafast camera running at 10¹¹ frames per second? Applications could include visualizing optical communications, optically active light–matter interactions and quantum-mechanical phenomena. For example, it might become possible to better investigate approaches to optical cloaking3, in which light is bent or deformed around an object instead of passing through it. This field of study, popularized by Star Trek, is real, and although many advances are being made in fundamental cloaking designs, development is hampered by the inability to see the interactions between light and the object being cloaked. Similarly, phenomena in which light focuses or defocuses as it passes through a material, and effects in which light oscillates between thin layers, could all be imaged for the first time. It is also likely that new ways to image high-speed signals will spur innovations in industrial processing for materials research, biomedicine and security technologies.

The progression of fast-imaging technologies has been steady ever since the invention of film, but key transformative design innovations such as the one reported by Gao and colleagues are still few and far between. These advances will be essential for seeing the physical behaviour of light and exploiting it.

References

1. Gao, L., Liang, J., Li, C. & Wang, L. V. Nature 516, 74–77 (2014).
2. Goda, K., Tsia, K. K. & Jalali, B. Nature 458, 1145–1149 (2009).
3. Cai, W., Chettiar, U. K., Kildishev, A. V. & Shalaev, V. M. Nature Photon. 1, 224–227 (2007).

Author information: Brian W. Pogue is in the Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755, USA.