Ultrafast video recording of spatiotemporal light distribution in a scattering medium has a significant impact in biomedicine. Although many simulation tools have been implemented to model light propagation in scattering media, existing experimental instruments still lack sufficient imaging speed to record transient light-scattering events in real time. We report single-shot ultrafast video recording of a light-induced photonic Mach cone propagating in an engineered scattering plate assembly. This dynamic light-scattering event was captured in a single camera exposure by lossless-encoding compressed ultrafast photography at 100 billion frames per second. Our experimental results are in excellent agreement with theoretical predictions by time-resolved Monte Carlo simulation. This technology holds great promise for next-generation biomedical imaging instrumentation.

Here, we present single-shot, real-time video recording of spatiotemporal light patterns in a scattering medium to overcome the limitations of existing scanning-based ultrafast imaging approaches. Of particular interest is the long-sought-after transient phenomenon—photonic Mach cones (50, 51). Although their propagation has been previously observed using pump-probe methods (52, 53), a single-shot, real-time observation of traveling photonic Mach cones has not yet been achieved. To tackle this challenge, we generated a photonic Mach cone by scattering a picosecond laser pulse that travels superluminally relative to the surrounding medium. We modeled the evolution of this transient scattering phenomenon using a time-resolved Monte Carlo simulation. A newly developed lossless-encoding compressed ultrafast photography (LLE-CUP) system, whose more efficient hardware design and reconstruction paradigm surpass the performance of previous CUP systems (54–56), macroscopically imaged light-scattering dynamics at 100 billion frames per second with a single camera exposure. The recorded instantaneous scattering pattern—the photonic Mach cone—is in excellent agreement with theoretical predictions.

On the other hand, experimental visualization of light propagation in scattering media in real time (defined as the actual time during which an event occurs) has been a long-standing challenge (25). Freezing light’s motion in a tabletop scene requires a picosecond-level exposure time per frame (that is, 1 billion frames per second) (26). Despite continuous improvements in state-of-the-art electronic sensors, current complementary metal-oxide semiconductor and charge-coupled device (CCD) technologies are incapable of reaching this speed (27) because they are fundamentally impeded by their on-chip storage capacity and electronic readout speeds (28). Nevertheless, various optical gating mechanisms, such as ultrashort pulse interference (29) and the Kerr electro-optic effect (30), were able to achieve picosecond exposure times. However, each gated measurement could capture only one image with two spatial dimensions at a specific time point, and temporal scanning (that is, repeated measurements with a varied delay between the pump and probe) is required to resolve details of the scattering event in its full duration (31, 32). Light-scattering events can also be measured by a streak camera—an ultrafast imager that converts light’s temporal profiles to spatial profiles by pulling photoelectrons with a sweep voltage along the axis perpendicular to both the device’s entrance slit and the optical axis (33–35). Capable of recording a time course with picosecond temporal resolution, the streak camera removes the need for optical gating–aided temporal scanning in ultrafast measurements. The pioneering studies by Alfano introduced the streak camera for light-scattering measurements (36, 37). Later seminal studies in this direction have led to many breakthroughs, including the observation of ballistic and diffusive components in the scattered light (38–42), the imaging of hidden objects behind scattering walls or around corners (43–45), and the measurement of the fluorescence lifetime of dye molecules in turbid media (46–49). However, the conventional operation of the streak camera sacrifices the imaging dimension—the narrow entrance slit (10 to 50 μm wide) confines the imaging field of view to a line. To achieve two-dimensional (2D) ultrafast imaging, this mode requires scanning the orthogonal spatial dimension and synthesizing the movie from a large number of measurements (45). In general, existing multiple-shot ultrafast imaging technologies based on temporal or spatial scanning do not have real-time imaging capability. They require the scattering events to be precisely repeatable, which is inherently challenging for events in dynamic scattering media, such as soft biological tissues and flowing blood.

Light-scattering dynamics have been extensively investigated from both theoretical and experimental perspectives. Among many simulation paradigms (15–17), the Monte Carlo method offers a rigorous and flexible approach (18) and is often regarded as the gold standard for modeling light transport in a scattering medium (19). The Monte Carlo simulation is equivalent to modeling photon transport analytically by solving the radiative transfer equation (20). As a statistical approach, a typical Monte Carlo simulation provides an ensemble-averaged result of light propagation [that is, it ignores coherent effects (21)] and requires launching a large number of photons to ensure the desired accuracy (22). The Monte Carlo method is capable of simulating light propagation sequences with a short (for example, subnanosecond) time interval. This time-resolved Monte Carlo simulation has been widely used to model time-dependent light distribution, dynamic optical properties, and frequency-domain light transport in scattering media (23, 24).
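To illustrate the time-resolved Monte Carlo idea, the following minimal sketch (not the code used in this work; the optical properties and detection geometry are arbitrary illustrative values) random-walks photon packets from a point source through an isotropically scattering medium and histograms their time of flight through a detection sphere:

```python
import math
import random

C0 = 3e8  # vacuum speed of light, m/s

def time_resolved_mc(n_photons=20000, mu_s=100.0, mu_a=1.0, n=1.33,
                     radius=5e-3, n_bins=40, t_max=200e-12, seed=1):
    """Time-resolved Monte Carlo sketch: photon packets random-walk from a
    point source with isotropic scattering; we histogram the weighted time
    of first passage through a detection sphere of the given radius.
    mu_s, mu_a are in 1/m; radius is in m; returns a list of bin weights."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t           # survival probability per interaction
    v = C0 / n                     # speed of light in the medium
    hist = [0.0] * n_bins
    for _ in range(n_photons):
        x = y = z = 0.0
        path = 0.0
        w = 1.0
        while True:
            # sample an exponentially distributed free path and an
            # isotropic propagation direction
            s = -math.log(1.0 - rng.random()) / mu_t
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2.0 * math.pi * rng.random()
            x += s * sin_t * math.cos(phi)
            y += s * sin_t * math.sin(phi)
            z += s * cos_t
            path += s
            w *= albedo            # absorption lowers the packet weight
            if x * x + y * y + z * z >= radius * radius:
                t = path / v       # time of flight from total path length
                if t < t_max:
                    hist[int(t / t_max * n_bins)] += w
                break
            if w < 1e-4:           # terminate very weak packets
                break
    return hist
```

Because a packet's path length can never be shorter than its straight-line distance from the source, no weight lands before the ballistic arrival time radius·n/C0, reproducing the ballistic/diffusive separation that time-resolved measurements exploit.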

Imaging scattering dynamics can broadly aid biomedicine. For example, time-dependent acoustic speckles convey useful physiological information, such as blood flow velocity (1) and tissue elasticity (2), and time-reversal scattering theory has contributed to the development of microwave imaging instruments (3, 4). Studies in light scattering have also been increasingly featured in recent progress in biomedicine (5). First, light scattering has been leveraged to develop novel biomedical optical instruments (6–8). For instance, techniques to spatiotemporally invert light scattering have provided the means to focus light into deep tissue for high-resolution imaging and control (9, 10). In addition, analysis of temporal fluctuation in the scattered light signal reveals many optical properties of biological tissues (5, 11). This characterization has enabled a diverse range of applications, such as assessments of food and pharmaceutical products (12) and studies of protein aggregation diseases (13, 14).

RESULTS

Modeling light-scattering dynamics in a thin scattering plate assembly

We assembled materials of different refractive indices and scattering coefficients (Fig. 1). Specifically, a “source tunnel” with a refractive index of n_s scatters a collimated laser beam into two “display panels” with a refractive index of n_d. A short laser pulse propagates in the source tunnel. The elastic scattering events in the source tunnel emit secondary wavelets of the same wavelength as the incident laser pulse. These wavelets form a wavefront in the display panels by superposition. When n_s < n_d, light propagates faster in the source tunnel than in the display panels. Under this circumstance, the scattering events generate secondary sources of light that advance superluminally relative to the light propagating in the display panels. At a certain time point, the instantaneous scattered light distribution has a Mach cone structure. The cone boundary is delineated by the common tangents of the secondary wavelets, where the wavelets overlap most to produce the greatest intensity. The semivertex angle of the photonic Mach cone, which is denoted by θ in Fig. 1, is determined by

θ = sin⁻¹(n_s/n_d) (1)

Fig. 1. Schematic of the thin scattering plate assembly. The instantaneous light-scattering pattern represents a photonic Mach cone when n_s < n_d. θ, semivertex angle of the photonic Mach cone; DP, display panel; ST, source tunnel; n_d, refractive index of the display panel medium; n_s, refractive index of the source tunnel medium.

Because the scatterers in the source tunnel are randomly distributed within the cylindrical volume illuminated by the laser beam, whose diameter is much greater than the optical wavelength, the scattered light forms a laser speckle pattern in the display panels (57), with speckle grains a few micrometers in size.
For macroscopic observation, because each effective pixel of the detector at the object plane usually has a size on the order of millimeters, the observed photonic Mach cone is an intensity pattern averaged over many speckle grains. The net effect is equivalent to averaging over many speckle realizations, as if the sources were spatially incoherent (57). Our theory is therefore based on ensemble-averaged addition of wavelet intensities, owing to this spatial averaging effect. To obtain the analytical formula describing the intensity distribution of the cone, we first derive the impulse response from a spatiotemporal Dirac delta excitation traveling at a superluminal speed c_s in the +x direction (fig. S1 and Eq. 2). Here, c_d denotes the speed of light in the display panels (c_d < c_s), t denotes time, r denotes position in a Cartesian coordinate system, and q = c_s t − x; the remaining auxiliary variables are defined in section S1. For a spatiotemporally arbitrarily shaped pulse, the spatiotemporal intensity distribution of the resultant cone can be found by a three-dimensional (3D) convolution (Eq. 3), where U(r) denotes the 3D snapshot intensity distribution of the excitation pulse and “⊗” represents convolution in 3D (detailed in section S1).

Extending the concept of the Mach number from fluid dynamics, we define the photonic Mach number as

M_p = c_s/c_d (4)

As an example, the light intensity distribution corresponding to a superluminal impulse excitation at M_p = 1.4 was calculated according to Eq. 2. The central x-y cross section of the cone is shown in fig. S2A. The cone edge is defined by setting Y = 0, where the intensity approaches infinity (58). For a spatiotemporal Gaussian pulse excitation, the intensity distribution of the photonic Mach cone computed by Eq. 3 is shown in fig. S2B. We also numerically evaluated the photonic Mach cone using the time-resolved Monte Carlo method. Both superluminal (M_p = 1.4) and subluminal (M_p = 0.8) light propagation were simulated (detailed in section S1).
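The cone geometry is easy to check numerically. The sketch below (with illustrative refractive indices, not the experimental values) evaluates the photonic Mach number M_p = c_s/c_d = n_d/n_s of Eq. 4 and, in the superluminal case, the semivertex angle θ = sin⁻¹(c_d/c_s) = sin⁻¹(n_s/n_d) of Eq. 1:

```python
import math

def photonic_mach(n_s, n_d):
    """Photonic Mach number M_p = c_s / c_d = n_d / n_s, and the cone
    semivertex angle theta = arcsin(n_s / n_d) in degrees when the
    propagation is superluminal (M_p > 1); theta is None otherwise."""
    m_p = n_d / n_s
    theta = math.degrees(math.asin(n_s / n_d)) if m_p > 1 else None
    return m_p, theta
```

For example, n_s = 1.0 and n_d = 1.4 give M_p = 1.4 and a semivertex angle of about 45.6°, whereas swapping the indices gives M_p ≈ 0.71 and no cone, matching the superluminal and subluminal cases discussed above.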
Briefly, an infinitely narrow source beam propagated through a thin scattering sheet with a speed of c_s along the +x direction. During the propagation, 10^5 scattering events were randomly triggered with a uniform probability distribution. Each scattering event emitted an outgoing secondary wavelet, which contributed to the total light intensity distribution. Then, the resultant light intensity distribution was convolved with a normalized spatiotemporal Gaussian function representing the finite extent of the laser pulse. Figure 2 shows contour plots of the scattered light intensity distributions on the sheet. Under superluminal conditions (Fig. 2A), the contours depict a nearly triangular region dragged behind the excitation pulse, representing a photonic Mach cone. However, under subluminal conditions (Fig. 2B), no such cone is formed: the expanding secondary wavelets always bound the excitation pulse, preventing the formation of a photonic Mach cone.

Fig. 2. Time-resolved Monte Carlo simulations of instantaneous scattered light intensity distributions on a thin scattering sheet under superluminal and subluminal conditions. For both cases, the excitation light pulses are spatiotemporally Gaussian and propagate along the +x direction. (A) Contour plot of the light intensity distribution when a laser beam propagates superluminally in the medium with a photonic Mach number of 1.4. (B) Same as (A), but showing a laser beam propagating subluminally in the medium with a photonic Mach number of 0.8. The temporal processes of both transient events (A and B) are shown in movies S1 and S2.
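The Huygens-style picture behind this simulation can be sketched in a few lines: secondary wavelets are emitted along the beam path and expand at c_d, while the excitation front advances at c_s, and a cone can form only when the front outruns every wavelet. The code below is a schematic of that criterion only (units, speeds, and event spacing are arbitrary illustrative choices, not the 10^5-event simulation itself):

```python
def wavelet_radii(c_s, c_d, t, n_events=5):
    """Positions and radii of secondary wavelets emitted uniformly along
    the beam path up to time t: an event at x was triggered at x / c_s
    and its wavelet has since expanded to radius c_d * (t - x / c_s)."""
    xs = [i * c_s * t / n_events for i in range(n_events)]
    return [(x, c_d * (t - x / c_s)) for x in xs]

def pulse_outruns_wavelets(c_s, c_d, t):
    """True when the excitation front (x = c_s * t) lies ahead of every
    wavelet front, i.e. a photonic Mach cone can form (superluminal case);
    False when the wavelets bound the pulse (subluminal case)."""
    front = c_s * t
    return all(x + r < front for x, r in wavelet_radii(c_s, c_d, t))
```

With c_s/c_d = 1.4 the front stays ahead of every wavelet, so the common tangents of the trailing wavelets define a cone; with c_s/c_d = 0.8 the first wavelet already encloses the pulse, as in Fig. 2B.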

Implementing LLE-CUP

We developed LLE-CUP to capture 2D light-speed scattering dynamic scenes in real time with a single exposure. As a computational imaging approach, LLE-CUP operates in two steps: data acquisition and image reconstruction (both further described in Materials and Methods). In data acquisition, LLE-CUP acquires three different views of the dynamic scene (detailed in section S2 and figs. S3 and S4). One view, akin to a view in traditional photography, records a direct image of the scene temporally integrated over the exposure time. In contrast, the other two views record the temporal information of the dynamic scene by using a compressed sensing paradigm (54, 55, 59). The image reconstruction in LLE-CUP recovers the dynamic scene from the three-view data by exploiting the spatiotemporal sparsity of the event, which holds in most, if not all, experimental conditions. A compressed sensing reconstruction algorithm, developed from the two-step iterative shrinkage/thresholding (TwIST) algorithm (60), is currently used (detailed in section S3). The LLE-CUP system is shown schematically in Fig. 3 (with an animated illustration in movie S3 and further description in Materials and Methods). The dynamic scene is first imaged by a camera lens. A beam splitter equally divides the incident light into two components. The reflected component is imaged by an external CCD camera to form the time-unsheared view. The transmitted component passes through a 4f imaging system, consisting of a tube lens, a mirror, and a stereoscope objective, to a digital micromirror device (DMD). To spatially encode the scene, a pseudorandom binary pattern is displayed on the DMD. Each encoding pixel is turned to either +12° (on) or −12° (off) from the DMD’s surface normal and reflects the incident light in one of the two directions. Both reflected light beams, masked with complementary patterns, are collected by the same stereoscope objective.
The collected beams are sent through tube lenses, folded by a planar mirror, and again folded by a right-angle prism mirror (see the upper right inset in Fig. 3) to form two images in separate horizontal areas on the entrance port of a streak camera. Unconventionally, this entrance port is fully opened (~5 mm width) to capture 2D spatial information. Inside the streak camera, a sweep voltage shears the encoded light distribution along the y′ axis according to the time of arrival. Therefore, these temporally sheared frames land at different spatial positions along the y′ axis and are temporally integrated, pixel by pixel, by an internal CCD camera in the streak camera, forming two time-sheared views.

Fig. 3. Schematic of LLE-CUP. Lower left inset: Illustration of complementary spatial encoding for two time-sheared views. The on pixels are depicted in red for View 1 and depicted in crimson for View 2. The off pixels are depicted in black for both views. The combined mask shows that the two spatial encodings are complementary. Upper right inset: Close-up of the configuration before the streak camera’s entrance port (dashed black box). Light beams in both views are folded by a planar mirror and a right-angle prism mirror before entering the fully opened entrance port of the streak camera.

LLE-CUP’s unique paradigm of data acquisition and image reconstruction brings several prominent advantages. First, facilitated by the streak camera, the LLE-CUP system can image a nonrepetitive dynamic scene at 100 billion frames per second with a single-snapshot measurement, circumventing the necessity of repetitive measurements by the pump-probe technique (26, 45, 52, 53). Second, LLE-CUP does not need the specialized active illumination required by other single-shot ultrafast imagers (61–63), enabling passive imaging of dynamic light-scattering scenes.
Third, compared with other streak camera–based single-shot ultrafast imaging methods (64, 65), the LLE-CUP system has a light throughput of nominally 100% (excluding losses from imperfect optical elements). In previously reported CUP systems (54, 55), only the on pixels of the DMD were used in the spatial encoding operation. As a result, information that landed on the off pixels of the DMD was lost, compromising reconstruction quality. In addition, the time-integrated CCD image was simply overlaid with the reconstructed datacube as a postprocessing step (55), without adding new information to assist in image reconstruction. In contrast, LLE-CUP harvests light reflected from both on and off pixels of the DMD to form two complementary time-sheared views. This design prevents any loss of information from spatial encoding, which is advantageous for compressed sensing–based reconstruction. In addition, the time-unsheared view recorded by the external CCD camera enriches the observation by adding another perspective, which is used with the two time-sheared views in the new reconstruction paradigm to yield a much improved image quality (as further explained in Materials and Methods and illustrated in fig. S5). Thus, the dual complementary masking, the triple-view recording of the scene, and the three-view joint reconstruction are three major enhancements of LLE-CUP over the previous CUP systems.
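The three-view acquisition described above can be summarized by a simple forward model. The sketch below is schematic (hypothetical array sizes and a one-row-per-frame shear, not the instrument's calibrated operators): each frame of a (T, H, W) scene is encoded by a pseudorandom binary mask and its complement, sheared by its frame index, and temporally integrated, alongside a plain time-integrated view:

```python
import numpy as np

def cup_forward(scene, mask):
    """Three-view LLE-CUP measurement sketch for a (T, H, W) scene.
    view0: time-unsheared view, a plain temporal integral (external CCD);
    view1/view2: frames masked with a binary pattern / its complement
    (on / off DMD pixels), shifted down by one row per frame (temporal
    shearing by the sweep voltage), then integrated on the streak
    camera's internal CCD."""
    t, h, w = scene.shape
    view0 = scene.sum(axis=0)
    view1 = np.zeros((h + t - 1, w))
    view2 = np.zeros((h + t - 1, w))
    for k in range(t):
        view1[k:k + h] += scene[k] * mask          # "on" DMD pixels
        view2[k:k + h] += scene[k] * (1.0 - mask)  # "off" DMD pixels
    return view0, view1, view2
```

Because mask + (1 − mask) = 1 everywhere, the two sheared views together retain all of the encoded light — the lossless-encoding property — while the unsheared view supplies the third perspective used in the joint reconstruction.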