Deep Fusion takes an underexposed photo for sharpness and blends it with three neutral pictures and a long, high-exposure image on a per-pixel level to achieve a highly customized result. The machine learning system examines the context of the picture to understand where a pixel sits on the frequency spectrum -- pixels for clouds will be treated differently from those for skin, for example. After that, the technology pulls structure and tonality from the different exposures based on weighting ratios.
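Apple hasn't published how Deep Fusion actually works, but the general idea of per-pixel, detail-weighted exposure fusion can be sketched in a few lines of NumPy. Everything here is an assumption for illustration: the function names are invented, and the box-blur "high-frequency" heuristic is a crude stand-in for whatever frequency analysis the A13's neural engine performs.

```python
import numpy as np

def local_detail(img, k=3):
    """Crude high-frequency estimate: |pixel - local mean| over a k*k window.
    (Stand-in for the real frequency analysis, which is unknown.)"""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    # Stack every shifted view of the k*k neighborhood, then average them.
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(k) for j in range(k)])
    return np.abs(img - windows.mean(axis=0))

def fuse(sharp_short, neutrals, long_exp):
    """Illustrative per-pixel blend: take structure from the sharp short
    exposure where detail is high, tonality from the other frames elsewhere."""
    w = local_detail(sharp_short)
    w = w / (w.max() + 1e-8)                      # normalize weights to [0, 1]
    base = np.mean(neutrals + [long_exp], axis=0)  # tonality/color base
    return w * sharp_short + (1.0 - w) * base
```

In this toy version, a pixel in a flat region (sky, skin) gets almost all of its value from the averaged neutral and long exposures, while a pixel on a sharp edge leans on the underexposed frame, which is the same structure-versus-tonality trade-off described above.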

There are some gotchas. You can't use this with your phone's ultra-wide angle lens, as hinted earlier, and bright telephoto shots will revert to Smart HDR to maintain better exposure. The capture process is quick, but it'll take a second for your iPhone to process the image at full quality. And yes, you absolutely need a 2019 iPhone for this to work -- it's dependent on the A13 chip.

You'll have to wait until the general release of iOS 13.2 if you're not willing to experiment. Even so, this could represent a minor coup for Apple. The company has been accused of slipping on photography in the past, letting AI-centric phone cameras like Google's pull ahead. Although the iPhone 11 series made strides in photo quality out of the box (particularly with its Night Mode), Deep Fusion gives Apple an AI-powered camera feature that boosts quality further and might provide an edge over rivals in key scenarios.

Update (8:10PM ET): According to TechCrunch, the developer beta with Deep Fusion will not be released today. It's still "coming," but there's no launch date right now.