iOS 13 and iPadOS will give recent iPhone and iPad users the ability to capture images and videos with their front- and rear-facing cameras simultaneously.

Apple says it is also possible to take advantage of multiple microphones to “shape” the sound that is captured. It encourages developers to leverage the new capabilities to bring picture-in-picture and spatial audio to their apps.

Apple introduced the ability to capture photos and video with multiple cameras on the Mac back in 2011 with OS X Lion. However, it has never been possible on iPhone and iPad. That’s going to change this year.

Apple is introducing new APIs that will allow camera apps to take advantage of front- and rear-facing camera modules simultaneously. You’ll need one of its latest devices to enjoy this change.
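The new capability centers on AVCaptureMultiCamSession, a subclass of AVCaptureSession introduced in iOS 13. A minimal sketch of checking for support and configuring front- and rear-facing cameras in one session might look like this (error handling trimmed; Apple’s own sample code wires up explicit connections, while this simplified version relies on the session forming connections automatically):

```swift
import AVFoundation

// Multi-cam capture only works on supported hardware,
// so check before building the session.
guard AVCaptureMultiCamSession.isMultiCamSupported else {
    fatalError("Multi-cam capture is not supported on this device")
}

let session = AVCaptureMultiCamSession()
session.beginConfiguration()

// Add the rear and front wide-angle cameras as separate
// inputs to the same session.
for position in [AVCaptureDevice.Position.back, .front] {
    if let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                            for: .video,
                                            position: position),
       let input = try? AVCaptureDeviceInput(device: device),
       session.canAddInput(input) {
        session.addInput(input)
    }
}

session.commitConfiguration()
session.startRunning()
```

In a real app you would also attach an output (and a preview layer) per camera before committing the configuration, which is what makes the simultaneous streams usable.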

Multi-cam support comes to iOS this fall

With multi-cam support, you’ll be able to film a scene with your iPhone’s rear-facing camera while also capturing your reaction with its front-facing selfie camera for the first time. And it’s not just video and photos that are saved.

You will also be able to capture the metadata, audio, and depth from multiple cameras and microphones simultaneously. In addition, developers will be able to leverage the TrueDepth camera system.

That means you’ll be able to separate streams captured by the iPhone’s wide-angle and telephoto lenses. And you’ll be able to switch between them on the fly during playback in the Photos app.

There are limitations

To enjoy these features, you’ll need one of Apple’s newest iOS devices. Only the iPhone XS, iPhone XS Max, iPhone XR, and 2018 iPad Pro are getting them. What’s more, there are limits to what can be done with it.

Only certain combinations of camera sensors can be used simultaneously. And only one instance of multi-cam capture can run at any time, so it’s impossible to use multi-cam capture in multiple apps at once.

On the iPhone XS, for instance, if you’re capturing separate streams from the dual camera on the back of the phone, you can’t also capture from the selfie camera module.

Likewise, selfie camera capture can only be combined with images or video from one of the iPhone’s rear-facing camera modules — either the wide-angle or telephoto sensors.
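Apps don’t have to guess which sensor combinations are valid. AVCaptureDevice.DiscoverySession exposes the sets of devices that can run together, which a camera app can query before building its session. A rough sketch (the device types listed are examples; what’s actually returned varies by hardware):

```swift
import AVFoundation

// Discover the cameras of interest, then ask which
// combinations can capture simultaneously.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera,
                  .builtInTelephotoCamera,
                  .builtInTrueDepthCamera],
    mediaType: .video,
    position: .unspecified)

// Each element is a set of devices that may be used together
// in a single AVCaptureMultiCamSession.
for deviceSet in discovery.supportedMultiCamDeviceSets {
    let names = deviceSet.map { $0.localizedName }
                         .joined(separator: " + ")
    print("Supported combination: \(names)")
}
```

On an iPhone XS, for example, the returned sets would reflect the restrictions described above, such as the selfie camera pairing with only one rear module at a time.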

Portrait photos will be better, too

Apple is also introducing a new technology to iOS 13 called Semantic Segmentation Mattes. This will help it better identify things like skin, hair, and teeth to improve Portrait photos and effects.

In one demonstration at WWDC, Apple showed how a person’s face could be separated from their hair and teeth so that virtual face paint and hair dye could be applied in the right places.
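On the API side, these mattes surface through AVCapturePhotoOutput. A hedged sketch of opting in to the skin, hair, and teeth mattes for a capture (it assumes `photoOutput` is already attached to a configured, running session, and `delegate` is your capture delegate):

```swift
import AVFoundation

func requestMattes(from photoOutput: AVCapturePhotoOutput,
                   delegate: AVCapturePhotoCaptureDelegate) {
    // Enable every matte type this device can deliver
    // (skin, hair, and teeth on supported hardware).
    photoOutput.enabledSemanticSegmentationMatteTypes =
        photoOutput.availableSemanticSegmentationMatteTypes

    let settings = AVCapturePhotoSettings()
    // Request the enabled mattes for this particular capture.
    settings.enabledSemanticSegmentationMatteTypes =
        photoOutput.enabledSemanticSegmentationMatteTypes

    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```

When the photo arrives, the delegate can pull each matte out of the resulting AVCapturePhoto with `semanticSegmentationMatte(for:)`, which is how an effects app would know where the hair ends and the skin begins.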

Of course, developers will need to take advantage of these improvements and APIs before we can enjoy them in iOS 13 and iPadOS. Hopefully some will do that by the time these updates roll out to everyone this fall.

For more information on Apple’s improvements to camera technology, check out its multi-camera capture session on the WWDC website.