Google’s Pixel phone has one hell of a camera, and one of the reasons is AI. Google has used its machine learning talent to squeeze better images out of a tiny smartphone lens, as in its portrait mode shots, which pair blurred backgrounds with pin-sharp subjects.

Now, Google has open-sourced a lump of code named DeepLab-v3+ that it says will help others recreate the same effect. (Although this is not the same tech that Google itself uses in the Pixel phones — see the correction note at the bottom of the article.) DeepLab-v3+ is an image segmentation tool built using convolutional neural networks, or CNNs: a machine learning method that’s particularly good at analyzing visual data. Image segmentation identifies the objects within a picture and splits them apart, dividing foreground elements from background elements. That split can then be used to create ‘bokeh’-style photographs.
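That foreground/background split is what makes the bokeh trick possible: once a segmentation model such as DeepLab-v3+ has produced a per-pixel mask, compositing the sharp subject over a blurred copy of the scene is simple arithmetic. Here’s a minimal NumPy sketch of that last step — the `box_blur` and `portrait_effect` helpers are illustrative, not part of DeepLab-v3+, and the mask here is a toy stand-in for what a real model would output:

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude separable box blur (edges wrap via np.roll; fine for a demo)."""
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for shift in range(-radius, radius + 1):
            acc += np.roll(out, shift, axis=axis)
        out = acc / (2 * radius + 1)
    return out

def portrait_effect(image, mask, radius=2):
    """Keep pixels where mask is True sharp; blur everything else.

    `mask` is a boolean H x W array, assumed to come from a segmentation
    model such as DeepLab-v3+ (not produced here).
    """
    blurred = box_blur(image, radius)
    m = mask[..., None].astype(float)   # broadcast the mask over color channels
    return m * image + (1.0 - m) * blurred

# Toy example: a random "photo" with a square "subject" in the middle.
photo = np.random.default_rng(0).random((8, 8, 3))
subject = np.zeros((8, 8), dtype=bool)
subject[2:6, 2:6] = True
result = portrait_effect(photo, subject)
# Subject pixels come through untouched; background pixels are smoothed.
```

A production pipeline would swap the box blur for a proper lens-blur kernel and feather the mask edge, but the compositing logic is the same.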

As Google software engineers Liang-Chieh Chen and Yukun Zhu explain, image segmentation has improved rapidly with the recent deep-learning boom, reaching “accuracy levels that were hard to imagine even five years [ago].” The company says it hopes that by publicly sharing the system “other groups in academia and industry [will be able] to reproduce and further improve” on Google’s work.

At the very least, opening up this piece of software to the community should help app developers who need some lickety-split image segmentation, just like Google does it.

Correction: Google contacted The Verge to clarify that DeepLab-v3+ is not the exact same technology used in the Pixel’s portrait mode, as the company’s original blog post had implied. Portrait mode on the Pixel is just an example of the sort of features DeepLab-v3+ can enable. We regret the error.