In a blog post, the company detailed how its "unique set of perceptive technologies" allows it to create a 3D image with only a single camera and no dedicated depth sensor. Thanks to machine learning, the new Augmented Faces API can crop faces more precisely, which reduces noise, and can track movement more realistically. It sounds similar to Apple's Animoji, but it won't require the latest iPhone camera technology. It also allows the filters to be applied to pre-recorded videos.

The tool can detect and compensate for camera imperfections and extreme lighting, so it can simulate light reflecting off AR glasses and cast virtual shadows that match the natural lighting. Google says the filters allow for more realistic makeup effects as well.

Unfortunately for most of us, the filters are only available to YouTube "creators" -- channels with more than 10,000 subscribers -- who are currently guinea pigs for YouTube Stories, still in beta. The filter tool is also available to developers using the latest ARCore SDK, meaning a developer could use it to bring AR filters to a video game built with Unity or to an app built with Sceneform. So while most of us won't be able to use this on YouTube for now, the technology might come to other apps in the future. This also looks like yet another example of companies doing what Snapchat does, only better. At least in this case, it's not Facebook eating Snapchat's lunch again.
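To give a rough idea of what that developer path looks like, here is a sketch against ARCore's Augmented Faces API with Sceneform on Android. It is not code from Google's post, and names like `arFragment`, `faceRegionsRenderable`, and `faceMeshTexture` are assumed to be set up elsewhere in the app:

```java
import java.util.Collection;

import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.sceneform.ux.AugmentedFaceNode;

// Sketch: runs inside a per-frame scene update callback, after the
// ARCore Session has been configured for the front-facing camera.
Collection<AugmentedFace> faces =
        session.getAllTrackables(AugmentedFace.class);
for (AugmentedFace face : faces) {
    if (face.getTrackingState() == TrackingState.TRACKING) {
        AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
        faceNode.setParent(arFragment.getArSceneView().getScene());
        // Attach 3D models to tracked face regions (forehead, nose tip).
        faceNode.setFaceRegionsRenderable(faceRegionsRenderable);
        // Overlay a texture on the tracked face mesh -- this is the
        // mechanism behind effects like virtual makeup.
        faceNode.setFaceMeshTexture(faceMeshTexture);
    }
}
```

The split between region-anchored renderables and a mesh texture mirrors the two kinds of filters described above: rigid props (like glasses) versus surface effects (like makeup).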