Ever thought your selfies were as good as a Rembrandt? Or your party pics as apocalyptic as something by Hieronymus Bosch? Well, Facebook hears you, and it’s got something to help. This morning, the company announced that a number of AI-powered art filters would soon be arriving on its mobile app, allowing users to overlay photos, videos, and even live broadcasts with various artistic styles.

This type of AI feature is known as "style transfer" and has been around for a while now. Russian app Prisma popularized it earlier this year, and in October, Facebook showed off a prototype version working with live video. According to a report from Wired, the feature is now available in Facebook's main app in Ireland and is "due soon" in the US. You can see what sort of effects it produces below:

But as fun as this is to play with (and as clear a sign of Facebook putting more emphasis on the camera in its primary app), it's the technology underpinning these filters that's really significant.

Like many of the most interesting consumer AI applications out today, Facebook's art filters are built using deep neural networks. These are computer programs loosely modeled on the way neurons in the brain process information, trained on large amounts of data to recognize common patterns. Deep neural nets are used for everything from voice recognition in digital assistants to categorizing your holiday snaps in Google's Photos app.
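To make the idea of "training on data to recognize patterns" concrete, here's a minimal sketch of a neural network, far removed from anything Facebook actually ships: a toy two-layer network, written in plain Python with NumPy, that learns the XOR pattern by repeatedly nudging its weights to reduce its error. All the sizes and values here are illustrative assumptions, not anything from Facebook's systems.

```python
import numpy as np

# A toy two-layer neural network learning XOR -- a pattern too
# complex for a single artificial "neuron" to capture alone.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

# Small random starting weights for a 2 -> 8 -> 1 network.
W1 = rng.normal(0, 1, (2, 8))
W2 = rng.normal(0, 1, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    h = sigmoid(X @ W1)        # hidden-layer activations
    return h, sigmoid(h @ W2)  # network output

losses = []
for _ in range(5000):
    h, out = forward(X, W1, W2)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation: push each weight a little in the direction
    # that shrinks the error.
    grad_out = err * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    W2 -= 0.5 * grad_W2
    W1 -= 0.5 * grad_W1

print(losses[0], losses[-1])  # error falls as the net learns the pattern
```

The "training" here is just thousands of tiny weight adjustments; real networks like Facebook's do the same thing at vastly larger scale, with millions of weights and millions of images.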

The miniaturization of AI is a big step forward

Running these programs can be tricky, though, as they demand a lot of processing power. Often, programs that use this sort of AI need an internet connection to work: they send your request off to a server farm somewhere, which does the actual computation and then sends the results back to you. With its new art filters, though, Facebook has managed to make these neural nets run locally on your phone.

To achieve this, Facebook has created an entirely new deep learning framework called Caffe2go. You can read about the technology in more technical detail in a Facebook blog post, but suffice it to say, it's impressive work. The seminal paper on style transfer was published in 2015, and the resulting software worked only on still images, had to run in a massive data center, and still took seconds to deliver results. By comparison, Facebook's implementation processes 20 frames per second using just your smartphone's hardware.
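The 2015 approach that paper introduced captures an artwork's "style" as correlations between the feature channels of a neural network, summarized in a Gram matrix, and then optimizes an image to match those statistics. Here's a minimal sketch of that statistic using toy NumPy arrays in place of real network feature maps; the array shapes and function names are illustrative assumptions, not anything from Facebook's or the paper's code.

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels -- the 'style' statistic.

    features: array of shape (channels, height, width), standing in
    for the activations of one convolutional layer.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(feat_a, feat_b):
    """Mean squared difference between the two Gram matrices."""
    return float(np.mean((gram_matrix(feat_a) - gram_matrix(feat_b)) ** 2))

rng = np.random.default_rng(0)
content = rng.normal(size=(16, 32, 32))  # toy stand-in feature maps
style = rng.normal(size=(16, 32, 32))

print(style_loss(content, content))  # identical features -> 0.0
print(style_loss(content, style))    # different textures -> positive
```

In the original method, an image was iteratively optimized to drive this kind of loss down, which is why it took seconds per picture on heavyweight hardware; getting a comparable effect at 20 frames per second on a phone is what makes Caffe2go notable.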

This miniaturization of AI, along with Facebook's new Caffe2go infrastructure (which is available to developers), is going to enable lots of new features, says Facebook CTO Mike Schroepfer. "We can create gesture-based controls, where the computer can see where you're pointing and activate different styles or commands," writes Schroepfer. "We can recognize facial expressions and perform related actions, like putting a 'yay' filter over your selfie when you smile."

These examples probably don't sound like the grand future of AI you might have imagined, but they're just the beginning. Facebook already uses its AI to describe the content of images to visually impaired users, but imagine the same process happening offline with real-time video. In true 21st century style, better selfies are just the beginning.