In a blog post yesterday, the engineers behind the feature explained that the tool was built on basic facial recognition software. That worked well for pictures of people, but it didn't help with images of objects, landscapes or animals. The team then turned to eye-tracking research, which can be used to train neural networks and other algorithms to predict which parts of an image people want to look at.
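The core idea — fitting a predictor to eye-tracking data so it learns which regions attract gaze — can be sketched with a toy model. The snippet below is a minimal illustration, not Twitter's actual system: it fits a simple linear regressor (standing in for their neural network) on entirely hypothetical patch features and fixation labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "eye-tracking" dataset: each sample is an image-patch feature vector
# (e.g. brightness, local contrast, edge energy), and the label is the
# fraction of gaze fixations that landed on that patch. All data here is
# synthetic, purely for illustration.
n = 200
features = rng.random((n, 3))
true_w = np.array([0.2, 1.5, 0.8])  # contrast and edges attract gaze
fixation_density = features @ true_w + 0.05 * rng.standard_normal(n)

# Fit a linear saliency predictor by least squares (a stand-in for a
# neural network trained on real fixation heatmaps).
w, *_ = np.linalg.lstsq(features, fixation_density, rcond=None)

# Score new patches and pick the most salient one to keep in the crop.
patches = rng.random((5, 3))
saliency = patches @ w
most_salient = int(np.argmax(saliency))
print("learned weights:", np.round(w, 2))
print("most salient patch index:", most_salient)
```

The same ranking idea scales up: a real saliency network scores every region of the image, and the crop window is placed to keep the highest-scoring region visible.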

Once a neural network was able to pick out these salient areas, the team needed a way to make it work in real time on Twitter. Picture cropping on the site is fairly forgiving -- only a third or so of an image needs to be shown in the preview -- so the team used a process called "knowledge distillation" to simplify the model, making the neural network 10 times faster than its initial design. Saliency detection and optimized cropping now happen instantaneously.
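Knowledge distillation itself — training a small, fast "student" model to match the temperature-softened outputs of a large "teacher" — can be sketched in a few lines. The models and data below are hypothetical stand-ins, not Twitter's networks; the point is only the training loop, where the student learns from the teacher's soft predictions rather than from hard labels.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, temperature=1.0):
    """Softmax with a temperature; higher T gives softer distributions."""
    z = z / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical "teacher": an expensive model whose outputs we can query.
W_teacher = rng.standard_normal((10, 4))
X = rng.standard_normal((500, 10))
T = 3.0  # temperature softens the teacher's targets
soft_targets = softmax(X @ W_teacher, T)

# "Student": trained only to reproduce the teacher's softened outputs.
# (Real students are smaller/cheaper architectures; this toy one just
# shows the distillation loss and update.)
W_student = np.zeros((10, 4))
lr = 0.5
for _ in range(300):
    probs = softmax(X @ W_student, T)
    # Gradient of cross-entropy between student probs and soft targets.
    grad = X.T @ (probs - soft_targets) / len(X)
    W_student -= lr * grad

# How often the student's hard prediction matches the teacher's.
agree = np.mean(np.argmax(X @ W_student, axis=1)
                == np.argmax(X @ W_teacher, axis=1))
print(f"student/teacher agreement: {agree:.0%}")
```

At inference time only the student runs, which is where the speedup comes from: the teacher's knowledge is baked into a model cheap enough to score every uploaded image on the fly.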

The feature is rolling out now on the iOS and Android apps and on desktop, so the next time you upload a picture of Mittens, you can be sure your followers will see his furry little face in all its adorable glory -- whether they want to or not.