Artificial intelligence researchers at Facebook have figured out how to train their AI models for image recognition at eye-popping speeds.

The company announced the results of the effort to speed up training time at the Data@Scale event in Seattle this morning. Using Facebook’s custom GPU (graphics processing unit) hardware and some new algorithms, researchers were able to train their models at a rate of 40,000 images per second, making it possible to complete a full training run on the ImageNet dataset in under an hour with no loss of accuracy, said Pieter Noordhuis, a software engineer at Facebook.
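A back-of-the-envelope check shows how that throughput translates into an under-an-hour run. The figures for dataset size and epoch count below are assumptions, not from the announcement: roughly 1.28 million images in the standard ImageNet-1k training set, and a 90-epoch schedule typical for this kind of model.

```python
# Rough check: how long does an ImageNet training run take at 40,000 images/sec?
images_per_epoch = 1_281_167   # approximate ImageNet-1k training set size (assumed)
epochs = 90                    # typical training schedule (assumed)
throughput = 40_000            # images per second, as reported

seconds = images_per_epoch * epochs / throughput
print(f"{seconds / 60:.0f} minutes")  # about 48 minutes -- under an hour
```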

“You don’t need a proper supercomputer to replicate these results,” Noordhuis said.

The system learns to associate images with words, an approach called “supervised learning,” he said. Thousands of images in a training set are each assigned a description (say, a cat), and the system is shown all of the images along with their classifications. Researchers then present the system with new images of the same object (say, a cat), but without the description attached. If the system correctly recognizes that it’s looking at a cat, it has learned to associate imagery with descriptive words.
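That train-on-labeled, test-on-unlabeled loop can be sketched in miniature. The feature vectors, labels, and nearest-neighbor rule below are hypothetical stand-ins; the actual system uses deep neural networks running on GPUs.

```python
# Toy illustration of supervised learning: learn from labeled examples,
# then classify an example that arrives without a label.

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Training phase: each "image" (a made-up feature vector) comes with a description.
labeled = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
]

# Test phase: a new image arrives with no description attached.
print(nearest_neighbor(labeled, (0.85, 0.15)))  # prints "cat"
```

If the model keeps answering correctly on images it has never seen, it has generalized from the labeled examples rather than memorizing them.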

The breakthrough allows Facebook AI researchers to start working on even bigger datasets, like the billions of things posted to its website every day. It’s also a display of Facebook’s hardware expertise: the company made sure to note that its hardware designs are open source. “This means that for others to reap these benefits, there’s no need for incredibly advanced TPUs,” it said in a statement, throwing some shade at Google’s recent TPU announcement at Google I/O.

Facebook plans to release more details about its AI training work in a research paper published to its Facebook Research page.