Microsoft has unveiled Project Adam, its new artificial intelligence that it claims is 50 times faster than comparable state-of-the-art systems deployed by the likes of Google. Adam can look at an image of almost anything and tell you exactly what it is; it can even differentiate between a Pembroke and Cardigan corgi. While image classification is actually a bit old hat by this point, Adam is twice as accurate and uses 30 times fewer computers than other comparable systems. Notably, while similar AIs are moving to massively parallel GPU computing, Adam uses plain old CPUs in Microsoft’s Azure cloud — an impressive feat that is only possible thanks to Microsoft’s use of lock-free “HOGWILD!” computing.

Project Adam uses a branch of artificial intelligence called deep learning neural networks (NNs). We’ve covered deep learning NNs before, but here’s a quick synopsis. You start with a bunch of pristine synthetic neurons (represented in software). You then expose these neurons to a huge corpus of training material (in this case 14 million images spanning 22,000 object categories). Clever training algorithms look for features that distinguish each category, and slowly the connection strengths between the synthetic neurons are adjusted to encode those categories. Later, once training is complete, you can feed a new image into the system (say, from your smartphone camera), and it will percolate through the neural network until it arrives at the right classification. Deep learning NNs are pretty accurate: Facebook has a system that is as accurate as humans (97%) at matching faces.
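To make that synopsis concrete, here’s a toy stand-in for the process: a tiny one-hidden-layer network whose connection strengths start random and get adjusted by gradient descent until it can classify new inputs. Everything here (the 2-D “images,” network size, learning rate) is illustrative only; Adam’s real training corpus and architecture are vastly larger.

```python
import numpy as np

# Toy version of the training loop described above. Real systems train on
# millions of images; here the "images" are 2-D points and there are two
# "categories" (which side of a line the point falls on).
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 2))                 # synthetic training corpus
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # ground-truth category labels

# Two layers of "synthetic neurons": connections start random.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(X):
    h = np.tanh(X @ W1)                       # hidden-layer activations
    p = 1 / (1 + np.exp(-(h @ W2)))           # predicted probability of class 1
    return h, p.ravel()

for step in range(1000):                      # training: adjust the connections
    h, p = forward(X)
    grad_out = (p - y)[:, None] / len(X)      # cross-entropy gradient at output
    grad_h = (grad_out @ W2.T) * (1 - h**2)   # backpropagate through tanh
    W2 -= 1.0 * (h.T @ grad_out)
    W1 -= 1.0 * (X.T @ grad_h)

# Once trained, a new input "percolates through" the network to a classification.
_, p = forward(X)
accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same structure — random initial connections, repeated exposure to labeled examples, gradient-driven adjustment — is what Adam does, just at the scale of 14 million images and many machines.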

Way back in 2012, Google and Stanford showed off a deep learning NN called DistBelief that used 16,000 processing cores to perform image classification. Microsoft says Adam is twice as accurate, uses 30 times fewer machines, and is “50 times faster” overall. These numbers should be taken with a grain of salt, though: It’s a little silly to assume that Google hasn’t made any improvements to its system since 2012.

Microsoft says the secret sauce behind Adam is its use of Hogwild (its creators actually call it HOGWILD!, but I refuse to write it like that, for the sake of your sensitive eyes). Hogwild [research paper] is an interesting, niche approach to parallel computing that eschews locking in favor of increased parallelism. Basically, in a modern multithreaded program, shared resources (a data structure in RAM, a file on disk, etc.) are locked to ensure that only one thread can modify a given resource at any one time. This prevents collisions (two threads writing to the same region of RAM at once would be disastrous), but at the cost of blocking: the second thread has to wait for the first to release its lock before it can continue.
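The conventional, lock-based approach looks something like this minimal sketch: two threads share one counter, and a lock guarantees only one of them updates it at a time, with the other blocked while it waits.

```python
import threading

# Conventional locked concurrency, as described above: the lock prevents
# collisions on the shared counter, at the cost of blocking.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # acquire the lock; blocks if the other thread holds it
            counter += 1    # safe: no two threads touch the counter at once

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 200000 with the lock; without it, updates could be lost
```

Every `with lock:` here is a potential stall, and at the scale of thousands of threads hammering shared model parameters, that waiting adds up.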

Hogwild gets rid of the locking, which massively increases the throughput of a multithreaded system. This is how Adam manages to be so much faster than Google’s solution while using just a small cluster of plain ol’ Azure CPU cores. Don’t go ripping the locks out of your own software, though: A lock-free approach like this only works when collisions don’t cause significant damage. Deep learning neural networks seem to qualify: each training update only nudges the model slightly, and updates are sparse enough that threads rarely touch the same weights at once, so the occasional lost update barely dents the final result.
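Here’s a hedged sketch of the idea: several threads apply stochastic gradient descent updates to one shared weight vector with no locking whatsoever. The problem (a tiny linear regression), sizes, and learning rate are all made up for illustration; the point is that the racy read-modify-write updates still converge.

```python
import threading
import numpy as np

# Hogwild-style lock-free SGD on a toy problem. Threads race on the shared
# weight vector w; an occasionally lost or stale update is tolerable because
# each step only nudges the weights a little.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(3000, 3))
y = X @ true_w                    # noiseless targets for the toy regression

w = np.zeros(3)                   # shared model: no lock protects it

def sgd_worker(rows):
    for i in rows:
        grad = (X[i] @ w - y[i]) * X[i]   # squared-error gradient, one sample
        w[:] = w - 0.01 * grad            # racy read-modify-write: pure Hogwild

# Four threads, each sweeping a different slice of the data, all updating w.
threads = [threading.Thread(target=sgd_worker, args=(range(k, 3000, 4),))
           for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(np.round(w, 2))  # should land close to [2.0, -3.0, 0.5] despite the races
```

The Hogwild paper’s insight is that when updates are sparse, collisions are rare and mostly harmless, so you can skip synchronization entirely and bank the speedup.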

So far, Project Adam’s only use is an internal smartphone app that identifies objects in photos. As you can see in the photo, this is great if you don’t know your Shih Tzu from a Sheltie, but it could also be used to instantly tell you the calorific content of everything on your plate, or to identify the mysterious rash on your neck.

While Project Adam has obvious applications, an accurate computer vision/object classification program isn’t going to change the world on its own, and it surely isn’t Microsoft’s end goal. Peter Lee, head of Microsoft Research, thinks parts of Adam could be used in e-commerce, sentiment analysis, and robotics. A bit further down the line, Lee even thinks Adam could be the basis of an “ultimate machine intelligence” that handles a much wider range of input sources — like humans with our many senses. Yes, it would seem Microsoft wants to challenge Google in making the first human-level intelligence. What could possibly go wrong?