Machine learning is breaking down language barriers, fuelling – and fighting – cybercrime, and can even recognise emotions, but the complex processes behind these breakthroughs are often a mystery.


Startup Graphcore wants to change this. The Bristol-based firm has created a series of 'AI brain scans', using its development chip and software, to produce Petri dish-style images that reveal what happens as processes run.


Most machine learning programs – including Google's own systems and open-source frameworks – work by representing their training computations as computational graphs.

Put very simply, machine learning systems go through a construction phase, during which a graph showing all the computations needed is created. This is followed by an execution phase where the machine uses the computations (or steps) highlighted in the graph to run through its training processes. As it powers through its executions, it makes 'passes' which run forwards and backwards across the data. In Graphcore's images, the movement of these passes and the connections between them have been assigned various colours.
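The two phases described above can be sketched in a few lines of Python. This is an illustrative toy, not Graphcore's or Google's implementation: the `Node` class, `add`, `mul` and `backward` functions are all hypothetical names invented for this example. Building the nodes corresponds to the construction phase (the forward pass happens as values are computed), and `backward` is a simplified backward pass propagating gradients through the graph's connections.

```python
# Toy computational graph: construction phase builds nodes,
# the backward pass then flows gradients back along the edges.

class Node:
    def __init__(self, value=0.0):
        self.value = value
        self.grad = 0.0
        self.parents = []          # (parent_node, local_gradient) pairs

def add(a, b):
    out = Node(a.value + b.value)
    out.parents = [(a, 1.0), (b, 1.0)]
    return out

def mul(a, b):
    out = Node(a.value * b.value)
    out.parents = [(a, b.value), (b, a.value)]
    return out

# Construction phase: build the graph for y = (x * w) + b
x, w, b = Node(2.0), Node(3.0), Node(1.0)
y = add(mul(x, w), b)

# Backward pass: walk the graph edges in reverse, accumulating gradients
def backward(node, upstream=1.0):
    node.grad += upstream
    for parent, local in node.parents:
        backward(parent, upstream * local)

backward(y)
print(y.value)   # 7.0
print(w.grad)    # 2.0, i.e. dy/dw = x
```

In Graphcore's images, it is the traffic of these forward and backward passes across a vastly larger graph that is being coloured and visualised.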


This is similar to how brain scans are compiled, according to Nigel Toon, Graphcore's CEO.

"The striking similarity to scans of the brain highlights that what your brain is doing is something very similar," Toon told WIRED. "Your brain has neurons, and synapses connecting those neurons together, and you're effectively modelling something very similar in this machine learning world as well.


"What you're seeing is how the graph operates on the processor, so it would be analogous to taking a scan from a brain to see how it works."

The images, provided exclusively to WIRED, show what the firm's Poplar software is capable of when combined with a processor designed for AI applications. Graphcore generated the pictures while running machine learning processes used to identify images. "You're effectively taking a graph description through a piece of software to a graph processor," Toon told WIRED.

"What you're seeing is the layers of a deep neural network exposed," he explained. "What a deep neural network is doing is trying to extract features from data automatically, so you give a stream of data and they are extracting finer and finer levels of detail."

Graphcore says the chip used to create the images will be completed this year. The firm calls its processor an Intelligence Processing Unit (IPU), which it argues is the best way to run machine learning workloads. It explains the technological process in a blog post published alongside this article.



By comparison, most existing machine learning programs run on high-powered GPUs made by firms such as NVIDIA. NVIDIA says its GPUs are being developed to run in the cloud and support more data processing with less infrastructure, but Toon argues that processors designed specifically for machine learning are better than GPUs.

It's something Google seemingly agrees with. When the tech giant recently rolled out its AI for Google Translate, it was forced to create a new chip, called a Tensor Processing Unit (TPU). The processor is structured differently to GPUs and computes at lower numerical precision.

"They're trying quite hard to evolve GPUs in a different direction," Toon said. "We think by starting form a clean sheet of paper we can make some major breakthroughs and move the landscape."