21st June 2018

Brain scan algorithm is 1,000 times faster

MIT has published details of "VoxelMorph", a new machine-learning algorithm that registers brain scans and other 3-D images over 1,000 times faster than traditional systems.

Medical image registration is a common technique that involves overlaying two images – such as magnetic resonance imaging (MRI) scans – to compare and analyse anatomical differences in great detail. If a patient has a brain tumour, for instance, doctors can overlap a brain scan from several months ago onto a more recent scan to analyse small changes in the tumour's progress.

Unfortunately, this process can often take hours, as traditional systems meticulously align each of potentially a million pixels in the combined scans. In a pair of upcoming conference papers, however, researchers from the Massachusetts Institute of Technology (MIT) describe how to overcome this problem. Their new machine-learning algorithm can register brain scans and other 3-D images over 1,000 times more quickly.

The AI works by "learning" while registering thousands of pairs of images. In doing so, it acquires information about how to align images and estimates a set of optimal alignment parameters. After training, it uses those parameters to map all pixels of one image to another, all at once. This reduces registration times to a minute or two on a normal computer, or less than a second on a GPU, with accuracy comparable to state-of-the-art systems.

"The tasks of aligning a brain MRI shouldn't be that different when you’re aligning one pair of brain MRIs or another," says Guha Balakrishnan, co-author on both papers and a graduate student in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "There is information you should be able to carry over in how you do the alignment. If you’re able to learn something from previous image registration, you can do a new task much faster and with the same accuracy."

MRI scans are basically hundreds of stacked 2-D images that form massive 3-D images, called "volumes," containing a million or more 3-D pixels, or "voxels." Aligning all voxels in the first volume with those in the second (and so on) is therefore extremely time-consuming. Moreover, scans can originate from different machines and have different spatial orientations, which makes matching voxels even more computationally complex.

"You have two different images of two different brains, put them on top of each other, and you start wiggling one until it fits the other," says co-author and postdoc at CSAIL, Adrian Dalca. "Mathematically, this optimisation procedure takes a long time."
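The "wiggling" Dalca describes is, mathematically, an optimisation over possible transformations. As a toy illustration only (the real methods optimise far richer non-linear deformations over millions of voxels, which is why they are slow), the sketch below registers two tiny 2-D images by exhaustively trying small integer translations and keeping the one that minimises the sum of squared differences. The function name and images are invented for this example:

```python
import numpy as np

def best_translation(fixed, moving, max_shift=2):
    # Classical registration as optimisation: try every small integer
    # shift of the "moving" image and keep the one that best matches
    # the "fixed" image under a sum-of-squared-differences cost.
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.sum((fixed - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

fixed = np.zeros((5, 5)); fixed[2, 2] = 1.0
moving = np.zeros((5, 5)); moving[1, 1] = 1.0
print(best_translation(fixed, moving))  # (1, 1): shift down and right
```

Even in this toy, the cost of the search grows with the number of candidate transformations; with per-voxel deformations instead of a single translation, the search space explodes, which is why traditional registration takes hours.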

This process becomes particularly slow when analysing scans from large populations. Neuroscientists analysing the variations in brain structures across hundreds of patients with a particular disease or condition, for instance, can potentially require weeks of computational time. That's because those algorithms have one major flaw: they never learn. After each registration, they dismiss all data pertaining to voxel location.

"Essentially, they start from scratch given a new pair of images. After one hundred registrations, you should have learned something from the alignment. That is what we leverage," explains Balakrishnan.

The researchers' algorithm, called "VoxelMorph", is powered by a convolutional neural network (CNN) – a machine-learning approach commonly used for processing images. These networks consist of many nodes, which process images and other information across several layers of computation.

In the first paper, presented this week at the Conference on Computer Vision and Pattern Recognition (CVPR), the researchers trained their algorithm on 7,000 publicly available MRI brain scans, then tested it on 250 additional scans.

During training, brain scans were fed into the algorithm in pairs. Using a CNN and a modified computation layer called a "spatial transformer", the method captures similarities between voxels in one MRI scan and voxels in the other. In doing so, the algorithm learns information about groups of voxels – such as anatomical shapes common to both scans – and uses it to calculate optimised parameters that can be applied to any scan pair.

When fed two new scans, a simple mathematical "function" uses those optimised parameters to rapidly calculate the exact alignment of every voxel in both scans. In short, the algorithm's CNN component gains all necessary information during training so that each new registration can be executed with a single, easily computable function evaluation.
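Once an alignment has been predicted, applying it amounts to resampling one image according to a dense displacement field – essentially one cheap array operation rather than an iterative search. The sketch below is a deliberately simplified 2-D, nearest-neighbour version of that idea (the actual method works on 3-D volumes with smooth interpolation inside the spatial-transformer layer); `warp_nearest` and the toy inputs are invented for illustration:

```python
import numpy as np

def warp_nearest(image, displacement):
    # Resample a 2-D image through a dense displacement field.
    # displacement[y, x, k] is the offset along axis k telling each
    # output pixel where in the source image to read from.
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + displacement[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + displacement[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

img = np.arange(9).reshape(3, 3)
disp = np.zeros((3, 3, 2))
disp[..., 1] = 1.0  # every pixel samples from one column to the right
print(warp_nearest(img, disp))
```

The key point is that the warp touches every pixel exactly once with vectorised indexing – the expensive part (deciding *what* the displacement field should be) has already been done by the trained network.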

The researchers found their algorithm could accurately register all 250 test brain scans – those registered after the training set – within two minutes using a traditional central processing unit (CPU), and in under one second using a graphics processing unit (GPU).

The other paper, to be presented at the Medical Image Computing and Computer Assisted Interventions Conference (MICCAI), in September, describes a refined VoxelMorph algorithm that "says how sure we are about each registration," explains Balakrishnan. It also guarantees "smoothness", meaning it doesn't produce folds, holes, or general distortions in the composite image. The algorithm's accuracy was validated using a Dice score – a standard metric for evaluating the overlap of aligned images. Across 17 brain regions, the refined VoxelMorph algorithm matched the accuracy of a commonly used state-of-the-art registration algorithm, while providing huge speed improvements.
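The Dice score itself is a simple overlap measure: twice the size of the intersection of two regions, divided by the sum of their sizes, giving 1.0 for perfect overlap and 0.0 for none. A minimal NumPy version for binary masks (the function name and toy masks are illustrative, not taken from the papers):

```python
import numpy as np

def dice_score(a, b):
    # Dice coefficient for binary masks: 2*|A ∩ B| / (|A| + |B|).
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks overlap perfectly by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

seg_a = np.array([[1, 1, 0], [0, 1, 0]])
seg_b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(seg_a, seg_b))  # 2*2 / (3+3) ≈ 0.667
```

In registration studies, the score is typically computed per anatomical region – here, across the 17 brain regions mentioned above – by comparing a warped segmentation against the target's segmentation.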

VoxelMorph has a broad range of potential applications, in addition to brain scans, the researchers say. MIT researchers, for instance, are currently running it on lung images. The algorithm could pave the way for image registration during operations. Various scans of different qualities and speeds are currently used before or during some surgeries. But those images are not registered until after the operation. When resecting a brain tumour, for instance, surgeons sometimes scan a patient’s brain before and after surgery to see if they've removed all the tumour. If any bit remains, they’re back in the operating room.

Using the new algorithm, Dalca says, surgeons could potentially register scans in near real-time, getting a much clearer picture on their progress. "Today, they can’t really overlap the images during surgery, because it will take two hours, and the surgery is ongoing," he says. "However, if it only takes a second, you can imagine that it could be feasible."

"This is a case where a big enough quantitative change [of image registration] – from hours to seconds – becomes a qualitative one, opening up new possibilities such as running the algorithm during a scan session, while a patient is still in the scanner," says Bruce Fischl, a professor in radiology at Harvard Medical School and neuroscientist at Massachusetts General Hospital. "It enables clinical decision making about what types of data needs to be acquired and where in the brain it should be focused, without forcing the patient to come back days or weeks later."

Fischl adds that his lab, which develops open-source software tools for neuroimaging analysis, hopes to use the algorithm soon: "Our biggest drawback is the length of time it takes us to analyse a dataset, and by far the more computational intensive portion of that analysis is nonlinear warping, so these tools are of great interest to me."

Perhaps, in the more distant future, regular checkups with a doctor could include a full-body scan to generate a historical database of a patient's entire anatomy over time, with neural networks pinging regions that show "high risk" changes between checkups. The implications for early detection of cancer and other diseases would be profound.
