GPUs aren’t quite ready to help people with severe disabilities shed their wheelchairs, but they’ve helped researchers take a big step closer.

At this fall’s Cybathlon — dubbed the world’s first “bionic Olympics” and held in Zurich — a team of 40 students from Imperial College London developed technology that let competitors participate in a brain-computer interface race.

The participants, referred to as pilots, wear electroencephalogram (EEG) caps, which record the electrical activity of the brain and are connected to computers running GPU-powered machine learning algorithms. Normally unable to engage in physical competitions due to spinal cord injuries, neurological diseases or other trauma, the pilots raced in a video game, with the algorithms interpreting their brain impulses to control their digital avatars.

The approach was the latest step in Aldo Faisal’s efforts to jettison the use of invasive brain implants to aid those with restricted movement, and instead rely on software made smarter via machine learning.

“I love to combine the latest in artificial intelligence and robotics research to understand how the brain controls movements and help people who cannot move in return,” said Faisal, associate professor of neurotechnology at Imperial College. “Ideally, we do that with low-cost approaches and without having to cut people open.”

Enter Deep Learning

For the Cybathlon race, that meant building a convolutional neural network that determines which of three possible actions a competitor wanted to take. Especially tricky was getting the network to recognize when a competitor didn’t want to take any action at all.
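The shape of that classification step can be pictured with a minimal sketch: slide a temporal convolution over a window of EEG samples, then map the features to four class probabilities — the three commands plus an explicit “rest” (no action) class. Everything here is an illustrative assumption, not the team’s actual network: the channel count, window length, kernel width, and random stand-in weights are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 EEG channels, 250 samples (~1 s at 250 Hz),
# and 4 output classes: three commands plus an explicit "rest" class.
N_CHANNELS, N_SAMPLES, N_CLASSES = 8, 250, 4
KERNEL = 25  # temporal convolution width (assumed)

# Random parameters standing in for a trained network.
conv_w = rng.normal(0.0, 0.1, (N_CHANNELS, KERNEL))
fc_w = rng.normal(0.0, 0.1, (N_CLASSES, N_SAMPLES - KERNEL + 1))
fc_b = np.zeros(N_CLASSES)

def classify(window):
    """Map one EEG window (channels x samples) to class probabilities."""
    # Temporal convolution per channel, summed across channels.
    feat = sum(np.convolve(window[c], conv_w[c], mode="valid")
               for c in range(N_CHANNELS))
    feat = np.maximum(feat, 0.0)          # ReLU nonlinearity
    logits = fc_w @ feat + fc_b           # linear read-out
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

probs = classify(rng.normal(size=(N_CHANNELS, N_SAMPLES)))
```

Treating “no action” as its own class, rather than thresholding the other three, is one common way to make a decoder abstain — the avatar moves only when the rest class loses.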

To speed up the process of recalibrating to a competitor’s brain signals every time that competitor put on an EEG cap, Faisal’s team also pre-trained the deep learning network. Without pre-training, he likened the process to having to wait an hour or two for a computer to reboot.
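The pre-train-then-recalibrate idea can be sketched with a simple read-out layer: fit weights offline on a large pooled dataset, then run only a few gradient steps on a short calibration session once the cap goes on. This is a toy sketch under stated assumptions — the feature dimension, the synthetic data, and the multinomial logistic read-out are all stand-ins, not the team’s pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fit(X, y, W, steps, lr=0.1):
    """A few gradient steps of multinomial logistic regression,
    starting from the weights W passed in (e.g. pre-trained ones)."""
    onehot = np.eye(W.shape[1])[y]
    for _ in range(steps):
        grad = X.T @ (softmax(X @ W) - onehot) / len(X)
        W = W - lr * grad
    return W

# Toy stand-ins: 64 pre-computed EEG features, 4 classes.
n_feat, n_classes = 64, 4
W_true = rng.normal(size=(n_feat, n_classes))

def session(n):
    """Synthetic data drawn from a fixed 'true' decoder."""
    X = rng.normal(size=(n, n_feat))
    return X, (X @ W_true).argmax(axis=1)

# Pre-train offline on a large pooled dataset ...
W_pre = fit(*session(2000), np.zeros((n_feat, n_classes)), steps=200)
# ... then only a short fine-tune when the cap goes on.
Xc, yc = session(100)
W_day = fit(Xc, yc, W_pre, steps=20)

acc = ((Xc @ W_day).argmax(axis=1) == yc).mean()
```

Warm-starting from pre-trained weights is what turns an hour-or-two calibration wait into a short session: only a small residual adjustment is learned per wearing.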

NVIDIA GPUs powered the building and training of the deep learning network and machine learning algorithms, which Faisal said accelerated the process by as much as 15x compared with high-end CPUs.

A Simple Plan

The potential applications for this approach to brain-computer interfaces are numerous. Already, the combination of EEG caps and machine learning is being used in brain diagnoses. And Faisal foresees the technology solving issues in human-machine interaction for everything from smart homes to self-driving cars.

But Faisal’s more immediate plan is to make the technology simpler and more accessible for the general population.

“We want to show that all of this can be done in the cloud, so that you need only your EEG and a simple laptop to get going with high performance brain-computer interface,” he said. “It is not only about the big ideas, but also to make things work in practical life.”

Check out highlights from the brain-computer interface race in the video below.