Summary: Researchers report that a convolutional neural network has been used to decode brain signals from EEG data. The scientists believe deep learning systems could become important tools for neuroscience analysis and help revolutionize brain research.

Source: University of Freiburg.

Filtering information for search engines, acting as an opponent in a board game, or recognizing images: in certain tasks, artificial intelligence has far outpaced human intelligence. Several groups from the Freiburg excellence cluster BrainLinks-BrainTools, led by the neuroscientist Dr. Tonio Ball, are showing how ideas from computer science could revolutionize brain research.

In the scientific journal Human Brain Mapping, the researchers show how a self-learning algorithm decodes human brain signals measured by electroencephalography (EEG). The study covered executed movements as well as hand and foot movements that were merely imagined, and the imagined rotation of objects. Although the algorithm was not given any signal characteristics ahead of time, it works as quickly and precisely as traditional systems, which are built around predetermined brain-signal characteristics chosen for specific tasks and are therefore not suitable for every situation.

The demand for such versatile interfaces between human and machine is huge: at the University Hospital Freiburg, for instance, the technology could be used for the early detection of epileptic seizures. It could also improve communication options for severely paralyzed patients or enable automated neurological diagnosis.

“Our software is based on brain-inspired models that have proven most helpful for decoding various natural signals, such as phonetic sounds,” says computer scientist Robin Tibor Schirrmeister. He is using these models to reimplement the methods the team has used for decoding EEG data: so-called artificial neural networks are at the heart of the current project at BrainLinks-BrainTools.

“The great thing about the program is that we needn’t predetermine any characteristics. The information is processed layer by layer, that is, in multiple steps with the help of non-linear functions. The system learns to recognize and differentiate between the signal patterns of various movements as it goes along,” explains Schirrmeister.
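Schirrmeister's description of layer-by-layer processing with non-linear functions can be sketched in a few lines of pure Python. This is only a hypothetical miniature to illustrate the principle; the sizes, the random weights, and the use of plain dense layers are illustrative and do not reflect the study's actual convolutional architecture.

```python
import math
import random

def elu(x, alpha=1.0):
    # Exponential linear unit, one of the non-linearities used in the study:
    # identity for x > 0, smooth saturation toward -alpha otherwise.
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def dense_layer(inputs, weights):
    # One layer: a weighted sum per output unit, followed by the non-linearity.
    return [elu(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def forward(signal, layers):
    # "Layer by layer": each layer's output feeds the next, so deeper
    # layers can represent increasingly abstract features of the signal.
    for weights in layers:
        signal = dense_layer(signal, weights)
    return signal

random.seed(0)
# Three illustrative layers of 4 units, each with 4 random input weights.
layers = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
          for _ in range(3)]
out = forward([0.5, -0.2, 0.1, 0.9], layers)
print(len(out))  # 4 output activations
```

Because no characteristics are predetermined, what each layer responds to is determined entirely by training the weights, not by hand-chosen features.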

The model is inspired by the connections between nerve cells in the human body, in which electrical signals arriving at the synapses are conducted along cellular projections to the cell body and passed on from there along the axon.

“Theories have been in circulation for decades, but it wasn’t until the emergence of today’s computer processing power that the model has become feasible,” comments Schirrmeister.

Customarily, the model’s precision improves with the number of processing layers; the use of many layers is what is known as “deep learning,” and up to 31 layers were used in this study. Until now, it had been difficult to interpret what a network had learned once training was complete: all algorithmic processing takes place in the background and is effectively invisible. That is why the researchers developed visualization software that produces maps from which they could understand the network’s decoding decisions.
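The idea behind such maps can be illustrated with a simple perturbation scheme: disturb one part of the input at a time and record how much the decoding output changes. The sketch below is only schematic, with a made-up stand-in decoder; the paper's actual visualization method is more sophisticated.

```python
def decode(signal):
    # Stand-in "decoder": a fixed weighted sum over four input channels.
    # A trained network would sit here instead; the weights are invented.
    weights = [0.1, 0.9, 0.3, 0.5]
    return sum(w * x for w, x in zip(weights, signal))

def relevance_map(signal):
    # Zero out one channel at a time; the change in the decoder's output
    # is taken as that channel's relevance to the decoding decision.
    baseline = decode(signal)
    scores = []
    for i in range(len(signal)):
        perturbed = list(signal)
        perturbed[i] = 0.0
        scores.append(abs(baseline - decode(perturbed)))
    return scores

print(relevance_map([1.0, 1.0, 1.0, 1.0]))
```

Plotted over the electrode layout, such per-channel scores form exactly the kind of map that lets researchers see which signals drove a decision.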

The researchers can insert new datasets into the system at any time.

“Unlike the old method, we can now work directly with the raw signals that the EEG records from the brain. Our system is as precise as, if not more precise than, the old one,” says principal investigator Tonio Ball, summarizing the study’s research contribution.

The technology’s potential has yet to be exhausted – together with his team, the researcher would like to further pursue its development: “Our vision for the future includes self-learning algorithms that can reliably and quickly recognize the user’s various intentions based on their brain signals. In addition, such algorithms could assist neurological diagnoses.”

About this neuroscience research article

Source: Robin Tibor Schirrmeister – University of Freiburg

Image Source: NeuroscienceNews.com image is credited to Michael Veit.

Original Research: Full open access research for “Deep learning with convolutional neural networks for EEG decoding and visualization” by Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball in Human Brain Mapping. Published online August 7 2017 doi:10.1002/hbm.23730

Cite This NeuroscienceNews.com Article

[cbtabs][cbtab title=”MLA”]University of Freiburg “Smart Computers.” NeuroscienceNews. NeuroscienceNews, 18 August 2017.

<https://neurosciencenews.com/ai-brain-research-7328/>.[/cbtab][cbtab title=”APA”]University of Freiburg (2017, August 18). Smart Computers. NeuroscienceNews. Retrieved August 18, 2017 from https://neurosciencenews.com/ai-brain-research-7328/[/cbtab][cbtab title=”Chicago”]University of Freiburg “Smart Computers.” https://neurosciencenews.com/ai-brain-research-7328/ (accessed August 18, 2017).[/cbtab][/cbtabs]

Abstract

Deep learning with convolutional neural networks for EEG decoding and visualization

Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping.
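One ingredient named in the abstract, the cropped training strategy, amounts to sliding many overlapping windows ("crops") over each EEG trial so that the network sees far more training examples than there are trials. The sketch below shows the windowing idea only; the crop length and stride are illustrative, not the values used in the paper.

```python
def crop_trial(trial, crop_len, stride):
    # Slide overlapping windows over one trial; each crop inherits the
    # trial's label, multiplying the number of training examples.
    return [trial[start:start + crop_len]
            for start in range(0, len(trial) - crop_len + 1, stride)]

trial = list(range(10))  # one mock single-channel trial, 10 samples
crops = crop_trial(trial, crop_len=4, stride=2)
print(crops)  # [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

At test time, predictions from the individual crops of a trial are typically aggregated into a single decision for that trial.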

