Human, am I making a mistake? Yamaguchi Haruyoshi/Getty

Try again, robot, you’re doing it wrong. A brain-computer interface lets people correct robots’ mistakes using the power of their thoughts.

The system uses electroencephalography (EEG) to measure a person’s brain signals as they watch a robot work. When it detects a signal suggesting the person has witnessed a mistake, it alters the robot’s course. The system could be used to let humans control industrial robots simply by observing them.
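The closed loop described above can be sketched in a few lines. Everything here is illustrative: the function names, the crude amplitude threshold, and the two-target setup are assumptions for the sketch, not details from the study.

```python
# Toy sketch of the closed loop: decode an EEG epoch recorded just after
# the robot commits to a target; if the decoder flags an error potential,
# switch the robot to the other target. Threshold and feature are invented.

def detect_error(epoch, threshold=2.0):
    """Flag an epoch as an error potential if its peak amplitude
    (relative to the epoch mean) exceeds a simple threshold."""
    mean = sum(epoch) / len(epoch)
    peak = max(abs(v - mean) for v in epoch)
    return peak > threshold

def choose_target(current_target, targets, epoch):
    """If the observer's EEG flags an error, reach for the other target."""
    if detect_error(epoch):
        return next(t for t in targets if t != current_target)
    return current_target
```

A real system would run a trained classifier over band-filtered, multi-channel EEG rather than a single threshold, but the control flow is the same: observe, decode, redirect.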

“We’re taking baby steps towards having machines learn about us, and having them adjust to what we think,” says Daniela Rus at the Massachusetts Institute of Technology.
Rus and her team used an EEG headset to measure how the electrical signals in five volunteers’ brains responded as they watched a robot reach towards one of two LED lights. In each test, one LED was randomly selected as the “correct” one. If the volunteer saw that the robot was reaching for the wrong one, the headset detected this in their EEG readings and sent a signal to the robot, making it reach for the other. The robot used was Baxter, an industrial robot made by Rethink Robotics in Boston, Massachusetts.

When we witness a mistake, we generate brain signals called “error potentials”, says Ricardo Chavarriaga at the Swiss Federal Institute of Technology in Lausanne. Error potentials have a distinctive shape, which makes them a good choice for controlling a robot, he says.
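That distinctive shape is what makes error potentials detectable: one common approach is to correlate each incoming EEG epoch against a stored template of the waveform and flag epochs that match closely. The template values and the 0.8 cut-off below are invented for illustration; they are not parameters from the study.

```python
# Minimal template-matching sketch for error-potential detection:
# compute the Pearson correlation between an epoch and a stored
# ErrP template, and flag the epoch if the correlation is high.

def correlation(a, b):
    """Pearson correlation between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def matches_template(epoch, template, threshold=0.8):
    """True if the epoch's shape closely matches the ErrP template."""
    return correlation(epoch, template) > threshold
```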

In 70 per cent of cases where the volunteers noticed that the robot was making a mistake, the system correctly recognised an error potential and altered the robot’s actions. The result was similar on a task where volunteers watched Baxter sort reels of wire and paint bottles into different boxes.

An advantage of using error potentials is that people don’t need any training to use the system, says Andres Salazar-Gomez at Boston University, lead author of the study. Other EEG-controlled devices require humans to think about specific words or movements to generate commands.

In this test, the algorithm that detected error potentials had to be calibrated separately for each volunteer, but Salazar-Gomez hopes that future EEG-based systems could automatically detect signals for different volunteers.
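Per-subject calibration of this kind can be as simple as learning a decision threshold from a handful of labelled epochs recorded at the start of a session. The feature (peak amplitude) and the midpoint rule below are assumptions for the sketch; the study's actual classifier and features differ.

```python
# Hypothetical per-volunteer calibration: from labelled calibration
# epochs, place the decision threshold halfway between the average
# peak amplitudes of error and non-error epochs for that person.

def peak_amplitude(epoch):
    """Largest deviation from the epoch's mean value."""
    mean = sum(epoch) / len(epoch)
    return max(abs(v - mean) for v in epoch)

def calibrate(error_epochs, normal_epochs):
    """Subject-specific threshold between the two calibration classes."""
    err = sum(peak_amplitude(e) for e in error_epochs) / len(error_epochs)
    nrm = sum(peak_amplitude(e) for e in normal_epochs) / len(normal_epochs)
    return (err + nrm) / 2
```

A system that generalises across people, as Salazar-Gomez hopes, would have to replace this per-person step with a detector trained on many volunteers at once.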

Rus says that error potentials could also be used in autonomous cars to alert the car when its passenger has spotted something its sensors haven’t noticed. If a passenger hears an ambulance siren, for example, their brain signals could put the car on high alert to watch out for impending dangers.

Adding learning algorithms to error potential systems could also help robots make better decisions over time. Chavarriaga co-authored a study in 2015 that used error potential signals to teach a robot arm the most efficient route to a target. A similar system could be used to create brain-controlled prosthetic arms that learn from their mistakes and adapt to perform actions more naturally.
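The learning idea amounts to treating each detected error potential as a negative reward. A toy version of such an update, not the method of the 2015 study, might nudge action preferences down when an ErrP follows them and up otherwise:

```python
# Toy preference update using error-potential feedback as a reward
# signal: actions that elicit an ErrP become less likely to be
# chosen again. Learning rate and structure are illustrative.

def update_preferences(prefs, action, error_detected, lr=0.2):
    """Return a new preference dict after one feedback event."""
    new = dict(prefs)
    new[action] += -lr if error_detected else lr
    return new
```

Over many trials, preferences like these would steer a robot arm, or a prosthetic, towards the routes and movements its user never objects to.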

Robots are usually programmed to perform specific actions or trained to accomplish tasks using huge datasets, but error potential systems could help us teach robots in a way that comes more naturally to humans, says Rus. “In the future it will be wonderful to have humans and robots working together on the terms of the human.”