Researchers at the University at Buffalo are using gamers' brain waves to advance the development of robot swarms that could be used by the military.

Using groups of simple robots to complete complex tasks, rather than one extremely advanced robot, is referred to as "swarm intelligence" in AI theory.

DARPA hopes this new training method will improve groups of autonomous ground and air robots used on the battlefield.

In what sounds like a Black Mirror-esque approach to military strategy, the U.S. Defense Advanced Research Projects Agency (DARPA) is funding a study that will use gamers' brain waves to teach hives of defense robots how to swarm together to complete missions.

DARPA has given a $316,000 federal grant to the University at Buffalo Artificial Intelligence Institute to study gamers' brain waves and eye movements. The aim is to improve organization and strategy among autonomous air and ground robots.

Why would the U.S. want to invest in robot swarms? Because bevies of bots are already being pursued elsewhere in the world, notably in Russia. Flock-93, for example, envisions 100 kamikaze-style drones, each armed with an explosive charge, swarming targets such as vehicle convoys. In theory, such hordes of robots are drastically harder to defend against, so the U.S. certainly doesn't want to lag behind.

"The idea is to eventually scale up to 250 aerial and ground robots, working in highly complex situations," said Souma Chowdhury, assistant professor of mechanical and aerospace engineering at Buffalo, in a press statement. "For example, there may be a sudden loss of visibility due to smoke during an emergency. The robots need to be able to effectively communicate and adapt to challenges like that."

Inside UB’s SMART Motion Capture Lab, students create a simulated environment to demonstrate how autonomous air and ground robots can work together. Douglas Levere/University at Buffalo

To put it simply, groups of more primitive robots can complete certain tasks better than one really intelligent robot could on its own. This theory in artificial intelligence is referred to as "swarm intelligence."

It's a trait found throughout nature. Consider the modest ant: By itself, an ant can carry many times its own body weight, but that still doesn't amount to much. A colony of ants working together, meanwhile, can pull off some pretty unbelievable feats, like building superhighways to food sources, waging war, and enslaving other ants.
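The core idea behind swarm intelligence, simple agents following local rules with no central controller, can be sketched in a few lines. This toy example (not from the UB study) has each agent repeatedly step toward the average position of its nearby neighbors, which makes the whole group converge on a rendezvous point:

```python
import math

def step(positions, radius=5.0, rate=0.1):
    """Move each agent a fraction of the way toward its local neighborhood's centroid."""
    new_positions = []
    for x, y in positions:
        # Each agent only looks at neighbors within `radius` (itself included);
        # no agent knows the global state of the swarm.
        neighbors = [(nx, ny) for nx, ny in positions
                     if math.hypot(nx - x, ny - y) <= radius]
        cx = sum(nx for nx, _ in neighbors) / len(neighbors)
        cy = sum(ny for _, ny in neighbors) / len(neighbors)
        new_positions.append((x + rate * (cx - x), y + rate * (cy - y)))
    return new_positions

swarm = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
for _ in range(100):
    swarm = step(swarm)
# The three agents cluster near the starting centroid, (2, 1).
```

No single agent is "smart" here; the gathering behavior emerges from the repeated local rule, which is the property that makes swarms resilient to losing individual members.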

This biomimicry is a hot topic in computer science, Chowdhury told Digital Trends.

"It’s becoming known that there are a lot of different applications which could be done by not using a single $1 million robot, but rather a large swarm of simpler, cheaper robots," he said. "These could be ground-based, air-based, or a combination of those two approaches."


In Chowdhury's study, experts will play real-time strategy games similar to StarCraft, Stellaris, and Company of Heroes, which require players to manage resources, build units, and defeat opponents. The researchers are also developing their own strategy-based game for the experiments.

As gamers play, the decisions they make will be recorded, researchers will track their eye movements with high-speed cameras, and electroencephalogram (EEG) headsets will monitor their brain activity. (EEG caps are the electrode-studded headsets you might wear during a sleep study.)

Then, based on the data they've gathered, the scientists will build new algorithms that will guide autonomous drones and ground robots used in military applications.
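One simple way gameplay logs like these could seed a robot-control policy is behavioral cloning: record the game state a human saw and the action they chose, then have an agent copy the action taken in the most similar recorded state. The study's actual algorithms are not public; this 1-nearest-neighbor sketch, with made-up state features and commands, is only meant to illustrate the general idea of distilling human play into a policy:

```python
def nearest_neighbor_policy(demonstrations):
    """demonstrations: list of (state_vector, action) pairs logged from human play."""
    def policy(state):
        # Find the recorded state closest (squared Euclidean distance) to the
        # current one, and replay the action the human took there.
        def dist(s):
            return sum((a - b) ** 2 for a, b in zip(s, state))
        _, best_action = min(demonstrations, key=lambda d: dist(d[0]))
        return best_action
    return policy

# Toy demonstrations: state = (enemy_distance, visibility), action = command.
demos = [
    ((1.0, 0.9), "attack"),   # enemy close, clear view -> attack
    ((9.0, 0.9), "scout"),    # enemy far, clear view -> scout
    ((1.0, 0.1), "regroup"),  # enemy close, smoke/low visibility -> regroup
]
policy = nearest_neighbor_policy(demos)
print(policy((1.2, 0.2)))  # closest demo is the low-visibility one -> "regroup"
```

A real system would learn a generalizing model rather than replaying the nearest example, and, as Chowdhury notes below, the goal is to capture why humans act, not just what they do.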

"We don’t want the AI system just to mimic human behavior; we want it to form a deeper understanding of what motivates human actions," Chowdhury said. "That’s what will lead to more advanced AI."
