A future iteration of artificial intelligence would measure a soldier’s cognitive and physical state and trigger actions to support, or even save, the individual in combat. These actions might steer the human onto a different course or initiate activities that complete the soldier’s mission or protect the individual.

This research effort goes far beyond merely updating the mood ring. The Army wants artificial intelligence (AI) to become a teammate to the soldier, and this Army Research Laboratory (ARL) effort aims for AI to be more than a merely supportive one. This AI would understand what its human teammate needs and meet those needs without pestering the soldier.

“What we really want to try to do is have the AI be able to adapt to the real-time state changes of the human,” explains Jean Vettel, Ph.D., senior science lead at the Combat Capabilities Development Center (CCDC) ARL. This means changes in the human’s intent and what task the person is about to perform. The AI would glean this information from a change in the person’s response to the environment, she says. This might cue the AI that the soldier is disengaging from the task both were working on, perhaps because the person is confused and needs clarification. This “confused state” originates from the person’s physiology, and signals from the brain or the body can identify that changed state.

This does not imply that the soldier will have intrusive sensors reading body functions, however. This AI would incorporate noninvasive sensors that monitor brain and heart rate data, along with other physiological signals, to detect the soldier’s state. In turn, the AI would adapt to the human’s needs at that particular moment.

The first step in this research march has focused on physiological signals in the human body that can be measured to predict a person’s state, Vettel relates. The second part is to determine courses of action to take when those states are detected. The third part is to ascertain whether to build a closed-loop real-time system between the AI and the human.
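The three parts Vettel describes form a classic sense-decide-act cycle. A minimal sketch of that loop, with invented signal names, thresholds, and actions that are purely illustrative and not ARL's actual design, might look like this:

```python
# Illustrative sketch of the three-part idea: estimate the soldier's state
# from physiological signals, choose a course of action, and close the loop.
# All signal names, thresholds, and actions here are assumptions.

def estimate_state(signals: dict) -> str:
    """Map raw physiological readings to a coarse state label."""
    if signals.get("heart_rate", 0) > 120 or signals.get("eeg_theta", 0) > 0.8:
        return "overloaded"
    return "nominal"

def choose_action(state: str) -> str:
    """Pick a supporting action for the detected state."""
    return {"overloaded": "offload_secondary_task",
            "nominal": "monitor"}[state]

def closed_loop(stream):
    """Run the sense -> decide -> act cycle over a stream of readings."""
    for signals in stream:
        yield choose_action(estimate_state(signals))

actions = list(closed_loop([{"heart_rate": 70, "eeg_theta": 0.2},
                            {"heart_rate": 130, "eeg_theta": 0.9}]))
# actions == ["monitor", "offload_secondary_task"]
```

The open research question is precisely whether such a loop should run in real time at all, which is the third part of the effort.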

One question that comes up with human-AI interactions is, who is the final arbiter? “If you have a human and AI teaming on a task, and each of them has strengths and weaknesses, then there always has to be a control architecture that says who is right,” Vettel says. “If you have an AI and human that disagree on the next step, who arbitrates? Who has the final decision?”

Right now, the human is the ultimate arbiter. But if, for example, a soldier crossed several time zones in a short period and suffered from a disrupted circadian rhythm, the AI would know the soldier is not in an optimal state. Throw in a coinciding threat scenario, and the AI might be better suited to serve as the final arbiter. The ARL is working toward this type of “robust but intelligent teaming” where the human is not always right, Vettel allows.
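One way to picture such a control architecture is a simple arbitration rule: the human normally has the final say, but the AI's recommendation wins when the human's estimated state is degraded and a threat is active. This is a toy sketch under those stated assumptions, not ARL's actual architecture:

```python
# Hypothetical arbitration rule for a human-AI team. The human is the
# default arbiter; the AI takes over only when the human's estimated
# state is degraded (e.g., circadian disruption) AND a threat is active.

def arbitrate(human_choice: str, ai_choice: str,
              human_state_ok: bool, threat_active: bool) -> str:
    if human_choice == ai_choice:
        return human_choice       # no conflict, nothing to arbitrate
    if not human_state_ok and threat_active:
        return ai_choice          # AI arbitrates under degraded state + threat
    return human_choice           # otherwise the human remains the arbiter
```

For example, `arbitrate("hold", "advance", human_state_ok=False, threat_active=True)` would return `"advance"`, while a well-rested human in the same disagreement keeps the final say.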

A near-term goal of this effort would be for the AI to sense the human’s state and give the person the information to make a key decision. But the longer-term goal is for the AI to assume responsibilities when it combines human state information with immediate situational awareness data. This advanced AI would perform components of the mission on its own, Vettel explains.

She offers as an example a stressed-out soldier whose AI determines that a terrorist is in a nearby building. A human teammate might implement a lockdown on all the building’s doors. The advanced AI would take the place of that human teammate and perform the same lockdown autonomously.

The ARL’s research effort is exploring all types of physiological indicators. These range from polygraph-like indicators to true brainwave readings. “We actually expect that we will likely use multiple physiological signals to increase our confidence of the state estimate,” Vettel says. “Any given signal from our bodies is ambiguous,” she notes, explaining that any one of many different variables may cause a change in a single indicator, and it’s hard to know how to interpret that signal. However, if soldiers are equipped with sensors for several different physiological signals, then incorporating knowledge across the signals will provide the necessary disambiguation, she says.

“Our research assumes that multiple signals will be better than a single signal, and we’re working on how to synchronize signals, what’s the resolution we need—maybe we don’t need to record data every millisecond, maybe we need it every couple of minutes,” she offers. “Maybe for some things where we want AI to team with soldiers, we only need really coarse measurements every hour.”

These are questions that remain to be answered, she emphasizes. While the laboratory seeks answers, its vision is built around the premise that monitoring multiple physiological signals is the right approach.
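The disambiguation idea Vettel describes, that several ambiguous signals combine into a more confident state estimate, can be illustrated with a naive-Bayes-style fusion of per-signal probabilities. The signals and numbers here are invented for illustration; a fielded system would learn per-person weights rather than assume independence:

```python
# Toy illustration of multi-signal disambiguation: each sensor alone gives
# an ambiguous probability that the soldier is in a "confused" state, but
# multiplying per-signal odds (assuming independence) yields a combined
# estimate more confident than any single signal. Numbers are invented.

def fuse(probs):
    """Naive-Bayes-style fusion: multiply per-signal odds, convert back."""
    odds = 1.0
    for p in probs:
        odds *= p / (1.0 - p)
    return odds / (1.0 + odds)

alone = fuse([0.55])                    # EEG alone: still ambiguous, 0.55
combined = fuse([0.55, 0.70, 0.75])    # EEG + heart rate + eye tracking
# combined is about 0.9, higher than any single signal's estimate
```

The same machinery also shows why a single noisy channel is hard to act on: at 0.55, the state estimate is barely better than a coin flip.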

Vettel cites recent research into whether brain signals can account for past performances of an individual person—leading to predicting what that person might do. “Instead of constantly viewing the person as a nebulous creature that you could only learn about if you asked them questions, let’s instead focus on science that capitalizes on advancement and how we can image our physiology, our brain data, to be able to analyze and ask, ‘Can I use the signals in my brain to predict something about my performance?’” she suggests. “Because, if I can detect that relationship, then if I’m currently performing a task on whatever mission I’m doing, and brain signals can indicate that I’m not going to perform the task very well, that is a way we can start capitalizing on detecting the state I need, or where I could benefit from a teammate that could help me perform that task better.” That teammate could be another human or AI, she notes.

Recent research has shifted to examining how to collect enough data from a single person to remove the need to average across numerous people for sufficient statistical power, she says. The individual differences in each person’s brain fundamentally alter how that brain’s dynamics unfold. As a result, research is focusing on predicting state at the individual level so that technology can adapt to a specific person.

That research also will help predict how an individual will act or react in a situation. Modeling each individual will capture how that person’s brain functions, as opposed to a one-size-fits-all definition that risks misevaluation.

The next of these research goals aims to “push into more complex tasks,” Vettel says. Existing efforts have tended to focus on linking particular brain mechanisms to specific actions. Laboratory studies of people sitting alone responding to pictures they are shown have revealed a lot about brain functions, she notes, but ARL research is exploring more nebulous areas. She relates that the laboratory recently collected data from two people driving in an instrumented car along I-95, and it recorded brain data from the driver as the passenger shared previously unknown information. The lab wanted to know if the brain signals that emerged when the two people were talking about the information would be able to predict what the driver would remember from the conversation—how well the information was communicated.

“We’re pushing into, ‘Can we use brain data to predict performance in these naturalistic settings,’ namely whenever there is risk involved,” Vettel continues. Driving safely along a busy multilane interstate highway qualifies as a risky endeavor, she notes. There is risk in the primary task, but the scientists also are looking at whether the physiological signals can predict a secondary task, such as communication.

“Whenever we think about teaming humans and AI, there is not a lot of work on what you would even measure to quantify how well a human has teamed with AI,” Vettel points out. AI studies are rife with research on autonomy and how unmanned aerial vehicles can swarm, or on how autonomous cars would interact safely on the road. However, these types of teaming do not cross the threshold of human-AI teaming.

“There really isn’t any existing framework on how you would literally assess how well AI and humans are teaming together,” she states. So, much of the ARL research in this area strives to understand how to start working in that space. Success in this area is essential to determining the role of AI as a teammate for the soldier, particularly the degree of AI participation.

“One question is, with what resolution can we start detecting human states and intent that matters for task performance in complex settings?” she offers. The second question, which follows from success on the first, is, “If we’re successful on getting to know about soldier intent, how will we actually use that to team effectively with AI?”