On a recent day in San Francisco, Sam Anthony watched a cyclist pull up at a traffic light, stopping slightly past the white line at the intersection. From the other direction, a self-driving car had a green light, but it didn't move. It was obvious to human drivers that the person on the bike didn't plan to keep going (his foot was down, and he was resting on his handlebars), but the car couldn't tell.

This inability of machines to understand and anticipate human action is the problem that Perceptive Automata, the startup Anthony cofounded, is attempting to solve. Right now, using a combination of cameras, radar, and the infrared laser pulses of lidar, autonomous cars can detect people and other vehicles on the road. But they struggle to predict behavior.

“Today’s systems are really good at knowing the geometry or the physics of the world around them, but they’re not good at the psychology of the world around them,” says Sid Misra, CEO of Perceptive Automata.

Human drivers make continual judgments about other humans on the road. “About 250 milliseconds after seeing someone, you’ve made all of these inferences about their state of mind, their intention, their awareness,” says Anthony. “Those inferences are something that humans are incredibly good at, and self-driving cars to this point have had zero ability to do.”

The startup, which launched out of research at a Harvard lab, built a model that tries to mimic human intuition. The founders took footage at street corners and asked groups of people what they saw in hundreds of thousands of clips. The judgments weren't based on single cues but on a constellation of features suggesting whether someone is paying attention, and whether they are about to move or stay put. If someone has set a bag down and is reaching to pick it up, for example, or is tightly clutching a coffee cup, or has a little tension in their shoulders, they're probably getting ready to start crossing the street.

The AI model, trained on the data from those human judgments, aims to think the same way a human would. "We've developed extensive real-world test sets capturing situations that are both ambiguous and unambiguous, and the goal for our model development is to make judgments that are indistinguishable from human judgments on that entire set," says Anthony. The model isn't yet perfect, but it can already provide information that other systems can't. In a situation like the San Francisco cyclist stopped at an intersection, the current tech converges with human judgments 95% of the time.
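To make the idea concrete, here is a minimal sketch of what training on aggregated human judgments could look like. This is not Perceptive Automata's system: the feature names, the clips, and the soft labels below are all invented for illustration. The key idea it demonstrates is that each clip's label is not a hard yes/no but the *fraction of annotators* who judged "this person intends to go," and a simple model is fit to match those fractions.

```python
import numpy as np

# Each row describes one street clip with illustrative binary cues
# (feature names are assumptions for this sketch, not the startup's):
# [foot_down, reaching_for_bag, facing_roadway, already_moving]
X = np.array([
    [1, 0, 0, 0],  # cyclist stopped, foot down on the pavement
    [0, 1, 1, 0],  # pedestrian picking up a bag, facing the road
    [0, 0, 1, 1],  # pedestrian already stepping toward the curb
    [1, 0, 1, 0],  # foot down, but watching the roadway
    [0, 0, 0, 0],  # standing still, facing away
], dtype=float)

# Soft labels: fraction of human annotators who judged
# "this person intends to go" for each clip.
y = np.array([0.05, 0.80, 0.95, 0.40, 0.10])

# Fit a tiny logistic-regression model to the crowd's soft labels
# by gradient descent on the cross-entropy loss.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.5
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(intends to go)
    grad = p - y                            # dLoss/dLogit for cross-entropy
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(np.round(p, 2))  # predictions track the crowd's judgments
```

After training, the model assigns a low probability of crossing to the foot-down cyclist and a high one to the pedestrian already in motion, mirroring the annotators' split judgments on the ambiguous clips rather than forcing every case to a binary answer.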

In theory, self-driving cars promise to eliminate the human errors that lead to most traffic deaths (in the U.S. alone, there were more than 40,000 traffic deaths in 2017; globally, there are more than a million road deaths each year). But if the cars can't drive predictably, they can't be widely used. Right now, if an autonomous car doesn't know whether someone is stepping into a taxi or starting to cross the street, the car may come to a sudden halt. It's not uncommon for self-driving cars to be rear-ended.