The race to develop reliable self-driving cars is heating up as more tech companies and automakers enter the space, but that doesn't mean such cars will be available to regular drivers anytime soon. We're finally getting to the point where self-driving cars can drive on their own, and the next big step is sharing the road with humans. The trouble is that while the robots can navigate roads, they don't think like humans.

"There are many situations that autonomous cars are still pretty far from knowing how to handle," says Shlomo Zilberstein, a computer science professor at the University of Massachusetts Amherst, who's spent the past several decades researching the broader problem of autonomous planning — that is, how computers direct themselves through rapidly changing situations. He's previously worked on self-driving car systems with GM, but his latest project, funded by the National Science Foundation, focuses on safely transferring control between a human driver and computer while the car is in motion.

Zilberstein welcomes recent developments in the space, such as Uber's news that it is following in Google's footsteps by researching its own self-driving cars, while Google itself reportedly pursues a rival Uber-like car-hailing service that may use its smart vehicles. Even Apple may be getting into the game, if the speculation is correct about a mysterious, sensor-festooned van spotted driving around Concord, California. Alongside the numerous self-driving car announcements from the Consumer Electronics Show in January, it's clear that self-driving cars have become a fixation for many multibillion-dollar companies around the globe.

Yet according to Zilberstein, some of the biggest hurdles standing in the way of both fully autonomous cars (no human driver) and semi-autonomous cars (a human driver some of the time) are simple things that we motorists take for granted.

Eye contact and hand gestures

"Eye contact is a very significant thing," Zilberstein says. Consider the example of a four-way stop. In theory, there's a prescribed order in which cars are supposed to go. In reality, drivers often look to each other to see which car should proceed first. As he explained in a recent NSF article: "There is a slight negotiation going on without talking. It's communicating by your action such as eye contact, the wave of a hand, or the slight revving of an engine."

Humans rely on subtle eye contact, hand signaling, inching forward, and honking

But it's not just at four-way stops where this type of nonverbal communication occurs. Changing lanes, merging, entering and exiting parking lots, driving over pedestrian crosswalks — for all these common driving situations, humans typically rely on subtle eye contact, hand signaling, or other driving behavior like inching forward and honking. These are actions that driverless cars cannot interpret reliably. "You cannot underestimate the amount of information that is being transferred between drivers just by their actions," Zilberstein notes. "Understanding these messages is quite complicated for computers today."

Zilberstein can foresee a future of connected, self-driving cars that talk to one another and automatically signal which vehicle should go first at an intersection without human intervention, but as he points out, "we're not going to replace all the cars overnight with this technology."

Which leads us to the second major problem with driverless cars.

A world filled with unpredictable drivers and roads

The rest of the world moves a little slower than Silicon Valley at embracing disruption, especially when it comes to huge and costly physical objects like cars and roadways. To put it another way, we're not going to have highways packed with smart self-driving cars overnight. When they aren't on the highways, autonomous vehicles will have to navigate a variety of unpredictable road conditions alongside older vehicles driven by even more unpredictable human drivers.

Once you have people, you have to cope with the uncertainty and complexity of human behavior

"If we could transition to all autonomous cars tomorrow, it would be simpler than a mixed situation with most cars still having human drivers," Zilberstein says. "But once you have people, you have to cope with the uncertainty and complexity of human behavior."

When it comes to the roads themselves, Zilberstein notes that the opposite problem can happen: obstacles can appear literally overnight in the form of construction, accidents, or debris. At the same time, some roads may be old, poorly lit, or have poorly marked lanes and signs, making them difficult for a self-driving car to interpret. He points out that even Google's impressive 700,000 miles of self-driving car tests in the past five years were conducted in relatively small areas. And one shortcoming Google openly admits is that its cars haven't been able to drive in snow or heavy rain.


Morality questions

Here's a thought experiment, courtesy of Zilberstein. "Suppose there is an autonomous car that is so good, so smart, it can react quickly to any situation," he says, even predicting the probable outcomes of an accident. "What should it do when faced with a choice of either harming someone's pet or hitting a streetlight?" What would the car decide? What should it decide? What if it were faced with some road-going version of the old trolley problem?

Perhaps the biggest issue with self-driving cars lies in their inability to make moral and ethical decisions for which human drivers have so far been almost entirely responsible. Would-be autonomous carmakers might be uncomfortable programming such choices into their systems, but human drivers make such momentous split-second decisions with regularity. "A person is a responsible legal entity," Zilberstein points out. "Whatever they come up with, they have to live with, and they have the potential to be prosecuted."

Machines must make moral decisions for which human drivers have been entirely responsible

The question of who should bear legal responsibility for autonomous car accidents and under what circumstances has been raised before, but Zilberstein believes that an even more fundamental issue exists: automakers are likely going to be reluctant to endorse human drivers being less attentive, even if their cars offer sophisticated computer driving systems. "Do you think we will soon see a car manual that says, 'when you see this green light or hear this beep, you're allowed to take your eyes off the road?'" he asks. "I doubt it." And given the proliferation of distracted driving laws in the past few years, automakers may be even more wary of going down this route.

The road ahead

Ultimately, Zilberstein believes that Google's ambitious original timetable of having a self-driving car available for consumers by 2017 is probably unrealistic. Yet he does think some advanced forms of consumer vehicle autonomy will be available shortly thereafter. "By 2020 there will be cars that can drive autonomously some of the time, and we'll have to accept that," he says.

One company doing it too quickly will set everyone back 10 years

A fully autonomous car is likely much further out, and could be delayed as semi-autonomous technologies begin to permeate. "Remember, you can have 10 extremely responsible car companies developing extremely reliable technologies, but one company might be doing it too quickly and cause problems, and then they will set everyone back 10 years."
