To better avoid accidents, autonomous cars can do things that human drivers can't. Their eyes—or rather sensors and software—never leave the road to change a radio station or glance over at a passenger. They can see through fog or other inclement weather and sense a stalled car or other hazard ahead and take appropriate action.

But as automated technology progresses and is road-tested, developers are finding that human drivers have advantages machines don't, and this lack of humanness may be a liability when it comes to the adoption of self-driving cars. Robot cars don't drive as defensively — or aggressively — as humans, for example. And they can't understand the subtle cues and hints that most people intuitively comprehend while on the highway.

So before self-driving technology becomes mainstream, the challenge will be to make robot cars more human, something Google and others are already working on. According to a recent report from the San Jose Mercury News, Google has begun programming its fleet of self-driving cars to inch forward at four-way stops, lest the vehicles defer to more aggressive human drivers and sit too long at a stop sign.

Google's autonomous vehicles are also learning to drive closer to cars ahead to close the gap and keep other drivers from cutting in front of them. "We found that we actually need to be — not aggressive — but assertive," Nathaniel Fairfield, Google's technical lead for the team that writes software to fix problems uncovered during driving tests, told the San Jose Mercury News. "If you're always yielding and conservative, basically everybody will just stomp on you all day."

Of course, not everyone drives aggressively, or in the same way. So if robot cars are going to behave more like humans, the technology will have to be fine-tuned to different types of drivers, according to Peter Skillman, head of design at HERE, the division of Nokia that provides mapping for self-driving cars. He said that this adaptability will help convince people to trust machines to drive.

At a recent conference, Skillman said that even when they aren't controlling the car, people should be able to decide how they want to be driven and whether they want to take certain routes over others. He called this integrating of personal preferences into autonomous vehicles "humanized driving," and added that a big part of the experience will be making passengers feel like they still have some sense of control.

"Knowing where you are and what's around you is key to trust," Skillman said. "You need a visual gestalt of where you're going." As an example, Skillman noted that swerving to miss something in a car's path or hard braking can be jarring even when humans are in control of the car, and could be especially startling when robots are in charge.

Skillman said that giving passengers in self-driving cars plenty of notice in such situations can help soften the surprise. "It's important that you see the intent of the car to change lanes, so if your car takes evasive action, you know why it happens," he added.

While some of these features won't be difficult to incorporate into self-driving cars, a harder human aspect of driving to replicate may be the way people communicate with glances and gestures while behind the wheel, such as with a nod of the head or a swipe of the hand. Google is working on this, too.

"Driving can be a social thing, where you're using your vehicle and a little bit of body language in your car to communicate with other drivers what your intentions are," said Brian Torcellini, who oversees the group that tests the company's cars on public roads. "So we're now trying to teach the car different ways to sort of fit in with society and the way that other people drive."

We wonder if that means driving as aggressively as, say, a cabbie in Manhattan or some other big city, and whether it includes interpreting the meaning of a middle finger.
