What makes Olli, the car that's now rolling through the streets of National Harbor, Maryland, important isn't that it drives itself, that it's electric, or even that Local Motors made it from 3-D printed parts. What sets Olli apart is its gift of gab.

Upstart automaker Local Motors and IBM teamed up to create the autonomous van-like shuttle, which launches today, carries twelve passengers, and uses the tech stalwart's Watson supercomputer to chat with riders. That may seem like a step down from fighting cybercrime, predicting the weather, and whupping human butts at Jeopardy!, but it's a clever use of Watson's cognitive speech capabilities to solve one of the more devilish problems blocking our path to a world of autonomous vehicles: how to make people trust them.

As they ride around Local Motors' facility a few miles down the Potomac from Washington, DC, passengers can hit a button and ask Olli questions. Things like: Why are we stopping? Is this traffic going to make me late? Can you take me uptown? And Olli answers: Because I'm not about to hit a pedestrian. Probably. Sure.

"It's cute and efficient, but also very personal," says Bret Greenstein, head of IBM's Watson Internet of Things team. Cute's good, efficient's better, but personal is what really matters. Making Olli talk is about more than adding a bullet point to Watson's eclectic CV, Greenstein says. It's about transparency—the kind of transparency that's easy to achieve with human drivers. When Greenstein drives people for the first time, for instance, "they have to decide I'm a decent driver," he says. They do that by watching his driving, of course, but also by reading his behavior: Is he calm or stressed? Does he seem like he knows where he's going?

You can't do that with a computer-driven vehicle, but letting Olli talk helps. "Passengers need to know that the vehicle is indeed functioning correctly, operating safely," says Raj Rajkumar, a computer engineer at Carnegie Mellon University who works on autonomous vehicles. And if people don't trust the robot, they're less likely to use it—and to reap its massive safety and efficiency benefits.

Olli, which is sticking to National Harbor for the summer but could hit Miami and Las Vegas by the end of the year, is just the latest autonomous vehicle designed with its passengers' perceptions in mind. Google's dorky prototype uses a wide LCD screen to tell passengers it's aware of things like nearby pedestrians. Volvo and Audi have shown autonomous concepts that let the humans know when they're about to change lanes.

Delphi's latest autonomous prototype uses the center screen to show a camera feed of the road ahead, indicating its path in bright blue and highlighting traffic lights, stop signs, and the like, so the humans inside have an idea of what it's "thinking." Mercedes' F 015 autonomous concept even speaks to the outside world, projecting crosswalks onto the ground so pedestrians know they can safely cross in front of it.

The chance to talk to Olli, not just have it bark warnings or updates ("Prepare to take over!"; "Autonomous mode engaged!"), is a novel and particularly human tactic for establishing that vital trust. And while Olli is fully autonomous, Rajkumar says its conversational skills could also be useful in self-driving cars that sometimes need the human to take control. Instead of flashing warnings or beeping, those cars could say, "Hey, the weather's getting bad and I'm not confident in how my sensors are reading the lane lines—could you take over driving in the next few minutes?" And the car could interpret your answer—or note that you're fast asleep and safely pull over. Trust is a two-way deal.