Tetsuya Iijima sits down behind the wheel of the Nissan Leaf and holds his hands just below the steering wheel, like he’s about to juggle invisible apples. It’s the required pose for engineers testing Nissan’s self-driving prototype on the streets of Tokyo, where Nissan and other automakers expect their technology to be driving customers within four years.

Nissan was the first major automaker to promise production-ready self-driving cars by 2020, though ones that will still require a driver’s attention. Ahead of that goal, Nissan’s engineering teams in Japan and Silicon Valley have discovered just how much schooling their machines still need in the syntax of driving.

This particular Leaf looks similar to the electric cars Nissan has been selling for years, but carries millions of dollars worth of extra hardware. Nissan outfitted the car with 12 cameras and five radar sensors to provide a 360-degree field of mechanical vision. A laser rangefinder setup, or lidar, tracks close-moving objects like pedestrians (the unit in the Leaf is itself a prototype Nissan built with a supplier, designed to be more compact and cheaper to build, with an eye toward mass production).

And the hatch of the Leaf has been filled with a mini server farm that handles the data flow and makes the tactical decisions needed to manage the car in traffic. Iijima estimates that at its peak the car will need roughly 3 teraflops of computing power to process its sensor feeds, just to handle the hustle of city driving, which Nissan considers the toughest challenge for a semi-autonomous car.
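Nissan has not published its software, but the loop the article implies (fuse detections from the cameras, radar, and lidar, then choose a tactical maneuver) can be sketched in miniature. Everything below, from the names to the two-second time-to-collision gate, is invented for illustration:

```python
# Hypothetical sense-plan sketch; not Nissan's actual code.
from dataclasses import dataclass

@dataclass
class Detection:
    source: str               # "camera", "radar", or "lidar"
    distance_m: float         # range to the object
    closing_speed_mps: float  # positive = object approaching us

def choose_maneuver(detections: list[Detection]) -> str:
    """Pick a coarse action from fused sensor detections."""
    if not detections:
        return "cruise"
    nearest = min(detections, key=lambda d: d.distance_m)
    # Time-to-collision gate: brake if an approaching object is close.
    if nearest.closing_speed_mps > 0:
        ttc = nearest.distance_m / nearest.closing_speed_mps
        if ttc < 2.0:
            return "brake"
    if nearest.distance_m < 10.0:
        return "slow"
    return "cruise"

print(choose_maneuver([Detection("lidar", 8.0, 5.0)]))    # → brake
print(choose_maneuver([Detection("radar", 50.0, -1.0)]))  # → cruise
```

A real planner fuses thousands of detections per frame and reasons over trajectories rather than single ranges, which is where the teraflops go.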

In our 30-minute ride through central Tokyo, the Leaf makes several key choices correctly. When a trash truck aggressively merges from the left at speed, the Leaf maintains just enough space with the vehicle in front to allow the truck in. When traffic slows, the prototype turns on its blinker and changes lanes, then moves itself back. And thanks to a combination of GPS navigation and its cameras, the Leaf can read not just speed limits but stop lights, correctly halting itself throughout.


But it’s not a flawless performance. After one stop light, halfway through a left turn, the Leaf straightens out, suddenly aiming itself for a set of concrete pylons in a pedestrian walkway. Iijima grips the wheel, taps the brakes and turns the car more sharply, avoiding any contact. The error, he says, likely came from the car’s confusion over where the curb ends.

“We still need to train the software, develop the software to understand all the different situations,” Iijima says.

It’s the non-verbal communication among drivers and pedestrians that’s proving the toughest challenge for engineers. If you come to an odd three-way stop with a blinking red light, you can likely figure out quickly who’s supposed to do what by reading the other drivers, a trick self-driving cars haven’t mastered yet. The sensors in the Leaf that track moving objects rely on movement; a pedestrian standing still at a crossing may be overlooked by the software.
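Why a motionless pedestrian can vanish from a motion-gated tracker is easy to show with a toy frame-differencing example. This is an illustrative caricature, not Nissan’s perception software: if detection is keyed to change between frames, an object that stops moving produces no signal.

```python
# Toy frame-differencing "motion detector" over tiny brightness grids.
def moving_pixels(prev_frame, curr_frame, threshold=10):
    """Return coordinates whose brightness changed between two frames."""
    return [
        (r, c)
        for r, row in enumerate(curr_frame)
        for c, val in enumerate(row)
        if abs(val - prev_frame[r][c]) > threshold
    ]

# A "pedestrian" is a bright patch (200) on a dark road (0).
empty_road = [[0, 0, 0], [0, 0, 0]]
walking    = [[0, 200, 0], [0, 0, 0]]  # pedestrian steps into view
standing   = [[0, 200, 0], [0, 0, 0]]  # same position next frame

print(moving_pixels(empty_road, walking))  # → [(0, 1)]  seen while moving
print(moving_pixels(walking, standing))    # → []        invisible once still
```

Production systems layer appearance-based detectors on top of motion cues for exactly this reason, but the article suggests that gap had not yet been closed in the prototype.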

Maarten Sierhuis, director of Nissan’s Silicon Valley research center, cites what’s known as “piggybacking” in traffic studies—when a car following another one closely at an intersection jumps its turn and moves out of order.

“If we don’t do it, we will decrease traffic flow and things will get very strange and frustrating,” he said.

Nissan’s engineers also worry about how the car can communicate back to the world. Melissa Cefkin, Nissan’s design anthropologist for self-driving vehicles, suggests there will need to be some standardized signal for self-driving vehicles to alert onlookers that the car not only is driving itself, but sees their presence. The IDS concept Nissan showed off at the Tokyo Motor Show had a ring of LED light around the body that changed colors depending on its driving mode, and a message board on the dash that would flash words to pedestrians.

Beyond the details of daily operation lie even tougher questions. Nissan’s IDS concept showed off a nifty fold-away steering wheel that tucks itself into the dash in automated driving mode, yet many self-driving engineers worry that the handoff between car and driver will need to be quick and seamless, something a stowed wheel complicates. Just this week, Google’s self-driving team said the handoff issue only reinforced its choice to focus on fully autonomous cars.

“Everyone thinks getting a car to drive itself is hard. It is,” Google said in an update. “But we suspect it’s probably just as hard to get people to pay attention when they’re bored or tired and the technology is saying ‘don’t worry, I’ve got this… for now.’”

Such concerns are mirrored by Nissan engineers; after all, the whole impetus for self-driving vehicles stems from the deaths and injuries caused by human error, estimated to be a factor in up to 90 percent of all accidents. Kazuhiro Doi, vice president of Nissan’s Research Center outside Yokohama, notes that technology meant to drive deaths in Nissan vehicles to zero won’t work unless humans trust it inherently.

“We always have to be afraid that we could be killed. The robot car is really very scary,” Doi told reporters. “If we want to make a robot car, we need to know more about the human.”