Navigating Tokyo in Nissan’s Autonomous Leaf

Self-driving system evolving fast, but can it make Nissan’s 2020 target?

It takes a moment for Ijima-san to squeeze out into the heavy Tokyo traffic. To be more precise, it takes a moment for the Nissan Leaf to sense a good opening and pull out. Tetsuya Ijima, the senior engineer overseeing the company’s autonomous vehicle program, is, like the rest of us in the little battery-car, just along for the ride.

Two years after announcing plans to put a fully self-driving vehicle into production by the beginning of the next decade, Nissan is holding to that timetable, offering a small group of journalists a chance to see how far its program has progressed.

While Ijima cautioned that there is plenty more to do before a vehicle like the Nissan Leaf really will be ready for consumers, the nearly hour-long drive on some of the world’s busiest streets showed just how fast the project is moving – while also revealing some of the many challenges yet to be resolved.

When Nissan first revealed its plans in August 2013, it showed off a prototype autonomous Leaf that looked more like a science experiment than a real car. It had all manner of domes and slits jury-rigged to squeeze in the various sensors needed to let the vehicle sense what was happening around it.

The latest prototype looks much more like the production Leaf, those sensors more carefully tucked out of sight. All told, Ijima noted, there are 12 cameras, five radar sensors and four LIDAR – or 3D laser – sensors. Two of the three forward-looking cameras, for example, are tucked into a roof rack. The third is hidden within the rearview mirror, much like the cameras used for less sophisticated forward collision warning systems.

(Tesla’s autopilot opens door to semi-autonomous cars. For more, Click Here.)

During our drive, an assistant engineer sat in the back seat, fiddling with a laptop computer. But there was a far more complex computer system completely filling the battery-car’s trunk. By the time Nissan puts its first autonomous vehicle into production, it will have to be downsized enough to tuck entirely out of sight, as well. Even with the steady increase in computer power, that alone is a major challenge.

But Ijima is confident, noting that sensor technology is improving rapidly, dropping in cost, and becoming much more effective in almost every situation, including rain and even snow. “Fog is no good,” he concedes, but on the whole, the cameras are “very good compared to the human eye,” especially in low light.

The latest version of the autonomous Leaf is programmed to be a model driver, never speeding – unless someone sitting behind the wheel programs it to exceed the speed limit, something that proves necessary, at times, just to keep up with traffic.

Ijima sits in the driver’s seat, his hands cupped inches from the steering wheel, an uncomfortable pose Nissan currently requires while testing on public roads. But he could just as easily be texting or reading. Only once during the long drive does he have to take control back from the vehicle itself – something done either by tapping the brakes or hitting an emergency kill switch. The system does allow the human co-pilot to temporarily take the steering wheel, returning to automated mode when the wheel is released.

As we approach a bridge, the system automatically merges from the far right lane to the center. It carefully maintains a safe distance from a driver who nervously keeps tapping the brakes. And it readily follows a circuitous route back to our original starting point.

But it’s still not comfortable – if you can use that term to describe a machine – passing another vehicle. And as we pass a police officer who steps into the roadway, hand raised to halt traffic, Ijima notes the system is not yet able to recognize such gestures. It does, however, register a human “target” and comes to a stop.

“We are working on that next,” he explains, adding that “We are still in the middle of development.”

Nissan is by no means alone. In fact, Tesla has just launched its new Autopilot system, which allows a motorist to drive hands-free on limited-access roadways. Cadillac and Mercedes-Benz will introduce similar systems next year. But those semi-autonomous technologies still have serious limitations – among other things, they have trouble with poorly marked roads and can’t handle city streets.

(Honda hopes to have semi-autonomous system on sale by 2020. Click Here for the story.)

Google is already testing its own system on all different types of roadways, however, and is building a fleet of prototype vehicles that may soon include a few cars with no steering wheel or pedals at all, just a microphone to allow a passenger to program a destination.

How soon such technology will be foolproof enough to be ready for sale – and how soon regulators and insurers will give their blessings – is far from certain.

Even when the technology itself is ready, there will be challenges to deal with. A recent study published by MIT raised ethical issues that could be daunting for autonomous-vehicle engineers to program. What happens when the vehicle faces a no-win choice – a child dashes across the road, forcing the vehicle to choose between hitting the youngster or steering into oncoming traffic, for example?

That is not going to be easy to program, Ijima acknowledges. Whether that will be a stumbling block is also uncertain. But, after spending nearly an hour inside the autonomous Leaf and comparing it to the primitive system Nissan showed two years ago, there’s no question the project has come a long way in a short time.

(To connect or not to connect? Federal officials debate the safety and privacy of connected car technology. Click Here for more.)