Elaine Herzberg was killed last month while crossing the street after dark in Tempe, Arizona. She was hit by a "self-driving vehicle" owned and operated by ride-hailing behemoth Uber. Initially, Tempe Police Chief Sylvia Moir told the San Francisco Chronicle that Herzberg darted out into traffic, essentially blaming the woman for the incident.



But video evidence—provided by Uber—told a much different story: Herzberg was more than three-quarters of the way across the four-lane road when Uber's Volvo XC90 hit her. There should have been plenty of time for the car’s sensors to "see" her. Furthermore, autonomous or semi-autonomous cars testing on public roads must have a human in the driver's seat whose attention is supposed to be focused on the road. But video also showed that Uber’s backup driver, Rafaela Vasquez, was looking down at her lap.

"People believe the hype."

This accident raised a number of inevitable questions about how society responds when the driver at fault in a car crash is a machine, not a human. But another problem simmers beneath the surface of this debate, and it concerns the nature of the debate itself. There is something very dangerous about the sloppy way we talk about autonomous cars. And people's lives are at stake.

"The expectation is that autonomous vehicles are designed to avoid such conflicts,” says Bryan Reimer, a scientist in the MIT AgeLab and Associate Director of The New England University Transportation Center at MIT. “But this Uber was not an autonomous vehicle. Which is why this type of accident was inevitable.”

This Is Not a Self-Driving Car

Take a moment to consider where car autonomy stands right now. Test rigs like Uber's are driving themselves around public roads in numerous American states, racking up the miles and experience needed to prove they can handle real traffic, while humans sit in the driver's seat to take over any time the cars aren't ready.

Meanwhile, more new cars owned by regular people come with semi-autonomous features. Nissan has ProPilot, Tesla has Autopilot, Cadillac has Super Cruise. These helpers can maintain speed, preserve a safe distance between you and the car in front of you, and keep your car within its lane, which means they can handle most of the tasks of highway driving, at least. But they still throw up their hands at times, and need the driver to be awake, aware, and ready to take over at a moment's notice.

That is not a self-driving car. However, Reimer says, you might not realize that from the claims made by automakers and technology companies. He argues that almost every company involved in self-driving tech exaggerates the capability of its technology.

“People believe the hype,” he says. “Because of the way the functionality of these cars is portrayed, people feel like [autonomous vehicles are] at the tipping point where properly equipped cars can drive themselves. So, they push the limitations of the tech as far as they can. That means putting not only their lives in danger but others as well. Take Joshua Brown, for instance.”

Brown, an Ohio native, died in 2016 when a tractor-trailer pulled out in front of his Tesla Model S sedan on a Florida highway. The car's semi-autonomous Autopilot driving system was engaged. But it did not recognize the trailer and failed to apply the brakes. Neither did Brown. His car drove under the trailer, which tore off the roof and decapitated him; the Model S came to rest in a ditch nearly 400 yards away.

After a six-month investigation into the incident, the National Transportation Safety Board (NTSB) concluded that Tesla was "partially" to blame for the incident. Not because its technology was faulty, but because Tesla "recklessly relied on poorly-designed safeguards for its self-driving features that invited misuse"—use that far exceeded the system's operational limitations. The Silicon Valley-based automaker declined to comment for this article.

Brown, a 40-year-old self-proclaimed tech nerd, misused Autopilot in numerous videos on YouTube, including one in which he climbs into the backseat and allows the car to drive itself. He's not alone. YouTube and Facebook are full of videos showing people recklessly testing the limits of Tesla’s Autopilot.


Nor is Tesla alone in hyping the case for driver-assist technology, Reimer says. Over the last several years, regulators and consumer watchdogs have butted heads with several other automakers over potentially confusing marketing of autonomous functionality.

In March 2016, Mercedes-Benz released a 30-second commercial called "The Future" to introduce its all-new 2017 E-Class sedan. The spot, narrated by Jon Hamm of Mad Men fame, opens with a family riding in the futuristic Mercedes F 015, a concept car that shows the potential of a fully autonomous vehicle. Hamm's voiceover begins: "Is the world truly ready for a vehicle that can drive itself? An autonomous, thinking automobile that protects those inside and outside?" After a brief pause, an all-new 2017 E-Class accelerates out from behind the F 015, passing it with a throaty growl. "Ready or not, the future is here," Hamm proclaims. "Introducing the 2017 E-Class."


While the ad didn't explicitly say the 2017 E-Class could drive itself, it strongly implied as much. Consumer watchdog groups were quick to vilify Mercedes, and the automaker pulled the advertisement. Mercedes-Benz also chose not to participate in this story.

Why the Different Names?

The jargon is a core part of the problem here. There has been so much news about "autonomous," "semi-autonomous," and "self-driving" cars that it's not easy to know what any one car can—and cannot—do.

“We need a commonality in naming and nomenclature,” Reimer says. “You can’t name all of the same features differently and expect people to decipher them properly. Smart Cruise Control, Intelligent Cruise, Adaptive Cruise; it’s all the same thing conceptually, with a few fundamental engineering differences. So why the different names?”

The Society of Automotive Engineers (SAE) tried to cut through this confusion when it laid out its “pathway to autonomy,” which details a step-by-step roadmap to self-driving cars. SAE Level 0 means the car has no driver assist. Level 5 is full autonomy, no human needed. Today's vehicles all lie somewhere in the middle of that chart, Reimer says, which is why it's so dangerous to oversell their intelligence.

"Humans are horrible backups. We are inattentive, easily distracted, and slow to respond."

Only recently have new cars reached Levels 1 and 2. Good examples include Nissan's ProPilot and Mercedes-Benz's Drive Pilot. More advanced Level 2 systems (some would argue they are Level 3) are just making their appearance.

For instance, Tesla’s relatively new Enhanced Autopilot can self-steer, adjust speed, detect nearby obstacles, apply brakes, and park. But it can't do everything. Cadillac’s Super Cruise is also an advanced Level 2 system, allowing hands-free driving under very specific conditions. But it too must rely on a human being in an emergency. Audi, Continental, and Autoliv all promise to have Level 3 systems in production within a couple of years. For now, there are no Level 4 or 5 systems on the market, and only one being tested—Waymo’s driverless vehicle. All other testing companies require driver supervision.

But here's the thing about that driver supervision: Human beings make for a terrible backup system.

Don't Trust Us

Semi-autonomous driving systems don’t possess the instantaneous, intuition-based decision-making skills that humans have. Person and machine must work as a team, with the human ready to step in whenever the car needs it.

This moment is called the "hand-off"—the instant that a car realizes it isn't sure what to do and asks the human to take over. Many researchers consider the hand-off problem—an inevitability of SAE Levels 1 through 3—to be extremely difficult, if not unsolvable.

“This poses almost insurmountable engineering, design, and safety challenges, simply because humans are horrible backups,” Reimer says. “We are inattentive, easily distracted, and slow to respond in highly vigilant activities such as supervising a test vehicle.”

Why? For one thing, humans can’t react fast enough. A group of scientists at Stanford University recently published research showing that most drivers require more than five seconds to regain control of a vehicle when called to do so. "That is simply not enough time to get the vehicle under control in a safety critical situation,” Reimer says.



Reason 2: Human nature. Back in 2012, when Google was getting serious about testing this vehicle tech, the search engine giant allowed a group of employees to use its self-driving vehicles. The participants were all warned to be attentive and ready to take back control of the car whenever prompted, especially in the event of an emergency. If you watched those videos of people goofing off while Tesla's Autopilot does the driving, then you can guess what happened next. The employees often climbed into the back seat, watched videos, or otherwise didn't pay attention to what was unfolding around them.


Google called it “Human nature at work," and as a result, it took a drastic approach to autonomous car research: The company decided to jump straight to SAE Level 4-5 autonomy, skipping Level 3 altogether (and the inherent problem of the hand-off). At the time, few automakers followed Google’s lead. But now, most believe the driver must be monitored in some form to help “ensure” he or she is paying attention.

Both Tesla’s latest version of Enhanced Autopilot and Cadillac’s Super Cruise will drop out of autonomous mode if the driver fails to make frequent contact with the car’s steering wheel. If this happens too many times, Enhanced Autopilot will disable itself until the car is restarted. Super Cruise goes further by monitoring the driver inside the vehicle as well, using an infrared camera embedded in the steering column. “It detects whether the driver is awake or alert to make decisions,” said Mark Reuss, General Motors’ executive vice president of global product development, at GM Investor Day in November. “If not, the system won’t engage, and it will disengage and pull the vehicle over.”

New Type of License?

How do we make sure drivers aren’t irresponsible with autonomous car technologies? Reimer calls for regulation to ensure that the information presented to consumers is accurate, and for appropriate checks and balances to keep drivers from pushing the limits of the technology. People have to be aware of their car’s limitations.

“Possibly a new type of license is required for people to operate a car with driver-assists,” says Reimer. “An endorsement on your license that you have been instructed on the capabilities of the vehicle, like you need to operate a motorcycle.” Neither the National Highway Traffic Safety Administration nor the NTSB responded to repeated requests for an interview, however.

The fact is, nobody wants to admit that we're talking about autonomous cars all wrong. “Cars are not currently capable of piloting vehicles anywhere, anytime, under any condition,” Reimer says. “All claims to the contrary are intentionally misleading.” And people will continue to be killed if nothing changes.
