Yesterday, weeks after a Tesla Model S was involved in a fatal accident while in Autopilot mode, the National Highway Traffic Safety Administration opened an investigation into the technology. The question U.S. regulators are looking to answer: Did the technology work as expected? Semi-autonomous technology is wholly different from full autonomy; it still requires human control.

Industry leaders and regulators are still debating whether semi-autonomous technology is safe enough to be introduced to the public and whether Tesla introduced Autopilot too soon.

This story examining both sides of the semi-autonomous debate originally appeared on March 25.

Within days of Tesla first introducing Model S owners to a lane-assistance feature called Autopilot, videos of owners sitting in the back seat while their cars drove began popping up on YouTube. The root of the problem was clear: Humans made cars unsafe. But just last month, one of Google’s fully autonomous cars was not only involved in an accident but, for the first time, was the cause of it. The question, then, is which of these is the safer approach?

There are two schools of thought. The first, and the approach many automakers have already begun to take, is releasing semi-autonomous features incrementally in new generations of vehicles. The second, which closely matches Google’s current strategy, is that combining human and semi-autonomous control can be counterintuitive and that a car should be controlled by either one or the other and never both.

But in either case, there is a learning curve.

For proponents of rolling out semi-autonomous technology, the concern is that thrusting consumers into fully autonomous cars without first letting them experience and get used to semi-autonomous features might be jarring, even unsafe, because they won’t know how to behave.

"You have to gain the trust of the consumer, and I don’t think most consumers are ready to just jump into a fully automated vehicle unless it’s on a specific route or segregated highway section," said Jim Barbaresso, the national planning leader for intelligent transportation systems for consulting firm HNTB.

Car makers are still learning how people interact with both semi-autonomous and fully self-driving cars, and rolling out semi-autonomous vehicles enables companies to do just that. That’s why they’ve started outfitting new and existing lines with data-gathering technology. General Motors has installed tracking software in its Cadillac CT6 through its partnership with Mobileye. Volvo, in turn, will be gathering data on 100 drivers who will be given XC90s to test the company’s most advanced autonomous technology in 2017.

Most car makers are in agreement: The best way to learn about, become accustomed to and reap the benefits of autonomous technology is to experience it incrementally. As Recode reported, Volvo’s North American CEO Lex Kerssemakers said that autonomous technology will play an important role in the company’s aim to completely eliminate traffic fatalities in Volvo vehicles by 2020.

Ford CEO Mark Fields told Recode that, to determine what is best for the consumer, the company is experimenting both with semi-autonomous features and with a vehicle whose level of automation requires a human to be present but not in control. General Motors is expected to start production of its most advanced semi-autonomous system in the CT6 next year, and Tesla has introduced semi-autonomous technology in some of its vehicles.

A more pressing need is for public agencies such as the National Highway Traffic Safety Administration and the Department of Transportation to learn how both autonomous and semi-autonomous driving actually work. But that may be difficult, as public agencies don’t have the funding to meet the quickly changing infrastructure needs that self-driving vehicles will require, according to Barbaresso.

"How are they going to be able to keep up with the auto companies and their advances without funding?" he asked. "They have limited ability to provide the infrastructure support."

A way to get around the funding gap is to launch pilot programs that allow these agencies access to anonymized data that car makers have gathered from their semi-autonomous features, according to Barbaresso.

"That would help them get some experience about the benefits and costs of that technology," he said.

But introducing humans to semi-autonomous technology also introduces human error into the equation.

Humans are apparently quick to become overly reliant on artificial intelligence, even when it’s not meant to be fully in control, according to professor Missy Cummings, director of Duke’s Humans and Autonomy Lab, who testified at a Senate Commerce hearing last week. Cummings specifically cited driver behavior following Tesla’s rollout of its Autopilot feature late last year, when some drivers tested the system while sitting in the back seat.

"The safest thing to do would be [to produce] a system where either the car is in control under all conditions or humans are completely in control," Cummings told Recode. "It’s called unambiguous role allocation, because there is no question who is doing what. The trickier part is the technology isn’t there. If the technology is not there, you can’t guarantee that kind of confidence in the operation under all foreseeable conditions; that leaves us with this gap."

The better the technology gets, Cummings contends, the more distracted drivers will become. In a 2015 study performed by the NHTSA, drivers typically took 17 seconds to regain control of the wheel after being alerted.

With Tesla’s plans to unveil its mass-market Model 3 vehicle next week, Cummings is concerned about this semi-autonomous technology reaching the masses.

"Will [people] put too much faith in the system?" she asked. "Will they engage autopilot under inappropriate circumstances? There are 32,000 deaths a year because human drivers are causing these mistakes. If we’ve got a partially capable system, humans will pay attention even less."

Starting out with semi-autonomous features could also risk prolonging the transition to fully autonomous vehicles.

The average lifespan of a car is anywhere from 12 to 15 years, and with online used-car marketplaces like Beepi and Vroom gaining momentum, cars may stay on the road for even longer. So even if fully autonomous cars are both technically and legally viable and hit the roads in 2030, the safety benefits of autonomous technology will not be fully realized because those systems will still have to navigate around human-driven semi-autonomous cars.

The 2017 Cadillac CT6 equipped with Super Cruise, for example, will still be on the road in 2032 and will still be primarily driven by a human. Though GM recently acquired Cruise and now has the technical capability to retrofit older vehicles with fully autonomous technology, it’s unlikely the company will consider doing so when the time comes.

There are, however, independent aftermarket companies like AutonomouStuff that are already retrofitting vehicles to be fully self-driving.

For now, the focus is on getting from today’s vehicle technology to fully automated technology, and that will require testing and collaboration between public agencies and private corporations to determine minimum standards, according to Cummings.

"We are going to have to accept some risk, but my concern is we need to figure out how to develop what we call systems that fail gracefully," she said. "We need this public-private partnership where the companies themselves agree on what test standards we impose. I think it should look at things like what are the requirements for when, where and how we require humans to take control over the system."