The marketing of driving assistance features such as Autopilot, ProPilot and others as “autonomous” is setting unrealistic expectations and causing dangerous driving, according to insurers and vehicle safety researchers.

In a report, Thatcham Research and the Association of British Insurers (ABI) say drivers are being lulled into a false sense of security by the marketing of new driver assistance features making their way into cars costing upwards of £20,000.

Feature names such as Tesla’s Enhanced Autopilot and Nissan’s ProPilot, along with marketing terms such as “full self-driving capability” and being “capable of driving autonomously”, give a false impression of a level of autonomy that is not yet available.

As a result, drivers are not treating these features with the scrutiny and attention they require, leading to crashes and dangerous driving.

“We are starting to see real-life examples of the hazardous situations that occur when motorists expect the car to drive and function on its own,” said Matthew Avery, the head of research at Thatcham Research. “Specifically, where the technology is taking ownership of more and more of the driving task, but the motorist may not be sufficiently aware that they are still required to take back control in problematic circumstances.”

Current systems are capable of what is termed level 2 autonomy. This means they can keep the car on the road using combinations of existing technologies, such as radar-guided adaptive cruise control and lane-keeping aids. They cannot handle the driving task in all scenarios, meaning the driver has to pay attention to the road at all times and may have to take over suddenly.

Bhavesh Patel moved to the passenger seat after switching on his car’s Autopilot feature as he travelled on the motorway. Photograph: Hertfordshire Constabulary/PA

A recent spate of accidents involving Tesla’s Autopilot system has highlighted the need for drivers to be much more aware of the limitations of driving aids.

In a fatal collision in Silicon Valley in March, a Tesla Model X with Autopilot activated sped up and steered into a concrete barrier at 70.8mph. Another Tesla crashed into a stationary police car in Laguna Beach, while a Model S crashed into the back of a stopped firetruck in Utah. In each case, the driver should have taken back control to avoid the collision.

Drivers have also been found blatantly flouting safety laws, including one man who was banned from driving for 18 months after being filmed sitting in the passenger seat of a Model S in Autopilot mode on the M1.

A Tesla spokesperson said: “The feedback that we get from our customers shows that they have a very clear understanding of what Autopilot is, how to properly use it, and what features it consists of.

“When using Autopilot, drivers are continuously reminded of their responsibility to keep their hands on the wheel and maintain control of the vehicle at all times. This is designed to prevent driver misuse, and is among the strongest driver-misuse safeguards of any kind on the road today.”

While Tesla is the carmaker most aggressively pushing its driving aids as autonomous systems, similar options are available on cars starting from around £25,000, including the new Nissan Leaf. Renault is also installing the technologies in its next-generation Clio, expected in 2020.

“Don’t think it’s just £90,000 Teslas or Mercedes that have got this, it’s going right down the line as manufacturers use systems required by Euro NCAP safety regulations to sell features consumers are now demanding,” said Avery.

While progress is being made by carmakers towards level 5 fully autonomous vehicles capable of handling all scenarios, their public availability is still many years away.

For now, drivers are being sold increasingly sophisticated driving aids that are technically and legally incapable of fully autonomous driving. Despite how the aids are marketed, drivers are criminally liable for the safe operation of their cars.

“The capability of current road vehicle technologies must not be oversold,” said Avery. “Names like Autopilot or ProPilot are deeply unhelpful, as they infer the car can do a lot more than it can. Absolute clarity is needed to help drivers understand when and how these technologies are designed to work and that they should always remain engaged in the driving task.”

Ivan Drury, a senior manager of industry analysis at Edmunds, who was not involved in the report, said that carmakers need to balance the impression that they are pushing the envelope against the risk of overselling the capabilities their systems actually have.

“Features such as ABS brakes, cruise control and blindspot detectors are driver aids that are relatively standard across the board from one carmaker to the next; however, the varying degrees of ‘autonomy’ that are currently rolling out in the market vary drastically when it comes to the level of engagement required from the driver, what obstacles the system can avoid, how the vehicle corrects itself and more.

“Education is critical not only to ensure the safety of consumers, but to pave the way forward for the adoption of these technologies.”

Thatcham Research has set out 10 key recommendations for assisted driving technologies, including clear naming, driver engagement monitoring and crash intervention. It is testing a series of systems on cars and will grade them in a report for use by insurers in the autumn.

A Tesla spokesperson said: “Tesla has always been clear that Autopilot doesn’t make the car impervious to all accidents and the issues described by Thatcham won’t be a problem for drivers using Autopilot correctly.”

A Nissan spokesperson said: “ProPilot Assist is a hands-on, eyes-on, driver-assist system that can be used for motorway and dual carriageway driving. This is clearly communicated to customers at all stages of the purchase process. The system requires the driver to be in control at all times, and with their hands on the steering wheel – the system deactivates if this is not the case.”