The Department for Transport has said it wants to see fully autonomous cars tested on UK roads by 2021. Strikingly, this expectation was set out after a fatal crash involving a self-driving car in Arizona, which highlighted the very real danger this technology can pose to pedestrians and passengers.

The network used in cars today, the Controller Area Network (CAN), was designed in the 1980s for exchanging information between a vehicle's microcontrollers. Essentially what we have is a peer-to-peer network - and an old one at that.

The main issue is that these networks weren't built with security in mind, because it wasn't a key concern at the time. As time has gone on, modern functionality has been layered onto the existing CAN infrastructure, which has no access control or other security features - potentially leaving cars open to criminals.
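To make that gap concrete, here is a minimal Python sketch of a classic CAN 2.0A data frame. The class name, field names, and the example ID are illustrative assumptions, not any manufacturer's implementation; the telling part is what is *absent* from the frame format itself.

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    """Sketch of a classic CAN 2.0A data frame.

    Note what is missing: there is no sender-identity field, no
    authentication tag, and no encryption. Receivers accept or ignore a
    frame purely on the strength of its arbitration ID.
    """
    arbitration_id: int   # 11-bit message ID; also sets bus priority (lower wins)
    data: bytes           # payload of 0-8 bytes

    def __post_init__(self):
        if not 0 <= self.arbitration_id <= 0x7FF:
            raise ValueError("classic CAN IDs are 11 bits")
        if len(self.data) > 8:
            raise ValueError("classic CAN payloads are at most 8 bytes")

# Any node with bus access - a legitimate ECU or an attacker's device -
# can emit a frame under any ID it likes (0x1A0 here is purely illustrative):
spoofed = CanFrame(arbitration_id=0x1A0, data=bytes(8))
```

Because nothing in the frame proves who sent it, a compromised component can impersonate any other controller on the same bus.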

While no real-world attacks have been carried out this way, it's been proven possible: in 2015, two researchers remotely took control of a Jeep Cherokee and ran it off the road. As a result of this flaw, around 1.4 million vehicles were recalled.

This demonstrates that emerging technologies are being adopted too quickly without manufacturers fully considering the accompanying security implications.

Discomfort with autonomy

Following the fatality in Arizona last year, it was predicted that it would be many years before autonomous cars replace human drivers. Realistically, I don't think driverless cars will - or should - ever replace human drivers in the way we imagine, with nearly everyone continuing to drive a private car that happens to be self-driving. How we implement the technology is for society to decide - whether that takes the form of private vehicles or a co-ordinated public transport system - but I don't believe either should remove the human aspect of vehicles.

People are becoming more apprehensive about driverless cars, and rightly so: safety is paramount. Driving has always been an aspect of life where human control is essential, so the idea of watching a film, or sleeping, while a car transports us understandably feels 'wrong' to many people.

There are various levels of autonomy in self-driving cars, ranging from add-on features such as parking assistance through to completely driverless operation. A 'grey area' lies between the two: the driver has very little to do, but retains responsibility for the vehicle and might need to take control at any point. In that grey area there's a danger the driver will switch off, because they aren't compelled to stay engaged, and will therefore be unable to regain control in an emergency.
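The handover logic in that grey area can be sketched as a simple decision rule: the system issues a takeover request and, if the driver does not respond within some deadline, it must fall back to a minimal-risk manoeuvre such as a controlled stop. This is a hypothetical illustration only; the function name, state labels, and the 10-second deadline are assumptions, not any real system's behaviour.

```python
# Illustrative sketch of a takeover-request decision, not a real system.
TAKEOVER_DEADLINE_S = 10.0  # assumed deadline before the fallback kicks in

def handover(driver_responded: bool, elapsed_s: float) -> str:
    """Decide the vehicle's action after a takeover request was issued.

    driver_responded -- whether the driver has confirmed taking control
    elapsed_s        -- seconds since the takeover request was raised
    """
    if driver_responded:
        return "driver-in-control"
    if elapsed_s >= TAKEOVER_DEADLINE_S:
        # No response in time: perform a minimal-risk manoeuvre,
        # e.g. slow down and stop in lane with hazard lights on.
        return "minimal-risk-manoeuvre"
    return "awaiting-driver"
```

Even this toy version shows the problem: everything hinges on a disengaged human re-engaging within the deadline, which is exactly what a 'switched-off' driver may fail to do.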


Beyond the safety implications

There are ethical as well as safety issues to consider. Christian Wolmar raised 'the Holborn problem': if driverless cars automatically stop upon sensing a pedestrian, what happens when they are confronted with a mass of people milling across a busy road? Will they wait all day, or will we be asked to accept a lower safety bar? And if, in the lead-up to an accident, the car must choose between harming pedestrians and harming its own passenger, how will it choose, and whom? A car can't make moral decisions on its own.

Ethics aside, in terms of cybersecurity it is important to remember that nothing can be 100% secure. Just like housework, security is never 'done': you have to keep vacuuming and dusting, because the dirt will be back next week. The same logic applies to securing the increasingly advanced technology in modern cars. There are still many unanswered questions and unconsidered scenarios that we need to address before we can even start to consider loosening the reins on bringing autonomous cars to our roads.

David Emm, Principal Security Researcher at Kaspersky Lab