
Self-driving cars will soon be able to make ‘life or death’ judgements, such as whether to hit an animal rather than a pedestrian.

The increasing likelihood of driverless vehicles on the roads has raised questions such as whether they will be capable of making ethical decisions, just as humans do.

Now, a study has shown for the first time that human morality can be modelled on a computer.

The model uses a simple formula that ranks a variety of living things and objects according to their ‘value of life’, or survival priority.

In the event of an unavoidable crash, it would enable the controversial technology to prioritise the safety of the driver and pedestrians, especially children, above animals or inanimate objects.


Driverless cars are expected to be mainstream on Britain’s roads by 2025.

This means cars will be able to negotiate traffic lights, junctions and roundabouts, and drivers will not need to touch the controls for the entire journey.

The latest findings, published in Frontiers in Behavioral Neuroscience, suggest that, contrary to previous thinking, a self-driving vehicle can be moral, acting as humans do or as they are expected to.

The researchers used a technique called ‘immersive virtual reality’, surrounding volunteers with images and sounds to create simulated road traffic scenarios convincing enough to be fully engrossing.

The participants were asked to drive a car through a suburban neighbourhood on a foggy day, during which they faced unexpected dilemmas involving inanimate objects, animals and humans, and had to decide which should be spared.


The research showed moral decisions in unavoidable traffic crashes can be well described, and modelled, by a single ‘value of life’ for every human, animal or inanimate object.

Neuroscientist Leon Sutfeld, of the University of Osnabruck, Germany, said it had until now been assumed that moral decisions are strongly context dependent, and so cannot be modelled or described by a computer formula.

He said: “But we found quite the opposite. Human behaviour in dilemma situations can be modelled by a rather simple value of life based model that is attributed by the participant to every human, animal or inanimate object.”

This implies that human moral behaviour can be well described by algorithms that could be used by machines as well.
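To give a flavour of what such an algorithm might look like, here is a minimal illustrative sketch of a single ‘value of life’ decision rule. The specific values and function names are hypothetical assumptions for illustration, not the study’s fitted model:

```python
# Hypothetical sketch of a 'value of life' decision rule.
# The numbers below are illustrative assumptions, not figures from the study.

VALUE_OF_LIFE = {
    "child": 1.0,
    "adult": 0.9,
    "dog": 0.3,
    "traffic cone": 0.0,
}

def choose_lane(left_obstacle, right_obstacle):
    """Given an unavoidable collision with one of two obstacles,
    steer into the lane whose obstacle has the LOWER value of life."""
    if VALUE_OF_LIFE[left_obstacle] > VALUE_OF_LIFE[right_obstacle]:
        return "right"  # spare the left obstacle, hit the right one
    return "left"       # spare the right obstacle, hit the left one

# A child in the left lane and a dog in the right: the rule swerves right.
print(choose_lane("child", "dog"))
```

In this toy version, every candidate in the scene is reduced to a single number and the collision target is simply the one with the lowest value, which is the core simplification the researchers say describes participants’ choices surprisingly well.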

Professor Gordon Pipa, a senior author of the study, says that since it now seems possible for machines to be programmed to make human-like moral decisions, it is crucial that society engages in an urgent and serious debate.

He said: “We need to ask whether autonomous systems should adopt moral judgements.

“If yes, should they imitate moral behaviour by imitating human decisions? Should they behave according to ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?”

As an example, within new German ethical principles, a child running onto the road would be classified as significantly involved in creating the risk, thus less qualified to be saved in comparison to an adult standing on the footpath as a non-involved party.

The researchers say autonomous cars are just the beginning, as robots in hospitals and other artificial intelligence systems become more commonplace.

They warn that we are now at the beginning of a new age that needs clear rules, otherwise machines will start making decisions without us.