Driverless cars are throwing up moral dilemmas. Should passenger safety come first, or should it be sacrificed to avoid an accident that would claim more lives?

Driverless cars should sacrifice their passengers if it means avoiding a disastrous accident that would claim more lives, the majority of people questioned in a survey have said.

At the same time, the poll found that few motorists would want to travel in such an "ethical" vehicle.

Because of the conflicting responses, designers of future driverless cars find themselves in a moral maze from which there is no easy way out, say experts.

Study author Dr Iyad Rahwan, from the Massachusetts Institute of Technology (MIT) Media Lab in Cambridge, US, said: "Most people want to live in a world where cars will minimise casualties, but everybody wants their own car to protect them at all costs."

Autonomous vehicles (AVs) with intelligent computer software have the potential to eliminate up to 90% of traffic accidents, but the way they are programmed presents a huge ethical dilemma.

A driverless car carrying a single passenger could, for instance, be designed to swerve and crash in order to avoid a crowd of 10 pedestrians.

Alternatively, it could protect its occupant at all costs, even if doing so resulted in mass casualties.

The US researchers investigated attitudes to AV ethics in a series of six online surveys in which almost 2,000 people were asked to balance self-interest and public safety.

The results, published in the journal Science, revealed a fundamental conflict of opinion.

While people put public safety first as a general rule, they did not want to risk their own lives or those of their loved ones in driverless cars programmed to make sacrifices.

One survey found that 76% of those questioned thought it would be more moral for an AV to sacrifice one passenger rather than kill 10 pedestrians.

At the same time, there was a strong reluctance to own or use autonomous vehicles programmed to avoid pedestrians at the expense of their own occupants.

One question asked respondents to rate the morality of a driverless car capable of crashing and killing its own passenger to save 10 pedestrians.

The rating dropped by a third when people considered the possibility of being the sacrificial victim.

Participants were also strongly opposed to the idea of government regulation of driverless cars to ensure they are programmed with utilitarian principles - or the "greater good" - in mind.

Writing in the same journal, psychologist Professor Joshua Greene, from Harvard University, said the design of ethical autonomous machines was "one of the thorniest challenges in artificial intelligence today".

He added: "Life-and-death trade-offs are unpleasant, and no matter which ethical principles autonomous vehicles adopt, they will be open to compelling criticisms.

"Manufacturers of utilitarian cars will be criticised for their willingness to kill their own passengers. Manufacturers of cars that privilege their own passengers will be criticised for devaluing the lives of others and their willingness to cause additional deaths."