Experts pledge to rein in AI research

12 January 2015

Media caption: Stephen Hawking: "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded"

Scientists including Stephen Hawking and Elon Musk have signed a letter pledging to ensure artificial intelligence research benefits mankind.

The promise of AI to solve human problems had to be matched with safeguards on how it was used, it said.

The letter was drafted by the Future of Life Institute, which seeks to head off risks that could wipe out humanity.

The letter comes soon after Prof Hawking warned that AI could "supersede" humans.

Rampant AI

AI experts, robot makers, programmers, physicists, ethicists and many others have signed the open letter penned by the non-profit institute.

In it, the institute said there was now a "broad consensus" that AI research was making steady progress and because of this would have a growing impact on society.

Research into AI, using a variety of approaches, had brought about great progress on speech recognition, image analysis, driverless cars, translation and robot motion, it said.

Future AI systems had the potential to go further and perhaps realise such lofty ambitions as eradicating disease and poverty, it said.

However, it warned, research to reap the rewards of AI had to be matched with equal care to avoid the harm it could do.

In the short term, this could mean research into the economic effects of AI to stop smart systems putting millions of people out of work.

In the long term, it would mean researchers ensuring that, as AI is given control of our infrastructure, restraints are in place to limit the damage that would result if the system broke down.

"Our AI systems must do what we want them to do," said the letter.

The dangers of a rampant AI answerable only to itself and not its human creators were spelled out in early December by Prof Hawking, when he said AI had the potential to "spell the end of the human race".

Letting an artificially intelligent system guide its own development could be catastrophic, he warned in a BBC interview.