Artificial intelligence raises many hopes, fears and questions about what it means to be human in a society where robots are an increasing part of the workforce and could replace thousands of jobs. Sentient machines that can outthink humans are far beyond current technology, however, and could remain the stuff of science fiction for centuries, engineers say.

Scientists have made huge leaps designing machines that can respond to human behavior, but “the development of full artificial intelligence could spell the end of the human race,” Stephen Hawking, a pioneering physics professor at the University of Cambridge, told the BBC. The possibility of computers that can rival the human mind also inspired futurist and Tesla Motors CEO Elon Musk to call artificial intelligence humanity’s “biggest existential threat” during a recent speaking engagement at the Massachusetts Institute of Technology.

Despite these fears, computers are still “pretty dumb” compared with the concept of a machine that can talk or think like a human, says John Giannandrea, vice president of engineering at Google. The immediate ambition of what Google calls “machine learning” is designing software that can adapt to the needs of its users each time it is given new data or commands, Giannandrea says. But a major hurdle is a machine's inability to recognize the nuances of human speech, behavior and thought, especially while scholars are still learning what makes people intelligent.
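Giannandrea's description of software that "adapts each time it is given new data" can be sketched in a few lines of Python. This is a toy illustration, not Google's code: a hypothetical suggester that updates simple counts with every observation, so its behavior shifts as new commands arrive.

```python
from collections import Counter, defaultdict

class CommandSuggester:
    """Toy model that 'learns' which command a user tends to issue next.
    Every new observation updates its counts, so suggestions adapt over time."""

    def __init__(self):
        self.followers = defaultdict(Counter)

    def observe(self, previous_command, next_command):
        # Each new piece of data updates the model in place.
        self.followers[previous_command][next_command] += 1

    def suggest(self, command):
        # Return the most frequently observed follow-up command, if any.
        counts = self.followers[command]
        return counts.most_common(1)[0][0] if counts else None

s = CommandSuggester()
s.observe("open email", "check calendar")
s.observe("open email", "check calendar")
s.observe("open email", "play music")
print(s.suggest("open email"))  # prints "check calendar"
```

Real machine-learning systems replace these raw counts with statistical models trained on far more data, but the core loop — observe, update, predict — is the same.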

“Understanding language is like the holy grail,” Giannandrea says of machine learning. “The ultimate obstacle is that we don’t really know how the human mind works. First you need to match the human mind.”

A screen shows what a Google self-driving car "sees" at an exhibit at the Computer History Museum in Mountain View, Calif., on May 14. Eric Risberg/AP

Google is far from developing humanlike artificial intelligence but still invests in emerging technologies such as driverless cars, and it has acquired numerous robotics startups whose work feeds into experiments at its Google X research labs. Speech recognition is also key to the company’s ambition to add voice-search commands to its software.

“Over the next decade our ability to communicate with computers is going to improve; computers are going to be able to communicate on a more human level,” Giannandrea says.

While machines learn basic language skills, questions remain about whether a computer could understand the possible good and evil consequences of its actions, or of its inaction. Previous attempts to measure how machines can imitate human behavior include the “Turing test” developed by pioneering British codebreaker Alan Turing.

Shortcomings in Turing's test have led some scientists to view it as a measure of how well a computer can imitate humans without understanding their behavior, rather than as a test of true intelligence, says Alan Winfield, a professor of electronic engineering who conducts research at the Bristol Robotics Laboratory in the U.K. Winfield and other academics are collaborating on a project, due in 2018, to build a machine that can make ethical choices, even though intelligent robots are unlikely to be a problem for “hundreds of years, if not beyond that,” he says.

The DEUCE, or Digital Electronic Universal Computing Engine, was based on Alan Turing's designs and became one of the first commercially produced digital computers in the 1950s. Walter Nurnberg/SSPL/Getty Images

Scientists researching drones and robotics are also very far from developing the advanced humanoid robots depicted in "The Terminator" and other science fiction films about cyborgs taking over the world.

“You may just as well worry about alien invasion,” Winfield says about fears of machines rebelling against humans. “We need to be worrying about stupid robots now rather than futuristic, super-smart robots.”

Driverless cars are among the “stupid robots” that may be in public use within a few years, so they are one focus of experiments at the universities of Sheffield and Liverpool and the University of the West of England in Bristol, Winfield says.

So far, robots are confused and indecisive when given multiple commands, says Louise Dennis, a research associate at the University of Liverpool. These decision-making tests include scenarios in which a robot tries to prevent two other robots from getting into an accident.

A main goal of the Liverpool project is to develop a computer that can recognize ethical challenges facing a machine and intervene when needed, including by telling a driverless car to run a red light to avoid an accident, Dennis says. As this new field of study grows, sociologists and philosophers should help set rules for how a machine decides what actions are “ethical,” Dennis says.
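The kind of rule Dennis describes — letting a car break a traffic law to prevent a worse outcome — can be illustrated with a small sketch. This is not the Liverpool project's actual system; the severity values and action names are invented for illustration.

```python
# Hypothetical severity ranking: higher numbers are worse outcomes.
SEVERITY = {
    "collision": 3,       # worst outcome
    "run_red_light": 1,   # breaks a traffic rule
    "stop": 0,            # normal, safe behavior
}

def choose_action(predicted_outcomes):
    """Pick the action whose predicted outcome is least severe.
    predicted_outcomes maps each available action to what the system
    expects to happen if that action is taken."""
    return min(predicted_outcomes, key=lambda a: SEVERITY[predicted_outcomes[a]])

# If braking at the light is predicted to cause a rear-end collision,
# running the red light is judged the less severe outcome.
print(choose_action({"stop": "collision", "run_red_light": "run_red_light"}))
# prints "run_red_light"
```

The hard part, as Dennis notes, is not the comparison itself but deciding — at a societal level — what the severity rankings should be.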

“Some of these priorities are going to have to be made at a societal level,” she says. “We feel it’s quite important that rules be specific.”

Winfield draws inspiration for his thinking on machine ethics from author Isaac Asimov, whose science fiction stories included laws of robotics meant to help machines learn right from wrong. Many of those stories showed the flaws in those laws, Winfield says, and current experiments may not only “lay a foundation” to prevent robots from making bad choices but could also teach people more about human nature.

“If we make robots safe and ethical, we are in some way heading off some future disaster scenario,” he says.

A close-up of Winfield's robots. Courtesy of Alan Winfield

For the near future, Google is working on “conversational search” software that can recognize a user’s online activity and answer situation-specific questions like, “What airport gate is my flight leaving from?” says Aparna Chennapragada, the company’s director of product management for Google Now. Google software can respond to a range of such commands, but the Google Now digital assistant takes that a step further by recognizing patterns in a user’s schedule and offering suggestions on things people might need.
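A situation-specific answer like the airport-gate example works by matching a question against context the assistant has already gathered. The sketch below is a hypothetical illustration, not Google Now's implementation; the flight data and matching rule are invented.

```python
# Hypothetical context the assistant has collected, e.g. from a
# flight confirmation found in the user's email.
user_context = {
    "flight": {"number": "UA 201", "gate": "B14", "departs": "18:05"},
}

def answer(question):
    """Answer a situation-specific question from stored user context."""
    q = question.lower()
    flight = user_context.get("flight")
    # Crude keyword matching stands in for real language understanding.
    if flight and "gate" in q and "flight" in q:
        return f"Your flight {flight['number']} leaves from gate {flight['gate']}."
    return "Sorry, I don't know."

print(answer("What airport gate is my flight leaving from?"))
# prints "Your flight UA 201 leaves from gate B14."
```

A production assistant would use a trained language model rather than keyword checks, but the principle — personal context plus question understanding — is the same.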

Recognizing images is another hurdle for machine learning, and Google is improving its software to organize a user's photo gallery in response to commands like “photos of motorcycles,” Chennapragada says. Engineers working on projects like Google Now “have backgrounds in human-computer interaction,” and factoring in user feedback is a key part of improving the software to meet customer needs, she adds.
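Once an image-recognition model has tagged photos with labels, answering a command like “photos of motorcycles” reduces to a lookup. The sketch below uses hypothetical filenames and labels, not Google's API, to show the idea.

```python
# Hypothetical gallery whose images have already been tagged
# by an image-recognition model.
gallery = [
    {"file": "img_001.jpg", "labels": {"motorcycle", "road"}},
    {"file": "img_002.jpg", "labels": {"beach", "sunset"}},
    {"file": "img_003.jpg", "labels": {"motorcycle", "helmet"}},
]

def photos_of(subject, photos=gallery):
    """Return filenames of photos whose recognized labels match the subject."""
    # A real system would map "motorcycles" to the label "motorcycle"
    # with a language model; here we just strip a trailing plural "s".
    label = subject.rstrip("s")
    return [p["file"] for p in photos if label in p["labels"]]

print(photos_of("motorcycles"))  # prints ['img_001.jpg', 'img_003.jpg']
```

The genuinely hard problems — producing the labels in the first place, and understanding looser phrasings of the command — are where the machine-learning research effort goes.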

Google’s system is only “slightly intelligent” because machines don’t understand things like sarcasm, humor or catchphrases, and Giannandrea says engineers are still refining its answers for more profound questions like, “Why is the sky blue?”