The real 2001: Scientists teach robots how to trick humans



It sounds like something straight out of Stanley Kubrick’s 2001: A Space Odyssey.



But, in a chilling echo of the computer Hal from the iconic film, scientists have developed robots that are able to deceive humans and even hide from their enemies.



An experiment by researchers at the Georgia Institute of Technology is believed to be the first detailed examination of robot deception.



The team developed computer algorithms that would let a robot ‘decide’ whether it should deceive a human or another robot, and equipped it with strategies that gave it the best chance of avoiding detection.

Georgia Tech Regents professor Ronald Arkin (left) and research engineer Alan Wagner look on as the black robot deceives the red robot into thinking it is hiding down the left-hand side

The development may alarm those concerned that robots capable of practising deception are not safe to work alongside humans.



But researchers say that robots that are capable of deception will be valuable in the future, particularly when used in the military.



Robots on the battlefield with the power of deception will be able to successfully hide and mislead the enemy to keep themselves and valuable information safe.

‘Most social robots will probably rarely use deception, but it's still an important tool in the robot's interactive arsenal because robots that recognise the need for deception have advantages in terms of outcome compared to robots that do not recognise the need for deception,’ said the study's co-author, Alan Wagner, a research engineer at the Georgia Tech Research Institute.



A search and rescue robot may need to deceive a human in order to calm a panicking victim or gain their cooperation.



ROBOTS THAT LIE



Perhaps the most famous deceptive robot of all is Hal, from Stanley Kubrick's film 2001: A Space Odyssey.

The film, based on Arthur C. Clarke's bestselling sci-fi novel, follows a crew on board a spaceship controlled by an intelligent computer, Hal.

When Hal begins to malfunction, the human crew members make moves to shut him down. Hal then begins to kill the crew and take over the ship.

He is eventually stopped when the last crew member manages to switch him off and, as he gradually loses 'consciousness', he tells the crew the truth about their mysterious mission.

Another film that warns about robots is 'I, Robot', based on the short story collection by Isaac Asimov. During the film, a robot becomes 'self-aware' for the first time, with catastrophic results.



The results were published online in the International Journal of Social Robotics.



To develop programs that successfully produced deceptive behaviour, the researchers looked at how one robot could attempt to hide from another.



Their first step was to teach the deceiving robot how to recognise a situation that warranted the use of deception.



Wagner and Arkin used interdependence theory and game theory to develop algorithms that tested the value of deception in a specific situation.

A situation had to satisfy two key conditions to warrant deception: there must be conflict between the deceiving robot and the seeker, and the deceiver must benefit from the deception.
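The two-condition test can be sketched in code. This is a hypothetical illustration only: the function name, the dictionary-of-payoffs representation, and the outcome labels are all assumptions, not the researchers' actual game-theoretic implementation.

```python
def deception_warranted(deceiver_payoffs, seeker_payoffs,
                        honest_outcome, deceptive_outcome):
    """Return True only when both conditions from the study hold:
    (1) conflict  - the deceiver and the seeker prefer different outcomes,
    (2) benefit   - deceiving leaves the deceiver better off than honesty.
    Payoffs are dicts mapping outcome labels to numeric values (assumed)."""
    # Condition 1: the two parties' preferred outcomes differ.
    conflict = (max(deceiver_payoffs, key=deceiver_payoffs.get)
                != max(seeker_payoffs, key=seeker_payoffs.get))
    # Condition 2: deception pays more for the deceiver than honesty.
    benefit = (deceiver_payoffs[deceptive_outcome]
               > deceiver_payoffs[honest_outcome])
    return conflict and benefit


# Example: a hider wants to stay hidden, a seeker wants to find it.
hider = {"found": 0, "not_found": 1}
seeker = {"found": 1, "not_found": 0}
print(deception_warranted(hider, seeker,
                          honest_outcome="found",
                          deceptive_outcome="not_found"))  # True
```

If either condition fails, for example when both robots prefer the same outcome, the function returns False and the robot would act honestly.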



Once a situation was deemed to warrant deception, the robot carried out a deceptive act by laying a false trail about its movements.



The robot was even able to tailor its deception based on how much it knew about the particular robot it was trying to trick.



To test their algorithms, the researchers ran 20 hide-and-seek experiments with two autonomous robots. Coloured markers were lined up along three potential pathways to locations where the robot could hide.



The hider robot randomly selected a hiding location from the three location choices and moved toward that location, knocking down coloured markers along the way.



Once it reached a point past the markers, the robot changed course and hid in one of the other two locations. The presence or absence of standing markers indicated the hider's location to the seeker robot.



‘The hider's set of false communications was defined by selecting a pattern of knocked over markers that indicated a false hiding position in an attempt to say, for example, that it was going to the right and then actually go to the left,’ explained Wagner.
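The false-trail step described above can be sketched as follows. This is a simplified illustration under assumed names (the path labels, function, and dictionary layout are not from the paper): the hider picks a real hiding spot, then knocks over the markers along a different path so the pattern of fallen markers points the seeker the wrong way.

```python
import random

# Assumed labels for the three marked pathways in the experiment.
PATHS = ["left", "centre", "right"]


def plan_false_trail(true_hiding_spot, rng=random):
    """Choose a decoy path to signal, distinct from the true hiding spot.

    Knocked-over markers are the communication channel: felling the
    markers on the decoy path falsely tells the seeker the hider went
    that way, while the hider actually doubles back and hides elsewhere.
    """
    decoys = [p for p in PATHS if p != true_hiding_spot]
    signalled = rng.choice(decoys)
    return {"hide_at": true_hiding_spot, "knock_markers_on": signalled}


plan = plan_false_trail("left")
print(plan)  # e.g. {'hide_at': 'left', 'knock_markers_on': 'right'}
```

In the real trials the signal could fail mechanically, as when the hider did not knock over the intended markers, which is what caused the unsuccessful runs reported below.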

The hider robots were able to deceive the seeker robots in 75 percent of the trials, with the failed experiments resulting from the hiding robot's inability to knock over the correct markers to trick the ‘finding’ robot.



'The experimental results weren't perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment,' said Wagner.



'The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot.'



The researchers said that they are aware that there could be ‘ethical implications’ involved in teaching robots how to deceive not just fellow robots but humans, too.