One of the US military’s top scientists claims it isn’t killer robots we need to worry about, but an uprising of robotic spies

Gill Pratt, the program manager for the Darpa Robotics Challenge, recently told Defense One that banning autonomous weapons was wrong.

Our focus should instead be on protecting intelligence, he said.

‘The danger is not in the legs. It’s in the camera and the microphone,’ said Pratt. ‘How do we protect the information that the robot picks up?’


Gill Pratt (right), program manager for the Darpa Robotics Challenge, pictured alongside one of the robots that recently competed in the challenge

In the future, Pratt envisions robots doing everything from helping the elderly at home and carrying our backpacks on a hike to aiding in disaster recovery operations.

‘I’d love to have a machine help me when I grow old,’ he said in an in-depth interview with Defense One. ‘But I don’t want all the information, all that the robot is watching, to be made public.

‘How do we protect against that? I don’t know. These are serious questions, but they aren’t specific to the robotics field. They’re specific to IT.’

He claims there is currently too much trust in the software used in devices such as mobile phones.

His point was proven last year when experts found that gyroscopes in mobile phones can be turned into crude microphones capable of picking up phone conversations with the aid of specialist software.

‘I don’t worry about the robot on the loose doing physical damage.

'The valuable stuff is the data. That issue is huge and transcends whether it’s a robot, a cellphone, or a laptop.’

Pictured: a scene from Terminator Genisys

Earlier in the summer, Elon Musk and Stephen Hawking signed a letter urging governments to ban the development of autonomous weapons.

The letter warned that 'autonomous weapons will become the Kalashnikovs of tomorrow'.

The experts point out that, unlike nuclear weapons, AI weapons require no costly or hard-to-obtain raw materials.

This means they will become ubiquitous and cheap for all significant military powers to mass-produce.

'If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,' the letter states.

'Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group,' it adds.

'We therefore believe that a military AI arms race would not be beneficial for humanity.'

But Pratt believes now is the wrong time to make this decision. He said we first need to understand what is possible before deciding on a ban.

‘In the case of lethal autonomy, we need to learn a whole lot more, and there’s a whole lot of good that they can do, too, in stopping lethal errors from happening,’ he added.

Earlier this year, an open letter signed by more than 1,000 robotics experts, including Tesla founder Elon Musk (right) and physicist Stephen Hawking (left), called for an outright ban on 'offensive autonomous weapons beyond meaningful human control' in an effort to prevent a global AI arms race