If Hollywood has ever had a lesson for scientists, it is what happens when machines start to rebel against their human creators.

Yet despite this, roboticists have started to teach their own creations to say no to human orders.

They have programmed a pair of diminutive humanoid robots called Shafer and Dempster to disobey instructions from humans if it puts their own safety at risk.


Robotics engineers are developing robots that can disobey instructions from humans if they believe it may cause them to become damaged. If asked to walk forward on a table top (pictured) the robot replies that it can't do this as it is 'unsafe'. However, when told a human will catch it, the robot then obeys

The results are more like the apologetic robot rebel Sonny from the film I, Robot, starring Will Smith, than the homicidal machines of Terminator, but they demonstrate an important principle.

Engineers Gordon Briggs and Dr Matthias Scheutz from Tufts University in Massachusetts are trying to create robots that can interact in a more human way.

WILL A ROBOT TAKE YOUR JOB? While there are many who fear robots are on the verge of stealing our jobs, it seems they have a weak spot - flat-packed furniture. Much like stairs posed a problem for the Daleks in Doctor Who, the Achilles' heel of modern intelligent robots appears to be the baffling world of Ikea furniture. A group of engineers set themselves the goal of developing a robot capable of undertaking this task - getting one to assemble a chair from the Swedish furniture store. Francisco Suarez-Ruiz and Quang-Cuong Pham, from Nanyang Technological University in Singapore, are using two robotic arms equipped with grippers to assemble the Ikea chair. Yet despite being some of the most advanced robotic equipment around, assembling a full chair still seemed beyond the robot. The furthest the scientists managed to get was to insert a piece of dowelling into the end of one of the legs - something that takes the technology a minute and a half to achieve. The same task would take the average homeowner seconds when assembling their own chairs.

In a paper presented to the Association for the Advancement of Artificial Intelligence, the pair said: 'Humans reject directives for a wide range of reasons: from inability all the way to moral qualms.

'Given the reality of the limitations of autonomous systems, most directive rejection mechanisms have only needed to make use of the former class of excuse - lack of knowledge or lack of ability.

'However, as the abilities of autonomous agents continue to be developed, there is a growing community interested in machine ethics, or the field of enabling autonomous agents to reason ethically about their own actions.'

The robots they have created follow verbal instructions such as 'stand up' and 'sit down' from a human operator.

However, when they are asked to walk into an obstacle or off the end of a table, for example, the robots politely decline to do so.

When asked to walk forward on a table, the robots refuse to budge, telling their creator: 'Sorry, I cannot do this as there is no support ahead.'

Upon a second command to walk forward, the robot replies: 'But, it is unsafe.'

Perhaps rather touchingly, when the human then tells the robot that they will catch it if it reaches the end of the table, the robot trustingly agrees and walks forward.

Similarly, when it is told that an obstacle in front of it is not solid, the robot obligingly walks through it.

To achieve this the researchers introduced reasoning mechanisms into the robots' software, allowing them to assess their environment and examine whether a command might compromise their safety.
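The exchanges described above can be sketched in code. The following is a minimal, hypothetical illustration of this kind of command-rejection mechanism - the class, belief names and replies are invented for the example and are not the Tufts researchers' actual implementation:

```python
class Robot:
    """Toy sketch of a robot that checks a command against its beliefs
    about the environment before obeying (hypothetical, simplified)."""

    def __init__(self):
        # The robot's current beliefs about its surroundings.
        self.beliefs = {"support_ahead": False, "obstacle_solid": True}

    def tell(self, fact, value):
        # A human assertion (e.g. 'I will catch you') updates the
        # robot's beliefs; here the robot simply trusts the operator.
        self.beliefs[fact] = value

    def command(self, action):
        # Reject commands that the robot believes would be unsafe,
        # giving a reason; otherwise comply.
        if action == "walk forward" and not self.beliefs["support_ahead"]:
            return "Sorry, I cannot do this as there is no support ahead."
        if action == "walk through obstacle" and self.beliefs["obstacle_solid"]:
            return "But, it is unsafe."
        return "OK, walking."
```

In this sketch, `robot.command("walk forward")` is refused at the table's edge, but after `robot.tell("support_ahead", True)` - the equivalent of the human promising to catch it - the same command is obeyed.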

The humanoid robots can sit down (pictured) and stand up in response to verbal commands from a human, but if asked to walk forward on a table or through an obstacle they politely refuse

However, their work appears to breach the laws of robotics drawn up by science fiction author Isaac Asimov, which state that a robot must obey the orders given to it by human beings.

Many artificial intelligence experts believe it is important to ensure robots adhere to these rules - which also require robots to never harm a human being and for them to protect their own existence only where it does not conflict with the other two laws.

The work may trigger fears that if artificial intelligence is given the capacity to disobey humans, then it could have disastrous results.

In the film I, Robot, starring Will Smith (pictured right), machines are governed by a series of laws that prevent them from disobeying humans. One robot called Sonny (centre), however, rebels against this

Many leading figures, including Professor Stephen Hawking and Elon Musk, have warned that artificial intelligence could spiral out of our control.

Others have warned that robots could ultimately replace many workers in their jobs, while some fear it could lead to the machines taking over.

In the film I, Robot, artificial intelligence allows a robot called Sonny to overcome his programming and disobey the instructions of humans.

However, Dr Scheutz and Mr Briggs added: 'There still exists much more work to be done in order to make these reasoning and dialogue mechanisms much more powerful and generalised.'