Each time, it chooses at random whether to hurt the person or not

Science fiction author Isaac Asimov came up with the three 'laws' of robotics in a story he published in 1942.

The first of these laws says a robot may not injure a human being or, through inaction, allow a human being to come to harm.

Now artist and roboticist Alexander Reben has developed the first robot that breaks this rule, by hurting humans in an unpredictable way.


Artist and roboticist Alexander Reben claims he has developed the first robot (pictured) that hurts humans autonomously and in an unpredictable way. The robot administers a small pin prick at random, so the person does not know whether they will be pricked or not

IT'S A 'NEAR CERTAINTY' TECHNOLOGY WILL THREATEN MAN It is a 'near certainty' that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years. That's according to physicist Stephen Hawking, who claims science will likely bring about 'new ways things can go wrong' for human survival. But the University of Cambridge professor added that a disaster on Earth will not spell the end of humanity – as long as humans find a way to spread out into space. Hawking made the comments while recording the BBC's annual Reith Lectures on January 7. The lectures explore research into black holes, and his warning was made during questions fielded by audience members. When asked how the world will end, Hawking said that, increasingly, most of the threats humanity faces come from progress in technology. The scientist, who turned 74 this month, said these include nuclear war, catastrophic global warming and genetically engineered viruses.

The robot administers a small pin prick at random to certain people of its choosing.

The tiny injury pierces the flesh and draws blood.

Mr Reben has nicknamed it 'The First Law' after a set of rules devised by sci-fi author Isaac Asimov.

He created it to generate discussion around our fear of man-made machines. He says his latest device shows we need to prepare for the worst.

'Obviously, a needle is a minimum amount of injury, however – now that this class of robot exists, it will have to be confronted,' Mr Reben said on his website.

'While there currently are 'killer' drones and sentry guns, there is either always some person in the loop to make decisions or the system is a glorified tripwire,' Mr Reben said.

'The way this robot differs from what exists is the decision-making process it makes.'

Mr Reben is director of technology and research at Stochastic Labs, a Berkeley, California, incubator for sustainable creative design companies, where he is working on machine ethics and next-generation social robotics.

The robot is the first that makes a 'decision' whether to hurt the person or not.

'The first robot to autonomously and intentionally break Asimov's first law, which states: A robot may not injure a human being or, through inaction, allow a human being to come to harm,' he said.



This cover of I, Robot illustrates the story 'Runaround', the first to list all three 'laws' of robotics

'The robot decides for each person it detects if it should injure them or not, in a way the creator cannot predict.'

'No one's actually made a robot that was built to intentionally hurt and injure someone,' Mr Reben told FastCompany.

'I wanted to make a robot that does this that actually exists... That was important, to take it out of the thought experiment realm into reality, because once something exists in the world, you have to confront it. It becomes more urgent. You can't just pontificate about it.'

Mr Reben thinks this robot should be the starting point for a discussion about ethics.

Mr Reben told MailOnline the next step 'is to find places to show [the robot] (maybe a museum or something) so people can experience it.

'The other is to make a next edition which makes its decisions in a more complex way.

'Maybe by doing more interesting things like does it not like your breath, or maybe it reads what you write on social media and decides if it likes you or not.'
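Reben has not published the robot's actual decision algorithm, but the behaviour he describes – a machine that independently 'chooses' whether to harm each person it detects, in a way even its creator cannot predict – amounts to a random decision rule. The sketch below is purely illustrative: the `decide_to_prick` function and the 50/50 probability are assumptions standing in for his unpublished process.

```python
import random

def decide_to_prick(rng: random.Random, probability: float = 0.5) -> bool:
    """Return True if the machine 'chooses' to administer the pin prick.

    A coin flip stands in for Reben's unpublished, unpredictable
    decision process; the 0.5 probability is an assumption.
    """
    return rng.random() < probability

# Simulate 1,000 detected people: neither the subject nor the
# creator can predict any individual outcome in advance.
rng = random.Random()
outcomes = [decide_to_prick(rng) for _ in range(1000)]
print(f"pricked {sum(outcomes)} of {len(outcomes)} people")
```

The point of such a design is precisely that the individual outcome is unknowable beforehand: only the long-run frequency is fixed, so each person approaching the robot faces genuine uncertainty.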

Experts warned in January that the world must act quickly to avert a future in which autonomous AI robots roam the battlefields killing humans.

At a gathering in the Swiss Alps, a group of scientists and political leaders discussed the need for rules to prevent the development of such weapons.

Angela Kane, the former German UN High Representative for Disarmament Affairs, said the world had been slow to take pre-emptive measures to protect humanity from the lethal technology – and she even admitted the discussions might be too late.

THE THREE 'LAWS' OF ROBOTICS Science fiction author Isaac Asimov first came up with the three 'laws' of robotics in a story called 'Runaround', published in 1942. The first of these laws says a robot may not injure a human being or, through inaction, allow a human being to come to harm. The second says a robot must obey orders given it by human beings except where such orders would conflict with the First Law. The third is that a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The world must act quickly to avert a future in which autonomous AI robots roam the battlefields killing humans, experts have warned. At a gathering in the Swiss Alps, a group of scientists and political leaders is discussing the need for rules to prevent the development of such weapons (Terminator pictured)

Speaking at the conference, Kane said: 'There are many countries and many representatives in the international community that really do not understand what is involved.

'This development is something that is limited to a certain number of advanced countries.'

'We are not talking about drones, where a human pilot is controlling the drone,' added Stuart Russell, professor of computer science at University of California, Berkeley.

'We are talking about autonomous weapons, which means that there is no one behind it,' he told the forum.

'Very precisely, weapons that can locate and attack targets without human intervention.'

Professor Russell said he did not foresee a day in which robots fight the wars for humans and at the end of the day one side says: 'OK, you won, so you can have all our women.'

Some 1,000 science and technology chiefs, including British physicist Stephen Hawking, said in an open letter last July that the development of weapons with a degree of autonomous decision-making capacity could be feasible within years, not decades.

They called for a ban on offensive autonomous weapons that are beyond meaningful human control, warning that the world risked sliding into an artificial intelligence arms race and raising alarm over the risks of such weapons falling into the hands of violent extremists.