Two students have built an AI that could be the basis of future killer robots.

In a controversial move, the pair trained an AI bot to kill human players within the classic video game Doom.

Critics have expressed concern over the AI technology and the risk it could pose to humans in future.

Devendra Chaplot and Guillaume Lample, from Carnegie Mellon University in Pittsburgh, trained an AI bot - nicknamed Arnold - using 'deep reinforcement learning' techniques.

While Google's AI software had previously been shown to tackle vintage 2D Atari games such as Space Invaders, the students wanted to expand the technology to tackle three-dimensional first-person shooter games like Doom.

Although other teams have developed similar technology to tackle Doom, the two Carnegie Mellon students published a paper online detailing their project, which has yet to be peer-reviewed.

Just like human players, the AI played the game repeatedly until it learned how to shoot enemies - including computer-based characters and human players' onscreen avatars.

'Typically, deep reinforcement learning methods only utilise visual input for training,' explained the students in their paper.

'We present a method to augment these models to exploit game feature information such as the presence of enemies or items, during the training phase'.

'Our architecture is also modularised to allow different models to be independently trained for different phases of the game.'
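The students' actual code is not reproduced in their paper excerpt above, but the core idea they describe - a shared network feeding both a reinforcement-learning head and an auxiliary head that predicts game features such as enemy presence, with the two losses combined during training - can be illustrated with a minimal toy sketch. Everything below (the linear model, the function names, the loss weighting `lam`) is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a game frame: a flattened 8x8 "screen".
FRAME_SIZE = 64
N_ACTIONS = 4   # e.g. turn left, turn right, move forward, shoot
N_FEATURES = 1  # e.g. "is an enemy visible in this frame?"

# A shared layer feeding two heads: Q-values and a game-feature predictor.
W_shared = rng.normal(scale=0.1, size=(FRAME_SIZE, 16))
W_q = rng.normal(scale=0.1, size=(16, N_ACTIONS))
W_f = rng.normal(scale=0.1, size=(16, N_FEATURES))

def forward(frame):
    """Return (hidden activations, Q-values, enemy-presence probability)."""
    h = np.tanh(frame @ W_shared)
    q = h @ W_q
    f = 1.0 / (1.0 + np.exp(-(h @ W_f)))  # sigmoid for a binary game feature
    return h, q, f

def joint_loss(frame, td_target, action, enemy_visible, lam=1.0):
    """Sum the RL loss with an auxiliary game-feature prediction loss.

    During training the simulator can reveal whether an enemy is truly
    on screen (enemy_visible), so the feature head gets a supervised
    cross-entropy term on top of the usual TD error.
    """
    _, q, f = forward(frame)
    rl_loss = (q[action] - td_target) ** 2  # squared TD error, chosen action
    feat_loss = -(enemy_visible * np.log(f[0] + 1e-8)
                  + (1 - enemy_visible) * np.log(1.0 - f[0] + 1e-8))
    return rl_loss + lam * feat_loss
```

The auxiliary term only needs the game-feature labels at training time; at test time the agent still acts from pixels alone, which is what lets the trained bot compete in ordinary multiplayer matches.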
Not only can the AI play the game, it is able to beat its human counterparts during multiplayer face-offs.

'We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as humans in deathmatch scenarios,' said the authors of the paper.

Although the technology is undoubtedly impressive, critics have expressed concern over effectively training an AI to kill humans.

While there's no suggestion that the students' Arnold AI could or would ever be used in a scenario that might put humans in danger, it certainly raises important concerns over what could be possible.

'The AI is as real as it gets. While it may have only been operating inside an environment of pixels, it does raise up questions about AI development in the real world,' said Dom Galeon, writing for Futurism.

'While we do not want to fall into the hype of AI hysteria, the importance of developing clear and sound policies about AI research and development and its applications are still to be considered,' he added.
In April, a report from Human Rights Watch and the Harvard Law School International Human Rights Clinic called for a ban on 'killer robots'.

The report called for humans to remain in control over all weapons systems at a time of rapid technological advances.

Last year, more than 1,000 technology and robotics experts — including scientist Stephen Hawking, Tesla Motors and SpaceX CEO Elon Musk and Apple co-founder Steve Wozniak — warned that AI weapons could be developed within years.

In an open letter, they argued that if any major military power pushes ahead with development of autonomous weapons and robotic cyber soldiers, 'a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.'