From driving cars to beating chess masters at their own game, computers are already performing incredible feats.

And artificial intelligence is quickly advancing, allowing computers to learn from experience without the need for human input.

But scientists are concerned that computers are already overtaking us in their abilities, raising the prospect that we could lose control of them altogether.



ROBOT TAKEOVER A recent report by PwC found that 38 per cent of US jobs - around four in 10 - are at risk of being replaced by robots and artificial intelligence by the early 2030s. This compares with 30 per cent of jobs in the UK, 35 per cent in Germany and 21 per cent in Japan. The analysis also revealed that 61 per cent of financial services jobs are at risk of a robot takeover.

Last year, a driverless car that ran without any human intervention took to the streets of New Jersey.

The car, created by Nvidia, could make its own decisions after watching how humans drive.

But despite creating the car, Nvidia admitted that it wasn't sure how the car was able to learn in this way, according to MIT Technology Review.

The car's underlying technology was 'deep learning' – a powerful technique loosely modelled on the neural layout of the human brain.

Deep learning is used in a range of technologies, including tagging your friends on social media, and allowing Siri to answer questions.
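At its simplest, a deep-learning system passes data through a stack of 'layers' of artificial neurons. The sketch below is purely illustrative (it is not Nvidia's system, and the sizes and random weights are made up) but shows the basic layered structure in Python:

```python
import numpy as np

def relu(x):
    # A simple non-linearity applied between layers
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# A tiny three-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
# Real deep-learning systems stack far more, and far larger, layers.
layer_shapes = [(4, 8), (8, 8), (8, 2)]
weights = [rng.standard_normal((n_in, n_out)) for n_in, n_out in layer_shapes]

def forward(x):
    # Each layer transforms the output of the one before it
    for w in weights:
        x = relu(x @ w)
    return x

output = forward(rng.standard_normal(4))
print(output.shape)  # (2,)
```

In a trained system the weights are not random: they are adjusted automatically from examples, which is why even the creators cannot always say what any individual weight means.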

The system is also being used by the military, which hopes to use deep learning to steer ships, destroy targets and control deadly drones.

There is also hope that deep learning could be used in medicine to diagnose rare diseases.

But if its creators lose control of the system, we're in big trouble, experts claim.

Speaking to MIT Technology Review, Professor Tommi Jaakkola, who works on applications of deep learning, said: 'If you had a very small neural network [deep learning algorithm], you might be able to understand it.'

'But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.'
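The scale Professor Jaakkola describes can be made concrete with a rough count of the connections in a fully connected network (the numbers below are illustrative, not taken from any real system):

```python
# Rough weight count for a fully connected network: each pair of
# adjacent layers with n units apiece contributes n * n weights.
def weight_count(units_per_layer, num_layers):
    return (num_layers - 1) * units_per_layer ** 2

print(weight_count(10, 3))      # a 'very small' network: 200 weights
print(weight_count(1000, 100))  # thousands of units, ~100 layers: 99,000,000 weights
```

A few hundred weights can be inspected by hand; tens of millions cannot, which is the heart of the 'un-understandable' problem.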

This is concerning, considering deep learning could soon be used to control deadly military weapons and driverless cars.

In a recent study, a computer was tasked with predicting disease by analysing patient records.


Results showed that the computer was extremely accurate in diagnosing schizophrenia – but even its creators did not know why.

Dr Joel Dudley, who led the project at New York's Mount Sinai Hospital, said: 'We can build these models, but we don't know how they work.'

In the hopes of staying in control of these powerful systems, many of the world's largest technology firms created an 'AI ethics board' in 2016.

Researchers with Alphabet, Amazon, Facebook, IBM, and Microsoft teamed up to create the new group, known as the Partnership on Artificial Intelligence to Benefit People and Society, to develop a standard of ethics for the development of AI.