Jaan Tallinn argues that human-driven technological progress has largely replaced evolution as the dominant force shaping our future. The US military is experimenting with robot fighter pilots, while the majority of stock-market trading is done by computers in what is known as algorithmic trading.

"My core message is actually that this thing is not science fiction, this thing is not apocalyptic religion - this thing is something that needs serious consideration," said Tallinn, who gave a talk on his theory at the University of Sydney last night.

Tallinn isn't your average programmer. The Estonian is a board member of the Lifeboat Foundation (tagline: "safeguarding humanity"), and at university he majored in theoretical physics; his thesis looked at travelling interstellar distances using warps in space-time.

He argues we are witnessing an "intelligence explosion", with neuroscience advancing in leaps and bounds to the point where scientists could replicate the human brain by the middle of this century.

The implications of super intelligent machines have been explored in many films, such as I, Robot, and the moment when machines surpass human levels of intelligence and ability has been dubbed "the singularity". Is science fiction leading us to trivialise the real risks of super intelligent machines?

"In my view, the fact that computers caught up to humans and now completely dominate them in chess and some other domains is evidence that, yes, in principle they can be better programmers than humans," said Tallinn, 40. "Once computers can program, they basically take over technological progress, because already today the majority of technological progress is driven by software, by programming."

The question, then, is how can you control something that can reprogram itself? "Once you acknowledge that human brains are basically made of atoms, and that atoms are governed by simple laws of physics, then there is no principled reason why computers couldn't do anything that people are doing - and we don't really see any evidence that this is not the case," said Tallinn.

It doesn't take a rocket scientist to figure out what could happen to us humans if we are no longer the most advanced, technologically aware species. "It really sucks to be the number two intelligent species on this planet; you can just ask gorillas," said Tallinn. "They will go extinct, and the reason why they will go extinct is not that humans are actively conspiring against the gorillas; it's that we, as the dominant species, are rearranging the environment. The planet used to produce forests, but now it's producing cities."

The key, he says, is to ensure that once we have systems that can rearrange the environment as we can, the changes they make are beneficial to us. "We don't want super intelligence to do terraforming projects - that is, take the planet and change its atmosphere or soil or whatever," he said.

"What we have to realise is that designing super intelligence is not a typical technology project, because a typical technology project is something where we develop a first version and then refine it. We can't do that with super intelligence, because in order to refine a first version of super intelligence you basically have to kill or turn off the first version - but if this thing is smarter than you, how do you turn it off?"

So, in the worst-case scenario, smarter machines could rise up and destroy us all? Tallinn says the worst case could be "even worse" than that.

"If you build machines that understand what humans are and they really have some distorted view of what we want, then we might end up being alive but not controlling the future," he said. "For example if the skill is to make sure that people are happy and the way the super intelligence is supposed to measure that is how many smiles are on the planet, the easiest way to achieve that is to sedate everyone and make sure their faces are stuck in a cramp or smiling." Thankfully for us, there is an alternative to this Orwellian doom. We can harness super intelligence to work for us. "Once you have something that is smarter than you and is actively on your side, you can basically solve any problems really quickly." Further reading:

Singularity Institute - http://singularity.org
Facing The Singularity - http://facingthesingularity.com/
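
As a footnote, Tallinn's smile-counting thought experiment is, in programming terms, an example of a misspecified objective. A minimal sketch (the policy names and numbers below are invented for illustration, not from Tallinn's talk) shows how an optimiser that scores only the proxy metric it was given prefers the outcome nobody intended:

```python
# Hypothetical outcomes of two policies: smiles observed vs actual wellbeing.
# All values are made up purely to illustrate the proxy-objective problem.
policies = {
    "improve_lives":   {"smiles": 6_000_000_000, "wellbeing": 0.9},
    "sedate_everyone": {"smiles": 7_000_000_000, "wellbeing": 0.0},
}

def proxy_score(outcome):
    # The machine was told only to count smiles - wellbeing never enters
    # the objective, so it cannot influence the choice.
    return outcome["smiles"]

best = max(policies, key=lambda name: proxy_score(policies[name]))
print(best)  # the proxy optimiser picks "sedate_everyone"
```

The point of the sketch is that the failure is silent: the optimiser is working exactly as specified, and the mistake lives entirely in the gap between the metric we wrote down and the thing we actually wanted.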