Science and engineering lift humanity to greater and more enduring heights, while philosophy seems to be in a regular state of existential crisis. Yet sometimes we’re reminded that even the greatest science and engineering minds of the last century – whether the genius of Stephen Hawking, whom we sadly lost a few days ago, or the innovation of Elon Musk – could still benefit from philosophical thinking.

Musk, in particular, constantly reminds us of his big fear, repeated this week: that robots with artificial intelligence are likely to annihilate us. He sincerely believes that we should colonise Mars mainly because it provides the greatest opportunity for humanity to survive this unavoidable annihilation.

There are many good reasons to colonise Mars: avoiding an eventual asteroid collision with Earth; escaping a depleted environment; perhaps even to provide a home if nuclear weapons ever fulfil their terrible (though unlikely) potential. But AI-infused robots are unlikely to wipe us out, no matter how intelligent, and a little critical thinking shows why.

Think about what a robot is: a body of some type, controlled by a computer that is essentially doing the job our brain does for us.

Our brains have evolved to allow us to react to stimuli in increasingly impressive ways; 3.5 billion years ago, our ancestors were single-celled organisms, and since then we have developed the ability to hear, see, touch and now think deeply about the stimuli we are presented with.

Right now, human and robot “brains” are worlds apart, because computers do not have the complexity that evolution instilled in us on our way to the top of the evolutionary tree.

Once we scale the mountain of complex artificial intelligence, and become able to create intensely smart, reactive and learning robots, the opposite will be true: our brains will be inferior, because we are limited to what it was evolutionarily necessary for us to be able to do.

The memory and abilities of a computer could be limitless, precisely because they are not limited by the bias unavoidably programmed into us by such a complex genetic history.

That last part is the important bit: robots will become more intelligent, in the sense that they may be able to process data faster, learn faster, one day even become self-aware. But they will not have the evolutionarily developed “junk”.

They won’t have the insecurities of social situations, feel the need to fit in with peers or to dominate conversations. They won’t become power-hungry or feel the need to amass unmatched levels of currency. They won’t have the feeling that they are falling when they are trying to fall asleep, because they were once tree-dwelling creatures.

Humans might be advanced compared to horses and dogs, but we’re still quite simple; we’ve developed decent cognitive abilities, but we’re driven by basic desires to procreate, be comfortable and fit in.


Yet this is what worries Musk. When humans became more advanced, we decided to farm other species, war with other humans and gradually try to dominate one another. He assumes robots will do the same.

Hawking, similarly, believed AI would place us in danger because its goals would likely differ from humanity’s. But while robots will become smarter and more capable, perhaps even self-aware, they won’t have that same genetic desire to survive and reproduce. We don’t possess those drives because we are self-aware – we have them because they were evolutionarily necessary.

Computers may one day become advanced reasoning machines. In some ways they already have. But they will never be smarter versions of human beings, because we have flaws which no one would ever want to recreate in robots.

On the off-chance someone does, and can, these robots will be necessarily less capable than their unflawed colleagues. Their processing capabilities would be heavily consumed by jealousy and anger, sorrow, resentment, attachment, contentment, doubt, guilt, pride and every other bias which makes human life unpredictable and wonderful. While the unbothered alternative models might be programmed to feel sympathy for such flaws, they would not have them.

The concern about AI is not that someone could develop things that might kill millions of humans with the press of a button. These already exist.

Instead, the concern centres on the idea that someone will be able to programme AI capable of coexisting with or destroying humanity, and that the AI will choose the latter.

It’s really a paradox: even if the technology to create such AI existed, making robots that chose destruction would require deliberately programming flaws into their coding to give them such irrelevant yet complex goals.

These would be robots programmed to learn the width and breadth of human culture and to make the best possible decisions – yet we worry about them becoming obsessed with domination, one of the very human characteristics that steal our attention and stop us from making better decisions.

Robots are no more likely to want to rise up and dominate the Earth than they are to want to drink sugary drinks, inject heroin or watch reality TV. That we see dominating Earth as the end goal of a perfectly rational individual says much about our own evolution, and very little about robots.