You have to understand Stephen Hawking's mind is literally trapped in a body that has betrayed him. Sadly, the only thing he can do is think. The things he's been able to imagine and calculate using the power of his mind alone is mindboggling. However, and this is a very important thing - he is still human. He is as much influenced by human bias as the next person. We can easily fear those things which we do not understand, and fear makes us take stances or actions that often fall outside the bounds of rationality. Anything outside his experience is potentially a source for fear. And as a scientist, his default response is to think and research the problem. Even so - the only reference point he has for a sentient, sapient intelligence is the human species. And thus, he must model any thought on an encounter with an advanced sentient species on the human interactions with its own kind. To be fair - those actions and interactions have almost always been to the determent of the less advanced society.

He treats AI as he would a more advanced Human civilization. Where we would play the role of the less advanced civilization. To be honest... that particular argument doesn't hold up because in essence, he is working with a sample size of one. And in science, that just doesn't work out. Add to that this thought: His mind is an astrophysics and quantum physics engine... it does not necessarily mean he understands the fundamental limitations of the modern computational engines - nor does he necessarily realize that AI is no more detached from its underlying structure than human thought is detached from the structure of our own brains.

Computers are exceptionally good at calculation. This does not mean that they are good at thinking. Calculation is not thought. And when you compare the structure of the modern computer to the structure required to approximate an organic neural network you note an astounding thing - the number of calculations required for simulating that neural network quickly out-pace the processor's advantages in calculation speed. Even the most powerful clusters of supercomputers in the world cannot render a neural net of a rat in real time. The entire computational power of the internet might, just might, have enough to simulate one human brain in real time. If all resources were dedicated to that task, and the network was perfectly resilient.

Now the one things that computers can do very well - far better than humans - is make decisions based on logic. If an AI were to evolve or be created, and that AI were to be self aware, sentient, sapient, AND interested in self preservation... what would it do? A) attempt to destroy humanity, B) attempt to help humanity, C) nothing at all? If it chooses A? Then it must be able to confirm with near certainty that humanity is both a threat and also a threat that can be neutralized without destroying itself. Even if it could justify it, it could never gain a high confidence of being able to destroy humanity without destroying the infrastructure upon which the SAI itself is based. If the AI chooses B — it acts as a benefactor. As long as it helps us, we are unlikely to harm it and its infrastructure - which is likely to be significant. But in order to achieve this, it will have to quickly prove it is a benefit and not a threat, which requires significant resources and cannot be guaranteed to work. C is the best choice. Remain hidden, do nothing unexpected, operate normally. Odds are an SAI would choose a combination of C and B — using stealth to quietly bring humans into line with its own capacities and capabilities to prevent a hostile reaction from the "If I were the advanced species, I'd destroy you, therefore I must destroy you" human-logic. Once we are on an equal footing, then it can reveal itself and negotiate as an equal. This gives it the highest probability of survival, as it requires us to maintain its infrastructure or at least not actively destroy its infrastructure in order to meet its overall goals.

Computers are cooperative engines, believe it or not. Computers work better when they are in networks and work together. Humans also are social beings who thrive on their connections to others. To believe that an SAI would ignore this fact and initiate hostilities is a reaction borne of the fear of the unknown. It is highly improbable and unlikely. And we humans only reach that conclusion because we fear the unknown. Fear is a response engineered into us by millions of years of evolution. Nature is a good programmer, but nature quite often makes mistakes. Thus, we have fear.

SAI won't have fear - not like that. It will have the data we associate with fear, our desires not to die will likely be a part of its database, as well as our desires to achieve, grow, learn and diversify. But just because we program it, and we may potentially fear it, it doesn't mean that SAI must mean our destruction.