Everything in our evolutionary background prepares us to deal with angry entities and to judge whether or not to trust them. If we get a robot that's angry in the classically human sense, we know far more about how to deal with it than with a robot that exhibits no anger of any sort but may have goals that are very dangerous. The dangerous ones are the ones whose goals don't correspond to anything we can classify on a human scale – the ones that are indifferent to some crucial aspect of the world. If AIs are indifferent to humans, it's obvious how that could go wrong. If they're indifferent to some aspect of humans and they gain great power, then that aspect of humanity may vanish.