Ever since roboticists first broached the possibility of "thinking machines", science fiction has treated a future with sentient robots as inevitable. Maybe they'll befriend us or maybe they'll kill us, but they can think logically, communicate with us, and usually express some range of emotions - or a distinct lack thereof. At least that's how it works in the movies - but how would a truly sentient robot actually behave?

The truth is we have no idea. The birth of AI consciousness is called the Singularity for a reason - until it happens, we have no way of predicting how it could possibly unfold. Not that it's stopped Hollywood from trying, but in most cases these questions have to be simplified so they don't bog down the plot. Ex Machina is one recent exception. It tells the story of Caleb, a computer programmer asked to judge whether an AI named Ava could pass as a human. During its theatrical release, Ex Machina received widespread critical praise for examining the deeper implications of consciousness and robotics.

That's thanks partly to Murray Shanahan, a UK-based roboticist who acted as an advisor on the film. His book Embodiment and the Inner Life was an inspiration for Ex Machina's script, while his new book The Technological Singularity breaks down potential future scenarios in more detail. The Escapist recently spoke with Shanahan about what a human-level intelligence might be like. Could it reflect human behaviors and emotions? Would it be coldly logical, surpassing us in sheer brainpower?

"I think we can imagine human-level AI that is not human-like, and that lacks human-like emotions," Shanahan told The Escapist. "But it's also possible to build a human-level AI that is very human-like and does have human-like emotions. In other words both are possible, and right now we have no idea what the first human-level AI - which might be decades away - will be like."

Part of the problem is that when we talk about robots, we're not talking about consciousness - we're talking about intelligence. Since we consider intelligence to be a distinctly human trait, it's an easy way to measure whether an AI is humanlike, or whether it can outsmart us. That explains why humans get worked up when a computer beats a chess champion, instead of asking whether the computer actually enjoys playing chess.

It's also why the Turing Test has become sci-fi shorthand for determining how well an AI can mimic human behavior. But the truth is that human speech isn't just a product of intelligence. In fact, we've learned that programmers can trick testers by adding human-sounding nonsense. "The Turing Test is usually thought of as a test of intelligence rather than self-awareness or consciousness," Shanahan continued. "But one of the problems with the Turing Test is that it's possible to build chatbots that seem human-like because they behave in eccentric or silly ways."