Sophia the robot has been granted Saudi citizenship. Credit: YouTube

In 1950 the computer scientist Alan Turing devised a test of a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human. In the Turing test a machine and a human hold a conversation in a "natural language" such as English, with the machine programmed to give human-like responses. The machine passes the test if an evaluator cannot consistently tell which participant is the human.

One of the earliest successes was ELIZA, a computer program created by Joseph Weizenbaum at MIT's Artificial Intelligence Laboratory in the mid-1960s. ELIZA simulated conversation by pattern matching and substitution: it was good at figuring out which pre-written scripts to draw on, but it had no true understanding or ability to contextualise. Nonetheless, many early users were convinced of ELIZA's intelligence and understanding. No matter how much Weizenbaum insisted it was an illusion, people believed anyway. As any magician will tell you, the audience wants to be fooled.
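The pattern-matching-and-substitution trick that powered ELIZA can be sketched in a few lines. This is a simplified toy, not Weizenbaum's original DOCTOR script; the rules and responses here are invented for illustration:

```python
import re

# A tiny ELIZA-style script: (pattern, response template) pairs.
# The real ELIZA ranked keywords and ran a richer script; this is a toy sketch.
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "Do you often feel {0}?"),
    (r"(.*)mother(.*)", "Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Match the input against each pattern and substitute the captured
    text into a canned template -- no understanding, just text shuffling."""
    for pattern, template in RULES:
        m = re.match(pattern, utterance, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("I am sad"))       # -> Why do you say you are sad?
print(respond("What is love?"))  # -> Please go on. (no rule matched)
```

The "slicing and dicing" is all there is: the captured words are pasted back into a pre-written frame, which is exactly why the output sounds responsive while meaning nothing to the machine.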

Natural-language processing is a type of AI. It has improved immeasurably since the 1960s, but it is still about pattern matching: finding the right pre-written script and slicing and dicing it to make it sound natural. There is another type of AI called machine learning, in which programs incorporate new information and improve their outcomes as a result. Combine machine learning with natural-language processing and the results are convincing – especially if you add a humanoid face capable of animated expression.

You can often fool this sort of software by introducing noise. That could be literal noise – machines aren't great at filtering out background noise, as anyone with a hearing aid will tell you – or it could be noise in the sense of irrelevant information or limited context. You could ask "what do you think of humans?" and then follow up with "can you tell me more about it?" The second question requires the robot to work out what "it" refers to, remember what it said last time, and come up with something new.

"We can only see a short distance ahead, but we can see plenty there that needs to be done." (Alan Turing)

In the case of the ABC interview, the questions were sent to Sophia's team ahead of time, so they were possibly pre-scripted. Just like an interview with a human celebrity! Note that Sophia did not actually answer Virginia Trioli's question about sexism and misogyny in the robot world – the machine deflected and answered a different question, and we didn't notice because its answer was even more provocative. Just like an interview with a human politician!

A serious answer to Trioli's question is that there is a lot of sexism, racism and other prejudice in AI, because the machines are fed data contaminated by our own biases. Just one recent example: a computer program used in the US was more prone to mistakenly labelling black defendants as likely to reoffend – wrongly flagging them at almost twice the rate of white defendants, according to ProPublica.

I requested an interview with Sophia a day or so after the ABC interview, but the team had already left the country. I was told there might be a possibility of a Skype interview, but there would be an "operator" for Sophia and also "someone from the team next to her to help with the flow of the conversation".
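How biased data produces a biased machine can be sketched with a deliberately crude toy model. The groups, labels and counts below are invented; the point is only that a model which learns from skewed historical records will reproduce the skew:

```python
from collections import Counter

# Toy "historical" records (invented numbers): group_a was labelled
# "reoffend" more often in the past, whatever the real base rates were.
training_data = [
    ("group_a", "reoffend"), ("group_a", "reoffend"), ("group_a", "no"),
    ("group_b", "no"), ("group_b", "no"), ("group_b", "reoffend"),
]

def predict(group: str) -> str:
    # A minimal "model": predict the majority label seen for this group.
    labels = Counter(label for g, label in training_data if g == group)
    return labels.most_common(1)[0][0]

print(predict("group_a"))  # -> reoffend: the skew in the data, not the person
print(predict("group_b"))  # -> no
```

Real risk-assessment systems are far more elaborate, but the mechanism is the same: the prejudice sits in the training data, and the model faithfully learns it.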

I asked for more information about Sophia's autonomy in the interview, but they didn't answer. Perhaps Sophia was too busy being made a Saudi citizen, generating headlines for the opening of a technology conference and for plans to spend $500 billion on a city powered by AI. Great PR stunt, shame about the ethics of redefining citizenship.

From what I can see, the truly impressive aspect is the hardware and software that give the robot semi-realistic facial expressions. Sophia's body is just a torso, but there are plans to give it arms and legs. Humans are primed to respond to a face, so I don't think we'd be having the same conversation if we put Sophia's AI brain inside a Dalek body. Beth Singler, a researcher at the University of Cambridge, says discussing robot rights is a form of anthropomorphism.