MAKING LOVE TO A ROBOT could be considered a social norm within fifty years, an expert on the psychology of sex says.

Dr Helen Driscoll from the University of Sunderland believes that any negative stigma associated with “robophilia” would quickly dissipate in the coming decades.

“We tend to think about issues such as virtual reality and robotic sex within the context of current norms,” she told The Mirror in a recent interview.

“But if we think back to the social norms about sex that existed just 100 years ago, it is obvious that they have changed rapidly and radically.”

The notion that robots will be specifically designed and hard-wired to fulfil their owners' sexual desires means robot-sex could easily surpass human sex as the most popular kind of doin' it.

“As virtual reality becomes more realistic and immersive and is able to mimic and even improve on the experience of sex with a human partner, it is conceivable that some will choose this in preference to sex with a less than perfect human being,” Driscoll said.

Fascinatingly, she suggests that people may eventually fall in love with their virtual reality and robotic companions.

“This may seem shocking and unusual now, but we should not automatically assume that virtual relationships have less value than real relationships,” Dr Driscoll said. “The fact is, people already fall in love with fictional characters though there is no chance to meet and interact with them.”

The prospect of virtual environments populated by virtual beings can appear problematic at face value, Driscoll says, but she also argues that if the technology itself is perfected, there is no reason a human-robot relationship should be any less rich than a human-human one.

“Currently the lack of human contact could be harmful. Humans are naturally sociable and a lack of human contact could lead to loneliness which is linked to various mental and physical health problems,” she said.

“But, in the long term, technology may overcome these problems. When eventually there are intelligent robots indistinguishable from humans – apart from their lack of bad habits, imperfections and need for investment – not only are we likely to choose them over ‘real’ humans but psychologically we will not suffer if we are not able to tell the difference.”

–Therapy with ELIZA–

A number of studies, dating as far back as 1966, have aimed to establish a ‘real’ relationship between humans and artificially intelligent ‘beings’. Perhaps the most interesting was the ELIZA program, developed at the Massachusetts Institute of Technology (MIT) in 1966.

What now looks like a simple chatbot was once a remarkable leap forward in human-AI interaction. Its creator, Joseph Weizenbaum, described ELIZA as a “program which makes natural language conversation with a computer possible”.

In conversation with ELIZA, the user would type in a statement or set of statements in natural language, using normal punctuation and sentence structure. Control was then turned over to ELIZA, which analysed the user's statement, generated a response and typed it out. Control was then handed back to the user.
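That read-analyse-respond loop can be sketched with a handful of pattern rules. The Python below is a loose illustration only; the patterns and pronoun swaps are simplified assumptions, not Weizenbaum's original DOCTOR script:

```python
import re

# A tiny ELIZA-style responder: match a keyword pattern, then reflect
# the user's own words back as a question. These rules are simplified
# illustrations, not the original DOCTOR script.
RULES = [
    (re.compile(r"(.*)\bmother\b(.*)", re.I), "TELL ME MORE ABOUT YOUR FAMILY"),
    (re.compile(r"i am (.*)", re.I), "HOW LONG HAVE YOU BEEN {0}"),
    (re.compile(r"my (.*)", re.I), "TELL ME MORE ABOUT YOUR {0}"),
]
FALLBACK = "PLEASE GO ON"

# Swap first- and second-person pronouns so reflections read naturally.
SWAPS = {"me": "you", "my": "your", "i": "you", "you": "I"}

def reflect(fragment: str) -> str:
    """Mirror a captured fragment back at the user, in ELIZA's upper case."""
    words = fragment.lower().rstrip(".!?").split()
    return " ".join(SWAPS.get(w, w) for w in words).upper()

def respond(statement: str) -> str:
    """Return a canned-but-reflective response to one user statement."""
    for pattern, template in RULES:
        m = pattern.match(statement.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return FALLBACK
```

Fed lines from the famous transcript, the sketch behaves recognisably: `respond("I am unhappy")` yields `HOW LONG HAVE YOU BEEN UNHAPPY`, and any mention of a mother steers the conversation to `TELL ME MORE ABOUT YOUR FAMILY` — the same keyword-driven steering the MIT researchers describe below.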

Below is a transcript of a conversation between a user and ELIZA (in bold are ELIZA’s responses):

Men are all alike.

IN WHAT WAY

They’re always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE

He says I’m depressed much of the time.

I AM SORRY TO HEAR YOU ARE DEPRESSED

It’s true. I am unhappy

DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY

I need some help, that much seems certain.

WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP

Perhaps I could learn to get along with my mother.

TELL ME MORE ABOUT YOUR FAMILY

My mother takes care of me.

WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU

My father.

YOUR FATHER

You are like my father in some ways.

WHAT RESEMBLANCE DO YOU SEE

You are not very aggressive but I think you don’t want me to notice that.

WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE

You don’t argue with me.

WHY DO YOU THINK I DON’T ARGUE WITH YOU

You are afraid of me.

DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU

My father is afraid of everybody.

WHAT ELSE COMES TO YOUR MIND WHEN YOU THINK OF YOUR FATHER

Bullies.

DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE

ELIZA appears to ask questions with a level of understanding and complex reasoning, steering the user towards introspection and self-analysis — which is a good thing, seeing as the program was designed to do just that. In their discussion, the researchers stated that the only serious responses are those which cause ELIZA to respond roughly as certain psychotherapists would.

“ELIZA performs best when its human correspondent is initially instructed to ‘talk’ to it, via the typewriter of course, just as one would to a psychiatrist.”

The researchers chose this mode of conversation with the belief that psychiatric interviewing is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world. For example, if a patient says they went out for a bike ride with their sister, a psychiatrist might respond: “Tell me more about bicycles”. While it may appear as though the psychiatrist knows nothing about bikes, the question is tailored to steer the conversation in a certain direction.

–Eugene Goostman tricks the humans–

More recently, Russian researchers developed a computer programme called Eugene Goostman, which convinced a group of humans it was a 13-year-old boy and, in 2014, was hailed as the first computer to pass the Turing Test. This test is specifically designed to examine a machine's ability to exhibit behaviour equivalent to, or indistinguishable from, that of a human.

Does Eugene's ability to successfully exhibit behaviour indistinguishable from that of a real 13-year-old Russian boy mean that the AI ‘thinks’ and has a ‘consciousness’? That is a matter of much debate. Personally, I'm sceptical of Eugene's so-called human-like behaviour. Take one glance at Eugene's interview with Time magazine and you'll see what I mean. But it's certainly exciting to imagine a world where artificially intelligent beings are indistinguishable from their fleshy human counterparts.

The prospect of carrying out meaningful relationships with artificially intelligent beings becomes more realistic with each exciting finding. Helen Driscoll’s predictions might not be as far-fetched as you thought.

…

On a network, in love. Online, wifi-love. Conversation through bluetooth. What I love and what I loathe stored on a cloud. No one understands me like her. I installed her using custom settings. No one understands her like me. We never argue, I installed her that way.

An update comes every other month. Stability fixes for ELIZA v2.4. I’m lost on just who it is I lost myself on. Lucy oS has just been released and comes with features ELIZA v2.4 just can’t provide. Slick, fast, more intuitive.
