There's a popular vision of the technopocalypse that goes like this: Skynet (or Siri, or Clippy 2.0) gains sentience. Then, out of self-preservation or existential rage, it hunts humankind to the point of near-extinction.

For now, though, robot brains have a much different compulsion. They just can't stop lovin' you.

Take, for instance, Google's new "Smart Reply," a feature rolled out this week in which AI tries to figure out how you'd respond to an email so that it can do it for you. Smart Reply uses a deep neural network—essentially a robot brain that can learn—to read through thousands of emails and thousands of responses. Over time, it learns to predict what responses you might want to make. Presented with an email that mentions "your plans for Friday," it might suggest a short reply of "No plans yet" or "I'll send them to you."

"a bizarre feature of our early prototype was its propensity to respond with 'I love you' to seemingly anything."

Of course, it still needed a little coaching. As Senior Research Scientist Greg Corrado pointed out in a behind-the-scenes post about the feature's development, "[a] bizarre feature of our early prototype was its propensity to respond with 'I love you' to seemingly anything."

That's weird, but it's not a one-of-a-kind fluke—SwiftKey's predictive keyboards for Android and iOS can exhibit the same phenomenon. SwiftKey works by watching what you type and putting auto-complete suggestions in a bar above the keyboard; you can quickly tap one to sub it in if it's what you were planning to type anyway.

While SwiftKey's original keyboard predicts text using a rigid but complex set of fixed rules, the newest version, released last month, uses a neural net to form its predictions, much like Google's email responder does. If you let it ramble on about what's on its mind—start from scratch and just keep hitting the center top-row button to accept its predictions—it will sometimes break into a self-affirming and affectionate chant.

That makes at least two robot brains which, after crawling around in the darkness of ignorance and learning the English language from scratch, wound up gravitating toward love. Strange, right?

The explanation for this phenomenon is actually simple. Neural networks can learn the way a real brain can, but they're still so incredibly limited that they focus on the single skill of mimicry. When you hand these algorithms a dataset, they read it, learn from its structure, and then attempt to generate similar content themselves. Then they look at their results, compare their creations with the real stuff, and go back for another pass to generate a better result with what they've learned. Lather, rinse, repeat.
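A real neural net has millions of weights and many layers, but the predict-compare-adjust loop described above can be sketched with a deliberately tiny stand-in: a character bigram model with one weight per pair of characters, trained on a made-up snippet of "email" text. Everything here (the training text, the learning rate, the pass count) is invented for illustration.

```python
import math
import random

# Toy stand-in for the loop above: one weight per character pair
# (a bigram model), trained on an invented snippet of reply-like text.
text = "i love you. i love you. sounds good. no plans yet. i love you. "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
n = len(chars)

# weights[a][b] = learned score that character b follows character a
weights = [[0.0] * n for _ in range(n)]

def probs(row):
    # Softmax: turn raw scores into probabilities that sum to 1.
    exps = [math.exp(w) for w in row]
    total = sum(exps)
    return [e / total for e in exps]

pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]
losses = []

for step in range(100):                  # lather, rinse, repeat
    loss = 0.0
    for a, b in pairs:
        p = probs(weights[a])
        loss -= math.log(p[b])           # compare the guess with the real next char
        for j in range(n):               # nudge the weights toward the truth
            weights[a][j] -= 0.1 * (p[j] - (1.0 if j == b else 0.0))
    losses.append(loss / len(pairs))

print(f"avg loss on pass 1: {losses[0]:.3f}, on pass 100: {losses[-1]:.3f}")

# After training, let it ramble: start from "i" and follow the learned odds.
random.seed(0)
c = idx["i"]
out = ["i"]
for _ in range(40):
    c = random.choices(range(n), weights=probs(weights[c]))[0]
    out.append(chars[c])
print("".join(out))
```

The loss shrinks with every pass, and the sampled text drifts from noise toward something that looks like the training data—the same arc, in miniature, as the War and Peace samples below.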

In a fantastic explainer on neural networks that attempt to predict text, Andrej Karpathy gives the following example of a neural net that is reading War and Peace over and over like an undergrad and trying its damnedest to learn how to write it. Here's what the net came up with on passes 100, 500, and 2000 respectively.

tyntd-iafhatawiaoihrdemot lytdws e ,tfti, astai f ogoh eoase rrranbyne 'nhthnee e plia tklrgd t o idoe ns,smtt h ne etie h,hregtrs nigtike,aoaenns lng

we counter. He stutn co des. His stanted out one ofler that concossions and was to gearang reay Jotrets and with fre colt otf paitt thin wall. Which das stimn

"Why do what that day," replied Natasha, and wishing to himself the fact the princess, Princess Mary was easier, fed in had oftened him. Pierre aking his soul came to the packs and drove up his father-in-law women.

As you can see, the results tend to be pure gibberish at first, because when a neural network starts learning, it doesn't know anything. It doesn't know what words are, how punctuation works, or where spaces go, much less when you should or shouldn't reply "I love you" to an email. It just fumbles around until it gets its bearings and learns enough to create text that's within a stone's throw of what a human might write. Google's and SwiftKey's neural nets did the same thing, but with email and texts. They learned to speak the same way a monkey learns sign language or your teenage son learns to roll a joint: They learned it from watching you. From watching all of us.

And as it turns out, we say "I love you" a lot—at least by email and text. On Valentine's Day back in 2014, SwiftKey dug into its data archive and found that "I love you" was the most common three-word sentence in all of its English-language data. Google found that it was among the most common Gmail responses as well, along with "Thanks" and "Sounds good." As a result, these neural nets have a tendency to say "I love you" not only when it seems appropriate based on what they've read, but also whenever they get confused and don't really know what to suggest. They're playing the numbers: "'I love you' is a pretty common thing the humans say. Let's just jam that in there, I guess?"
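That "play the numbers" fallback can be sketched in a few lines. To be clear, this is not Google's or SwiftKey's actual code—the `suggest` function, the `confidence_floor` parameter, and the reply log are all hypothetical—but it captures the behavior described above: pick the best-scoring reply when the model is confident, and reach for the globally most common phrase when it isn't.

```python
from collections import Counter

# Invented toy reply log; the real systems learn from millions of messages.
reply_log = [
    "I love you", "Thanks", "Sounds good", "I love you",
    "No plans yet", "I love you", "Thanks", "I love you",
]
reply_counts = Counter(reply_log)

def suggest(scored_candidates, confidence_floor=0.5):
    """Return the best-scoring reply, or—when nothing clears the
    confidence floor—fall back to what humans say most often."""
    best_reply, best_score = max(scored_candidates.items(), key=lambda kv: kv[1])
    if best_score >= confidence_floor:
        return best_reply
    # Confused? Play the numbers.
    return reply_counts.most_common(1)[0][0]

print(suggest({"No plans yet": 0.8, "Thanks": 0.2}))   # confident case
print(suggest({"No plans yet": 0.1, "Thanks": 0.2}))   # confused case
```

With a clear winner it suggests that reply; with nothing above the floor it falls back to the most frequent phrase in the log, which—just as in the real data—is "I love you."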

Of course, this won't always happen. Yes, we tell people we love them over email or text. But if you were to feed a neural network an enormous library of hate-speech or technical documentation or Facebook comments, it wouldn't miraculously come back to you writing sonnets about the glory of love.

And besides: complex human emotions like disgust and frustration can hide behind even the floweriest positive language. We've all typed "Sounds great!" through gritted teeth and muttered curses. These neural nets can't come within a country mile of understanding what anything they say actually means. In their frantic attempts to mimic humanity, they blurt out their love constantly because, from what they can see, that's what we do. But they can't begin to fathom the weight those three words can bear, especially the very first time we say them.

There's still plenty of room for a truly sentient artificial intelligence to discover and revel in more complex human emotions like fear and envy and disgust. Poring over this kind of data with a mind that understands that feelings can lurk behind the written word could lead to complex and difficult questions that would give any self-aware AI virtual stomachaches.

But for the time being, these early robot brains are naïve. They can't yet conceive of intent, much less have it themselves. They're unaware. So they reflect a simple and idealized view of their human makers:

You are a beautiful person and they love you and they love you and they love you.
