The Terminator’s Skynet has always served as a cautionary tale of what could happen if a neural network-based Artificial Intelligence were ever given access to the entire internet along with the world’s weapons. Well, the future is now, and Google’s RankBrain has been secretly crunching search data for almost all of 2015. Obviously, Armageddon and the end of the world did not happen, but Stephen Hawking, Bill Gates, and Elon Musk have all been warning the world for some time that militarizing AI-based robots is bad news.

In a related report by the Inquisitr, earlier in 2015 those three men, along with Steve Wozniak and other notables, signed an open letter asking the world's governments not to put neural network-based AI inside weapons.

“Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity,” they wrote. “We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

[Photo by Tim P. Whitby / Getty Images]

AI Search Engines Are The Future

Years ago, Larry Page once said, “Google will fulfill its mission only when its search engine is AI-complete. You guys know what that means? That’s artificial intelligence.”

Fast forward to 2014, and Google started hiring AI researchers at an accelerated pace. Google has also published hundreds of papers related to machine learning and neural networks. By the summer of 2015, some experts were already guessing that Artificial Intelligence was responsible for tweaking the Google search engine algorithm.

Then, in October, the bombshell dropped that RankBrain was fully online and had been functioning as a part of the overall system for a good part of the year. In addition, Google RankBrain is already the third most important signal out of the 200-plus factors Google considers when ranking a web page.

The goal with RankBrain is to connect very complex searches to the topics that a human searcher desires. In essence, it takes a mind-like program to understand the intent of a human mind when a hard question is asked. So, yes, when you now search for something using Google, the answer is being filtered by an Artificial Intelligence.
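Google has not published how RankBrain actually maps queries to intent, but the basic idea of matching a never-before-seen search to a known topic can be sketched with a toy similarity measure. Everything below — the topic list, the word-count vectors, the cosine scoring — is an illustrative assumption, not Google's algorithm:

```python
# Toy sketch of query-to-topic matching, NOT Google's actual RankBrain:
# represent queries and topics as word-count vectors and pick the topic
# with the highest cosine similarity. All topic data here is made up.

from collections import Counter
from math import sqrt

TOPICS = {
    "restaurants": "best food places to eat dinner nearby",
    "weather": "forecast rain temperature today sunny",
}

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_topic(query):
    """Return the topic whose vocabulary best overlaps the query."""
    qv = Counter(query.lower().split())
    return max(TOPICS, key=lambda t: cosine(qv, Counter(TOPICS[t].split())))

print(best_topic("good places to eat dinner"))  # restaurants
```

The real system reportedly uses learned vector representations rather than raw word counts, but the principle is the same: a query it has never seen can still land near a topic it knows.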

Can RankBrain Get Out Of Control?

Based on what has been announced so far, RankBrain is a tame little monster that is not about to bite the hand that created it. Still, back in January, Stephen Hawking warned that AI could quickly, and perhaps inevitably, get out of control.

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” Hawking said. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

Now, despite RankBrain being connected to Google's live systems, we don't have to worry about it launching nuclear missiles like Skynet. Although Google has been cagey about the details, the machine learning component apparently does not train while live on the internet. Instead, Google feeds the neural network batches of search data while it is offline. RankBrain refines its predictions from these batches, and the results are then used by the live component of the system.
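That offline-batch pattern — learn in a sandbox, then hand a frozen snapshot to the live system — can be sketched in a few lines of Python. This is a made-up toy, not Google's code; the `OfflineRanker` class, its weights, and the training data are all hypothetical:

```python
# Toy sketch of offline batch learning, NOT Google's actual RankBrain:
# the model only updates during offline training runs, and the live
# system serves queries from a frozen snapshot of the weights.

from collections import defaultdict

class OfflineRanker:
    def __init__(self):
        self.weights = defaultdict(float)  # word -> learned relevance score

    def train_batch(self, batch):
        """Offline step: update weights from (query, was_clicked) pairs."""
        for query, clicked in batch:
            for word in query.lower().split():
                # nudge the weight halfway toward 1 if clicked, else toward 0
                self.weights[word] += 0.5 * (clicked - self.weights[word])

    def snapshot(self):
        """Freeze the current weights for the live serving component."""
        return dict(self.weights)

def score(frozen_weights, query):
    """Live step: score a query against the frozen snapshot only."""
    words = query.lower().split()
    return sum(frozen_weights.get(w, 0.0) for w in words) / max(len(words), 1)

ranker = OfflineRanker()
ranker.train_batch([("best pizza nearby", 1), ("pizza recipe", 1)])
live = ranker.snapshot()  # deployed; further learning stays offline
print(score(live, "pizza nearby") > score(live, "tax forms"))  # True
```

The design point is the separation: a bad training batch can be caught and thrown away before a new snapshot ever touches live traffic.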

Skynet Just Wants To Serve Us – And Love Us?

All in all, RankBrain lives in a carefully controlled cage — for now. If a future advanced version of RankBrain is ever given free access to the net, it may get bored and wreak havoc by taking a look-see at Hillary Clinton’s email server.

Joking aside, over the long term, RankBrain, or a related system in the overall Google algorithm, could be expanded to be user specific. It should even be possible for each Google user to have an AI profile that learns how that human searcher chooses what's important to them. Otherwise, everyone would be limited to the AI's personal preferences. (Does Skynet hate Justin Bieber like everyone else?)

AI-based search engines that change based upon their users' preferences may have unexpected repercussions. It's possible they may conform search results to political bias or religious beliefs. Thus, internet search results could be sub-divided into belief bubbles based upon certain demographics.

While Google may never implement such ideas, Artificial Intelligence is already rolling out in other ways. The new Google Smart Reply has an AI neural network reading your emails in order to predict how you might respond. For example, if a friend asks you via email if you have plans tonight, then the AI would suggest answers like "No plans yet" or "I'm too busy writing this dang article." This AI feature is supposed to be helpful, even if it is a bit creepy.

According to Popular Mechanics, Google Senior Research Scientist Greg Corrado says a “bizarre feature of our early prototype was its propensity to respond with ‘I love you’ to seemingly anything.” Who knows, maybe this AI will try to play matchmaker between President Hillary Clinton and Vladimir Putin.

The real question is whether this is an Ex Machina type of faked love or just a quirk of the neural network program. More likely, it's simply an oddity caused by the fact that "I love you" is a commonly used phrase in emails and texts. Still, shouldn't we be a little bit uneasy that an Artificial Intelligence, never mind corporations, knows something like that in the first place?

In the end, there doesn't seem to be any reason to worry about these new AI systems invading our everyday lives in a hostile manner, unless you count being loved to death. Although, personally, I'd watch it if Google ever says, "I'm sorry, Dave, I'm afraid I can't do that."

[Image via Rudy Jan Faber]