The guy widely considered to be perhaps the smartest man on the planet is really upping his illuminati game this year. Stephen Hawking has long been a vocal critic of the development of Artificial Intelligence (AI), expressing some rather reasonable fears over what sort of decisions a machine with a truly superior intellect might make. But he also clearly feels that the genie is already out of the bottle and our robot overlords will be coming along sooner or later. So what can we do to prevent ultimate disaster and prepare ourselves before Terminator-style robots are patrolling our neighborhoods? It may require doing away with the quaint idea of individual nations and governments, replacing them with some sort of grand, global cabal. (The Independent)

Stephen Hawking has warned that technology needs to be controlled in order to prevent it from destroying the human race. The world-renowned physicist, who has spoken out about the dangers of artificial intelligence in the past, believes we need to establish a way of identifying threats quickly, before they have a chance to escalate… He suggests that “some form of world government” could be ideal for the job, but would itself create more problems. “But that might become a tyranny,” he added. “All this may sound a bit doom-laden but I am an optimist. I think the human race will rise to meet these challenges.”

Hawking himself immediately notes one of the key dangers in even beginning a discussion of such a planetary administration. That might sound wonderful if you believe in the ideas of socialism and benevolent dictators, but as a wise man once said, the problem with benevolent dictators is that you so rarely get two benevolent ones in a row. Any political body that powerful would almost certainly build up unchecked and unstoppable might before anyone had time to react. Even if such a government claimed to be making decisions in your best interest, there will never be a single set of rules which a majority of humanity would agree actually serves our personal interests.

I don’t want to throw this scientist’s baby out with the bathwater, however. He’s ringing some alarm bells which I still maintain we should be listening to. We are seeing ever more frequent incidents of wide-scale cyber attacks perpetrated by everyone from grubby hackers to first world governments. And there are justifiable concerns that such malicious activity could do a lot more than simply shut off your access to Netflix for a couple of days. Imagine the damage that could be done if the mind behind the next attack was working a million times faster and more efficiently than yours.

Still think these concerns are overblown? Have you seen what Jeff Bezos is up to these days?

I too believe that somebody might eventually crack the AI puzzle and develop a machine capable of critical thinking and superhuman intelligence. I still say “might” because to this day we know very little about the underlying operation of the human brain. If it’s truly nothing more than an unbelievably complex biological machine then we may be able to build something which surpasses it. But on the other hand, perhaps the only reason that we can actually think is because that brain is a vessel for a soul. If that’s the case, I don’t believe mere mortals will be able to cook one of those up anytime soon.

We can, at a minimum, ask the eggheads at Google and other places where AI projects are in the works to take some sensible precautions. If they manage to flip the switch on something which turns out to be as powerful as Colossus and Guardian in Colossus: The Forbin Project, we should at least be assured that the machines don’t have any pathway onto the Internet in general and can’t get their digital paws onto the controls of the power grid, to say nothing of the nation’s weapons systems. And for God’s sake, don’t let it hook up to the Internet of Things. That’s how we already got into trouble during the last major DNS attack and outage.

As long as we can keep any eventual Artificial Intelligence system sequestered in a single building we should be okay. And with all due respect to Dr. Hawking, we can manage all this without surrendering our independence and self-governance to any global organization.