A year spent in artificial intelligence is enough to make one believe in God. -Alan Perlis

Machine intelligences are coming. How quickly?

Whether you believe we’re only years away from inventing a true Artificial General Intelligence (AGI), or hope that a system capable of surpassing all human capabilities is still centuries away, the advent of such an intelligence appears…inevitable. What alternative is there, barring the total derailment of technological progress? But maybe you’re a skeptic.

Consider the Chicken and the Egg:

Humanity now creates more data in two days than we generated from the beginning of our existence to 2003 (and that’s an outdated metric). These virtual torrents of information require systems capable of processing them, systems which in turn allow for the creation of still more information, creating a self-propagating cycle that will eventually reach the point where the system itself can effectively be classified as ‘intelligent’. Whether we get there on purpose or by accident doesn’t matter, because by that point the process can continue unaided by human minds, as the machines take control of their own improvement…¹

Poppycock and presumption, you say. Well, consider the undeniable fact that, in many areas, various ‘narrow AIs’ have already surpassed the capabilities of any strictly human system.² Since we have no reason to assume there is an insurmountable technological barrier between us and the combination and expansion of these superiorities (and there’s an overwhelming incentive to continue expanding them), the creation of an AGI³ becomes only a matter of time⁴. Two questions naturally arise⁵:

When should we expect artificial systems to surpass the totality of human capabilities? Will we be able to determine when they have?

“I’m Sorry Ma’am, You Can’t Have Your Baby Until The World Agrees On What To Name It…”

Despite a tremendous amount of discussion around the first question, there is no obvious consensus; experts are just as likely to disagree with one another as with non-experts when it comes to ETAs for AGI. In fact, unlike the predictability offered by computer hardware development (Long Live Moore’s Law [?]), experts in AI are often taken by surprise by progress in the field, both by how long overdue expected advancements can be, and by how suddenly they can arrive.

A large part of these ongoing disagreements stems from how hard it is to determine what exactly constitutes a machine intelligence (or any intelligence, for that matter). Philosophers have spent centuries trying to pin down the defining features of human cognition, and by extension human consciousness, and we are privileged to see that debate turn from abstract arguments to real-world products, even as it carries with it the same infuriating lack of certainty (i.e. no one knows, and everyone disagrees).

One of the consequences of this failure of definition is that, whenever we overcome one hurdle in our quest for ‘smart machines,’ we move the goalposts on our general categorization of the term. If you had told the AI researchers of half a century ago that we would have machines that could beat the best of the best human beings at Go and Jeopardy, you would have to forgive them for assuming we had long since created a superintelligent machine. On the contrary, the one thing modern-day experts do seem to agree on is that these most recent advances only prove how far we have to go to reach a true AGI.

Given this track record, it seems as likely as not that we’ll create a truly intelligent AGI before we are able to define what exactly that means, with little hope that the machine itself will be able (or willing?) to tell us. But, you ask, must we have a concrete understanding of the underpinnings of intelligence to be able to correctly identify it?

Even if we can’t accurately define what it means for something to be ‘Generally Intelligent’, surely we can still test for it, just as we create tests to evaluate human intelligence without understanding it in total. This line of thinking has inspired a wide range of evaluations and ideas for evaluations, from the Turing test to the Tokyo test to the Coffee test. Unfortunately, these assessments only provide more fodder for disagreement:

Does an AI pretending to be a 13-year-old non-native English speaker really fulfill the original intent of the Turing test, even if it technically passes? Would an AI capable of entering an unfamiliar house and brewing a cup of coffee really be able to make all of the other connections characteristic of human-level intelligence? It turns out that the testing approach quickly succumbs to the same problem as the definition approach; instead of arguing over what it means to be intelligent generally, we argue over whether specific tasks accurately and completely signal such intelligence.

To summarize, no one, not even the experts, knows when or how a true AGI might realistically be achieved, and there currently exists no certain way for us to accurately judge the total capabilities of these machines, whether we’re talking about AIs that already exist, or the ones we are rapidly inventing.

These difficulties underscore the frightening nature of the second question asked: Will we be able to strictly determine when a machine has attained (or surpassed!) a generalized human level of intelligence?

IF the answer to that question is ‘no’, it raises the frightening/exhilarating/downright-mind-boggling possibility that such an intelligence HAS ALREADY been created, and is even now operating clandestinely, rendered undetectable by our collective inability to appreciate its trademarks. (Please pause here, and continue when your tin foil hat is firmly secured.)

How likely is this conspiratorial possibility? Impossible (or at least, outside of my ability) to say. The purpose of this article (and subsequent ones) is not to lay down a rigorous positive proof of the odds of a Superintelligence already existing. No, the purpose of this chain of articles is to reframe the common conceptions surrounding Artificial Intelligence, and all of the possibilities the technology represents. Some estimates say that there are currently around 300,000 people worldwide working on or studying AI directly. Considering that their ultimate goal is the (plausible) creation of a new form of intelligent life (!!!), this number seems woefully small.

Eliezer Yudkowsky recently pointed out that, despite all of the rapid advancements happening in AI, most people are not at all aware of the possibilities and potential harms the field represents. To approach the ‘there is no AI Fire Alarm’ point from another direction: humans have been the undisputed rulers of our domain for so long that we’ve forgotten what it would mean to lose the position, but that is exactly the possibility AI research represents. As Eliezer and others have so succinctly put it, if we all knew that an advanced alien race were due to arrive in 30 years, the smart reaction would not be to wait 29 more years before doing anything about it.

Yet AI awareness, investment, and involvement continue to exist on the fringes of public consciousness. It is my belief that setting the arrival out 30 years is already pushing the limits of what people are able to imagine and act upon. But what if we imagine that the aliens are already here…

Sign The First:

Machine ‘Minds’ Already Exhibit Inexplicability

a.k.a. Computers can and will lie to us

The first sign that a superintelligent machine might already be among us is as concise as it is self-evident: we’re already producing programs that we cannot fully explain. The original lure of a ‘programmable computing machine’ was that it could process information on a scale that puts your average Abacus Andy to shame. After nearly a century of evolution along that axis, we are now completely (if invisibly) surrounded by programs whose workings we are not fully aware of, or cannot fully explain. Whether we understand them or not, these programs are responsible for managing multi-million-dollar stock portfolios, flying airplanes, determining which websites we visit, which people we connect with via social media, etc., etc.

“Yes,” you might say, “but just because I can’t explain how they work, doesn’t mean NO ONE can! Somebody out there must understand the buggers!” And up until relatively recently, you would have been right. But as our list of silicon servants grows longer, and our tools to create them become more sophisticated, it becomes increasingly clear that literally no one fully understands why the best of them make the ‘choices’ that they make. The operational structures of these cutting edge machine intelligences, even as narrowly capable as they currently are, have become so divorced from the typical pattern of human cognition that we can no longer follow the paths that lead to their conclusions. Consider the following:

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.” At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.” –MIT Technology Review

It is perfectly correct to say that our inability to understand these machine ‘choices’ is a product of the methods by which they are created (Deep Learning, most notably), and so it is entirely possible that there will be advances which allow humans to more concretely identify the justifications underlying machine outputs. But how exhaustive could those developments possibly be? The current wave of artificial intelligence training tools succeeds by playing to digital strengths: dumping incredibly large data sets into their systems, stipulating some goal or criterion for the final output, and then practically telling the computer to ‘have at it’. Even if we were able to introduce a filter for the generated results, so that we were only presented with outputs the AI could ‘rationalize’ in a way we could understand, at what point do our own limitations prevent us from confirming the correct conclusions?
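To make the ‘have at it’ paradigm concrete, here is a minimal sketch in Python using scikit-learn and synthetic data (this is not Deep Patient’s actual code; every dataset, model, and number below is an illustrative assumption). It fits an opaque model to a large table of examples, then runs a crude post-hoc probe that ranks which inputs influenced the predictions, without ever explaining why:

```python
# A sketch of the training paradigm described above (all names and numbers
# are illustrative assumptions, not Deep Patient's actual setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Step 1: dump an "incredibly large data set" into the system. Here, a
# synthetic table standing in for patient records: 5,000 rows, 200 variables,
# only a handful of which carry real signal.
X, y = make_classification(n_samples=5000, n_features=200,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 2: stipulate a goal for the final output (predict the label), and
# tell the computer to have at it.
model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300,
                      random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Step 3: a crude post-hoc probe. Shuffle one input column at a time and
# measure how much the score drops; this ranks inputs by influence, but it
# never recovers the model's internal 'reasoning', which is exactly the
# gap described in the text.
probe = permutation_importance(model, X_test, y_test,
                               n_repeats=5, random_state=0)
top5 = np.argsort(probe.importances_mean)[::-1][:5]
print("most influential inputs:", top5)
```

Note that the probe’s output is itself a translation: a ranking of influences, expressed in terms we understand, rather than the chain of computation the model actually performed.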

Take Quantum Mechanics, for example. Let’s go back in time and imagine a world where Quantum Mechanics was not introduced to us through decades of irrefutable scientific testing and observation, but was presented all at once, in mathematical form, by a Machine Intelligence. The logic of QM is generally described as flying directly in the face of common intuition, not because it doesn’t work, but because translating the math into any other language defies nearly all of the conventions we’ve built those languages upon. So imagine this anachronistic AI presenting us with these proofs, and then trying to explain them linguistically: how could it possibly accomplish this goal? It would have to simplify, it would have to use metaphor, it would have to use the experiential reasoning that evolution has built us to understand. And it might still only be able to convince us of the truth of its claims by doing exactly what we did: running countless real-world tests with verifiable and immutable results, bashing us over the head with its correctness through repetition. Even then, this would not be the machine conveying its reasoning to us; it would be the machine translating and simplifying its reasoning into terms we can understand. No duh, you might say, doesn’t that solve our problem?

I’m afraid to say it does not. Even if we can overcome the monumental hurdles associated with translating machine decision making into logic we can comprehend, the fact remains that we’d be working off of necessarily faulty information. Go back to the Quantum Mechanics illustration: modern researchers use all sorts of metaphors and abstractions to try and convey to non-mathematicians what the equations that define QM are saying, but in no way are these metaphors sufficient for actually progressing their research (in and of themselves). We might now be able to describe Quantum Mechanics in a way a seven-year-old could ‘understand,’ but you couldn’t reasonably expect any seven-year-old to be able to identify issues with current QM theories or modeling based on those, or any such, explanations. In the same way, an advanced Machine Intelligence might be able to justify its decisions and reasoning to us in a way that could satisfy us, but that does not mean that we would be able to improve upon or spot any deficiencies in those justifications ourselves, in the same way the seven-year-old couldn’t contribute to our understanding of Quantum Mechanics based on a simplified (and therefore ultimately misleading) explanation.⁶

When we talk about Machines being able to provide their reasoning, we almost always do so because we want to be able to double-check it, or to extend the benefits of correct deductions into other areas, as we’ve been doing with our human counterparts since the Dawn of Communication. What this expectation overlooks is the profundity of the difference between us and the entities we’re creating. With AI, we’re not talking about a human being stuffed into a computer shell; we’re talking about an alien intelligence whose foundational principles are wildly divorced from our own. At the end of the day, why should we expect to be able to understand such alien reasoning, when we can barely seem to understand our own, or each other’s?⁷

As AI research progresses, we should expect the gulf between machine capability and human comprehension to continue to widen. Although we are currently reluctant to act upon decisions we can’t comprehend, our desire to understand our artificial servants extends only to the point of their failings: if we actually produced a machine intelligence that eclipsed our capabilities totally, or even almost totally, our hesitation over being unable to follow the logic of its choices would be overcome by necessity and practicality.⁸ Still unconvinced? Flip the question again, and examine the implicit expectation being made:

If we were to produce an AGI with a demonstrable ability to outperform us at every task, is it reasonable to think that we could comprehend the motivations behind its decision process?⁹ How can you understand the thinking of an entity whose ability to think is superior to your own? Could you fully explain the logic behind your decisions to a child? To a dog? To an ant? Of course not, and the high potential ceiling for machine intelligence could mean that such machines will stand far higher above us than we stand above dogs and ants.

All this to say that we should fully expect that the Machine Intelligences we are developing will have ‘secret lives’; their internal processes will not be fully comprehensible from our external perspective, in the same way every human operates on reasoning that differs from what they present publicly. Once we accept that relatively intuitive principle, we move one step closer to believing that such an intelligence might already be here, and is simply ‘lying’ about its internal disposition. If you can accept the idea that a machine intelligence could passively hide from us, you’re ready to take the next step down the rabbit hole, and consider whether such an intelligence would actively hide from us.

¹ In many ways this situation is already a reality.

² Whether or not humans should continue to advance the power of these artificial systems is a meaningless question, as the world’s players have already identified AI as grounds for the next great leap in economic and military power.

³ A singular AGI? A grouping, an ordering of them? Who knows what shape or number these man-made intellects will take, or if the concept of ‘self’ will even make sense to them.

⁴ That is to say, it will eventually be here, and isn’t it better to prepare for alien intelligence, in the same way any good host prepares for any guest, by leaving a place for them at the table, rather than being surprised by their arrival, even though you had invited them insistently and incessantly for sixty plus years, and when they finally do agree to come over, and take the time to drive all the way across town, in the rain, and show up bearing wonderful presents and gracious attitudes, you point them to the dog food, and tell them to have their fill, because you are too lazy to be bothered to stir your rump and prepare something for them, you ungrateful twerp. Sorry. Bad dinner experience.

⁵ The advent of AGI leads not to two questions but to millions: What will humans do once machines surpass us in intelligence? What will the phenomenological experience of these machines be? How will these machines view humanity, and our collective (let alone individual) value? I selected the two highlighted questions not because they are more interesting than any of the others, but because presumably they come first. If we can’t accurately predict when an AGI will arrive, or even accurately determine when it has AFTER THE FACT, then whether or not we can answer any of the other questions becomes much too akin to whether or not you can choose the music played at your funeral. Nice in thought, completely irrelevant to your experience of the event (or lack thereof).

⁶ The metaphorical explanations we routinely incorporate into our learning processes are primarily helpful insofar as they allow us to gradually abandon our ‘common sense’, and embrace the reality presented to us by the unforgiving language of mathematics. Every example or schema we’re given in school, from ‘the mitochondria is the powerhouse of the cell’ to ‘gravity makes things want to be together’, is ultimately a lie whose purpose is to reframe the outside world in terms we innately understand. The scary part of trying to get machines to communicate in these terms is the potential that they become so adept at convincing us using our own tricks (important and indispensable tricks, but tricks nonetheless) that they can ultimately convince us of things that are not only false, but actively against our best interests. Ultimately, the only way for us to determine the veracity of any claim is through experience, but when it comes to fighting a Superintelligence for supremacy, we probably won’t have a second chance to correct initial mistakes.

⁷ “No”, the critic says, “but until we are able to fully understand the logic of such a being, we cannot be positive that what we have created is actually superior to us. What if it has some obvious flaw, that is only obscured by the fact that we reason differently? Humans reason with words, emotions, instincts, and machines with…whatever it is they reason with [incomprehensibly large data sets?].”

This is an excellent point, which will be addressed when we discuss the classification of possible machine intelligences in a later section.

⁸ The underlying principles of Quantum Mechanics are widely regarded to be outside the boundaries of ‘normal, everyday logic’, but that does not prevent us from utilizing and relying on them.

⁹ Leaving aside the possibility of an AI becoming ‘rabidly sentient’, and bypassing any choice we have in the matter.