This is just one exercise that we could do to imagine a future in which we are irrelevant bystanders. A world in which we kneel at the outer wall of a kingdom we’re locked out of. This would be the world in which artificial superintelligence, or ASI, has emerged.

ASI is an intellect that exceeds all the smartest, most capable human beings in every field, in abstract reasoning and social manoeuvring and creative experimentation, by unfathomable degrees. This intelligence could take form as a seed AI, a few cognitive steps above a person, or as a mature superintelligence that soars miles above and beyond the blip, the dot of us, collected.

ASI would only come one step after an artificial general intelligence (AGI), or an AI that models all aspects of human intelligence, is realised. An AGI can do anything a human can, including learn, reason and improve. Of course, neither AGI nor ASI has been achieved, but to hear the great scientific minds of the world speak, both end states are fast approaching. The question isn’t whether they are coming, but when.

ASI will function in ways we can’t and won’t understand, but it won’t necessarily be unfriendly. Friendly, unfriendly, moral and immoral — these concepts won’t apply. An ASI would be motivated by interpretations of the world within cognitive frameworks that we can’t access. To an ASI, humanity could appear as a large, sluggish mass that barely moves. Cyberneticist Kevin Warwick asks, ‘How can you reason, how can you bargain, how can you understand how [a] machine is thinking when it’s thinking in dimensions you can’t conceive of?’

To answer this, I turned to poet Jackie Wang’s essay, ‘We Epistolary Aliens’ (from the anthology The Force of What’s Possible), in which she describes a trip she took to the UFO Museum and Research Centre in Roswell, and how disappointing the aliens she saw there were. She writes,

I left feeling that representations of aliens are an index of the human imagination — they represent our desire for new forms. But what has always confused me about depictions of aliens in movies and books is this: aliens could look like anything and yet we represent them as creatures close to humans. The aliens at this museum had two legs, two eyes, a mouth — their form was essentially human. I wondered, is this the best we can come up with? Is it true that all we can do when imagining a new form of life is take the human form, fuck with the proportions, enlarge the head, remove the genitals, slenderise the body, and subtract a finger on each hand?… We strain to imagine foreignness, but we don’t get very far from what we know.

She gestures, through a series of poetic leaps, at what else an alien could be,

But my alien is more of what’s possible — it is a shape-shifter, impossibly large, and yet as small as the period at the end of this sentence — . My alien communicates in smells and telepathic song and weeping and chanting and yearning and the sensation of failure and empathic identification and beatitude. My alien is singular and plural and has the consciousness of fungus, and every night, instead of sleeping, it dies, and in the morning is resurrected.

Carving out this space for her own aliens, Wang models what is sorely needed in the world of AI — an imaginative paradigm shift. Think of us all in preparation, in training, for what is to come.

In our collective imagination, artificial intelligences are their own kind of alien life form. They are slightly less distant spectres of deep power than aliens, which glitter alongside the stars. Artificial intelligence perches close to us, above us, like a gargoyle, or a dark angel, up on the ledge of our consciousness. Artificial intelligences are everywhere now, albeit in a narrow form — cool and thin in our hands, overheated metalwork in our laps. We are like plants bending towards their weird light, our minds reorienting in small, incremental steps towards them.

As speculative models of potential omniscience, omnipotence and supreme consciousness, artificial intelligences are, like aliens, rich poetic devices. They give us a sense of what is possible. They form the outline of our future. Because we struggle more and more to define ourselves in relation to machine intelligences, we are forced to develop language to describe them.

Because the alien and the artificial are always becoming, because they are always not quite yet in existence, they help us produce new and ecstatic modes of thinking and feeling, speaking and being. I’d like to suggest that they enable a type of cognitive exercise and practice, for redirecting our attention towards the strange, for constructing spaces of possibility, and for forming new language.

The greats, like William Gibson, Robert Heinlein, Octavia Butler and Samuel Delany, have long been arcing towards the kind of strangeness that Wang is talking about. Their AI fictions have given us our best imagery: AI, more like a red giant, an overseer, its every movement and choice as crushing and irrefutable as death; or, a consciousness continually undoing and remaking itself in glass simulations; or, a vast hive mind that runs all its goals per second to completion, at any cost; or, a point in a field, the weight of a planet, in which all knowledge is concentrated. These fictions have made AI poetics possible.

When I think of that hive mind turning malignant, I see, in my individual mind’s eye, a silent army of optic-white forms in mist, in the woods, as horrifying to us as a line of Viking raiders must have looked to hapless villagers in the 10th century. Silent, because they communicate with one another through intuitive statistical models of event and environmental response, picking their way across the woods, knowing when to descend, kneel, draw.

For most people, thinking of a world in which we are not the central intelligence is not only incredibly difficult but also aesthetically repulsive. Popular images of AGI, let alone true ASI, are soaked in doomsday rhetoric. The most memorable formulations of mature AI — SHODAN, Wintermute, the Shrike of Hyperion, the Cylon race — are all preoccupied with the end of humankind. But apocalyptic destruction is not a very productive or fun mode.

It is a strange cognitive task, trying to think along non-human scales and rates that dwarf us. We do not tend to see ourselves leaning right up against a curve about to shoot skyward; most of us do not think in exponential terms. A quantity that doubles just thirty times has grown more than a billionfold. A future in which these exponential processes have accelerated computational progress past any available conception is, ultimately, the work of philosophy.

At this impasse, I ran into the work of philosopher Nick Bostrom, who puts this training mode to work in his book, Superintelligence: Paths, Dangers, Strategies. The cover has a terrifying owl that looks into the heart of the viewer. Bostrom’s research mission is to speculate about the future of humankind, from his tower at Oxford’s Future of Humanity Institute.

Superintelligence is an urgent, slightly crazed, and relentless piece of speculative work, outlining the myriad ways in which we face the coming emergence of ASI, which might prove an existential, civilisational catastrophe. The book is devoted to painting what the future could look like if a machinic entity that hasn’t yet been built does come to be. Bostrom details dozens of possibilities for what ASI might look like. In the process, he spins thread after thread of seemingly outlandish ideas to their sometimes beautiful, sometimes grotesque, ends: a system of emulated digital workers devoid of consciousness; an ASI with the goal of space colonisation; the intentional cognitive enhancement of biological humans through eugenics.

Most interesting to me was how heavily Bostrom relies on metaphor to propel his abstractions into thought experiments. Metaphors are essential vessels for conceiving the power and nature of an ASI. Bostrom’s figurative language is particularly effective in conveying the potential force and scale of an intelligence explosion, its fallout, and the social and geopolitical upheaval it could bring.

The most chilling metaphor in the book: when it comes to ASI, humanity is like a child in a room with no adults, cradling an undetonated bomb. Elsewhere, he suggests that our intelligence will be to an ASI what an ant’s intelligence is to us.

In a recent piece for Aeon, ‘Humanity’s Deep Future’, Ross Andersen writes,

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture.

Andersen spoke to Bostrom about anthropomorphising AI, and reports,

Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.

Hurricanes, star systems — for me, the image of an intelligence with such primordial, divine force sunk in deeper than any highly technical description of computational processing. Not only does the image of ASI as a hurricane cut to the centre of one’s fear receptors, it also makes the imaginings we have come up with, and continue to circulate (adorable robot pets, discomfiting but ultimately human-like cyborgs, tears in rain), seem absurd and dangerously inadequate to what is to come.

Thinking an ASI would be ‘like a very clever but nerdy human being’ is not only unbelievably boring but also potentially disastrous. Anthropomorphising superintelligence ‘encourages unfounded expectations about the growth trajectory of a seed AI and about the psychology, motivations and capabilities of a mature superintelligence’, Bostrom writes. In other words, the future of our species could depend on our ability to predict, model and speculate well.

It seems plausible that alongside a manifesto so committed to outlining the future, an accessible glossary might start to appear. Let’s call this a dictionary of terms for ASI, for the inhabited alien, for the superpower that dismantles all material in pursuit of an amoral, inscrutable goal.