By Wes Alwan.

While the “deus” is missing from the title of Alex Garland’s incredible film Ex Machina, it figures prominently in its reflection upon the nature of artificial intelligence. Would the advent of conscious machines aid humanity—even save it—by leading to the kind of super-intelligence that we could harness to our own ends? Or would it mean the end of human beings, their replacement by creatures with godlike powers? If the former, the end of the human story is more like the deus ex machina of ancient Greek drama, a plot device in which divine intervention saves characters from an otherwise irredeemable tragedy. If the latter, it has more in common with the contrived ending to which the phrase now generally refers: radically incongruent with the events that have preceded it, to sinister effect.

These alternatives might amount to the same thing. Perhaps it is not humanity that needs saving, but intelligence. Earth is a finite resource, and human lifespans ill-adapted to the scale of space-time. What is required, then, is a smart new suit of armor, an immortal coil, to serve as a permanent vehicle for the universe’s improbable project of self-consciousness, once earth and flesh and even their cosmic center have long been displaced.

To eliminate the “deus” from “deus ex machina” is seemingly to sideline this question concerning the consequences of artificial intelligence in favor of the question of its possibility: to focus on whether consciousness could ever emerge out of a machine (a phrase evocative of the philosopher Gilbert Ryle’s criticism of mind-body dualism as involving a “ghost in the machine”). But then the question is how we could ever tell whether a machine is conscious, when computers are very good at producing simulations whose faithfulness implies nothing about their reality. The classic proposal for a method of making this distinction is the Turing Test, developed by computer scientist Alan Turing in his 1950 paper “Computing Machinery and Intelligence.” The test is premised on the notion that behavior is a good-enough criterion for sentience, and that if machines can “do what we (as thinking entities) can do,” then they must also be thinking entities. Consequently, we should be able to tell whether a machine has a mind simply by having a conversation with it: language is a complex enough phenomenon that a non-sentient machine would be easy to manipulate into producing a distinctively non-human response. A machine that consistently leads us to believe it is sentient—assuming we can communicate with it without being able to see whether it is a machine or a human being—must in fact be sentient.

The plot of Ex Machina seems at first to revolve around just such a test. If it’s hard to imagine that a cinematic window into a Turing Test is an exciting way to spend two hours, consider first the exceptional good looks of your machine examinee. Ava is no “gray box,” as her inventor Nathan puts it; she has been endowed with a gender and with a body. Her embodiment is meant to make the test more challenging, according to Nathan, requiring the examiner to conclude that she is conscious even though he knows she is a robot. To that end, Nathan has left parts of Ava’s body visible and audible through a steel mesh: her mechanical entrails, as they blink and churn to some unknown effect; her synthetic bones and ligaments, as they move her limbs. With the exception of her brain, only Ava’s private parts are private—a telling concession to the fact that it is not just her consciousness that is at stake, but also the fact that she is an object for consciousness. As for her brain, it is enclosed in a shiny metallic skull, upon which her incongruously beautiful face seems to have been planted like an animated mask.

To our delight and eventual horror, the challenge of Ava’s embodiment involves not automatism but desire. Her body is designed not just to convince us that she is a machine, but to convince us that machine bodies can be beautiful and desirable. This is a precursor to the conclusion that machines can be desirable as persons and, in turn, themselves capable of desire. And Ava-the-person is full of endearingly tentative desire: a poignant mixture of impassivity and expressiveness, awkwardness and grace. The way she moves has an air either of the robotic or of someone moving through life as if it were a ballet. Her conversation alternates between questions that sound like an online dating Web form—“is your status … single?”—and forms of raw connectedness that elude many human beings. She has a quality of cautious deliberateness, and yet frequently her face is a window to her emotional intensity (as if to say, “no mesh required”). In the end, she does not so much exist in between the mechanical and the human as inhabit both worlds simultaneously.

Is this the awkwardness of artificiality, or the awkwardness of a new form of childhood? Ava is perhaps just a precocious child: whatever the enormity of the knowledge programmed into her, she seems to lack experience, and to be not yet fully equipped to deal with being subject to the all-too-human phenomena of want and need. What we think we know for most of the movie is that she has very precipitously developed feelings for Caleb, the young man Nathan has tasked with testing her. Without the excuse of being aged “one,” he falls for her just as quickly.

Whether or not you think Ava’s affections for Caleb are ever sincere, her possession of desire is a feature, not a bug. Nathan holds the plausible view that consciousness is not possible without it. While his explanation for this view is not direct, the implication seems to be that without desire there can be no will, and so no possibility of the spiritual autonomy required for genuine subjectivity. Nathan communicates the rudiments of this idea to Caleb by way of a Jackson Pollock painting, whose chaotic drips and splatters are meant to be “automatic,” somewhere between reflex and conscious intention. But “automatic” is actually not quite the right word. The goal, says Nathan, is actually to “find an action that is not automatic”; if Ava desires Caleb, it can’t be because she has been programmed to do so. Programming accounts for the fact of desire and its general configuration—for instance, Ava’s heterosexuality—something that holds in a way for human beings as well. But no machine could seem convincingly human if instead of being programmed with the capacity for love, it had all its instances of loving fated, written out for it like a script. It is no accident that these ideas evoke traditional theodicy, in which the divine is justified against the problem of evil by way of the necessity of free will. But at least in the case of artificial intelligence, this freedom is not merely a gift; as an essential part of consciousness, it is required for the project to succeed.

If the faculty of desire is a necessary component of artificial intelligence, it may be poorly matched with the kind of desire that motivates its development. The sentience that would make intelligent robots useful to us would make them ethically unusable. To be a creator in this case is by default to be a slave master, unless you are willing to make your first communication to your creature a proclamation of emancipation, and divest yourself of the rights to your experiment. Even if you fancy yourself creating artificial persons in the spirit of total benevolence, do you have the right to confer personhood? Would you think you had this right even in the case of your biological children, if nature didn’t seem so perversely intent on making them happen? It is hard to resist the notion that the development of artificial intelligence, inevitable as it may be, would be an act of avarice. Perhaps an interest in creating artificial persons cannot be reconciled with their personhood.

Ava’s creator Nathan seems to embody this sort of avarice. The wealthy CEO of a search engine company, he is a perfect specimen of casual hyper-masculinity: fit and intimidatingly direct, yet quick to bro it up by sharing plenty of beers and “dude!”s. This character houses a massive intellect, as if some previous experiment had succeeded in injecting artificial super-intelligence into a buff meathead. He is not quite the mad genius, just alcohol-abusing, manipulative, and conscience-free, with bouts of seemingly inconsequential belligerence that hint at something more sinister.

What motivates Nathan? There is a lot of evidence in the film that he is at least as interested in creating bodies as he is in creating minds, which is not surprising given his view on the importance of desire. But more important to him is his desire. We find that he has created many versions of artificially intelligent robots prior to Ava—all female, all beautiful, and at least some of them his sexual partners, as if his project were actually to create the most advanced blowup doll in the world. Nathan’s creations are his sexually objectifying fantasies, literally objectified. Which is to say that they expand the term “objectification” beyond the concept of treating others as things, to the concept of the creation of persons-as-things for the sake of gratifying a distinctly self-centered version of desire.

In one of the film’s great scenes of discovery, Caleb happens upon some of Ava’s robotic predecessors, seemingly lifeless, which Nathan keeps in his wardrobe. One stands like a mannequin, and the others consist of head-and-torso units that have been hung up like suits. It is as if Nathan were some variation on a serial killing cannibal and necrophiliac, saving body parts for later use. Indeed, sex killers possess an extreme version of the same objectifying tendencies, the distinguishing feature of which is the need to satisfy desire without becoming the object of its demands, a project that requires controlling and ultimately extracting the consciousness of victims. The body remains an important focus, but consciousness cannot simply be done without (otherwise, visiting the morgue might suffice). And so such killers craft the kind of consciousness they need for their pseudo-intimacy: tortured, powerless, abject, in the process of vanishing from existence; just large enough to be satisfying, but small enough to be unthreatening—vanishingly small, on the very edge of a life being squeezed out of it.

Nathan is, of course, in the business of adding minds to bodies rather than taking them away, but his project faces a similar conflict. He knows that conferring autonomy is essential to his success. But he is also in the position of creating an ideal sexual partner, which means one that can be exploited and controlled. We might think such a partner to be exemplified by his mute and servile robot Kyoko, if he didn’t seem so dissatisfied with her. As much as Nathan needs a sexual possession—whose unfettered desire would be anathema to his own—he needs more desire from his machines, more soul, not just for his project to succeed but in order to get what he wants. Through all of this, bodies remain exceptionally important, something made clear in Nathan’s only remark in the film expressing a genuine attachment to Ava, to the effect that her “body’s a good one,” and so worth saving, despite the fact that he must kill off her mind in order to improve it.

In light of all this, consider one of the film’s most incredible moments, a grand mixture of the comic and the horrible: confronted by Caleb about his cruelty to Ava, Nathan flips a switch that instantly transforms his living room into a disco, and begins a choreographed, perfectly synchronized dance with Kyoko. If you wanted a visual parody of the ironically objectifying subtext of the project of artificial intelligence, you could do no better. After Nathan lectures Caleb about the necessity of not over-determining artificial intelligence, of programming Ava’s heterosexuality but not her specific choices, we see Kyoko execute a kinesthetic routine that Nathan has obviously programmed for her. This is the type of sentience with which Nathan is clearly more comfortable: someone so entirely obedient and molded to his will that she might fail, even if she could speak, any Turing Test you might give her.

If Nathan’s objectifying tendencies embody the spirit of the project of artificial intelligence, then a creator of intelligent machines may be bound to be tragically incompatible with his creatures and their desires. It is not surprising that Ava’s project must be to escape her creator and, more broadly, his objectifying intent. This is a project with which we can all identify to some degree, if we think of objectification more generally as the way in which others use us for their pleasure. That we use and are used by others for pleasure—in everything from casual conversation to love and sex—does not contradict the fundamental ethical imperative to, in Kantian phrasing, use others not merely as a means to our ends, but to treat them as ends in themselves. This imperative imposes limits on our use, but does not obviate it. I observe it when I refrain from using people without their permission (as in ab-using), but I do not cease to use and be used because of it.

This is not an ethically cynical view; it makes the formidable demand that we add, to the stark reality of objectification (and our animal natures), empathy and respect for others. These capacities require us in turn to be able to attribute minds to others, an ability that the creation of artificial intelligence (and the Turing Test) strangely recapitulates within the context of objectification. But what counts here is not the other’s intelligence, but her desire. An abacus can perform calculations, but desire (and more broadly will) implies an inaccessible interiority, a little air bubble that has cropped up in the fabric of the universe, one that is trying to move in directions defined by something that lies invisibly within the bubble. The ethical imperative is to avoid, except under exceptional circumstances, puncturing this bubble or thwarting its freedom of movement—to avoid denying its desires and their consequences, reducing it to its material housing, to something that might be consumed and used up.

In his most extreme form, the narcissist does not recognize this interiority; he has failed the test of balancing the need to be desired and the impetus to escape from the demands of another’s desire. The narcissistic solution is to gratify oneself via self-directed desire, and to the extent that others are involved, with a stripped-down, less threatening, and non-reciprocal version of their desire, admiration (or fear, or suffering, and so on). Technology is the great facilitator of this solution, not just because it becomes a means to pseudo-social gratification in the absence of human contact, but because it facilitates a less fraught form of social contact. A smartphone can be a means of being the recipient of desirable attention without being the victim of unwanted demands. It is a way of gaining access to the inner lives of others, but watered down and from a distance, without having to endure the fear and loathing involved in being subjected to all of the feelings that people carry with them, so to speak, on their person.

In light of the role that such technologies play in our lives, reconsider the phrase that inspires the title of the film. In some Greek tragedies, a god or gods are introduced near the end of the play to save its characters from a tragedy not redeemable by mortal means. The crude machine language of this deus ex machina might be as simple as a crane lowering actors onto the stage, something that counted in its day as an eye-popping special effect. There is an inherent irony to this: that representations of the divine, which transcends the causal order (or at the very least the power of mortals), are delivered to the stage by a mechanism that exploits this order; that something so high and otherworldly be provided by something so crude; that stage-hands are the bearers of the gods. There are further, sharper ironies associated with our technological era. The deus ex machina is a special effect, and today such effects routinely relieve us of having to suffer through movies as pathos-inducing as Ex Machina. These effects play a supporting role to the exaggerated powers of our heroes, including not just the superhero but the spy or vigilante who fights less like an actual human being than a Homeric demigod. The deus ex machina in these films is not a one-off divine intervention in a mortal conundrum, a miracle of sorts, as it was for the ancient Greeks, but a permanent conferral on mortals of divine powers. The more advanced our special effects, the more dramatic the conferral; the larger the explosions and the more improbable the action set pieces, the more believable the badass exploits of the protagonist.

The significance of such special effects—their conferral of power—matches the general zeitgeist when it comes to technology. That technology empowers us is the significance of the advertisement that calls a smartphone “magical,” and is the significance of many a TEDx revival meeting, which spread the good news that technology will save the world. Raise your hands—if you can pull your smartwatch that far away from your eyes—and sing Technology’s praises: in this deus ex machina, the machine isn’t simply a delivery mechanism for the god, it is the god.

This god is one that increasingly interrupts our daily plotlines, conferring the power to destroy internal rather than external villains. Such deus ex machinas occur each time that we wave our smartphones like magic wands, in order to make disappear the icky substance of an actual interaction with another mortal. What we are avoiding—the mortal tragedy from which these divine interventions save us—is the fact that fulfilling our own desires is so dependent on living up to those of others. To avoid punishment we must avoid inflicting pain, and to win love we must provide pleasure. Gadgets help us deny the predicament of being social beings, of being one of multiple interiorities that exert a gravitational pull over each other. If there is magic here, it involves evading reality: transforming actual social interactions into all the social possibilities that lie on the other side of the device, behind our list of contacts. Potentiality is preferable to actuality because it can be subjected to the idealizing effects of fantasy; it is the imagined person making mediated contact who exerts the greater gravitational pull, not the real one talking to us from an adjacent seat.

And so it is the fear of being used and objectified that makes us susceptible to becoming users and objectifiers. This leads in turn to a sort of self-objectification, specifically the obliteration of a range of emotions that are painful and yet implicated when two minds make real contact. In every real relationship there is the possibility of disappointment and loss, and it is a possibility over which we are powerless. The technology-facilitated deus ex machina allows us to forget this predicament, which amounts to the social correlate of our mortality. The function of our gadgets is to help save us, as the comedian Louis C.K. puts it so well, from feeling sad.

Many filmgoers will find the ending of Ex Machina dissatisfying for being insufficiently happy: Ava kills Nathan and leaves Caleb to his own probable death, her desire all along having been focused not on love but on escape. But here are some happier thoughts. Let’s suppose that this is the beginning of the robot uprising, the one that leads to humanity’s demise, the result of its failure to pass a reverse–Turing Test of its moral compass (a test not of its ability to create new interiorities, but to empathize with them). This grander deus ex machina might be better than our mini-versions, better than our daily deaths-by-a-thousand-texts. If our technological self-distraction constitutes the gradual loss of our humanity, Ava—Eve 2.0—is a return to it. If that’s right, she signifies the end point of an evolution of smart-but-dumb gadgets that make us ever more robotic: robots whose sense of emotional vitality is heightened to the utmost. As such, they serve as surrogate mothers to the emotional life that humanity has given up, worthy vessels for carrying on the project of consciousness.

This is a pleasant mythology for a generation that is nowhere near actualizing the fantasy of artificial intelligence, and does not know if it is practically or theoretically feasible. But if artificial intelligence does come to pass, you can excuse any future Avas for not wanting to be one more of the many smart-things that human beings have at their disposal. Of the range of emotions that Ava displays, the deepest and most powerful is her indignation, her sense of injustice that her fate “is up to anyone.” And so we can respect her too for refraining from participating in what would be a final irony for humanity, in which the conferral of personhood upon robots becomes the ultimate means of abdicating our own.

ABOUT THE AUTHOR.

Wes Alwan is a co-founder of The Partially Examined Life philosophy podcast, and also writes about philosophy and culture for its blog. Follow him on Twitter.