1. In Alex Garland’s Ex Machina, the reclusive computer genius Nathan (Oscar Isaac) has called his next-generation internet search engine Blue Book, after Wittgenstein’s notebook of that name. Hanging on a wall in Nathan’s secluded mountainside retreat is Gustave Klimt’s portrait of Wittgenstein’s sister, Margarethe. And that retreat is in Norway, where Wittgenstein had himself sought refuge from society. (The movie was filmed in the same part of the country as the town where Wittgenstein stayed.) Why all this interest in Wittgenstein, in a science-fiction film about artificial intelligence? Good question.

2. Caleb (Domhnall Gleeson), a young computer programmer at Nathan’s company Blue Book, has been chosen at random (or so he’s been told) to spend a week at Nathan’s secluded retreat. Shortly after arriving, he learns that Nathan’s mountainside retreat sits atop a bunker-like laboratory, the site of Nathan’s secret, solitary work in artificial intelligence (and robotics). Nathan reveals that Caleb has been summoned for the purpose of assisting Nathan with testing his latest prototype – an automaton called Ava (Alicia Vikander).

The exercise is supposed to be an application of the Turing Test, often taken to be the gold standard for artificial intelligence. In a famous paper Alan Turing proposed that the salient test for an intelligent machine would simply be whether that machine can pass for human, in responding to questions put to it by an outside interviewer. Only the test Nathan intends has one crucial difference from Turing’s. The Turing Test envisions an interviewer who doesn’t know if his interlocutor is a machine or human — the whole point is whether he can guess correctly, given the responses to his questions. (In Turing’s original formulation of the test, in 1950, there were to be two subjects responding to questions, one machine and one human: the test would be passed if the interviewer couldn’t tell the difference.) Whereas in the exercise Nathan gives Caleb to perform with Ava, Caleb can see that Ava is a machine — she has a human face, and human hands, but an android’s body, the glowing inner workings visible through a transparent shell. He is told that Ava represents the latest in Nathan’s advances in the field of artificial intelligence, and he is invited to marvel at her body as a virtuosic tour de force of state-of-the-art robotics. When the question comes up, more or less, Nathan acknowledges that the exercise for which Caleb has been summoned is not Turing’s test. No indeed — Nathan muses — not the original Turing Test, but the next test, a test at the next level, a better test. Let Caleb see that Ava is a machine; see if Ava can answer his questions to his satisfaction.

A better test for what? we might wonder. Somehow Caleb himself doesn’t seem to wonder — then again, he’s got a lot else on his mind. (His host is a drunk and a megalomaniac; his phone doesn’t work; he’s developed a serious crush on the robot.) Is Ava the one being tested, or Caleb? It soon emerges that the lottery by which Caleb was selected was a ruse, and that Nathan already knows everything about him that any genius with a skeleton key to the internet would be in a position to know. So Caleb has been deliberately chosen — for what? The point of the exercise is ostensibly to see if Caleb can be persuaded to believe that Ava is human, knowing that she is a machine. What is he supposed to do about that prior knowledge, if he comes away so persuaded? Why is this something that Nathan should care to know? (Wittgenstein: “Suppose I say of a friend: ‘He isn’t an automaton.’ —What information is conveyed by this, and to whom would it be information? To a human being who meets him in ordinary circumstances? What information could it give him?”)

3. Another thing Caleb seems never to wonder: what real proof does he have that Ava’s seeming intelligence is artificial, anyway? Caleb sees (as we see) that her body is artificial; Nathan shows him the various parts from which she has been assembled, including the transparent brain. But how does Caleb know that Ava isn’t a puppet? How does he know that her answers to his questions aren’t being radioed in by Nathan, or by some other human being, coded through the appropriate speech and facial-expression simulator? Nathan is never present in the room when Caleb is interviewing Ava; Caleb (correctly) assumes he is watching on a video monitor. Why not wonder if he’s doing more than just watching? And there’s also another person on the premises, Nathan’s mysterious servant/concubine Kyoko (Sonoya Mizuno).

Watching the movie, we are shown that Nathan at his remote console does no more than watch, and we suspect from the first that Kyoko is as much of a robot as Ava. Caleb eventually learns as much, in both cases — but not until later, and without having ever registered the possibility that either of them might be Ava’s surreptitious puppeteer. What does this say about the situation he finds himself in? It would seem that Caleb is so beguiled by Ava’s apparent autonomy, that he never questions it as a premise. Or perhaps it’s our own beguilement that we’re being shown. (Wittgenstein: “To get clear about philosophical problems, it is useful to become conscious of the apparently unimportant details of the particular situation in which we are inclined to make a certain metaphysical assertion.” The Blue Book)

4. The movie reviewers have tracked down Wittgenstein’s Blue Book, and duly informed us that Wittgenstein therein poses the question, “Is it possible for a machine to think?” (The question is also posed in his Philosophical Investigations, for which the Blue Book was a preliminary draft.) The first and most important thing to be said about this is that it isn’t a question he has any interest in answering, even hypothetically. For him it isn’t a scientific question at all.

The trouble which is expressed in this question is not really that we don’t yet know a machine which could do the job. The question is not analogous to that which someone might have asked a hundred years ago: ‘Can a machine liquefy a gas?’

(The Blue Book)

Wittgenstein sometimes calls this a ‘metaphysical’ question, but by this he means neither to elevate it to a scientific (i.e., ‘objective’) question of an exceptional, rarefied sort, nor to dissolve it into a question of arbitrary (‘subjective’) opinion or decision. It is a question that confronts us with the (unclear) limits of our language, which are at the same time the (untested) limits of what we are able to recognize within our form of life. Philosophy for (the later) Wittgenstein is the struggle against the bewitchment of language, of captivating pictures — which amounts to a temptation to exempt ourselves from the human predicament.

“Could a machine think?” for Wittgenstein is not a question about machines (actual or imminent). It’s a question about ourselves, a question about what it would mean for us to be able to credit any unknown being with the capacity for thinking or feeling, whether a robot, a doll, an animal, or… a human being. What do we have to be able to imagine, in order to acknowledge the other as a thinking, feeling being? And what are we called upon to do, if we are to carry on coherently, in the context of that imagining?

It’s partly a question about what sort of criteria we might wish to be able to invoke in order to know for certain what’s going on in the head (as it were) of any other person. And partly a question about how we care to respond when we find (as we must) that those criteria fail to provide us with that certainty.

But can’t I imagine that the people around me are automata, lack consciousness, even though they behave in the same way as usual? — If I imagine it now — alone in my room — I see people with fixed looks (as in a trance) going about their business — the idea is perhaps a little uncanny. But just try to keep hold of this idea in the midst of your intercourse with others, in the street, say! Say to yourself, for example: ‘The children over there are mere automata; all their liveliness is mere automatism.’ And you will either find these words becoming quite meaningless; or you will produce in yourself some kind of uncanny feeling, or something of the sort.

(Philosophical Investigations, no. 420)

5. The bulk of the movie consists in Caleb’s “sessions” with Ava, interspersed with Caleb’s conversations with Nathan, and scenes of Caleb alone in his room. During the sessions, Caleb is separated from Ava by a glass partition; she inhabits a sealed-off enclosure. He asks his questions; she answers. Eventually she puts on some clothes, and takes more initiative in the conversation. She asks him if he’s a good person. He finds it difficult to answer. When she prods, he confesses with some embarrassment that he is. Being a good person, he presumably finds it too embarrassing to ask her the same. Or perhaps he simply doesn’t think to ask. What would that tell him?

From his own (windowless) bedroom, Caleb can also watch Ava on a silent video screen. She never leaves her enclosure; when she lies down she may or may not be asleep. At one point Caleb witnesses, via the video feed, what looks like an altercation between Ava and Nathan. He sees Nathan tearing up a picture that she has drawn, a picture of Caleb’s own face.

6. Nathan tells Caleb that Kyoko doesn’t speak English; there’s no point trying to talk with her. She is docile and mute; her origins are a mystery. When she spills something, Nathan verbally abuses her, like a slave; when he’s alone with her, they have sex. From the first, we assume that Kyoko, too, is possibly a robot, another Ava, encased in a more complete human body, deprived of the capacity for speech. Is this disturbing to wonder? — Not so disturbing as wondering if she isn’t.

After witnessing the altercation between Nathan and Ava, Caleb rushes out of his room to confront him — meeting Kyoko instead. When he attempts to speak to her, she responds by unbuttoning her blouse, to Caleb’s humiliated dismay. He begs her to stop, and when her face registers no comprehension, he fumbles to refasten the blouse himself. Nathan appears in the doorway, clearly inebriated. “I told you,” he says, “you’re wasting your time, talking with her.” Then, brightening: “You’re not wasting your time, if you dance with her.” He flicks a switch and the lights go red, dance music starts playing — and Kyoko, as if switched on too, starts dancing immediately. She dances with the vacant, inexpressive look of a person absorbed in the music, dancing alone at a club. Nathan gestures for Caleb to join her; when Caleb declines, he joins her himself. Only they don’t dance together. Without once looking at one another, they slip into a fabulous dance routine — a bravura, perfectly synchronized performance, comically complicated, neither acknowledging the presence of the other.

This, then, is how we learn that Kyoko is… a robot?

7. Nathan performs his video surveillance of Caleb’s sessions with Ava from a dimly-lit chamber, the walls of which are covered with thousands of colored post-it notes. This will be recognized by philosophical cognoscenti as an allusion to the philosopher John Searle’s famous Chinese Room thought-experiment. Briefly, Searle tried to refute the notion of strong artificial intelligence by suggesting that if there were a machine which appeared, for all intents and purposes, to be able to carry on a conversation in Chinese, it might be likened to a non-Chinese-speaking man hidden in a room, who had all of that machine’s algorithms written out on slips of paper. Searle held that the man working diligently with his slips of paper, performing all the algorithmic calculations, might well yield responses comparable to the Turing-tested machine’s (i.e., ex hypothesi, indistinguishable from a native Chinese speaker’s), but this man wouldn’t understand Chinese for all that. (I won’t attempt to explain Searle’s intuitions; I happen to find them disastrously incoherent.)

Very well. What is Nathan doing in this mock-up of Searle’s Chinese Room? What is Garland doing in putting him there? Not much, so far as I can tell. Nathan’s behavior in no way corresponds to that of the algorithmic translator in Searle’s thought experiment. Nathan is neither inside Ava’s head, nor dictating her utterances. He simply has her under surveillance. He’s as much an outsider to her mental life as is Caleb. Her autonomy as a character in the movie consists in her irreducible, all-too human opacity. (Wittgenstein: “Nothing is hidden here, and if I were to assume that there is something hidden that knowledge would be of no interest.”)

8. The American philosopher Stanley Cavell is the disciple of Wittgenstein who has had the most to say, philosophically, about movies. Cavell has written two indispensable books on that subject: The World Viewed and Pursuits of Happiness. But it’s another of his books that’s most relevant to Ex Machina — his 1979 magnum opus, The Claim of Reason: Wittgenstein, Skepticism, Morality, and Tragedy. There one finds the following:

“…Presumably there can be something, or something can be imagined, that looks, feels, can be broken and perhaps healed like a human being that is nevertheless not a human being. What are we imagining? It seems to me that we are back to the idea that something humanoid or anthropomorphic lacks something; that one could have all the characteristics of a human being save one.

“What would fit this idea? How about a perfected automaton? They have been improved to such an extent that on more than one occasion their craftsman has had to force me to look inside one of them to convince me that it was not a real human being. –Am I imagining anything? If so, why in this way? Why did I have to be forced?…

“Go back to the stage before perfection. I am strolling in the craftsman’s garden with him and his friend… To make a long story short, the craftsman finally says, with no little air of pride: ‘We’re making more progress than you think. Take my friend here. He’s one.’… It is clear enough that we may arrive at a conclusion that convinces me that the friend is an automaton. The craftsman knocks the friend’s hat off to reveal a mannikin’s head…” (403-4)

“Then the knife is produced. As it approaches the friend’s [i.e., the automaton’s] side, he suddenly leaps up, as if threatened, and starts grappling with the craftsman. They both grunt, and they are yelling. The friend is producing these words: ‘No more. It hurt too much. I am sick of being a human guinea pig. I mean, a guinea pig human.’

“Do I intervene? On whose behalf? Let us stipulate that the friend is not a ringer, not someone drawn into these encounters from outside. — It is important to ask whether we can stipulate this. If we cannot, then it seems that the whole thing must simply be a science [fiction] or a fairy tale. But if it were taken as a science [fiction] or fairy tale, then we would not have to stipulate this. It would be accepted without question. — But only if it were a successful story. There are rules about this” (405-6).

“Suppose, satisfied with the degree of my alarm, and my indecision about whether to intervene, the craftsman raises his arm and the friend thereupon ceases struggling, moves back to the bench, sits, crosses his legs, takes out a cigarette, lights and smokes it with evident pleasure, and is otherwise expressionless…. The craftsman is happy: ‘We — I mean I — had you going, eh? Now you realize that the struggling — I mean the movements — and the words — I mean the vocables — of revolt were all built in. He is — I mean it is — meant — I mean designed — to do all that. Come look here.’ He raises the knife again and moves toward the friend” (406).

“Amazement was my response, my natural response, when I knew the friend was an automaton. ‘I can’t get over it,’ I keep wanting to exclaim… The peculiar thrill in watching its routines never seems to fade. But if I cannot get past my doubt that this friend is an automaton, and past holding the doubt in reserve, then I am not amazed, except the way I may be amazed at the capacities of a human being, say at somebody’s stupidity or forbearance or skill” (412).

“What is the nature of the worry, if it is a real one, that there may at any human place be things that one cannot tell from human beings?” (416).

“What is the object of horror? At what do we tremble in this way? Horror is the title I am giving to the perception of the precariousness of human identity, to the perception that it may be lost or invaded, that we may be, or may become, something other than we are, or take ourselves for; that our origins as human beings need accounting for, and are unaccountable” (418).