
Over the break, I watched Black Mirror, the highly acclaimed British futurist show. I am tempted to call it a tech-dystopian show, but it isn’t quite that. To count as dystopian, you need a corresponding utopian vision that has failed or is stillborn, and the show doesn’t offer or suggest any. People have also been comparing it to The Twilight Zone, but I think the comparison is off. The what-ifs of The Twilight Zone are about the nature of the world, in the sense of what if the Earth were spiraling into the Sun? The what-ifs of Black Mirror are about human nature, as in what if all our relationships are based on lies?

The shows differ in their humor content. Black Mirror is humorless right now, but I suspect The Twilight Zone was at least a little funny back when it first aired. More to the point, while The Twilight Zone has acquired significant camp value (in Sontag’s sense of failed seriousness) today, somehow I don’t think Black Mirror will seem campy in a few decades; merely obsolete like old documentaries. This difference in humor content is significant in a way I’ll get to.

So what we have here is a dark but not dystopian show. A show about a steady loss of innocence through increasing knowledge (enabled by technological evolution rather than a fall from Eden), but not about apocalyptic collapses. A show that is not anti-technology per se, but about the idea that technology makes life easier in part by forcing harder, if rarer, choices upon us, as the price of automating simpler, more commonplace decisions. About going from moral mediocristan to moral extremistan.

It’s not quite a must-watch as far as the entertainment value goes. It has the ponderousness of a lecturing professor. But it’s a must-watch in the sense of cultural homework. People will be using the show as a reference point for talking about the emerging future for at least a few years. The conclusion most will jump to is that this is a show about tech dystopias, but it is really a show about the theory that hell is other people. The futurism angle is that information technology makes this particular kind of hell more possible.

I don’t have spoilers in this post, and you don’t need to have watched the show to read it, but if you don’t want to hear a theory of the show before actually watching it, come back later.

The show is formulaic, but the formula is sophisticated and the execution almost impeccable. Each of the episodes follows roughly the same pattern:

First, we are dumped into a futuristic society that isn’t necessarily perfect in a simplistic way, but efficiently organized, smoothly functional and usually prosperous in material terms. Nobody seems to want for any basics like food or shelter, or be driven to hard choices by lack of basics. This is important, because the hard choices in the show are higher up in the Maslow hierarchy, and therefore less defensible via appeals to basic survival motives.

Next, we have a relatively ordinary (i.e., not particularly heroic or villainous) character or two encountering a crisis involving the collision of two or more priceless human values. The crisis is brought about by a new information technology capability. Often, the crisis has a public dimension where those not directly involved have to make their own choices, as spectators, about what to feel for the it-could-have-been-me protagonist.

Finally, the crisis is resolved without any redemption (which means the show will likely not catch on in the United States). The character does not find a clever way out. We learn that the particular crisis pattern is one that the society is actually capable of handling without unraveling. The existential crisis for the protagonist does not represent an existential threat to the world (a common conceit in American storytelling). We simply learn what an Everyman or Everywoman (and in the pilot, an EveryPrimeMinister) might do in an exceptional situation in a hypothetical future society.

There is no overt ideological commentary, but there is a very strong suggestion that the choices the character makes are the base ones rather than the noble ones, and that the noble choices, where they exist and are chosen, are often futile anyway. So on the surface, this is a show about people choosing between futile noble actions and consequential base ones. Lose-lose.

The hard choices in the show are created by other humans, not by technology itself. The role of technology is making it possible for us to create hard choices for each other. Humans, the show seems to suggest, will reliably learn to create hard choices for other humans, using every new technological means, in order to create easier ones for themselves. And that they will do so even under conditions of material plenty and satisfaction of basic needs.

The rest of society too makes choices that, the show suggests, are the base ones. This is not a show where the protagonist is the only one who sees the situation in a clear-eyed way. Though there are of course stock characters around who are oblivious to the way the society works, it isn’t a case of a lone band of enlightened souls fomenting a revolution in a world of the oblivious. These are societies whose members mostly understand their worlds.

In other words, if the show is pessimistic, it is in a hell-is-other-people way.

Overall, this is not a show about a technology-versus-humans arms race. It’s a show about a humans-versus-humans arms race catalyzed and accelerated by technology. Points for that. At least the show avoids tediously fallacious race-against-the-machine type scenarios (Terminator) or clueless redemption narratives about humans reclaiming lost utopias from tech dystopias (The Matrix).

The show’s title suggests that this reading is the correct one: technology as a black mirror that shows us our true natures by showing us what choices we make when values collide. Technology does not debase us in the show’s formula. It merely forces us to face prized delusions about ourselves that have never before been challenged, thereby awakening us to our own pre-existing debasement.

The lack of humor reinforces the reading: nowhere does a character laugh off a seemingly serious concern with flippant irony.

The value calculus is fairly transparent in the first six episodes (there are three episodes per season):

In The National Anthem, it is human dignity versus human life. In Fifteen Million Merits, it is the innocence of soulful true love pitted against the sacredness of the human body.* In The Entire History of You, it is relationship-enabling narratives versus the sanctity of truth. In Be Right Back, it is the pricelessness of memories versus the pricelessness of lived relationships. In White Bear, it is justice versus non-cruelty. In The Waldo Moment, it is truth-telling versus taking responsibility for your actions.

(* In the sense of, for instance, putting processed junk food versus organic produce into it, choosing between obesity and fitness, or living in apps versus living in nature. It took me a while to get this one.)

In each case, the technological driver has to do with information — either knowing too much or too little about yourself and/or others. Each technological premise can be boiled down to what if you knew everything about X or what if you could know nothing about X. In the episodes so far, there has been no simple correlation between choosing ignorance or knowledge and getting to good or poor outcomes. That’s what lends the show a certain amount of moral ambiguity.

White Christmas, the first episode of Season 3, is more complex, wandering into moral luck territory via gaps between intentions and consequences: gaps deliberately created by consciously chosen ignorance of the block-on-Facebook variety.

This is promising. Hopefully, the show will explore this more, because the straight-up value collisions are not that interesting. They are merely shocking corner-case hypotheticals of the torture-one-terrorist-to-save-humanity variety, in futurist garb. But with moral luck, you have more going on. Where knowledge is the default and ignorance must be consciously chosen, rather than the other way around, the consequences of ignorance become less defensible. Especially when you are in a position to choose ignorance for others.

Or at the very least we can hope for explorations of more interesting ways to torture one person to save humanity (fifty shades of hell-is-other-people).

What elevates the show is that it resists the temptation to simply demonize technology or project collapsed human psyches onto devastated post-apocalyptic landscapes. The overall premise is simply that technology increases possibilities, and forces us to make hard choices that we had the luxury of not having to make before.

The irony, as I noted, is that the seemingly hard choice is usually between a futile symbolic gesture driven by noble motives, and a consequential act driven by baser motives. In other words, not really that hard. Futile gestures are for Luddites (thankfully, the show also resists the temptation to explore true Luddite storylines: where characters retreat from new possibilities, it is not to the past but to defeatured versions of the present).

If there is a humans-versus-technology narrative here, it is that technology relentlessly assaults our anthropocentric conceits. A good thing as far as I am concerned, but not to most people I suspect, and likely not to the show’s creators, who seem conflicted about it.

As a result, where Black Mirror goes dark and techno-pessimistic is in the implied editorial comment that no matter what choice we make, the outcome is worse than not having to make the choice at all. That natural ignorance is bliss.

According to the show’s logic, all choices created by technology are by definition degrading ones, and we only get to choose how exactly we will degrade ourselves (or more precisely, which of our existing, but cosmetically cloaked degradations we will stop being in denial about).

This is where, despite a pretty solid concept and excellent production, the show ultimately fails to deliver. Because it is equally possible to view seeming “degradation” of the priceless aspects of being human as increasing ability to give up anthropocentric conceits and grow the hell up.

This is why the choice to do a humorless show is significant, given the theme. Technology-motivated humor begins with human “baseness” as a given, and with humans being worthwhile anyway. The goal of such humor becomes chipping away at anthropocentrism, in the form of our varied pretended dignities (the exception is identity humor, which I dislike).

When you compare Black Mirror to more humorous explorations of the future (the just-concluded tech-heavy season of South Park comes to mind, as do many episodes of Futurama and of course, the Hitchhiker’s Guide to the Galaxy), you realize that they aren’t just more entertaining: they are more true.

Technology is not about debasement and degradation. It is about increasing ability to stop pretending to be what we’re not. About taboos falling away and fewer things being unexamined sacred cows. This is how we actually react to new technological possibilities. We make hard choices easy by giving up sacred cows, not by choosing one sacred cow over another. To our credit, when the futility of a grand gesture becomes apparent, we usually give up our vanity rather than stick to quixotic behaviors.

I like Black Mirror, but I would have enjoyed the comedy version (Monty Python and the Black Mirror?) much more. It would also have been much more true.

Happy New Year all. Don’t forget to bring a towel to 2015. I am nursing a bad cold, so apologies for any incoherent bits in this post.