It took me an embarrassingly long time to realize that the “black mirror” of the popular anthology series Black Mirror was a screen, or rather, all the screens we surround ourselves with: phones, tablets, computers, TVs, and, increasingly, futuristic devices built by massive corporations that monitor our movements and preferences and words. We buy these black mirrors, welcoming them into our homes and lives and letting them — true to their name — reflect ourselves back to us. And as we know all too well, those reflections sometimes betray our darkest impulses.

Unsettling reflections are not the black mirrors’ fault. Gadgets are merely assemblages of wires and metal and glass. Devices don’t have a point of view; they operate according to the input they receive, the algorithms and designs and patterns that power the software, written by humans and thus shaded and slanted by human biases.

What I mean is that if you or I write a bit of code, what we believe to be true about the world — how it ought to look, how I ought to interact with my device, what someone else might prefer to do or see on their phone or TV — is baked in right from the start. Users can customize and modify, but only to a point. We are restricted by what the software’s designers thought we should be allowed to do, or what they assumed we want, based on their own worldviews.

And when we use a gadget, we are inevitably affected by those worldviews. Our digital worlds alter our “real” worlds, which is how we end up with what Jia Tolentino recently described as Instagram face: a “single, cyborgian look,” mediated by filters and celebrities’ use of them, that people now seek at the plastic surgeon’s office.

“Instagram face” wasn’t developed by Instagram’s engineers. But it was enabled by them when they gave us the option to filter our photos. Other apps, like FaceTune, allow users to edit their selfies to make them more flattering. As Tolentino notes, the result of these capabilities becoming available to everyone is not a celebration of diverse beauty, but an “algorithmic tendency to flatten everything into a composite of greatest hits,” resulting in “a beauty ideal that [favors] white women capable of manufacturing a look of rootless exoticism.” It is an impossible type of beauty determined by consensus.

And while humans have always manufactured beauty standards, this one’s a little different, because it sounds as though it’s ripped directly from a Black Mirror episode or a sci-fi blockbuster — a clear example of humans being altered by the machines they created.

“Instagram face” is obviously an extreme case, as most people probably wouldn’t (or couldn’t) go so far as to physically change their face because a computer, in essence, told them to. But it’s nonetheless become clear that the long-awaited future in which computers transform our humanity has undeniably arrived. And there’s at least one very specific realm in which that future is steadily becoming more and more of a threat: the movies, and how they’re being made as Hollywood increasingly signals its willingness and even desire to cede control over its product to emerging technologies.

It seems more and more likely that decisions about stories, casting, and even genre may soon be left in the cold, steely hands of machines and algorithms. Questions of whether a movie should be greenlit, who should star in it, and whether that star should even be “real” are being analyzed — and answered — by AI.

It seems unlikely that anyone involved has fully contemplated the effects this approach might have in the real world down the road. Filtering our faces on Instagram has led some of us to physically change them to match. What will happen when what we see on our screens, both big and small, isn’t driven at least in part by artists’ reflections of our anxieties and desires, but by algorithms?

An algorithm that drives movie creation is likely to shut out a crucial factor in art

One of the biggest studios in Hollywood, Warner Bros., recently announced a deal with Cinelytic, a startup in Los Angeles that uses algorithms and data to predict a film’s success before the film is made or even greenlit. Cinelytic’s technology uses variables like genre and specific performers to predict how much money a movie could make, based on how those variables typically perform in different markets. So if you want to gauge how a movie will ostensibly perform with Michael B. Jordan instead of Oscar Isaac in the starring role, you can do that. Just plug and play.
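Cinelytic’s actual model is proprietary, but the basic idea it’s selling — project a film’s revenue from categorical variables like genre and star, based on how those variables performed historically — can be sketched in a few lines. Everything here (the numbers, the stars, the lookup-by-average approach) is invented for illustration, not a description of Cinelytic’s system:

```python
# Toy sketch of variable-based revenue prediction. All figures and
# names are hypothetical; real systems use far richer models and data.
from statistics import mean

# Invented historical grosses (in $M), keyed by (genre, star).
history = {
    ("action", "Star A"): [310, 250, 190],
    ("action", "Star B"): [120, 95],
    ("drama", "Star A"): [45, 60],
}

def predict_gross(genre, star):
    """Project revenue as the average of past grosses for this combo."""
    past = history.get((genre, star), [])
    return mean(past) if past else None

# "Plug and play": swap the star, compare the projections.
print(predict_gross("action", "Star A"))  # → 250
print(predict_gross("action", "Star B"))  # → 107.5
```

The “plug and play” appeal is obvious: swapping one input instantly changes the projection. So is the limitation — the model can only ever echo whatever is already in the table.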

Warner Bros. and Cinelytic have claimed the technology will be used only in marketing and distribution decisions, or maybe to help executives figure out which projects to greenlight. They won’t, they say, let an algorithm govern the decision-making process entirely; humans will still be involved. But it’s difficult to believe this costly algorithm the studio has licensed won’t ultimately exert a significant amount of influence over which projects move forward and which ones don’t, regardless of whether the company plans for it to do so.

In his 2010 book You Are Not a Gadget, programmer and virtual-reality pioneer Jaron Lanier argues that our digital technologies have baked-in biases that run against the grain of what it means to be human. “When the developers of digital technologies design a program that requires you to interact with a computer as if it were a person, they ask you to accept in some corner of your brain that you might also be conceived of as a program,” Lanier writes. “When they design an internet service that is edited by a vast anonymous crowd, they are suggesting that a crowd of humans is an organism with a legitimate point of view.”

In other words, the digital systems that we create and interact with tell us how to be human, then train us to be human in a way that aligns with their priorities. “Different media designs stimulate different potentials in human nature,” Lanier continues. “We shouldn’t seek to make the pack mentality as efficient as possible. We should instead seek to inspire the phenomenon of individual intelligence.”

The “phenomenon of individual intelligence” is as near a description of the genius of art-making as I’ve ever heard. Certainly, filmmaking is collaborative, and movies aren’t “individual” products (something even the most ardent supporter of auteur theory would admit). But they’re not the result of groupthink or a hive mind, because individual intelligences are at play in the collaboration. In a good film, creative, accomplished, and opinionated artists are able to leave their thumbprint, or a lot more, on the finished product.

In general, the more a movie seems created by consensus — as many big franchise flicks designed for maximum box-office earnings are — the less good it is. It’s designed to please many and challenge few, not for any philosophical reason but because that’s what makes a lot of money. The almighty buck rules.

Which is fine. Hollywood movies have always occupied the weird space between mass culture and rarefied art. They’re a young art form, barely over a century old, and designed to entertain as well as to interrogate, confront, and move the audience. They’re also really expensive to make, so they’ve got to bring in a lot of money to be successful.

And they’re still, on the whole, made by people who really love movies: directors and writers and actors and producers who eat and breathe and sleep cinema. Even bad or mediocre filmmakers get excited by interesting movies, films that bend their brains and make them see the world differently.

And how can a filmmaker create such movies? By being “visionary,” a quality that’s become a marketing cliché but is still meaningful. A visionary filmmaker sees something the rest of us don’t, then chases it down and puts it on screen. Visionary filmmakers are people like Martin Scorsese and Mati Diop and Bong Joon-ho and Marielle Heller and Christopher Nolan and Jordan Peele and Kelly Reichardt and Ryan Coogler and Lulu Wang and so many more. They change how we see the world by letting us see it through their eyes.

The trouble is that visionary directors sometimes make movies that don’t appeal to a wide audience or, for whatever reason, fail to make a lot of money. And Hollywood studios want just the opposite: as big and loyal an audience as possible for every film they release. Most of the industry believes that the way to attract big, loyal audiences is to tell stories that viewers already know they’re interested in (because they liked the previous movie, or the book, or whatever), starring actors with whom they’re already familiar. Hollywood believes in the big swing — but not, in the end, in taking risks.

So the primary goal for the industry’s decision-makers is to mitigate risk as much as possible. And that’s why the partnership between Cinelytic and Warner Bros. makes sense — but it’s also why I’m worried it sets a troubling precedent for the future.

Algorithms will likely only reinforce the assumptions Hollywood already makes

Anyone who’s been alive in the last couple of decades knows that technology has a weird way of creeping into places where it didn’t previously exist. Twitter starts out as a space to post random nonsense, then suddenly becomes an avenue for politicians to make official announcements and pronouncements. Your iPhone is just a thing for making calls and playing games, until suddenly it’s the thing with which you manage your dating life (among countless other things). And what’s especially disturbing is that the data used by predictive technologies like the one Cinelytic is selling will almost certainly be skewed.

Why? Because algorithms that make predictions need data from the past on which to model those predictions. And in the case of the movies, the data they’ll be using is influenced by the biases of Hollywood executives, who have long hewn to a hopelessly outmoded set of beliefs about what people want to see (and, by extension, what they’ll pay for). These are the same groups of people who say they’re shocked when movies like Get Out and Black Panther do well in both North America and abroad, because they’ve long subscribed to the myth that “black films don’t travel.” Who are surprised when a film like Wonder Woman breaks records. Who continue to harbor bizarre ideas about women directors. And who have long made decisions based on those presumptions.
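The mechanism of that skew is worth making concrete. A prediction model has no notion of *why* its training data looks the way it does: if films led by one group were historically under-promoted and under-budgeted, their recorded grosses are lower, and the model dutifully extrapolates that pattern forward. A toy illustration, with entirely invented figures:

```python
# Toy illustration of biased historical data producing biased
# predictions. All numbers are invented. Films led by "group B" were
# historically given smaller budgets and marketing pushes, so their
# recorded grosses (in $M) are lower and scarcer.
from statistics import mean

past_grosses = {
    "lead_from_group_A": [200, 180, 220, 150],  # heavily promoted
    "lead_from_group_B": [40, 55],              # under-promoted, few examples
}

def predicted_gross(lead_group):
    # The model sees only the outcomes, not the unequal treatment
    # that produced them — so it "learns" the bias as if it were fact.
    return mean(past_grosses[lead_group])

print(predicted_gross("lead_from_group_A"))  # → 187.5
print(predicted_gross("lead_from_group_B"))  # → 47.5
```

Feed those predictions back into greenlighting decisions and the gap widens: group B gets fewer films, which generates even less data, which lowers the next round of predictions.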

You could argue that technologies like those from Cinelytic — a handful of which have made their way into Hollywood’s decision-making process in recent years — may actually help counteract these outmoded ideas by spotting patterns that executives’ biases allow them to overlook. But the historic exclusion of people of color and women from filmmaking and lead roles, particularly in big-budget genres like action and franchise films, will make it tricky for algorithms to avoid simply reinforcing those old habits. And if it’s the computer that spits out biased, exclusionary results, nobody’s to blame, right?

But that’s not even the biggest issue. What’s most worrying to me is how these technologies are essentially designed to strip out some of the “visionary” from the filmmaking process in favor of “safe,” lucrative choices.

That’s a strategy Disney, once the industry’s great innovator, has shown itself more than eager to adopt of late, favoring safe nostalgia over daring creativity. And the studio has been richly rewarded for it. Seven of the 10 top-grossing movies in 2019, both domestic and worldwide, came from Disney; an eighth, Spider-Man: Far from Home, was a Disney co-production (by way of Marvel Studios).

Meanwhile, the movie industry has cautiously but increasingly edged into using machine learning and AI technologies, such as an analysis tool called ScriptBook that predicts how well a movie might perform based on its screenplay (in contrast to Cinelytic’s analysis of genre plus potential stars). Those technologies are designed to influence decision-making. And in an industry that thrives on achieving the largest possible return on its investments, the kind of results they provide drives sameness rather than experimentation and discovery.

It’s easy to imagine a world in which studio executives, panicked over flagging ticket sales and mandates from corporate higher-ups, cede increasing territory to the algorithms, opting for the “guaranteed” moneymaker over the “visionary” risk.

But it’s the visionary risk-takers — the James Camerons and Jordan Peeles and Agnès Vardas and George Lucases and Jane Campions and Ava DuVernays and, yes, the Walt Disneys of the world — who have always driven change and altered the form. It’s the unexpected new faces and voices who light the world on fire. Because movies aren’t just commercial products. They’re art. They’re meant to challenge and inspire, which Hollywood frequently congratulates itself on doing even if the congratulations aren’t earned.

The shift toward algorithms is also reflected in Hollywood’s increasing forays into faked performances, which slowly push the actor’s craft (and, in some cases, the animator’s) out of the picture. Consider the deepfake-style Lion King of 2019, or the strange case of an animated Will Smith in Gemini Man, which drew on old footage of Smith to create a kind of “digital mask” of his younger self that could be projected onto his current face. Or the posthumous presence of Carrie Fisher in Star Wars: The Rise of Skywalker, which used previously shot footage and some fancy CGI to revive the star.

In November 2019, it was announced that James Dean, who died in 1955, will be “resurrected” via CGI to play the leading role in a live-action Vietnam War movie called Finding Jack. “We searched high and low for the perfect character to portray the role of Rogan, which has some extreme complex character arcs, and after months of research, we decided on James Dean,” the movie’s co-director told the Hollywood Reporter, adding that Dean’s family views this “as his fourth movie, a movie he never got to make,” and that they “do not intend to let his fans down.”

Let that statement sink in for a moment: The movie’s creative team insists they couldn’t find a single living actor to play the part as well as a technologically reconstituted James Dean, or rather, a holographic version of the actor designed to their specifications.

Of course they could have found another actor. “Casting” Dean is a marketing stunt. But imagine a future — one actually depicted in the 2013 film The Congress — in which beloved actors (or their estates) eagerly (or reluctantly) license their images and voices to film studios. Actors wouldn’t even need to show up on set to appear in a movie; they’d effectively become cartoon characters, endlessly and infinitely digitally generated. Now imagine that those studios could use an algorithm to make decisions about whom to cast in one film or another, with the option to choose from a stable of licensed actors based on whoever is likely to bring in the most cash.

It’s an idea that might delight some people, but one I find rather grim. If such a scenario doesn’t bring about the death of cinema, it will, once and for all, create a rift between the art of making movies and the business of making movies. At present, you still need human animators and artists to create deepfake-style films, but eventually, technology will surely make it possible to remove humans from the process entirely.

And with companies like Netflix continually trying to reverse-engineer the kinds of films and TV shows — excuse me, content — that people want to see based on what they already like to see, the ability to rapidly (and relatively cheaply) churn out that content without having to pay too many people to do it is absolutely on the horizon. One can easily imagine a streaming service, in the not-too-far-off future, that allows viewers to plop down on the couch, select a few variables, and generate a movie on the fly. Want a PG-13 72-minute action-comedy starring Reese Witherspoon and Adam Sandler, set in Paris with, say, a liberal bent? Click, click, click. You got it.

So what?

But who cares, right? It’s just movies.

I’m professionally obligated to care, but even if I weren’t, I’d still find these possibilities disturbing. And I hope others will too. Because in a world where we can fully control our own experience with art, the echo chambers we often find ourselves in — what media theorist Thomas de Zengotita refers to as realities that “flatter” us because they shield us from anything that might disturb or discomfit or surprise us — are only going to get more soundproof.

For better or worse, movies and TV shows are still a place where we can find common ground. Just go listen to any conversation at a party that has inevitably turned to what people have been watching. Movies and TV shows are also where we bump into people who think and act and believe and look different from us, a key reason why more risk-taking, not less, is a great thing for both the art form of cinema and the people who watch it.

But technologies that encourage the creation of entertainment via algorithm, based on what we already prefer, are caving to the black mirror’s worst tendencies. Because let’s face it: Many of us tend to opt for comfort over challenge. Many of us tend to watch the same TV shows over and over, or stick to the same safe movies, unless someone tells us we’ve got to see this new, strange thing that’s just come out. Word of mouth (and good advertising and reviews) is what drives a movie like Get Out or Parasite or Knives Out to become a hit and keeps filmmakers engaged and studios on their toes. But a lot of people wouldn’t have gone to see those movies if others didn’t insist they step outside their comfort zone.

The great danger in letting the algorithms decide what belongs on our TV and movie screens is that we’ll only ever see ourselves reflected in them — or, even worse, only the subset of ourselves that’s always been favored by Hollywood. Which tends to result in predominantly white guys telling predominantly white guy stories, most of which have been told many times before, in a visual style that’s designed for maximum legibility and at a pace dictated by the desire to ensure that the audience never gets bored, even if it’s boredom with a purpose.

The issue isn’t that there won’t be any financial, creative, or existential incentive for filmmakers to take risks; it’s that the risk-takers will, more often than not, be crowded out by what’s already been proven to work. In an algorithm-driven world, it’ll be much, much more difficult for A Hidden Life or Roma or Moonlight or Do the Right Thing to succeed. There’ll be little incentive for audiences to try something new, to see a movie that doesn’t reinforce the biases of the majority, to be enthralled or flabbergasted by someone else’s imagination. We’ll get stuck in a feedback loop of our own creation, and we might forget what it was like to be radically affected by a movie.

I could argue we’re already seeing this feedback loop in action, though there are still studio flicks, and a host of indie and mid-budget films, out there that are full of life. But in an industry that’s shown over and over again how risk-averse it is, the future, I fear, is a dark place where the movies, like our FaceTuned faces, meld into a single, cyborgian retread lacking the guts to be art.

I have no solution, other than to repeat that if money is king at the movies, we’ve got to be brave now. We’ve got to watch films we’re not sure we’ll like and share them with others. It’s almost too simple. We’ve got to vote with our credit cards, a little at a time, and keep insisting we deserve more than warm milk designed to lull us into market-driven complacency, more than pre-chewed meals served on white bread.