
Movie Body Counts is one of the many odd-but-cool websites in the ether of the Internet; it notches the number of on-screen character deaths in a film. (Sweeping destruction, like a death ray wiping out an entire city, isn’t counted.)

Braveheart had 184 deaths, with 33 at William Wallace’s hands. The Lord of the Rings: The Return of the King’s body count of 836 sat atop the leaderboard (however, 524 were dastardly Orcs). Legendary samurai Ogami Itto had the most kills of any movie character at 150.

Morbid but interesting, right? Some have taken that data and built some fun visualizations with it.

But here’s the thing: the counts are done by hand. People sit through each movie and create meticulous logs. So, obviously, the site’s roster isn’t complete; there are simply too many movies and not enough time.

Artificial intelligence has become a buzzword (and not for the first time), and the promises of what computers can start doing for humans have been big. But can a machine really understand life and death?

Of course a machine can say when someone’s heart stops beating or their body temperature drops. Those are measurable metrics with clear definitions.

Recognizing death and injury, however, is hardwired into our biology. That “something,” the collective set of signs that raises the hair on the back of the neck, sounds an alarm when a fellow human isn’t quite right. Could we train a machine to recognize the moment life leaves a person’s eyes? That’s a fascinating threshold.

A lot of training would need to happen. One common way to teach a computer is to feed it repeated examples, each labeled with information about what it shows, and let the machine’s software puzzle out the patterns that distinguish one quality from another. This is how you might teach it to tell a cab from an ordinary yellow car. Perhaps showing it a million pictures of living people and a million pictures of dead people would do the trick.
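To make that labeled-examples idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: random numbers stand in for image features, the “alive”/“not alive” labels are invented, and an off-the-shelf logistic regression model from scikit-learn stands in for whatever a real system might use.

```python
# Illustrative only: synthetic vectors stand in for image features, and the
# labels are invented. A real system would extract features from photographs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "picture" has been boiled down to a 64-number feature vector.
n_examples, n_features = 2000, 64
X = rng.normal(size=(n_examples, n_features))
y = rng.integers(0, 2, size=n_examples)  # hypothetical labels: 1 = alive, 0 = not
X[y == 1] += 0.5                         # nudge the classes apart so a pattern exists

# Show the machine labeled examples, hold some back, and see whether it
# puzzled out the pattern well enough to label examples it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is the workflow, not the model: labeled examples go in, a pattern comes out, and the pattern is only as good as the labels and features it was trained on.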

D. Fox Harrell, an associate professor at MIT’s artificial intelligence laboratory, points out that a number of innate cues raise those hairs on the neck: body position, the small movements of breathing, the repositioning of limbs, the way the eyelids hang over the eyes and many other signals that establish our baseline perception of someone being alive.

“We may not be able to verbally state everything we are looking at, but our brains are able to process that information at a non-verbal level,” Harrell says. “For an AI system to recognize death in more human-like terms, as opposed to just being instruments, they need to also have the subjectivity to recognize the ambiguities around death that humans do.”

But therein lies a catch: we humans actually aren’t that great at recognizing death. Sure, a decaying body is pretty obvious, but from twenty feet away you might have a hard time telling the difference between a nap and a corpse.

And in fact, that points to a much deeper question: the concept of death isn’t a black-and-white affair. People can be brain dead with a beating heart. People lose certain critical cognitive or muscle functions. Is someone in a coma more alive or more dead? There are countless stories of someone being “pronounced dead” and coming back.

Mary “Missy” Cummings is an associate professor at Duke University’s Humans and Autonomy Lab (she was also one of the Navy’s first female fighter pilots). She has studied how computers and the human brain interact, and she points out that it’s extremely difficult to teach a machine a qualitative cognitive function that we don’t perform well ourselves.

“That’s what the whole hullabaloo about ‘deep learning’ is — somehow we’ll train machines to be like our brains through pattern recognition,” she says. “But we don’t understand it ourselves. Of all the pieces of the body, the eye-brain connection is a mystery science hasn’t gotten close to unlocking.”

This constraint is not confined to life-or-death situations. Face recognition software has been a hot topic. Sites like Facebook have shown that a machine can recognize the topography of our features, but we’re still a ways from machines knowing the difference between a smile and a grimace.

“It can get basic features,” Cummings says, “but interpreting them is very hard.”

MIT’s Harrell writes about an interesting related concept in his book Phantasmal Media. He argues for more “subjective AI,” where we teach computers the concept of death (and life) through literary — qualitative — eyes. Charles Dickens, Maya Angelou and Walt Whitman all capture some distinct essence of “life” in their prose.

“[Computers] would need to understand why the idea of a body having some functions, but being brain dead, is so poignant,” he says. Maybe the only way to “understand” death is through emotions, not physical states and descriptions. And maybe we should allow for more than one classification of death.

Harrell notes, “This could be useful medically for understanding controversial states of biological functioning, but also useful poetically for understanding an aspect of human experience — death — that is not often seen as a topic of AI or cognitive science research.”
