We are only human, and our recollections are imperfect. So when we try to create an account of our past, can we trust ourselves? That's the question at the heart of the new film Marjorie Prime, which opens today in 15 cities (with a national rollout to follow). It is a quiet, contemplative drama that studies our fear of technology and mortality by juxtaposing people with computerized versions of themselves. Thanks to convincing performances by Jon Hamm (Walter Prime), Lois Smith (Marjorie/Marjorie Prime) and Geena Davis (Tess/Tess Prime), the movie forces us to consider whether we're to blame for all the times AI goes awry. It also questions whether we're entrusting technology with too much responsibility.

In Marjorie Prime, the holograms (usually of the deceased) are meant to provide comfort, although they sometimes act as caretakers. For example, Walter Prime obligingly tells Marjorie stories of how he wooed her and when he proposed, based on the tales she had told him in the past. It's an unconventional form of therapy, but the act of talking to a loved one without fear of judgment can be just as cathartic as traditional counseling. Walter Prime also reminds Marjorie to eat, calmly questioning the excuses she comes up with to avoid doing so. By contrast, Marjorie's human caretaker, Julie, sneaks the ailing woman cigarettes when Tess and Jon aren't around.

Compared with AI, people's imperfections stand out, and those imperfections are passed on to the Primes. The stories that Marjorie shares with Walter Prime (which he later tells back to her) are the versions she wants to remember. For a variety of reasons, she casually changes details like the movie she was watching with her husband when he proposed, and even the people involved in certain events. The Primes are also designed to mimic verbal signs of hesitation, like stuttering or pausing, to appear more realistic, and thus more flawed.

The film challenges our mistrust of AI and technology, showing that if anything is untrustworthy, it's our own memories. We are the ones who contaminate software with our own biases. We don't need Marjorie Prime to show us that -- our own world today is full of examples: Microsoft's AI chatbot Tay, which was turned racist by Twitter users, and the company's subsequent bot Zo, which met the same fate. Some argue that the US justice system's use of algorithms that predict a person's likelihood of recidivism to help determine punishment is inherently biased. AI is a man-made product, and its flaws are created by us. It is also our fault when we entrust the technology with responsibilities, like making it our therapist, as the characters in Marjorie Prime have done, however unwittingly.

The film eventually takes its central idea to its logical conclusion, where we find out whether AI can even fool itself into thinking it's human.

The questions of trusting AI and contrasting humans with machines have already been heavily explored (think: Her or the episode "Be Right Back" in Black Mirror), but Marjorie Prime delves deeper into how human nature is to blame. Yet it withholds judgment and shows how we can't help our failings, especially as we age. The beauty of humanity often lies in its flaws, something AI can imitate but not fully replicate.