Cotard’s Syndrome—in which a person can believe that they’re dead, that their organs are rotting, or that they don’t exist—was first identified by the French neurologist Jules Cotard more than a century ago, in 1882. But the condition is so rare that it’s still far from fully understood.

Though it’s undeniably horrific for those experiencing it, Cotard’s Syndrome presents a fascinating conundrum for those studying the disorder. The condition’s central contradiction—how can someone articulate the thought that they don’t exist?—raises questions and potential answers about how human self-awareness works.

A 2013 case study of a Cotard’s sufferer showed low activity in the brain network associated with awareness of the body. It’s only one example (as with much of Cotard’s Syndrome research, because the condition is so rare), but unpacking how the brains of those with the syndrome function offers hints as to how normally functioning brains develop a sense of existence.

But Cotard’s Syndrome isn’t interesting only from a neuroscientific or psychological perspective. In the world of artificial intelligence, roboticists are working to build ever-more complex machines that replicate human behavior. One of the central questions is whether machines can truly become self-aware. Could understanding Cotard’s Syndrome provide the answer?

The disorder is so uncommon that the AI experts I spoke to had not previously heard of it. But they were tantalized by the contradictions and potential answers highlighted by Cotard’s Syndrome, and have already begun researching what it could reveal to those in artificial intelligence.

Raúl Arrabales, a professor at the digital economy institute of ESIC University in Spain, focuses on machine consciousness and explains that human disorders can often be a useful model for building sophisticated robots.

“In most lines of research, you have biological systems like humans and artificial machines and, even though they’re using different substances, they’re based on the same functions and mechanisms,” he says. “So having as a model humans with some impairments or disorders is useful for us to understand how the mechanisms work.”

Cotard’s Syndrome is particularly interesting, he says, because it seems to be a malfunction in an agent’s ability to recognize itself. If we fully understood the causes of Cotard’s Syndrome then, potentially, we would better understand how the brain creates self-awareness, and could attempt to recreate this process in robotics.

Selmer Bringsjord, a professor of cognitive science, computer science, and philosophy at Rensselaer Polytechnic Institute, agrees that this is a promising line of research for exploring and simulating self-awareness. Bringsjord has previously created robots that could logically deduce that they exist. But that mathematical self-knowledge is quite different from the human sense of being alive and aware of one’s existence.

For Bringsjord, who creates logic-based robots, Cotard’s Syndrome is also deeply interesting because it presents the ultimate paradox. Descartes famously said, “I think, therefore I am.” How can somebody articulate and truly believe that they don’t exist, when they’re experiencing these very thoughts?

Such a contradiction would be “debilitating—to the point of paralysis in the technology” for Bringsjord’s logical robots. “It seems to me, that if one took this seriously within my paradigm, it could serve as a guide for how to deal with contradictions and inconsistencies in a computing machine and in a robot,” he adds.

A classic version of a robot’s struggle to compute contradictions is the liar’s paradox, namely the sentence, “This sentence is false.” If the sentence is indeed false, then what it says ends up being true. If it’s true, then it ends up being false. And so the conclusion is that the sentence is true if and only if it’s false, which is a contradiction in classical logic. (Bringsjord points out that in three Star Trek episodes, the day was saved by paralyzing a malicious computing machine with the liar’s paradox.)
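The bind the paradox creates can be made concrete with a minimal sketch (a hypothetical illustration, not anything from Bringsjord’s actual systems): a naive classical evaluator checks whether either truth value assigned to the liar sentence is consistent with what the sentence asserts, and finds that neither is.

```python
def liar_consistent(assumed_truth: bool) -> bool:
    """Check whether assigning the liar sentence a truth value is self-consistent.

    "This sentence is false" asserts the negation of its own truth value,
    so an assignment is consistent only if the assumed value matches
    what the sentence asserts about itself.
    """
    asserted = not assumed_truth  # what the sentence claims about itself
    return assumed_truth == asserted

# Neither assignment works: assume it's true and it asserts it's false;
# assume it's false and it asserts it's true.
results = {value: liar_consistent(value) for value in (True, False)}
print(results)  # both assignments come out inconsistent
```

A classical reasoner that insists on assigning every sentence exactly one of two truth values has no consistent option here, which is the kind of dead end that, in the Star Trek scenario, paralyzes the machine.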

Cotard’s Syndrome presents “a much more concrete, less fanciful example,” which would be far more exciting than trying to resolve the liar’s paradox. “People historically who think about these things are tired of the liar paradox. I think they see it as a linguistic trick,” Bringsjord says. “Those people would not be able to maintain the same attitude in the face of a real-life syndrome versus what they claim is an abstract logic puzzle.”

Ultimately, Bringsjord would like to study Cotard’s Syndrome so as to replicate it and find a way to resolve contradictions in robots. “I would assert with much confidence that the study of it, in the formalisms and the structures of math and logic, in the form of logic-based robotics that I pursue, would be very productive and helpful on the robotics side,” he adds.