The Terminator was written to frighten us; WALL-E was written to make us cry. Robots can’t do the terrifying or heartbreaking things we see in movies, but still the question lingers: What if they could?

Granted, the technology we have today isn’t anywhere near sophisticated enough to do any of that. But people keep asking. At the heart of those discussions lies the question: can machines become conscious? Could they even develop — or be programmed to contain — a soul? At the very least, could an algorithm contain something resembling a soul?

The answers to these questions depend entirely on how you define these things. So far, we haven’t found satisfactory definitions in the 70 years since artificial intelligence first emerged as an academic pursuit.

Take, for example, an article recently published by the BBC, which tried to grapple with the idea of artificial intelligence with a soul. The authors defined what it means to have an immortal soul in a way that steered the conversation almost immediately away from the realm of theology. That is, of course, just fine, since it seems unlikely that an old robed man in the sky reached down to breathe life into Cortana. But it doesn’t answer the central question — could artificial intelligence ever be more than a mindless tool?

That BBC article set out the terms: whether an AI system that acts as though it has a soul actually possesses one will be determined by the beholder. For the religious and spiritual among us, a sufficiently advanced algorithm may seem to present a soul. Those people may treat it as such, since they will view the AI system’s intelligence, emotional expression, behavior, and perhaps even a professed belief in a god as signs of an internal something that could be defined as a soul.

As a result, machines containing some sort of artificial intelligence could simultaneously be seen as a conscious entity or a research tool, depending on whom you ask. As with so many things, much of the debate over what would make a machine conscious comes down to what of ourselves we project onto the algorithms.

“I’m less interested in programming computers than in nurturing little proto-entities,” Nancy Fulda, a computer scientist at Brigham Young University, told Futurism. “It’s the discovery of patterns, the emergence of unique behaviors, that first drew me to computer science. And it’s the reason I’m still here.”

Fulda has trained AI algorithms to understand contextual language and is working to build a robotic theory of mind, a version of the principle in human (and some animal) psychology that lets us recognize others as beings with their own thoughts and intentions. But, you know, for robots.

“As to whether a computer could ever harbor a divinely created soul: I wouldn’t dare to speculate,” added Fulda.

There are two main problems that need resolving. The first is one of semantics: it is very hard to define what it truly means to be conscious or sentient, or what it might mean to have a soul or soul-function, as that BBC article describes it.

The second problem is one of technological advancement. Compared to the technology that would be required to create artificial sentience — whatever it may look like or however we may choose to define it — even our most advanced engineers are still huddled in caves, rubbing sticks together to make a fire and cook some woolly mammoth steaks.

At a panel last year, neuroscientist Christof Koch squared off with philosopher David Chalmers over what it means to be conscious. The conversation bounced between speculative thought experiments regarding machines and philosophical zombies (beings that act indistinguishably from people but lack an internal mind), and it frequently veered away from what can be conclusively proven with scientific evidence. Chalmers argued that a machine more advanced than any we have today could become conscious; Koch disagreed, based on the current state of neuroscience and artificial intelligence technology.

Neuroscience literature considers consciousness a narrative constructed by our brains that incorporates our senses, how we perceive the world, and our actions. But even within that definition, neuroscientists struggle to define why we are conscious and how best to define it in terms of neural activity. And for the religious, is this consciousness the same as that which would be granted by having a soul? And this doesn’t even approach the subject of technology.

“AI people are routinely confusing soul with mind or, more specifically, with the capacity to produce complicated patterns of behavior,” Ondřej Beran, a philosopher and ethicist at University of Pardubice, told Futurism.

“The role that the concept of soul plays in our culture is intertwined with contexts in which we say that someone’s soul is noble or depraved,” Beran added — that is, it comes with a value judgment. “[In] my opinion what is needed is not a breakthrough in AI science or engineering, but rather a general conceptual shift. A shift in the sensitivities and the imagination with which people use their language in relating to each other.”

Beran gave the example of works of art generated by artificial intelligence. Often, these works are presented for fun. But when we call something that an algorithm creates “art,” we often fail to consider whether the algorithm has merely generated some sort of image or melody or whether it has created something that is meaningful — not just to an audience, but to itself. Of course, human-created art often fails on that second count as well. “It is very unclear what it would mean at all that something has significance for an artificial intelligence,” Beran added.

So would a machine achieve sentience when it is able to internally ponder rather than mindlessly churn inputs and outputs? Or would it truly need that internal something before we as a society consider machines to be conscious? Again, the answer is muddled by the way we choose to approach the question and the specific definitions at which we arrive.

“I believe that a soul is not something like a substance,” Vladimir Havlík, a philosopher at the Czech Academy of Sciences who has sought to define AI from an evolutionary perspective, told Futurism. “We can say that it is something like a coherent identity, which is constituted permanently during the flow of time and what represents a man,” he added.

Havlík suggested that rather than worrying about the theological aspect of a soul, we could define a soul as a sort of internal character that stands the test of time. And in that sense, he sees no reason why a machine or artificial intelligence system couldn’t develop a character — it just depends on the algorithm itself. In Havlík’s view, character emerges from consciousness, so the AI systems that develop such a character would need to be based on sufficiently advanced technology that they can make and reflect on decisions in a way that compares past outcomes with future expectations, much like how humans learn about the world.

But the question of whether we can build a souled or conscious machine only matters to those who consider such distinctions important. At its core, artificial intelligence is a tool. Even more sophisticated algorithms that may skirt the line and present as conscious entities are recreations of conscious beings, not a new species of thinking, self-aware creatures.

“My approach to AI is essentially pragmatic,” Peter Vamplew, an AI researcher at Federation University, told Futurism. “To me it doesn’t matter whether an AI system has real intelligence, or real emotions and empathy. All that matters is that it behaves in a manner that makes it beneficial to human society.”

To Vamplew, the question of whether a machine can have a soul or not is only meaningful when you believe in souls as a concept. He does not, so it is not. He feels that machines may someday be able to recreate convincing emotional responses and act as though they are human but sees no reason to introduce theology into the mix.

And he’s not the only one who feels true consciousness is impossible in machines. “I am very critical of the idea of artificial consciousness,” Bernardo Kastrup, a philosopher and AI researcher, told Futurism. “I think it’s nonsense. Artificial intelligence, on the other hand, is the future.”

Kastrup recently wrote an article for Scientific American in which he lays out his argument that consciousness is a fundamental aspect of the natural universe, and that people tap into dissociated fragments of consciousness to become distinct individuals. He clarified that he believes that even a general AI — the name given to the sort of all-encompassing AI that we see in science fiction — may someday come to be, but that even such an AI system could never have private, conscious inner thoughts as humans do.

“Sophia, unfortunately, is ridiculous at best. And, what’s more important, we still relate to her as such,” said Beran, referring to Hanson Robotics’ partially intelligent robot.

Even more unfortunate, there’s a growing suspicion that our approach to developing advanced artificial intelligence could soon hit a wall. An article published last week in The New York Times cited multiple engineers who are growing increasingly skeptical that our machine learning technologies, even deep learning, will continue to advance as they have in recent years.

I hate to be a stick in the mud. I truly do. But even if we solve the semantic debate over what it means to be conscious, to be sentient, to have a soul, we may forever lack the technology that would bring an algorithm to that point.

But when the field of artificial intelligence first emerged, no one could have predicted the things it can do today. Sure, people imagined robot helpers à la the Jetsons or advanced transportation à la Epcot, but they didn’t know the tangible steps that would get us there. And today, we don’t know the tangible steps that will get us to machines that are emotionally intelligent, sensitive, thoughtful, and genuinely introspective.

By no means does that render the task impossible — we just don’t know how to get there yet. And the fact that we haven’t settled the debate over where to actually place the finish line makes it all the more difficult.

“We still have a long way to go,” says Fulda. She suggests that the answer won’t be piecing together algorithms, as we often do to solve complex problems with artificial intelligence.

“You can’t solve one piece of humanity at a time,” Fulda says. “It’s a gestalt experience.” For example, she argues that we can’t understand cognition without understanding perception and locomotion. We can’t accurately model speech without knowing how to model empathy and social awareness. Trying to put these pieces together in a machine one at a time, Fulda says, is like recreating the Mona Lisa “by dumping the right amounts of paint into a can.”

Whether or not the masterpiece is out there, waiting to be painted, remains to be determined. But if it is, researchers like Fulda are vying to be the ones to paint its strokes. Technology will march onward, so long as we continue to seek answers to questions like these. But as we compose new code that will make machines do things tomorrow that we couldn’t imagine yesterday, we still need to sort out where we want it all to lead.

Will we be da Vinci, painting a self-amused woman who will be admired for centuries, or will we be Uranus, creating gods who will overthrow us? Right now, AI will do exactly what we tell AI to do, for better or worse. But if we move towards algorithms that begin to, at the very least, present as sentient, we must figure out what that means.