Isbell’s second feature of true AI: what it learns to do should be interesting enough that it takes humans some effort to learn. It’s a distinction that separates artificial intelligence from mere computational automation. A robot that replaces human workers to assemble automobiles isn’t an artificial intelligence so much as a machine programmed to automate repetitive work. For Isbell, “true” AI requires that the computer program or machine exhibit self-governance, surprise, and novelty.

Griping about AI’s deflated aspirations might seem unimportant. If sensor-driven, data-backed machine-learning systems are poised to grow, perhaps people would do well to track the evolution of those technologies. But previous experience suggests that computation’s ascendancy demands scrutiny. I’ve previously argued that the word “algorithm” has become a cultural fetish, the secular, technical equivalent of invoking God. To use the term indiscriminately exalts ordinary—and flawed—software services as false idols. AI is no different. As the bot author Allison Parrish puts it, “whenever someone says ‘AI’ what they’re really talking about is ‘a computer program someone wrote.’”

Writing in the MIT Technology Review, the Stanford computer scientist Jerry Kaplan makes a similar argument: AI is a fable “cobbled together from a grab bag of disparate tools and techniques.” The AI research community seems to agree, calling its discipline “fragmented and largely uncoordinated.” Given the incoherence of AI in practice, Kaplan suggests “anthropic computing” as an alternative—programs meant to behave like or interact with human beings. For Kaplan, the mythical nature of AI, including the baggage of its depictions in novels, film, and television, makes the term a bogeyman to abandon more than a future to desire.

* * *

Kaplan keeps good company—when the mathematician Alan Turing accidentally invented the idea of machine intelligence almost 70 years ago, he proposed that machines would be intelligent when they could trick people into thinking they were human. At the time, in 1950, the idea seemed unlikely: even though Turing’s thought experiment wasn’t limited to computers, the machines of the day still took up entire rooms just to perform relatively simple calculations.

But today, computers trick people all the time. Not by successfully posing as humans, but by convincing them that they are sufficient alternatives to other tools of human effort. Twitter and Facebook and Google aren’t “better” town halls, neighborhood centers, libraries, or newspapers—they are different ones, run by computers, for better and for worse. The implications of these and other services must be addressed by understanding them as particular implementations of software in corporations, not as totems of otherworldly AI.

On that front, Kaplan could be right: abandoning the term might be the best way to exorcise its demonic grip on contemporary culture. But Isbell’s more traditional take—that AI is machinery that learns and then acts on that learning—also has merit. By protecting the exalted status of its science-fictional orthodoxy, AI can remind creators and users of an essential truth: today’s computer systems are nothing special. They are apparatuses made by people, running software made by people, full of the feats and flaws of both.
