DeepMind has surpassed the human mind on the Go board. Watson has crushed America's trivia gods on Jeopardy. But ask DeepMind to play Monopoly or Watson to play Family Feud, and they won't even know where to start. Because these artificial intelligence engines weren't designed to play those games and aren't smart enough to figure them out on their own, they'll give nonsensical answers and struggle badly. Ordinary humans will outperform them, and not by a little.

WIRED OPINION: Assaf Baciu is co-founder and senior vice president of Persado, a cognitive content-generation company in New York.

If these machines are so smart, why do they fail at things that everyday people, let alone genius-level strategists, take for granted? Why can't a conversant Twitter bot, touted as artificially intelligent, be smart enough to stop itself from spewing obscenities?

The fact is, no existing AI technologies can master even the simplest challenges without human-provided context. As long as this is the case, today's version of AI is not actually "intelligent" and won't be the silver bullet for any of our business or societal problems. And claiming so as a business or reporting so as a journalist is counterproductive and misleading.

So what do we mean by context? DeepMind spent years playing Go, and Watson had the context for Jeopardy, having been fed terabytes of trivia and natural language examples to help it decode the show's answer-question format. It is only because of this human hand-holding and "training" that these machines were able to deliver such dominating performances. Even a seemingly simple application like x.ai's meeting scheduling assistant took years to learn the context around meeting scheduling in order to reach a consumer-acceptable level of competence.

AI that can operate without context is the singularity. Since we have yet to achieve this, it's perhaps unfair to expect AI to develop its own intelligence. Indeed, our hopes, fears, and expectations for AI technologies have been far too high. When given the appropriate context and designed to solve specific problems, like how to play a game or fight cybercrime, these technologies can indeed fuel meaningful innovation. For instance, the software powering self-driving cars is poised to be one of those breakthroughs. Cuter technologies like Siri, Alexa, or Google Home, while convenient, don't really solve any global problems and still carry sizable economic barriers to entry for many consumers.

What is increasingly called "artificial intelligence," both inside the tech industry and in the media, is more artificial than intelligent. Everyone talks about it, and no one agrees on what it actually means. This leads to the question: What is "AI"? Perhaps the question should instead be: What problems are new technologies trying to solve? It ultimately doesn't matter what these technologies are called; what matters is whether they can improve lives and perform the task advertised.

The industry continues to build and experiment with complex neural networks, machine learning systems, and even question-answering engines like Watson because they are fun and interesting, and they improve our understanding of larger AI pursuits. And certainly all of these AI-adjacent technologies have the potential to dramatically alter any industry. Given how likely AI-based start-ups are to secure venture funding, it's easy to imagine that AI is making sweeping, revolutionary changes in business and people's lives. But with so many AI projects focused on relatively innocuous applications like photo identification or Black Friday insights, and still coming up short, it's no wonder the category has already approached bubble status.

In the far future, AI will be truly intelligent and able to operate without context. That will be more like the famously feared Skynet—and then we'll have a whole other set of problems to worry about, like the extinction of humanity (as Bill Gates and Elon Musk fear). But for now, regardless of what we are led to think by the media's more apocalyptic interpretations of Gates' and Musk's writings on AI, existing technologies are not nearly advanced enough to master simple tasks on their own, let alone pose existential threats to humanity.

But as an optimist and realist, I'm much more invigorated by the related technologies in the space, developed with a specific context and problem in mind, that are the current drivers of meaningful innovation. These technologies (like machine learning, natural language processing, and cognitive computing), while not yet "artificial intelligence," have already led to dramatic disruption in industries ranging from healthcare and transportation to finance and marketing. This distinction may seem trivial, but it is in fact critical. The tech industry will only be able to set realistic expectations about AI's promises if it uses the term judiciously and is honest with consumers about what artificial intelligence can truly deliver.