The billion-dollar OpenAI initiative announced last week by Elon Musk and company recalls DARPA’s Grand Challenge, the X-Prize, and MIT Media Lab’s One Laptop Per Child initiative – innovative institutional mechanisms explicitly designed to attract top talent and worldwide attention to worthy problems. They can work well, and my bet is that OpenAI will goad a similar competitive response — smart companies such as Google, Facebook, Apple, Amazon, Baidu, and Alibaba will immediately recognize that the “not for profit” issues the researchers identify need to be incorporated into their own AI/ML technology roadmaps.

Here’s the central challenge with AI that Musk’s initiative should tackle: feelings.

“Artificial Intelligence” is a misnomer; the label misrepresents and – ironically – misunderstands the reality of learning and cognition. Historical evidence overwhelmingly suggests that intelligence and cognition co-evolve with emotion and affect; that imagination and creativity are as much functions of emotion and affect as of cognition.

If people want machines that enhance creativity and imagination – and they do – then “artificial intelligence” and “machine learning” research must confront the reality that it’s not just a matter of how “smart” or “knowledgeable” or “intelligent” you are, but what kind of temperament and emotion you bring to your decisions, communications, and actions.

The notion that super-smart technology with – literally and figuratively – a “mind of its own” won’t have digital counterparts to fear, pain, desire, curiosity, irritation, sentiment, and ambition is a research hypothesis to be tested, not a foregone conclusion. The “feelings” and “emotions” these minds may have could be alien to humans. But that doesn’t mean they don’t or won’t exist.

For the kinds of things that Apple, Google, Facebook, GE, Amazon, and the rest want to do with machine learning – not to mention the other kinds of things that Musk, who has described AI as potentially “more dangerous than nukes,” is worried about — the machine’s emotions may matter more than its “intelligence.” Thus, thinking of AI as merely “intelligence” simply doesn’t go far enough. To be authentically influential and to effectively manage the real risks posed by AI, the OpenAI initiative must confront this.

Although framing public policy narratives in doom-laden Kurzweilian Singularities or Skynet Terminators is overkill, there are risks, and they do need thoughtful management. Crafting unhappy AI-enabled “use cases” requires less creative science fiction than logical extrapolation. Consider the kinds of decisions – fraught with ethics, bias, and emotion – that AI may have to make:

Enterprises use sophisticated blends of predictive analytics and machine learning to ward off cyber-criminals, cyber-spies, and cyber-attacks from nation-states and hostile groups worldwide. Vicious and complex threats demand swifter and more robust defenses. Technologies that fast, flexible, and furious need ever-greater autonomy to succeed. But what happens when the combinatorial complexities of attack and response evoke “autoimmune responses” that prove dysfunctional? What kinds of digital drugs heal – or manage – a hallucinating global network that starts hurting itself?

Ethical and legal debates around quasi-autonomous vehicles – automobiles, drones, etc. — intensify as their capabilities improve. When accidents appear unavoidable, is the technology’s first obligation to protect its occupants as best it can? Or to minimize total human harm? Does a baby on board change the machine’s computations? “Situationally aware” vehicles might not allow their occupants to override the vehicle’s decision over who “deserves” to be most protected or, alternatively, sacrificed to the algorithm’s judgment. What happens when two family SUVs – or drones — about to collide simply disagree about the best way to protect their humans?
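The occupant-versus-total-harm dilemma above can be made concrete with a toy sketch. Everything here is hypothetical – the maneuver names, the harm probabilities, and the `occupant_weight` policy knob are invented for illustration, not drawn from any real vehicle system – but it shows how a single tunable weight flips the machine’s “decision” about who gets protected.

```python
# Hypothetical sketch: a vehicle picks the maneuver that minimizes
# expected harm. The occupant_weight policy parameter encodes how much
# occupant harm counts relative to harm to everyone else.
# All names and numbers below are invented for illustration.

def expected_harm(maneuver, occupant_weight=1.0):
    """Weighted expected harm for one candidate maneuver."""
    return (occupant_weight * maneuver["p_occupant_harm"]
            + maneuver["p_other_harm"])

def choose_maneuver(maneuvers, occupant_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm."""
    return min(maneuvers, key=lambda m: expected_harm(m, occupant_weight))

maneuvers = [
    # Braking straight is safest for the occupants, worse for bystanders.
    {"name": "brake_straight", "p_occupant_harm": 0.1, "p_other_harm": 0.5},
    # Swerving spares bystanders but risks the occupants.
    {"name": "swerve_left",    "p_occupant_harm": 0.5, "p_other_harm": 0.05},
]

# A "minimize total harm" policy and an "occupants first" policy disagree:
print(choose_maneuver(maneuvers, occupant_weight=1.0)["name"])  # swerve_left
print(choose_maneuver(maneuvers, occupant_weight=3.0)["name"])  # brake_straight
```

The point is not the arithmetic but the policy question it smuggles in: whoever sets `occupant_weight` – the manufacturer, the regulator, or the owner – has quietly decided the ethical question the paragraph above poses.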

As productivity tools and trackers proliferate, workers will increasingly find their behaviors monitored and analyzed. Machine learning algorithms identify which people collaborate best and which individuals should be kept apart. The software observes who responds well to praise and encouragement and who improves after reprimand. The technologies similarly alter and/or edit interpersonal and/or team communications to minimize the risk of inadvertent offense or misunderstandings. The machines take special care to eliminate any threat of human-engendered hostile workplace environments; they identify and record micro-aggressions and “discriminatory” human actions. The technologies compile dossiers on the most productive, least productive, and counterproductive behaviors while analytics constantly scan data to anticipate shirking or fraud. Might bias creep into these intelligent, learning, and predictive algorithms? Are there meaningful ways to determine whether brilliant technologies inappropriately, or even harmfully, manipulate the humans they (cost-)effectively oversee? Will people or machines be held accountable for the answers?

Nothing in these three scenarios is inconsistent or incongruent with existing AI/Machine Learning research and deployments. And the risk is not (just) that machines might “take over,” which I find a tad too apocalyptic. But I am nervous about what happens when the machines have a different point of view – essentially, different feelings – than we do.

No thoughtful deep learning researcher desires an “I am become Death, the shatterer of worlds” digital impact. Writing foolishly buggy code is one thing; writing foolishly buggy code capable of writing its own foolishly buggy code is quite another.

So the smart money suggests that tomorrow’s most effective digital technologies will need to be affective, as well. This should force a new human self-consciousness about the future of machine consciousness. And that would be a good thing.