
As machine learning, and especially deep learning, achieves greater cognitive feats — winning at Jeopardy and Go, recognizing images and spoken language, and soon enough driving without drivers — we’re reaching for the next challenges, including teaching AIs how to understand our human needs and respond accordingly.

The exploration is already underway on many fronts, and it won’t be long before machine learning algorithms know enough to order pizza for your family — or send an apologetic email to your spouse — without you having to ask.

In general, the data for these sympathetic systems will come from an array of sensors and from the content of our written and spoken communication. Consider the following list…

• lack of food in the fridge

• lack of activity in the kitchen at the normal time

• data showing that your pulse and blood pressure have been higher than average all day

• the content of this afternoon’s tense phone call with your boss

• the presentation on your calendar for tomorrow morning

• the fact that there’s enough extra money in the checking account to splurge on dinner

Taken together, that information could compel a well-trained machine learning algorithm to phone in an order for a couple of pizzas (with healthy toppings) — before you lose your mind.
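To make the idea concrete, here is a minimal sketch of that kind of decision. The signal names, hand-picked weights, and threshold are all invented for illustration — in practice a trained model would learn them from data rather than have them written down:

```python
# Illustrative only: hypothetical signals, each scaled to the range 0-1,
# with hand-picked weights standing in for what a model would learn.
SIGNAL_WEIGHTS = {
    "fridge_empty": 0.25,       # lack of food in the fridge
    "kitchen_inactive": 0.20,   # no cooking activity at the usual time
    "elevated_vitals": 0.20,    # pulse and blood pressure above average
    "tense_call": 0.15,         # stressful phone call this afternoon
    "big_day_tomorrow": 0.10,   # presentation on tomorrow's calendar
    "budget_slack": 0.10,       # extra money in the checking account
}

ORDER_THRESHOLD = 0.7  # arbitrary cutoff for this sketch


def should_order_pizza(signals: dict) -> bool:
    """Combine the sensor and calendar signals into one weighted
    score and compare it to a threshold."""
    score = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                for name in SIGNAL_WEIGHTS)
    return score >= ORDER_THRESHOLD


# A stressful day with an empty fridge tips the scale:
tonight = {name: 1.0 for name in SIGNAL_WEIGHTS}
print(should_order_pizza(tonight))  # True: every signal is firing
```

A weighted sum with a threshold is the simplest possible stand-in here; the point is only that many weak signals, none decisive on its own, can add up to a confident action.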

After the Pizza

The initial complexity comes from aggregating the data and assigning relative weights — which the AI can refine as it trains on data from thousands of other households like yours.

Yet, the next — and more interesting — level of complexity is for the AI to assess the consequences of its own actions in the human world. For example, maybe everyone enjoyed the pizza, but …

• Did your blood pressure drop to normal levels?

• Did you say nice things to your spouse that evening?

• How did the children sleep that night?

• Did the meeting in the morning go well?

In other words, the choices that algorithms make will generate new data that they evaluate in turn. What emerges is evaluation on longer timescales. If the AI evaluates not just what leads up to its actions, but also what follows, it can begin to consider the organization of whole arcs of time and action.
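The feedback idea can be sketched in the same toy terms. After acting, the system scores the downstream outcomes and nudges the weight of each signal that was active at decision time. The outcome score, signal names, and learning rate below are all assumptions for illustration:

```python
# Sketch of learning from consequences: after the action, score how the
# evening actually went (vitals back to normal, kind words, restful
# sleep, a good meeting) and adjust the weights of the signals that
# prompted the action. All names and numbers here are invented.
LEARNING_RATE = 0.05


def update_weights(weights: dict, signals: dict, outcome_score: float) -> dict:
    """outcome_score is in [-1, 1]: positive if the evening went well,
    negative if it didn't. Signals that were active when the action
    succeeded gain weight; when it failed, they lose weight."""
    return {
        name: w + LEARNING_RATE * outcome_score * signals.get(name, 0.0)
        for name, w in weights.items()
    }


weights = {"fridge_empty": 0.25, "elevated_vitals": 0.20}
signals = {"fridge_empty": 1.0, "elevated_vitals": 1.0}

# The pizza worked: blood pressure dropped and everyone slept well.
weights = update_weights(weights, signals, outcome_score=1.0)
print(round(weights["fridge_empty"], 2))  # 0.3
```

This is the smallest version of the loop the paragraph describes: each action produces new evidence, and that evidence reshapes the next decision — evaluation stretched across a longer arc of time.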

And we have a word for organized arcs of time and action. We call them stories.

Tune Up

Perhaps without fully recognizing it, we’re edging toward creating AIs capable of shaping the stories of our lives. We’ll want to enjoy those stories, of course. And at the same time, we’ll want to feel that they’re meaningful — not only for us, but for the people we care about. But for the stories to be meaningful, they’ll need to be about more than convenience — or even happiness.

We know that ad engines and social media feeds already ply us with the information they think will please us. And to expect more than convenience and happiness, we’ll have to go beyond the first instincts of the algorithms. We’ll need to be able to tune those algorithms to optimize for other things…

Our children, for example, are shaped toward independence, responsibility, and self-preservation by experiences of difficulty. Will we be able to tune the AIs to engineer those difficult experiences for them? For example, could we tune the AI not to send a reminder to recharge the car, knowing that it means Thomas Junior will find himself stuck on the side of the road somewhere between Fort Worth and Abilene? With enough training data and the right weights, the AI could carefully stack together tough-but-manageable experiences — experiences that might spell the difference between a resourceful child and a fearful one.

And could we go one better and have the AI adjust the experiences so that they inspire real stories? Imagine that the AI can make the car conk out at any time. Won’t it make a better story if that happens in a tiny town on the day of the annual peach and melon festival?

The more fun the story is to tell, the more Thomas tells it. The more he tells it, the more it colors his view of his own resourcefulness.

Story of Your Life

Naturally, the lives we want will vary from person to person. Some will prefer quiet connection to family and friends. Some will crave adventure, surprise, and conflict. The range of variation is infinite.

We’ll continue to acquire and organize data. And the machine learning algorithms will almost certainly continue to improve their ability to digest and weigh that data in order to fulfill our desires. The desires will be simple at first — pizzas and clean laundry and fully charged cars — but eventually the AIs will be capable of more complexity.

If we don’t plan for meaning as much as we plan for happiness, we might just find ourselves in stories too boring to tell.