Some really smart people told me that artificial intelligence is going to bring the apocalypse. But I’m still typing formulas into Microsoft Excel, telling Doordash what I want for dinner, and dealing with chatty Uber drivers who seem oblivious to me pretending to be on a conference call. Seems like we are far away from being hunted down by killbots.

There are two dirty little secrets holding back the apocalypse right now:

1. Data scientists kind of suck at deploying products.
2. Product managers kind of suck (currently) at the application of AI.

Now, before the data scientists and product managers reading this article feverishly send a killbot for me, let me explain. There have been incredible, life-changing innovations and research coming from the field of artificial intelligence: FarmBeats (AI & IoT for agriculture), increasing guide dog graduation rates, and early detection of cancer and eye disease.

And a few other academic projects that personally blew me away, like beating professionals at Go and heads-up no-limit poker.

Professional poker players getting destroyed by Libratus (via Pokernews)

But AI products available for everyday use by consumers are few and far between. And the ones we do have are quite lacking. How many times has Siri just sent one of your queries to a search engine? How many more poor AIBOs need to die from loneliness and neglect? Well, maybe we can have more of these comical disasters. But Salesforce and IBM, business product juggernauts, have been talking up AI for a long time, with little to show in terms of sticky, everyday products.

This is because deploying AI products is not as simple as deploying traditional products. Here are three key mistakes that product professionals are making in the AI space:

1. Trying to flex the awesomeness of AI instead of solving a real user problem
2. Expecting users to adapt to a black box
3. Forcing AI into places where it’s totally not needed

Let’s take a real-life example. And we’ll do it the right way, and the wrong way.

Motion-activated security cameras have been a hot item in the IoT market, and something that helps me see what our dastardly cat is up to while we’re at work. What are the problems with this product today? False positives triggering alerts (I’m looking at you, Roomba), expensive storage to access my recorded streams, and evil hackers on my moderately secured Wi-Fi network.

As an aside, there is one thing that shouldn’t change when you’re building AI products: build for the future. And I mean 5–10 years into the future, where you can look around, see what’s missing, see where your competitors have converged, see where tech has evolved, and build something better. The prerequisite for knowing what the future looks like, other than being this guy, is a deep understanding of your market and your users. Without that, you can travel 5–10 years into the future, but still have no idea what to build. The prerequisite for building something better than your competitors, well, let’s just say it helps to be a contrarian.

But I’m ranting. Let’s innovate on that camera.

What have all your competitors done to solve the false positive problem in the future? Maybe this? Image recognition is truly an amazing innovation that AI has made a reality. But what’s the real user problem? Do users want to tell the difference between their cat and their Roomba? The mailman and a passing car? Or do they really only want to be alerted when something bad is happening and needs their attention? Flexing the impressive AI muscle of image recognition barely solves the problem, and feels a lot like the wrong way. Let the academics showcase the technology; you’re here to build sticky products.

A better solution, and what feels like the right way, would be a risk score every time motion is triggered. Your geography should be a key risk attribute for an event. So should the time of day. And whether a face matches a wanted list or a police sketch. Or whether the camera for some reason can’t make out a human’s face at all. High-risk events can then be the only things that alert you, and can be intelligently routed to a dispatch service (yes! finally we get killbots). And the machines can continuously learn and improve, and feed data to help police departments, who in turn can feed data back to the machines.
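Such a risk score could start as nothing fancier than a weighted combination of event attributes. Everything below (the attribute names, weights, and thresholds) is a made-up illustration, not a real model:

```python
# Hypothetical sketch of a motion-event risk score. The attributes and
# weights are invented for illustration; a real product would learn them.
from dataclasses import dataclass

@dataclass
class MotionEvent:
    neighborhood_crime_rate: float  # 0.0 (safe) to 1.0 (high crime)
    hour: int                       # 0-23, local time
    face_detected: bool
    face_matches_watchlist: bool

def risk_score(event: MotionEvent) -> float:
    """Combine a few weighted attributes into a 0-100 risk score."""
    score = 0.0
    score += 25 * event.neighborhood_crime_rate
    if event.hour < 6 or event.hour >= 22:   # overnight events are riskier
        score += 20
    if event.face_matches_watchlist:
        score += 40
    elif not event.face_detected:            # an obscured face is suspicious
        score += 25
    return min(score, 100.0)

# A 2 a.m. event with no recognizable face in a middling neighborhood:
event = MotionEvent(neighborhood_crime_rate=0.4, hour=2,
                    face_detected=False, face_matches_watchlist=False)
print(risk_score(event))  # 55.0
```

Only events above some threshold (say, 70) would page you or a dispatch service; everything else gets logged silently.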

Now we’re getting somewhere. I just got a text alert that there is a high-risk event in my home. Nope, it’s not the cat watching an R-rated movie again. It’s actually that my feed is dark. What! Why is my electricity going out being classified as a high-risk event?? I demand an explanation, you stupid black box.

Yeah, that’s totally the wrong way. How about showing me the attributes in the model that are triggering the event? And maybe why they are triggering that high-risk score.

Numbers totally made up; I have no idea if any of that is true
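One way to open the black box is to return the per-attribute contributions alongside the score, so the alert can explain itself. The attribute names and point values below are invented for the dark-feed example, not pulled from any real system:

```python
# Illustrative sketch: surface which attributes drove a high-risk alert.
# Contributions are hypothetical points, not real model output.
def explain_alert(contributions: dict) -> str:
    """Format per-attribute score contributions, largest first."""
    total = sum(contributions.values())
    lines = [f"Risk score: {total:.0f}/100"]
    for name, pts in sorted(contributions.items(), key=lambda kv: -kv[1]):
        lines.append(f"  {name}: +{pts:.0f}")
    return "\n".join(lines)

alert = {"camera feed went dark": 45, "overnight hours": 20,
         "recent break-ins nearby": 10}
print(explain_alert(alert))
```

Even this crude breakdown turns "I demand an explanation" into "oh, a dark feed at 2 a.m. looks like a cut camera," which is a reasonable thing for the product to worry about.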

Nice progress, now let’s solve for storage space. How about a learning algorithm that predicts how often you will be robbed in the future, based on how many risk factors you have, to determine how much video you really need to record? And higher-risk cameras can have higher storage costs! We will call it robyou.ai.

Do you want me to delete that last paragraph? Well, I won’t. But don’t force AI where it doesn’t need to be.

One last passing thought, and this one is a bit of a secret. AI products are best assembled from existing and underutilized assets.

You really shouldn’t be reinventing the wheel when armies of data scientists are building models and writing white papers about them. And Google is giving you TensorFlow for free, and OpenAI exists. Think about how these models can complement the product that you’re building, and realize the model will never be the product.

Go back to the risk score for events on a motion sensing camera. The score might be novel, but the underlying models that can detect faces, differentiate objects, and make predictions from geo and time already exist somewhere. The product managers just have to give some data scientists the proper specs to repurpose them to solve a user problem. Just don’t forget to bring the donuts.

So how can we accelerate Elon Musk’s worst nightmare? Probably not with more product managers, which is everyone else’s worst nightmare. It starts with a clear vision to build a future-proof product that solves a user problem. Don’t be a black box. Don’t force AI. Find your underutilized assets, and get to your POCs quickly so you can learn and iterate. As soon as our data scientists and product managers can figure all this out, we can get on with our killbot apocalypse already.