AI that looks to make you happy is bad AI

I was having dinner with two old friends, both longtime Googlers, the other day, and found myself criticizing Google, a position I am not accustomed to. It’s been a while, but I spent three years at Google and took away mostly good memories: smart coworkers, well-thought-out algorithms, massive infrastructure, tech talks, and the occasional oysters. We certainly saw ourselves as the good guys, thought hard about how to make search better, took reasonable precautions with user data, and were proud of our data-driven decisions. I don’t believe any of that has changed in the last decade. These days I see my former coworkers talking more about perf and promotions and less about how cool their work is, but I still think the fundamentals are the same: Google views itself as, and acts as, the good guys. So what is my problem then?

That night we talked about GDPR. The sleep-inducing acronym is how the EU wants to make sure user privacy is respected. Even though it covers EU citizens, it applies to their data worldwide, so it was bound to cause a lot of pain inside tech giants like Google. I personally think GDPR is well-intentioned, but I don’t find checklist-based compliance very useful: it’s too easy to misinterpret, forget, mislead, or forge, and consequences only arrive if a ruse is uncovered, i.e. when the damage is already done. It would be far more effective to compel the Googles and Amazons of the world, once a year, to make their engineers available to the media to answer direct privacy-related questions without fear of being fired or sued for an NDA breach. There you have it: zero taxpayer expense, occasional juicy stories, an annual excitement build-up comparable to the Super Bowl, and excellent user privacy. A true American way.

GDPR forced us to look at the user data it covers, and that’s where my gripes began, because I strongly feel that a lot of this data should not have existed in the first place. I try to keep my house free of the internet of things, or, FSM forbid, a smart assistant. I hesitantly allowed in Rachio, a lawn watering controller that, left unattended, makes a happy deluge lasting for hours, and maybe hacks Hillary’s emails in its free time. My bigger concern, though, is that IoT, just like many other things the tech giants have been pushing onto me lately, does not do anything I could not easily do myself, but it does take away control of my life. I don’t need Alexa to order stuff. I don’t want Google to listen for my questions; in fact, I don’t want it to listen at all. I can type, and that will do; thank you very much.

“Nobody is forcing you to buy Alexa”, I hear back. “It’s your choice to buy it or not, and people who do buy it like it”. This is a very dangerous simplification. First off, it’s not entirely true. A Samsung Smart TV has a mic in it now, and who knows when it’s on. I did not want a mic, I wanted a TV; they forced a mic on me. Elon Musk’s SolarCity gives you a Nest thermostat when you install solar panels on your roof: “$250 value!”. When you decline, it is hard not to feel a real loss. Nest is pretty, has the heft of a ‘real’ thing, and is convenient. And it collects data about you. And then there are online aides, like canned replies in your LinkedIn chat or Gmail. I never asked for them, but they are still there, insidiously reminding me that the mind of a machine is reading every word I write and trying to comprehend it. And that maybe my emails were read by the humans who were developing and evaluating this tech.

The truly evil part is in “people who do buy it like it”. It is true; after all, user satisfaction is what Google is optimizing for. We have already grown to rely on Google instead of our own memory. Back in 2010, the argument that Google somehow makes you smarter was fashionable. Maybe there is truth to that. Maybe constantly outsourcing our mental faculties to a search engine does not make us more stupid, but rather somehow reallocates a newly available cognitive resource. At least I can see that side of the argument. But there is no such excuse for smart assistants: for the price of a mild convenience, they burrow themselves into the very heart of our lives and steer them to their master’s liking. Google remains the same good company: smart people, clever algorithms, data-driven decisions, user interests first. Yet somehow the good company is now in the business of making evil products. It discovered that the best way to satisfy us is to take control away from us.

If we follow the gradient of Google’s target function, it’s easy to see that the ideal user of Google is one who fully relinquishes control. Google will pick up her phone, write her email, chat on Facebook, and apply for that promotion that is long overdue. A human is not even optional in this picture; she is an obstacle and a source of uncertainty. And while this will increase shareholder value, I am not convinced we need to subscribe to it. I would welcome a lesson from omniscient Google on how to write a good blog post; just let me write it myself.

I believe that forcing happiness and user satisfaction on us is a choice the tech giants make at our expense. But it does not have to be this way. There is a world of applications for AI that is designed to serve people: an AI that does precisely what it is told to, and stays deaf and blind until explicitly invoked by a user for a specific purpose. An Aladdin’s lamp AI. And don’t tell me that people would not want it. When it is executed well, people love control. People love WhatsApp because it does exactly what it promises. People love Slack because they can control what information flows to which channels. Unix engineers love the command line because it’s brief, precise, powerful, and explicit. Aladdin’s lamp AI should be exactly that.