It won’t look like this

There’s a meme going around Silicon Valley that computers are becoming so smart that they will take over the world, and that this will be necessarily bad. Of course I’m paraphrasing a multitude of opinions here, but this sentiment isn’t too far off. The basic idea is that sometime soon, computers are going to become more powerful and more flexible than human brains, up to the point where a computer will be able to perfectly pass a Turing test and be able to act of its own volition. Once computers can outsmart humans, what’s to stop them from taking over — either by being connected to the nuclear warheads directly and deciding it’s about time for an apocalypse, or by convincing the guy who’s in charge of the nuclear warheads that it’s about time for an apocalypse?

The thing is, being a programmer, I’m very aware of how hard it is to get a program to do anything beyond what you already anticipated — and fundamentally, we don’t know what consciousness is, so we’re going to have an incredibly hard time convincing a computer to be conscious. We’re fairly accomplished at convincing other humans that we are conscious. A program, however, is only as versatile as its creators were articulate, and most creators I know are only finitely articulate. Long story short: I don’t see conscious programs as an imminent threat to humanity.

The Actual Overlords

On the other hand, humanity is already quite good at building huge systems with huge amounts of computing power, designed to do specific tasks. These systems are composed of many self-aware nodes, each capable of modifying many aspects of the system and even changing source code! Most people just call these systems Fortune 500 companies with programmers for employees. Facebook is a great example of one of these systems. It employs thousands of self-aware programmers and uses thousands of computers in order to build a product that is super-optimized for a specific task. Unfortunately, that task is to keep you staring at your computer screen, because that’s how they make money.

This scares me way more than the idea of a program tricking people into setting off bombs. People are already good at tricking people into setting off bombs. Unwittingly, we’ve allowed Facebook to design a system that encourages you to stare at your phone, and it only becomes more powerful the more you stare at your phone. It’s no fault of Facebook’s alone — this has been the way of the web since its inception. The most obvious monetization strategy for content-based websites is advertising, which works better the more you look at the website. Facebook is the real robot overlord, turning us unwitting users into zombies.

Staging the Coup

In its obsession with the meme of AI, the Valley appears to have primarily adopted two ludicrous points of view. One: that we can rein in the development of strong AI through rigorous mathematical research into ethical AI development (and that, magically, all AI researchers will adhere to these safe methods). Two, the related but equally absurd view: that strong evil AI is inevitable and will destroy humanity, so we might as well get it over with — because wouldn’t it be cool if we were witness to the end of existence? I am, of course, citing the implications of the mission statements of MIRI and OpenAI, respectively.

I propose a third, perhaps more novel viewpoint: if an evil AI is capable of taking over earth, there is no reason we can’t build a good AI capable of defending earth. Using this fighting-fire-with-fire methodology, I’ve set out to build a system to combat what I believe to be the actual threat to humanity: engagement platforms.

Anti-Engagement

Engagement bothers me because it is a serious departure from what computers were originally designed to do — supplement human ability, rather than aimlessly divert human attention. While our attention has been diverted, the profitability of engagement platforms has driven the industry to keep learning tricks to divert it further. Even more sinister is the fact that, supposing an evil strong AI does decide to take over the world, it would have a far easier time doing it if everybody was glued to their smartphones instead of out getting a beer with friends or going on a bike ride. Rather than sitting back and accepting the futility of fighting this engagement system, I’ve decided that we can use similar tactics to build a better cycle.

Me

Matthew Mirman

Founder and CEO @ WillChill