Lots of people are talking about “The Fear of AI”, and whether or not it’s warranted.

But it’s important to distinguish between two different types of AI here.

1. Machine Learning + Big Data + Automation, resulting in the obsolescence of millions of jobs.
2. The creation of a new, super-intelligent life form, which ultimately attacks, subjugates, or destroys humanity.

These are very different things.

The first is already happening, or at least the ML + Data part is, and it’s yielding massive opportunities for efficiency. The automation piece is related but distinct. Still, it’s quite realistic to imagine these technologies combining to make humans a highly inefficient way for businesses to get work done.

The second one, where we create new life so intelligent that it simply takes over and wrecks the world, is the one most experts agree is further out. It’s also hard to plan for: it seems so unlikely right now that people don’t want to think about it.

The key point is that “Artificial Intelligence” means different things to different people in the context of existential threats.

There’s the prospect of humans having no work to do because machines do it better, and then there’s the prospect of humans having nothing to do because we’re all dead.

The first issue is economic and social, meaning:

What do we do with a species and society that’s based on having jobs and earning money for livelihood when there are no more jobs?

And the second is computer science and regulation, meaning:

How do we manage the AI arms race so that we don’t accidentally create a super-intelligence that destroys the world?

We should be thinking about both, but without confusing them.

Notes