No one really knows how to formalize intelligence. It is not as if science provides us with much insight on this question either, to be fair. In the 21st century, the academic battle lines and factions in the debates over cognitive science and artificial intelligence are virtually the same as they were during the first conference on artificial intelligence in the 1950s. Seriously. Take a look for yourself and you’ll see familiar topics such as “how can a computer be programmed to use a language,” “neuron nets,” “self-improvement,” and other mainstays. Beyond the idea that it’s OK to regard the mind as a kind of machine, there is little real agreement. In some ways we have actually regressed as well. Who is today’s Herbert Simon or Norbert Wiener?

The logic of automation is inconsistent and even incoherent

Yes, there are some things that we would obviously prefer to have a machine do. And there are certain tasks that machines objectively can’t do yet. For example, there is a clear economic logic to mass production over handcraft by specialized individual laborers working in guilds. Machines could take over that work, and most have come to accept that they should. But very few believed that AI should have taken over control of a proposed Reagan-era integrated missile defense/military command and control system, because the technical requirements for such an omniscient machine strategist were impossible to meet. Underlying this discussion, however, is a strange belief that certain things are simply destined to be automated while others will always remain the province of the human. This is sheer nonsense. As Harry Collins and Martin Kusch observed, artificial intelligence is human-like when we regard being human as mechanical. Or rather, when we are willing to accept a task as mechanical in its execution.

For example, let’s take customer service. Very few consumers are satisfied dealing with an automated voice interface; most prefer to speak to a live human. But if firms raised prices to hire more humans, consumers would likely balk. Likewise, self-driving cars look a lot less impressive once you notice that we opted for inefficient and deadly private automobiles over mass transit. Had we invested in a mass transit system instead of the automobile, we might have been able to automate much of American transport with AI, as Hong Kong does with automated planning and scheduling algorithms.

In general, there are many existing things that could be automated tomorrow if we so desired. And forget about advanced computing systems or deep neural nets. Basic technology, like the automated checkout stations seen at CVS stores, could replace a large swath of American jobs if employers made the attempt. Yet we see automation in some commercial areas and not in others. Why? There are also countless professions in which Americans would not mind talking to a robot instead of a live human being, and might even prefer it. Does anyone really believe, for example, that creativity and people skills are necessary to work at the DMV? Finally, to be crude, men and women “automate” one of the most basic of human interpersonal interactions every day with a variety of battery-operated sexual devices and life-size sex dolls.

Automation is a choice

Automation certainly can be said to follow from the instrumental technical logic of science and engineering. But as the previous section implies, automation is also a social choice, one made according to a dizzying array of economic, legal, political, cultural, and even cognitive-affective imperatives. Defining future employability in terms of “can’t be automated” is tautological: the jobs of the future can’t be automated because they can’t be automated. Never mind that what can and cannot be automated is a question that would require the political scientist, psychologist, lawyer, economist, sociologist, anthropologist, historian, and organizational theorist to be in the room alongside the computer scientist and the electrical engineer to produce an even remotely useful answer.

We are making two wagers whenever we predict the future employment landscape:

1. “I can successfully predict the course of future basic and applied research and development in the underlying science and technology.”
2. “I can successfully predict how science and technology will change society and how society will change the underlying science and technology.”

People are paid obscene amounts of money to develop theories, analyses, and hard predictions about the intersection of these two factors. And for what it is worth, Clayton Christensen, despite his flaws, at least made the attempt. Nonetheless, futurists seem to have a basic problem: they hold some variables constant while predicting enormous changes in others, and they routinely misjudge which variables ought to be held constant in the first place.

Conclusion: No One Knows Nothing?

Returning to the Cowen book, I suppose you have two options:

1. Be prepared, as Cowen says, for an unstable future in which you will have to constantly train and retrain yourself based on changes in economic fortunes. It’s an interesting book, and I can’t quite summarize it here beyond that basic point.
2. Try to find, as my friend Tdaxp often notes, the right conflux between what you love, what you can be good at, what other people will pay you to do, and what preserves your optionality in general if you are wrong or change your mind.

What I wouldn’t do is pin your hopes on arbitrary predictions and platitudes about what machines can and cannot do. To put stock in them is both to expose yourself to danger should you be wrong and to close off opportunities you might otherwise have gained with a less rigid idea of your own professional worth. By all means, pay attention to the debate over machines and technological unemployment (frequent correspondent Miles Brundage is doing his PhD on it), but on the whole, understand the stakes inherent in predicting whether or not a robot will take your job.