This post from Travis Addair, senior software engineer at Uber, originally appeared on Quora as an answer to the question "Which tech jobs are safe from automation until at least 2045?"

He says the simpler your job is, the easier it likely is to automate.

It's harder to automate jobs that rely on creativity or interpersonal skills — like software engineering, product management, and research science.

But technology could advance significantly in 30 years, so it's hard to say exactly which jobs will or won't still be around.



Ask someone in tech what they do every day.

If they can give you a step-by-step explanation of their job, with no intangibles or other ambiguities that cannot be specified precisely, then it's likely that someday in the not-so-distant future some entrepreneur will find a way to automate it.

In general, jobs that rely on problem solving, creativity, research, design, and interpersonal skills are less likely to be automated than those with well-defined steps to perform each day.

Here are some examples that come to mind within tech:

Software Engineer (problem solving, creativity, design)

Product Manager (interpersonal skills, design)

Research Scientist (research, creativity)

Engineering Manager (interpersonal skills)

Product Designer (design, creativity)

Now for the obligatory caveat for the singularity alarmists.

30 years is a long time. 30 years ago I wasn't even born, so it's pretty much impossible to extrapolate out for longer than I've been alive. Maybe some technological breakthrough will occur that makes all jobs obsolete. Maybe machine learning will become so creative, so good at solving all our problems, that we won't need skilled technical people anymore. Maybe. Possibly. Could be.

Thing is, people tend to make the mistake of assuming that because machine learning today is getting better and better at solving certain types of problems humans are good at solving, it's only a matter of time until it becomes more "intelligent" (whatever that means) than humans generally. That's a fallacy.

What machine learning is good at doing is finding patterns and correlations in data, and acting on those patterns to make predictions. There's no critical thinking, no logic or reasoning. So until those holes are filled by a suitably novel scientific breakthrough, we can say that machine learning is not good at complex tasks like problem solving or design in general.
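To make that point concrete, here is a minimal sketch (with made-up toy data) of what "finding patterns and making predictions" looks like at its simplest: a least-squares line fit. The model picks up the correlation in the numbers and extrapolates from it, but nothing in the code understands *why* the pattern holds — which is exactly the gap described above.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope follows the covariance between x and y; intercept re-centers the line.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical data with an obvious linear pattern (roughly y = 2x + 1).
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 9.0, 11.1]

a, b = fit_line(xs, ys)
# The "prediction" is pure extrapolation from the pattern — no reasoning about
# what x and y actually are, or whether the trend should continue.
prediction = a * 6 + b
print(round(prediction, 2))  # → 13.09
```

The same idea scales up to modern machine learning: far more parameters and far richer patterns, but still pattern-matching on data rather than reasoning from first principles.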