Watch out workers, algorithms are coming to replace you — maybe

For five years, Israeli author and historian Yuval Noah Harari has quietly emerged as a bona fide pop-intellectual. His 2014 book “Sapiens: A Brief History of Humankind” is a sprawling account of human history from the Stone Age to the 21st century; Ridley Scott, who directed “Alien,” is co-leading its screen adaptation. Harari’s latest book, “21 Lessons for the 21st Century,” is an equally ambitious look at key issues shaping contemporary global conversations — from immigration to nationalism, climate change to artificial intelligence. Harari recently spoke about the benefits and dangers of AI and its potential to upend the ways we live, learn and work. The conversation has been edited and condensed.

Q: AI is still so new that it remains relatively unregulated. Does that worry you?

A: There is no lack of utopian scenarios in which AI emerges as a hero, but it can actually go wrong in so many ways. And this is why the only really effective form of AI regulation is global regulation. If the world gets into an AI arms race, it will almost certainly guarantee the worst possible outcome.


Q: Is there a country already winning the AI race?

A: China was really the first country to tackle AI on a national level in terms of focused, governmental thinking; they were the first to say “we need to win this thing,” and they are certainly ahead of the United States and Europe by a few years.

Q: Have the Chinese been able to weaponize AI yet?

A: Everyone is weaponizing AI. Some countries are building autonomous weapons systems based on AI, while others are focused on disinformation or propaganda or bots. It takes different forms in different countries. In Israel, for instance, we have one of the largest laboratories for AI surveillance in the world — it’s called the Occupied Territories. In fact, one of the reasons Israel is such a leader in AI surveillance is because of the Israeli-Palestinian conflict.

Q: Explain this a bit further.

A: Part of why the occupation is so successful is because of AI surveillance technology and big data algorithms. You have major investment in AI (in Israel) because there are real-time stakes in the outcomes — it’s not just some future scenario.

Q: AI was supposed to make decision-making a whole lot easier. Has this happened?

A: AI allows you to analyze more data more efficiently and far more quickly, so it should be able to help make better decisions. But it depends on the decision. If you want to get to a major bus station, AI can help you find the easiest route. But then you have cases where someone, perhaps a rival, is trying to undermine that decision-making. For instance, when the decision is about choosing a government, there may be players who want to disrupt this process and make it more complicated than ever before.

Q: Is there a limit to this shift?

A: Well, AI is only as powerful as the metrics behind it.

Q: And who controls the metrics?

A: Humans do; metrics come from people, not machines. You define the metrics — who to marry or what college to attend — and then you let AI make the best decision possible. This works because AI has a far more realistic understanding of the world than you do, and because humans tend to make terrible decisions.

Q: But what if AI makes mistakes?

A: The goal of AI isn’t to be perfect, because you can always adjust the metrics. AI simply needs to do better than humans can do — which is usually not very hard.

Q: What remains the biggest misconception about AI?

A: People confuse intelligence with consciousness; they expect AI to have consciousness, which is a total mistake. Intelligence is the ability to solve problems; consciousness is the ability to feel things — pain, hate, love, pleasure.

Q: Can machines develop consciousness?

A: Well, there are “experts” in science-fiction films who think machines can, but no — there’s no indication that computers are anywhere on the path to developing consciousness.

Q: Do we even want computers with feelings?

A: Generally, we don’t want a computer to feel, we want the computer to understand what we feel. Take medicine. People like to think they’d always prefer a human doctor rather than an AI doctor. But an AI doctor could be perfectly tailored to your exact personality and understand your emotions, maybe even better than your own mother. All without consciousness. You don’t need to have emotions to recognize the emotions of others.

Q: So what’s left that AI hasn’t touched?

A: In the short term, there’s still quite a bit. For now, most skills that demand a combination of cognitive and manual abilities are beyond AI’s reach. Take medicine once again: if you compare a doctor with a nurse, it’s far easier for AI to replace a doctor — who basically just analyzes data for diagnoses and suggests treatments. But replacing a nurse, who injects medications and changes bandages, is far more difficult. This will change, though; we are really at the beginning of AI’s full potential.

Q: So is the AI revolution almost upon us?

A: Not exactly. We won’t see one massive disruption in, say, five or 10 years — it will be more of a cascade of ever-bigger disruptions.

Q: And how will this affect the workforce?

A: The economy will face ever-greater disruptions in the workforce because of AI. And in the long run, no element of the job market will be 100 percent safe from AI and automation. People will need to continually reinvent themselves. This may take 50 years, but ultimately nothing is safe.

David Kaufman is a New York Times writer.