The field of artificial intelligence may finally be coming back around to Douglas Hofstadter. Since winning a Pulitzer Prize in nonfiction for his 1979 book Gödel, Escher, Bach: An Eternal Golden Braid, Hofstadter, 72, has been quietly thinking about thinking, and how we might get computers to do it.

In the early days of AI research, in the 1950s and '60s, the goal was to create computers that think and learn the way humans do, by modeling our ability to intuitively understand the world around us. But thinking turned out to be more complicated than anything that could fit in a 1950s computer program. The results were disappointing.

What did eventually yield results, though, was giving up on thinking altogether, focusing computers instead on highly specific tasks and giving them vast amounts of relevant data—resulting in the AI boom we see today. A computer can beat a human at chess not by searching for the satisfaction of making an elegant move, but by sifting through millions of previously played games to see which move is most likely to lead to victory.

In 2017, though, it looks like AI may need to tackle the old problem of teaching computers how to be more human. AI pioneer Geoffrey Hinton recently told Axios that he is “deeply suspicious” of methods that, say, use a bunch of data on chess matches to teach chess. Instead, computers should be able to learn about anything, without millions of specific data points, the way humans do.

Through all of these shifts in AI, Hofstadter, a professor of cognitive science and comparative literature at Indiana University, has been trying to understand how thinking works. He doesn’t believe the AI we have right now is “intelligent” at all, and he fears the field has taken humanity down a dangerous path. Quartz spoke to Hofstadter about the current state of AI, what it is doing wrong, and what dangers lie ahead.

Quartz: Let’s talk a little bit about how well computers understand language. To translate effectively from one language to another, a machine would have to have a deep understanding of the world, wouldn’t it?

Douglas Hofstadter: When I think about translation, I think about creating a text in a second language that is as good as the text in the original language. So if the text in the original language was artistic and beautiful, then the text in the second language should be artistic and beautiful in the same manner. And that's way out of the domain of Google Translate.

Google Translate is not crafted by something that has understanding. The holes in its understanding come out in all sorts of unexpected places. There was a sentence in a German text that said, in English, "The maid brought in the soup." When I looked at the sentence more carefully, Google Translate said, "The maid entered the soup." It got "maid" correct, but the image and action going on there were absolutely unrelated to anything that would really happen in the real world.

I’m not trying to insult Google Translate. I’m trying to say, look, remember at all times that the words the computer is using are not imbued with meaning for it.

QZ: Is this what you call the “Eliza effect”?

DH: The Eliza effect is the effect of us taking words or phrases as if they have meaning simply because they are expressed in language. And when another entity is manipulating those words and spitting them out onto our screen, or speaking them, we tend to assume that behind the scenes there is thinking. Which can be extremely wrong.

QZ: Do you think it’s possible for a computer to do literary or elegant translation at the level of a human without this kind of thinking?

DH: No, I don’t. I really don’t. Because I think the world is far too complex.

QZ: Do we need to have other terms for what artificial intelligence does, to further this idea that it’s not an incredibly apt representation of intelligence?

DH: That’s an interesting question. I don’t think that what we have today is “intelligence.” I’ll go back to something I’m a little bit familiar with that goes back quite a number of years, which is self-driving cars.

This happened to me, so it’s a real situation.

I was driving from my town of Bloomington, Indiana, to Chicago to give a talk. And then I ran into a very big traffic jam on the freeway after an hour or two on the road. I’m still far from Chicago, and this traffic is stopped dead on the freeway. Now, what do I decide to do? I see some people are trying to drive across the grass strip that separates the northbound traffic and the southbound traffic, thinking that they’re going to go south on the freeway and then maybe get off and take some smaller roads. That’s a possibility, but then I see that some of the cars have gotten stuck in the grass, which is not flat, and it’s muddy. So I’m thinking, “Do I want to do that? Do I want to take that risk?”

Let’s say I wait a while and wind up getting back on the freeway, but now I’m under very, very high pressure to get to Chicago. I’ve lost an hour and I have almost no time. Now what do I do? How much risk do I take? How important is it to me to go to this university to deliver this lecture? What if I call up and say I’m going to be a half an hour late? An hour late? So I’m driving along and I’m thinking, am I going to go 80 miles per hour in the 70 zone? Am I going to go 90? How fast am I going to go?

This is part, for me, of what “driving” is. This is driving. It’s showing that the real world impinges in many facets on what the nature of driving is.

If you look at situations in the world, they don’t come framed, like a chess game or a Go game or something like that. A situation in the world is something that has no boundaries at all, you don’t know what’s in the situation, what’s out of the situation.

QZ: Does it trouble you that people will infer [machine intelligence] is thinking?

DH: If you ask me in principle if it’s possible for computing hardware to do something like thinking, I would say absolutely it’s possible. Computing hardware can do anything that a brain could do, but I don’t think at this point we’re doing what brains do. We’re simulating the surface level of it, and many people are falling for the illusion. Sometimes the performance of these machines is spectacular.

When I bumped into the field I guess I had the following feeling: It’s fun to create programs that look like they can think, and it’s fun to create programs that can do little bits of thinking, and give you the feeling that you’re approaching something of the nature of thinking. And the subtler you are, the better it is.

I was comfortable with the idea that machines were slowly going to get better. It made me think humans are the ultimate target; human intelligence is a magnificent thing.

In other words, my feeling was there was going to be a slow asymptotic approach on the part of computer intelligence to human intelligence. Asymptotic meaning it was going to approach it from below, but it wasn’t going to surpass it. It was going to approach it as a curve approaches a straight line. It’s going to level out underneath the straight line.

But as of late, with all the victories of things like AlphaGo and Deep Blue, one starts to wonder whether the lines aren’t going to cross, just as they did with chess. The two lines are coming along and they just intersect. There’s no asymptote. It just crosses, and then the computer line keeps on rising.

Then to me that’s such a different picture, and it’s not a picture that I like.

What frightens me is the scenario of human thought being overwhelmed and left in the dust. Not being aided or abetted by computers, but being completely overwhelmed, and we are to computers as cockroaches or fleas are to us. That would be scary.

This interview has been edited for concision and clarity.