This post asks whether we are making a mistake in the way we anticipate the future of robots and intelligent machines. It is all based on my perceptions and understanding of how far our digital assistants/nemeses have come. Please comment below if you know of progress I appear not to be aware of!

–

I’ve been reading a lot about robots, artificial intelligence and machine learning. I am trying to weigh up what it all means. Will jobs disappear? Whose jobs? Who stays in work, and what will they do? Will we even need to work in future?

One machine I am definitely excited about is the new best player at chess. It dominates because we demanded that it teach itself. Within a few hours it beat one of the top systems in the world. That is exciting and also terrifying.

And yet. Some robots are still utter rubbish.

The Jetsons’ robot maid is nowhere to be seen in my life. There is little evidence of robots coming to dominate in many of the domains people insisted they would.

Voice recognition, for example, remains underdeveloped, despite years of focus. And yet the machines can turn around and defeat us at Go, the one thing we thought we could keep an edge in for another few years.

It seems to me we are bad judges of what intelligent machines will be good at.

Often, the machines are better at things we consider hard than things we consider easy. One of the first things machines came to dominate at was chess (a game for the human intellectual elite). They remain truly appalling at soccer (a game for everybody).

We assume things children could do will be easy for robots. And we scream with laughter when they find them hard. Later, we are amazed when machines can easily outstrip us at things only the smartest adults can do. This paradox needs resolving.

Why are they smartest at hard things and dumbest at easy things?

Are we benchmarking things wrong? Perhaps we over-emphasise how smart the adult human is; how capable of operating effectively in the abstract world. And underemphasise how physically capable the average adult human is in the material world.

Maybe what we see as hard is just abstract; and what we see as easy involves manipulating the infinite variability of the real world.

From where I work I can watch two turtle doves improving their nest. One flies out, finds a stick or bit of grass, and brings it back. The other takes it and works it into the existing structure with a wiggle of its head. I doubt we could program two drones to do that, even with a decade and a multi-million-dollar budget.

How different are we from the animals? Is it possible the animal parts of our brain are actually far more advanced than the human parts of our brain? Our software has had aeons to work on things like navigating 3D space, recognising and manipulating never-before-seen objects, hearing and identifying sounds. But only a few dozen millennia to work on the higher human plane of logic and abstraction.

Computers operate in that abstract world and are – mostly – killing us at it. Arithmetic is their bread and butter. Accounting, logic and other kinds of rule following that defined human intelligence until quite recently are firmly within their grasp.

Yet machines’ attempts to navigate the physical world are mostly poor. If you consider how refined those animal circuits are, is it any wonder that machines still can’t do these animal things? If what we can do easily is actually very hard, it might be less surprising that our first attempts at self-driving cars smash into giant objects right in front of them. And we might approach the task of training robots to interact with dynamic real-world space with more humility.

ROBOTS TAKING OUR JOBS

If we have misconstrued the extent of human skill in various domains, could that lead to confusion about what tasks can easily be automated? Everyone seems to think truck driving is due for immediate automation. What if that is because of a sense that truckies aren’t smart?

Many people assume a chess-playing computer must also be able to do everything a person of everyday intelligence can do. Here’s Tesla CEO Elon Musk, speaking at the company’s annual earnings call on 7 February 2018.

“I am pretty excited about how much progress we are making on the neural net front… It is also one of those things where it is kind of exponential. … It doesn’t seem much progress, doesn’t seem much progress and then suddenly: Wow!

“That has been my observation generally with AI stuff. And if you look at what Google’s DeepMind did with AlphaGo: it went from not being able to beat even a pretty good Go player to suddenly it could beat the European champion. Then it could beat the world champion. Then it could thrash the world champion. Then it could thrash everyone simultaneously.

“Then they had AlphaZero, which could thrash AlphaGo! And just learning by itself was better than all the human experts.

“It is going to kind of be like that for self-driving. It will seem like this is a lame driver, this is a lame driver, this is a pretty good driver … [then] holy cow, this driver is good!”

It seems to follow logically, but it might not.

We value abstract cognition because it is rare in humans. But we don’t value what is profoundly and abundantly available to us – skill in moving through the real world. That’s why the stock analyst gets paid more than the taxi driver.

Yet traders are already being replaced with algorithms. Taxi drivers – not yet. That could be a warning signal, and our model of intelligence could be impeding us from seeing it.

The smartest people applying neural nets to self-driving vehicles say they are still a long way off.

“Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology. For those of us who have been working on the technology for a long time, we’re going to tell you the issue is still really hard, as the systems are as complex as ever.”

And that’s just the driving part. There was a great post on Marginal Revolution last week about the complexity of a truck driver’s job.

“I wonder how many of the people making predictions about the future of truck drivers have ever ridden with one to see what they do? One of the big failings of high-level analyses of future trends is that in general they either ignore or seriously underestimate the complexity of the job at a detailed level. Lots of jobs look simple or rote from a think tank or government office, but turn out to be quite complex when you dive into the details.

“For example, truck drivers don’t just drive trucks. They also secure loads, including determining what to load first and last and how to tie it all down securely. They act as agents for the trucking company. They verify that what they are picking up is what is on the manifest. They are the early warning system for vehicle maintenance. They deal with the government and others at weighing stations. When sleeping in the cab, they act as security for the load. If the vehicle breaks down, they set up road flares and contact authorities. If the vehicle doesn’t handle correctly, the driver has to stop and analyze what’s wrong – blown tire, shifting load, whatever. …

“I’ve been working in automation for 20 years. When you see how hard it is to simply digitize a paper process inside a single plant (often a multi-year project), you start to roll your eyes at ivory tower claims of entire industries being totally transformed by automation in a few years.”

COUNTERPOINT

Perhaps this argument is upside down. Perhaps we chose not to make computers good at the material world. Perhaps we trained computers to do abstract things because only a few people can do them. To get the benefit of training a computer we must set it on tasks where human skill is rare. It is not that they couldn’t do what we can do, just that we haven’t put in the effort.

FINDING THE PATTERNS TO RECOGNISE

I suspect the problem is not so much in asking computers to process the data produced by manual tasks as getting them to identify it as data.

In an abstract world, data is always in the right place and fully visible. In a spreadsheet, the value you need sits in a known cell. And if it doesn’t, nobody expects the spreadsheet to figure that out and fix it. In the physical world, information must first be found. Where’s the label on this box? Where’s the face on this human? Where’s the road under this snow?
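The contrast can be made concrete with a toy sketch (all names and data here are invented for illustration): in the abstract world, retrieval is a single lookup at a known address; in the physical world, the program must first locate the fact in noisy input, and may fail to find it at all.

```python
# Abstract world: a spreadsheet-like record. The value lives at a
# known key, so retrieval is one lookup that cannot "miss".
row = {"item": "widget", "price": 4.99, "qty": 12}
price = row["price"]  # always in the same place

# Physical world (simulated): the same fact is buried somewhere in
# unstructured input, so it must be *found* before it can be used.
label_scan = "qty 12 ... made in AU ... price 4.99 ... widget"

def find_price(text):
    """Search noisy input for a price; may return None if absent."""
    tokens = text.split()
    for i, token in enumerate(tokens):
        if token == "price":
            return float(tokens[i + 1])
    return None  # the label might be missing, smudged or misplaced

assert price == find_price(label_scan) == 4.99
```

Even this cartoon version needs a search step and a failure path; real perception replaces the string search with vision models, and the failure modes multiply accordingly.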

We already know how you can get robots to take on jobs in the material world. You need to standardise the inputs. Robots do a wonderful job welding things that come down a production line. They do a great job driving trains in wholly separated systems. They do a perfect job of driving lifts up and down lift-wells, etc. In these cases we give the material world the standardised appearance of an abstract one. Take away the production line, the protected rails and the lift-well, and those systems are all at sea.

Neural nets will of course be much smarter than the computers that drive lifts. They will be able to parse information from the material world. Self-driving cars can use cameras, radar, lidar and 360-degree vision to get advantages over us in sensing. These systems should be able to learn fast.

But I am not yet convinced we can apply the lessons from an abstract world which has only 64 different locations, to a real world which is infinitely more complex. Assuming those lessons will cross over is the exact kind of intellectual trap a cognitively limited species would fall into.