“I can only go on what the machine says.” My ears prick up. The Oyster card inspector on my bus to work has found someone who (apparently) hasn’t paid her fare. He argues: “I connect to the scanner by the driver when I get on the bus, and check every passenger’s card to see if they have recently swiped that scanner.”

She contests the reading from his handheld device: “But how could I have got on if I didn’t swipe my card? The driver would have stopped me.” He replies that not all drivers do. She goes on to explain that when she used her card, the scanner light went green. But the display next to it showed that the Travelcard expired in 2002. The bus driver commented on how odd this was – 2002 was before Oyster cards were available. Assuming there was a glitch in the system, and comforted by the green light, the driver ushered her onto the bus.

Maybe she was lying. The protracted discussion afterwards about whether she would give the inspector her contact details sowed seeds of doubt in my mind.

Either way, the recourse to the machine as the object of blame (on both sides) echoes a broader debate that’s played out in the media over the last year. Warnings of the coming robot economy – when the machines will take over our jobs – are everywhere. The idea took centre stage at the World Economic Forum this year: “Most Davos delegates expected barcodes and robots to replace humans at an accelerating rate”. Even an expert survey last year was split 50:50 between those arguing that robots will create more jobs than they displace and those worrying that they will deepen income inequality. Exactly how a robot economy will change everyday life is still up for grabs.

The everyday robot economy

The interaction between the passenger and the inspector is not much different to those I used to hear when London buses had conductors 15 years ago. Back then, no one paid as they got on the bus. The responsibility for collecting fares came down to the conductor, and their ability to remember who had just got on. Excuses included dropping the ticket on the floor, out of the window or losing it in a bag. The discussion with the inspector or conductor would then come down to ‘the rules’ rather than ‘the machine’. The sophisticated near-field technology now in place doesn’t change things much.

Except it does. Far fewer inspectors are needed on buses today. Much of the work that a conductor used to do is done by the card scanner system – making that job almost obsolete. (It returned with Boris Johnson’s new Routemaster bus design, but almost none of the fleet in operation have conductors.)

Also, there is a difference between ‘the machine’ and ‘the rules’. The inspector today sounded as though he were part of an automated system driven by his handheld device; if the machine said she hadn’t paid for her journey, he had no power to override it. The conversation I overheard wasn’t about whether she had abided by the rules. It was about whether the machine knew if she had. The inspector argued at one point that it didn’t matter whether the driver could corroborate her story. It was a machine-led process. Once an invalid ticket is detected, the inspector’s job is to be the human negotiator on behalf of the machine.

The subtleties of this robot economy bear little relation to the grandstanding about the prospects of artificial intelligence that everyone from Bill Gates to Stephen Hawking is engaged in at the moment. None of the cinematic worries about machines that take decisions about healthcare or military action are at play here. Hidden in these everyday, mundane interactions are different moral or ethical questions about the future of AI: if a job is affected but not taken over by a robot, how and when does the new system interact with a consumer? Is it ok to turn human social intelligence – managing a difficult customer – into a commodity? Is it ok that a decision lies with a handheld device, while the human is just a mouthpiece?

What does this mean for the second wave robot economy?

Mike Osborne and Carl Benedikt Frey from Oxford University have studied the risk of automation in the US economy, concluding that 47 per cent of jobs in the current workforce are at high risk of computerisation. They come to this conclusion by looking for jobs that can’t be automated; the 47 per cent is what’s left over. In their model, there are three bottlenecks that prevent automation:

…occupations that involve complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks are unlikely to be substituted by computer capital over the next decade or two.

These are bottlenecks that technological advances will find hard to overcome. The authors predict that the next decade will see steps forward in the algorithms that automate cognitive tasks, including cutting-edge techniques like machine learning, artificial intelligence and mobile robotics.

This second wave of the robot economy follows a first wave that automated manufacturing and repetitive manual tasks. So many of the desk jobs that our parents and grandparents would have done, like typing and manual data entry, are now becoming obsolete. And according to Osborne and Frey, some of the jobs that are most at risk of automation were formerly present in droves at many city offices. This includes the likes of accountants, legal clerks and bookkeepers – dying breeds, and casualties of the robot economy. But Osborne and Frey think that tasks like navigating complex environments, creative thinking, and social influence and persuasion will not be automated as part of these advances.

Some of my colleagues are interested in the second kind of task – creativity. They are working with Osborne and Frey to understand how resistant the creative economy is to automation: how many jobs in the creative economy involve truly creative tasks (if that’s not tautologous). Preliminary results look pretty good for creative occupations: 87 per cent are at low or no risk of automation.

Maybe service occupations where persuasion and influence are important will be saved too. The bus ticket inspector requires exactly the kind of social intelligence that Osborne and Frey argue a machine cannot replicate. But this doesn’t take into account the subtleties I witnessed on the top deck of the 76. It may not be job titles or wages that are most affected by the day-to-day of a robot economy. Automation of parts of a job, or of the context that someone works in, means that jobs not taken by machines are fundamentally changed in other ways. We may become slaves to hardwired decision-making systems.

To avoid this, we need to design human-machine jobs with the humans who will be part of them. I met Carla Brodley, a computer scientist from Northeastern University in the US, a few months ago. She applies advanced computing techniques to medical imaging, diagnosis and neuroscience. Brodley has publicly argued that the most interesting problems for machine learning come from real-world uses of these computational techniques. She says the tough bit of her job is knowing when and how to bring the expert – doctor, radiologist, scientist – into the design of the algorithm. But she is adamant that the success of her work depends entirely on this kind of user-led computational design. We need to find a Brodley for the bus ticket inspector.