Computer says no: What happens when we disagree with the robots we’re working under?

It’s the 24th century. Thanks to some beef over who should lead the Klingon High Council, the crew of the Starship Enterprise are pulled into a civil war. All senior officers are given a ship to command – except Data, the second officer, the only android in the fleet.

Despite possessing mental and physical capabilities far beyond those of any organic being, and despite 26 years of service, he is left behind. Humanity is so technologically advanced that it can traverse galaxies – but it hasn’t found the nerve to let a robot command a vessel.

Data eventually gets his ship but is met with hostility echoing today’s racial prejudices. His first officer makes his disdain plain and requests an immediate transfer. Captain Data listens.



‘I understand your concerns.’ He nods thoughtfully. ‘Request denied.’


Though we’re a long way from androids commanding starships, the question raised is not so distant: is Data a member of the crew like everyone else? Or is he just another useful tool – albeit one with a personality?

Fictional futures aside, versions of this scenario are not light years away. What happens if we’re working under robots and they say ‘no’?

First, we need to figure out if AIs can have rights, and what that means for the rest of us who work alongside them. Unlike the Star Trek example above, humanity does not have to grapple with the rights of a single machine. In the real world, technological advancement has led to a profusion of AIs throughout our workforce in a short amount of time. It’s conceivable and even probable that unlike Data, human workers will be vastly outnumbered by our synthetic colleagues.

According to the World Economic Forum’s Future of Jobs report, AIs are expected to perform half of all productive functions in the workplace by 2025.

Despite AIs doing an increasing number of jobs as well as – or even better than – their human colleagues, legal standards have not changed to account for the coming changes.

Very soon humans will not be the keystone of the workforce, so we can’t afford to ignore the conversation about who gets what rights for another century.

This means some think it’s time to revisit the ethical rights of machines.

(Picture: Metro.co.uk. Source: World Economic Forum)

‘As robotic and automated “workers” begin to populate our workplaces and as their emotions evolve closer to humans, we will need to figure out on a global basis what fundamental rights and protections will they have,’ Professor Andrew J. Sherman, a partner at law firm Seyfarth Shaw LLP, tells Metro.co.uk.

Prof Sherman says it’s time to think about how the usual workplace issues of pay, progression, vacation and even discrimination will look once we add AI into the mix.

As advances lead us ever-closer to superintelligence, humanity must grapple with the broadening of the definition of personhood and all its attendant dilemmas.

Automated workers will eventually outperform their human colleagues – what could this mean for monitoring performance, worker motivation and human-machine co-working?

As AIs become more human, Sherman argues they will need to be subject to disciplinary procedures in the same way that people are. It’s time for humans to game out what happens if our creations cause us harm.



‘Workers’ rights need to be developed to empower managers but also to protect the rights of these automated co-workers,’ he says.

Humans have always feared the ‘other’. Naturally, not everyone will welcome the coming robo-revolution. For some, displacement from the current order is frightening, promising an uncertain future.

Fear, resentment and the anxiety of being made irrelevant plagues workers, Sherman explains. Steps must be taken now to prepare our workforce – re-training and retooling them in anticipation of what’s to come – or we will face hostility between man and machine.

Add to this the workplace wisdom of ‘percussive maintenance’ – how often we give our tech a thump when it doesn’t work – and we have a new problem to contend with. What happens when the machine can thump you right back?

‘It will only be a matter of time before the automated creatures will “feel” this hostility and/or feel the need to retaliate,’ says Sherman.

This is precisely why we need to understand what the rights and responsibilities of AIs will look like in future, Sherman says. We need to know what robots ‘are’ so that we understand who is accountable if things go wrong.

‘Will they be charged with assault and battery and legally responsible for the harm they may cause under criminal or civil law? Or should a robot’s programmer be held jointly responsible?’

Prof Sherman is not the only one who feels that tension between human and non-human workers is something worth paying attention to.


And it could be because the first innovators in AI never considered that the fruits of their labour would be potentially so risky.

‘[AI pioneers] gave no lip service – let alone serious thought – to any safety concern or ethical qualm related to the creation of artificial minds and potential computer overlords,’ Prof Nick Bostrom, director of the Future of Humanity Institute at Oxford University, wrote in Superintelligence: Paths, Dangers, Strategies.

Bostrom argues that one day we could conceivably find ourselves confronted by AIs capable of rapid self-improvement, with the means of concealing their development from humans to circumvent our interference.

Some experts are concerned we may be designing intelligences so powerful they could one day take over. If ever there was an argument to take AI rights and the implications for humanity seriously, surely this is it.

If technology continues speeding forward without serious thought being given to robot rights and our place alongside them, distrust and discomfort will breed, especially when it comes to disagreeing with machines.

It’s crucial that we establish a working structure and hierarchy that includes the work done by AI. We have to establish whether AI can be in charge and asked to make decisions, or if humans will always be in top positions.

It could be the case that our legacy mirrors that of the technology we create for mass consumption and maximum profit – we may be engineering our own obsolescence.

If we don’t find a viable way for workers and machines to work together across their differences, then we may see ourselves taken out of the equation entirely.

This piece is part of Metro.co.uk’s series The Future Of Everything.
