About one third of young Canadians would prefer to work for a robotic boss rather than a flesh-and-blood one, a new study has revealed.

According to a survey of 2,299 people carried out by Vancouver-based consultancy Intensions Consulting and futurist Nikolas Badminton, 31 percent of Canadians aged 20 to 39 think that a computer program would be better than a human at hiring, assessing and managing employees. The figure remains fairly high across the general population, with 26 percent of respondents of all ages in favor of 'robo-bosses'.

New study from Intensions Consulting and me. The study, which surveyed 2299 adults across Canada, found that a… https://t.co/hJfOT6cbCc — Nikolas Badminton (@NikolasFuturist) March 29, 2016

The main pro-robot argument is that synthetic managers would be unbiased and therefore more honest and reliable than their human counterparts. Sexism, prejudice and irrational behaviors are thought to be among the ills a robotic boss would do away with.

According to Nikolas Badminton, who contributed to the study: "People are losing faith in human management, and rightly so.

"Who would you trust, a human with personal biases and opinions or a rational and balanced AI? These results are not surprising, and I expect to start seeing automated HR and management systems being deployed in the next 3 to 5 years — with a human touch to maintain creativity and empathy."

There is, of course, another side to the story, and it has a lot to do with how the survey's questions were phrased. Respondents were asked whether they would trust an "unbiased machine" as a manager. Naturally, everybody wants to be managed by something programmed to be fair.

But the very use of the word "unbiased" rests on a big assumption about the future of artificial intelligence. Technically, it is hard to know whether unbiased machines can be created at all: would not the biases of whoever programs the bot affect the way a 'robo-boss' behaves?

Beyond that, researchers and entrepreneurs such as Oxford's Nick Bostrom and Tesla's Elon Musk have repeatedly warned that an artificial intelligence might not share our system of values, a possibility they think could lead to the annihilation of humankind. For instance, an artificial intelligence programmed to make people happy could decide that "happiness" means being dead, and proceed to kill us all.

This value discrepancy means that a robot's concept of being free from bias may not match ours, and by the time we come to that realization it may well be too late, with humanity having already surrendered itself to its robotic overlords.