No privacy protections for workers

The monitoring that's possible now will seem simplistic compared to what's coming: a future in which robotics and other technologies capture huge amounts of personal information to feed artificial intelligence software that learns which metrics are associated with things such as workers' moods and energy levels, or even diseases like depression.

One healthcare analytics firm, whose clients include some of the biggest employers in the country, already uses workers' internet search histories and medical insurance claims to predict who is at risk of getting diabetes or considering becoming pregnant. The company says it provides only summary information to clients, such as the number of women in a workplace who are trying to have children, but in most instances it could probably legally identify specific workers.

Except for some narrow exceptions—like in bathrooms and other specific areas where workers can expect relative privacy—private-sector employees have virtually no way, nor any legal right, to opt out of this sort of monitoring. They may not even be informed that it is occurring. Public-sector employees have more protection, thanks to the Fourth Amendment's prohibition against unreasonable searches, but in government workplaces, the scope of that prohibition is quite narrow.

AI discrimination

In contrast to the almost total lack of privacy laws protecting workers, employment discrimination laws—while far from perfect—can provide some important protections for employees. But those laws have already faced criticism for their overly simplistic and limited view of what constitutes discrimination, which makes it very difficult for victims to file and win lawsuits or obtain meaningful settlements. Emerging technology, particularly AI, will exacerbate this problem.

AI software programs used in the hiring process are marketed as eliminating or reducing biased human decision-making.
In fact, they can create more bias, because these systems depend on large collections of data, which can be biased themselves. For instance, Amazon recently abandoned a multiyear project to develop an AI hiring program because it kept discriminating against women. Apparently, the AI program learned from Amazon’s male-dominated workforce that being a man was associated with being a good worker. To its credit, Amazon never used the program for actual hiring decisions, but what about employers who lack the resources, knowledge, or desire to identify biased AI?

The law governing discrimination based on computer algorithms remains unclear, just as other technologies stretch employment laws and regulations well beyond their intended applications. Without an update to the rules, more workers will continue to fall outside traditional worker protections, and they may not even be aware of how vulnerable they really are.

Jeffrey Hirsch is the Geneva Yeargan Rand Distinguished Professor of Law at the University of North Carolina at Chapel Hill.

This article is republished from The Conversation under a Creative Commons license. Read the original article.