A.I. Fuels Inequality and Climate Change, a New Report Warns

The paper from AI Now also calls for a moratorium on facial recognition

Photo: Artur Debat/Moment/Getty Images

The artificial intelligence industry is promoting worker mistreatment, discrimination, and pseudoscience, a new report claims. Meanwhile, tech companies continue to push facial recognition and ignore the potential carbon footprint of running energy-hungry A.I. systems.

The paper, titled the “AI Now 2019 Report,” was published by the AI Now Institute, an organization that researches the societal impacts of artificial intelligence. Now in its fourth year, the report makes 12 recommendations for the tech industry and policymakers.

“Across diverse domains and contexts, A.I. is widening inequality, placing information and control in the hands of those who already have power, and further disempowering those who don’t,” the report says.

Kate Crawford, co-founder of AI Now, says the organization is particularly concerned with affect recognition, a subset of facial recognition technology that claims to determine a person’s personality or emotional state from their facial expressions. The technology is being pitched as a tool to vet job applicants, track students’ attention in schools, and gather data on shoppers’ emotional states inside stores. AI Now recommends a legislative ban on its use in critical decisions like hiring.

Another emerging area of concern is the environmental impact of running power-hungry artificial intelligence programs. AI Now cites a report by a group at the University of Massachusetts, Amherst, which estimated that training a single A.I. model can emit 600,000 pounds of carbon dioxide.

As a proposed solution, AI Now researchers suggest that regulators require companies to publicly disclose the carbon footprint of their computations.

Four of the 12 recommendations specifically criticize the use of artificial intelligence for surveillance, calling for moratoriums on facial recognition and expanded laws relating to biometric data collection, including fingerprints and DNA.

While three U.S. municipalities — San Francisco; Oakland; and Somerville, Massachusetts — banned government use of facial recognition this year, state- and federal-level legislation has stalled. The report commends Senator Bernie Sanders for making a ban on police facial recognition part of his presidential campaign platform.

The report highlights Illinois’ Biometric Information Privacy Act as a model of how regulation can curb the harms of biometric data being used to surveil or track people. The 11-year-old law allows Illinois residents to sue any person or company that collects their biometric data without consent.

Other recommendations focus on the technology industry’s lack of diversity, businesses’ use of automation to displace human employees, and hostility toward tech workers who raise ethical concerns about the impact of their work.

“Too often, decisions about how A.I. is used are left to sales departments and executives, hidden behind highly confidential contractual agreements that are inaccessible to workers and the public,” the report says.