In many low-wage U.S. workplaces, 2019 looked like a dystopian nightmare.

Algorithmic pay models and AI surveillance technologies spread to new industries and tightened their grip on others. Many workers became accountable to algorithms instead of bosses. Countless Amazon warehouse workers suffered injuries while trying to meet production quotas set by AI systems, while those who fell behind were terminated. Gig workers were forced to seek charity from the Red Cross and churches during the holidays after their wages fell due to changes in the algorithms that determine their pay. Others had their utilities shut off after being “deactivated” on gig-working apps.

A new report on the social implications of artificial intelligence from NYU’s A.I. Now Institute argues that people who worked under algorithms in 2019, from Uber drivers to Amazon warehouse workers to even some white-collar office workers who may not know they are being surveilled, have increasing cause for concern and grounds for collective action.

“The spread of algorithmic management technology in the workplace is increasing the power asymmetry between workers and employers,” the report’s authors write. “This year, we’ve seen the rapid acceleration of algorithmic systems that control everything from interviewing and onboarding, to worker productivity, to wage setting and scheduling.”

Indeed, a growing number of employers, including farms, hospitals, and hotels, rely on AI systems to set productivity quotas and surveil workers. At Amazon, the pace of work is calculated automatically by an AI system and changes daily. According to an Amazon warehouse organizer in Minnesota, workers are fired if they fall behind on productivity quotas three times in a day. Meanwhile, contractors who assemble medical supplies at a hospital in Philadelphia must adhere to an “opaque algorithmic rate” or else face discipline. Workers at Olive Garden, Applebee’s, and Outback Steakhouse have also fallen under similar forms of surveillance.

“Such rate-setting systems rely on pervasive worker surveillance to measure how much they are doing,” the report states. “Systems to enable such invasive worker monitoring are becoming more common.”

Perhaps no workforce has taken a harder fall under AI-controlled systems than gig workers at companies like Uber, Lyft, and Instacart, who have seen their earnings drop precipitously over the past year. In some cases, earnings have fallen by more than 50 percent as the apps have tinkered with algorithmic pay models to drive down costs while expanding into new labor markets and preparing for initial public offerings.

“What’s going on doesn’t seem terribly complicated,” Meredith Whittaker, one of the authors of the report and a principal organizer of the 2018 Google walkout, told Motherboard. “These companies have deployed platforms that only a handful of engineers and people at the top of the corporate ladder have access to [and] understand, and these platforms are optimized for endless revenue growth, for profit and for shareholder value.”

Adding insult to injury, gig workers on app-based platforms have been increasingly subject to sudden unexplained “deactivations” (in essence, firings determined by an algorithm), reduced work hours, and earnings that fall short of the costs of gas and other expenses that they must cover to complete “gig” assignments.

“I don’t think it’s an accident that we’re seeing this bait and switch model, where these companies entice workers over the course of years to shape their lives and livelihoods around working for the platform,” Whittaker said. “They buy and lease cars. They quit other jobs. They situate their family lives with the expectation that they can earn some semblance of a predictable living wage. Once they’re locked in, their wages drop and drop and drop. They’re essentially becoming experimental subjects in an algorithmic workplace, where only the bosses have any insight or any control.”

The report’s authors say another cause for concern is the increased use of AI in hiring and firing practices. “Commercial firms across industries, including major employers like Unilever, Goldman Sachs, and Target, are integrating predictive technologies into the process of selecting whom they hire, and whom they fire,” the report’s authors write. “AI systems also actively shape employment advertising, resume ranking, and assessment of both active and passive recruitment.”

Such systems often select based on categories such as “cultural fit” and “competence,” leading experts to argue that these tools likely exacerbate racial and gender inequalities. Yet, because employers aren’t required to disclose whether they use these systems, it’s hard to determine their impact. In January, Illinois will become the first state to require employers to inform job candidates when algorithmic systems are used to judge them during video interviews.

“It’s really hard to tell where and how these systems are being used,” Whittaker said. “Oftentimes, we get this information from worker whistleblowers or vendors who have public advertising. But there’s a challenge for research here, because these are protected behind corporate secrecy.”

“Companies are optimizing for efficiency and they’re also optimizing for a plausible deniability,” Whittaker added. “It’s the algorithm’s fault when there’s a pattern of biased hiring or when tech companies never seem to meet diversity goals.”

While the future may seem grim, the good news is the recent mobilization of workers across the gig economy, the tech industry, and beyond. In 2018, more workers staged major strike actions than in any year since 1986. Not coincidentally, the report’s authors argue that joining unions and other forms of labor organizing are critical to rolling back the damage already done by AI systems, for both white- and blue-collar workers.