ImageNet Roulette is a video installation and app that accompanies Crawford and Paglen's exhibition.

It went viral in September, after millions of people uploaded their photos to see how they would be classified by ImageNet. This is a question with significant implications. ImageNet is the canonical object recognition dataset. It has done more than any other to shape the AI industry.

While some of ImageNet’s categories are strange, or even funny, the dataset is also filled with extremely problematic classifications, many of them racist and misogynist. ImageNet Roulette gave people an interface to see how AI systems classify them — exposing the thin and highly stereotyped categories these systems apply to our complex and dynamic world. Crawford and Paglen published an investigative article that opened the hood on multiple benchmark training sets to reveal their political structures.

This is another reason why art and research together can sometimes have more impact than either alone, making us consider who gets to define the categories into which we are placed, and with what consequences.

3. CITIES, SURVEILLANCE, BORDERS

Issues of power, classification, and control are in the foreground of the large-scale roll-out of corporate surveillance systems across the U.S. this year. Take Amazon’s Ring, a surveillance video camera and doorbell system designed so people can have 24/7 footage of their homes and neighborhoods.

Amazon is partnering with over 400 police departments to promote Ring, asking police to convince residents to buy the system. A little like turning cops into door-to-door surveillance salespeople.

As part of the deal, Amazon gets ongoing access to video footage; police get access to a portal of Ring videos that they can use whenever they want. The company has already filed a patent for facial recognition in this space, indicating that they would like the ability to compare subjects on camera with a “database of suspicious persons” — effectively creating a privatized surveillance system of homes across the country.

But Ring is just one part of a much bigger problem. As academics like Burcu Baykurt, Molly Sauter, and AI Now fellow Ben Green have shown, the techno-utopian rhetoric of “smart cities” is hiding deeper issues of injustice and inequality.

And communities are taking this issue on. In August, San Diego residents protested the installation of “smart” light poles, at the exact same time as Hong Kong protesters were pulling down similar light posts and using lasers and gas masks to confound surveillance cameras.

And in June this year, students and parents in Lockport, NY, protested against a facial recognition system in their school that would give the district the ability to track and map any student or teacher at any time. The district has now paused the effort.

Building on the fights against tech companies reshaping cities, from Sidewalk Toronto’s waterfront project to Google’s expansion into San Jose, researchers and activists are showing the connections between tech infrastructure and gentrification — something that the Anti-Eviction Mapping Project, led by AI Now postdoc Erin McElroy, has been documenting for years.

And of course, in February a large coalition here in New York pushed Amazon to abandon its second headquarters in Queens. Organizers highlighted not only the massive incentive package New York had offered the company, but Amazon’s labor practices, deployment of facial recognition, and contracts with ICE. It’s another reminder of why these are multi-issue campaigns — particularly given that tech companies have interests across so many sectors.

Of course, one of the most unaccountable and abusive contexts for these tools is at the US southern border, where AI systems are being deployed by ICE and Customs and Border Protection.

Right now there are 52,000 immigrants confined in jails, prisons, and other forms of detention, and another 40,000 homeless on the Mexico side of the border, waiting to make an asylum case. Seven children have died in ICE custody in the past year, and many more face inadequate food and medical care. It’s hard to overstate the horrors happening right now.

Thanks to a major report by the advocacy organization Mijente, we know that companies like Amazon and Palantir are providing the engine of ICE’s deportations. But people are pushing back — already, over 2,000 students across dozens of universities have signed a pledge not to work with Palantir, and there have been near weekly protests at the head offices of tech companies contracting with ICE.

We are honored to be joined tonight by Mijente’s executive director, Marisa Franco, who was behind this report and is a leader in the #NoTechForICE movement.

4. LABOR, WORKER ORGANIZING and AI

Of course, problems with structural discrimination along lines of race, class, and gender are on full display when we examine the AI field’s growing diversity problem.

In April, AI Now published Discriminating Systems, led by AI Now postdoc Sarah Myers West. This research showed a feedback loop between the discriminatory cultures within AI and the biases and skews embedded in AI systems. The findings were alarming. Just as the AI industry established itself as a nexus of wealth and power, it became more homogeneous. There is clearly a pervasive problem across the field.

But there are also growing calls for change. Whistleblower Signe Swenson and journalist Ronan Farrow helped reveal a fundraising culture at MIT that put status and money above the safety of women and girls. One of the first people to call for accountability was Kenyan graduate student Arwa Mboya. Her call for justice fit a familiar pattern in which women of color without much institutional power are the first to speak up. But of course MIT is not alone.

We’ve seen a series of walkouts and protests across multiple tech companies, from the Google Walkout, to Riot Games, to Microsoft workers confronting their CEO, all demanding an end to racial and gender inequity at work.

Now, as you may have heard, AI Now Co-founder Meredith Whittaker left Google earlier this year. She was increasingly alarmed by the direction of the industry. Things were getting worse, not better, and the stakes were extremely high. So she and her colleagues started organizing around harmful uses of AI and abuses in the workplace, taking a page from teachers’ unions and others who’ve used their collective power to bargain for the common good.

This organizing work was also informed by AI Now’s research and the scholarship of so many others, which served as an invaluable guide for political action and organizing. Along the way, the tech worker movement grew; there were some major wins, and some experiences that highlighted the kind of opposition those who speak up often face.

Contract workers are a crucial part of this story. They were some of the first to organize in tech, and carved the path. They make up more than half the workforce at many tech companies, and they don’t get the full protections of employment, often earning barely enough to get by, and laboring on the sidelines of the industry. Work from scholars like Lilly Irani, Sarah Roberts, Jessica Bruder, and Mary Gray, among others, helped draw attention to these shadow workforces.

AI platforms used for worker management are also a growing problem. From Uber to Amazon warehouses, these massive automated platforms direct worker behavior, set performance targets, and determine workers’ wages, giving workers very little control.

For example, earlier this year, Uber slashed worker pay without explanation or warning, quietly implementing the change via an update to their platform. Meanwhile, drivers for the delivery company DoorDash revealed that the company was — quite literally — stealing the tips that customers believed they were leaving for drivers in the app.

Happily, we’ve also seen some big wins for these same workers. Rideshare workers in CA had an enormous victory with the AB-5 law, which requires app-based companies to provide drivers the full protections of employment. It’s a monumental change from the status quo, and to discuss this significant moment, Veena Dubal is joining us tonight from UC Hastings. She’s a leading academic studying the gig economy, and she’s worked with drivers and activists for years.

On the east coast, Bhairavi Desai heads the New York Taxi Workers Alliance, a union she founded in 1998 that now has over 21,000 members. Bhairavi led one of the first campaigns to win against rideshare companies, and she’s with us tonight to discuss that work.

Finally, we’re honored to have Abdi Muse on the same panel. He’s the Executive Director of the Awood Center outside of Minneapolis, and a veteran labor organizer who worked with Amazon warehouse workers in his community to bring the famously resistant company to the table and win concessions that improved their lives. Getting Amazon to do anything is a major feat, and this was a first.

5. AI’S CLIMATE IMPACT

The backdrop to all these issues is the climate. Planetary computation is having planetary impacts.

AI is extremely energy intensive, and uses a large amount of natural resources. Researcher Emma Strubell of the University of Massachusetts Amherst released a paper earlier this year that revealed the massive carbon footprint of training an AI system. Her team showed that creating just one AI model for natural-language processing can emit as much as 600,000 pounds of carbon dioxide.

That’s about the same amount produced by 125 roundtrip flights between New York and Beijing.
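For readers who want to check the flight comparison, here is a minimal back-of-the-envelope sketch. The per-passenger emissions figure for a roundtrip New York–Beijing flight is an assumption chosen for illustration (roughly 4,800 lbs of CO2), not a number from Strubell’s paper:

```python
# Back-of-the-envelope check of the comparison above.
# ASSUMPTION (illustrative only): a roundtrip NY-Beijing flight
# emits roughly 4,800 lbs of CO2 per passenger.
MODEL_EMISSIONS_LBS = 600_000   # training one NLP model, per the figure cited above
ROUNDTRIP_FLIGHT_LBS = 4_800    # assumed per-passenger roundtrip NY-Beijing

equivalent_flights = MODEL_EMISSIONS_LBS / ROUNDTRIP_FLIGHT_LBS
print(f"Equivalent to about {equivalent_flights:.0f} roundtrip flights")
```

Under that assumed per-flight figure, the division lands on about 125 flights, matching the comparison in the talk.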

The carbon footprint of large-scale AI is often hidden behind abstractions like “the cloud.” In reality, the world’s computational infrastructure is currently estimated to emit as much carbon as the aviation industry, a dramatic percentage of global emissions. But here too, there is growing opposition. Just this month we saw the first ever cross-tech sector worker action — where tech workers committed to strike for the climate.

They demanded zero carbon emissions from big tech by 2030, zero contracts with fossil fuel companies, and a commitment not to deploy their technology to harm climate refugees. Here we see the shared concerns between the use of AI at the borders and the movement for climate justice. These are deeply interconnected issues — as we’ll be seeing on stage tonight. So let’s close it out by looking at the full visual timeline of the year that brings all these themes together.

THE GROWING PUSHBACK

Varoon Mathur and Genevieve Fried, Technology Fellows

You can see there’s a growing wave of pushback emerging. From rejecting the idea that facial recognition is inevitable, to tracing tech power across spaces in our homes and our cities, an enormous amount of significant work is underway.

And it’s clear that the problems raised by AI are social, cultural, and political — as opposed to primarily technical. These issues, from criminal justice, to worker rights, to racial and gender equity, have a long and unbroken history. Which means that those of us concerned with the implications of AI need to seek out and amplify the people already doing the work, and learn the histories of those who’ve led the way. These are the people you’ll hear from tonight.

The pushback that defined 2019 reminds us that there is still a window of opportunity to decide what types of AI are acceptable and how to make them accountable. Those on stage this evening are on the front lines of creating this real change — researching, organizing and pushing back, across multiple domains.

They share a common commitment to justice, and a willingness to look beyond the hype, and to ask who benefits from AI, who is harmed, and who gets to decide.