Developing technology that doesn’t perpetuate racism demands putting social values before profit.

In her new book Race After Technology: Abolitionist Tools for the New Jim Code, Ruha Benjamin breaks down the “New Jim Code”: technology design that promises a utopian future but serves racial hierarchies and racial bias.

When people change how they speak or act in order to conform to dominant norms, we call it “code-switching.” And, like other types of codes, the practice of code-switching is power-laden. Justine Cassell, a professor at Carnegie Mellon’s Human-Computer Interaction Institute, creates educational programs for children and found that avatars using African American Vernacular English led Black children “to achieve better results in teaching scientific concepts than when the computer spoke in standard English.” But when it came to tutoring the children for class presentations, she explained, “We wanted it [the avatar] to practice with them in ‘proper English.’ Standard American English is still the code of power, so we needed to develop an agent that would train them in code-switching.” This reminds us that whoever defines the standard expression exercises power over everyone else, who must fit in or risk being pushed out.

But what is the alternative? When I first started teaching at Princeton, a smartphone app, Yik Yak, was still popular among my students. Founded in 2013, the app allowed users to post anonymously and to vote others’ posts “up” or “down,” and it was designed to be used by people within a 5-mile radius. It was especially popular on college campuses and, like other social media sites, it reinforced and exposed racism and anti-Black hatred among young people. As in internet comment sections more broadly, people often say on Yik Yak what they would not say in person, and so all pretense of racial progress is washed away by spending just five minutes perusing the posts.

But the difference from other virtual encounters is that users know that the racist views on Yik Yak are held by people in close proximity—those you pass in the dorm, make small talk with in the dining hall, work with on a class project. I logged on to see what my students were dealing with, but quickly found the toxicity to consist overwhelmingly of… racist intellectualism, false equivalences, elite entitlement, and just plain old ignorance in peak form. White supremacy upvoted by a new generation … truly demoralizing for a teacher. So I had to log off.

Racism, I often say, is a form of theft. Yes, it has justified the theft of land, labor, and life throughout the centuries. But racism also robs us of our relationships, stealing our capacity to trust one another, ripping away the social fabric, every anonymous post pilfering our ability to build community. I knew that such direct exposure to this kind of unadulterated racism among people whom I encounter every day would quickly steal my enthusiasm for teaching. The fact is, I do not need to be constantly exposed to it to understand that we have a serious problem—exposure is no straightforward good.

My experience with Yik Yak reminded me that we are not going to simply “age out of” White supremacy (as Jessie Daniels demonstrates in Cyber Racism), because the bigoted baton has been passed, and a new generation is even more adept at rationalizing racism. Yik Yak eventually went out of business in 2017, but what I think of as NextGen Racism is still very much in business, though in forms more racially coded than we typically find in anonymous posts. Coded speech, as we have seen, reflects particular power dynamics that allow some people to impose their values and interests upon others. As one of my White male students, Will Rivitz, wrote—in solidarity with the Black Justice League, a student group that was receiving hateful backlash on social media after campus protests:

“To change Yik Yak, we will have to change the people using it. To change those people, we will have to change the culture in which they—and we—live. To change that culture, we’ll have to work tirelessly and relentlessly towards a radical rethinking of the way we live—and that rethinking will eventually need to involve all of us.”

I see this as a call to rewrite dominant cultural codes rather than simply to code-switch. It is an appeal to embed new values and new social relations into the world, because as Safiya Noble writes in Algorithms of Oppression, “an app will not save us.” Whereas code-switching is about fitting in and “leaning in” to play a game created by others, perhaps what we need more of is to stretch out the arenas in which we live and work to become more inclusive and just.

If, as Cathy O’Neil writes in her book Weapons of Math Destruction, “Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide,” then what we need is greater investment in socially just imaginaries. This, I think, would have to entail a socially conscious approach to tech development that would require prioritizing equity over efficiency, social good over market imperatives. Given the importance of training sets in machine learning, another set of interventions would require designing computer programs from scratch and training AI “like a child” (as Jason Tanz wrote in an article for Wired) so as to make us aware of social biases. The key is that all this takes time and intention, which runs against the rush to innovate that pervades the ethos of tech marketing campaigns. But, if we are not simply “users” but people committed to building a more just society, it is vital that we demand a slower and more socially conscious innovation.
As a practical model for this approach, the nonprofit AI research company OpenAI says that it will stop competing with and start assisting another project if it is value-aligned and safety-conscious, because continuing to compete usually short-changes “adequate safety precautions” and, I would add, justice concerns. Ultimately we must demand that tech designers and decision-makers become accountable stewards of technology, able to advance social welfare. For example, the Algorithmic Justice League has launched a Safe Face Pledge that calls on organizations to take a public stand “towards mitigating the abuse of facial recognition analysis technology. This historic pledge prohibits lethal use of the technology, lawless police use, and requires transparency in any government use” and includes radical commitments such as “show value for human life, dignity, and rights.” Tellingly, none of the major tech companies has been willing to sign the pledge to date.

Nevertheless, there are some promising signs that more industry insiders are acknowledging the complicity of technology in systems of power. For example, thousands of Google employees recently condemned the company’s collaboration on a Pentagon program that uses AI to make drone strikes more effective. And a growing number of Microsoft employees oppose the company’s contract with US Immigration and Customs Enforcement: “As the people who build the technologies that Microsoft profits from, we refuse to be complicit” (Frenkel, New York Times, June 19, 2018). Much of this reflects the broader public outrage surrounding the Trump administration’s policy of family separation, which rips thousands of children from their parents and holds them in camps reminiscent of the racist regimes of a previous era. The fact that computer programmers and others in the tech industry are beginning to recognize their complicity in making the New Jim Code possible is a worthwhile development.
It also suggests that design is intentional and that political protest matters in shaping internal debates and conflicts within companies. This kind of “informed refusal” expressed by Google and Microsoft employees is certainly necessary as we build a movement to counter the New Jim Code, but we cannot wait for worker sympathies to sway the industry. Where, after all, is the public outrage over the systematic terror exercised by police in Black neighborhoods with or without the aid of novel technologies? Where are the open letters and employee petitions refusing to build crime production models that entrap racialized communities? Why is there no comparable public fury directed at the surveillance techniques, from the prison system to the foster system, that have torn Black families apart long before Trump’s administration? The selective outrage follows long-standing patterns of neglect and normalizes anti-Blackness as the weather (as Christina Sharpe describes in her book In the Wake), whereas non-Black suffering is treated as a crisis. This is why we cannot wait for the tech industry to regulate itself on the basis of popular sympathies.

This edited excerpt from Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin (2019) appears by permission of the author and Polity Press.
Ruha Benjamin is an associate professor of African American Studies at Princeton University.