Dinkins has since gone down what she calls a “rabbit-hole” of investigations into the way that culture—particularly the experiences of race and gender—is codified in technology. She has become a strong voice sounding the alarm about the dangers of minority populations being absent from the creation of the computer algorithms that now mold our lives. Her research into these imbalances has taken her on a head-spinning tour of tech companies, conferences, and residencies over the past few years. Dinkins is currently in residence at the tech programs of both Pioneer Works and Eyebeam, nonprofit art centers based in Brooklyn. She regularly leads community workshops where she educates people broadly about the impact of AI on our lives, aiming to cultivate an attitude toward technology that sees it not as an intimidating monolith—the haunting specter of computers gone awry that we see so often in Black Mirror or, most iconically, in the cunning and calculating HAL of Stanley Kubrick’s 1968 film 2001: A Space Odyssey—but as an approachable tool that is, for better or worse, very human.

“We live in a world where we have to always be learning and willing to take on new information, and to do the work to get there, otherwise we’re sunk,” she says. “How do you move forward in this super-fast technological world?” She operates under the principle that if she can get people to think about the future in increments, it’s not quite so daunting. “In five years, what’s my world going to look like? What do I need to be doing now to start dealing with that world?”

Dinkins tries to find accessible avenues into what can seem like brain-scrambling concepts by speaking the language of her target group. In recent workshops at the New York nonprofit space Recess, she worked with kids who’d been diverted from the criminal justice system. She began by inviting them to wrap their heads around what an algorithm is, exactly, finding analog comparisons in “basic things you can do without thinking,” like brushing your teeth, or in behavioral tendencies, like those that shape encounters with police. She helped them see how they are hardwired to react in such moments of conflict. They worked on Venn diagrams to visualize these interactions from their own point of view and a cop’s: What is each person thinking in this shared moment? Where do the perspectives overlap? “Some of [the kids] can be very reactionary, which makes the situation worse,” says Dinkins. “That’s where the algorithm has to change.”
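For readers who want the analogy spelled out, here is a minimal sketch, purely illustrative and in Python rather than anything used in the workshops, of a routine written as an algorithm: a fixed sequence of steps plus one decision point, where changing a branch changes the outcome.

```python
# Purely illustrative: everyday behavior written as an algorithm.

def brush_teeth():
    # "Basic things you can do without thinking" are still step-by-step.
    for step in ["pick up brush", "apply toothpaste",
                 "brush for two minutes", "rinse"]:
        print(step)

def respond_to_conflict(feeling_provoked, reactive):
    # One decision point in a behavioral "algorithm." Changing this
    # branch is, in the workshop's terms, changing the algorithm.
    if reactive and feeling_provoked:
        return "escalate"
    return "pause, then de-escalate"

brush_teeth()
print(respond_to_conflict(feeling_provoked=True, reactive=False))
```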

From that familiar territory, Dinkins works her way into talking about online systems and chatbots—pieces of software that emulate human conversation and evolve as people enter into dialogue with them—as well as the larger goal of training AI to use language and ideas that reflect a more diverse range of worldviews. Participants in her workshops often have a go at setting a bot’s intention and then feeding it data. One group created a bot whose sole purpose was to tell jokes. The input? Yo mama jokes. “I thought that was just amazing,” says Dinkins. “It’s the idea of taking one’s own culture and putting it into the machine, and using that to figure out how the machine is making decisions.”
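To make the mechanics concrete, here is a minimal sketch of what such an exercise might produce. It is an assumption about the shape of the thing, not Dinkins’s actual software: a bot with a single intention and whatever data its makers implant.

```python
import random

class JokeBot:
    """A hypothetical, minimal chatbot: one intention plus implanted data."""

    def __init__(self, intention="tell jokes"):
        self.intention = intention
        self.jokes = []  # the culture its makers put into the machine

    def learn(self, joke):
        # The bot "evolves" as people enter into dialogue with it.
        self.jokes.append(joke)

    def respond(self, prompt=""):
        if not self.jokes:
            return "Teach me a joke first."
        return random.choice(self.jokes)

bot = JokeBot()
bot.learn("Yo mama is so kind, she waves at strangers.")  # sample data
print(bot.respond("tell me a joke"))
```

Stripped down this far, the point of the exercise is visible in the code itself: to see how the machine makes its decisions, you look at what was put into it.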