There’s a case to be made for stepping out of real life, says Judy Goldsmith, a computer science professor at the University of Kentucky. Goldsmith started teaching science fiction a decade ago, after students complained about an exam assignment. She gave them the option of analyzing a work of science fiction instead. That gave rise to a course of its own, “Science Fiction and Computer Ethics.” Her own background was in the abstract mathematics of algorithms, not philosophy. “I had remarkably little clue what I was doing,” she says. Feeling somewhat adrift, she eventually wrangled Burton, who had completed a dissertation on ethics and The Chronicles of Narnia, to help revamp the course.

Gregory Barber covers cryptocurrency, blockchain, and artificial intelligence for WIRED.

The mathematician and the religion scholar soon realized they approached sci-fi differently. Goldsmith saw in fiction a way to spool today’s technology forward, to have her students imagine dilemmas that would arise out of coming advances in things like killer drones and carebots that tend to the elderly. Sci-fi, in other words, as an exercise in prediction, a way to prepare us for what soon may come. Many people find that a useful frame. Just ask the army of futurist consultants who trot into corporate boardrooms to engage executives in world-building exercises.

Burton argues there’s not much point in trying to be predictive—especially if engineers aren’t equipped to handle the quandaries right in front of them. “There’s all kinds of hideous shit that Facebook has gotten up to that even they probably realized was a little out of balance,” she says. “But I think it’s easy to mistake how easy it is, when you are in that place, to talk yourself into the idea that what you’re doing is normal.” Sci-fi, yes, offers some distance from the headlines, as well as a sustained interest in the pitfalls of innovation. But the point of fiction, Burton says, is to crack open existing human problems. Basically, it boils down to empathy. Ken Liu’s short story “Here-and-Now” might launch a debate about digital privacy; Martin Shoemaker’s “Today I Am Paul” speaks to robot-human relations.

Fiesler, the University of Colorado professor, strives for a middle road. She favors sci-fi with a close tether to the real world—like Black Mirror. “You can still see the thread from here to there,” she says. And she pairs it with real-life case studies, believing the blend of real and speculative guides her students to actionable insights about the nature and risks of working in tech. Even better, she’d have them learn ethics in the same courses where they learn programming, so that they learn to spot moral questions, and potential solutions, in the context of code.

The ultimate question, of course, is whether any of this sticks. Will students instructed in ethics get better at both recognizing technological bias and deploying the tools of code to fix it? Do squishy notions of empathy and conflict in narrative fiction make comp-sci students more sensitive programmers? Burton says it’s not just about identifying a specific coding problem; ethics touches on what it means to be a person at the mercy of a large company and the forces of technological progress. Perhaps exposure to something outside the goal-driven mentality of code—to be immersed, for one semester at least, in a mode of thinking that’s enriched and complicated by human substance—might do an intangible good, make us more engaged, critical employees. As Liu wrote about the genre in Nature in 2017, “although science fiction isn’t much use for knowing the future, it’s underrated as a way of remaining human in the face of ceaseless change.”

When I studied computer science not so long ago (but long enough that Google still said “Don’t be evil”), my college relied on Philosophy 101 to do the job of ethical training. There was merit, I’m sure, in having us learn to write and argue, in our exposure to students from other departments. Perhaps I still have Plato and Descartes filed away in some neural Siberia. But it never occurred to me that the classics would be useful as a programmer.

Instead, I think back to a course in which I was the only computer science student in the room: Introduction to Media Theory. We read McLuhan and Foucault. There were case studies, now long forgotten. I do recall a film. It imagined cross-border labor in the age of VR, a haunting meditation about what happens when technology erases the need for a physical body. I was working in a lab that depended on Mechanical Turk, a service that labels data for researchers. For the first time, I considered what it was—not a service, but workers.
