Grandma might even have been able to get the desired results without making any effort. Perhaps the algorithms running her software would automatically personalize the viewing experience -- say, by keeping an ongoing record of whom she looked away from, along with other biometric signals that register discomfort, such as an accelerated heart rate. Biofeedback could safely cocoon us in an amped-up version of the filter bubble.

Disturbing as this scenario is, it barely scratches the surface of what could come to pass. Augmented reality users could do much more than ignore minorities -- they could track them. If minorities are dangerous, they'd reason, you want to know where they are at all times. Otherwise, you're vulnerable. Science-fiction author Tim Maughan has envisioned horrendous possibilities, expressed to me in private correspondence: augmented reality warnings, like "big floating arrows" that identify people to be avoided from miles away, or a navigation app that steers users away from racially undesirable neighborhoods and establishments.

Of course, racist appropriations of technology long preceded digital culture. In The Whale and the Reactor, Langdon Winner contends that in the mid-20th century, Robert Moses embedded his racist intentions into the very materiality of Long Island bridges, designing the overpasses to be high enough for cars to pass under, but too low for buses to clear. This "strategic architecture of control" enabled "automobile-owning whites of 'upper' and 'comfortable middle' classes" to use the parkway system to get to Jones Beach, while keeping away "poor people and blacks, who normally used public transit."

What's the best way forward? Banning objectionable reality filters is a futile endeavor, and strengthening "our society's ability to tolerate diverse viewpoints" is easier said than done. Instead, conscientious engineers should take up the cause, fight fire with fire, and set their sights on designing anti-racist apps. Gary Marcus, author of a recent New Yorker essay on instilling ethics into driverless cars, offers a clever suggestion (also via private correspondence): "What about augmented reality apps that superimpose information about strangers' hobbies and family background, in order to increase empathy? Decades of research show that people are kinder to those that they view as human beings, rather than anonymous strangers. With the right apps, augmented reality could help." Whether or not this particular program proves effective, one thing is certain: A society committed to social justice needs to advocate for creative ethical solutions, not tolerate technological idealism.