Our interfaces are lying to us. Elevator “close” buttons don’t really close the doors. Software progress bars don’t map to actual progress. Mobile apps like Instagram say they’ve completed an action when they may not have even started. Like the white lies we tell to lubricate our social interactions, these “placebo interfaces” are designed to shield us from the psychological burden of total awareness. Often, we appreciate them.


But what happens when automated systems and artificial intelligence take over more and more of our products and services? What’s the difference between a comforting white lie and a potentially dangerous abdication of control? How do we design “volitional theater” into things like self-driving cars—and should we?

Eytan Adar, a human-computer interaction researcher at the University of Michigan, co-authored a paper on what he calls “benevolent deception” in UI design, and he thinks our current era of smart systems offers more opportunities to deploy it than ever before. “We’re seeing the underlying systems become more complex and automated to the point where very few people understand how they work, but at the same time we want these invisible public-facing interfaces,” he told me via email. “The result is a growing gulf between the mental model (how the person thinks the thing works) and the system model (how it actually works). A bigger gulf means bigger tensions, and more and more situations where deception is used to resolve these gaps.”

He adds that “deceptions are just one of many design strategies we can apply” to deal with complexity, but that splitting hairs over what does or does not constitute “lying” is beside the point. What matters in technology, he says, is not some metaphysical value like honesty, but practical value: What is the job to be done? What are the costs and benefits of this design? How effective are the outcomes?

Filmmaker, interaction researcher, and BERG alumnus Timo Arnall takes a hard line in the opposite direction, arguing that dishonesty or obfuscation of a system’s true functionality does have a negative effect on its practical effectiveness.
“Interfacing” with a system resembles communication as much as physical manipulation, and a “placebo UI” offers something worse than a lie: a dead channel for talking to the system and receiving feedback about its state and behavior. This “mode confusion” contributed to the deaths of 228 people when Air France Flight 447 crashed in 2009. As Roman Mars explains in an episode of 99% Invisible:

When a pressure probe on the outside of the plane iced over, the automation could no longer tell how fast the plane was going, and the autopilot disengaged. The “fly-by-wire” system also switched into a mode in which it was no longer offering protections against aerodynamic stall. When the autopilot disengaged, the co-pilot in the right seat put his hand on the control stick … and pulled it back, pitching the nose of the plane up. This action caused the plane to go into a stall … The pilots, however, never tried to recover, because they never seemed to realize they were in a stall. Four minutes and twenty seconds after the incident began, the plane pancaked into the Atlantic, instantly killing all 228 people on board.

Designing systems with security-blanket interfaces (like, say, a pseudo-functional steering wheel in a self-driving car) “doesn’t address the core concerns and problems of our age that are really about how we represent large, invisible and complex software systems in ways that make sense to the millions of people that use them,” Arnall says.


But this, too, is dicey territory. Abstracting complexity away from end users is usually a good thing, as the GUI-equipped computing device you’re reading this article on can attest. Arnall’s argument seems to hinge on the idea of designing and maintaining “legible, readable, understandable” interfaces—or, in other words, metaphors. But grokking a system’s state isn’t the same thing as controlling it. If you’re one of the unlucky future travelers who finds herself on the business end of the “trolley problem” for self-driving cars, will it really reassure you to understand how and why the system decided to sacrifice you for the greater good, even if you can’t do anything about it?

Matt Webb, Arnall’s former BERG studio-mate, zeroes in on the practical issue that Adar and Arnall’s metaphysical debates glance off of: agency. While it may be true that choice is overrated, the ability to act is generally not. The ideal smart system will provide an interface that is medium and mechanism at once: you can both communicate with it and manipulate it, in a context-appropriate manner.

Webb offers a little parable to explain: “I grew up in Southampton, which was one of the first places in the U.K. to have centrally computerized traffic light systems in the early 1990s,” he says. “I remember stopping at a red light at 4 in the morning. I went to change the tape in my cassette player—the light goes green, and then red again—and I’m still changing the tape, and the light just stays red. I thought, I wonder how long I can stay here and nothing will happen? I stayed there for 15 minutes and the light never changed.

“Then I remembered that there was an induction loop under the road which detects when something heavy and metal like a car goes over it,” Webb continues.
“So I moved my car back one foot and forward one foot, and lo and behold, 20 seconds later the light turned green.”

The moral of the story, Webb says, is that if a “placebo UI” is in place (or deemed necessary), “there’s something amiss about the automation.” If the system is legible and evident—i.e., “if the design of that automation was clear, its behavior was dependable, and the mapping [of that behavior] matched the way we thought about it”—you never have to ask the dreaded question: What’s it doing now? Your agency is not impaired.


Placebo UIs are a band-aid over “broken heuristics” in smart systems, Webb says. And they’re not even effective—at least, not at anything other than treating users like children. “The placebo UI is not making you feel any better about your lack of understanding and control over the system—it’s just giving you somewhere to put your shitty feelings,” he adds. “Sometimes we need that. But the placebo UI isn’t adding any value. It’s the best of all terrible worlds short of fixing the underlying problem with the system.”

But if impaired agency is the problem, it might also point to a solution. After all, human beings have a very robust and ancient way of “interfacing” with other agents (biological or otherwise): we ascribe personalities to them. The more “intelligent” our systems get, the more sense it might make to treat them like the four ghosts in Pac-Man, each of which was programmed with a distinct “character” to encapsulate its behavior patterns.

“If you think about putting your self-driving car in eco-mode, or into ‘get home quick because I’ve heard my house is on fire’ mode, or ‘be careful because there’s a power outage and I’m not going to be able to recharge you till tomorrow’ mode, the best way to embody those patterns might be as ‘characters’ rather than settings or configurations,” Webb explains. The legibility of the system is maintained (no “what’s it doing now?”), and its behavior is usefully abstracted away from the basic mechanics while still encapsulated (or “embodied”) in a form that you feel empowered to act on.

Think of TARS, the intelligent robot that accompanies Matthew McConaughey’s crew in the film Interstellar. McConaughey doesn’t fuss with a bunch of buttons or “modes” to interact with and control TARS. Instead, it acts (and he treats it) as if it has a personality—albeit one that McConaughey can explicitly query and adjust at will.
Of course, that kind of artificial personality is its own kind of design theater, but if implemented thoughtfully enough, it could sidestep what Webb calls the “weird misdirection” of placebo UIs for automated systems. Or it might open a whole new can of worms. In any case, the parable of the traffic light points the way ahead: Automate as effectively as possible, but just in case, don’t hide the seams.
