Her technical expertise launched a billion phone calls, yet now she is routinely thwarted by what many hail as one of the most accessible UIs on the planet.

My mom is an O.G. console cowgirl, a real-life hidden figure. Working for Ma Bell in the early ’70s, she was responsible for administering some of the most powerful computers in the country at the time, the ones that drove our telephone systems. “I’m one of only seven people who know how to program these,” I remember her once bragging as she swept her hand across a clean room the size of an aircraft hangar filled to the ceiling with blinking, whirring mainframes.

But Sally Brownlee isn’t so good with computers anymore. Last March, she was hit by a car while walking her dog. Everyone survived, but Wally lives with another family now (he’s happy; I hear they have a beach house), and as for Mom, who suffered permanent brain damage, she’s not the same in innumerable ways. But in no way is the change more quantifiable to me than when I watch her stare uncomprehendingly at the keyboard of her laptop, or try and fail to use her iPad for the umpteenth time. This woman’s technical expertise launched a billion phone calls, yet now, she is routinely thwarted by what many hail as one of the most accessible UIs on the planet.

If you’re blind or deaf or have a motor disability, iOS and Windows have a plethora of features that make it possible for you to use an iPhone, an iPad, or a PC with ease. But there is a big group of disabilities that Microsoft and Apple have overlooked — cognitive disabilities, like the one my mother now has. And these companies are not alone in ignoring cognitive accessibility in their software. When it comes to cognitive disabilities, inaccessibility is the status quo.

Why Design for Cognitive Accessibility?

“I wish I could tell you there were lots of companies out there designing software from the ground up with cognitive disabilities in mind, but it’s not true,” says Lisa Seeman, a member of IBM Accessibility Research and the editor of the World Wide Web Consortium’s standards for making designs for individuals with cognitive disabilities. “There’s lots of interest, but not a lot of results so far.”

To understand why cognitive accessibility is important, it might help to explain why companies are interested. It’s not out of the goodness of their hearts; it’s because of money. According to a recent Nielsen study, seniors are 35% less likely to complete a purchase online than someone under the age of 55. That’s a lot of money left on the table, but what does it have to do with cognitive accessibility? Simply put, the older we get, the more we act like we’re cognitively disabled: Again, according to Nielsen, people’s ability to use websites effectively declines 0.8% every year over the age of 25.

The older we get, the more we act like we’re cognitively disabled: According to Nielsen, people’s ability to use websites effectively declines 0.8% every year over the age of 25.

“It happens naturally,” Seeman explains. “We get older, and even without having something like dementia, we become less capable of figuring new things out.” And this isn’t true only for seniors. Cognitive ability is a spectrum, not a binary switch: It goes down when you’re stressed, tired, depressed, hungry, or in pain. “Everybody has the same cognitive impairments when they’re stressed or depressed,” Seeman says.

In other words, cognitive accessibility isn’t just relevant for people with brain damage, like my mother. It’s relevant to everyone, from people with migraines to neophytes in emerging markets who don’t have software translated yet into their local languages.

The Problems of Cognitive Accessibility

OK, so cognitive accessibility is important, and fixing your software or website’s cognitive accessibility issues could represent a 35% jump in sales. If that’s the case, why are companies dragging their feet? According to Seeman, designing an interface with cognitive disability in mind means breaking a lot of the design rules we currently take for granted in our interfaces.

Take discoverability, a key tenet in modern interface design, which relies on user intuition to discover UI functionality through experimentation: accessing Notification Center in iOS by swiping down from the top of the screen, for example, or unlocking an iPhone XS by swiping up from the bottom of the lock screen. Discoverability, as a concept, is a response to the miniaturization of screens. When real estate is at a premium, functionality must be hidden. The problem is that what makes a UI feature “discoverable,” even among people of baseline cognition, is highly debatable. (Bet you don’t know that you can shake your iPhone to undo the last text you entered, a feature that has been in iOS since it was still skeuomorphic.)

Designing an interface with cognitive disability in mind means breaking a lot of the design rules we currently take for granted in our interfaces.

As for people with cognitive disabilities, these affordances need to be explicitly explained, and demonstrated, sometimes every time. If a feature isn’t obvious, it isn’t discoverable.

Here, then, is where the design ethos that has defined much of the post-iPhone age — Dieter Rams’s “As little design as possible” — butts heads with the reality of cognitive disability. When you are cognitively disabled, how much design is needed — how many labels, buttons, text, explanations, cues, animations, and so on — can differ from individual to individual, and even moment to moment.

How to Design Interfaces for Cognitively Disabled People

So what’s the answer? A system-level toggle you flip to tell software you’re cognitively disabled, which then dumbs down your interfaces accordingly? Nothing so gauche, laughs Seeman. Where the industry needs to go, she says, is dynamic UIs: Interfaces that aren’t built on an assumption of computer literacy but adjust to a user’s capabilities in real time. “What we need are interfaces that meet the user where they already are,” she says.

What would such interfaces look like? First, they’d respond proactively to a user’s preferences. For instance, an app might ask a first-time user which set of icons they find easier to understand. Interfaces would also be patient and instructive, clearly and verbosely prompting users what to do next if they seem confused. (If a user spends an inordinate amount of time on a single screen, an animation might prompt them where to click, while a voice explains what to do next.) They’d be consistent: Anything clickable or button-like would be rendered the same way. And they’d be responsive, stripping out interface elements or adding them back according to the demonstrated aptitude of the user, the same way a web page might if someone accesses it from an iPhone instead of from a PC.
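The “responsive” behavior described above — an interface that steps in when a user lingers too long — could be sketched as a simple policy function. Everything here (the names, the thresholds, the assistance tiers) is a hypothetical illustration of the idea, not any real framework’s API:

```typescript
// Hypothetical sketch: choose how much help to offer based on how long a
// user has lingered on a screen without acting. Names and thresholds are
// illustrative assumptions, not an existing accessibility API.

type AssistanceLevel = "none" | "hint" | "guided";

interface ScreenActivity {
  msOnScreen: number;     // time since this screen appeared
  msSinceLastTap: number; // time since the user last interacted
}

// Assumed thresholds; a real system would tune these per screen and per user.
const HINT_AFTER_MS = 10_000;
const GUIDED_AFTER_MS = 30_000;

function assistanceFor(activity: ScreenActivity): AssistanceLevel {
  // Only count time the user has actually been idle on this screen.
  const idleMs = Math.min(activity.msOnScreen, activity.msSinceLastTap);
  if (idleMs >= GUIDED_AFTER_MS) {
    return "guided"; // animate the target control and narrate the next step
  }
  if (idleMs >= HINT_AFTER_MS) {
    return "hint"; // visually highlight the likely next control
  }
  return "none"; // user is progressing; stay out of the way
}
```

In a real dynamic UI, a policy like this would also draw on the user’s demonstrated aptitude across sessions, stripping out or restoring interface elements rather than only layering on prompts.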

Every time I write about accessibility, I hammer the same point: If you want to see the cutting-edge future of interfaces ten or twenty years down the line, look at what accessibility designers are working on now. Siri, after all, owes its existence to decades-old efforts of accessibility designers trying to make computers accessible to the blind. And I think the same is true here. The interfaces of the future won’t be static; they will morph and change beneath our touch like a bubble, anticipating our needs and wants based on deep, intimate knowledge of who we are. And when that happens, interfaces will be partially built upon the groundwork even now being laid by researchers like Seeman, working on the problem of cognitive accessibility.

As a son visiting his brain-damaged mother in the nursing home where she will quite possibly spend the rest of her life, that gives me comfort as I watch her struggle with what was once second nature. Maybe she hasn’t so much lost her ability to use a computer as she is trying to use an interface that hasn’t been invented yet. She was once a pioneer on the frontiers of technology. Maybe, in her own way, she is still.

Magenta is a publication of Huge.