Both Android and iOS have accessibility modes to help people with varying degrees of visual and auditory disabilities. These include interface tweaks like higher contrast, larger type, audio cues, and text-to-speech, features that have enabled millions of people to use these devices. But some disabilities still get overlooked when it comes to accessibility.

Now, artificial intelligence is helping designers close that gap by automatically creating and testing new interfaces based on the unique cognitive models of people with sensorimotor and cognitive impairments. For instance, the software might optimize a smartphone keyboard for people with essential tremor by adapting the size of each button or the frequency of spellchecks.

Scientists from Aalto University in Finland and Kochi University of Technology in Japan developed the software, which they describe in the current issue of IEEE Pervasive Computing. First, they created a “cognitive profile” for each of the disabilities they were targeting: dyslexia, essential tremor, and memory disorders. These profiles, created from observing real people, contain parameters that tell the computer how a user would interact with the screen given their cognitive, sensory, or motor impairment.
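To make the idea concrete, a cognitive profile can be thought of as a small bundle of behavioral parameters fed to a simulator. The sketch below is purely illustrative; the parameter names and values are assumptions for the sake of the example, not the researchers’ published model:

```python
# Illustrative only: parameter names and values below are assumptions,
# not taken from the published model. Each profile tells a simulator
# how a user with a given impairment interacts with the screen.
COGNITIVE_PROFILES = {
    "dyslexia": {
        "reading_speed_wpm": 120,        # slower word recognition
        "letter_transposition_p": 0.05,  # chance of swapping adjacent letters
    },
    "tremor": {
        "tap_noise_keywidths": 0.6,      # std. dev. of tap offset from key center
        "overshoot_p": 0.15,             # chance of sliding past the target key
    },
    "memory_disorder": {
        "working_memory_items": 3,       # items held while composing a word
        "refind_place_time_ms": 900,     # time to re-find one's place after a lapse
    },
}

for name, params in COGNITIVE_PROFILES.items():
    print(name, params)
```

A simulator can then read these numbers to decide, for example, how far a simulated tap lands from the intended key or how often a simulated reader loses their place.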

The software then optimized the typical smartphone keyboard against these profiles, generating countless variations and measuring each against the simulated behavior of the three cognitive models. The variations can number in the millions, making it impossible to test them all with real users. Instead, the software simulated how a person would use each interface based on the profile data. Essentially, the researchers created a virtual user with a specific disability and ran millions of tests on it to determine the best UI.
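This generate-simulate-score loop can be sketched in miniature. Everything here, the Gaussian tap-noise model, the parameter names, and the candidate key sizes, is a simplifying assumption for illustration, not the researchers’ actual optimizer:

```python
import random
from dataclasses import dataclass

@dataclass
class TremorProfile:
    """Hypothetical profile: motor_noise is the std. dev. of how far a
    tap lands from the intended key center, in key-widths."""
    motor_noise: float

def simulated_error_rate(key_size, profile, taps=2000):
    """Fraction of simulated taps that miss a key of the given size.

    Uses a fixed seed so repeated calls are deterministic."""
    rng = random.Random(0)
    misses = 0
    for _ in range(taps):
        # Draw the tap offset from a Gaussian centered on the key.
        offset = abs(rng.gauss(0.0, profile.motor_noise))
        if offset > key_size / 2:
            misses += 1
    return misses / taps

def best_key_size(profile, candidates):
    """Grid search: pick the candidate with the lowest simulated error."""
    return min(candidates, key=lambda s: simulated_error_rate(s, profile))

profile = TremorProfile(motor_noise=0.6)
print(best_key_size(profile, [0.8, 1.0, 1.4, 2.0]))
```

A real optimizer would search over many interacting parameters (key layout, spellcheck frequency, timing) with a far richer user model, but the principle is the same: score each candidate design against the simulated user rather than a live one.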

When they tested this solution with a real person suffering from essential tremor, a neurological disorder that, like Parkinson’s disease, makes parts of the body shake uncontrollably, they found that the person behaved almost exactly as the AI simulation predicted. He was able to use the new UI to text with nearly no errors, even though he couldn’t use a regular QWERTY keyboard at all.

The researchers believe that they could do the same thing for people with many other disabilities and many other tasks, not just text entry on a keyboard. Does that mean that UI designers are out of a job? Of course not, says coauthor and Aalto University postdoctoral researcher Jussi Jokinen: “This is of course just a prototype interface, and not intended for consumer market . . . . Designers pick up from here and with the help of our model and optimizer create individually targeted, polished interfaces.”