The Same User Interface Mistakes Over and Over

It has been 42 years since the not-very-wide release of the Xerox Alto and almost 32 since the mainstream Macintosh. You might expect we've moved beyond the era of egregious newbie mistakes when building graphical UIs, but clearly we have not. Drop-down lists containing hundreds of elements are not rare sights. Neither are modal preference dialogs, meaningless alerts where the information is not actionable, checkboxes that allow mutually exclusive options to be selected at the same time, and icons that don't clearly represent anything. I could go on, but we've all experienced this firsthand.

Wait, I need to call out one of the biggest offenses: applications stealing the user's focus--jumping into the foreground--so that clicks intended for the previously front-most app are now applied to the other, possibly with drastic results.

That there are endless examples of bad UIs to cite and laugh at and ignore is not news. The real question is why, after all this time, do developers still make these mistakes? There are plenty of UI experts teaching simplicity and railing against poor design. Human-computer interaction and UX are recognized fields. So what happened?

We've gotten used to it. Look at the preferences panel in most applications, and there are guaranteed to be settings that you can't preview: you have to select one, apply it, close the window, and if you don't like the result there's no undo; you have to manually re-establish the previous settings. This is so common that it wouldn't even be mentioned in a review.

(At one time the concern was raised that the ubiquitous "About..." menu option was mislabeled, because it didn't give information about what a program was or how it worked, but instead displayed a version number and copyright information. It's a valid point, but it doesn't get a second thought now. We accept the GUI definition of "About.")

There's no standard resource. How do you know that certain uses of checkboxes or radio buttons are bad? Mostly from experience using apps, and some designers may never notice. If you're starting out building an interface, there's no must-have, coffee-stained reference--or web equivalent--sitting on your desk. Apple and others publish their own guidelines, but these are huge and full of platform-specific details; the fundamentals are easy to overlook.

There aren't always perfect alternatives. There's so much wrong with the focus-stealing, jump-to-the-front application, but what's the solution? Standard practice is a notification system that flashes or otherwise vies for attention; you then choose when to interact with the beckoning program. What that notification system looks like depends on the platform--there isn't a definitive approach for getting the user's attention. It's also not clear that the model of background apps requesting the user's attention works at all. How many iPhones have you seen with a red circle containing "23" on the home screen, indicating that 23 apps need updating...and it's been there for months?

Implementing non-trivial GUIs is still messy. Windows, OS X, and iOS are more or less the same when it comes to building interfaces: use a tool to lay out a control-filled window, setting properties and defaults, then write classes that are hooked to events fired by the controls. There's more architecture here than there should be, with half the design in code, half in a tool, and everything force-fit into an OOP model. It's also easy to build interfaces that are too static. REBOL and Tk showed how much nicer this could be, but they never became significant. It's better in HTML, where layout and code blur together, but that doesn't help native apps.
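As a minimal sketch of the architecture described above--not any real toolkit's API, just hypothetical stand-in classes--the control/event/handler split might look like this:

```python
# Sketch of the "classes hooked to events fired by controls" pattern.
# Button and CounterController are hypothetical stand-ins, not a real
# toolkit's API; real frameworks add the layout-tool half on top.

class Button:
    def __init__(self, label):
        self.label = label
        self._handlers = []

    def on_click(self, handler):
        # Hook a callback to this control's click event.
        self._handlers.append(handler)

    def click(self):
        # Simulate the toolkit firing the event.
        for handler in self._handlers:
            handler(self)


class CounterController:
    # Half the design ends up in classes like this one, wired to
    # controls that were laid out elsewhere (in a tool or in code).
    def __init__(self):
        self.count = 0
        self.button = Button("Increment")
        self.button.on_click(self.handle_click)

    def handle_click(self, source):
        self.count += 1


controller = CounterController()
controller.button.click()
controller.button.click()
print(controller.count)  # -> 2
```

Even in this toy version, the program's behavior is split between the control's wiring and the controller's handler methods, which is the scattered architecture the paragraph complains about.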

(If you liked this, then you might enjoy If You're Not Gonna Use It, Why Are You Building It?)

December 30, 2015
