This semester I attended “User Interface Design”, a class that encouraged us to think critically about the ways in which people interact with systems. While the course work naturally focused on software interfaces, the principles of good UI design are universal. Whether you’re creating a website, a toaster, a lamp or a door handle, intuitive design relies on understanding human psychology.

In my ongoing quest to contextualize everything I learn in terms of video games, I’d like to explore how the principles of user interface design might be applied to make games more accessible.

In his book The Design of Everyday Things, Donald Norman defines three principles of control design:

Visibility: It Should Be Obvious What a Control Is Used For.

If I press this button, what will happen? If I want to unlock the door, which control should I use? A system with good visibility allows the user to easily translate goals into actions.

Affordance: It Should Be Obvious How a Control Is Used.

The system should provide “strong clues to the operation of things”. A button affords pushing, a lever affords pulling, etc. The user should know how to operate a control just by looking at it.

Feedback: It Should Be Obvious When a Control Has Been Used.

Once the user has pressed a button, the system should react in a manner that clearly communicates what has just been accomplished. If nothing has happened, this fact should also be obvious.

By following these principles, we can create systems where “the relationships between the user’s goals, the required actions, and the results are sensible, meaningful and not arbitrary.”

These principles can be applied to at least two layers of interaction in video games: the interface between the player and their agency in the game (usually an avatar), and the interface between the avatar and the game world. While a lot can be said about the latter, I’d like to explore two ways in which these three UI principles can be applied to a game’s physical interface.

Visibility for Controllers

Because controllers are designed to support a wide range of games, their buttons cannot usually be labelled with the functions they perform. Instead, buttons carry generic letters, numbers and symbols, and the game must provide additional documentation translating A to “Jump” and R1 to “Shoot”. This violates the principle of visibility and is a source of considerable frustration for inexperienced gamers.

However, some games use clever tricks to get around this problem. The Legend of Zelda series and Beyond Good & Evil, for instance, facilitate the translation by integrating the documentation right into the player’s heads-up display. Since the buttons cannot be physically relabelled, they are instead relabelled on screen. Not only does this improve visibility by mapping game functions directly to buttons, it also removes ambiguity for context-sensitive actions.

Eric Swain also pointed out the following about controls in Beyond Good & Evil:

It has a simple set of unified controls that transition from one mode to another. From this point of view, the R2 button is not the run button, but the move faster button. The hovercraft and the spaceship both use the same buttons to maneuver as Jade does on foot. On the PS2, the X button will always be action, the O button will always be item and the Square button will always be attack.

By using these kinds of labelling techniques, game designers can compensate for generic controller design and provide consistent visibility.
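To make the idea concrete, here is a minimal sketch of such a labelling layer. All names, contexts and labels here are hypothetical, not taken from any actual game: each physical button keeps a stable semantic role, while the HUD renders a context-sensitive label for it so the player never needs to consult a manual.

```python
# Hypothetical mapping from physical buttons to on-screen labels, per context.
# The buttons' semantic roles (action, item, attack) stay fixed across modes;
# only the displayed wording changes with context.
CONTEXTS = {
    "on_foot":    {"X": "Talk",  "O": "Item", "Square": "Attack"},
    "hovercraft": {"X": "Board", "O": "Item", "Square": "Fire"},
}

def hud_labels(context: str) -> list[str]:
    """Return the on-screen prompt for every button in the given context."""
    return [f"{button}: {action}" for button, action in CONTEXTS[context].items()]

print(hud_labels("on_foot"))    # ['X: Talk', 'O: Item', 'Square: Attack']
print(hud_labels("hovercraft"))  # ['X: Board', 'O: Item', 'Square: Fire']
```

The design choice worth noting is that the game code dispatches on the semantic role, not the label; the label table exists purely to restore visibility to an otherwise generic controller.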

New Affordances

While traditional controllers have inherent visibility issues, the next generation of interfaces may circumvent the problem by harnessing new affordances. Touch screens and motion controls can actually improve visibility by reducing the representational gap between player action and game agency.

Consider a baseball game on the Wii: the player’s goal is for their avatar to swing at a ball. A motion controller affords physically imitating the desired action. Similarly, the touch screens found on the iPhone and Nintendo DS afford pressing directly on the object that the player wants to manipulate.

In both of these cases, “how the control is used” is conceptually very close to “what the control is used for”. There is effectively no translation or thought required between “what” and “how”. Therefore, the nature of the affordance provides visibility. I believe that this interface quality goes a long way in explaining the success of these consoles with non-traditional audiences.

To many people, video games are user-unfriendly software. Improving the UI design by applying proven principles will hopefully go a long way in opening up the medium to new audiences.