Living Usability Testing

Living usability testing [is] an iterative approach to usability testing that involves learning, then adapting a prototype based on prior test sessions.—Jordan Julien

“I’m a big fan of something I’ve been calling living usability testing,” replies Jordan. “It’s an iterative approach to usability testing that involves learning, then adapting a prototype based on prior test sessions. Using this approach, it often takes longer to complete the testing, but it allows you to test theories along the way. I’ve always been suspicious of usability tests that extract insights based on a very small sample size. So I’ve focused most of my usability testing on optimizing scenario success rates rather than identifying the reasons that pain points exist. In other words, I recommend that you don’t worry too much about why a participant didn’t interact with a page element. Instead, worry about how to ensure that the next participant will interact with it.”

Move On, Then Return to the Issue During the Post-Test Interview

During the post-test interview…, ask why he thinks he might have missed or purposely chosen not to use that function.—David Kozatch

“Leave the page,” recommends David. “Then, once you’ve completed the non-assisted portion of the session, go back to the page during the post-test interview and ask the participant, ‘Do you recognize any other functions on this page that might have helped you to do xyz?’

“If the participant still doesn’t see it, point it out and ask why he thinks he might have missed or purposely chosen not to use that function. Was it because of its size, color, placement, label, or other factors? If you were using eyetracking, take note of whether the participant’s gaze fell upon that function. Then mention this as part of the context for your question—for example, ‘I noticed that your gaze fell upon this part of the page. Were you looking, but not seeing? Why?’”

Ask Questions

Stephanie Rosenbaum of TecEd taught me a technique called graduated prompting….—Dana Chisnell

“Stephanie Rosenbaum of TecEd taught me a technique called graduated prompting,” answers Dana. “First, tell the person just to try again. If the participant still says he can’t find it, you can point out that he’s been focusing on a specific area and might look around. Finally, you might tell him specifically and follow up with a question about it—for example, ‘What might have helped you to find or notice that?’ or ‘What were you looking for that we didn’t show you?’”

Once the participant is finished with the task—or has given up—I’ll point to the object on the screen and suggest that clicking it would likely have gotten him closer to his goal.—Carol Barnum

“I try not to influence the process while a participant is engaged in a task,” responds Carol. “But once the participant is finished with the task—or has given up—I’ll point to the object on the screen and suggest that clicking it would likely have gotten him closer to his goal. When I point it out to the participant, I usually get one of two responses: ‘I didn’t see that before!’ or ‘I saw that, but I didn’t think it was the correct choice.’ In either case, I ask the participant to elaborate on his response to help me to understand it and, thus, be better able to inform the design team. The typical explanation for ‘I didn’t see it’ is that it was in the ‘wrong place.’ Typical explanations for ‘I saw it, but didn’t think it was the correct choice’ usually center around the object not looking clickable or the name, label, or design of the object not providing the proper scent of information. That is, the participant didn’t think it would get him closer to his goal. In either case, asking the participant for this information after he has completed a task sheds light on his thinking, without influencing him during his problem-solving process.”

During the debriefing period…, [I] take the participant back to the appropriate screen and ask him to re-enact what he did.—Cory Lebson

“It depends on the situation,” answers Cory. “Ideally, this can wait until the end of the session, when I’d either bring it up during the debriefing period or take the participant back to the appropriate screen and ask him to re-enact what he did. In the latter case, if the participant again misses the thing that I want to know about, I can now ask about it immediately. However, if I don’t think that I can wait to ask until the end, and I’m sure that asking will not bias the rest of the activities, I’ll ask at an earlier point. Regardless of when I ask, however, I know that participants are not always fully aware of what they’ve done or did not notice, so once I draw their attention to a particular part of the screen, they may think that they had noticed it, even though they did not.”

Gain Insight into the User’s Mental Model

Our primary goal is to gather unbiased insights that lead us to a better understanding of the user’s experience.—Gavin Lew

“As researchers, our primary goal is to gather unbiased insights that lead us to a better understanding of the user’s experience,” replies Gavin. “One of the key challenges in good UX research is our ability to write really good questions that are not simply matching exercises—that is, when the user hears a specific word and tries to find it in the user interface.

“But to answer the reader’s question, when participants fail to complete a task during usability test sessions, the technique we use to extract their mental model of the experience is analogous to a dentist’s pulling teeth—the observers in the room feel uncomfortable watching the experience. A good moderator will simply ask, ‘What would you do next?’ Even in a frustrating situation, this can provide insight into the participants’ mental model. For example, you might hear, ‘Well. I would’ve thought I should go here, but I couldn’t find anything, and now I want to go here, but that’s not it either. Sigh… So I just can’t believe it’s here.’

“Our goal is to understand the user’s mental model so we can see where a mismatch exists between the desired experience and the actual experience. Through this process of pulling teeth, the participant will tell you about everything he saw that might be relevant, so you don’t have to point anything out.

Our goal is to understand the user’s mental model so we can see where a mismatch exists between the desired experience and the actual experience.—Gavin Lew

“Of course, there are cases when, after this entire process, the participant still doesn’t see anything that remotely looks like the answer,” continues Gavin. “In such a case, you know the task was a failure, and no one can argue that, if we had just given him more time, he would naturally have found it. To address this case, I often coach moderators to limit the words they use when asking probing questions. Because moderators are thinking on their feet, they could accidentally bias a user by using a word that matches a target in the user interface. So I counsel moderators to instead just point to an object rather than describing it and simply say, ‘Tell me about this,’ then pause. Participants are then able to describe what that object means to them.”