+1. VERIFICATION

You made it this far, and you’re saying, “I read 5 skills, what’s with a 6th?” Verification is essentially another form of research, which brings us full circle.

Verification is not so much a standalone step as it is something to incorporate regularly throughout the process. When you create any design artifact (a storyboard, flowchart, set of wireframes, or prototype), you do it to test an idea or assumption and learn from it. You verify whether or not it works for users.

You build things to test them, learn from observations, and iterate the design. This is expressed by the Lean UX cycle: Think > Make > Check.

But how best to check or verify what you’ve made? Again, there are many options, but I’ll share a few that I’ve found to be simple and effective.

Listen & Learn: As with all user research, the goal of verification testing is to gain an understanding of how users think and interact with a design. What you intended, or how you think a design is “supposed to work”, does not matter. Avoid instructing users or telling them what they should do. Instead, strive to understand why they do what they do. This is best achieved by asking open-ended questions and letting users do most of the talking.

Design walkthrough

A walkthrough is essentially a guided demo. You review design artifacts (flowcharts, storyboards, page wireframes, etc.) with a target user and get their feedback on what they are shown.

A walkthrough is quick to do, since you only need static design concepts to show. However, since you are presenting to a user rather than having them directly explore the designs, it's easy to fall into the trap of showing people how you intend the design to function rather than getting their input on how it should work. Always remember that you want to understand the user's point of view. Be sure to pause often and ask for their feedback on what they are seeing. When things don't make sense, ask them to explain why, what might work better, and what that alternate approach would allow them to do.

Moderated task-based test

The reason you create any prototype, regardless of fidelity, is to watch it be used and learn from that. Task-based usability tests are a structured way to get that learning. A moderated test simply means you will be watching users interact with the prototype as they execute the tasks. This gives you the chance to observe them and ask questions to better understand their impressions of the design.

Preparation

Start by figuring out what you want to test — e.g., Can someone create a new account? Add items to a cart and purchase them? Find a location on a map?

With that information in mind, write out tasks for participants to complete in the prototype. The tasks should focus on an outcome a person is likely to want and not the steps they would follow to get there.

For example, if testing a prototype eCommerce site:

Good task: Buy a large red tee shirt from the site.

Bad task: Search on “red tee shirts”, set size to large, add to cart, and proceed to checkout.

Once you have your tasks, do a dry run to make sure they can all be done with your prototype. If not, update the prototype or revise the tasks as needed. You also want to get a feel for roughly how long the tasks will take. It's best to keep total testing time under one hour; longer than that, and people tend to get fatigued. Expect to use at least 25% of that time for discussion, so it's best to keep task time well under 45 minutes.

With everything ready, you can schedule people to test with. The pool of people you interviewed during initial user research is a good one to draw from.

Running the test

Testing can be done in person or remotely. In either case, you should record sessions so you can review them later. If you have a clickable digital prototype, you can use any online meeting software with record capability to capture the session. If you're using a manual paper prototype, you may need a stationary camera to record the session.

Start by letting the participant know you're testing the design, not them. There are no wrong answers. You can also ask background questions to understand who they are and see whether they align with your persona for the product.

With those preliminaries out of the way, your participant can begin working on the tasks. I like to have them written out and ask the participant to read the task aloud before beginning.

As they begin to work through the task, ask them to say out loud what they are looking to do. If they ask for help, or if they are doing something incorrectly, avoid a direct answer. Instead, ask what they expect or want to happen. This will give you insights into how they think about the problem.

Once they get to the end of a task, you can take a moment to ask them their thoughts about what went well and what didn't before moving to the next task. Note that some people will not be able to complete every task. They may give up, which is fine; it gives you a chance to discuss what went wrong. Other times they may think they are done, but haven't actually finished what was expected. This is also an opportunity to investigate why they were misled by the design.

Continue this process until all tasks are complete or the participant decides to stop the test. You will occasionally have users who get frustrated. It's always best to stop a session early rather than leave someone feeling unhappy for participating in a usability test that stressed them out.

Always leave time at the end for a debrief conversation to let them share overall impressions of what they saw.

Try to run individual usability sessions with at least 5 different people. With luck, you will start to see trends in the results. When you find points in tasks where many struggle, use that information to redesign and test again.
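The common advice of testing with about five people has a simple statistical rationale. As a rough sketch (assuming, per Nielsen and Landauer's well-known research, that a single user uncovers about 31% of a design's usability problems, and that users find problems independently), the expected share of problems found with n participants can be modeled as:

```python
def share_of_problems_found(n_users: int, per_user_rate: float = 0.31) -> float:
    """Expected fraction of usability problems uncovered by n_users,
    assuming each user independently finds per_user_rate of all problems.
    The 0.31 default comes from Nielsen & Landauer's research; treat it
    as a rough assumption, not a universal constant."""
    return 1 - (1 - per_user_rate) ** n_users

# Diminishing returns: each added participant uncovers fewer new problems.
for n in (1, 3, 5, 10):
    print(n, round(share_of_problems_found(n), 2))
# 5 users surface roughly 84% of problems under this model
```

Under these assumptions, five sessions surface most of the problems, and the time saved is better spent running another design-and-test iteration than recruiting more participants for the same round.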

This video shows how easy and useful it is to test a paper prototype.

“One thing” questions

At the end of any design demo, usability test, or other session with a user, I find it useful to ask these two questions:

What one thing that you saw today is most valuable to you, and why?

If you could change one thing about what you saw, what would it be and why?

Why these two questions? They help identify the one aspect of your design that people value most and the piece they feel is missing or wrong. Also, by limiting each question to one thing, people are forced to prioritize. You want to know what it takes to make v1 of your product a success. You're not looking for future enhancements or nice-to-have features; you need to know what's essential to immediate success.

When you see trends in users' answers to these two questions, use that information to enhance what they like and cast off what's less important.