For UX designers and product managers, usability testing is a vital—and exciting—part of the product creation process. Vital because finally you can get some solid data to support your design hunches, and to use as a foundation for future iterations. Exciting because it’s a chance to release the designs you’ve worked on for months out into the wild, to be tried and tested by an unbiased audience of real people.

There’s just one problem: finding test participants. It might seem simple, but anyone who’s organized a bunch of in-person usability tests knows it can be a real pain. First, you have to find enough people willing to sacrifice their time, come to your office, and jump through virtual hoops for an hour. Then you have to get buy-in to conduct face-to-face interviews, which can be hard as they take so much time and have to be scheduled weeks in advance. What if your design team needs feedback now?

To avoid the headaches that come with in-person usability testing, a lot of design teams are turning to remote usability tests. These are normally done with a usability testing platform that records people completing the test, collects data, and generates insights that your design team can put into action right away.

Taking the remote approach to usability testing comes with some big pros:

Finding willing participants is much easier, as users can take the test whenever they want, wherever they want

It’s faster and cheaper to do quantitative usability testing, so you’re more likely to get statistically significant results

You can test your design on a wider range of people from different parts of the world

The time between creating the test and getting results is much shorter, which means your team can get data-driven design insights very fast.

But to really see the benefits of remote usability testing, you have to set up your test in a way that’ll get actionable results. Here are our top tips for making a test that gets the data you need—fast.

1. Moderated vs. unmoderated: pick the right type of test

There are two kinds of remote usability testing: moderated and unmoderated.

Moderated tests work much the same as traditional in-person usability tests, except the moderator and the user aren’t in the same room. Instead, the moderator observes the user on a video call using Zoom or similar software. The advantage here is that you can still ask the user as many follow-up questions as you want, which means you can potentially get more varied answers from each session. Also, the user doesn’t need to trek to your office. The downside is that it still requires a level of commitment on the user’s end—they have to do the test at a certain time with a specific setup, and the extra questions make it a more time-consuming experience all round. So you’ll probably get fewer takers, and they all need to be in your timezone. Unless conducting user tests at 3 am is your thing.

Unmoderated tests don’t have any back-and-forth between you and the user during the test. You create the tasks and write the questions in a usability testing platform like Maze, send them to users, then they complete the test alone. The platform feeds the results back to you when they finish. The good thing about unmoderated tests is that they only take a few minutes for users to finish, so it’s way less hassle for them. This potentially means a lot more people completing the test. You also don’t need to schedule or attend the test yourself, so more tests can be completed in less time. All this adds up to faster results for your team with less effort. But since no one watches over unmoderated usability tests, you can’t be there to make sure that users reach the end—or that they don’t wander off for a Twix halfway through. So choosing the right scope for your test is key.

Takeaway: For a more traditional usability testing process, go with moderated remote testing. For a quantitative, time-saving approach, try unmoderated.

2. Narrow your scope

The scope of your test can make or break its chances of success. Obviously you’ll want to test every inch of your product at some point. But with remote testing, it’s better to test specific flows than to throw everything into one mega test. There are a couple of reasons for this.

First, focusing your test on a few hypotheses will make your results much clearer one way or the other. The more designs you try to test at once, the longer your test—and the more blurry your results. Giving people fewer options lets you pinpoint design decisions and test them more rigorously.

Second, the shorter the test, the more likely the user is to finish it. This is especially important for unmoderated tests, as you can’t be there to guide their progress. And if they don’t make it to the end, you get distorted results. We recommend seven or eight tasks for unmoderated remote usability tests. The good thing is that distributing them is as easy as sending a link, so you can run small tests more frequently.

Moderated tests can be a little more complex, as you can have full conversations about what people are finding difficult and make notes on how they get stuck. And since the tests take more work to schedule, you might want to dig a little deeper to make the most of each one. Still, you should always keep tests on the short side to respect people’s time.

Takeaway: To get clearer patterns of data—and to make sure people finish—unmoderated remote tests shouldn’t be longer than seven or eight tasks. Moderated remote tests can be less focused, but the same principle applies.

3. Start looking for participants ASAP

Or even better, you already started looking. Because while it’s definitely easier to find people willing to take remote usability tests than in-person ones, it still takes time. Especially if you need users with a particular background or job title.

Remember that one of the main benefits of remote usability testing is being able to test with a large sample size. So the more users you can find, the better. Also, the earlier you find the right people, the earlier you can start testing in the design process. This could save a lot of pain undoing your hard work later down the line, as your product will be user-centric right down to its foundations.

Here are a few places to start your search:

Ask your customer success team to hook you up with your most active users—and most vocal feature requesters. They’ll have great insights and will be happy to help. And your CS team will thank you for making people feel heard.

Reach out to specific segments of users in your email list who’ll be interested in testing a feature that’s relevant to them

Create a pop-up on your site or in-app asking people for their feedback

Post on social media to give your fans a chance to shape your product

For new product releases, encourage early email subscribers to become beta testers

If you lack time and users, you can hire a test audience from a user testing panel.

Whatever method you try, start ASAP so you can build up a pool of users that are ready for testing as soon as you need them. If you leave it to the last minute, finding people can become a painful blocker in the usability testing process.

Takeaway: Finding users is the biggest potential bottleneck for remote usability testing. So start looking as early as possible.

4. Create extra-clear tasks

Clarity is super important when you create tasks for a remote unmoderated test. You won’t be there to clear up any ambiguity for the user, so your tasks need to be simple and self-explanatory. Well-written and structured tasks will get you more accurate results. Here are our top tips for task creation. To dive deeper into this topic, check out our full article on writing great usability tasks.

Define your user’s goals: The goals your users have in mind when they use your product should influence the way you phrase your tasks. Make sure to research your users’ thinking before you get started.

Give people context: Tell people about the test beforehand—what project it’s for, the kind of data you want, that you’re testing the design, and not them. This’ll help them interpret what you’re asking them to do.

Start with something easy: Give people a chance to get used to the testing process and the product with a simple opening task.

Use plain, concise language: Avoid using internal terms or technical jargon that might confuse people. If there’s a copywriter on your team, ask for their feedback.

Set one task at a time: Let users fully focus on one task before giving them another one. This will prevent them from getting overwhelmed, and help you pinpoint exact design flaws in your data.

Go with the flow: Make your test resemble a common flow of actions in your product so your test is more realistic.

Write actionable tasks: Use imperative verbs that prompt users to act, e.g. create, sign up, complete.

Don’t reveal the answer: Give people scenarios, not directions. E.g. "You need to see a doctor. Book an appointment with this app." "Click on" and "go to" are too precise—the goal of usability testing is to see if people can complete a realistic task with your product without resorting to instructions.

Takeaway: How you write and structure the tasks will largely determine the success of your remote usability test. Use simple language based on your user’s goals, and avoid making your tasks too complex—or too obvious.

5. Ask questions for more detailed insights

Questions are a way to get more data out of your remote usability test. Even if you’ve gone for unmoderated testing, usability testing software like Maze lets you ask questions before and after each task, and at the end of the test. Follow up to get people’s opinion on specific design elements, or ask more general questions afterward for some qualitative feedback. Since you’ll need to write the questions in advance for a remote test, they need to be word-perfect. Here are a few pointers:

Ask pre-test background questions: Segment your users according to their technical know-how and product habits. This is useful when you’re analyzing data later on, as you can identify trends in how different audiences use your product.

Avoid leading questions: For mid-test questions, don’t ask ‘how easy’ or ‘how useful’ a design was. This plants an idea in your user’s head before they’ve answered, which will skew your data.

Leave open-ended questions until the end: Asking how someone’s overall experience was can get you detailed, qualitative answers. But keep these questions to a minimum, as they take longer to answer.

And here are a few examples of well-written usability test questions:

How was your experience completing this task?

What did you think of the design overall?

How was the language on this page?

What do you think about how information and features are laid out?

If you want some more inspiration, you can take a look at the questions on the System Usability Scale (SUS). It’s a tried-and-tested usability survey that’s frequently used to measure product usability. The SUS even gives your product a usability score at the end.

You can also ask demographic questions to segment your users by age, occupation, education, etc. This is useful for spotting usability trends for different groups of people, but keep in mind that asking people personal questions can make them feel awkward. So ask carefully, and make demographic questions optional—or you could risk people bouncing before they start. Check out our article for more on using questions in usability testing.

Takeaway: The wording of your questions has to be very precise for remote testing because you only get one shot. Triple check them before you send out the test.
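If you’re curious how that SUS score works, the standard scoring is simple enough to sketch yourself: each of the ten ratings (1–5) contributes 0–4 points, with positively worded odd-numbered items scored as the rating minus 1 and negatively worded even-numbered items scored as 5 minus the rating, and the total is multiplied by 2.5 to land on a 0–100 scale. A minimal Python sketch of that formula (the function name here is ours, not part of any testing platform):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten answers.

    `responses` is a list of ten ratings (1-5, strongly disagree to
    strongly agree), in standard SUS order: odd-numbered items are
    positively worded, even-numbered items negatively worded.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    contributions = [
        r - 1 if i % 2 == 0 else 5 - r  # items 1,3,5,7,9 vs. 2,4,6,8,10
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # scale the 0-40 total to 0-100

# A respondent who answers 5 to every positive item and 1 to every
# negative item gets the maximum score of 100.
print(sus_score([5, 1] * 5))
```

In practice your testing platform will calculate this for you; the sketch is just to show there’s no magic behind the number.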

6. Test your test

Great designers know that you have to test everything. And that includes testing your usability test. The last thing you want is to send your remote test to 100 people, then realize there’s a task-destroying typo in the first sentence.

So share a pilot test with colleagues and get their feedback on what could be improved. Get people from different teams to try it out, as copywriters will see it with different eyes to the customer success team. Colleagues outside your own team will also experience it totally fresh, so their perspective will be closer to your users’.

Finally, send it in batches—not all at once. There’s a good chance you’ll realize something is off after the first batch or two. This way you’ll be able to fix it and avoid any major embarrassment.