Alla Kholmatova, an interaction designer in the FutureLearn product team, explains how the team uses guerilla user testing to help make FutureLearn better.

If you happen to visit the British Library often, chances are that you’ve been asked if you’d like to be involved in one of our user testing sessions. User testing is an important part of our design process at FutureLearn. Testing early prototypes allows us to validate how effective the design is, before we go ahead and build it. If you’d like to know more, here’s how we do it.

Using a quick informal method

One of the user research methods we practise within the team is guerilla testing: an economical user testing technique, which can be conducted anywhere ‘out in the wild’ (hence the term ‘guerilla’ testing). This method is quick, relatively easy to set up, and fits well in the agile methodology, which we follow here at FutureLearn.

The sessions tend to be short, as we structure them around specific research goals. The whole process lasts no more than two hours, during which we talk to about 10-12 participants, for 5-10 minutes each.

Typically we have two testing “stations” set up, which work independently of each other. We believe this helps us achieve more objective results, since two interviewers each talk to 5-6 participants in parallel.

Using simple but great tools

Working in two week sprints, we normally aim not to spend any more than 2-3 days on user testing during one sprint. This includes writing a basic test plan, preparing a prototype in Axure or Proto.io, running a test, analyzing and pulling out useful data, and writing up a short results summary.

We try to only test high fidelity prototypes rather than wireframes or paper prototypes, as our goal is to get genuine reactions and to avoid false positives.

All our sessions are recorded using Silverback, so we can refer to them during the analysis stage or share the clips with the team.

Testing in the British Library

Since our office is conveniently located in the British Library, this is where we usually conduct our testing. Thousands of people visit the library every day, so we rarely have trouble finding participants for our sessions. Not everyone in the testing team was comfortable with approaching people for research at first, but gradually we all got better at it.

One tip we found helpful is to have a selection of beautifully presented chocolates on display, which we use to thank participants for their time.

Testing for discovery, not for validation

We avoid doing user testing simply to confirm our beliefs or to validate a design that’s already been signed off. Quite the opposite: we try to stay open-minded and be prepared that our solution will not perform as expected.

Sometimes our design proposals fail in user testing, but that’s perfectly okay. In fact, failure can be a more valuable learning experience than a smooth session where everything goes as expected.

A couple of months ago we had a routine testing session in the library, during which we tested a fairly simple design. While working on it, the solution seemed obvious and we were certain that it would perform well in tests. However, the opposite happened. About halfway through the session it became apparent that the design had failed. We were tempted to stop the test but decided to continue until the end, as usual. This allowed us to look at the solution closely and to understand exactly why it didn’t work.

While it could have been considered a disappointing experience, in our team this small failure created a lot of joy and excitement. Not only because we avoided building something that wouldn’t work, but also because of the amount of new knowledge we gained from this experience.

When things don’t go according to your expectations, you are much more likely to stop and examine why something happened. We’ve found it is important to be prepared for any outcome and to see it as a discovery and a learning opportunity. The further the result is from what you expected, the more you can learn from it.

Involving people from mixed interests and backgrounds

We invite people from different teams to participate in planning and conducting the testing, so that we get multiple perspectives and avoid biased results. Usually there are four of us conducting the test: two designers (although usually only one is directly involved in working on the project we are testing), a developer, and someone from another discipline, e.g. a product manager.

Most importantly, everyone who takes part is actively involved in user testing, and not just a passive observer. We all talk to the participants, take notes, and have a go at leading the session.

Analysing the data gathered during two hours of user testing is also a shared task. Usually at least two of us write up the notes in Google Docs and re-watch the session videos. Afterwards we compare the findings, draw conclusions and share them with the rest of the team.

Always learning

‘Always learning’ is one of our company values. Everyone at FutureLearn is naturally curious and interested in learning new things, particularly if this is something that can be useful for constructing and influencing our learners’ experiences. That’s why our findings are shared with everyone in the company, not only with the product team.

Although we avoid writing lengthy research reports, we still summarise our key findings in an 8-10 page informal document, which can be used for reference. We try to structure it in a way that makes the goals and outcomes of the project clear, even to people who weren’t directly involved in it. These research summaries, as well as the clips from user testing, are available for everyone in the company to read and refer to.

We also present our findings to the whole company at sprint reviews, which happen every other week. This is a great opportunity for us to share what we’ve learned and to explain the rationale behind some of the design decisions we made, as a result of user testing and research.