When you're doing user testing, it's important to standardize the process so you end up with consistent and reliable results.
In this chapter, we take a deep dive into moderated testing—arguably one of the most complex testing methods—with the help of four experienced moderators who share their five-step process and help you consider all the variables involved. You’re going to be in good hands!
At the beginning of your usability testing process, you'll need to decide which usability testing method is right for you based on:
If you need a quick refresher, we’ve written a separate page about the usability testing methods and tools at your disposal.
One of the most thorough and in-depth methods for gaining user insight is moderated usability testing, during which a trained moderator observes the participants’ behaviors and interacts with them directly. This method is also one of the most complex ones because there are a lot of variables involved—from the location you pick to the moderator’s ability to get valuable answers from the participants.
To write the following section, we relied on the help of four veteran usability testers who talked us through their process to make sure you get the full picture: Els Aerts (founder of usability and conversion company AGConsult, with almost 20 years’ experience in the field), our editor Fio (who has run over 200 usability testing sessions since 2016), and our product designers Craig and Daniel (who are in charge of running usability testing at Hotjar).
We’re going to use an e-commerce website as our example throughout the page, but the points below apply if you want to test a prototype, a non-transactional website, or a product.
Planning the details of the usability testing session is, in some ways, the most crucial part of the entire process. The decisions you make at the start of the testing process will dictate the way you proceed and the results you end up with.
Collect all this information in one centralized place (bonus points for creating a one-page template you can reuse multiple times), and use it as your main guide towards the next steps: recruiting participants and designing the actual session.
Whom you recruit, and how, depends on your testing goals (for example, how much information you want and therefore how long your sessions need to be) and your budgetary constraints.
The most popular ways to find participants for your study:
However you recruit your participants, you'll need to compensate them for their efforts. Els recommends gift cards if you're based in the United States, and cash for most other parts of the world. The amount you pay is up to you—usually, anywhere between $30 and $50 for a non-specialist audience is acceptable.
This step (designing the task) and the previous one (recruiting participants) really happen around the same time. Once you've worked out the why and how of your research, and while you wait for participant confirmation, it's time to design the test itself.
What this means: you’re going to carefully plan the specific scenarios you’ll take your participants through, and the tasks your participants will be required to complete, to guarantee clear and actionable results. Els does this by writing specific scenarios that provide a context for the testing tasks. For example, let’s say you’re testing an e-commerce shop that sells clothes:
Make a scenario like: “You've been invited to a theme party, and the theme is red. Everybody has to come dressed all in red. You look in your wardrobe, and you don’t have anything that would work. Well, it’s time to buy something red. How do you go about it?” So now your participants will go to the website, and you know that there is a product filter for ‘red’. Can they find the filter easily? Do they know they can use filters? Et cetera, et cetera. Then, you can just watch them use (or not use) the red filter.
This scenario allows you to test the participants’ ability to use the filters on your website and is open-ended enough to apply to anyone's preferences. You want to avoid using scenarios that are too specific (e.g., asking a participant to pick out a milk foamer when they may not even like milk in their coffee).
Another pro tip: when designing scenarios, keep the most important functions of the website in mind. For an e-commerce website, the top task is usually buying something, so you would probably want to include a scenario that nudges the user through the purchasing process. Els recommends giving the tester real money to spend during the user test. “When they're spending their own money, they get a lot more critical,” she says. “We sometimes have to interrupt people, because they will happily spend 45 minutes choosing the right pair of shoes—much like they would do in real life.”
When it’s time to conduct the usability testing session, you or your moderators should follow a set protocol with each participant. This protocol leaves some room for customization but still guarantees an overall standardized experience for each test subject.
Let’s assume you are running the session yourself: here is what you do.
If you’re doing an in-person test, make sure your participant is physically comfortable with the testing setup (chair, desk height, mouse placement, etc.) and that they understand what's going to happen during the session; if you’re doing it remotely, make sure they can hear you properly.
If you are recording the session (for example, because you want to review it later), ask for their permission at this point; if you are running in-person testing, you may even have a printed consent form that you ask them to sign.
To get your subject to loosen up, ask them some friendly conversational questions, such as how far they’ve traveled to get to the lab, whether they’ve done user testing before, etc. For Craig, “the important thing for a new interviewee coming in is to feel relaxed and comfortable in the testing environment. A big goal of the first few minutes is getting to know each other and building rapport. This helps you make a smooth transition into the testing phase, when the tester hopefully doesn't even really realize that you're shifting gears, as you start to collect more specific information from them.”
During your conversation, collect demographic and psychographic information using predetermined questions. In the case of your e-commerce test, you might want to ask:
Use the rapport you've built to transition the participant into the first testing task. You would usually have 3 or 4 scenarios you want to go through, but the order in which you complete them may depend on your participant's mood and skill level. This is where it pays off to be a trained moderator. Fio notes: “You must be able to sense if your participants are getting frustrated, which may indicate you need to switch them to an easier task to build their confidence; or you may also find the super-skilled participant who completes the task in a really short time, which is where you need to be good at probing and investigating why they did what they did.”
In an ideal scenario, you have a second person taking notes for you so you can be 100% focused on the relationship with your participant; when Daniel and Craig run user tests, they record the sessions and get them transcribed so they can later go through them and highlight relevant parts or sentences.
Reserve some time at the end of the session to ask any follow-up questions and collect the participant's final feedback. Be sure to thank them for their help.
There's an art to running a moderated test session that involves establishing a rapport with the subject and naturally guiding them through the tasks. Our four veteran testers gave us their top tips for being an effective moderator:
Finally, after you've collected all your data, it's time to analyze the results and draw conclusions. Try to do this as soon as possible after testing so that the observations are fresh in your mind.
As you go over the data, pull out the most serious or frequent problems that users encountered for further examination. Daniel uses a color-coded system when he reviews his transcripts, so he can group similar situations together.
Don't address every single thing that went wrong; instead, prioritize the issues that were most problematic and need to be workshopped and resolved.
Read more: see the chapter on analyzing and evaluating usability testing data for a straightforward, five-step approach.