
7 easy steps to conduct effective user testing

You place value on testing your product's actual performance in the real world, with real users, and that's why you're here. So, let's get to the heart of the matter: how to do user testing the right way to improve your product’s functionality and usability.

Last updated: 3 May 2023

Reading time: 8 min


User testing looks at whether your offering works as expected (functionality) and enables users to accomplish their goals (usability)—both of which are critical to maintaining a happy user base. This customer-centric approach to product development ultimately saves you time, money, and effort, all while ensuring you meet user needs.

This guide gives you seven easy steps to evaluate your product or design through your users' eyes. You'll learn to choose the correct user testing method for your needs and create the optimal environment to run your test.

Let’s begin.

Research based on real users’ actions

Get valuable insights into how visitors use your site. Run quick user testing with Hotjar’s qualitative tools.

Step 1: identify your testing goals

Start by defining your test goals. Clear objectives direct your efforts and make it easy to evaluate test results later on.

To set your goals, know what you want to learn from your users. For example, you might investigate: 

  • How long does it take users to complete a task on your site?

  • Are people satisfied with your service or product? 

  • What do customers think is missing from your offering?

Also, be sure to specify a focus area, like uncovering usability issues or testing your assumptions about your product’s core features.

Why should you conduct early-stage testing?

User testing can occur at any point in the design and development process, from wireframing to the launch of a new product, service, website, feature, or update, and as needed in the product experimentation cycle.

Testing early is ideal for companies that want to encourage innovation and user-led changes, and it comes with multiple benefits. Putting a mockup or prototype in front of your target audience allows you to:

  • Determine the most critical features for users

  • Identify missing features

  • Reduce the amount of rework needed afterward

  • Save on design and development costs

  • Deliver fixes quickly and effectively

And instead of bringing your target users together and observing them in a lab, you can start small with session recordings and heatmapping, available through Hotjar (👋 hi there!). 

By strategizing effectively and making the most of the tools available, you’ll build products users love.

Step 2: select a suitable testing method

User research reveals what people think and how they feel about your product or design, as well as how they perceive and interact with it. That’s a wide range of user experiences—and one type of test alone can’t possibly capture it all. 

But the growing emphasis on human-centered design and the evolution of research methods mean you can now easily pick from a broad range of user testing types to find one that matches your purpose. 

For example, there’s usability testing, a subtype of user testing, which lets you see how quick and easy it is for visitors to use your site or product. On the other hand, session recordings come in when you need to observe unusual mouse movements and rage clicks, helping you understand how people navigate your pages.

💡 Pro-tip: set up a usability test when you need to check user actions and behavior related to any of these elements:

  • Navigation

  • Familiarity

  • Consistency

  • Error prevention

  • Feedback

  • Visual clarity

  • Flexibility and efficiency

Which is the ideal user testing method for you?

Here are a few of the most popular user test types to consider when selecting your method:

  • A/B testing: compare two variations of a webpage, app, or other digital asset to determine which one performs better. This helps you validate assumptions regarding certain changes, but can't tell you why they work (or don’t). (For a feel of the mechanics, see the sketch after this list.)

Hotjar integrates with A/B testing platforms like Optimizely, helping teams uncover why the winning variant works
  • Session recordings: observe clicks, taps, and mouse movements—generally how users behave and interact with your site. Then, send them a survey to investigate further.

Recordings quickly show you how real users interact with your website

Generate a survey that aligns with your user testing goals
  • Heatmaps: get an aggregate view of where users click, how far they scroll, what grabs their attention, and what they ignore on a page. For instance, use Hotjar Heatmaps to learn whether your landing page copy inspires them to take action.

Hotjar’s Heatmaps tool shows you where users clicked on a specific page, how far they scrolled, and what they looked at or ignored
  • Concept testing: ask users about your UX design ideas and concepts before launching them. Concept testing is a survey type in Hotjar that lets you gauge whether customers will respond positively to your decisions.

Validate your user interface or UX design decisions and find out what users truly love
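
To make the A/B testing entry above concrete, here’s a minimal sketch of deterministic variant assignment, the mechanism most A/B testing platforms use in some form. The function names, the hash, and the 50/50 split are illustrative assumptions, not any particular platform’s API.

```typescript
// Minimal sketch of deterministic A/B variant assignment.
// Hashing the user ID (instead of random assignment per visit) keeps
// each user in the same variant across sessions.

type Variant = "A" | "B";

// Simple string hash (FNV-1a); illustrative, not cryptographic.
function hash(input: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0; // force unsigned 32-bit
}

// Assign a user to a variant for a named experiment (50/50 split here).
function assignVariant(userId: string, experiment: string): Variant {
  return hash(`${experiment}:${userId}`) % 2 === 0 ? "A" : "B";
}

// Usage: the same user always lands in the same bucket.
console.log(assignVariant("user-123", "new-filter-feature")); // e.g. "B"
console.log(assignVariant("user-123", "new-filter-feature")); // same result
```

Pairing an assignment like this with recordings of each variant tells you not just which version wins, but why.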

👀 This is just a peek. If you want a comprehensive take on user testing methods, check out the following chapter.

Step 3: recruit the right test participants

Now it's time to recruit participants for user testing. You might wonder: who you gonna call? As we already mentioned, your test subjects should be your product's or service's real users or the people who fit your buyer personas.

Regardless of the user testing method, you can automate participant recruitment with technology like Hotjar Engage. Your workload gets lighter because Engage also lets you schedule interviews and include user tests in them.

Here are a few examples of how to streamline your user research with Engage:

  1. 👩‍🎨 Product designers can recruit and schedule interviews with target users from their existing user base. During an interview, the host can ask users to navigate a new prototype, gather user feedback, and use the insights to inform product development decisions.

  2. 🧑‍💻 Product managers can ask existing customers to participate in testing a newly released feature for usability issues. When conducting the usability test, they can then observe users completing tasks.

  3. 👩‍🔬 Researchers can pluck future candidates for user testing out of Hotjar Engage's 175,000-strong pool of testers. Once they’ve ensured the users match their company's buyer personas, they can conduct in-depth interviews regarding their goals and pain points.

Tap into these interviews' quantitative and qualitative insights in your next user test.

Engage helps us iterate drastically faster on our designs. Preparation for user tests used to take at least 2 working days, including screening & recruiting. Since we started to use Engage, this takes us only half a day.

Tamas Kocsis
Product manager at Skyscanner

Step 4: find a fitting test location or environment

The ideal setting depends on your chosen method. A/B testing, surveys, recordings, heatmaps, and even beta testing can be remote and moderated or unmoderated, meaning you can set up the test via an online call or let users visit your site from wherever they usually do:

  • Let's say a site wants to validate a new filter feature's potential to enhance conversions and lift sales. In this case, the host and testers don't need to be in the same room (even if the business can afford it). 

  • Test facilitators can simply create an A/B test and trigger Hotjar to capture recordings of users applying (or ignoring) the filter (see the sketch below). And users can continue to access the app or site at home, during their commute, or at work.
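
Here’s a rough sketch of that filter example: tagging sessions with a Hotjar event when a user applies the new filter, so the resulting recordings can be filtered down to exactly those sessions. Hotjar’s Events API takes the form hj('event', 'name'), but the event name, the element selector, and the wiring below are assumptions for illustration; check your own setup before relying on them.

```typescript
// Sketch: tag Hotjar sessions when users interact with the new filter,
// so recordings can later be filtered by this event.
// Assumes the standard Hotjar tracking snippet is already installed
// and exposes a global `hj` function; names below are illustrative.

declare global {
  interface Window {
    hj?: (command: string, ...args: unknown[]) => void;
  }
}

function trackHotjarEvent(name: string): void {
  // No-op if the Hotjar snippet hasn't loaded (e.g. blocked by the user).
  window.hj?.("event", name);
}

// Hypothetical handler for the filter control under test.
document.querySelector("#product-filter")?.addEventListener("change", () => {
  trackHotjarEvent("applied_new_filter");
});

export {};
```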

Of course, some situations still call for a physical location. For instance, in-person moderated usability testing requires your team to meet test subjects in their homes, community, or any appropriate space. Your office or a lab is also a good option if you require specialized tools like eye-tracking equipment.

Here, you can sit down with the user, instruct them on what to do, and ask follow-up questions immediately while observing them take specific actions.

Step 5: run your test

Before conducting your test, ensure your participants have all the necessary instructions and information. This may include the test objectives, duration, necessary equipment (like a webcam), and feedback expectations (written, verbal, or through a form). 

Also, depending on whether your test is in-person or remote, and moderated or unmoderated, you can apply these best practices:

  • Get testers' permission before you observe and record them

  • Give clear instructions that leave little room for interpretation

  • Encourage participants to think out loud. If sending them a script or prompt before the testing day will put them at ease, do it.

  • Keep your speech and body language neutral to avoid influencing the users' opinions. Otherwise, they might modify their responses to please you.

  • From screening to post-test follow-up, equip yourself and your team with strong questions to pave the way for accurate analysis of the results.

❓ Examples of good questions to ask users during the test

Ask user behavior-related questions to understand how users interact with the page, elements, and content:

  • If you were looking for X, where would you expect to find it?

  • How did you find the experience of using the website to complete this task?

Include ‘why’ questions to learn people’s motivations in using a product or service and discover opportunities for improvement:

  • Why did you choose to do X?

  • Why did you choose not to do X?

Go all out and get feedback on the end-to-end user experience:

  • How would you describe your overall experience with the product?

  • Which part of the experience, if any, surprised or frustrated you?

Lastly, let the users paint a fuller picture of their experience by using probing questions:

  • How easy or difficult was it to navigate [a specific page]?

  • What are your thoughts on the design and layout?

Want more? Head to our usability testing guide for a different set of questions.

Step 6: document the data and insights

While the test is in progress, the moderator should focus on observing and supporting the participants. Pay attention to details like where they initially move the cursor, what they click, and how far they scroll. Speak only when asked so as not to interrupt their flow.

Ideally, a second teammate notes what the users say or don't say (non-verbal cues), how they complete the task, and where they get confused, hesitate, or get stuck. You and your team can review the recording later, alongside the notes and transcript, to better understand the subjects' experience.

Highlight and share insights with stakeholders using Hotjar

In Hotjar, you can optimize how you gather data and share insights through these features:

  • Highlights: save important clips from your recordings or selected portions of your heatmaps as highlights. Add them to a Collection, label them accordingly, and throw in a comment so that anyone can easily find them in the Highlights tab. Learn how to use Highlights here.

View your saved highlights by Collection and filter them by Label

Step 7: analyze the results

As you sift through the data collected during the test, consider every piece of feedback and identify the most significant issues that need to be addressed. 

Stay organized by using a method that works for you, such as a color-coded system for grouping similar situations together and spotting patterns. And don't feel the need to fix every single thing that went wrong during testing. Instead, prioritize the most problematic, high-impact issues to improve your UX design effectively.

If you wish to conduct usability testing, include these user metrics in your planning and analysis (a short sketch of how to compute them follows the list):

  • Successful task completion: a participant successfully completed a task, such as typing a required value in a form field

  • Critical errors: a participant failed to complete the task or provided an incorrect value 

  • Non-critical errors: errors made by users that did not prevent them from completing the task. Non-critical errors are low impact but can still make task completion less efficient.

  • Error-free rate: the percentage of test participants who completed the task without any errors (critical or non-critical)

  • Time on task: the length of time it took the participant to complete the task

  • Subjective measures: user opinions related to factors such as likes and dislikes, satisfaction, ease of use, and ease of finding information
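
To tie these metrics together, here’s a minimal sketch of how you might compute completion rate, error-free rate, and average time on task from a set of task results. The TaskResult shape is an assumption for illustration; map your own tool’s export onto it.

```typescript
// Sketch: compute usability metrics from per-participant task results.
// The TaskResult shape is illustrative; adapt it to your tool's export.

interface TaskResult {
  participant: string;
  completed: boolean;        // successful task completion
  criticalErrors: number;    // errors that blocked completion
  nonCriticalErrors: number; // errors that didn't block completion
  secondsOnTask: number;     // time on task
}

function summarize(results: TaskResult[]) {
  const n = results.length;
  const completed = results.filter(r => r.completed).length;
  const errorFree = results.filter(
    r => r.criticalErrors === 0 && r.nonCriticalErrors === 0
  ).length;
  const avgTime = results.reduce((sum, r) => sum + r.secondsOnTask, 0) / n;

  return {
    completionRate: completed / n, // share who finished the task
    errorFreeRate: errorFree / n,  // share with zero errors of any kind
    avgSecondsOnTask: avgTime,
  };
}

// Usage with two illustrative participants:
console.log(summarize([
  { participant: "p1", completed: true,  criticalErrors: 0, nonCriticalErrors: 1, secondsOnTask: 42 },
  { participant: "p2", completed: false, criticalErrors: 1, nonCriticalErrors: 0, secondsOnTask: 75 },
]));
// → { completionRate: 0.5, errorFreeRate: 0, avgSecondsOnTask: 58.5 }
```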

Test early and test often

No business investing in its users—making them feel included in the development of a product or service that's supposed to meet their needs—will ever regret this decision.

With a proven framework, practical yet powerful tools, and a dedicated team, you have all the elements needed to make user testing successful.

So, put everything you've learned here to work. And don't let your website, product, or service be hit or miss.

Target the real problem right away

Conduct user testing fast and discover issues early. Design a functional and effective website for your users.

FAQs about user testing