Top 7 product design testing methods (and how to use them effectively)

You’ve worked tirelessly to create an amazing product and added many features you think users will love. You’re almost ready to launch, but then you discover that you’ve focused on all the wrong things. Customers don’t find the product user-friendly, and it turns out they’d prefer a simple, streamlined experience over the complex, feature-packed hierarchies you’ve designed.

Last updated: 28 Apr 2022 · Reading time: 12 min

Too many teams find themselves scrambling to make big changes right before (or even after) launching their product, which drains resources, demotivates the product team, and, most importantly, breaks trust with your users. 

Effective product design testing methods help you avoid this scenario. By running tests, you’ll reduce your assumptions and truly understand your users and their experiences. Great testing lets teams proactively identify issues and create products that exceed user expectations. 

Read our guide to learn how to implement seven effective design testing strategies.


Why testing your product is key to design thinking 

Design thinking is a five-stage product design process that helps teams generate creative solutions. Testing is a key part of the design thinking methodology: by trying out prototypes with real users, you can refine your ideas and shape innovative, customer-centric products.

Testing and design thinking 

The five stages of design thinking are: 

  1. Empathize with users and understand their jobs to be done 

  2. Define a problem statement based on what you’ve learned about your users

  3. Ideate creative solutions to the problem 

  4. Prototype models of concepts, products, and features

  5. Test your ideas and prototypes with real user groups to understand what works and what doesn’t

While testing comes last in the list, it’s important to remember that design thinking is a cycle: every stage informs the others. The best approach is to test continuously at every stage of the product lifecycle.

Benefits of testing 

Testing is the best way to see how your ideas, prototypes, and products perform in real environments with real users.

Product design testing helps you: 

  • Catch errors and blockers early on so you can address them 

  • Deploy resources effectively and make sure you’re not pouring time and energy into ideas that won’t work

  • Empathize with users and understand their jobs to be done

  • Generate new ideas you may not have thought of before

  • Feel confident in your product design decisions and priorities 

  • Improve product experience (PX), conversions, user satisfaction, and customer loyalty

  • Stay up to date with changing customer asks

  • Keep the whole organization aligned with user needs 

  • Demonstrate the efficacy of your product solutions to stakeholders

It can be tempting to skip extensive rounds of testing to move more quickly through design and development. Maybe you’ve spent time talking to users at the early ideation stages and you’re convinced you already know how they’ll respond. Newsflash: you don’t! 

Until you see users interacting with a developed concept, prototype, or product, even they don’t know exactly how they’ll behave. Taking the time to conduct thorough user testing will save you time and money—as well as your sanity—in the long run.

Pro tip: to bridge the gap between what users say and what they do, make sure you combine quantitative and qualitative testing. Product experience insights tools like Hotjar (👋) can help you synthesize voice-of-the-customer (VoC) data with user behavior analyses.

7 product testing methods for successful design

You’ll need to run different types of tests depending on which stage of the product design process you’re in and what your goals are.

In the early stages, when you’re still getting to know your users and validating potential solutions, focus on user interviews, surveys, and feedback tools. You’ll use low-fidelity prototypes at this stage (like paper prototypes and basic mockups) to give participants a general sense of your product design ideas.

As your design ideas take shape, you’ll want to test user responses to clickable prototypes with more functionality and use a wider range of testing methods, including usability testing, split testing, and user observations. At later stages, high-fidelity digital prototypes and early product iterations are key to understanding how users will interact with your final product. 

However, since product design testing is an ongoing process, you should combine different methods throughout the full product lifecycle. 

Use these seven testing methods to ensure your end product impacts customers in all the right ways:

1. Concept validation

With concept testing, you ask real or potential users to respond to early product design ideas and hypotheses, usually presented as drawings, paper prototypes, or presentations. It’s a way of validating your ideas to ensure they meet user goals.

Strengths: 

  • Early-stage concept validation shows you which ideas won’t work before you’ve spent too much time and money on them

  • A positive response boosts your confidence in your design decisions and priorities—and you can use positive feedback to sell your ideas to stakeholders

  • It’s an exploratory form of testing, which means it can help you empathize with your users to understand what they do—and don’t—want

Limitations: 

  • Since it’s a conceptual form of testing, users don’t engage with an actual product or prototype, so their responses may not reflect how they’d really behave

  • Unless you ask clear questions and push users to justify their responses, you can get fuzzy results that won’t help you make key design decisions

Pro tip: beware of false positives in concept validation testing! Users may feel obliged to respond positively just to make you happy—which skews your testing data. 

2. Usability task analysis

Usability task analysis testing checks whether users can complete key tasks on your product or website without hitches. It typically involves instructing a group of participants to complete specific actions—for example, an ecommerce app might ask users to find a product, add it to their cart, and check out. 

Researchers then observe users as they complete the tasks, either in person or through user recordings that track clicks, scrolls, and page movements. 
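
Whether you observe sessions live or through recordings, it helps to reduce each task to a few comparable numbers. Here’s a minimal TypeScript sketch, assuming a hypothetical TaskResult shape for your logged observations (not any particular tool’s format), that computes completion rate and median time-on-task:

```typescript
// Hypothetical shape for one participant's attempt at one task
interface TaskResult {
  participantId: string;
  taskId: string;
  completed: boolean;
  durationSeconds: number;
}

// Completion rate and median time-on-task for a single task
function summarizeTask(results: TaskResult[], taskId: string) {
  const attempts = results.filter((r) => r.taskId === taskId);
  if (attempts.length === 0) return null;

  const completionRate =
    attempts.filter((r) => r.completed).length / attempts.length;

  // Median is more robust to one very slow participant than the mean
  const times = attempts.map((r) => r.durationSeconds).sort((a, b) => a - b);
  const mid = Math.floor(times.length / 2);
  const medianTime =
    times.length % 2 ? times[mid] : (times[mid - 1] + times[mid]) / 2;

  return { taskId, attempts: attempts.length, completionRate, medianTime };
}
```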

Strengths: 

  • Great for sanity-checking whether your product journey and navigation hierarchy are intuitive to real users

  • Helps you quickly identify blockers, bugs, and gaps in usability

  • Shows you how your users think, move, and navigate so you can improve the user experience (UX)

Limitations: 

  • You often end up with complex results based on particular participants’ end goals and environments—it’s not a quick test you can easily summarize statistically

  • Since you’re watching real participants in real time as they use the product, it can be time-consuming

Pro tip: use filters on Hotjar Session Recordings to help you narrow down your focus in usability testing. To save time, filter to see users in a certain region or industry, or choose to see recordings only for users who reported a bad experience so you can zero in on blockers.

3. First-click testing

With first-click testing methods, teams observe users to see where they click first on an interface when trying to complete certain tasks. 

For comprehensive results, it’s a good idea to track the following (see the sketch after this list):

  • Where users click

  • How long it takes them to click

  • Where they click next
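
If you want a feel for how little instrumentation this takes, here’s a minimal browser-side TypeScript sketch; sendToAnalytics is a hypothetical stand-in for wherever you store results, not a real library call:

```typescript
// Hypothetical sink for captured events; swap in your own storage
declare function sendToAnalytics(event: Record<string, unknown>): void;

const taskStart = performance.now(); // start timing when the interface appears
let clickCount = 0;

document.addEventListener('click', (e: MouseEvent) => {
  clickCount += 1;
  if (clickCount > 2) return; // only the first click and the one after it

  sendToAnalytics({
    order: clickCount,                        // 1 = first click, 2 = next click
    x: e.pageX,                               // where the user clicked
    y: e.pageY,
    element: (e.target as HTMLElement).tagName,
    elapsedMs: performance.now() - taskStart, // how long it took them to click
  });
});
```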

Strengths: 

  • It’s a good way to check the perceived usefulness of a particular feature or element for users

  • It shows you which buttons, icons, and other navigational elements your users are failing to notice or avoiding

  • It’s a flexible form of testing: you can run first-click tests on a wireframe version of your product, on each page of your website, and on your final product

Limitations: 

  • First-click testing only shows you where users did or didn’t click—it doesn’t tell you why. Without more information, you risk guessing at or misreading their motives: you might assume users aren’t clicking a new feature button because they don’t see it as useful when, in reality, it’s just poorly placed and they haven’t noticed it on the page

Use Hotjar Heatmaps to visualize exactly where users are clicking and scrolling—then deploy Recordings, Feedback widgets, and Surveys to go deeper and find out why. 

An example of Hotjar scroll (L) and click heatmaps (R)

For effective first-click testing, ask users to do specific tasks so you can isolate and examine user behavior in each scenario separately. Begin recording the participant's behavior as soon as the interface appears. The first-click test evaluates two critical metrics: the location of the user's click and the time it takes them to click. Taken together, these two measures show whether the tasks were feasible and how difficult they were to perform.

Sara Johansson
Customer Success Manager, Onsiter

4. Card sorting

Card sorting tests the design, usability, and information architecture of your site or product page. You ask participants to move cards into the themes or topics they think are the right fit, and you may also ask them to come up with labels. The cards can be physical, or you can use virtual card-sorting software.
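
One common way to analyze the results is a co-occurrence matrix: for each pair of cards, count how many participants placed them in the same pile. A rough TypeScript sketch, assuming each participant’s sort is recorded simply as an array of piles of card names:

```typescript
// One participant's card sort: each inner array is one pile of cards
type CardSort = string[][];

// For every pair of cards, count how many participants grouped them together
function coOccurrence(sorts: CardSort[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const sort of sorts) {
    for (const pile of sort) {
      for (let i = 0; i < pile.length; i++) {
        for (let j = i + 1; j < pile.length; j++) {
          const key = [pile[i], pile[j]].sort().join('|');
          counts.set(key, (counts.get(key) ?? 0) + 1);
        }
      }
    }
  }
  return counts; // high counts = strong candidates for the same category
}
```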

Strengths: 

  • Card sorting helps you understand how your users think and shows you how to create a user journey that makes sense for your customers

  • It’s a relatively quick and easy process

  • You can tweak card sorting tests based on what you want to discover. For example, in closed card sorting exercises, you give users categories and ask them to decide which categories fit the cards. In open card sorting tests, users create the categories themselves. Both types can give you different user insights

Limitations: 

  • You only gain a partial understanding of users’ navigation needs. Since it’s an abstract test, you don’t learn how they’d categorize different product elements to actually complete tasks. 

5. Tree testing 

To run tree testing, start by showing participants a pared-down product map that branches out into tree-like hierarchies. Next, ask them to complete specific tasks on this model to see how usable and accessible they find the core product experience (PX).
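
Scoring a tree test is essentially a path check: did a participant’s clicks end at the node where the task’s answer lives, and did they get there without backtracking? A simplified TypeScript sketch (the labels and the notion of a ‘direct’ success here are illustrative, not tied to any specific tool):

```typescript
// A participant's path through the tree: the sequence of labels they clicked
function scoreAttempt(path: string[], correctLeaf: string) {
  const success = path[path.length - 1] === correctLeaf;
  // 'Direct' success = no backtracking, so no label is visited twice
  const direct = success && new Set(path).size === path.length;
  return { success, direct };
}

// Example: finding 'Invoices' via Home > Billing > Invoices
scoreAttempt(['Home', 'Billing', 'Invoices'], 'Invoices');
// => { success: true, direct: true }
```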

Strengths: 

  • Great way to quickly validate whether your design is creating a clear, intuitive navigation experience for your user

  • Time-efficient tests that are easy to set up (it’s also easy to find recruits, since you’re not asking for much of their time)

  • Offers actionable data on which features or site elements need to be labeled or presented differently

Limitations: 

  • As with card sorting, users only interact with a basic conceptual model of the product. That means you won’t get insights into how users would respond to the full product ‘in the wild’, with added features, visual elements, and environmental cues that may change their experience

  • Only gives basic data on which elements are blocking users—without digging deeper, you won’t understand why they’re struggling or what they’re trying to do

6. User feedback

If you want to really understand why users behave the way they do, ask them. Controlled, analytic testing methods can unearth valuable patterns and quantitative data you can use to make design decisions. But for a deeper view, you’ll need to use open-ended research methods, like asking users for direct feedback on particular aspects of the product or their overall user experience (UX) through surveys or user interviews.

Strengths: 

  • Lets you dig into the why behind user decisions and gives you rich insights into what customers want from your product

  • Targeted onsite feedback testing helps you understand what’s going through users’ minds as they use specific features

  • You can collect powerful voice-of-the-customer (VoC) insights that tell a compelling story about your product—and can help convince stakeholders to get on board with your design ideas

  • Helps you discover hidden issues with your product you may not have anticipated

Limitations: 

  • User interviews can be time-consuming and conversations can easily go off track 

  • It can be difficult to recruit users for interviews or lengthy surveys

  • What users tell you can differ from how they actually behave in real-life situations

Pro tip: use Hotjar’s Survey and Feedback tools to place quick, non-invasive questions on key product or website pages for a steady stream of user feedback. For a fuller picture, combine these qualitative learnings with user observation data from Recordings or Heatmaps to see how users’ thoughts square with their behaviors.

An example of a Hotjar on-site Survey

7. Split testing

With split testing methods, you divide users into two or more groups and provide each group with a different version of a product page or website element. 

In A/B testing, you work with just two user segments and offer them two options at a time. It’s important to ensure there’s only a single variable at play—for example, you might give each group a page that’s identical except for the position of the call-to-action (CTA) button.

With multivariate tests, you experiment with more variables, more user groups, or both, trying out different design combinations to determine which one users respond to best.
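
Whichever flavor you run, each user should see the same variant on every visit, or your data gets muddied. A common approach is deterministic bucketing: hash a stable user ID into a number and use it to pick a variant. A minimal TypeScript sketch (the hash function is illustrative, not production-grade):

```typescript
// Simple string hash (illustrative only; real tools use stronger hashes)
function hash(str: string): number {
  let h = 0;
  for (let i = 0; i < str.length; i++) {
    h = (h * 31 + str.charCodeAt(i)) >>> 0; // keep it an unsigned 32-bit int
  }
  return h;
}

// Deterministically assign a user to one of the variants.
// Including the experiment name keeps assignments independent across tests.
function assignVariant(userId: string, experiment: string, variants: string[]): string {
  return variants[hash(`${experiment}:${userId}`) % variants.length];
}

// The same user always lands in the same group on every visit:
assignVariant('user-42', 'cta-position', ['control', 'cta-top']);
```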

Strengths: 

  • When you’ve narrowed down your design options, split testing is a great way to make final decisions before iterating

  • Testing different options lowers the risk factor for new design ideas

  • It can be set up with your current users—you don’t necessarily have to recruit focus groups

Limitations: 

  • A/B and multivariate tests only give you answers to very specific, variable-dependent goals

  • If you’re split testing with real users, you run the risk of frustrating users with an unpopular design idea. This is a bigger problem with A/B testing, where you could potentially be testing with half of your user base!

In order to run great A/B testing, you'll want to have a hypothesis attached to each version: a reason why you believe it will yield a certain result. This way, you're not just testing two versions to win (although running an A/B test has been known to settle many a disagreement), but rather, you're inching closer to understanding how the user behaves and how you can provide the best user experience possible.

Ruben Gamez
CEO & founder, SignWell

How to run an effective testing process

Running tests is only one part of the process: you also need a plan for how you’ll implement your learnings. Follow these four steps to conduct an effective product design testing process:

1. Define clear testing questions 

First, you need to know what you’re testing. Develop clear, specific research questions and hypotheses based on key user needs or observations you’ve made on your site. 

Maybe you're asking, 'Why are so many users abandoning their carts?' The next step is to develop hypotheses to test, for example: 'If we make previous customer reviews more visible before the checkout process, it may increase customers’ confidence in buying and decrease cart abandonment rates.' You’ll then design a test process to check whether your hypothesis is correct, or find more information on the pain point to formulate new hypotheses.
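
Once results come in, you also need a way to judge whether any difference is real or just noise. For a conversion-style metric like cart abandonment, a two-proportion z-test is one common check; here’s a minimal TypeScript sketch with hypothetical numbers (in practice, a statistics library or your testing tool would do this for you):

```typescript
// Two-proportion z-test: is variant B's conversion rate really different from A's?
function twoProportionZ(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB); // pooled conversion rate
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se; // |z| > 1.96 ≈ significant at the 95% level
}

// Hypothetical example: 180/1,000 converted on the control, 220/1,000 on the variant
const z = twoProportionZ(180, 1000, 220, 1000);
console.log(z.toFixed(2)); // ≈ 2.24, so the lift is unlikely to be pure chance
```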

2. Engage a mix of participants 

Make sure you test with a decent percentage of unknown users (definitely not your team!) who are unbiased. Include a mix of current product users and members of your target audience who haven’t used your product before. 

It’s also a good idea to test with different user groups and demographics. Some testing and product insights tools offer filters you can use to see results for particular kinds of end-users. With Hotjar, for instance, you can sort recordings to evaluate users in a particular region or those who had a positive or negative experience. 

3. Don’t ‘lead’ participants or influence their responses

For clear, accurate test results, make sure you don’t ask leading questions that could bias participants. Don’t over-explain what the product does or what they should experience when they use it. Let them experience your product and then tell you about it.

Pro tip: encourage testing participants to give you direct, honest, and even negative feedback. Remember, test participants are often sensitive to hurting your feelings, and you want to avoid a situation where they’re saying what they think you want to hear. Remind them that you want to hear what’s not working well: they’re helping you by signaling what could be made better.

4. Collaborate and communicate 

Testing should be a full-team process. Of course, some roles will take more ownership in running the process. But make sure you collaborate to design effective testing that answers all your questions. 

You should also communicate throughout the testing process to keep the whole team and external stakeholders up to date and aligned.

Use Hotjar Highlights—and our Slack integration—to automate sharing testing insights with everyone who needs to see them!

Use design testing to prioritize brilliantly 

The most important part of testing is turning test insights into actions. 

It’s easy to collect the data and never do anything with it—but that’s a waste of resources and valuable user insights. 

Make sure the information you gather trickles through to everyday design choices and each stage of the design process. 

By using product experience insights tools to synthesize, visualize, and share your test results, you can put your data to use in generating problem statements, sparking ideation, and informing new prototypes and iterations that solve more and more user needs.

Supercharge your product testing with Hotjar's rich user insights

Hotjar gives teams a rich combination of quantitative behavioral data and qualitative VoC insights for testing that gets results.

FAQs about product design testing methods: