
4 concept testing methods (and how to use them effectively)

Concept testing is a brilliant technique that helps you validate your ideas long before their launch. You discover what your users want and need, so you don’t invest valuable resources in building a homepage that confuses them or release a logo that doesn’t align with your company’s values.

But choosing the best concept testing method can feel daunting. You know you need to follow a structure or protocol to get reliable results, but you’re not sure what your options are or how to pick the right one.


This article dives deep into the four main concept testing methods—and breaks down the pros and cons of each. You’ll walk away knowing how to select the right type—so you can collect the best user insights to shape your project or product.

Validate your ideas fast

Use Hotjar’s digital experience insights tools to conduct effective concept testing and build a product that delivers results.

4 concept testing methods explained

Concept testing is a research method for collecting user feedback on new product or design ideas. Whether you’re part of a user experience (UX), product, or marketing team, this process helps you determine what your users like (and dislike), so you can create products that convert better. 

Depending on your budget and needs, you might use 1:1 interviews, focus groups, or surveys to conduct concept testing. 

The four types of concept testing are: 

  1. Monadic testing

  2. Sequential monadic testing

  3. Comparative testing

  4. Protomonadic testing

Let’s look at each method, how to use it, and its benefits and drawbacks.

1. Monadic testing

In monadic testing, a single concept takes center stage. Testers evaluate a concept on its own merits, without comparing it to other options. 

If you want to test other concepts—and we recommend you do!—use a separate pool of testers who haven’t seen the first design, so your data more accurately reflects user preferences without interference from external factors.

For example, testers might say they like a logo’s design—until it’s in a line-up with the original, more familiar logo.

Monadic testing lets users focus on a single concept (left) instead of a side-by-side comparison (right)

Monadic testing is great for most concept tests—from ad creative to product prototypes. But because it requires a bigger sample size, which takes more time to get, you may want to skip monadic testing for your low-fidelity concepts, like quick ad design sketches on paper, and save it for more detailed medium- or high-fidelity mock-ups.
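If you’re wondering how big “bigger” is, here’s a minimal back-of-the-envelope sketch (our own illustration, not a Hotjar feature) using the standard margin-of-error formula for a proportion. Because each concept needs its own group of respondents, the total sample grows with every concept you add.

```python
from math import ceil

def monadic_sample_size(concepts: int, margin_of_error: float = 0.05,
                        z: float = 1.96, p: float = 0.5) -> int:
    """Rough total respondent count for a monadic test.

    Uses the standard margin-of-error formula for a proportion
    (z^2 * p * (1 - p) / e^2), then multiplies by the number of
    concepts, since each concept gets its own group of testers.
    """
    per_group = ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)
    return per_group * concepts

# Three concepts at a ±5% margin of error and 95% confidence works out
# to roughly 385 testers per concept, or about 1,155 in total.
print(monadic_sample_size(concepts=3))
```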

When conducting monadic tests, keep these tips in mind:

Monadic testing benefits and challenges

Some benefits of monadic testing include: 

  • You get in-depth insights on each concept to inform future iterations

  • Testers only evaluate a single concept, which keeps the survey short and reduces test fatigue

  • There’s less risk of order bias, which occurs when variant order affects the results

And some drawbacks of monadic testing are: 

  • It may not reflect real-world circumstances; typically, people view products, marketing assets, and designs in the context of external factors

  • Since each group only views a single concept, you need to find more testers to assess your other concepts

  • You may need more time to gather enough insights and move on with your design and development process

💡Pro tip: gather the sample size you need—fast—by placing your survey on the highest traffic page on your website. With Hotjar Surveys, you can choose the on-site option that will attract the most respondents, like: 

  • Pop-over surveys that pop out at a certain point on the page

  • Button surveys that users can opt to open

  • Full-screen surveys that appear as an overlay in the middle of the page

If you have an engaged email list, that’s always a good delivery option too. Hotjar will generate a unique link for you to send to your audience. Easy peasy.

Consider survey type and delivery to quickly gather a large sample of responses

2. Sequential monadic testing

In sequential monadic testing, like in standard monadic testing, respondents see one concept at a time and answer questions about their preferences on that concept. But in the sequential monadic approach, researchers test multiple concepts with the same group of respondents. 

Seeing a stellar design first can influence a tester’s reaction to subsequent concepts. To minimize this bias, called order bias, researchers split respondents into separate test groups and show each group the concepts in a different order. 

For example, if a direct-to-consumer (DTC) company is testing three packaging options for their new line of soap, they might show one group of testers Option A, B, and then C; another B, C, and then A; and a third C, B, and then A. Then, they compare the results between the groups.
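If you’re curious what that counterbalancing looks like in practice, here’s a minimal sketch (our own illustration, using a simple rotation scheme rather than the exact orders above) that assigns respondents to groups so no single concept always appears first.

```python
from itertools import cycle

def assign_rotated_orders(concepts, respondents):
    """Give each respondent a rotated concept order (A-B-C, B-C-A, C-A-B, ...).

    Rotating which concept comes first spreads order bias evenly
    across groups instead of letting one concept always lead.
    """
    rotations = [concepts[i:] + concepts[:i] for i in range(len(concepts))]
    return {
        respondent: order
        for respondent, order in zip(respondents, cycle(rotations))
    }

orders = assign_rotated_orders(
    concepts=["A", "B", "C"],
    respondents=["resp_01", "resp_02", "resp_03", "resp_04", "resp_05", "resp_06"],
)
for respondent, order in orders.items():
    print(respondent, " → ".join(order))
```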

Always test multiple concepts and ask users open-ended questions about them. This will help you compare and contrast different ideas, and dig into their preferences to find out which aspects they are most interested in—so you can apply these findings to your iterations.

The Hotjar team

💡Think of sequential monadic testing as presenting multiple monadic tests to the same audience. If you follow the same best practices you do for monadic, you’ll be golden. 

Sequential monadic testing benefits and challenges

Some benefits of sequential monadic testing include: 

  • Since each tester reviews multiple concepts, you don’t need a large sample size, which saves you time and reduces costs

  • You gather the right amount of responses in a short amount of time, helping you make decisions and prioritize faster

And some drawbacks of sequential monadic testing are: 

  • You need active measures in place to combat bias 

  • Surveys are longer, which can negatively affect completion rates

  • Limiting survey length means fewer open- and closed-ended questions, resulting in less in-depth data on each concept

💡Pro tip: don’t make surveys feel like hard work. Instead, keep testers attentive during long concept tests by using visually interesting concept images or engaging language in your instructions and descriptions. Keep your copy as clear and concise as possible; testers might jump ship if faced with a wall of intimidating or robotic text.

Also, monitor your survey’s performance stats to see how well you’re engaging testers (Hotjar lets you launch surveys and evaluate their performance😉). If your completion rate seems low, take a look at the survey breakdown. You may need to shorten or revise questions with a high drop-off rate.

Hotjar lets you view your survey’s performance statistics to increase engagement

3. Comparative testing

Sometimes you’ve got more than one great option, and you just need a second opinion (or a dozen second opinions). That’s where comparative testing comes in. 

Also known as comparison or preference testing, comparative testing is a method used to determine the relative appeal of two or more concepts. In other words, this approach looks at which design respondents prefer.

Hotjar’s preference test template makes it easy to run comparative tests

In comparative testing, respondents view concepts side by side instead of one after another, as in sequential monadic testing. A survey might ask them to score or rank designs against fixed criteria, like originality or usefulness, or instruct them to indicate their preference and explain why.
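Once the responses come in, tallying a preference question is straightforward. Here’s a minimal sketch (the response data is hypothetical) that counts votes and ranks the designs by preference share.

```python
from collections import Counter

# Hypothetical answers to "Which design do you prefer?"
responses = ["A", "B", "A", "A", "C", "B", "A", "C", "A", "B"]

votes = Counter(responses)
total = sum(votes.values())

# Rank designs by preference share, most preferred first
for design, count in votes.most_common():
    print(f"Design {design}: {count}/{total} ({count / total:.0%})")
```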

When conducting comparative tests, keep these tips in mind: 

Comparative testing benefits and challenges

Some benefits of comparative testing include: 

  • Testers can directly compare two or more designs

  • You get clear data about what resonates most with users

  • A small sample size works well, which is more cost-effective

And some drawbacks of comparative testing are: 

  • It doesn’t provide in-depth feedback on the individual concepts or nuances in testers’ thinking

  • It may not work well for complicated concepts because seeing two highly detailed images next to each other might confuse and overwhelm testers

💡Pro tip: get your comparative tests up and running in minutes with Hotjar’s preference test template. Just upload a side-by-side image of your concepts, and you’re ready to roll. The template comes ready with two questions: 

  1. Which design do you prefer?

  2. What was your first impression of the design? 

While you have the option to add questions, keeping the survey short means testers avoid survey fatigue, increasing completion rates.

Start collecting insights in minutes with Hotjar’s preference test template

4. Protomonadic testing

Protomonadic testing combines elements of sequential monadic testing and comparative testing. Respondents view multiple concepts separately and sequentially, and then choose their preference at the end. 

In this best-of-both-worlds approach, you accurately gauge the first impressions of each concept, and understand how they perform against each other.

Use Hotjar Surveys to conduct protomonadic testing

Ask users for their first impressions of a design or concept with a rating scale question. Not only does this help quantify participants’ reactions, but it also makes it easy to tell—at a glance—how they perceive it (e.g., ‘dislike,’ ‘love,’ ‘hate,’ etc.) and where the main areas for improvement lie.

The Hotjar team

💡Note: for best results with protomonadic testing, follow the best practices of monadic and comparative testing. While it may seem like more work upfront to design a two-for-one test, you reap the benefits of richer, more reliable results.
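As a rough illustration of how the two halves fit together, here’s a minimal sketch (with hypothetical response data) that averages the first-impression ratings from the sequential part of the test and tallies the final preference from the comparative part.

```python
from statistics import mean
from collections import Counter

# Hypothetical protomonadic responses: a 1–5 first-impression rating
# for each concept seen in sequence, plus a final preferred concept.
responses = [
    {"ratings": {"A": 4, "B": 3, "C": 5}, "preferred": "C"},
    {"ratings": {"A": 5, "B": 2, "C": 4}, "preferred": "A"},
    {"ratings": {"A": 4, "B": 3, "C": 4}, "preferred": "C"},
]

# Average first impression per concept (the monadic half of the test)
for concept in ["A", "B", "C"]:
    avg = mean(r["ratings"][concept] for r in responses)
    print(f"Concept {concept}: average first impression {avg:.1f}/5")

# Final head-to-head preference (the comparative half)
print(Counter(r["preferred"] for r in responses).most_common())
```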

Protomonadic testing benefits and challenges

Some benefits of protomonadic testing include: 

  • It allows for better comparisons between concepts since respondents have analyzed each concept before making a decisive judgment call

  • You don’t need as many respondents as in monadic testing 

And some drawbacks of protomonadic testing are: 

  • It can be more challenging to design since it combines two different types of tests

  • With more questions, participants face survey fatigue, which can lower your completion rate

💡Pro tip: create a plan for analyzing your survey results. Protomonadic testing yields considerable quantitative and qualitative data. Many people feel confident analyzing numbers—the quantitative results from, say, a rating scale—but hesitate with how to approach qualitative results, like the long-text answers users give to explain their reasoning.

Qualitative results often present challenges since they’re open to interpretation. Use thematic analysis to make sense of your text-based feedback. Here’s how:

  1. Read through the data—in this case, survey responses

  2. Code the data by identifying key phrases that keep popping up about a topic

  3. Look for themes connecting these phrases that you can use for grouping labels

The good news? Once you’ve nailed down this process, you can use it to analyze other types of customer feedback—like user interviews.
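Here’s a minimal sketch of that coding step (the answers and theme keywords are hypothetical, and a real thematic analysis still needs human judgment). It gives you a rough first pass at grouping open-text responses under themes.

```python
from collections import defaultdict

# Hypothetical open-text answers and theme keywords -- in practice
# you'd refine these codes as you read through the responses.
answers = [
    "The layout feels cluttered and hard to scan",
    "Love the colours, but the font is hard to read",
    "Too much text on the page, felt overwhelming",
]
themes = {
    "visual clutter": ["cluttered", "too much text", "overwhelming"],
    "readability": ["hard to read", "hard to scan", "font"],
}

# Group answers under every theme whose keywords they mention
coded = defaultdict(list)
for answer in answers:
    for theme, keywords in themes.items():
        if any(keyword in answer.lower() for keyword in keywords):
            coded[theme].append(answer)

for theme, matches in coded.items():
    print(f"{theme}: {len(matches)} response(s)")
```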

Collect insights that lead to empathy and iteration

The four concept testing methods we explore in this article offer actionable data that empowers you to determine what your users want—so you can give it to them. However, each technique has its own unique benefits and drawbacks. 

To determine which method works for you, start by outlining your project’s goals and team’s constraints, like budget or time. Once you clearly understand your objective and parameters, then you can consider the pros and cons of each method, and find the one that fits best. 

No matter which method you choose, you’ll walk away with a better understanding of your ideal customer persona and ideas for a better product or design.

Validate your ideas fast

Use Hotjar’s digital experience insights tools to conduct effective concept testing and build a product that delivers results.

FAQs about concept testing methods