
CRO glossary: type 1 error

What is a type 1 error?

Type 1 error is a term statisticians use to describe a false positive—a test result that incorrectly affirms a false statement about the nature of reality.

In A/B testing, a type 1 error occurs when experimenters conclude that a variation of an A/B or multivariate test outperformed the other(s), when the observed difference was actually the product of random chance. Type 1 errors can hurt conversions when companies make website changes based on those incorrect conclusions.

Type 1 errors vs. type 2 errors

While a type 1 error implies a false positive—that one version outperforms another—a type 2 error implies a false negative. In other words, a type 2 error falsely concludes that there is no statistically significant difference between conversion rates of different variations when there actually is a difference.

Here’s what that looks like:

  • Type 1 error (false positive): the test declares a winner, but the variations actually perform the same

  • Type 2 error (false negative): the test declares no clear winner, but one variation actually performs better

What causes type 1 errors?

Type 1 errors can result from two sources: random chance and improper research techniques. 

Random chance: no random sample, whether it’s a pre-election poll or an A/B test, can ever perfectly represent the population it intends to describe. Since researchers sample a small portion of the total population, it’s possible that the results don’t accurately predict or represent reality—that the conclusions are the product of random chance.

Statistical significance measures how unlikely it is that the results of an A/B test were produced by random chance. For example, say you’ve run an A/B test at a 95% significance level and it shows Version B outperforming Version A. That means that if the two versions actually performed the same, there would only be a 5% chance of seeing a difference this large. You can raise your level of statistical significance by increasing the sample size, but that requires more traffic and therefore takes more time. In the end, you have to strike a balance between your desired level of accuracy and the resources you have available.
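To make that concrete, here’s a minimal sketch in Python of the kind of significance check an A/B testing tool runs behind the scenes, using a standard two-proportion z-test. The visitor and conversion counts are invented for illustration:

```python
# A minimal sketch of the significance check behind an A/B test,
# using a standard two-proportion z-test. All counts are made up.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)   # rate if both versions perform the same
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided tail probability

# Hypothetical results: Version A converts 200/5000 visitors, Version B 250/5000
p = two_proportion_p_value(200, 5000, 250, 5000)
print(f"p-value: {p:.3f}")  # ~0.016, below 0.05, so significant at the 95% level
```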

Improper research techniques: when running an A/B test, it’s important to gather enough data to reach your desired level of statistical significance. Sloppy researchers might start a test and pull the plug as soon as they feel there’s a ‘clear winner’, long before they’ve gathered enough data. Checking the results over and over and stopping at the first significant reading gives random chance many opportunities to produce a false positive, as the simulation below shows. There’s really no excuse for a type 1 error like this.
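Here’s a rough simulation of that effect. Every number in it is an assumption chosen for illustration: both versions convert at an identical 4%, so any declared ‘winner’ is by definition a type 1 error. The sketch compares a disciplined tester who checks once at the planned sample size against a sloppy one who peeks every 100 visitors:

```python
# A rough simulation of how peeking inflates type 1 errors. Both versions
# convert at the same 4% rate, so every declared "winner" is a false positive.
import random
from math import sqrt
from statistics import NormalDist

def p_value(conv_a, conv_b, n):
    """Two-sided p-value for two equal-sized samples (normal approximation)."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    z = (conv_b - conv_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def false_positive_rate(peek, n=2000, rate=0.04, runs=1000):
    hits = 0
    for _ in range(runs):
        a = b = 0
        winner = False
        for i in range(1, n + 1):
            a += random.random() < rate   # simulate one visitor per version
            b += random.random() < rate
            # sloppy tester: check every 100 visitors, stop at the first "win"
            if peek and i % 100 == 0 and a + b > 0 and p_value(a, b, i) < 0.05:
                winner = True
                break
        if not peek:  # disciplined tester: one check at the planned sample size
            winner = a + b > 0 and p_value(a, b, n) < 0.05
        hits += winner
    return hits / runs

random.seed(1)
print(f"one check at the end: {false_positive_rate(peek=False):.1%}")  # ~5%, as designed
print(f"peeking every 100:    {false_positive_rate(peek=True):.1%}")   # noticeably higher
```

Even though every individual check uses a 5% threshold, repeating the check dozens of times pushes the overall false-positive rate well above 5%.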

Why are type 1 errors important?

Type 1 errors can have a huge impact on conversions. For example, if you A/B test two page versions and incorrectly conclude that version B is the winner, you could see a massive drop in conversions when you take that change live for all your visitors to see. As mentioned above, this could be the result of poor experimentation techniques, but it might also be the result of random chance. Type 1 errors can (and do) result from flawless experimentation.

When you make a change to a webpage based on A/B testing, it’s important to understand that you may be working with incorrect conclusions produced by type 1 errors. 

Understanding type 1 errors allows you to:

  • Choose the level of risk you’re willing to accept (e.g., increase your sample size to achieve a higher level of statistical significance)

  • Do proper experimentation to reduce your risk of human-caused type 1 errors 

  • Recognize when a type 1 error may have caused a drop in conversions so you can fix the problem 

It’s impossible to achieve 100% statistical significance, and it’s usually impractical to aim for 99%, since that requires a disproportionately large sample size compared to 95%-97%. The goal of CRO isn’t to get it right every time; it’s to make the right choices most of the time. And when you understand type 1 errors, you increase your odds of getting it right.

How do you minimize type 1 errors?

The only way to minimize type 1 errors, assuming you’re A/B testing properly, is to raise your level of statistical significance. Of course, if you want a higher level of statistical significance, you’ll need a larger sample size.
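To get a feel for the trade-off, here’s a back-of-the-envelope sketch using the standard normal-approximation formula for comparing two proportions. The baseline conversion rate, the lift you want to detect, and the 80% power level are all assumed values for illustration:

```python
# A back-of-the-envelope estimate of visitors needed per variant,
# using the standard sample-size formula for a two-proportion z-test.
from statistics import NormalDist

def visitors_per_variant(p_a, p_b, significance, power=0.80):
    """Approximate sample size per variant to detect the lift from p_a to p_b."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    return (z_alpha + z_beta) ** 2 * variance / (p_a - p_b) ** 2

# Assumed scenario: detecting a lift from a 4% to a 5% conversion rate
for sig in (0.90, 0.95, 0.99):
    n = visitors_per_variant(0.04, 0.05, sig)
    print(f"{sig:.0%} significance: ~{n:,.0f} visitors per variant")
# 90% -> ~5,300; 95% -> ~6,700; 99% -> ~10,000 (approximate)
```

Notice how the jump from 95% to 99% costs far more extra traffic than the jump from 90% to 95%, which is why 95% is such a common default.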

It isn’t a challenge to study large sample sizes if you’ve got massive amounts of traffic, but if your website doesn’t generate that level of traffic, you’ll need to be more selective about what you decide to study—especially if you’re going for higher statistical significance.

Here’s how to narrow down the focus of your experiments.

6 ways to find the most important elements to test

In order to test what matters most, you need to determine what really matters to your target audience. Here are six ways to figure out what’s worth testing.

  1. Read reviews and speak with customer-facing teams: figure out what people think of your brand and products. Talk to Sales, Customer Support, and Product Design to get a sense of what people really want from you and your products.

  2. Figure out why visitors leave without buying: traditional analytics tools (e.g., Google Analytics) can show where people leave the site. Combining this data with Hotjar’s Conversion Funnels Tool will give you a strong sense of which pages are worth focusing on.

  3. Discover the page elements that people engage with: heatmaps show where the majority of users click, scroll, and hover their mouse pointers (or tap their fingers on mobile devices and tablets). Heatmaps will help you find trends in how visitors interact with key pages on your website, which in turn will help you decide which elements to keep (since they work) and which ones are being ignored and need further examination.

  4. Gather feedback from customers: on-page surveys, polls, and feedback widgets give your customers a way to quickly send feedback about their experience your way. This will alert you to issues you never knew existed and will help you prioritize what needs fixing for the experience to improve.

  5. Look at session recordings: see how individual (anonymized) users behave on your site. Notice where they struggle and how they go back and forth when they can’t find what they need. Pro tip: pay particular attention to what they do just before they leave your site.

  6. Explore usability testing: usability tests can help you understand how people see and experience your website. Capture spoken feedback about the issues they encounter, and discover what could improve their experience.

Pro tip: do you want to improve everyone’s experience? That may be tempting, but you’ll get a whole lot further by focusing on your ideal customers. To learn more about identifying your ideal customers, check out our blog post about creating simple user personas.

Find the perfect elements to A/B test

Use Hotjar to pinpoint the right elements to test—those that matter most to your target market.