4 steps to being less wrong in UX Design, from Leandro Lima at Google
One of our values is that we challenge ourselves to grow. By hosting external speakers and sharing their ideas, we’re hoping to inspire you, our Hotjar community, to grow too.
Last updated: 17 Dec 2021
We were recently joined by Leandro Lima, a UX Designer with more than ten years of experience working with Booking.com, Klarna, Google, and now King (Activision Blizzard). Leandro loves experimentation (as you’re about to find out), scientific methods, and Formula One.
In this article, we race through his answer to how a UXer can define, influence, and succeed in creating a culture of experimentation. We get into some technical detail about UX design, so you’ll probably find this most useful if you’re part of a UX or product team.
“Being progressively less wrogn”
One of the themes from Leandro’s session is how we should have less focus on being right, and more on being progressively less wrong. Or, “wrogn,” as he put it.
Leandro said this is especially important in UX design, where you constantly adapt to new user behaviors and business requirements. He said you can never be 100% certain the design decisions you make are the best ones possible, but you can be less wrong about them.
He then took us through his four steps to creating a culture of experimentation.
1. Co-own hypotheses creation
Leandro reminded us that an experiment is “an organized way to test one hypothesis.” And to create good experiments, each one has to have a strong hypothesis. He said one of the first things you can do is “co-own the hypotheses creation with everyone involved.”
Leandro confessed that definitions are a guilty pleasure, and his definition of a hypothesis is “a logical assumption about how things will behave.” To get us warmed up, he gave us an example of a bad hypothesis:
“Create a good design to make users happy.”
Why is it bad? Because it leaves you with too many questions. What is “good” design? What is happiness? He said this hypothesis is too much like a desire or a wish.
Instead, Leandro said a good hypothesis should follow some rules:
It should be testable
It should be measurable, as we cannot manage what we cannot measure. Using the example above, this would be a measure of happiness.
It should be falsifiable, meaning it should be able to be disproved. (Remember, we’re just trying to be progressively less wrong, not necessarily right.)
And it should be made up of two things:
An independent variable: this is what you’re manipulating. Leandro used the example of “a pink button that will generate more clicks.” Here, the pink button is the variable.
A dependent variable: the thing that will happen based on your manipulations. Roughly speaking, this is a metric.
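To make the rules above concrete, here’s a minimal sketch (our illustration, not Leandro’s) of a hypothesis written as a small Python structure. The field names and the `Hypothesis` class are hypothetical, but note how the pink-button example forces you to name an independent variable, a dependent variable (the metric), and an expected direction, which is what makes it falsifiable:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One testable hypothesis, following the rules above."""
    independent_variable: str  # what you manipulate
    dependent_variable: str    # the metric you expect to move
    expected_direction: str    # "increase" or "decrease" makes it falsifiable

    def statement(self) -> str:
        return (f"Changing '{self.independent_variable}' will "
                f"{self.expected_direction} '{self.dependent_variable}'.")

# The "pink button" example from the text:
h = Hypothesis(
    independent_variable="button color (grey -> pink)",
    dependent_variable="click-through rate",
    expected_direction="increase",
)
print(h.statement())
```

Compare this with “create a good design to make users happy”: there’s no variable to manipulate, no metric, and no way to disprove it.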
2. Co-own metric creation
A metric is “an imperfect numerical approximation of some aspect of the world at a certain time and place.” According to Leandro, “a metric is always imperfect” because it’s stuck in a certain moment in time. To reduce uncertainty, he suggested you use some basic statistics.
Leandro covered how UXers should co-own the definition of this metric. He said UXers have biases, and that you should always add your perspective to the mix so a metric isn’t only defined by product owners or developers.
When we talk about metrics, Leandro also said it’s standard for many organizations to distinguish ‘UX metrics’ from ‘business metrics’, but that this can be harmful. Every time you call a UX metric separate from a business metric, you imply that UX metrics aren’t valuable to the business.
Instead, Leandro suggests you talk about:
Behavior metrics: for example, seeing a pink button and being motivated to click
Outcome metrics: the result of a new behavior, for example, clicking the pink button
Outcomes for both people and business
In UX design, Leandro said you should concentrate on motivating behaviors that create good outcomes for 1) people, and 2) business. It’s similar to how we explore using behavioral insights to delight customers.
Leandro said he always looked to create a good outcome for people because you should be ethical in how you design, especially when influencing behaviors.
On the other hand, while happy users usually lead to better business, you should always make sure there’s a business case. He said this helps get buy-in from everyone.
When a metric becomes a target, it ceases to be a good measure.
Leandro highlighted Goodhart’s law, which reminds us that while a business’s target may be to ‘increase revenue’ or ‘improve corporate social responsibility’, the metric shouldn’t become the target itself, even if you can create a way of measuring those things.
Why? Because the metric is just an indication of how you’re working towards the target. The target remains, and as the metrics improve, you’re just less wrong than you were before.
So, a good hypothesis looks like...
Leandro gave us an example of a hypothesis that works and why:
Writing better error messages (independent variable) on our forms inside our purchase flow could motivate people to fix the mistakes quickly (behavior). This will result in fewer support tickets (outcome).
'Better error messages' is the independent variable. It’s what you’re doing, what you’re manipulating with your design.
'Fix the mistakes quickly' is the behavior you want to motivate. In this example, you could measure this in time, such as how quickly mistakes are fixed.
'Fewer support tickets' is an outcome that’s good for 1) people – because they fixed the problem themselves, and 2) business – because fewer calls and less support work mean lower support costs.
3. Co-own the experiment analysis
So, we now know that a good experiment needs a strong hypothesis, and a strong hypothesis needs the right metrics. Next up is analysis.
This is a recurrent theme: Leandro said not to let a product manager own the analysis alone. Instead, do it together. As a designer, you have a set of values, just as developers and product managers have theirs, and this mix of values creates a better analysis.
Leandro said, when it comes to doing some analysis, it’s important to remember that numbers tell us nothing. “They’re just numbers, not storytellers,” he said, and this is when bias plays a role. Every time someone explains a number, some meaning is lost, and bias is attached.
Remove bias (as much as possible)
To help analyze data, Leandro said you should ask:
Why did you choose this data?
How did you collect this data?
Who did you collect this data from?
This is because the people who chose the data (and their biases) influence the results. Where this data is collected can influence the result too.
He also covered other biases to be aware of, such as winner’s bias and confirmation bias, which can influence us when we analyze results and tempt us to read them as confirmation of what we set out to achieve. Agreeing on metrics before analyzing results helps remove bias too.
Leandro said we should also consider that statistical significance does not necessarily mean a big effect. His example: a child in a playground is twice as likely to have an accident (0.4%) as a child at home (0.2%), yet both probabilities are still very small. When your product team is analyzing what matters, he said to think about this practical significance too.
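The playground numbers from the text make the point in two lines of arithmetic: the relative risk doubles, while the absolute difference stays tiny. A quick sketch:

```python
# Leandro's playground example: relative vs absolute difference
playground_risk = 0.004  # 0.4% chance of an accident in a playground
home_risk = 0.002        # 0.2% chance of an accident at home

relative_risk = playground_risk / home_risk        # 2.0: "double the likelihood"
absolute_difference = playground_risk - home_risk  # 0.002: only 0.2 percentage points

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute difference: {absolute_difference:.1%}")
```

A headline like “twice as likely” can be statistically real yet practically negligible, which is exactly the trap practical significance guards against.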
Use statistics to increase certainty around metrics
Earlier, Leandro mentioned how statistics could give more certainty to metrics, and these are some of the tests he suggested to analyze your data:
Confidence interval: this will help you calculate the likely range for the true mean of your entire population
Significance testing: this tells you how likely it is that the difference between the base and a new variant arose by chance
Non-inferiority test: this helps you demonstrate that a new variant performs no worse than the existing one, within a margin you choose
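As a rough illustration of the first two tests (not Leandro’s code, and with made-up conversion numbers), here’s a stdlib-only sketch that compares a base and a variant: a two-sided z-test for significance, plus a 95% confidence interval for the true difference, using the normal approximation for proportions:

```python
from math import sqrt, erf

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def compare_proportions(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test and 95% CI for the difference of two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_b - p_a

    # Significance: how likely is a difference this large by chance alone?
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se_pooled = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    p_value = 2 * (1 - norm_cdf(abs(diff / se_pooled)))

    # 95% confidence interval for the true difference in rates
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    return diff, p_value, ci

# Hypothetical numbers: base converts 200/1000, pink-button variant 240/1000
diff, p_value, ci = compare_proportions(200, 1000, 240, 1000)
print(f"diff={diff:.3f}, p={p_value:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
```

A non-inferiority check follows the same shape: instead of asking whether the interval excludes zero, you ask whether its lower bound stays above a margin you’ve agreed is “no meaningful loss.”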
4. Co-own the reporting
When it comes to reporting, Leandro said it’s effective to “always state what happened, before following it with a recommendation based on the new information you’ve gathered.”
He gave this as an example of what good could look like:
There is a correlation between increasing the visibility of error messages and fewer people contacting customer service.
Recommendation: create a visual pattern to guarantee good visibility of errors and invest in clear language to communicate errors.
Like before, Leandro stated the importance of co-owning analysis with other team members. People’s recommendations could differ depending on whether they’re working in UX, product, or software development. Bring diversity to your analysis.
Then, he said to share your results far and wide.
Leandro left us with a few final tips for you to remember:
Design like you’re right, test like you’re wrong
He said it’s always important to challenge your ideas and remove your biases.
Experiments are a tool to learn, not prove who’s right and who’s wrong
Leandro said we should remember we’ll be wrong most of the time. He said, “The entire experimentation process is all about informing decisions better, increasing our knowledge, and helping people learn.”
Experiments are not a decision-making tool
Remember that a person makes the decisions, not the research, metrics, or suggestions.
Work to be progressively less wrong
Who’d have guessed it? Leandro said to practice, practice, practice, and constantly remind yourself of this, which came up time and time again in his session.
A big thank you to Leandro Lima for joining us. We hope you picked up a thing or two that you can put into place in your organization. Look out for more articles from us, where we’ll be sharing more ideas and inspiration from our external speakers.