The process of turning a mass of qualitative data (transcripts, recordings, and observations) into an actionable report on usability issues can seem overwhelming at first. But it's simply a matter of organizing your findings and looking for patterns and recurring issues in the data.
Before you start analyzing the results, review your original goals for testing. Remind yourself of the problem areas of your website, or pain points, that you wanted to evaluate.
Once you begin reviewing the testing data, you will be presented with hundreds, or even thousands, of user insights. Identifying your main areas of interest will help you stay focused on the most relevant feedback.
Use those areas of focus to create overarching categories of interest. Most likely, each category will correspond to one of the tasks you asked users to complete during testing: logging in, searching for an item, going through the payment process, and so on.
Review your testing sessions one by one. Watch the recordings, read the transcripts, and carefully go over your notes.
For each issue a user discovered, or unexpected action they took, make a separate note. Record the task the user was attempting and the exact problem they encountered, then add categories and tags so you can sort and filter later (for example, location tags such as checkout or landing page, or experience-related ones such as broken element or hesitation). If you previously created user personas or testing groups, record those here as well.
It's best to do this digitally, with a tool like Excel or Airtable, as you want to be able to move the data around, apply tags, and sort it by category.
Pro tip: keep each note concise, and make sure it describes the exact issue you observed.
When you're done, your data might look similar to this:
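As a rough illustration, here is what that log might look like if you kept it in a pandas DataFrame instead of a spreadsheet. The column names and entries are hypothetical, but this structure makes the sorting and filtering described above a one-liner:

```python
import pandas as pd

# Hypothetical issue log; the column names are illustrative, not prescriptive.
issues = pd.DataFrame([
    {"task": "Checkout", "issue": "Payment button unresponsive on mobile",
     "location": "checkout", "experience": "broken element", "persona": "Returning buyer"},
    {"task": "Search", "issue": "User hesitated, unsure which filter to use",
     "location": "search results", "experience": "hesitation", "persona": "First-time visitor"},
    {"task": "Login", "issue": "Password reset email never arrived",
     "location": "login page", "experience": "broken element", "persona": "Returning buyer"},
])

# Filter by tag, e.g. pull out every broken element across the site:
broken = issues[issues["experience"] == "broken element"]
print(broken[["task", "issue", "location"]])
```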
Assess your data with both qualitative and quantitative measures:
To employ quantitative analysis, extract hard numbers from the data. Figures like rankings and statistics will help you determine which issues are most common on your website and how severe they are.
Quantitative data metrics for user testing include:

- Success rate: the percentage of users who complete a task
- Time on task: how long users take to finish a task
- Error rate: how often users make mistakes along the way
- Issue frequency: how many users encounter a given problem
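As a minimal sketch of how these figures fall out of the data, assuming a hypothetical per-session results table (one row per user per task; all names and numbers here are made up):

```python
import pandas as pd

# Hypothetical per-session results; one row per user per task attempt.
sessions = pd.DataFrame([
    {"user": "u1", "task": "Checkout", "completed": False, "seconds": 210, "errors": 3},
    {"user": "u2", "task": "Checkout", "completed": True,  "seconds": 95,  "errors": 1},
    {"user": "u3", "task": "Checkout", "completed": True,  "seconds": 120, "errors": 0},
    {"user": "u1", "task": "Search",   "completed": True,  "seconds": 40,  "errors": 0},
    {"user": "u2", "task": "Search",   "completed": True,  "seconds": 55,  "errors": 1},
])

# Success rate, average time on task, and average error count per task.
metrics = sessions.groupby("task").agg(
    success_rate=("completed", "mean"),
    avg_seconds=("seconds", "mean"),
    avg_errors=("errors", "mean"),
)
print(metrics)
```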
Qualitative analysis is just as important as quantitative analysis, if not more so, because it illustrates why certain problems are happening and how they can be fixed. Such anecdotes and insights will help you come up with solutions to increase usability.
Now that you have a list of problems, rank them by the impact that fixing them would have. Consider how widespread each problem is across the site and how severe it is, and think through the implications of a localized problem when extended sitewide (e.g., if one page is full of typos, you should probably get the rest of the site proofread as well).
Categorize the problems into:

- Critical issues that prevent users from completing a task
- Serious issues that cause significant frustration or delays
- Minor issues that are cosmetic or easily worked around
For example, being unable to complete a payment is a more urgent issue than disliking the site's color scheme. The first is a critical issue that should be corrected immediately; the second is a minor issue that can be put on the back burner for now.
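If you want to make that ranking systematic, one simple approach is to score each problem as its severity times the number of users affected. This is just a sketch; the weights and issue list below are assumptions, not a standard formula:

```python
# Hypothetical severity weights; adjust to match your own triage scale.
severity_weight = {"critical": 3, "serious": 2, "minor": 1}

issues = [
    {"issue": "Payments fail on mobile", "severity": "critical", "users_affected": 7},
    {"issue": "Confusing filter labels",  "severity": "serious",  "users_affected": 4},
    {"issue": "Disliked color scheme",    "severity": "minor",    "users_affected": 5},
]

# Rank by severity weight x how many users hit the problem.
for item in sorted(issues,
                   key=lambda i: severity_weight[i["severity"]] * i["users_affected"],
                   reverse=True):
    score = severity_weight[item["severity"]] * item["users_affected"]
    print(f"{score:>3}  {item['severity']:<8} {item['issue']}")
```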
To benefit from usability testing, you must ultimately use the results to improve your site. Once you've evaluated the data and prioritized the most common issues, leverage those insights to encourage positive changes to your site's usability.
In some cases, you may have the power to just make the changes yourself. In other situations, you may need to make your case to higher-ups at your company—and when that happens, you’ll likely need to draft a report that explains the problems you discovered and your proposed solutions.
It's not enough to simply present the raw data to decision-makers and hope it inspires change. A good report should:

- Summarize the most important problems you discovered
- Back each problem with evidence from testing, such as metrics, quotes, or recording clips
- Propose concrete solutions, prioritized by severity
Visit our page on reporting templates (coming soon) for more guidance on how to structure your findings.
After the recommended changes have been decided on and implemented, continue to test their effectiveness, either through another round of usability testing or through A/B testing. Compare feedback and success-rate statistics to confirm that the changes fixed the problem. Continue refining and retesting until all the issues have been resolved, at which point… you'll be ready to start usability testing all over again.
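If you want a quick statistical check on that before-and-after comparison, a two-proportion z-test (here via statsmodels, with made-up completion counts) can tell you whether an improvement in task success rate is likely to be real rather than noise:

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up numbers: task completions before and after the fix.
successes = [12, 19]   # users who completed checkout
totals    = [20, 22]   # users who attempted it

stat, p_value = proportions_ztest(successes, totals)
print(f"before: {successes[0]/totals[0]:.0%}, "
      f"after: {successes[1]/totals[1]:.0%}, p = {p_value:.3f}")
```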