
Content performance comparison: results from a human vs. AI content marketing experiment

Six months have passed since launching our woman vs. machine content marketing experiment in June 2023, where we sent two competing content pieces out into the field to gather data. Keep reading to find out which piece resulted in more traffic, new visitors, signups, and positive sentiment.

Last updated: 12 Dec 2023

Reading time: 5 min


🏆 Woman vs. machine: 6 months later

  • 🧪 Experiment recap: we gave an experienced freelance writer and ChatGPT identical content briefs to produce a blog post, then sent both posts out into the world to work their magic organically for six months

  • 📈 Results: our human writer, Shadz Loresco, wins across all three categories

  • 🎬 Conclusion: ChatGPT is no match for skilled professionals, but its wide range of use cases makes it an invaluable tool for marketers

Results breakdown

Using Google Search Console (GSC), our custom Organic Search dashboard in Tableau, and Hotjar Heatmaps and Feedback, we analyzed quantitative and qualitative metrics for our human and AI content pieces. Below is a breakdown of both articles’ performance across three categories.

1. SEO metrics

The human article outperformed its competitor across multiple SEO metrics:

  • Total clicks

  • Total impressions

  • Average click-through rate (CTR)

  • Average position
The human piece peaked at 71 clicks on November 11 and, from October to the experiment's conclusion in December, maintained a healthy average of around 34 clicks per day. Our AI piece, by comparison, took an immediate post-launch nosedive, then plateaued, peaking at just five clicks on August 2.

What’s exciting is our human piece saw a steady increase in clicks over time despite several months of AI-induced upheaval in the SEO industry that saw major events like the introduction of Google’s SGE and Gemini. Even with the odds stacked against it from the start, it performed exactly as we hoped it would.

🔗 Check out our recent webinar with Lily Ray, Senior Director of SEO and Head of Organic Research, for a reminder of everything that happened in 2023 and what to expect in 2024. 

2. Internal performance metrics

We used our custom Organic Search dashboard in Tableau to determine if either piece contributed to our internal metrics. As mentioned in our original experiment write-up, we didn’t foresee movement here because the topic we selected—the impact of AI on various industries—is irrelevant to our ideal customer profile (ICP).

And yet…

Imagine our surprise at seeing our human piece featured in July’s GSC performance report, an imposter among two other topics very much targeted to our ICP:

Our dashboard revealed two more pleasant surprises:



  • New visitors

  • Signups
Of the 4,550 people who clicked on our human piece, 93% of them were new visitors to Hotjar.com (welcome! 👋). But even more importantly, we got three signups—three brand-new Hotjar users—from a piece of (very) top-of-funnel content that wasn’t even created with our ideal audience in mind. 

3. Reader sentiment and behavior

Finally, and perhaps most importantly, we compared audience sentiment and behavior across both pieces using Hotjar Heatmaps and Feedback.

Scroll maps—a type of heatmap in Hotjar—use a color gradient to represent the most and least viewed parts of a page. Red indicates the areas of a page users see the most; blue represents little to no customer interaction.

Scroll maps comparing our writer’s blog post (left) to ChatGPT’s piece (right)

Scroll maps of both pieces show that the AI piece (right) loses readers’ attention significantly earlier than its human counterpart: the gradient changes from green to blue just a few paragraphs in, while the human piece retains interest for longer.

Understand how real people interact with your content to optimize with confidence and make an impact.

Readers’ qualitative feedback further reinforced our quantitative scroll map results:

A few pieces of feedback via Hotjar Feedback and LinkedIn

However, not every reader agreed that the human piece was a clear winner:

A feedback response from a reader who preferred the ChatGPT version

Others mentioned that both articles have their strengths and weaknesses, depending on factors related to personal content preferences or subject familiarity.

Of the total feedback we received, this was the sentiment breakdown:

And the winner is…

Well, it’s not as simple as it looks.

At face value, our human piece outperformed—nay, totally annihilated—the AI version in every category.



  • Total clicks

  • Total impressions

  • Average CTR

  • Average position

  • New visitors

  • Signups

  • Scroll depth






It would be easy to give our writer the trophy and thank her for single-handedly saving millions of content marketing careers. But even though these results definitely mean something, they were always going to be imperfect.

It’s worth acknowledging, as many readers already have, that hundreds of variables affect these outcomes: maybe if we’d used the paid version of ChatGPT, maybe if we’d spent more time refining the AI article, maybe if the topic were different, maybe if we’d masked the experiment, maybe if our prompts were better, maybe if we’d used a different AI tool, maybe, maybe, maybe.

Then, there’s our own bias: we’re content marketers and we’re nervous about the future; we want to believe the work we do is unique and irreplaceable. Did we unintentionally sabotage ChatGPT from the very beginning? Possibly. 

There’s also one more critical factor worth considering:

We probably don’t feel the same way we did six months ago

Over the past few months, ChatGPT has become our unofficial right-hand robot, a permanent tab in our browser, and we understand its applications for our jobs a lot more than we did in June. 

There’s still absolutely no chance we’d use it for product-led content writing and editing, but there are many ways AI tools make other, more tedious aspects of our jobs easier. Between us, our internal team of content marketing managers, editors, SEO specialists, and team leads has dozens of use cases for tools like ChatGPT, GPT-4, Hotjar AI for Surveys, Jasper AI, and YouTube Summarizer. Here are just a few:

A not-at-all-exhaustive list of how we currently use AI 🤖

  1. Generating captions, transcripts, and recaps for videos

  2. Brainstorming ideas for video angles based on a source text

  3. Summarizing original research reports and converting them into video scripts

  4. Creating articles from webinar transcripts

  5. Summarizing long-form content into reader-friendly TL;DR sections

  6. Brainstorming questions for internal subject matter experts (SMEs)

  7. Choosing contextually correct synonyms for awkward words

  8. Checking grammar in multiple languages

  9. Shortening existing text on YouTube thumbnails or social media visuals

  10. Organizing and reformatting social media posts from a block of text or collection of ideas

  11. Creating micro-blog posts for social media channels

  12. Finding emojis to illustrate specific words and sentences

  13. Rewriting localized metadata that exceeds the character limit

  14. Detecting the language of search queries for reporting purposes

  15. Paraphrasing content when repurposing existing material

  16. Tailoring reader surveys to our specific goals

Heck, even the writer of our human piece uses GPT-4 to develop angles for her main topic and subtopics, write FAQ sections, and shorten lengthy sentences. (Plus: rumor has it our Editorial team was actually spied suggesting more ways for our writers to use AI 👀—something that seemed pearl-clutchingly unthinkable back in June 2023.)

Will AI replace human writers?

No—but the takeaway from our experiment is not that AI sucks and people are cool. As everyone reading this has probably already learned for themselves, it’s fantastic for some use cases and terrible for others, just like any reliable tool in your stack.

Ultimately, we hope this experiment has accurately outlined:

  • The differences between working with a human writer vs. ChatGPT

  • Real people’s perceptions of 100% human content and AI-assisted content

  • The pros and cons of human and AI-assisted content production

One final thing we’ve learned is that the content marketing landscape is not the same as it was a couple of years ago. AI has upended our workflows, probably forever—but is that a bad thing? Maybe not.

What have you discovered about AI over the past six months? Let us know using the Hotjar Feedback widget—it’s that red tag to the right of the page. 👉

Complement AI and big data with user-centric analytics

Hotjar helps you gain powerful insights into the real people using your product or service. Spot behavioral trends and patterns to deliver changes that delight them.
