Split testing (also known as A/B testing) is a straightforward, scientific way to determine which version of a website will produce more conversions.
A split test distributes website traffic between two different versions of a webpage—the original (A) and a variation (B)—which differ from each other in design, content structure, page elements, and so on. Observing how each traffic group responds to the version it sees helps marketing and optimization teams determine which version drives more conversions and creates more opportunities for business growth.
The term ‘split testing’ is often used interchangeably with A/B testing. The difference is simply one of emphasis: ‘split testing’ highlights the fact that traffic is split between the versions, while ‘A/B testing’ highlights the comparison between version A and version B.
Like A/B testing, split testing can evaluate small changes to a single website element (a different image, header, or call to action, for example) or compare two completely different designs and sets of web content.
All available users will be split into groups (without their knowledge): half of them will see the original version (the control) while the other half will see a new version (the variation). Once the test has gathered a large enough sample for the results to be statistically significant, the design and optimization team will investigate differences in behavior and declare a winner (or an inconclusive result if no measurable difference is observed).
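For readers who want a concrete picture of the mechanics, here is a minimal Python sketch of the two ideas above: deterministic 50/50 bucketing (so a returning user always sees the same version) and a simple significance check using a two-proportion z-test. The function names and numbers are illustrative assumptions, not any particular testing tool’s API.

```python
# Minimal sketch of a 50/50 split test, using only the standard library.
# All names and numbers here are hypothetical, for illustration only.
import hashlib
import math

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user so they always see the same version."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: how far apart are the two conversion rates,
    measured in standard errors of the pooled rate?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Example: control converts 120 of 2,400 visitors (5.0%); the variation
# converts 156 of 2,400 (6.5%).
z = z_score(120, 2400, 156, 2400)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 95% confidence level
```

In practice a dedicated testing platform handles the bucketing, tracking, and statistics for you; the sketch simply shows why a test needs a large enough sample before anyone declares a winner.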
Like A/B testing, split testing ensures that decisions aren’t made by gut feel or guesswork.
Without split testing, companies often make changes based on so-called ‘best practices’ or on the Highest Paid Person’s Opinion (HiPPO). But best practices can kill conversions because, by definition, they’re based on what worked in the past for others: nothing guarantees that something that worked elsewhere will work for your business. And of course, the highest-paid person’s opinion can be just as flawed as anyone else’s.
Even the most experienced marketers, designers, and copywriters can be wrong when trying to figure out what users will respond to. Split testing lets the users decide, and can prevent an optimization team from going down a dead end.
It’s easy to mistake split testing for conversion rate optimization (CRO) and to imagine that CRO is just an ongoing series of split tests and A/B tests that help you stumble across new ideas, but that’s not how it works.
Before you split test anything, you must come up with evidence-based hypotheses about how to improve your user experience and, ultimately, boost conversions. Split testing is about exploring designs and solutions based on what you’ve learned from studying your users and markets, and collecting answers to questions like: