Split testing is an effective tool for evaluating different marketing strategies — and it has become a relatively common practice. Most marketers have heard of it, even if they don’t know how to do it.
Any time you want to compare the effects of 2 different versions of something — whether it’s emails, ads or landing pages — you would use split testing to do it. Essentially, it’s a way of putting the good old-fashioned scientific method to work: You’re tweaking variables, anticipating outcomes and testing results to quantitatively measure which changes work best.
This method ensures that you’re making marketing decisions based on data instead of conjecture, a practice 40% of organizations plan to put more of their budget toward, according to Invesp, a provider of conversion-rate-optimization software and services.
Let’s look at why split testing is so helpful and how to do it effectively.
People often use the terms split testing and A/B testing interchangeably, but A/B testing is just a type of split testing.
In an A/B test, you are focusing on 2 versions of a web page, email, social post or whatever you happen to be testing. But you can technically test as many versions as you want, as long as it stays clear what each version is measuring.
The most important factor in split testing is that you isolate one or more variables that you can directly compare across 2 or more different versions of the same thing. The goal isn’t simply to see which of 2 entirely different landing pages performs better, but what specific features make B perform better than A (or C, D and so on).
Knowing what makes one version perform better than the other is what allows you to repeat those same results in other forms. You don’t have to shrug and wonder why B did better than A. You know why. And that’s the ultimate goal behind split testing.
Split testing is a crucial tool in your conversion-rate-optimization toolbox. If you’re tweaking a website landing page, the result you’re looking for is to increase your conversion rate. You could make those decisions based on your gut, but 2 out of 3 marketers say data-based decisions beat ones made from instinct, according to Google and Econsultancy. Those data-driven decisions deliver 5-8 times the return on investment compared with those not driven by data, according to Invesp.
Say you’re launching a new takeout delivery service and you want to kick it off with an email campaign that entails sending several emails a week over 3 months. There are many factors you could test over the first few weeks to progressively improve your campaign results.
For instance, you might want to compare whether sending an email midafternoon versus right before dinner works better. If you send 2 entirely different emails, one at 3 p.m. and the other at 5 p.m., you’ll never know what caused one to perform better. Was it the timing or the content? To get a definitive answer, you would want more precise A/B split testing.
Instead, you could send the exact same email to 2 subgroups, one at each of those 2 times. You might even try sending emails over a few days to get more data. After a few rounds, you’ll know which time works better. Now, you can stick with the winning time for the rest of your campaign, confident you’re getting better results than you would have otherwise.
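The key to that timing test is splitting your list randomly so the 2 groups are comparable. Here’s a minimal sketch of how that random split works; the subscriber addresses and the fixed seed are hypothetical, and in practice your email platform would handle this for you:

```python
import random

def split_into_groups(subscribers, n_groups=2, seed=42):
    """Shuffle the list, then deal subscribers into equal groups.

    Random assignment keeps the groups comparable, so a difference in
    open rates can be attributed to send time rather than to who
    happened to land in each group.
    """
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = subscribers[:]
    rng.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]

# hypothetical subscriber list
emails = [f"user{i}@example.com" for i in range(10)]
group_3pm, group_5pm = split_into_groups(emails)
```

Each subscriber lands in exactly one group, and both groups are the same size, which is what makes the before-dinner and midafternoon results a fair comparison.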
Email timing is just one of many factors you can test. If your campaign is multichannel, the best time for an email may differ from the best time for a Facebook post. You could test that one variable across several channels.
To illustrate how many factors you can test, let’s consider website split testing. Many aspects of a page can influence its conversion rate, from the headline and images to the call to action (CTA).
Let’s look at CTAs, for example. While there are some general rules for writing effective CTAs, many of the specifics will depend on your brand and the goal for your web page. Does your audience respond better to a straightforward CTA (“Subscribe Now”) or something more lighthearted and sillier (“I Can’t Take the FOMO!”)? If you aren’t sure, create a split test to find out. Build 2 versions of the landing page with only one difference: the CTA. Watch which one converts more visitors, then you’ll have your answer.
One of the most important decisions you’ll face in setting up your split test is how many different variables to test at once. If you’re just learning how to do split testing, then it’s probably best to keep it simple and limit the number of versions you have to test and compare against each other.
In a straightforward A/B test, you would choose one variable to change and run an A and a B version. So, if you send an email and you want to see if personalizing your subject line increases the open rate, then you would send one set with personalized subjects and another without and see which performs better. You would intentionally avoid changing anything else so you could be certain that this single change made the difference.
In multivariate testing, however, you can run any number of versions with a variety of different combined changes to see which combination seems to be most effective. This is especially helpful for optimizing a web page because there are so many different design elements that play off one another to create the total user experience. Although it might at times be useful to change one aspect of the page — your navigation bar, for instance — you often will learn more from testing a few versions with several variables interchanged.
Let’s say you isolate 3 different page elements that you want to test: button color, font and header image. If you want to test 2 versions of each element in every combination, you would have 2 × 2 × 2 = 8 different test pages. You could design one for each combination and then segment your audience into 8 subsets to see which version generates the results you’re looking for.
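As a rough illustration, those combinations can be enumerated programmatically; the element values below (colors, fonts, image names) are invented for the example:

```python
from itertools import product

# hypothetical values for the 3 page elements under test
button_colors = ["green", "orange"]
fonts = ["serif", "sans-serif"]
header_images = ["city-commute.jpg", "bike-closeup.jpg"]

# every combination of the 3 elements: 2 x 2 x 2 = 8 variants
variants = list(product(button_colors, fonts, header_images))
for i, (color, font, image) in enumerate(variants, start=1):
    print(f"Page {i}: button={color}, font={font}, header={image}")
```

This also shows why multivariate tests grow so fast: adding a fourth element with 2 versions doubles the count to 16 pages, each of which needs enough traffic to produce meaningful results.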
Doing this may help you find an overall page design much more quickly than isolating one variable at a time. However, you can see how quickly this approach could get messy. Because of this complexity, multivariate testing is best reserved for experienced split testers.
Overall, split testing isn’t a complicated process. But doing it well does require a bit more than just knowing what split testing is. There are ways to get the most out of it and, again, it comes back to the basic rules of the scientific method.
For the most useful results, you can’t just dive into testing. You need current data so you can decide what to test.
On a website, there are countless analytics you can examine to see how users are engaging with each page. These can be anything from simple page visit metrics to complex heat maps that reveal where users are scrolling and clicking. If you can collect a robust picture of user engagement, you can select specific areas you’d like to improve.
Once you select an area to improve, you need to hypothesize about the results you’ll get by making changes. You can only formulate this theory if you have thoroughly examined the data you collected to create a clear picture of current user behavior.
For a simple illustration, let’s focus on one variable. Let’s say you run an online store for bicycles and accessories. Reviewing your website’s back end along with some data from Google Analytics makes it clear that one of your store pages with the highest revenue-generating potential isn’t drawing many visitors. This page — your “Premium Commuter Gear” collection — isn’t prominently linked from your menu, and you think that’s why it’s underperforming.
In this case, your hypothesis would be: If we make the Premium Commuter Gear collection more visible from the main menu, we’ll see more visits to that page and a resulting increase in sales.
With a solid base of starting data and a clearly stated hypothesis, you can now set the parameters of your test. In this case, you might choose 2 different variations to compare with the current site. That means you would end up with 3 versions of the same page.
Again, you could do this for any page element, whether it’s front-end or back-end. Whether you’re split testing different images on your main page or tweaking metadata to run search-engine-optimization (SEO) split tests, as long as you do enough research to form a strong hypothesis and create variations to test it, your test should provide helpful information.
Now you’re ready to test your hypothesis. You create the 3 pages and use a website split-testing tool to funnel traffic equally to each one. Watch the results over the predetermined period and evaluate which page is directing the highest percentage of traffic toward your “Premium Commuter Gear” collection, as well as whether that traffic leads to an uptick in sales.
If one particular variation yields the results you hoped for, you can make that change official and move on. If neither quite proved your hypothesis, then you can go back to the drawing board to test some other changes and see if a different direction yields better results.
For most types of split testing, you’re going to need specialized tools to do the job effectively. Email marketing platforms such as Mailchimp or Klaviyo have built-in A/B testing capabilities and automatically generate reports for you. But running website split tests is much more complicated and not something you can easily do without some additional software to streamline the process.
There are quite a few tools on the market for this kind of testing, with VWO, Optimizely and Google Optimize 360 among the most popular. Your choice will ultimately depend on your company’s needs and budget. Not all of them have multivariate capabilities. Some do a better job of guiding you through the process than others. Some have built-in web editing, while others just run the variations you’ve set up through other web editing software.
Try to get a clear idea of each tool’s capabilities, ease of use and pricing before you commit to one.
Split testing takes the mystery out of your marketing decisions. As digital marketing tools multiply, you can make choices with greater precision than ever before. It isn’t that you’ll never second-guess your marketing plans again, but at least you will have the data to check your gut.