When you're trying to improve the conversion rate for your website or other digital marketing efforts, there are seemingly infinite variables you can tweak and test.
It's tempting just to start grasping at straws and see what you find. This approach will only take you so far — and you'll spend a lot of time and money getting there. To maximize your efforts, you need a conversion rate optimization (CRO) framework.
Also known as an experimental framework or experimentation road map, a CRO framework provides a structure for evaluating your efforts. Instead of choosing one landing page or email template to tweak at random, you can systematically evaluate your options, choose the best places to focus your efforts and evaluate the results accordingly.
Let's look at the ins and outs of what a framework can do for your business, along with the key components you need to build an effective one.
When you're building your website and fine-tuning your digital marketing efforts, conversion rate optimization should be one of your primary concerns. It isn’t merely about how many channels you're using or how great your landing pages look, but how those marketing choices are delivering value.
In other words, how are your marketing dollars converting into either immediate sales returns or an expanded audience?
Once you start exploring the maze of data about which types of CRO efforts deliver the biggest bang for your buck, though, you're liable to get lost quickly. One 2013 study shows long-form landing pages generating a 220% lift in conversions. Another study shows that removing navigation bars can increase your conversions by as much as 100%. Yet another focuses on the power of using video on your landing pages. Facebook ads convert at 2.3% in one industry and 14.3% in another. So, which of these strategies should you implement?
Sure, you could try doing anything that you've heard about — taking tactics that have worked for other companies and throwing them at the wall to see if they work for you. It can take weeks or even months to measure the results of your experiments, though. If you are testing variables one by one, you can see how inefficient this approach could become.
Without a CRO framework, you are more likely to choose the wrong variables to evaluate and optimize. You might spend 3 weeks A/B testing a landing page change that boosts your conversion rate by 1% when you could have tweaked a simple call to action (CTA) for a 3% increase. A road map won't prevent all of these missteps, but it will help you drastically reduce them.
The exact design of your conversion framework will be specific to your company — there is no master schematic for you to design your road map. The details will depend significantly on your audience, industry, existing marketing efforts and many other factors.
That being said, there are a few critical elements you need to build a solid skeleton for your framework. These include:
Your road map can take any number of forms, but it should have all of these elements to effectively drive conversions and produce measurable results.
We've mentioned goals already, but it bears repeating here — and expanding a bit. CRO isn't something you work on independent of your larger brand identity. Specific targets for improvement should be tied to the big picture of your brand.
An example may help here. Let's say a brand sells high-end, ultralightweight camping gear. Its target audience is wealthy, adventurous and sporty with high expectations for product quality. In a nutshell, the brand aesthetic and style are more akin to Patagonia than Dick's Sporting Goods. If this company wants to increase conversions, it could A/B test a variable on its homepage. The marketing team is trying to decide between testing a banner offering a big discount or trying out a short homepage video showcasing its products in action.
Given what you know about the brand, which do you think would be more effective? Although the discount is easier to pull off — not to mention cheaper — it may not have the same impact with that particular audience. With buyers ready and willing to pay for a quality product, it could even backfire. A well-made video showcasing the product could be far more effective and worthwhile. This option has a high ROI potential and fits with the overarching brand image.
That example deals with big-picture goals, but specific, detailed objectives are also critical for your road map. It isn’t just, "We want more people to click on the links in our emails," but, "We want to increase our CTR from 1% to 2% in 3 months." These details are foundational for an effective experimental framework. Small, measurable goals that are connected to your broader objectives will help you chart a course toward CRO success.
Now that you have a general idea of what you need in your conversion optimization framework, it's time to get specific. A road map is only helpful if it can get you to your destination, after all.
Here's an 8-step blueprint for building your experimental framework:
To improve your conversion rates, you need more information than simple percentages showing total sales compared to site visits or the number of emails sent. Your mission is to find exactly what your customer journeys look like — and where people tend to drop off along the way. The more complete a picture you can get of how users are engaging with your marketing efforts, the better.
The specific data you need will depend on your company and goals, but a few of the most important areas (and tools to gather the information) include:
With a few tools in hand, gathering the data is the easy part. Grasping what's happening is much more complicated. To truly understand why your customers are converting or dropping at any given point, you have to understand the many factors that can drive their behavior.
Some of this can be generalized. For instance, 1 in 4 visitors will abandon your page if it takes more than 4 seconds to load, according to WebsiteBuilderExpert. Studies such as HubSpot's have repeatedly shown that the longer and more complex your forms are, the more conversions you will lose.
Other data require a more nuanced understanding of human behavior, both in general and with specific user segments. For example, generally speaking, people require a sense of trust to move forward with your brand. Factors such as your site's design, correct spelling and grammar, visible contact info and staff photos all lend a sense of credibility and develop trust. Other general elements include incentives (e.g., what's in it for our users to fill out that form?) and ways that they can engage with your page more fully (videos, comments, social sharing and more).
However, each user is coming to your site with different needs. They arrive at distinct stages in the sales funnel, bring different temperaments with them and demand different levels of complexity in order to complete the conversion. Evaluating your marketing CRO data must include these factors as well. If you're seeing high bounce rates in one specific area, ask yourself what types of users are probably landing there and what aspects of that page may be failing to speak to their specific needs.
Once you form a more comprehensive picture of the data, you can start to define what successful outcomes of your CRO experimentation should look like. You may have come into this process frustrated that your sales weren't meeting expectations, but now you have much more information about what's behind that. If you've done your homework, you may have even found a few surprises.
Let's say you had a particular set of products that were underperforming on your website. Initially, you assumed that you needed to rework those product pages entirely to increase sales. But, through data gathering and analysis, you found that very few visitors were ever landing on those pages to begin with. At this point, the first successful step may not be "increase sales of product set X" but simply, "increase page visits for these products by 150%." Then you'll be able to evaluate whether those pages are effective or not.
Again, the point is to get specific here. For each experiment you run, define 1 or 2 needles you want to move and be sure there are specific dials you can turn to move them. Anything more will just muddy the results.
Now you're putting the first 3 steps together. You want to move metric 1 from point A to point B. Based on your data analysis, what dial can you try to turn to make that happen? Choose that dial and lay out your expectations: "If we do 'X' it will change metric 1 by 25%."
For instance, let's say you want to expand your email subscription list. The current form you use is hidden away at the bottom of the home page, leading few people to see it and sign up. Your hypothesis might be something as simple as, "If we introduce a pop-up email subscription box, we will increase our new subscriber rate by 50%."
There's your experiment: a simple A/B test of your current page vs. one with an email subscription pop-up. Depending on your web traffic, you can decide how long you would need to run the test to see if the new setup meets expectations (we’ll lay this out in detail below). This gives you an idea of what the test would take to conduct and how valuable the results might be.
Do this for a few key areas, but don't start running your tests just yet. Before you do, you need to decide which ones are worth it.
You're undoubtedly going to find far more aspects of your website or digital marketing strategies that you want to improve than you can accomplish within a reasonable time frame. Resources and budgets are limited. To deal with this problem, you need a way to prioritize your CRO tests.
The PIE method is one system that offers a simple way to think through your priorities in terms of an experiment's potential impact compared to the investment required. In this framework, you evaluate each potential experiment on its potential, importance and ease of execution.
If you score each of these areas for a given experiment on a scale from 1 to 10, then average the 3 scores, you'll have your PIE score. The highest-scoring test should be the first one you conduct.
Here's an example from Widerfunnel:
You can see that, in this particular example, the average score for this company's homepage tests ranks the highest, so this will be the first test its marketing team should conduct.
There are many other ways you can systematically and objectively order your priorities. Some of them tally points across a slew of categories to arrive at a final score. The setup you choose depends on your exact business objectives, but if you're unsure where to start, the PIE method is a simple, straightforward choice.
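The PIE ranking above is easy to sketch in code. Here's a minimal illustration; the experiment names and 1-to-10 scores are invented for the example, and a real team would assign them based on its own data:

```python
def pie_score(potential, importance, ease):
    """Average the 3 ratings (each 1-10) to get the PIE score."""
    return round((potential + importance + ease) / 3, 1)

# Hypothetical candidate experiments with (potential, importance, ease) ratings.
candidates = {
    "Homepage banner test": (8, 9, 7),
    "Checkout form redesign": (9, 8, 4),
    "Email CTA copy test": (5, 6, 9),
}

# Rank highest PIE score first -- that's the test to run first.
ranked = sorted(
    ((name, pie_score(*scores)) for name, scores in candidates.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{name}: {score}")
```

Here the homepage banner test averages 8.0 and would be run first, matching the way the Widerfunnel example ranks its homepage tests highest.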
Now it's time to build and execute the tests you sketched out with those hypothesis statements. A hypothesis is a statement you can clearly show to be true or false, so your primary method in these tests should be built around A/B testing frameworks in which you'll compare 2 options for a particular page feature, one representing the "true" condition and the other representing the "false" condition.
In some cases, multivariate tests involving multiple options can work, but it's typically easier to conduct simple A/B split tests.
Let's take that basic hypothesis we laid out above:
"If we introduce a pop-up email subscription box, we will increase our new subscriber rate by 50%."
Based on your initial research and planning, you already have the test sketched out, and you should have projected a time frame earlier so you could factor it into your prioritization. For the sake of illustration, let's outline that process in detail here.
You currently have 10,000 subscribers, with an opt-in rate of 1% of visitors to your site, a good bit below the average opt-in of 1.95%. You'd like to see your opt-in rate at least hit the average, if not higher, for this to be a success. Currently, you have roughly 18,500 new visitors per day, which means you get about 185 new subscribers per day. To increase your list by 5,000 at an opt-in rate of 1.95%, you'd need about 2 weeks:
5,000 subscribers / .0195 opt-in rate = 256,410 new visitors required
256,410 visitors / 18,500 new visitors per day = 13.86 days
However, because you will be A/B testing your current page vs. a new version with a pop-up subscription box, you won't be sending all of your visitors to the new page. You can split this in any ratio you want, but let's say you decide on 50/50 for a direct comparison. You can either choose to double your test length or just test for an increase of 2,500 subscribers instead of 5,000. A month is a long time frame, so you would probably be best off opting for the smaller sample.
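The arithmetic above, including the effect of the 50/50 split, can be laid out in a few lines. All figures come from the example scenario:

```python
# Test-length arithmetic for the email pop-up example.
target_new_subscribers = 5_000
expected_opt_in_rate = 0.0195   # industry-average opt-in rate
daily_new_visitors = 18_500

# Visitors needed to hit the target at the expected opt-in rate.
visitors_needed = target_new_subscribers / expected_opt_in_rate  # ~256,410

# Duration if every visitor saw the new page.
days_all_traffic = visitors_needed / daily_new_visitors  # ~13.9 days

# With a 50/50 A/B split, only half the traffic sees the variant,
# so either double the duration or halve the subscriber target.
days_with_split = visitors_needed / (daily_new_visitors * 0.5)  # ~27.7 days

print(round(days_all_traffic, 1), round(days_with_split, 1))
```

Seeing the split double the duration to nearly a month makes the trade-off concrete: testing for 2,500 new subscribers instead keeps the experiment at about 2 weeks.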
You can build out any number of A/B testing road maps in this way, for large or small changes to your website, email campaigns or social media marketing initiatives. Just be sure you have a clear hypothesis that you can test definitively within a reasonable time. You should be able to pick a dial you can manipulate (in this case, the pop-up for email signups) so you can easily see whether that dial is having the desired effect. Remember, you want results that you can clearly measure and evaluate.
Once your tests are complete, it's time to examine your results. This stage is a critical part of your CRO framework — and far too easy to rush through.
The key in this phase is to go deeper than just asking whether the test proved or disproved your hypothesis. You also need to ask why it failed or succeeded. Answering that question will prove much more useful for future tests or for fully optimizing the changes you make for your final launch.
On the surface, the above example makes for a fairly simple assessment. If you run your test for 2 weeks and see the rate of your subscription growth for the pop-up page doubling that of the original page, then you proved your hypothesis true. The variable you tested was straightforward.
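One check worth adding before declaring victory is whether the observed lift could just be noise. The article doesn't prescribe a statistical method, but a common choice for comparing two conversion rates is a two-proportion z-test; the visitor and signup counts below are invented for illustration:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical 2-week results:
# Control:  1,280 signups from 128,000 visitors (1.0% opt-in).
# Variant:  2,540 signups from 128,000 visitors (~2.0% opt-in).
z = two_proportion_z(1_280, 128_000, 2_540, 128_000)

# Two-tailed p-value via the normal CDF; below 0.05 means the lift
# is very unlikely to be random variation.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(p_value < 0.05)
```

With samples this large, a doubling of the opt-in rate is overwhelmingly significant; with smaller samples or smaller lifts, this check keeps you from acting on a fluke.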
Let's say it didn't meet expectations. Sure, it may be that the pop-up option isn't a good choice. But there's far more to it. What was the copy in the pop-up? How did it look? How many form fields were there? When did it pop up? All of these factors could influence a user's decision to follow through.
Use the same tools you had in place for initial data gathering to evaluate user behavior during your tests. Assessing heat maps, reviewing drop-off points and examining all the minutiae of user behavior will give you a more complete picture of what's happening and why.
Finally, armed with a plethora of before-and-after data, you can make your CRO decisions. This might mean optimizing your experiment for a modified second round of testing or it might mean tweaking your final version based on the data and then moving forward with your launch.
Even in the case where an experiment proves your hypothesis true, you can assess the data and fine-tune the final product. Continuing with the email pop-up, it's great news if that initial test brought a 50% subscriber increase. You could just move forward with the pop-up as is, make it official and move on to your next CRO test. Or you could dig in a little more to see if that rate increase could be even better.
Let's say your pop-up box had 2 form fields — 1 for an email address and 1 for a phone number. By examining a heat map, you see that a significant number of users started filling out the form but dropped off when they got to the phone number box. It's possible that if you remove that field, you'll see an even greater number of signups. You could launch a final version with this setup, then continue to monitor results.
When it comes to optimizing your conversion rates, never be satisfied with a quick interpretation of your experimental data. You can always probe one step further to see if you can refine your changes a little more.
CRO is an ongoing process. You're never finished testing variables, tweaking experiments and improving results. Once you complete one test, you're immediately on to the next one.
In the quickly changing field of digital marketing, user preferences and behaviors are always in flux. Your CRO framework has to be systematic enough to provide objective, measurable results, but flexible enough to allow you to change your approach along the way. There will still be times to listen to your gut when the data isn't as clear as you'd like, but you should always have your map at hand to guide you back on course.
That's what's great about a road map: It can always be updated based on changes in the landscape, and it will always provide you with a general sense of direction. It may not be perfect, and it won't account for everything. Without it, though, it's all too easy to veer off track and get lost in the woods.