A/B testing has grown in popularity in the online marketing community since around 2010. However, according to Google Trends, interest in A/B testing seems to have plateaued in recent years.
Does this apparent saturation mean that A/B testing is less effective in today’s market? First, let’s go over the basics and then see whether they’re still relevant in today’s environment.
What is A/B testing?
Simply put, A/B testing, in digital marketing, is the process of comparing two versions of the same page to see which one converts better. The differences between the two versions can be quite significant (multiple page elements can differ between them) or very subtle (a simple color change on a call-to-action button).
Performance is compared using statistical calculations, and this is where statistical significance comes in. For an A/B test to be considered valid, it should reach a confidence level of at least 95%.
Furthermore, the more traffic you send to an A/B test, the more reliable the result. So, ideally, any A/B test should get plenty of traffic for a correct validation process to take place. Some experts suggest a minimum of 10,000 visitors, but the more the better.
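To make the 95% threshold concrete, here is a minimal sketch of how significance is typically computed for an A/B test: a two-proportion z-test on the conversion counts of the two variants. The function name and the sample numbers below are illustrative, not from any specific testing tool.

```python
from math import erfc, sqrt

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the confidence level (in %)
    that variants A and B truly differ in conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                  # two-tailed p-value
    return (1 - p_value) * 100

# Hypothetical test: 5,000 visitors per variant,
# A converts at 4.0% (200) and B at 5.0% (250)
confidence = ab_test_significance(200, 5000, 250, 5000)
```

With these numbers the confidence comes out above 95%, so the test would be declared valid; halve the traffic and the same conversion rates may no longer clear the bar, which is why sample size matters so much.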
How to correctly implement A/B testing into the marketing mix
A/B testing works only if it’s implemented correctly. Unfortunately, many companies that want to give A/B testing a try don’t have the required know-how and end up with disappointing results.
Some of the most common mistakes when it comes to A/B testing implementation are:
- Not having enough traffic. Many business owners want the benefits of A/B testing but have only a trickle of traffic. This means it could take months or years to reach a valid result, making testing impractical.
- Stopping the test as soon as statistical significance is reached. Experiments often hit the required 95% or more in the early stages of testing. It’s tempting to stop the test in its tracks and declare a winner, but without enough traffic this is no better than a wild guess. This issue is closely related to the first point on this list.
- Making changes to one of the variations during the experiment. Sometimes companies decide to make minor changes to the page that’s undergoing an A/B test. But even a minute modification can render the test invalid.
- Expecting every test to be successful. Finding winning A/B tests is not as easy as it seems. Multiple failed tests are usually needed before a positive one is found. Marketers looking for quick wins won’t find them in A/B testing.
- Copying A/B testing ideas from other websites. The same A/B test replicated on a different website might not produce the same result. Many factors can influence the outcome, such as the niche, the website copy, how the information is presented, brand colors, etc.
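The second pitfall above, stopping at the first 95% reading, can be made tangible with a small simulation. The sketch below (function name and parameters are illustrative) runs A/A tests, i.e. two identical variants with no real difference, and "peeks" at significance after every batch of visitors, stopping as soon as 95% is reached. The point is that this peeking strategy declares false winners far more often than the 5% the threshold promises.

```python
import random
from math import erfc, sqrt

def peeking_false_positive_rate(runs=300, peeks=20, batch=500, p=0.05):
    """Simulate A/A tests (identical variants converting at rate p) where
    the experimenter checks significance after every batch of visitors
    and stops at the first 95% reading. Returns how often a 'winner'
    is falsely declared."""
    false_wins = 0
    for _ in range(runs):
        conv_a = conv_b = n = 0
        for _ in range(peeks):
            # send one more batch of visitors to each identical variant
            conv_a += sum(random.random() < p for _ in range(batch))
            conv_b += sum(random.random() < p for _ in range(batch))
            n += batch
            pool = (conv_a + conv_b) / (2 * n)
            se = sqrt(pool * (1 - pool) * (2 / n))
            if se == 0:
                continue
            z = (conv_a / n - conv_b / n) / se
            if erfc(abs(z) / sqrt(2)) < 0.05:   # "95% significant" peek
                false_wins += 1
                break
    return false_wins / runs

random.seed(1)
rate = peeking_false_positive_rate()
```

Even though the two variants are identical, repeated peeking pushes the false-winner rate well above the nominal 5%, which is exactly why a test should run to its planned sample size before being judged.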
For these and similar reasons, a large share of companies that try A/B testing quickly become disillusioned by the lack of results.
Implementing A/B testing accurately means committing to an ongoing process with dedicated resources. Ideally, any business that’s serious about growth would have an in-house conversion rate optimization team running A/B tests continuously.
Is A/B testing still relevant in today’s market?
A/B testing is still very much a solid solution for driving digital growth. The principles are the same as they have always been. Nowadays, with modern tools, it’s easier than ever to implement A/B tests, but this does not remove the need for actual knowledge of how to implement and run them.
In the medium to long term, however, things might change. With the rise of machine learning and AI, it’s conceivable that A/B testing will no longer be done by marketing teams. But until then, the old-school basics still apply!