Split testing is one of the most powerful tools in a digital marketer's kit. With so many competing strategies available, it's almost impossible to predict with any reliability what will work. Split testing lets a marketer quickly compare strategies head to head and isolate the most effective ones. Unfortunately, the data from a split testing campaign can also be misused and misrepresented, and when the testing process is rushed or flawed, it can produce results that are actively detrimental.

Testing Without a Stated Purpose

The purpose of split testing is to test a specific strategy. A single, specific strategy. It's not enough to say, "We like this landing page design and that one; let's see which works better." An A/B test should be as focused as possible, or you won't understand why the data paints the picture it does. Worse, you'll waste your own time.

Careful consideration should always go into any A/B test you run. Most of the effort should go into developing your strategies and designing a test that can cleanly isolate which one works best. Otherwise you'll find yourself running tests that are either poorly thought out or entirely unnecessary.

In the above example, rather than testing two landing page designs as a whole, you should be testing specific aspects of the design (a code sketch follows the list):

  • The position, shape and appearance of conversion prompts.
  • The call-to-action and ad copy positioning and verbiage.
  • The visual media surrounding the call-to-action and prompt.
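
As a concrete illustration, here is a minimal sketch of what a focused variant definition might look like. The `Variant` structure, field names, and copy are all hypothetical; the point is that the two variants differ in exactly one element, the call-to-action copy, so the test can actually explain its own results.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    """A hypothetical variant definition: everything is held
    constant except the single element under test."""
    name: str
    cta_copy: str                       # the one thing being tested
    cta_position: str = "above_fold"    # held constant across variants
    hero_image: str = "hero_v1.jpg"     # held constant across variants

# A focused test: two variants, one difference.
control = Variant(name="A", cta_copy="Start your free trial")
challenger = Variant(name="B", cta_copy="Get started in 60 seconds")
```

If variant B wins, you know the copy did it; in a whole-page test, the winner tells you nothing about which of its dozens of differences actually mattered.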

Running Too Short of a Test

It can be tempting to call a test over once the expected results have been achieved. That's a bad idea. A test should run for a minimum of a full week and should only be called after a solid sample size has been collected. A test should never run for a matter of hours, or for a single day, unless it is specifically designed to measure something time-sensitive: late-night conversions, say, or a Saturday sale.

The demographic and activity breakdown of a website's traffic changes vastly depending on the time of day and the day of the week. There's no way to achieve a decent analysis without a sample that spans at least a standard seven-day week, and in many cases a full month is preferable. For e-commerce sites in particular, paydays can vastly affect revenue streams and traffic.
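
What counts as a "solid sample size" can be estimated up front with a standard power calculation. Below is a minimal sketch using the textbook two-proportion formula; the baseline conversion rate, the lift worth detecting, and the 5% significance / 80% power settings are illustrative assumptions, not numbers from this article.

```python
from math import ceil, sqrt
from scipy.stats import norm

def visitors_per_variant(p1: float, p2: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in EACH variant to detect a change in
    conversion rate from p1 to p2 (two-sided test of proportions)."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for significance
    z_b = norm.ppf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Illustrative: detecting a lift from a 2% to a 2.5% conversion rate
print(visitors_per_variant(0.02, 0.025))  # roughly 14,000 per variant
```

A low-traffic site may need weeks to accumulate that many visitors per variant, which is another reason a one-day test can't be trusted.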

Not Collecting Easily Compared Results

A/B split testing requires that both variants be shown to as identical an audience as possible. Apart from the strategies being tested, everything else about the tests should be the same. You can't run variant A and then variant B one after the other, nor can you run them at different times of day; the results you get will be both unpredictable and inconclusive.
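
If you do have the tooling to run both variants concurrently, the usual technique is deterministic random assignment: every visitor is hashed into a bucket the moment they arrive, so both variants sample the same hours, days, and traffic sources. A minimal sketch, assuming visitors carry a stable identifier such as a cookie ID; the function and experiment names are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta_copy_test") -> str:
    """Deterministically bucket a visitor into variant A or B.

    Hashing (experiment name + user ID) yields a stable, effectively
    random 50/50 split: the same visitor always sees the same variant,
    and both variants receive the same mix of traffic because
    assignment happens concurrently, not sequentially.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("visitor-12345"))  # stable across repeat visits
```

Keying the hash on the experiment name as well as the user ID means each new test reshuffles the audience, so no visitor is permanently stuck in the "B" group across every experiment you run.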

Marketers who don’t have the technology or resources to run “proper” split testing may try to skirt around this by simply making the required modifications to their site, recording the results and then comparing them with prior results over the same period of time. While this is still technically a split test, it isn’t an accurate form of testing — for one, it makes it very difficult to account for the site’s growth.

Testing With Too Few Customers

Split testing is designed to show differences in large volumes of traffic. A large sample size is required to draw any results. Sites that don’t have a significant amount of traffic and conversion to begin with won’t usually benefit from split testing; the margin for error is simply too wide. If your site only achieves five conversions a week, a single additional conversion will seem statistically significant — when, in reality, it could just be a fluke of timing. If your site achieves fifty conversions a week and you see ten additional conversions, on the other hand, that may be more significant.
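
The visitor counts below are invented purely for illustration (500 per variant in the low-traffic case, 5,000 in the high-traffic case); Fisher's exact test is one standard way to compare two conversion counts.

```python
from scipy.stats import fisher_exact

def conversion_p_value(conv_a: int, visitors_a: int,
                       conv_b: int, visitors_b: int) -> float:
    """p-value for the difference between two conversion counts,
    via Fisher's exact test on the 2x2 converted/not-converted table."""
    table = [[conv_a, visitors_a - conv_a],
             [conv_b, visitors_b - conv_b]]
    return fisher_exact(table)[1]

# Low-traffic site: 5 vs 6 weekly conversions out of 500 visitors each.
print(conversion_p_value(5, 500, 6, 500))      # p ~ 1.0: one extra sale proves nothing

# Busier site: 50 vs 60 conversions out of 5,000 visitors each.
print(conversion_p_value(50, 5000, 60, 5000))  # p ~ 0.3-0.4: suggestive, still not significant
```

Neither difference clears the conventional 0.05 significance bar; the larger site is merely much closer, which is exactly why volume matters before a split test can tell you anything.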

Following our article on statistical inaccuracy, it seems important to point out why we, as digital marketers, often have such an inconsistent relationship with data. Simply put, data scientists require many years of experience and education to learn how to interpret data consistently and without bias. It seems almost naive to expect that we could obtain an immediate expertise in an area that requires such discipline. While data doesn’t lie, it is also extremely open to interpretation — and that’s why we need to be very cautious about the conclusions that we draw.

Split testing is not a tool for building conversions; it's a tool for optimizing them, and there's a difference. Split testing works best when a site is already successful and is attempting to improve on that success. Used properly, it can be an incredibly powerful tool.

