

A/B tests can also be inefficient if they are not anchored in good experimental design, so marketers should rely on their Consumer Insights departments to apply design-of-experiments discipline to the test design.

In practice, A/B tests are often poorly executed. To be a good, clean test, the “A” and “B” treatments should be randomly assigned to members of the exact same group; cutting corners on this detail can be very counter-productive. To minimize costs, marketers sometimes use cheaper inventory or smaller cell sizes for the test (challenger) cells, while the control cell benefits from its incumbent status as the current champion. This undermines the validity of the test.

A/B tests also give you lots of tactical information, but little about the bigger strategic questions – how much to spend, on which channels, toward what kind of brand strategy? A/B tests tend to be so tactical that marketers who rely on them exclusively often fail to develop a coherent view of their customers’ motivations and needs. Without a disciplined focus on developing a theory about your customers, marketers are likely to squander the opportunities afforded by in-market tests. Every test should be regarded as an opportunity not just to assess a given stimulus, but also to accumulate evidence about what drives consumer behavior.

Finally, if you are not a direct marketer, it can be difficult to identify the right response variable to monitor – and thus to read the results correctly.
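For readers who want to see the random-assignment discipline made concrete, the sketch below is a minimal, hypothetical illustration (not drawn from any particular testing platform): both cells are carved at random from one and the same customer population, the cells are equal in size, and the same response variable is measured for each. The customer list, the assign_cells and response_rate helpers, and the simulated responders are all invented for illustration.

```python
import random

def assign_cells(customer_ids, seed=42):
    """Randomly split one population into equal-sized A and B cells."""
    rng = random.Random(seed)
    shuffled = customer_ids[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)               # random assignment, not self-selection
    midpoint = len(shuffled) // 2
    return {"A": shuffled[:midpoint], "B": shuffled[midpoint:]}

def response_rate(cell_ids, responders):
    """Share of a cell that responded (e.g., clicked, converted, purchased)."""
    hits = sum(1 for cid in cell_ids if cid in responders)
    return hits / len(cell_ids) if cell_ids else 0.0

if __name__ == "__main__":
    customers = [f"cust_{i}" for i in range(10_000)]   # hypothetical customer list
    cells = assign_cells(customers)
    responders = set(random.sample(customers, 600))    # hypothetical observed responders
    for name, ids in cells.items():
        print(name, len(ids), f"{response_rate(ids, responders):.3%}")
```

Because both cells come from the same population at the same time and are measured on the same response variable, any difference in the two printed rates can reasonably be attributed to the treatment rather than to differences in who was exposed to it.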
