Before the advent of web testing, there weren't many tools available to definitively measure whether a campaign would gain traction, and why. But no longer. Experimental design in marketing gives today's organizations a powerful technique for revealing which elements of a campaign resonate with customers. It has been adopted enthusiastically by organizations like the luxury retailer Tapestry (hear directly from Tapestry e-commerce leaders about their testing program).
What is experimental design?
Experimental design is a research, testing, and optimization approach that structures how data is collected so that statistical tests can identify cause-and-effect relationships between inputs and outcomes.
Experimental design in marketing allows brands to tag each element of a campaign separately, for example the headline versus the CTA. With each element tagged, the brand can test different message variants with customers in the market and identify which element elicited a response. Brands can then understand how to combine those elements for the highest level of customer engagement.
For example, a brand trying to identify the best design for a web page could separately tag different versions of the headline, tagline, image, and call-to-action button. Using multivariate testing, it could then put four or five variations of each element into the market and measure which combinations induce customers to take action. This allows marketers to measure the impact of different page elements quickly and at scale.
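To make that concrete, here is a minimal sketch in Python of how a tagged, full-factorial grid of page elements might be enumerated. The element names and variant copy are invented for illustration; this is not Persado's tooling.

```python
# Illustrative sketch only: enumerate every combination of separately
# tagged page elements for a multivariate test. Element names and
# variant copy are hypothetical.
from itertools import product

elements = {
    "headline": ["Save big today", "Your exclusive offer", "New season, new style", "Last chance"],
    "tagline": ["Free shipping on every order", "Members save more", "Top-rated picks"],
    "image": ["lifestyle_shot", "product_closeup", "model_wearing"],
    "cta": ["Shop now", "Claim your offer", "See details"],
}

# Each variant page is one combination of tagged element versions.
pages = [dict(zip(elements, combo)) for combo in product(*elements.values())]
print(len(pages))  # 4 * 3 * 3 * 3 = 108 distinct page versions
```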
How does experimental design differ from A/B testing?
One of the most common methods marketers use to test different messages is the A/B test. As the name implies, A/B tests allow brands to randomly divide an audience into two groups and deliver one version of a message to one and a different version to the other. Marketers can then assess which performs better. A/B testing can be thought of as testing the champion version against the challenger version.
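For readers who want to see the mechanics, the statistics behind a basic A/B test can be sketched in a few lines of Python: split the audience at random, then compare the two conversion rates with a two-proportion z-test. The traffic and conversion numbers below are invented for illustration.

```python
# Minimal A/B test sketch: a 50/50 random split, then a two-proportion
# z-test on the observed conversion rates. All numbers are invented.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return the z statistic and two-sided p-value for the rate difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Champion A: 480 conversions from 10,000 visitors.
# Challenger B: 560 conversions from 10,000 visitors.
z, p = two_proportion_z(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05: B's lift is unlikely to be noise
```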
A/B tests are effective at showing which of a small handful of options performs the best, as Ron Kohavi and Stefan Thomke pointed out as early as 2017 in a research study they wrote about in the Harvard Business Review. More recent research out of MIT validates their view. For brands that want to test only two or three variations of one element, an A/B test can produce a usable result with a modest sample size. That ease has made A/B testing popular.
What are the limitations of A/B testing?
Especially when compared with multivariate testing, A/B tests have limitations in today's climate of massive scale and real-time interactivity. For one, they cannot efficiently test multiple versions of multiple page elements against one another. A marketer would have to run a separate A/B test for each pair of variations they want to compare, which quickly produces an unmanageable number of tests. To put that in perspective, four versions of each of five separate page elements multiply out to 1,024 distinct pages; testing every pair head-to-head would require more than 500,000 separate A/B tests. And each of those would need a large enough audience to produce a statistically valid result. With a combination of experimental design and predictive analytics, in contrast, a marketer could gather data on all of those combinations in a single experiment.
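A quick back-of-the-envelope check on that arithmetic:

```python
# Verify the combinatorics: 4 versions of each of 5 page elements.
from math import comb

full_pages = 4 ** 5                 # 1,024 distinct page variants
pairwise_tests = comb(full_pages, 2)

print(full_pages, pairwise_tests)   # 1024 pages -> 523,776 head-to-head A/B tests
```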
There’s another limitation to A/B testing that may be even more important, however. It’s that A/B tests can’t tell you why. Because they only show the results of one full message against another full message, they don’t reveal the impact that subtle word changes can have.
That’s a problem for modern marketers. Knowing why some messages work and some fail is critical to improving marketing results over time.
Experimental design in action
To illustrate how to use experimental design in marketing, consider how Persado does it. When a Persado client plans a marketing campaign, the brand's human creators craft the message and then give it to us. We run it through the Persado Motivation AI Platform, which analyzes what the message is trying to motivate people to do. It then generates alternative options predicted to outperform the original. By "outperform" we mean the message will result in more clicks, more click-throughs, more purchases, fewer abandoned carts, and so on.
Sometimes, our clients simply choose one of those predicted alternatives and use it. But if they want to be fully confident they have the best performer, they run a language experiment, using experimental design to measure how real consumers respond to anywhere from four to 16 versions of the message.
The data that come out of those experiments show which messages perform best with which consumers, and why. The Persado AI can see how each version performed as a whole, as well as the impact of each element of the message. For example, we can see how much impact the subject line or the CTA had on the overall message lift. Think of those elements as sources of motivation.
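Persado's production models are proprietary, but the statistical idea behind element-level attribution can be sketched simply: in a balanced experiment, a variant's average lift across every combination that contains it estimates that variant's main effect. Here is a simulated example in Python, with hypothetical effect sizes.

```python
# Hedged illustration of element-level attribution, not Persado's model:
# in a balanced full-factorial test, averaging a variant's lift over all
# combinations that contain it isolates that variant's main effect.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true effects for 4 subject lines and 4 CTAs.
true_subject = np.array([0.00, 0.02, -0.01, 0.03])
true_cta = np.array([0.00, 0.01, 0.02, -0.02])

# Observed lift for each of the 16 tested combinations, plus noise.
lift = (true_subject[:, None] + true_cta[None, :]
        + rng.normal(0, 0.005, size=(4, 4)))

subject_effect = lift.mean(axis=1) - lift.mean()  # marginal main effects
cta_effect = lift.mean(axis=0) - lift.mean()

print("subject-line effects:", np.round(subject_effect, 3))
print("CTA effects:         ", np.round(cta_effect, 3))
```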
Using an A/B testing approach would require Persado to create a different campaign to test every message element against every other. It would require thousands of tests. Experimental design, in contrast, requires only one test per campaign. The findings on each element are arranged in a way that allows the AI to predict how they will perform in different combinations.
Best observed vs. best predicted outcomes in experimental design
One interesting aspect of how Persado conducts experimental design is the distinction between "best observed" and "best predicted" outcomes. The best observed option is the top performer among the versions Persado actually put into the market. But even with experimental design, testing every combination of elements and variables to observe its performance directly would take too much time.
The best predicted outcome, in contrast, might be headline A combined with body copy B, image C, and call-to-action D. That exact combination never went into the market. But the machine learning algorithm predicts it will perform best, based on the experimental design data. Persado then creates it and broadcasts it to validate its performance.
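Here is a sketch of the general idea, with simulated data rather than Persado's actual algorithm: fit additive element effects from the fraction of combinations that actually ran, then score every possible combination, including ones that never went to market.

```python
# "Best predicted" sketch: fit additive effects from a fraction of the
# combinations, then rank all combinations, tested or not. Simulated
# data only; this is not Persado's algorithm.
from itertools import product
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true effects: 4 headlines, 4 body copies, 4 CTAs.
true = [rng.normal(0, 0.02, 4) for _ in range(3)]

# The market test covers only 16 of the 64 possible combinations
# (an orthogonal fraction, so main effects stay estimable).
tested = [(h, b, c) for h, b, c in product(range(4), repeat=3)
          if (h + b + c) % 4 == 0]

X = np.zeros((len(tested), 12))
y = np.zeros(len(tested))
for i, (h, b, c) in enumerate(tested):
    X[i, h] = X[i, 4 + b] = X[i, 8 + c] = 1.0
    y[i] = true[0][h] + true[1][b] + true[2][c] + rng.normal(0, 0.003)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # fitted element effects

# Score all 64 combinations with the additive model; the winner may be
# a combination that never actually ran.
scores = {(h, b, c): coef[h] + coef[4 + b] + coef[8 + c]
          for h, b, c in product(range(4), repeat=3)}
best = max(scores, key=scores.get)
print("best predicted:", best, "| was it market-tested?", best in tested)
```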
In this way, experimental design provides more insights to marketers in less time with less work than would be required using A/B testing.
How can experimental design help businesses grow revenue?
Potential customers often interact with a brand multiple times before they decide to make a purchase. That means every opportunity to communicate and engage with people counts. If a brand can increase the number of people who respond to its messages, it increases conversions.
Knowing which messages resonate, which do not, and why, gets to the heart of competing on customer experience.
A Persado banking customer offers a case in point. The bank engaged Persado to improve the impact of a website campaign, but the experiment found no difference in click-through rates across the various messages compared to the control.
That was an unusual result, and difficult to understand until the Persado team discovered the impact the customer journey could have. People who clicked through to the website from an email engaged at one rate. People who clicked through from social media engaged at another. The differing responses canceled each other out when viewed in aggregate. But when Persado designed an experiment to test different messages for different audiences, performance improved, leading to better conversions. Most importantly, the bank's marketing leaders understood why people engaged and converted on the site, and they used those insights to inform other campaigns.
Such is the promise and the result of experimental design.
—
This article first appeared at https://www.persado.com
Seeking to build and grow your brand using the force of consumer insight, strategic foresight, creative disruption and technology prowess? Talk to us at +971 50 6254340 or engage@groupisd.com or visit www.groupisd.com/story