Of course, something as complex as creative testing has sticking points. Here’s a summary of some of those challenges, and how we work around them:
Let’s take a closer look at the cost aspect of creative testing.
In classic testing, you need a 95% confidence level to declare a winner. That’s nice to have, but reaching 95% confidence on in-app purchases may end up costing you $20,000 per creative variation.
Why so expensive? Because to reach a 95% confidence level, you’ll need roughly 100 purchases per variation. With a 1% purchase rate (typical for gaming apps) and a $200 cost per purchase, you’ll spend $20,000 on each variation just to accrue enough data for that 95% confidence level.
That’s actually the best-case scenario, too. Required sample size grows roughly with the inverse square of the lift you’re trying to detect, so a variation would have to beat the control by 25% or more for the test to cost “only” $20,000. A variation that beat the control by 5% or 10% would have to run far longer to reach a 95% confidence level.
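To make that concrete, here’s a rough sketch using the standard two-proportion sample-size formula. The purchase rate and cost figures are the illustrative assumptions from above; the exact counts depend on the statistical power you demand (the ~100-purchase figure is a looser rule of thumb), but the inverse-square relationship between lift and required data is what makes small lifts so expensive to verify:

```python
from scipy.stats import norm

def installs_needed(base_rate, lift, alpha=0.05, power=0.80):
    """Installs needed per variation to detect a relative lift in
    purchase rate, via the standard two-proportion formula."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_beta = norm.ppf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

BASE_PURCHASE_RATE = 0.01  # 1% of installs purchase (assumption from above)
COST_PER_PURCHASE = 200    # $200 per purchase (assumption from above)

for lift in (0.25, 0.10, 0.05):
    installs = installs_needed(BASE_PURCHASE_RATE, lift)
    purchases = installs * BASE_PURCHASE_RATE
    print(f"{lift:.0%} lift: ~{purchases:,.0f} purchases, "
          f"~${purchases * COST_PER_PURCHASE:,.0f} per variation")
```

Halving the lift you want to detect roughly quadruples the purchases you need, which is why chasing small improvements at the purchase level gets so expensive.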
There aren’t a lot of advertisers who can afford to spend $20,000 per variation, especially if 95% of new creative fails to beat the control.
So, what to do?
What we do is move the conversion event we’re targeting a little higher up the sales funnel. For mobile apps, instead of optimizing for purchases, we optimize for IPM (installs per mille – installs per 1,000 impressions). For websites, we optimize for the impression-to-top-funnel-conversion rate. To be clear, this is not a Facebook-recommended best practice; this is our own voodoo magic/secret sauce that we’re brewing.
The obvious concern here is that ads with high CTRs and high top-funnel conversion rates may not be true winners on down-funnel conversions and ROI/ROAS. While there is a real risk of false positives with this method, we’d rather take that risk than bear the time and expense of optimizing for bottom-funnel metrics.
So optimizing for installs is more efficient than optimizing for purchases. Most importantly, it means you can run tests for far less money per variation. For many advertisers, that alone can make testing financially viable: $200 per variation versus $20,000 per variation can be the difference between running a couple of tests and sustaining an ongoing, robust testing program. Note: this process may generate both false negatives and false positives.
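The arithmetic behind that comparison, using the same illustrative unit costs as above:

```python
# All figures are the illustrative assumptions from above, not account data.
EVENTS_PER_TEST = 100    # rough events-per-variation rule of thumb
PURCHASE_RATE = 0.01     # 1% of installs go on to purchase
COST_PER_PURCHASE = 200  # implies a cost per install of ~$2

cost_per_install = COST_PER_PURCHASE * PURCHASE_RATE  # $2
install_test = EVENTS_PER_TEST * cost_per_install     # optimize for installs
purchase_test = EVENTS_PER_TEST * COST_PER_PURCHASE   # optimize for purchases

print(f"~${install_test:,.0f} vs. ~${purchase_test:,.0f} per variation")
# ~$200 vs. ~$20,000 per variation
```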
For the past few years, to streamline our Facebook and Google creative testing and reduce non-converting spend, we’ve been testing new video concepts using IPM (installs per mille, i.e., installs per 1,000 impressions) as the primary metric. For the record, this is not the Facebook-recommended best practice, which is to let ad sets exit the learning phase by gathering enough data to become statistically valid.
When testing creative, we typically run three to five new videos along with a control video using Facebook’s split test feature. We show these ads to broad or 5-10% Lookalike (LAL) audiences, restrict distribution to the Facebook News Feed only and to Android only, and use mobile app install (MAI) bidding to get about 100-250 installs.
If one of those new “contender” ads beat the control video’s IPM, or came within 10-15% of it, we would launch those potential new winners into the ad sets with the control video and let them fight it out to generate ROAS.
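As a sketch of that decision rule (the function names, counts, and ads below are hypothetical, not a Facebook API):

```python
def ipm(installs, impressions):
    """Installs per mille: installs per 1,000 impressions."""
    return installs / impressions * 1000

def pick_contenders(control, contenders, tolerance=0.15):
    """Keep any contender whose IPM beats the control's, or comes
    within `tolerance` (10-15% in our practice) of it."""
    cutoff = ipm(*control) * (1 - tolerance)
    return [name for name, installs, impressions in contenders
            if ipm(installs, impressions) >= cutoff]

# Hypothetical split test results as (installs, impressions):
control = (120, 15000)          # IPM = 8.0, so the cutoff is 6.8
ads = [
    ("concept_a", 96, 14000),   # IPM ~ 6.9 -> kept (within 15%)
    ("concept_b", 60, 15500),   # IPM ~ 3.9 -> dropped
    ("concept_c", 130, 14500),  # IPM ~ 9.0 -> kept (beats control)
]

print(pick_contenders(control, ads))  # ['concept_a', 'concept_c']
```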
We’ve seen hints of what we’re about to describe across numerous ad accounts and have confirmed with other advertisers that they have experienced similar results. But for purposes of explanation, let’s focus on one particular client of ours and how their ads performed in recent creative tests.
In two months, we produced more than 60 new video concepts for a client. All of them failed to beat the control video’s IPM. This struck us as odd – it seemed statistically next to impossible. We expected to generate a new winner about 5% of the time, or 1 out of every 20 videos – so roughly 3 winners out of 60. Since we felt confident in our creative ideas, we decided to look deeper into our custom, money-saving testing method.
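That intuition is easy to check. Assuming each new concept independently had a 5% chance of beating the control, the probability of going 0-for-60 is under 5%:

```python
p_win = 0.05  # assumed chance that any one new concept beats the control
n_videos = 60

p_zero_winners = (1 - p_win) ** n_videos
print(f"P(0 winners in {n_videos} tries) = {p_zero_winners:.1%}")  # ~4.6%
```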
Traditional testing methodology includes the idea of testing the testing system itself: the A/A test. A/A tests are like A/B tests, but instead of testing multiple creatives, you test the same creative in each “slot” of the test.
If your testing system/platform is working as expected, all “variations” should produce similar results, assuming you get close to statistical significance. If your A/A test results differ widely – if the testing platform/methodology concludes that one variation significantly outperforms or underperforms the others – there could be an issue with the testing method or with the quantity of data gathered.
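One way to score an A/A test like this is a chi-square test of independence on installs vs. non-installing impressions across the identical variants. This is a sketch with hypothetical counts, not our production tooling; here a large p-value is the healthy outcome, because it means the identical “variations” are statistically indistinguishable:

```python
from scipy.stats import chi2_contingency

# Hypothetical A/A results: the same creative running in four slots.
impressions = [12000, 11800, 12300, 12100]
installs    = [   95,    70,   101,    88]

table = [installs,
         [imp - ins for imp, ins in zip(impressions, installs)]]
chi2, p_value, dof, _ = chi2_contingency(table)

print(f"p = {p_value:.3f}")
# A small p (say < 0.05) suggests the test setup itself, not the
# creative, is producing the differences.
```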
Here’s how we set up an A/A test to validate our custom approach to Facebook testing. The purpose of this test was to understand whether Facebook maintains a creative history for the control – and thus gives it a performance boost that makes it very difficult to beat – when you don’t allow split test ads to exit the learning phase and reach statistical relevance.
Things to note here:
We ran this test for only 100 installs, which is our standard operating procedure for creative testing designed to save time and money.
Once our first test reached 100 installs, we paused the campaign to analyze the results. We then turned the campaign back on and scaled to 500 installs to get closer to statistical significance. We wanted to see if more data would result in IPM normalization (in other words, whether the test results would settle back down to more even performance across the variations). However, the results of the second test remained similar. Note: the ad set(s) did not exit the learning phase, and we did not follow Facebook’s best practice.
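For a sense of why we wanted more data: here’s a rough sketch of the 95% confidence interval around an observed IPM at 100 vs. 500 installs, using the normal approximation to the binomial install rate (the impression counts are hypothetical, chosen so both tests sit at an IPM of 8):

```python
from math import sqrt
from scipy.stats import norm

def ipm_ci(installs, impressions, conf=0.95):
    """Approximate CI for IPM via the normal approximation to the
    binomial install rate; rough at small counts, but illustrative."""
    rate = installs / impressions
    z = norm.ppf(1 - (1 - conf) / 2)
    half = z * sqrt(rate * (1 - rate) / impressions)
    return (rate - half) * 1000, (rate + half) * 1000

for installs, impressions in [(100, 12500), (500, 62500)]:  # IPM = 8.0
    lo, hi = ipm_ci(installs, impressions)
    print(f"{installs} installs: IPM 8.0, 95% CI ({lo:.1f}, {hi:.1f})")
# 100 installs: roughly (6.4, 9.6); 500 installs: roughly (7.3, 8.7)
```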
The results of these tests, while not statistically significant and not based on best practices, were surprising enough to merit additional tests. So we tested on!
For our second test, we ran the six videos shown below. Four of them were controls with different headers; two of them were new concepts very similar to the control. Again, we didn’t run the hot dog dogs; they’ve been inserted to protect the advertiser’s identity and to offer you cuteness!
The IPMs for all ads ranged between 7 and 11 – even for the new ads that did not share a thumbnail with the control. (The IPM for each ad is shown at the far right of the image.)
This was when we had our “a-ha!” moment. We tested six very different video concepts: the one control and five brand-new ideas, all of which were visually distinct from the control and did not share its thumbnail.
The control’s IPM stayed consistent in the 8-9 range, but the IPMs for the new visual concepts ranged between 0 and 2. (The IPM for each ad is shown at the far right of the image.)
Here are the line graphs from the second, third, and fourth tests.
And here’s what we think they mean:
Given the above results, those of us testing using IPM have an opportunity to re-run tests that exclude the control video, to determine whether we’ve been killing potential winners. As such, we recommend the following three-phase testing plan.
Note: We’re still testing many of our assumptions and non-standard practices.
We look forward to hearing how you’re testing and sharing more of what we uncover soon.