Advertisers face two critical challenges: developing enough creative to stay ahead of creative fatigue, and testing that creative quickly and inexpensively enough to keep ROAS (return on ad spend) high.
Developing mobile creative good enough to replace your top-performing ads is highly challenging because you don’t just need a new ad. You need a new ad that will beat your control through creative testing.
This is a bigger problem than most people think, since most creative fails. Most new ad creative doesn’t perform anywhere near as well as a campaign’s top-performing ad, aka “the control.” After managing over $3 billion in ad spend, we’ve found it usually takes twenty new pieces of creative to find a replacement for a campaign’s previous control through creative testing. Only about 5% of new creative performs well enough to make the cut.
1. You don’t need just one new ad every week or so to stay ahead of creative fatigue. You need twenty new ads. The more you spend, the faster your ad creative will wear out.
2. If you don’t want to see a dip in ROAS, and you don’t want to waste a lot of time and blow a lot of ad budget, you’ll need a creative testing system that can rotate through those twenty new ads and find the new winner fast.
Most in-house creative teams struggle to stay ahead of “creative fatigue,” aka the dip in ad performance that happens when audiences have seen an ad so often they start to screen it out.
Ironically, the more successful creative teams are, the harder they have to work. Because if their ad creative does well, ad spend usually increases. And as ad spend increases, their new ads burn out faster. They have to work harder to maintain the same results.
Even if a creative team can keep pace with the demands of scaling up, they’ll often run into another challenge: getting out of their creative “comfort zone,” the tendency to stick to what works. That tendency isn’t inherently bad – focusing on what works drives performance – but creative teams often get stuck in ruts of repeating what’s worked in the past. Eventually the creative goes stale, performance dips, and you have to think beyond the old playbook.
One proven way to stretch beyond the curse of “what’s worked before” is to do regular competitive analysis. The Facebook Ad Library is a great free tool for this, but other paid tools can give you insights into how ads are performing.
Doing regular, documented competitive analyses can help creative teams come up with more ideas. But it also helps them to be more data-driven. That’s critical for success right now.
Another way for creative teams to stretch is to participate in an “agency bake-off,” where the in-house creative team competes against an external creative team. The bake-off has set rules, and ad performance data is shared between both teams every week. We run agency bake-offs in 30-day sprints, with rules designed to maximize learning and performance and minimize any downsides of competition. To date, we haven’t lost one.
Then there’s the most common way to expand a creative team’s capacity: to outsource. Our Creative Studio does that for hundreds of advertisers. It gives them access to a world-class team with Disney-level storytelling skills and data-driven user acquisition expertise. We’ve built a streamlined system that makes it easy to request and approve any amount of mobile creative you need.
So if you’re ready to move beyond a one-off style of mobile creative development, it can be done. Using an outside team like Creative Studio can be particularly helpful if you want to scale quickly, or if you want to be able to expand or contract your creative development without having to hire (or fire) an internal team.
But just having “enough” mobile creative is only the beginning. Next, you have to test it. Finding that new magic ad as efficiently as possible is an advertiser’s best competitive advantage, especially now that Google and Facebook have given every advertiser access to machine learning-driven tools that make many third-party ad tech tools obsolete.
Quantitative Creative Testing is an A/B split-testing methodology we’ve developed specifically for mobile creative. It’s designed to be super-efficient with both time and ad spend, to find the sort of breakout ads that can replace a campaign’s previous control.
Quantitative Creative Testing separates new mobile ad creative into two buckets: Concepts and Variations.
Concepts are totally new, out of the box creative approaches. They tend to fail a lot, but when they succeed, they often blow the doors off everything else. About 20% of the mobile ad creative testing we do is with Concepts.
Variations are just what they sound like. They’re small tweaks we make to existing creative to see if we can get it to perform better. Testing Variations is much “safer,” in that Variations don’t tend to tank as hard as Concepts do, thus they don’t risk wasting as much ad spend. Testing Variations also lets us find out which elements of an ad are driving its performance. This is precious information for optimizing the ad and for refreshing it later on. Being able to refresh creative lets us extend its life and thus radically improves the ROI of creative assets. Strategically expanding the audiences we advertise to helps a lot as well.
To do Quantitative Creative Testing for mobile creative, we take batches of new Concepts and run them against each other in an ad set. Each new Concept gets about 50,000 impressions before we decide if it’s a winner or a loser. If it’s a winner, it gets moved up into another ad set where it will run against other winning mobile ad creative, including the current control. If the new ad can outperform all the other ads in that ad set, then it gets moved into another, primary ad set and gets the bulk of the campaign’s spend.
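The tiered promotion flow above can be sketched in Python. The 50,000-impression test window comes from the text; the installs-per-mille (IPM) metric, the function names, and the example numbers are illustrative assumptions, not the authors’ actual tooling.

```python
# Sketch of the tiered Quantitative Creative Testing flow described above.
# The IPM metric and all example numbers are illustrative assumptions.

TEST_IMPRESSIONS = 50_000  # each new Concept gets ~50k test impressions


def ipm(installs: int, impressions: int) -> float:
    """Installs per mille (installs per 1,000 impressions)."""
    return 1000 * installs / impressions


def promote_winners(concepts: dict[str, int], control_installs: int) -> list[str]:
    """Return the Concepts whose IPM beats the control's over the test window.

    `concepts` maps an ad name to the installs it earned in its 50k test
    impressions; `control_installs` is the control's installs over the same
    impression count.
    """
    control_ipm = ipm(control_installs, TEST_IMPRESSIONS)
    return [name for name, installs in concepts.items()
            if ipm(installs, TEST_IMPRESSIONS) > control_ipm]


# Example: three new Concepts tested against a control that earned 500 installs.
batch = {"concept_a": 430, "concept_b": 620, "concept_c": 505}
winners = promote_winners(batch, control_installs=500)
# Winners move up to the ad set holding previously winning creative; only an ad
# that then beats everything there reaches the primary ad set and gets the
# bulk of the campaign's spend.
```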
There are two shortcuts we take when we’re running new ads through their first 50,000 impressions, and two very good reasons we take them.
Don’t get us wrong: Brand compliance is important. It supports long-term customer loyalty, coherent messaging, and many other good things. But sometimes, when we’re testing every attribute of an ad, we run into a situation where the brand guidelines are suppressing results. Or, we simply want to test something that could work… but we can’t because it would break the brand guidelines.
To get around these limitations, we’ve developed two levels of brand compliance: “Flexible Brand Guidelines” and “Strict Brand Guidelines.”
We aim for Concepts that are 80% brand compliant, but we don’t insist on the ads being any closer than that. Why? Because we’re being data-driven rather than brand-driven. We want to know what works, even if it doesn’t fully achieve brand compliance. Also, because very few people see an ad during our tests, we’re minimizing exposure to what is ultimately a fairly small brand deviation.
If you didn’t take statistics in school or you don’t do a lot of A/B testing in your job, “statistical confidence” may be a new concept. So here’s the thumbnail explanation: To be sure the results of any split-test are valid, you need a large enough body of data (or a large enough number of actions) to know the results you’ve gotten aren’t just random chance.
In traditional testing methodology, split-tests require a 95% or even 99% “confidence level” to be considered statistically valid results. Trouble is, to get that level of confidence you need to run ads for a fairly long time. We don’t have that much time, and we don’t want to spend any more on losing ads than we have to. Running losing ads kills ROAS.
Most A/B tests have to run a long time because the ads being tested perform almost identically. When the ads perform very differently, the test doesn’t have to run nearly as long.
If Ad A is performing 300% better than Ad B, you don’t need to run the two ads very long at all. But if Ad A is only performing 5% better than Ad B, you’d have to run the two ads for quite a long time to know for sure that Ad A is really better.
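The intuition above can be checked with the standard two-proportion sample-size formula from classical statistics. This is a generic sketch, not the authors’ tooling; the 1% baseline CTR is an assumed figure for illustration, and the z-values correspond to ~95% confidence and 80% power.

```python
import math


def impressions_per_arm(base_rate: float, lift: float,
                        z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate impressions needed per ad to detect a relative lift,
    using the two-proportion normal approximation
    (~95% confidence, ~80% power with the default z-values)."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)


# Assume a 1% baseline CTR (illustrative):
print(impressions_per_arm(0.01, 3.00))  # 300% better: a few hundred impressions
print(impressions_per_arm(0.01, 0.05))  # 5% better: hundreds of thousands
```

A 300% performance gap is detectable in a tiny fraction of the 50,000-impression test window, while a 5% gap would blow far past it, which is why small improvements aren’t worth chasing in this methodology.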
For our mobile creative testing, we aren’t looking for 2% or 5% improvements. We’re looking for breakout ads with 20%, 40%, or greater improvements, so we simply toss out ads that don’t perform significantly better than the rest. We’re looking for earthquakes, not tremors.
Those two shortcuts let us run new ads through our testing machine MUCH faster than if we had to obey the traditional laws of brand compliance and statistical relevance. Compressing that testing window saves an enormous amount of ad spend and lets us generate new high-performance mobile ad creative fast enough to stay ahead of creative fatigue.
In an environment where time is precious, being able to effectively test creative faster than normal is a significant competitive advantage. If we can iterate ads faster than our competitors, our campaigns can become dramatically more efficient. That means we can compete against advertisers with ad budgets three, five, even ten times larger than our own.
And speed is just the first benefit. Because these tests run quickly, they also cost far less than traditional testing. We’ve saved a lot of ad spend with the abbreviated testing cycle, and we’ve optimized the budget we did use by not spending money on underperforming ads.
Because our ads are so much more efficient than our competitors’, we can outbid them. And even while we’re outbidding them, we’re still getting dramatically higher ROAS than they are.
This dynamic can be so extreme that even a small (but way more efficient) advertiser can sometimes take out a larger, better-funded competitor. If you’re getting 200% ROAS and your primary competitor is only getting 20% ROAS, even if they’re a Fortune 500 with a pile of money to burn, you can still outbid them and take the best ad placements.
Being able to outbid competitors is especially critical for mobile advertisers. Because of the nature of mobile advertising, there isn’t much space on the screen. It’s a winner-take-all situation where the best ad can hog all the inventory, leaving scraps for everyone else.
In principle, any advertiser who can own the top ad placement (often the only ad placement in an app or mobile interface) can basically own that entire ad platform for certain audiences. And that’s why strategic, ongoing A/B testing of mobile creative is the ultimate competitive advantage.