Let’s recap what we’ve covered up until now:
As ad tech becomes less and less of a competitive advantage, advertisers have had to find a new way to stay ahead of the pack: Creative.
But not just any creative: They need the type of breakout creative that delivers outsized results. Not just 10x ads, but 100x ads.
To find that one-in-20 ad, they need a testing protocol that is as efficient as possible. They can’t afford to blow half of a week’s budget finding a new ad that may only last a few days once all of the remaining budget is funneled into it.
Most experienced advertisers know you can’t recognize this type of super-high-performing creative just by looking at it. This isn’t a matter of “I think this ad will do better.” Opinions don’t matter – performance matters. Breakout creative often doesn’t look like anything terribly special… until you put it into a live campaign and look at the reports. Then, that one ad starts to look very different from all the rest.
This is why creative can be such a competitive edge. But only if you know how to find the 100x ads. Finding a 100x ad will require you to develop a lot of prototype ads. You’ll also have to test the bejesus out of them. That’s the other challenge with creative testing: A couple of split-tests ain’t gonna cut it.
To test ads the way we wanted to (and needed to, given the budgets of our partners), we developed a hybrid testing protocol called Quantitative Creative Testing. Quantitative Creative Testing is not quite A/B split-testing and not quite multivariate testing. It is a hybrid specifically designed for high-volume user acquisition advertising. Here’s how it works:
There are three flavors of ad creative tests: Concepts, Variations, and Creative Refreshes. Here are the key differences:
Concepts are brand new, completely different ads. They are big “outside the box” style approaches that can result in large improvements to revenue… or large losses. Concepts are the core drivers of creative success on Facebook, but they have to be used in a limited way because when they fail, they tend to fail hard.
Variations leverage pieces of winning concepts. They reposition existing elements (headlines, CTAs, colors, composition) to create something similar but new. Because variations don’t have the big differences that concepts have, and because they’re based on high-performing ads, variations tend to generate smaller, incremental wins. And smaller losses.
Creative refreshes also leverage pieces of winning concepts. Unlike variations, where simple elements such as headlines, colors, and calls to action are changed, creative refreshes keep those elements and the winning concept but change the ad’s main content, such as characters or other creative components. Creative refreshes help your original ad stay profitable for a longer period of time.
Concepts: change many elements; large changes & impact; low success rate (5%).
Creative refreshes: change main content; keep header & footer; keep winners alive indefinitely.
Variations: change only one element; use A/B testing methods; small changes & impact.
Our Quantitative Creative Testing framework operates in five phases:
1st Phase: Competitive analysis
2nd Phase: Simple variation testing
3rd Phase: Advanced variation testing
4th Phase: Create to Convert
5th Phase: New concept ideation
The general idea of the process is to spend about 80% of the time optimizing the best ads and creating new variations. This limits the amount of non-converting spend, and also lets us prototype and iterate new ads rapidly.
The remaining 20% of the time is spent with new concept ideation – big “pie in the sky” ideas that often fail but sometimes generate a 100x ad.
Before we start creating new ads, it’s smart to see which ads competitors are running, and which of their ads appear to be successful. Running a competitive analysis before producing ads helps identify top competitors’ best ads and provides an endless supply of “tested” concepts, which reduces the high failure rate.
To do this yourself, identify your top ten competitors for Facebook ads. You can conduct a creative audit from their Facebook page by looking under the “Info and Ads” tab. This won’t tell you how well the ads are performing, but it shows you what ads the company is running. Paid competitive tools like SocialPeta can provide deeper performance analytics, such as estimates for spend and impressions.
This background work will optimize your ad spend because you’ll create ads similar to other companies’ best-performing ads. It will also give you a good creative framework based on what companies like yours are doing.
Facebook recently added the “Info and Ads” tab, which lets anyone see which ads a page is running.
You may also use an Ad Insights tool such as Adsviser2.0, Social Ad Scout, PowerAdSpy, Connect Explore, SocialPeta, AdSpy, AdSwiper, etc.
Now we’re into the actual creative testing, but we’re going to start carefully. Simple variation testing is where we basically tear apart ads that are already working and find out which elements of those ads drive results. This minimizes the financial risk inherent in testing, and it also gives us some best practices to apply to new ad concepts later. Here are a few of the elements we like to test:
Calls to Action / Buttons
Text headers: Text placement, text length (4-6 words tend to perform best), text color and font. Text versus no text.
Image format: Square, horizontal, vertical or stories.
Video length: 6, 10, 15 seconds.
Once you know which element or elements are the primary performance drivers, then you have some very valuable information. It can be used going forward for ad variations and for new ad concepts. This variation testing basically gives you a “best practices” template for future ads.
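The end result of this phase can be reduced to a per-element “winner table” that seeds the best-practices template. Here is a minimal sketch of that reduction; the element names and CTR figures are hypothetical, not real test data.

```python
# Hypothetical results from simple variation testing: each ad changes one
# element against the control. All names and CTR figures are illustrative.
results = [
    {"element": "header_length", "value": "4-6 words",  "ctr": 0.021},
    {"element": "header_length", "value": "10+ words",  "ctr": 0.014},
    {"element": "video_length",  "value": "6 seconds",  "ctr": 0.019},
    {"element": "video_length",  "value": "15 seconds", "ctr": 0.016},
    {"element": "image_format",  "value": "square",     "ctr": 0.018},
    {"element": "image_format",  "value": "vertical",   "ctr": 0.022},
]

# Keep the best-performing value of each element as the "best practice".
best = {}
for r in results:
    if r["element"] not in best or r["ctr"] > best[r["element"]]["ctr"]:
        best[r["element"]] = r

for element, winner in sorted(best.items()):
    print(f"{element}: use {winner['value']} (CTR {winner['ctr']:.1%})")
```

In practice this table is rebuilt as new variation tests complete, so the template tracks what is currently working rather than what worked last quarter.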
Advanced Variation Testing takes everything we learned from competitive research and ad elements testing and uses it to start building new ads. As you know, there’s an awful lot to test. Here are some of our favorites:
Start and End Cards: There are plenty of things to test with cards, a.k.a. calls to action. Try placing them at the beginning or at the end of videos. Test which call to action to use, and test CTA colors.
Colors: We’ve found that primary colors work best – the bolder the better.
Ad copy: Different ad copy, the placement of the ad copy, copy color and font. No ad copy.
Mobile display: Showing a mobile device in the ad versus not showing a mobile device.
Background image: Busy, or plain? Colored, or a patterned background? We’ve found that simpler backgrounds tend to perform better.
Image layout: Split screen? Split it vertically or horizontally? Or try a grid of images.
Images: As you know, images matter a lot, and so they get tested a lot. Often, we test images based on whether they are user-generated or stock photos. Typically, user-generated photos, or photos that look user-generated, outperform “magazine-like” photos.
Product Display: One product versus multiple products.
Logo Display: Include or exclude the App Store and Google Play badges. Removing them tends to lift performance by about 15%.
Logos and Brand Placement: Top, bottom, left or right? Or none at all… ads often perform better without branding elements.
These tactics leverage Facebook’s “Create to Convert” feature that lets advertisers take still images and convert them into videos. Create to convert offers four ways to make stills into videos:
Basic motion: Add one or two moving elements in front of a still image.
Brand in motion: Your brand or logo element/s move in front of a still image or stock video footage.
Benefit in motion: The benefit of your ad or its primary message moves.
Demo in motion: Use video of how your app, website, service, product or feature works, shown over a static image so it looks like the ad is showing someone using your app (or website, etc.).
Of the four options, our two favorites are Benefit in Motion and Demo in Motion. Here’s why we like them:
Benefits in Motion
Shows the user what to expect and why they should play – explains the value of the app very quickly.
Gives the viewer a flavor of what’s in the app: characters, levels, action, features, and more.
Works best with short ad copy that highlights benefits.
Allows advertisers to animate multiple benefits. Facebook says ads perform best with two or three benefits in motion. Applovin recommends one. We recommend testing how many benefits will work best for each ad.
Sample benefits might include: Is the app safe and trustworthy? Will it save people money? Are there multiple levels? Can the app experience be customized?
Demo in motion
Lets you show a screen capture of the app or show gameplay in a phone. It seems to work best to show the demo in a corner of the ad at first, and then zoom in so the gameplay view fills the ad space. This lets you show the most compelling or unique features of the app, rather than just telling the viewer about them.
This is where we try to capture lightning in a bottle and produce a breakout, 100x, unicorn ad. About 95% of ads will fail to outperform your existing ads, and some of those losing ads will fail hard. To minimize the losses, usually about 20% of creative work is focused on new concepts. To develop these new concepts, we:
Review & leverage what we learned from competitive research on Facebook
Focus on emotions, not gameplay: Relax, So Hard, Challenging, Killing Time, Escape, Win!
Use storytelling with characters to frame, present, and create new concepts.
Leverage character animation and new assets in new ways. If the brand has a strong character or spokesperson, we’ll use it. If not, we won’t.
Create variations of all these new concepts, drawing on what simple variation testing taught us about which ad elements and combinations of ad elements tend to work best.
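The 5% hit rate quoted above has a practical planning consequence. Assuming each new concept independently has roughly a 1-in-20 chance of beating the control (an assumption based on the figure in the text), a little arithmetic shows how many concepts a team should budget for:

```python
import math

# Assuming each new concept has a p_win chance of beating the control,
# how many concepts must be tested for a given chance of at least one winner?
def concepts_needed(p_win: float, confidence: float) -> int:
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_win))

print(concepts_needed(0.05, 0.50))  # 14 concepts for a coin-flip chance
print(concepts_needed(0.05, 0.90))  # 45 concepts for a 90% chance
```

This is why new concept ideation gets a standing 20% slice of the work rather than a one-off push: at a 5% hit rate, dozens of concepts are needed before a winner is likely.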
Not every advertiser has the bandwidth to develop this much creative, or to manage this many tests. But it can be done, and it works. It can definitely help your Facebook ads regain a competitive edge. If you’re seeking assistance with creative strategy or production, please check out our Creative Studio. Once creative development and testing is maxed out, it’s time to turn to the next best competitive advantage: Audience selection.
Here’s a scenario we see a lot: A partner has created a testing system that is good enough to test all their new creative and deliver a breakout, control-defeating ad every week.
They’ve got their competitive edge back. And so the creative machine cranks out new ads, and the testing machine evaluates them like a digital assembly line.
They can now reliably produce a breakout ad every week. They meet and beat their KPIs. The team lead gets a nice pat-on-the-back chat with the C-Suite. Until someone sees one of the ads they’re running and exclaims, “That ad’s not brand compliant!” And the whole system stops.
We’ve seen this situation many times before. We’ve prompted it, too – by providing both the creative and the creative tests. After much trial and error, here’s what seems to resolve the situation best: prototype ads.
Prototype ads let you make data-driven decisions while still mostly staying within brand-driven rules. So what are prototype ads? Prototype ads are concept ads. As mentioned earlier, concepts are brand new, completely different ads – big “outside the box” style approaches that can result in large improvements to revenue… or large losses. The difference is that prototype ads only have to meet about 60% of brand guidelines.
This will require some buy-in from the brand marketers, but we’ve found that if ads are at least 60% compliant with brand guidelines, they won’t do much damage to the brand. And having that much freedom with ad creation lets the creative team develop ads rapidly, which is essential given the volume of creative required. Forcing ads to be even 10% more brand compliant – so they meet 70% of brand requirements – makes creating new ads a lot harder. It slows ad creation down substantially and makes it more expensive.
If a prototype ad happens to survive the first round of testing, it can be retooled to better fit within brand guidelines. We’ve found that it’s far more efficient to take a winning ad and tweak it a bit to make it brand-compliant than it is to take an underperforming but brand-compliant ad and incrementally test it until it finally (if ever) performs well. Prototype ads work under the premise of “fail fast”. The first round of testing for prototype ads is ruthless. Each ad will only have about 10,000 impressions to prove itself. This has three key benefits:
It lets us test a large number of ads very quickly. If 95% of ads we test are going to fail to beat the control, it’s essential to be able to test fast so we can weed through all the losing ads to find that one breakout gem.
If an ad is bending brand guidelines, this limits how much the ad will be seen. 10,000 impressions are not enough exposure to damage a brand, especially if the ad is at least 60% brand compliant.
It minimizes wasted ad spend. Prototype ads only get 10,000 impressions to prove themselves. That’s roughly $15-20 in ad spend. So no more spending $500 each on ads that don’t perform.
“But 10,000 impressions is not enough to achieve statistical significance,” someone says. “You’re going to get a lot of false positive and false negatives with that system.”
That would be true – if we were doing a typical A/B split test. But we’re not. We’re not looking for small 5 to 10% improvements. We’re looking for that one breakout ad that will outperform 95% of other ads.
We’re looking for earthquakes. A prototype ad test is fundamentally different than a standard A/B split-test. It’s not looking for which version performs incrementally better – it’s looking for breakout results.
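That intuition can be checked with the normal approximation for a proportion. Assuming a baseline CTR of 1% (an illustrative figure, not from the text), 10,000 impressions gives a standard error of about 0.1 percentage points – far too coarse to confirm a 5% relative lift, but easily enough to spot a breakout:

```python
import math

def detectable(base_ctr: float, relative_lift: float, n: int, z: float = 1.96) -> bool:
    """True if a lift of this size clears ~95%-confidence noise on n impressions."""
    se = math.sqrt(base_ctr * (1 - base_ctr) / n)  # std. error of the observed CTR
    return base_ctr * relative_lift > z * se

n = 10_000   # impressions per prototype ad
base = 0.01  # assumed 1% baseline CTR (illustrative)

print(detectable(base, 0.05, n))  # a 5% relative lift: lost in the noise -> False
print(detectable(base, 3.00, n))  # a 4x ad (300% lift): unmistakable -> True
```

The same noise floor that makes 10,000 impressions useless for incremental optimization makes it perfectly adequate for earthquake detection.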
Prototype ads may not always be perfectly true to brand guidelines, but they are true enough. The added performance they deliver is a good trade for temporarily bending branding rules… if only by a little.