In a detailed blog post today, Facebook announced its plans for handling the impact of Apple’s much-anticipated iOS 14 release. Apple’s deprecation of the iOS Identifier for Advertisers (IDFA) will require apps to ask users for permission to collect and share identifying data going forward.
The company will remind its users that they have a choice about how their information is used on Facebook, and about its Off-Facebook Activity feature, which allows users to see a summary of the off-Facebook app and website activity businesses send to Facebook and disconnect it from their accounts.
For partners, Facebook will release an updated version of its Facebook SDK to support iOS 14, which will provide support for Apple’s SKAdNetwork API. To mitigate the impact on the efficacy of app install campaign measurement, Facebook is asking businesses to create a new ad account dedicated to running app install ad campaigns for iOS 14 users.
The company believes that Apple’s changes will disproportionately affect its Audience Network, given its heavy dependence on app advertising. The expectation is that advertisers’ ability to accurately target and measure their campaigns on Audience Network will be impacted, and publishers should expect their ability to effectively monetize on Audience Network to decrease. In fact, Apple’s updates may render Audience Network so ineffective on iOS 14 that it may not make sense to offer it there at all. Facebook, however, expects less impact on its own advertising business.
Facebook is encouraged by conversations and efforts already taking place in the industry to get this right for small businesses – including within the World Wide Web Consortium (W3C) and the recently announced Partnership for Responsible Addressable Media (PRAM).
Our Take on Facebook Plans:
There is still much uncertainty about all the implications of Apple’s big change in data sharing policies. Both publishers and advertisers will need to be agile in their approach to these changes.
For more guidance on the short- and long-term steps you can take to prepare for these changes, see our recommendations in Part 1 and Part 2 of our roundup articles on the IDFA Armageddon.
Apple’s announced deprecation of the iOS Identifier for Advertisers (IDFA) is the biggest change to the mobile app advertising ecosystem in the past 10 years. For some, Apple’s IDFA change will be company-crushing; for others it will create a tremendous opportunity. This is Part Deux of our IDFA Armageddon roundup articles (in Part I, you will find quotes from 21 leaders in mobile app advertising). If you are trying to understand what the change means for your business, read on: we have done the hard work of rounding up and summarizing the key articles and quotes for you.
Facebook will not collect the identifier for advertisers (IDFA) on its own apps on iOS 14 devices but may revisit this decision as Apple offers more guidance.
“It seems nearly impossible that advertisers won’t face deteriorating economics on Facebook in the short term as IDFA deprecation materially impairs Facebook’s ability to precisely target users. Over the long term, I believe that Facebook will find a path to its current level of ad serving efficiency without needing advertising identifiers. But the content of its own white paper underscores very clearly how important personalization is for ad targeting, and IDFA deprecation damages Facebook’s ability to deliver that kind of personalization.” https://mobiledevmemo.com/idfa-deprecation-is-facebooks-sword-of-damocles/
“We’re still trying to understand what these [iOS 14 privacy update] changes will look like and how they will impact us and the rest of the industry, but at the very least, it’s going to make it harder for app developers and others to grow using ads on Facebook and elsewhere… Our view is that Facebook and targeted ads are a lifeline for small businesses, especially in the time of Covid, and we are concerned that aggressive platform policies will cut at that lifeline at a time when it is so essential to small business growth and recovery.” https://appleinsider.com/articles/20/07/30/facebook-says-apples-ios-14-could-hinder-ad-revenue
“We don’t think fingerprinting is going to pass the Apple test. By the way, just to clarify, every time I’m saying something about a method that is unlikely, it doesn’t mean I don’t like that method. I wish it would work, but I just don’t think it would pass the Apple sniff test… Apple said, ‘If you do any form of tracking and fingerprinting is part of it, you have to use our pop up…”
“The challenge with that is it means that Apple needs to do us a favor and give us the IDFA. That means it’s unlikely because they just killed it very publicly. Why would they have an appetite to give anyone that access again?”
“The result [of the iOS 14 privacy update], most mobile experts think, will be a below-20% opt-in rate for IDFA tracking. (That’s probably high.) This is good for privacy. It’s also bad for legitimate advertisers and marketers. But potentially worse is Apple retaining advantages that other ecosystem players cannot, simply because Apple owns the iOS platform. Everyone else needs permission to allow “tracking,” but Apple retains its access to more data. And Apple potentially has a lot of on-device data. While the company is privacy-centric, and it does mix your data with groups of 5,000 other people to only target large segments, not individuals, Apple explicitly says it uses data about “your devices’ connectivity, time setting, type, language, and location” to personalize ads to you.”
“The notion of ‘deterministic’ performance attribution as we knew it for the past eight years has been a hoax. Most know this, but it was much easier to just accept the status quo. And you at Apple are really good at changing the status quo for the better. Just look at the iPod, iPhone, and iPad as examples. Uber was the perfect example of how attribution has been a hoax. It is a mobile-first company that used deterministic attribution based on user-level data, analytics, and so on, and it later found out that 80% of its ad spend – more than $100 million – was just cannibalizing its organic new users. Uber is just the tip of the iceberg for companies that believe they are making data-driven decisions based on wrong information.”
“SKAdNetwork 2.0 is a large improvement, and while far from perfect, it creates a well-guarded environment where advertisers can regain trust in their ad spend with no need for third-party attribution to guesstimate who should get credit… SKAdNetwork still gets something wrong: 100% of attribution credit goes to the last click. That means that a user watching a video ad on addressable TV, listening to an audio ad on Spotify, and seeing a full pre-roll ad on YouTube but ends up clicking a tiny button ad on a random app will get credit.” https://www.adexchanger.com/data-driven-thinking/an-open-letter-to-apple-thank-you-for-throwing-the-industry-in-flux/
“As Eric Seufert mentions in his MobileDevMemo, a lot of parties in the advertising ecosystem will need to find new ways to provide value. Be it attribution, retargeting, programmatic advertising, ROAS based automation – this will all become incredibly vague and you can already see the attempts of some of these providers to find new sexy slogans and test the interest on the advertiser’s side for new incredibly risky ways of doing business as if nothing has happened.”
“Personally, I do expect that in the short term we will see a drop in top-line revenues for hyper-casual games, but I don’t see their death. They will be able to buy even cheaper and as their focus is to buy untargeted, they will adjust their bids against their expected revenues. As CPMs drop, this volume game might be able to work, though at smaller top-line revenues. If the revenues are then big enough is to be seen. For core, mid-core, and social casino games, we might see tough times: No more retargeting of whales, no more ROAS based media-buying. But let’s face it: the way we were buying media was always probabilistic. Unfortunately, now the risk will increase significantly and we will have much fewer signals to react quickly. Some will take that risk, others will be cautious. Sounds like a lottery?” https://www.gamedev.net/blogs/entry/2269865-idfa-mobile-games-industry-up-for-a-bumpy-ride/
Serving as a postback solution for sharing that data with 3rd parties (FB, Google)
Helping publishers optimize
Helping publishers make algorithmic decisions on where to spend and not to spend”
“We’ll probably get only 10% of people to give consent, but if we get the right 10%, maybe we don’t need more. I mean, by day 7 you lost 80-90% of users anyway. What you need to learn is where that 10% are coming from … if you could get consent from all the people who pay, then you’d be able to map where they come from and optimize towards those placements.”
“Publishers might go after hyper-casual games or build hub apps. The strategy is to acquire highly converting apps (conversion to install), drive users there cheaply, and then send those users to the better monetizing products. What is possible is that you could use IDFV to target those users… It’s a pretty good strategy to retarget users. You could use an in-house DSP to do that, especially if you have multiple apps in the same category, like casino apps. In fact, it doesn’t have to be a gaming app: any app or a utility app could work as long as you have a valid IDFV.” https://www.singular.net/blog/n3twork-prepare-ios-14/
“If an app you use and enjoy on a regular basis presented this screen to you, how likely are you to select Allow Tracking?
1 – Extremely unlikely… 5 – Extremely likely. Of the 1,088 respondents who answered 1-5, the data suggests only 16% of iOS users would consider selecting the Allow Tracking option from an app they regularly use and enjoy. Full results of the poll are published here.”
“One approach to the measurement I’ve seen championed in the weeks since WWDC has been that of probabilistic attribution: using behavioral profiles to attempt to associate (“tag”) users with the channel or campaign most likely to have sourced them. I think this will be difficult to do competently…. without the ability to update model priors with successful prediction results, the models will simply privilege the channels that currently produce the most, highest-value traffic for advertisers.”
“My sense is that the best strategic path forward for mobile performance advertisers is to lean into the SKAdNetwork framework and to fundamentally shift their advertising approach away from the user- and device-centric targeting to campaign-level optimization. Doing this well means establishing an acute product focus on delivering conversion values to ad networks that are predictive of user LTV…. leaning into the SKAdNetwork framework is a relatively low-risk bet: Apple appears to be championing SKAdNetwork as the central measurement tool for iOS, and advertisers should understand how to build infrastructure and strategy around it.” https://mobiledevmemo.com/how-to-scale-and-optimize-marketing-spend-with-skadnetwork/
“Apple introduced the AppTrackingTransparency (ATT) framework that manages access to the IDFA with required user consent. Apple also outlined exemptions for this framework that might provide the ability for attribution as it exists today. We believe that focusing on this framework and creating tools within these rules is the best way forward – but before diving into this further, let’s have a look at the other potential solution. Often mentioned in the same breath, SKAdNetwork (SKA) is an entirely different approach to attribution that removes user-level data entirely. Not only that, but it also puts the burden of attribution on the platform itself.”
“Adjust and other MMPs are currently working on cryptographic solutions using practices such as zero-knowledge theorems that might allow us to attribute without having to transfer the IDFA off the device. While this may be challenging if we have to use on-device for source and target app, it is easier to imagine a solution if we are allowed to receive the IDFA from the source app and only have to perform the matching on-device in the target app… We believe that obtaining consent in the source app and on-device attribution in the target app might be the most viable path for user-level attribution on iOS14.” https://www.adjust.com/blog/the-future-of-the-ad-ecosystem-on-ios-14/
“Kochava’s layered approach to tackling mobile measurement post-iOS 14 Privacy Update
Users that opt-in to IDFA-based tracking.
App Clip deep links (new to iOS 14) – apparently App Clips (functional snippets of apps) will be triggered by a deep link that can pass on a click ID.
SKAdNetwork for cohort data
Harvesting IP and User Agent data
Contextual, i.e. device parameters that can be queried by the MMP’s SDK, e.g. screen brightness, audio volume, locale, language, timezone.
Hyper-granular lookback windows – down to just 1 minute post-click – to increase the accuracy of the probabilistic model. Kochava suggests that sub-10-minute lookback windows offer 98% probabilistic accuracy vs. <90% accuracy for 0.25-3 hour windows.
Aggregation, combination, and correlation of these mobile measurement layers should augment measurement accuracy to an extent that would trump any purely probabilistic model.”
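The probabilistic layer described above (IP + user-agent matching inside a short lookback window) can be sketched roughly as follows. This is an illustrative assumption of how such last-click matching might work, not Kochava’s actual implementation; the field names and the 10-minute window are chosen for the example.

```python
# Illustrative sketch: probabilistic last-click matching on IP + user agent
# within a short lookback window. Field names and window length are assumptions.
from datetime import datetime, timedelta

LOOKBACK = timedelta(minutes=10)  # sub-10-minute windows are claimed to be most accurate


def match_install(install, clicks):
    """Return the most recent click sharing IP + UA within the lookback window, or None."""
    candidates = [
        c for c in clicks
        if c["ip"] == install["ip"]
        and c["ua"] == install["ua"]
        and timedelta(0) <= install["ts"] - c["ts"] <= LOOKBACK
    ]
    return max(candidates, key=lambda c: c["ts"], default=None)


clicks = [
    {"ip": "1.2.3.4", "ua": "iPhone13,2/iOS14.0", "ts": datetime(2020, 9, 1, 12, 0), "campaign": "A"},
    {"ip": "1.2.3.4", "ua": "iPhone13,2/iOS14.0", "ts": datetime(2020, 9, 1, 12, 5), "campaign": "B"},
]
install = {"ip": "1.2.3.4", "ua": "iPhone13,2/iOS14.0", "ts": datetime(2020, 9, 1, 12, 8)}
print(match_install(install, clicks)["campaign"])  # B (most recent qualifying click wins)
```

The last-click rule here is exactly why such matching is probabilistic: two devices behind the same NAT with the same OS build would collide, which is why shorter windows raise accuracy.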
“Singular announces SKAN: an open-source, SKAdNetwork-based framework to support the entire mobile marketing ecosystem in running user acquisition post-iOS 14 with an eye to ROAS.”
“By default, the entity that receives the SKAdNetwork notification (postback) is the ad network, which needs to register with Apple. Ad networks can then forward install postbacks to advertisers and MMPs to validate them, which is rather straightforward as Apple cryptographically signs them.
Singular’s Secure-SKAN solution
There is however one critical flaw that has to be addressed: Apple does not sign conversion values in SKAdNetwork. That effectively means that conversion values are self-reported by ad networks. That’s where Singular’s Secure-SKAN solution comes into play. Secure-SKAN is a solution where Singular as the MMP will register jointly with every partner that will be publishing your ads. SKAdNetwork supports this by design, and we’ve been working closely with partners and advertisers (including Apple) to validate the solution is technically equivalent and solves the trust piece. On the ad network side, adopting Secure-SKAN is straightforward as you essentially receive the same exact postback — however, Singular is simply forwarding it.
When done right, the six bits of conversion values provided by SKAdNetwork can do quite a lot — including cohorts, which can be used for ROAS analysis. To keep ourselves honest here, six bits won’t give us the ability to report on 180-day cohorts. We wish it did. But at the same time, establishing 3-day and even 7-day cohorts provides a great foundation for optimization that takes you a whole lot closer to true LTV.
Conversion management in SKAdNetwork is a fascinating topic, and we can’t wait to provide more details about it soon.
By design, SKAdNetwork data is highly fragmented. We have solved a very similar problem for ad spend and ad network data, so these challenges are quite familiar. Then there are conversions: you will need to translate conversion values into meaningful event information and tie it all back to the right campaigns.
As you may recall, the fields available in the SKAdNetwork response are the following:
Source, which would normally be the attributed ad network or any other registered publisher that called loadProduct() to display the ad
Campaign ID, which is generated by the ad network
Source App ID, which is the actual publisher app that showed the ad
Leveraging Singular’s Data Connectors, we can connect this data to campaign names, app names (for the publishing app), bids, and other metadata at the campaign and publisher level and of course — ad spend. Tying this together with 3-day, 5-day, or even 7-day cohorts can produce a powerful report that continues to enable user acquisition teams to make informed decisions about their ad spend.” https://www.singular.net/blog/skadnetwork-solution-ios14/
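To make the six-bit conversion value mentioned above concrete, here is a minimal bit-packing sketch. The field split (a 2-bit revenue bucket, 2-bit retention day, 2-bit event flag set) is a hypothetical scheme for illustration, not Singular’s actual encoding.

```python
# Hypothetical sketch: packing cohort information into SKAdNetwork's
# 6-bit conversion value (0-63). The 2+2+2 field split is an assumption
# for illustration, not any vendor's real scheme.

def encode_conversion_value(revenue_bucket: int, retention_day: int, events: int) -> int:
    """Pack three 2-bit fields into one 6-bit conversion value."""
    assert 0 <= revenue_bucket < 4 and 0 <= retention_day < 4 and 0 <= events < 4
    return (revenue_bucket << 4) | (retention_day << 2) | events


def decode_conversion_value(value: int) -> dict:
    """Unpack a 6-bit conversion value back into its fields."""
    assert 0 <= value < 64
    return {
        "revenue_bucket": (value >> 4) & 0b11,
        "retention_day": (value >> 2) & 0b11,
        "events": value & 0b11,
    }


cv = encode_conversion_value(revenue_bucket=2, retention_day=1, events=3)
print(cv)                           # 39
print(decode_conversion_value(cv))  # {'revenue_bucket': 2, 'retention_day': 1, 'events': 3}
```

With only 64 possible values, every bit spent on one dimension is a bit unavailable to another, which is why the article notes that 3-day and 7-day cohorts are feasible while 180-day cohorts are not.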
“The extreme ends of the mobile gaming spectrum — “core” games driven by regular in-app events and extreme in-app purchase monetization, and hypercasual games that monetize exclusively through advertising and allow the aforementioned core games to go “whale hunting” via IDFA-targeting — thrive in the current, profile-centric advertising environment, and both of these categories face significant headwinds when the IDFA is deprecated.”
ConsumerAcquisition shares Apple’s values when it comes to protecting user privacy. As an industry, we must embrace the new rules of iOS 14 and create a sustainable future for both app developers and advertisers. Please check out Part I of our IDFA Armageddon roundup.
If I had to guess about the future:
IDFA Armageddon Part Deux: Short Term
We encourage all publishers to talk to Apple and seek clarification on process and end-user consent along with the use of IDFVs & SKAdNetwork product road map, etc.
We believe publishers will aggressively optimize sign-up funnels, consent flows, and onboarding processes to maximize consent and privacy opt-ins – or live with campaign-level-only metrics and lose end-user targeting.
The growth and data science teams we’ve spoken with have already started experimenting.
If a mobile app company would like to continue to optimize towards ROAS, we encourage them to think of privacy consent as a step in the UA conversion funnel necessary to show targeted ads to consumers.
Companies will aggressively experiment with flow optimization and user messaging.
They will get creative, testing web-based registration flows to preserve the IDFA and then cross-selling users into the App Store for the payoff.
Here’s one solution from a game company
We believe phase 1 of the iOS 14 rollout could look like this:
In the first month of the iOS rollout, the supply chain for performance advertising will experience a short-term hit, especially for DSP remarketing.
Mobile app advertisers will invest heavily in creative optimization of their ads as their primary lever to drive performance.
Publishers will start to optimize user consent flows.
UA Teams & Agencies will be forced to rebuild campaign structures.
The 100-campaign limit imposed by Apple will force mobile app advertisers to rethink the number of accounts they run and how many external partners they use. This will create temporary inefficiency as new structures are re-optimized.
Google will receive a traffic bump from mobile app advertisers pushing more ad dollars where it can be measured. Those advertisers will be surprised when they evaluate how much Google traffic comes from Apple. We’re advising clients to check out their Apple contribution to their YouTube traffic to get a handle on traffic allocation. We’re seeing 10-25% Apple traffic coming out of Google for larger advertisers.
User “opt-in” sharing increases but is estimated to only hit a max of 20%.
Developers implement Google’s Firebase to gain monetization efficiencies.
Advertisers shift time from optimizing smaller ad networks to customizing in-game experiences that produce predictive behavioral dynamics and result in higher-value users (whales).
Fingerprinting users rapidly expands in an attempt to maintain the status quo.
A publisher’s internal use of the IDFV (which may not leverage fingerprinting) may not create a privacy problem if it is not used for re-marketing/re-targeting. If abused, Apple is sure to shut it down quickly.
While fingerprinting may be outside of Apple’s control, it appears highly likely to fragment the ecosystem and create more barriers to entry for building competitive measurement solutions.
If a publisher or MMP sends their fingerprinting data to a 3rd-party network, this may violate Apple’s policy and result in the app being rejected from Apple’s App Store.
Open questions remain about how audiences composed of app data plus down-funnel user actions (purchases, etc.) will be created and used without user consent.
Note: Hyper-casual advertisers leveraging broad targeting may initially benefit as the “high-end whale hunters” pull back, causing temporary CPM deflation. We expect high-cost-per-subscriber, niche, and hard-core games to be most impacted. We recommend these companies front-load incremental creative testing now to bank wins.
IDFA Armageddon Part Deux: Mid Term
Fingerprinting will be an 18-24 month solution, entered into everyone’s internal algorithm/optimization black box. As SKAdNetwork matures, Apple is likely to shut down fingerprinting or reject apps that violate its App Store policy.
There will be sustained challenges for programmatic exchange and DSP solutions.
Growth teams will find a new religion in “mixed media modeling,” taking lessons from brand marketers while seeking to broaden last-click attribution to open new sources of traffic. Success will be based on deep experimentation and alignment of data science and growth teams. Those companies that get there first will have a significant strategic advantage in achieving and sustaining scale.
SKAdNetwork must be enhanced with Campaign/AdSet/Ad level information to keep the mobile ad network functioning.
Mobile apps that monetize mostly with ads will pull back. Revenue is likely to decrease with weaker targeting but should normalize over the next 3-6 months.
IDFA Armageddon Part Deux: Long Term
User consent optimization becomes a core competency.
Google deprecates the GAID (Google Advertising ID) – summer of 2021.
Human-driven creative ideation and optimization become the primary lever for user acquisition profitability across networks.
Incrementality and optimal channel mix become critical.
We are looking forward to working with our clients, Apple, Facebook, Google, and MMPs to participate in shaping the future of our mobile app industry advertising.
Look out for more updates from us regarding IDFA changes.
Oh yeah, and now a word from our lawyers: Nothing stated in IDFA Armageddon Part Deux is legal advice. Please work closely with legal and other professional advisors to determine how IDFA changes, GDPR, CCPA, or other laws may or may not apply to you and your business.
For the third year in a row, Inc. magazine has named Consumer Acquisition to its annual Inc. 5000 list, which recognizes the most successful companies within the American economy’s most dynamic segment—its independent businesses. Intuit, Zappos, Under Armour, Microsoft, Patagonia, and many other well-known names gained their first national exposure as honorees on the Inc. 5000.
Inc. Magazine Inc. 5000
Brian Bowman, CEO and founder of Consumer Acquisition: “We are honored to make this distinguished list that has previously included such notable companies. Consumer Acquisition’s growth as an organization has only been achieved through the dedication of our team and partners, who are a credit to our continued success.”
Not only have the companies on the 2020 Inc. 5000 been very competitive within their markets, but the list as a whole shows staggering growth compared with prior lists as well. The 2020 Inc. 5000 achieved an incredible three-year average growth of over 500 percent and a median rate of 165 percent. The Inc. 5000’s aggregate revenue was $209 billion in 2019, accounting for over 1 million jobs over the past three years.
Additionally, the news of Consumer Acquisition being featured in Inc. Magazine Inc. 5000 list comes on the heels of Dustin Engel joining as President to lead the company into its next phase of growth. Engel brings proven experience in M&A, client services, and high-growth marketing and strategy for Fortune 500 brands.
High-performance creative is a rare thing in social advertising. In our experience, after spending over $3 billion driving UA across Facebook and Google, usually only one out of 17 to 20 ads can beat the “best performing control” (the top ad). If a piece of creative doesn’t outperform the best control, you lose money running it. Losers are killed quickly, and winners are scaled profitably.
The reality is, a vast majority of ads fail. The chart below shows the results of over 17,100 different ads. Spend is distributed based on ad performance. As you can see, out of those 17,000 ads, only a handful drove a majority of the profitable spend.
The high failure rate of most creative shapes creative strategy, budgets, and ad testing methodology. If you can’t test ads quickly and affordably, your campaign’s financial performance is likely to suffer from a lot of non-converting spend. But testing alone isn’t enough. You also must generate enough original creative concepts to fuel testing and uncover winners. Over the years, we’ve found that with only about 1 in 17 to 20 ads succeeding (a 5% to 6% success rate), you don’t just need one new creative: you need 20 or more new original ideas to sustain performance and scale!
And you need all that new creative fast because creative fatigues quickly. You may need 20 new creative concepts every month, or possibly even every week depending on your ad spend and how your title monetizes (IAA or IAP). The more spend you run through your account, the more likely it is that your ad’s performance will decline.
Why is the Control Video so Hard to Beat?
Creative Testing: Our Unique Way
Let us set the stage for how and why we’ve been doing creative testing in a unique way. We test a lot of creative. In fact, we produce and test more than 100,000 videos and images yearly for our clients, and we’ve performed over 10,000 A/B and multivariate tests on Facebook and Google.
We focus on these verticals: gaming, e-commerce, entertainment, automotive, D2C, financial services, and lead generation. When we test, our goal is to compare new concepts vs. the winning video (control) to see if the challenger can outperform the champion. Why? If you can’t outperform the best ad in a portfolio, you will lose money running the second or third-place ads.
While we have not tested our process beyond the aforementioned verticals, we have managed over $3 billion in paid social ad spend and want to share what we’ve learned. Our testing process has been architected to save both time and money by killing losing creatives quickly and significantly reducing non-converting spend. It will generate both false negatives and false positives. We typically allow our tests to run between 2-7 days to provide enough time to gather data without requiring the capital and time needed to reach statistical significance (StatSig). We always run our tests using our software AdRules via the Facebook API.
To be clear, our process is not the Facebook best practice of running a split test and allowing the algorithm to reach statistical significance (StatSig), which then moves the ad set out of the learning phase and into the optimized phase. The insights we’ve drawn are specific to the scenarios we outline here and are not a representation of how all testing on Facebook’s platform operates. In some cases, it is valuable to have old creative retain learning so you can seamlessly A/B test without obstructing ad delivery.
Creative Testing: Statistical Significance vs Cost-Effective
Let’s take a closer look at the cost aspect of creative testing.
In classic testing, you need a 95% confidence rate to declare a winner, exit the learning phase, and reach StatSig. That’s nice to have, but getting a 95% confidence rate for in-app purchases may end up costing you $20,000 per creative variation.
Why so expensive?
As an example, to reach a 95% confidence level, you’ll need about 100 purchases. With a 1% purchase rate (which is typical for gaming apps), and a $200 cost per purchase, you’ll end up spending $20,000 for each variation in order to accrue enough data for that 95% confidence rate. There aren’t a lot of advertisers who can afford to spend $20,000 per variation, especially if 95% of new creative fails to beat the control.
So, what to do?
What we do is move the conversion event we’re targeting up in the sales funnel. For mobile apps, instead of optimizing for purchases, we optimize for installs per thousand impressions (IPM). For websites, we’d optimize for an impression-to-top-funnel conversion rate. Again, this is not a Facebook-recommended best practice; this is our own voodoo magic/secret sauce that we’re brewing.
IPM Testing Is Cost-Effective
A concern with our process is that ads with high CTRs and high conversion rates for top-funnel events may not be true winners for down-funnel conversions and ROI / ROAS. But while there is a risk of identifying false positives and negatives with this method, we’d rather take that risk than spend the time and expense of optimizing for StatSig bottom-funnel metrics.
To us, it is more efficient to optimize for IPMs vs. purchases. Most importantly, it means you can run tests for less money per variation because you are optimizing towards installs vs purchases. For many advertisers, that alone can make more testing financially viable. $200 testing cost per variation versus $20,000 testing cost per variation can mean the difference between being able to do a couple of tests versus having an ongoing, robust testing program.
We don’t just test a lot of new creative ideas. We also test our creative testing methodology. That might sound a little “meta,” but it’s essential for us to validate and challenge our assumptions and results. When we choose a winning ad out of a pack of competing ads, we’d like to know that we’ve made a good decision.
Because the outcomes of our tests have consequences – sometimes big consequences – we test our testing process and question the assumptions that shape it. When most of our new concepts test poorly, we kill the losers and pivot the creative strategy based on those results to try other ideas.
Control Video: How We’ve Been Testing Creative Until Now
When testing creative, we would typically test three to six videos along with a control video using Facebook’s split test feature. We would show these ads to broad or 5-10% LAL (Lookalike) audiences, restrict distribution to the Facebook newsfeed only and Android only, and use mobile app install (MAI) bidding to get about 100-250 installs.
If one of those new “challenger” ads beat the control video’s IPM or came within 10%-15% of its performance, we would launch those potential new winning videos into the ad sets with the control video and let them fight it out to generate ROAS.
We’ve seen hints of what we’re about to describe across numerous ad accounts and have confirmed with other 7-figure spending advertisers that they have seen the same thing. But for purposes of explanation, let’s focus on one client of ours and how their ads performed in creative tests.
In November and December 2019, we produced 60+ new video concepts for this client. All of them failed to beat the control video’s IPM. This struck us as odd – it seemed statistically implausible. We expected to generate a new winner 5% of the time, or 1 out of 20 videos – so about 3 winners. Since we felt confident in our creative ideas, we decided to look deeper into our testing methods.
The traditional testing methodology includes the idea of testing a testing system or an A/A test. A/A tests are like A/B tests, but instead of testing multiple creatives, you test the same creative in each “slot” of the test.
If your testing system/platform is working as expected, all “variations” should produce similar results, assuming you get close to statistical significance. If your A/A test results are very different – that is, the testing platform/methodology concludes that one variation significantly outperforms or underperforms the others – there could be an issue with the testing method or the quantity of data gathered.
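One way to formalize the “very different results” check is a two-proportion z-test on install rates between two slots of the A/A test. This is a generic statistical sketch, not anything Facebook provides; the install and impression counts below are hypothetical:

```python
import math

def two_proportion_z(installs_a: int, impressions_a: int,
                     installs_b: int, impressions_b: int) -> float:
    """Two-proportion z-test: how far apart are the install rates
    of variations A and B, in standard errors?"""
    p_a = installs_a / impressions_a
    p_b = installs_b / impressions_b
    p_pool = (installs_a + installs_b) / (impressions_a + impressions_b)
    se = math.sqrt(p_pool * (1 - p_pool)
                   * (1 / impressions_a + 1 / impressions_b))
    return (p_a - p_b) / se

# Hypothetical A/A result: identical creatives, IPM 12.5 vs. 5.0
z = two_proportion_z(25, 2_000, 10, 2_000)
print(abs(z) > 1.96)  # True: "significant" difference between identical ads
```

In an A/A test, |z| &gt; 1.96 (roughly the 95% threshold) between identical creatives is a red flag that the delivery system, not the creative, is driving the difference.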
First A/A test of video creative: Give Control a Performance Boost
Here’s how we set up an A/A test to validate our non-standard approach to Facebook testing. The purpose of this test was to understand whether Facebook maintains a creative history for the control and thus gives it a performance boost that makes it very difficult to beat – at least when you don’t allow the test to exit the learning phase and reach statistical significance.
We copied the control video four times and added one black pixel in different locations in each of the new “variations.” This allowed us to run what would look like the same video to humans but would be different videos in the eyes of the testing platform. The goal was to get Facebook to assign new hash IDs for each cloned video and then test them all together and observe their IPMs.
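We don’t know Facebook’s actual media fingerprinting internals, but the one-pixel trick works because any byte-level change to the file produces a different content hash, and presumably a different creative ID. A small illustration of that principle (the video bytes are a stand-in):

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Hex digest of the media file's bytes (illustrative stand-in
    for whatever fingerprint the platform computes)."""
    return hashlib.sha256(data).hexdigest()

original = b"...stand-in for the control video file bytes..."
clone = bytearray(original)
clone[0] ^= 0x01  # flip one bit, analogous to adding one black pixel

# Visually identical to a human, but a brand-new asset to the platform
print(content_hash(original) == content_hash(bytes(clone)))  # False
```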
These are the ads we ran… except we didn’t run the hot dog dogs; we’ve replaced the actual ads with cute doges to avoid disclosing the advertiser’s identity. IPMs for each ad are at the far right of the image.
Things to note here:
The far-right ad (in the blue square) is the control.
All the other ads are clones of the control with one black pixel added.
The far-left ad/clone outperformed the control by 149%. As described earlier, a difference like that shouldn’t happen if the platform were truly variation-agnostic. BUT – to save money, we did not follow best practices and allow the ad set(s) to exit the learning phase.
We ran this test to only 100 installs, which is our standard operating procedure for creative testing.
Once we completed our first test to 100 installs, we paused the campaign to analyze the results. Then we turned the campaign back on to scale up to 500 installs in an effort to get closer to statistical significance. We wanted to see if more data would result in IPM normalization (in other words, if the test results would settle back down to more even performance across the variations). However, the results of the second test remained the same. Note: the ad set(s) did not exit the learning phase and we did not follow Facebook’s best practice.
The results of this first test, while not statistically significant, were surprising enough to merit additional tests. So we tested on!
Second A/A test of video creative: Give controls different headers
For our second test, we ran the six videos shown below. Four of them were controls with different headers; two of them were new concepts that were very similar to the control. Again, we didn’t run the hotdog dogs; they’ve been inserted to protect the advertiser’s identity and to offer you cuteness!
The IPMs for all ads ranged between 7 and 11 – even for the new ads that did not share a thumbnail with the control. IPMs for each ad are at the far right of the image.
Third A/A test of video creative: One control and similar variations
Next, we tested six videos: one control and five variations visually similar to the control (though one was very different to the human eye). IPMs ranged between 5 and 10. IPMs for each ad are at the far right of the image.
Fourth A/A test of video creative: One control and different ideas
This was when we had our “ah-ha!” moment. We tested six very different video concepts: the one control video and five brand new ideas, all of which were visually very different from the control video and did not share the same thumbnail.
The control’s IPM was consistent in the 8-9 range, but the IPMs for the new visual concepts ranged between 0 and 2. IPMs for each ad are at the far right of the image.
Here are our impressions from the above tests:
Facebook’s split tests maintain creative history for the control video. This gives the control an advantage under our non-statistically-significant, non-standard practice of IPM testing.
We are unclear if Facebook can group variations with a similar look and feel to the control. If it can, similar-looking ads could also start with a higher IPM based on influence from the control. Or perhaps similar thumbnails influence non-statistically relevant IPM.
Creative concepts that are visually very different from the control appear to not share a creative history. IPMs for these variations are independent of the control.
It appears that new, “out of the box” visual concepts may require more impressions than the control to quantify their performance.
Our IPM testing methodology appears to be valid if we do NOT use a control video as the benchmark for winning.
IPM Testing Summary
Here are the line graphs from the second, third, and fourth tests.
And here’s what we think they mean:
Creative Testing 2.0 Recommendations:
Given the above results, those of us testing using IPM have an opportunity to re-test IPM winners that exclude the control video to determine if we’ve been killing potential winners. As such, we recommend the following three-phase testing plan.
Our 3-Step Creative Testing Process
Phase 1: IPM Test (No Control Video)
No control video
Create a new split test campaign using 3~6 new creatives (no control).
Set up the campaign structure for basic app installs (no event optimization or value optimization)
Spend an equal amount on each creative. Ex: One ad per ad set.
Budget for at least 100 installs per creative
$200~$400 spend per ad is recommended (based on a CPI of $2-$4) if T1 English-speaking country
$20~$40 spend per ad/adset testing in India (based on $0.20-$0.40 CPI)
US Phase 1 testing:
10-15% LAL with a seed audience similar to past 90-day installers, or past 90 day payers.
Non-US Phase 1 testing:
Use broad targeting & English speakers only
If not available in India, try other English-speaking countries with lower CPMs than U.S. and similar results. Ex: ZA, CA, IE, AU, PH, etc.
Use the OS (iOS or Android) you intend to scale in production
Use one body text
Headline is optional
FB Newsfeed or Facebook Audience Network placement only (not both, and not auto placements)
Be sure the winner has 100+ installs (50 installs acceptable in high CPI scenarios)
100 installs: 70% confidence with 5% margin of error
160 installs: 80% confidence with 5% margin of error
270 installs: 90% confidence with 5% margin of error
IAP Titles: kill losers, top 1~3 winners go to phase 2
IAA Titles: kill losers, allow top 1~3 “possible winners” to exit the learning phase and then put into “the Control’s” campaign
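The 100/160/270 install thresholds above follow from the standard sample-size formula for a proportion at a 5% margin of error, assuming the worst case p = 0.5. A short sketch that roughly reproduces those rounded figures:

```python
from math import ceil
from statistics import NormalDist

def installs_needed(confidence: float, margin: float = 0.05,
                    p: float = 0.5) -> int:
    """Sample size to estimate a proportion within `margin`
    at the given confidence level (worst case p = 0.5)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided z score
    return ceil((z / margin) ** 2 * p * (1 - p))

for conf in (0.70, 0.80, 0.90):
    print(conf, installs_needed(conf))
# 108, 165, 271 - close to the 100/160/270 rule of thumb above
```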
Which Creatives Move From Phase 1 > Phase 2?
How To Pick A Phase 1 IPM Winner
IPMs may range broadly or be clumped together
Goal: kill obvious losers and test remaining ads in phase 2
Ads (blue) have IPMs 6.77 & 6.34, move to phase 2
If all ads are very close (e.g. within 5%), increase the budget
IAA (in-app ad) titles: you may need more LTV data before scaling
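The selection rules above can be sketched as a small helper. This is our own decision rule expressed in code, not anything from Facebook’s tooling, and the ad names and IPMs are hypothetical:

```python
def pick_phase1_winners(ipms: dict[str, float], keep: int = 3,
                        close_threshold: float = 0.05):
    """Phase 1 decision rule: if every ad is within `close_threshold`
    of the best IPM, gather more data (increase budget); otherwise
    keep the top `keep` ads and kill the rest."""
    ranked = sorted(ipms.items(), key=lambda kv: kv[1], reverse=True)
    best = ranked[0][1]
    if all(best - v <= best * close_threshold for _, v in ranked):
        return "increase budget", []
    winners = [name for name, _ in ranked[:keep]]
    return "advance to phase 2", winners

# Hypothetical test: two clear winners, two obvious losers
decision, winners = pick_phase1_winners(
    {"ad_a": 6.77, "ad_b": 6.34, "ad_c": 2.1, "ad_d": 1.8}, keep=2)
print(decision, winners)  # advance to phase 2 ['ad_a', 'ad_b']
```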
Phase 2: Initial ROAS (No Control Video)
No control video
Create a new campaign with AEO or VO optimization
Place all creatives into a single adset (Multi Ads Per Adset)
Use IPM winner(s) from Phase 1 (you can combine winners from multiple Phase 1 tests into a single Phase 2 test)
OS – Android or iOS. 5-10% LALs from top seeds (purchases, frequent users + purchase) + Auto Placements
Testing can be done at a lower cost if you run this campaign in other countries where ROAS is similar or higher but CPMs are much lower than in the US – e.g., South Africa, Ireland, Canada, etc.
Lifetime budget $3,500-$4,900 or daily budgets of $500-$750 over the course of 4-6 days (depending on your $/purchase).
WARNING! Skipping this step is highly likely to result in one of the following scenarios:
The challenger immediately kills the champion/control but hasn’t achieved enough statistical relevance or exited the learning phase, so its ROAS/KPI may not be sustained.
The champion/control video has much more statistical history and relevance, has most likely exited the learning phase, and may immediately kill the challenger before it has a chance to get enough data to properly fight for ROAS.
Phase 3: ROAS Scale (No Control Video)
No control video
Use strong CBO campaign
Choose winner(s) from Phase 2 with good/decent ROAS
You’ve proven the ad has great IPM and “can monetize”
To win this phase, it must hit KPIs (D7 ROAS, etc.)
Create a copy of an existing ad set
Delete old ads and replace them with your Phase 2 winner(s)
Allows new ads to spend in a competitive environment
Then, create a new ad set, roll it out towards target audiences with solid ROAS / KPIs
CBO controls budgets between ad sets with control creatives and ad sets with new creative winners.
Intervene with adset min/max spend control only if new creatives don’t receive spend from CBO.
Require challenger to exit the learning phase before moving to challenge the control “Gladiator” video
Once the challenger has exited the learning phase, allow CBO to change budget distribution between challenger and champion
Note: We’re continuously testing our assumptions and discussing testing procedures with large Facebook advertisers.
We look forward to hearing how you’re testing and sharing more of what we uncover soon.
Check out our newest whitepaper and learn how to scale Facebook user acquisition in 2020. UA automation has been a numbers game for a long time. In February 2018, the duopoly’s algorithms evolved into something sophisticated enough to take over humans’ jobs. Facebook started slowly but is incrementally nudging us all toward near-total automation.
This has had a profound effect on user acquisition advertising and user acquisition managers. For instance, now we have fewer levers left to achieve results than we used to have. And yes, automation has taken a lot of work away from us. It has also made entire industries (like adtech) increasingly obsolete. As a result, it will probably shrink the size of many UA teams.
But, while some things are being taken away, other opportunities are opening. Creative strategy, development, and testing end up being the primary drivers of ROAS improvements – and those are still best done by humans.
In our newest whitepaper, we review how we got to this point of UA automation and how it’s affected Facebook user acquisition performance and management. We also cover what UA managers should do to adapt to this new environment and elevate their UA techniques toward mastery. It’s an exciting time to be in user acquisition, but it demands a great deal of agility.
Learn how to scale Facebook user acquisition by downloading our new whitepaper today. Check it out!
How to Scale Facebook User Acquisition in 2020
Table of Contents
Section 1: How the Algorithms Have Been Moving Toward Automation
How We Got Here
Optimizing for Special Situations
Section 2: What User Acquisition Automation Means for UA Advertisers, UA Managers, and UA Teams
Don’t Fear the Machines
The Shrinking UA Department
What’s Next for UA Managers and Their Employers
Section 3: How Automation Affects Other Aspects of User Acquisition Management
The UA Automation and the Three-Legged Stool
The Algorithms Still Need to be Monitored
Third-Party Adtech is Largely Obsolete
…But Some Adtech is Still Useful
Section 4: Achieving UA Mastery: Advanced Techniques for Facebook