However… if you’re an experienced, proactive, and performance-driven advertiser, giving up so much control over how you run your campaigns is tough. It means UA managers have to completely rethink how they advertise and update their skills, because the algorithms can now handle many user acquisition tasks better than people can.
While there has been concern about UA managers losing their jobs to all this new automation, we see it as an opportunity. We recommend UA managers shift their time toward creative strategy, player profiles, and competitive analysis. Those are the key drivers of performance now, though it’s also critical for UA managers to understand how the automation works. The algorithms, in a sense, are now key members of your team.
To navigate UA in 2020 and beyond, you’ll also need to understand how automation has affected UA advertising and UA teams, and what it means for your career prospects. We’ll cover all this and more in these pages.
If you’re in the trenches of UA, it’s easy to lose sight of the larger picture. So while we know you’re probably more focused on the future than on the past, understanding what’s happened in the last two years will help frame what’s happening now, and what’s likely to happen soon.
Way back in the stone age of UA, near the end of 2017, Google instituted a sudden change. It moved all new app install campaigns over to Universal App Campaigns (UAC), later renamed Google App Campaigns. Advertisers were pushed into a very new advertising environment that had both significant limitations and powerful new features… all of which were made possible by the platform’s algorithm.
About a month after that, Google took things a step further. They turned off all Search, Display, and YouTube app promotion campaigns that were still running. All mobile app install campaigns on Google now had to be run through Google App Campaigns.
Here’s how Google described the new UAC:
“As an app advertiser, you want to get your app into the hands of more paying users. So, how do you connect with those people? Google App campaigns streamline the process for you, making it easy to promote your apps across Google’s largest properties including Search, Google Play, YouTube, and the Google Display Network. Just add a few lines of text, a bid, some assets, and the rest is optimized to help your users find you.”
Facebook followed suit soon after. At the beginning of 2018, they rolled out an update that included new best practices for advertising on a platform now run mostly by an algorithm. While Facebook’s changes at the time weren’t as forced as Google’s, they still influenced results.
All this was basically Phase 1 of UA’s shift towards automation.
Phase 2 began on February 19, 2018. That was when Facebook’s algorithm significantly changed how mobile app installs and lead generation campaigns were managed. Advertisers suddenly handed over quite a bit of social advertising control to these algorithms.
Luckily, giving algorithms this much control has a couple of upsides.
1. Since many responsibilities of the user acquisition manager have moved over to algorithms, less-experienced advertisers can now get results comparable to their more advanced peers. More advertisers can profitably use the platforms, which means, of course, that Facebook and Google get to expand their user base.
2. As advertising platform algorithms have become increasingly sophisticated, many third-party advertising tools are no longer needed. In the past, adtech tools were a significant competitive advantage available only to companies who could afford them. Now, both Facebook and Google App Campaigns offer almost comparable adtech tools for free.
Before February 2018, Facebook advertisers could run an almost unlimited number of ads, with audiences that overlapped freely. There were no penalties for frequent bid changes, even multiple bid changes every couple of hours. Ads could be paused and budgets modified at any time. Facebook allowed adtech providers (like our AdRules tool) to edit bids, budgets, and pause rules with precision and speed. Optimizations were done through many small actions, most of which were controlled by the advertiser or by a third-party adtech tool.
All that changed dramatically on February 19th. The constant changes suddenly started to incur penalties for advertisers. Soon enough, it became clear Facebook would reward social advertisers for running and optimizing their campaigns according to the best practices outlined in Facebook’s “Blueprint Certification.”
One of the overarching principles of Facebook’s Blueprint Certification is that it’s better to rely more heavily on the Facebook algorithm, which will help sift through audiences and settings to help you acquire the right customers. Broad targeting with no overlapping audiences, combined with Facebook’s Value Optimization (VO) and App Event Optimization (AEO) work well to create a successful campaign.
We had come to the point (like Google had articulated earlier) where the algorithm was now doing the heavy lifting to “help your users find you.”
Since February 2018, Google has rolled out Value Bidding, Similar Audiences, Ad Groups, Media Library, and Asset Reporting. Two of those features, Value Bidding (similar to Facebook’s value bidding, aka “target return on ad spend”) and Similar Audiences, take significant advertising management tasks out of the hands of humans and give them over to the algorithm – aka “the machines.”
Facebook has rolled out many similar features, most notably its Power 5 and then Structure for Scale frameworks, which are essentially a new set of best practices for advertising on a platform run by an algorithm.
So the machines have arrived. In fact, they’ve been running our campaigns for a while. It’s well past time for advertising managers to step back from many of the tasks that used to define their jobs and let the algorithms take the lead.
Facebook’s Structure for Scale framework lays out exactly how to do this.
If you want to know how to prepare for automated media buying, auditing is an excellent place to start.
Never just leap into an account and start making changes. Before you do anything, you need to know how the account has been performing to date: where it’s working and where it isn’t. To do this, we always start with an audit. When we’re working with a new client or creating a new media account, we start with a full audit of both their creative and their media buying.
A thorough audit includes:
Why do you think each standout piece of creative performed well or badly? The goal here is not to replicate ideas that have won or failed, but to discover fresh directions for your creative to evolve.
Also look for which demographic selections tend to perform best (like age, gender, geography, and device). Then check how the performance of static or video ads compares.
Pablo Picasso said it best: “A good artist will borrow, but a great artist will steal.” So go ahead and steal the best ideas of your competitors. Competitive analysis is one of the highest-value things user acquisition managers can do now.
Just know that your competitors are failing at the same rate as you are: somewhere between 85% and 95%. That means the vast majority of their new concepts fail to outperform the best creative in a portfolio. And if your new creative can’t outperform your best ad, you lose money running it.
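To put that failure rate in perspective, here is a back-of-envelope calculation. It assumes, purely for illustration, a 10% chance that any single new concept beats your current champion ad (the optimistic end of the 85%–95% failure range above):

```python
# Back-of-envelope: if each new concept has a 10% chance of beating
# the current champion ad (i.e., a 90% failure rate), how likely is
# at least one winner after testing N concepts?

def p_at_least_one_winner(n_concepts: int, p_win: float = 0.10) -> float:
    """Probability that at least one of n_concepts beats the champion."""
    return 1 - (1 - p_win) ** n_concepts

for n in (1, 5, 10, 20):
    print(f"{n:>2} concepts tested -> {p_at_least_one_winner(n):.0%} chance of a winner")
# 20 concepts tested -> roughly an 88% chance of finding one winner
```

In other words, even with a 90% per-concept failure rate, a steady testing cadence almost guarantees winners over time; the math is why high failure rates are survivable.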
However, if you can incorporate your competitors’ best concepts and creative trends, you’ll have a steady supply of concepts that they’ve already tested.
Want some help with this phase of the audit? We offer a premium service called “Collaborative Creative” where we’ll put together a strategic creative plan with mini briefs that contain concept hypotheses and motivations. We then walk you through the document for feedback.
Now that your creative is dialed in, it’s time to pivot to audiences, ad spend, and campaign goals. These aren’t as big a driver of performance as creative, but they still matter — a lot.
1. Review KPIs (Key Performance Indicators) and Lifetime Data to verify your campaigns are achieving the KPIs they are expected to meet. If your campaigns aren’t meeting those KPIs, how far off are they from your goals?
2. Do you have an MMP (Mobile Measurement Partner)? If you do, check to make sure your Facebook data aligns with your MMP’s first-party data. If the data doesn’t align, how much is it off by?
3. Review your creative, campaign, and audience performance. See which components are achieving or near KPI.
4. Review your top-performing campaigns, audiences, and creative and highlight the top performers. What has worked best? Why do you think it’s worked so well?
5. Are you using CBO (Campaign Budget Optimization) and/or non-CBO campaigns? Compare the performance of each type. Remember: CBO campaigns let Facebook’s algorithm split a set budget between the different ad sets, instead of you manually entering budgets at the ad set level.
6. Are you running DLO (Dynamic Language Optimization) ads? If you are, check their performance. Specifically, check if any languages monetize better than others and if that maps to their geography targeting. DLO allows multiple languages in one ad unit which Facebook dynamically serves users based on their indicated language. Sometimes it works well, sometimes not so much.
7. Review your bid types to determine what is working: VO, MinROAS (minimum return on ad spend), AEO, MAI (mobile app install), etc. Here are the key differences in each type:
8. Review your campaigns’ performance by media type and determine how videos, static images, carousels, and DCO (Dynamic Creative Optimization) ads are performing on the account.
9. Review your campaigns’ operating system performance (Android vs iOS).
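The MMP alignment check in step 2 of the audit can be quantified with a short sketch. This assumes you’ve exported daily install counts from both Facebook and your MMP; all figures below are made-up placeholders:

```python
# Compare Facebook-reported installs against MMP-reported installs
# and flag days where the discrepancy exceeds a tolerance.
# All figures are placeholder data for illustration only.

facebook_installs = {"2020-03-01": 1200, "2020-03-02": 980, "2020-03-03": 1410}
mmp_installs      = {"2020-03-01": 1150, "2020-03-02": 975, "2020-03-03": 1190}

TOLERANCE = 0.10  # flag anything more than 10% apart

def discrepancy(fb: int, mmp: int) -> float:
    """Relative gap between the two sources, using the MMP as the baseline."""
    return abs(fb - mmp) / mmp

for day in sorted(facebook_installs):
    gap = discrepancy(facebook_installs[day], mmp_installs[day])
    flag = "  <-- investigate" if gap > TOLERANCE else ""
    print(f"{day}: FB={facebook_installs[day]:>5} MMP={mmp_installs[day]:>5} gap={gap:.1%}{flag}")
```

The 10% tolerance is a hypothetical threshold; what counts as an acceptable gap depends on your attribution windows and how your MMP deduplicates installs.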
Here are some of our favorite things to test:
Prepare to normalize your account structures, balancing Facebook’s Structure for Scale (S4S) / Power 5 best practices with our proven methodologies to achieve both scale and ROAS. Plan out what you’d want to do first, what it will take to implement it, and how you’ll use the resources you have to get it done.
Keep in mind that:
After all of that is done and we’re clear about how to target audiences, which bidding strategies we’ll use to reach them, and which Structure for Scale / Power 5 best practices we’ll use to implement the strategy, then we’ll move over to media buying.
We use our client’s strongest elements (videos, images, ad copy, and audiences) to establish baseline performance while using our preferred campaign structure. So if you’re working on your own campaigns, make sure you have solid baseline performance data before you move forward.
While you’re doing that benchmarking, you can begin to develop new creative based on what you’ve learned from your audits. For example, we will start writing new copy, and our creative studio will begin creative development, as soon as the creative audit is complete. That way, there’s no delay waiting for new creative; it’s ready to go right about the time the benchmarks have accrued enough data to move forward.
Question: How long do you use a client’s (or your own) legacy videos, audiences, ad copy, etc during the benchmarking process?
Answer: Typically, we don’t use clients’ creative assets very long. In most cases we’ll beat their copy and other creative elements within the first week we work with them. But we will let their creatives run for the first week so we can establish a baseline/benchmark metric.
Once the second week starts, we’ll begin testing our creative, copy, and audiences. Usually, within a week or two most clients’ creative assets are shut off or outperformed.
All that said, sometimes a client’s top-performing video can last quite a while. As long as that creative’s performance is on par or outperforming our new creative we will continue to run it.
If you’re doing your own campaign optimization, don’t kill off old creative just because it’s old or doesn’t fit your new strategy. So long as it works, keep it running. If your new approach is correct, you’ll beat that old creative soon enough.
Audiences are a critical part of campaign performance, so we test them rigorously. This is our preferred testing approach to build an effective audience structure:
We usually use WW (worldwide), T1 (tier-one countries), and the US across Broad, Interest Group, and Lookalike Audiences.
Lookalike audiences are especially critical to our process. We’ll initially test narrower (higher-quality) 1%, 3%, and 5% audiences, analyze performance, and then expand to wider (less expensive) 10%, 15%, and 20% audiences, balancing cost against return on ad spend.
Lookalike audiences can range from 1% to 20%, though typically we use 1, 3, 5, 7, 10, 12, 15, and 20%. They can be based on seed audiences built from spend (value) or from events that drive KPIs like monetization, retention, and LTV.
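The narrow-to-wide expansion above can be sketched as a simple comparison. Given test results per lookalike percentage (all spend and revenue figures below are placeholders), pick the widest tier that still clears your ROAS target, since the widest passing tier gives the most scale:

```python
# Sketch: choose the widest lookalike tier that still meets a ROAS
# target. Wider tiers (10-20%) are cheaper but lower quality; narrower
# tiers (1-5%) convert better but cost more. Figures are placeholders.

ROAS_TARGET = 1.20  # hypothetical target: at least 120% return on ad spend

# Placeholder test results: lookalike % -> (spend, revenue)
lookalike_results = {
    1:  (500.0, 900.0),
    3:  (500.0, 780.0),
    5:  (500.0, 700.0),
    10: (500.0, 620.0),
    15: (500.0, 540.0),
    20: (500.0, 480.0),
}

def roas(spend: float, revenue: float) -> float:
    return revenue / spend

# Tiers that clear the target; the widest one offers the most scale
passing = [pct for pct, (s, r) in lookalike_results.items() if roas(s, r) >= ROAS_TARGET]
print(f"Tiers meeting ROAS >= {ROAS_TARGET:.0%}: {passing}")
print(f"Recommended (widest passing) tier: {max(passing)}%")
```

In practice you’d also weigh audience size and CPM, not ROAS alone, but the cost-versus-quality trade-off works the same way.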
Here are some examples of seed audiences:
These are groups of similar lookalike audiences in the same percentage range, combined so we can create an expanded audience that is similar in intent. This expanded audience can include:
These use a revenue value that is relevant to the particular game. So instead of going after just any buyer, we’re targeting super-high value buyers.
To do this, we first create lists of users that meet “early whale” criteria. The values shown below are placeholders, but the idea is that the highest amount (in this case $10) may not be achievable by 1-day or even 2-day users, only by 7-day users. Once these audiences are built, they are uploaded to Facebook for lookalike audience creation.
Once the lookalike audiences are established, we’ll increase the dollar amounts.
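The “early whale” list-building described above amounts to a filter over first-party purchase data. Here is a minimal sketch; the user records, dollar thresholds, and day windows are all hypothetical placeholders, as in the original:

```python
# Sketch: build "early whale" seed lists from first-party purchase data.
# A user qualifies for a window if their cumulative revenue within N days
# of install meets that window's threshold. All data is placeholder.

from datetime import date

# Placeholder records: (user_id, install_date, [(purchase_date, amount), ...])
users = [
    ("u1", date(2020, 3, 1), [(date(2020, 3, 1), 12.0)]),                          # $12 on day 0
    ("u2", date(2020, 3, 1), [(date(2020, 3, 2), 3.0), (date(2020, 3, 6), 8.0)]),  # $11 by day 5
    ("u3", date(2020, 3, 1), [(date(2020, 3, 5), 2.0)]),                           # $2 by day 4
]

# Placeholder criteria: revenue threshold within N days of install
criteria = {1: 10.0, 2: 10.0, 7: 10.0}

def revenue_within(install: date, purchases, days: int) -> float:
    """Cumulative revenue in the first `days` days after install."""
    return sum(amt for d, amt in purchases if (d - install).days <= days)

seed_lists = {
    days: [uid for uid, install, purchases in users
           if revenue_within(install, purchases, days) >= threshold]
    for days, threshold in criteria.items()
}
print(seed_lists)  # each list is then uploaded to Facebook as a seed audience
```

Note how only u1 clears $10 in the 1- and 2-day windows, while u2 joins the 7-day list: exactly the pattern described above, where the top threshold is only reachable by 7-day users.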
Here are some of our favorite tactics for optimizing bids and budgets:
This is a sophisticated and powerful technique made easier with our Audience Builder Express. First, we generate a list of users sorted by revenue, then segment it into users well above and well below the average. This creates profiles of “high value” and “low value” users for Facebook.
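A minimal sketch of that split, assuming a list of per-user revenue figures (placeholder data) and hypothetical cutoffs well above and well below the average:

```python
# Sketch: segment users into "high value" and "low value" profiles based
# on where they sit relative to average revenue. The 2x / 0.5x cutoffs
# are hypothetical placeholders, not fixed rules.

user_revenue = {
    "u1": 0.00, "u2": 0.99, "u3": 4.99, "u4": 9.99,
    "u5": 24.99, "u6": 49.99, "u7": 0.00, "u8": 1.99,
}

avg = sum(user_revenue.values()) / len(user_revenue)

high_value = [u for u, rev in user_revenue.items() if rev >= 2 * avg]    # well above average
low_value  = [u for u, rev in user_revenue.items() if rev <= 0.5 * avg]  # well below average

print(f"average revenue: ${avg:.2f}")
print(f"high-value seed: {sorted(high_value)}")         # e.g., lookalike seed
print(f"low-value list:  {sorted(low_value)}")          # e.g., exclusion list
```

The high-value list becomes a lookalike seed; the low-value list can serve as an exclusion audience, so the platform learns what to avoid as well as what to chase.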
You can use any one of the attributes below in your profile:
We’ll also test AEO and VO campaign optimization against the audiences described above to determine which bidding strategy produces the best results based on client KPIs.
These tests can include:
Our preferred audience/campaign structure allows us to quickly determine which geographies/audiences and bidding optimizations will achieve client goals from both a cost and scalability perspective.
This structure delivers more precise results with fewer variables within each campaign. Other agencies/media buyers may change bid strategy or audiences on the fly within campaigns as a quick fix, but we split out variables to identify true performance.
We’ll try all those tests just so we can understand specifically what causes a campaign to perform well. This is critical for later testing: We need to know which creative elements and campaign settings make a difference and which don’t.
By now you’ve done your audits and you’ve done a close evaluation of your audiences, ad spend, and campaign targeting options. Basically, you know where you’ve been, where you want to go, and how you want to get there.
Now let’s layer in some extra strategy for special situations. We’ve outlined three common situations that deserve a slightly different approach. Use these only if they apply to your situation, but each one of them should be familiar to you so you can apply them if the need arises.
In-app advertising (IAA) apps monetize with in-app ads. Generally, the longer a user remains in the game, the more revenue (ad views) they generate. The goal is to find high-retention users at the lowest acquisition cost possible. Targeting for these campaigns is designed to be low cost / low CPM. Usually that means App Install optimization with broad, wide lookalikes (10%-20%) and interest groups.
Basic App Install campaigns lack optimization levers (AEO, VO, etc.), so instead we optimize on top-performing age, gender, geo, language, device (Android/iOS), and platform placement (IG, FB, FAN, etc.).
In-app purchase (IAP) apps monetize with in-app purchases. The goal is to acquire high-ROAS users, which is typically achieved through AEO and VO bidding tied to lookalike audiences.
Here’s how it works:
If there is enough data initially, we tend to test AEO and VO against each other to see which is the better performer.
While we are testing different campaign builds and audiences, we are constantly putting new creative through Phase 1, Phase 2, and Phase 3 testing. Once we have determined winners, we introduce those ads into our top-performing campaigns to see how they perform against control creative.
Anyone with a new app is in an interesting situation: There is no history. There is nothing to audit. But you can still employ many of the suggestions here to gain an advantage from the start. For example, beef up your competitive analysis if you don’t have any data history of your own. Each of your competitors has invested tens, maybe hundreds of thousands of dollars into creative testing. Learn how to pull insights from their performance to hit the ground running with your new ads.
In terms of campaign structure, if there is not a lot of campaign data, typically we’ll launch a new app with MAI and AEO campaigns to get good data on the audience and creative performance. Then, once we’ve identified top-performing audience groups and creative assets, we’ll pivot into VO bidding and MinROAS bidding to test out what works best there.
Next, with our best bid strategies defined, we’ll review the performance of different demographic data like:
From there, we’ll test