How Algorithms Are Moving Toward Automation
Facebook and Google have made major strides in 2019 and 2020 toward simplifying and streamlining their ad platforms. Algorithms are moving toward automation. This is overwhelmingly a good thing because it:
- Allows more people to use the platforms effectively, regardless of their advertising skills
- Saves UA managers' valuable and very limited time
- Gets more consistent results
However… if you’re an experienced, proactive, and performance-driven advertiser, giving up this much control over how you run your campaigns is tough. It means UA managers have to completely rethink how they advertise and update their skills, because the algorithms can now handle many user acquisition tasks better than people can.
While there has been concern about UA managers losing their jobs to all this new automation, we see it as an opportunity. We recommend UA managers switch how they spend their time over to creative strategy, player profiles, and competitive analysis. Those are the key drivers of performance now, though it’s also critical for UA managers to understand how automation works. The algorithms, in a sense, are now key members of your team.
To navigate UA in 2020 and beyond, you’ll also need to understand how automation has affected UA advertising, UA teams, and what it means for your career prospects. We’ll cover all this and more in these pages.
1.1 How We Got Here: A Brief History of UA Automation Over the Last Two Years
If you’re in the trenches of UA, it’s easy to lose sight of the larger picture. So while we know you’re probably more focused on the future than on the past, understanding what’s happened in the last two years will help frame what’s happening now, and what’s likely to happen soon.
Phase One of User Acquisition Automation
Way back in the stone age of UA, near the end of 2017, Google instituted a sudden change: it moved all app install campaigns over to Universal App Campaigns (UAC, since renamed Google App Campaigns). Advertisers were pushed into a very new advertising environment that had both significant limitations and powerful new features… all of which were made possible by the platform’s algorithm.
About a month after that, Google took things a step further. They turned off any Search, Display, and YouTube app promo campaigns that were running. All mobile app install campaigns on Google now had to be run through Google App Campaigns.
Here’s how Google described the new UAC:
“As an app advertiser, you want to get your app into the hands of more paying users. So, how do you connect with those people? Google App campaigns streamline the process for you, making it easy to promote your apps across Google’s largest properties including Search, Google Play, YouTube, and the Google Display Network. Just add a few lines of text, a bid, some assets, and the rest is optimized to help your users find you.”
Facebook followed suit soon after. At the beginning of 2018, it rolled out an update that included new best practices for advertising on a platform now run mostly by an algorithm. While Facebook’s changes at the time weren’t as forced as Google’s, they still influenced results.
All this was basically Phase One of UA’s shift toward automation.
Phase Two began on February 19, 2018. That was when Facebook’s algorithm significantly changed how mobile app install and lead generation campaigns were managed. Advertisers suddenly handed over quite a bit of social advertising control to the algorithm.
The Advantages of Algorithm Control
Luckily, giving algorithms this much control has a couple of upsides.
1. Since many responsibilities of the user acquisition manager have moved over to algorithms, less-experienced advertisers have an opportunity to get results comparable to their more advanced peers. More advertisers can profitably use the platforms, which means, of course, that Facebook and Google get to expand their user base.
2. As advertising platform algorithms have become increasingly sophisticated, many third-party advertising tools are no longer needed. In the past, adtech tools were a significant competitive advantage available only to companies who could afford them. Now, both Facebook and Google App Campaigns offer almost comparable adtech tools for free.
Before February 2018, Facebook advertisers could run an almost unlimited number of ads, with audiences that overlapped freely. There were no penalties for frequent bid changes, even multiple changes every couple of hours. Advertisers could pause ads and modify budgets at any time. Facebook allowed adtech providers (like our AdRules tool) to edit bids, budgets, and pause rules with the utmost precision and speed. Optimizations were done through many actions, most of which were controlled by the advertiser or by a third-party adtech tool.
All that changed dramatically on February 19th. The constant changes suddenly started to incur penalties for advertisers. Soon enough, it became clear Facebook would reward social advertisers for running and optimizing their campaigns according to the best practices outlined in Facebook’s “Blueprint Certification.”
Fewer Campaigns with Minimal Audience Overlap
One of the overarching principles of Facebook’s Blueprint Certification is that it’s better to rely more heavily on the Facebook algorithm, which will help sift through audiences and settings to help you acquire the right customers. Broad targeting with no overlapping audiences, combined with Facebook’s Value Optimization (VO) and App Event Optimization (AEO) work well to create a successful campaign.
We had come to the point (like Google had articulated earlier) where the algorithm was now doing the heavy lifting to “help your users find you.”
Since February 2018, Google has rolled out Value Bidding, Similar Audiences, Ad Groups, Media Library, and Asset Reporting. Features like Value Bidding (Google’s counterpart to Facebook’s value optimization, aka “target return on ad spend”) take significant advertising management tasks out of human hands and give them over to the algorithm – aka “the machines.”
Facebook has rolled out many similar features, most notably its Power 5 and then Structure for Scale frameworks, which are basically a new set of best practices for advertising on a platform run by an algorithm.
So the machines have arrived. In fact, they’ve been running our campaigns for a while. It’s well past time for advertising managers to step back from many of the tasks that used to define their jobs and let the algorithms take the lead.
Facebook’s Structure for Scale framework lays out exactly how to do this.
1.2 Creative Audit
If you want to know how to prepare for automated media buying, auditing is an excellent place to start.
Never just leap into an account and start making changes. Before you do anything, you need to know how the account has been performing to date, where it’s working and not working. To do this, we always start with an audit. When we’re working with a new client or creating a new media account we start with a full audit of both their creative and their media buying.
A thorough audit includes:
The Creative Audit
1. Identify your best and worst-performing creative.
Why do you think each standout piece of creative performed well or badly? The goal here is not simply to replicate winners or avoid losers, but to discover fresh directions and new ways for your creative to evolve.
Also look for which demographic selections tend to perform best (like age, gender, geography, and device). Then check how the performance of static or video ads compares.
2. Do a robust review of your competitors’ ads.
Pablo Picasso said it best: “A good artist will borrow but a great artist will steal.” So go ahead — steal the best ideas of your competitors. Competitive analysis is one of the highest-value things user acquisition managers can do now.
Just know that your competitors are failing at the same rate as you are: between 85% and 95%. That means the vast majority of their new concepts fail to outperform the best creative in a portfolio. And if your new creative can’t outperform your best ad, you lose money running it.
However, if you can incorporate your competitors’ best concepts and creative trends, you’ll have an endless supply of concepts that they’ve already tested.
- Special Offer: See your competitors’ top video creative and understand which ads drive their performance. We’re giving away FREE access to over 1 million competitive videos.
3. Do a complete review of your assets to determine what’s required to take the pieces and recombine them to create new concepts.
4. If you have done a market segmentation analysis and produced player profiles, employ that information now to refine your calls to action to appeal to your best audiences.
Want some help with this phase of the audit? We offer a premium service called “Collaborative Creative” where we’ll put together a strategic creative plan with mini briefs that contain concept hypotheses and motivations. We then walk you through the document for feedback.
The Media Buying Audit
Now that your creative is dialed in, it’s time to pivot to audiences, ad spend, and campaign goals. These aren’t as big a driver of performance as creative, but they still matter — a lot.
1. Review KPIs (Key Performance Indicators) and Lifetime Data to verify your campaigns are achieving the KPIs they are expected to meet. If your campaigns aren’t meeting those KPIs, how far off are they from your goals?
2. Do you have an MMP (Mobile Measurement Partner)? If you do, check to make sure your Facebook data aligns with your MMP’s first-party data. If the data doesn’t align, how much is it off by?
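As a minimal sketch of that alignment check (the numbers and the idea of treating the MMP as the source of truth are illustrative assumptions, not a prescribed workflow), you might compute the gap like this:

```python
def discrepancy_pct(platform_value: float, mmp_value: float) -> float:
    """Percent difference between a platform-reported number and the
    MMP-reported number, relative to the MMP (treated here as the
    source of truth). Positive means the platform over-reports."""
    if mmp_value == 0:
        raise ValueError("MMP value is zero; cannot compute a relative gap")
    return (platform_value - mmp_value) / mmp_value * 100

# Hypothetical daily install counts: Facebook self-attributed vs. MMP-attributed.
fb_installs, mmp_installs = 1150, 1000
gap = discrepancy_pct(fb_installs, mmp_installs)
print(f"Facebook over-reports installs by {gap:.1f}%")
```

A gap is normal (the platform and the MMP use different attribution windows); what matters is whether the gap is stable enough to correct for.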
3. Review your creative, campaign, and audience performance. See which components are achieving or near KPI.
4. Review your top-performing campaigns, audiences, and creative and highlight the top performers. What has worked best? Why do you think it’s worked so well?
5. Are you using CBO (Campaign Budget Optimization) and/or Non-CBO campaigns? Compare the performance of each type of campaign. Remember: CBO campaigns allow Facebook’s algorithm to split a set budget between the different ad sets instead of you manually inputting budgets at the ad set level.
6. Are you running DLO (Dynamic Language Optimization) ads? If you are, check their performance. Specifically, check if any languages monetize better than others and if that maps to their geography targeting. DLO allows multiple languages in one ad unit which Facebook dynamically serves users based on their indicated language. Sometimes it works well, sometimes not so much.
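A quick sketch of that per-language check (the row layout and numbers are hypothetical): aggregate spend and revenue per language, then compare ROAS.

```python
def roas_by_language(rows: list[dict]) -> dict[str, float]:
    """Aggregate spend and revenue per ad language, then compute ROAS
    (revenue / spend) so stronger-monetizing languages stand out."""
    totals: dict[str, list[float]] = {}
    for r in rows:
        spend, revenue = totals.setdefault(r["language"], [0.0, 0.0])
        totals[r["language"]] = [spend + r["spend"], revenue + r["revenue"]]
    return {lang: rev / spend for lang, (spend, rev) in totals.items()}

# Hypothetical per-ad results exported from a DLO campaign.
rows = [
    {"language": "en", "spend": 100.0, "revenue": 150.0},
    {"language": "de", "spend": 50.0, "revenue": 40.0},
    {"language": "en", "spend": 100.0, "revenue": 130.0},
]
print(roas_by_language(rows))  # {'en': 1.4, 'de': 0.8}
```

If one language clearly outperforms, cross-check whether that maps to its geography targeting before splitting it into its own campaign.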
7. Review your bid types to determine what is working (VO, MinROAS, AEO, MAI, etc.). Here are the key differences in each type:
- AEO (App Event Optimization): Instructs Facebook to optimize for users most likely to complete the indicated event. For example, level achieved, add to cart, registration completed, purchase.
- VO (Value Optimization): Tells Facebook to optimize toward users who are most likely to purchase, and to spend more over a longer period of time. VO is typically used to acquire the highest-LTV (Lifetime Value) users.
- MinROAS: This is a function of VO that instructs Facebook to optimize towards users who are likely to generate a specified Return on Ad Spend within a specified timeframe.
- MAI (Mobile App Install): Tells Facebook to optimize towards users that are most likely to install the app.
8. Review your campaigns’ performance by media type: determine how videos, static images, carousels, and DCO (Dynamic Creative Optimization) ads are performing on the account.
9. Review your campaigns’ operating system performance (Android vs iOS).
Throughout Your Audits
1. Look for new testing opportunities.
Here are some of our favorite things to test:
- Facebook best practices (Structure for Scale or Power 5).
- Ad copy. The headline and ad text.
- Ad set structure testing (one ad per ad set, multiple ads per ad set).
- Your entire creative testing process. “Test your testing” against the best practices we outline in these resources:
2. Plan your strategy going forward.
Prepare to normalize your account structures with a balance between Facebook best practices of Structure for Scale (S4S) / Power 5 and our proven methodologies to achieve both scale and ROAS. Plan out what you’d want to do first, what it will take to implement it, and how you’ll use the resources you have to get it done.
Keep in mind that:
- Structure For Scale’s main strategy is to streamline and minimize the number of campaigns and ad sets targeting wider reach audiences. This allows the algorithm to more efficiently drive ROAS and other desired outcomes.
- Concentrating spend on fewer ad sets allows Facebook to quickly accumulate events and exit the learning phase. As you know, the longer your campaigns stay in the learning phase, the more revenue you lose.
- Maximize audience reach so the Facebook algorithm can find the most qualified users while it also minimizes audience overlap.
- Try to minimize changes to campaign/ad set settings so you avoid a “significant edit,” which forces your campaigns back into the learning phase. To avoid this, we will often launch a new campaign with the desired changes rather than touch the original campaign.
- We tend to follow four of the five “Power 5” best practices. Those five are:
- Auto Advanced Matching. Use this if you want to sync customer data.
- Account Simplification / Structure For Scale.
- Campaign Budget Optimization.
- Automatic Placements. This setting allows Facebook to choose where your ads will most efficiently be displayed across their ad networks.
- Dynamic Ads. We use these infrequently, but they can be effective for personalized product retargeting campaigns for e-commerce clients.
After all of that is done and we’re clear about how to target audiences, which bidding strategies we’ll use to reach them, and which Structure for Scale / Power 5 best practices we’ll use to implement the strategy, then we’ll move over to media buying.
1.3 Media Buying
Establish Performance Benchmarks
We use our client’s strongest elements (videos, images, ad copy, and audiences) to establish baseline performance while using our preferred campaign structure. So if you’re working on your own campaigns, make sure you have solid baseline performance data before you move forward.
While you’re doing that benchmarking, you can begin to develop new creative based on what you’ve learned from your audits. For example, we will start writing new copy and our creative studio will begin creative development as soon as the creative audit is complete. That way, there’s no delay waiting for new creative. It’s ready to go right about the same time as the benchmarks have accrued enough data to go forward.
Question: How long do you use a client’s (or your own) legacy videos, audiences, ad copy, etc during the benchmarking process?
Answer: Typically, we don’t use clients’ creative assets very long. In most cases we’ll beat their copy and other creative elements within the first week we work with them. But we will let their creatives run for the first week so we can establish a baseline/benchmark metric.
Once the second week starts, we’ll begin testing our creative, copy, and audiences. Usually, within a week or two, most clients’ creative assets are shut off or outperformed.
All that said, sometimes a client’s top-performing video can last quite a while. As long as that creative’s performance is on par or outperforming our new creative we will continue to run it.
So if you’re doing your own campaign optimization, don’t kill off old creative just because it’s old or doesn’t fit with your new strategy. So long as it works, keep it running. If your new approach is right, you’ll beat that old creative soon enough.
Optimize Audience Structure
Audiences are a critical part of campaign performance, so we test them rigorously. This is our preferred testing approach to build an effective audience structure:
1. Test available geographic settings
We usually test worldwide (WW), Tier 1 (T1), and US targeting across Broad, Interest Group, and Lookalike audiences.
Lookalike audiences are especially critical for our process. We’ll initially test narrower (higher quality) 1%, 3%, 5% audiences, analyze performance and then expand to wider (less expensive) 10%, 15%, 20% audiences in an effort to balance cost versus Return on Ad Spend.
Lookalike audiences can range from 1-20%, though typically we use 1, 3, 5, 7, 10, 12, 15, and 20%. These can also be based on seed audiences of spend (value) or events committed that drive KPIs like monetization, retention, and LTV.
Here are some examples of seed audiences:
- Purchase greater than certain $ amount
- Top 1% Purchasers
- Most Active users
- Top 10% Users
- Most App Launch Users
- Users who have reached a particular milestone
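As an illustration of how a seed list like “Top 1% Purchasers” might be derived (the data shape and function name are hypothetical, not part of any platform API):

```python
def top_percent_purchasers(revenue_by_user: dict[str, float],
                           percent: float = 1.0) -> list[str]:
    """Return user IDs in the top `percent` of purchasers by total revenue.
    Always keeps at least one user so tiny datasets still yield a seed."""
    ranked = sorted(revenue_by_user, key=revenue_by_user.get, reverse=True)
    cutoff = max(1, int(len(ranked) * percent / 100))
    return ranked[:cutoff]

# Hypothetical lifetime revenue totals per user ID.
revenue = {"u1": 120.0, "u2": 3.5, "u3": 0.99, "u4": 45.0, "u5": 7.25}
print(top_percent_purchasers(revenue, percent=20))  # ['u1']
```

The resulting ID list would then be matched/hashed and uploaded as a customer list to serve as the lookalike seed.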
2. Create “MegaStacks”
These are groups of similar lookalike audiences in the same percentage range, combined into one expanded audience with similar intent. This expanded audience can include:
- Similar audiences (purchases vs top purchasers vs purchases > 9.99)
- Different lookback windows (7D, 30D, 90D, etc.)
- Different Geos (if the audience is worldwide)
3. Develop an audience of “Early Whales”
These use a revenue value that is relevant to the particular game. So instead of going after just any buyer, we’re targeting super-high value buyers.
To do this, we first create lists of users that meet “early whale” criteria. The values shown below are placeholders, but the idea is that the highest amount (in this case $10) may not be achievable for day-1 or even day-2 users, only day-7 users. Once these audiences are built, they are uploaded to Facebook for lookalike audience creation.
- 1st day all users with at least $2 of revenue
- 2nd day all users with at least $2 of revenue
- 7th day all users with at least $2 of revenue
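The day-by-day buckets above could be produced with a small sketch like this (the record layout and field names are hypothetical; real data would come from your MMP or analytics export):

```python
def early_whales(users: list[dict], day: int, min_revenue: float) -> list[str]:
    """User IDs whose cumulative revenue by `day` days after install meets
    the threshold. `revenue_by_day` maps day-N -> cumulative revenue."""
    return [
        u["id"]
        for u in users
        if u["revenue_by_day"].get(day, 0.0) >= min_revenue
    ]

# Hypothetical cumulative-revenue snapshots per user.
users = [
    {"id": "u1", "revenue_by_day": {1: 2.99, 2: 4.99, 7: 12.99}},
    {"id": "u2", "revenue_by_day": {1: 0.0, 2: 1.99, 7: 2.99}},
    {"id": "u3", "revenue_by_day": {1: 0.0, 2: 0.0, 7: 9.99}},
]
print(early_whales(users, day=1, min_revenue=2.0))  # ['u1']
print(early_whales(users, day=7, min_revenue=2.0))  # ['u1', 'u2', 'u3']
```

Note how the day-1 list is much smaller than the day-7 list at the same threshold, which is exactly the point: the earliest spenders are the rarest, highest-signal seed.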
Once the lookalike audiences are established, we’ll increase the dollar amounts.
- 1st day all users with at least $5 of revenue
- 2nd day all users with at least $5 of revenue
- 7th day all users with at least $5 of revenue
- 1st day all users with at least $10 of revenue
- 2nd day all users with at least $10 of revenue
- 7th day all users with at least $10 of revenue
Budgets and Bidding
Here are some of our favorite tactics for optimizing bids and budgets:
1. Value-based manipulated audiences.
This is a sophisticated and powerful technique made easier with our Audience Builder Express. First, we generate a list of users sorted by revenue, then manipulate the values to be much higher or much lower based on each user’s position relative to the average. This gives Facebook a clear profile of “high value” and “low value” users.
You can use any one of the attributes below in your profile:
- Interest Groups. Programmatically generated groupings of Facebook interest categories, games, products, pages, etc.
- Broad Targeting. Unrestricted targeting of all users in the geographic area. This allows Facebook the most reach in identifying quality users but might be too wide to control costs.
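To make the value-manipulation idea concrete, here is a toy sketch (the amplification factor, numbers, and function name are all illustrative assumptions, not how Audience Builder Express actually works): each user's distance from the mean revenue is exaggerated so high- and low-value users stand further apart in the uploaded list.

```python
def amplify_values(revenue_by_user: dict[str, float],
                   factor: float = 3.0) -> dict[str, float]:
    """Exaggerate each user's distance from the mean revenue so "high value"
    and "low value" users are more clearly separated in a customer list.
    Values are floored at zero, since negative revenue is meaningless."""
    mean = sum(revenue_by_user.values()) / len(revenue_by_user)
    return {
        uid: max(0.0, mean + (rev - mean) * factor)
        for uid, rev in revenue_by_user.items()
    }

# Hypothetical per-user revenue before manipulation (mean is 24.0).
revenue = {"whale": 50.0, "mid": 18.0, "minnow": 4.0}
print(amplify_values(revenue))  # {'whale': 102.0, 'mid': 6.0, 'minnow': 0.0}
```

The ordering of users is preserved; only the spread changes, giving the platform a sharper value signal to seed lookalikes from.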
2. AEO and VO.
We’ll also test AEO and VO campaign optimization against the audiences described above to determine which bidding strategy produces the best results based on client KPIs.
These tests can include:
- VO+MinROAS. We follow this set of best practices:
- Start with 1% bids (unless you have a very high ROAS goal) or a range of bids
- Adjust the bid higher or lower depending on audience performance
- If we see that quality is too low, we increase the bid
- When the scale is too low, we decrease the bid
- If performance is very high, decrease the bid to increase the scale
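The adjustment rules above can be sketched as a single heuristic step (the step size, flag names, and function are hypothetical; the real cadence depends on your data volume):

```python
def adjust_min_roas_bid(bid: float, quality_ok: bool, scale_ok: bool,
                        step: float = 0.1) -> float:
    """One step of the heuristic above: raise the MinROAS floor when user
    quality is below target; lower it when quality is fine but volume is
    lagging, since a lower ROAS floor lets the algorithm buy more users.
    `bid` is the ROAS floor as a fraction, e.g. 0.01 for a 1% starting bid."""
    if not quality_ok:
        return bid + step * bid   # tighten the ROAS floor
    if not scale_ok:
        return bid - step * bid   # loosen the floor to buy more volume
    return bid                    # both on target: leave it alone

bid = 0.01  # start at a 1% floor, per the practice above
bid = adjust_min_roas_bid(bid, quality_ok=False, scale_ok=True)
print(round(bid, 4))  # 0.011
```

In practice you would wait for enough conversions between steps; reacting to every daily wobble just churns the learning phase.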
3. Leverage campaign structure.
Our preferred audience/campaign structure allows us to quickly determine which geographies/audiences and bidding optimizations will achieve client goals from both a cost and scalability perspective.
This structure delivers more precise results with fewer variables within each campaign. Other agencies/media buyers may change bid strategy or audiences on the fly within campaigns as a quick fix, but we split out variables to identify true performance.
- Split audiences based on similar events (purchases, top purchases)
- Separate broad and interest campaigns
- Divide out campaigns from VO/AEO/MinROAS/MAI
- Split out different country targeting
- Separate out different conversion window targeting
We’ll try all those tests just so we can understand specifically what causes a campaign to perform well. This is critical for later testing: We need to know which creative elements and campaign settings make a difference and which don’t.
1.4 Optimizing for Special Situations
By now you’ve done your audits and you’ve done a close evaluation of your audiences, ad spend, and campaign targeting options. Basically, you know where you’ve been, where you want to go, and how you want to get there.
Now let’s layer in some extra strategy for special situations. We’ve outlined three common situations that deserve a slightly different approach. Use these only if they apply to your situation, but each one of them should be familiar to you so you can apply them if the need arises.
How to Optimize for a Monetization Strategy: Ads (IAA) vs Purchases (IAP)
IAA (In-App Advertising)
IAA apps monetize with in-app ads. Generally, the longer a user remains in the game, the more revenue (ad views) they generate. The goal is to find high-retention users for the lowest acquisition cost possible. Targeting for these campaigns is designed to be low cost / low CPM. Usually, that means App Install optimization with broad, wide lookalikes (10%-20%) and interest groups.
Basic App Install campaigns lack optimization levers (AEO, VO, etc), so instead we optimize on top-performing age, gender, geo, language, device/Android/iOS, and platform placement (IG, FB, FAN, etc).
IAP (In-App Purchases)
IAP apps monetize with in-app purchases. The goal is to acquire high-ROAS users, which is typically achieved through AEO and VO bidding tied to lookalike campaigns.
Here’s how it works:
- We kick off campaign creation by testing broad campaigns and lookalike campaigns.
- If there is enough data initially, we tend to test AEO and VO against each other to see which is the better performer.
- If we start to see strong performance in VO, then we start testing MinROAS Bidding.
- We kick off testing with WW and US campaigns. Typically this is because the US has always been a consistent performer and WW campaigns give us data on the other countries for further testing.
- As we continue to run campaigns and identify top-performing countries, we create lookalikes for those specific countries and test them against worldwide.
- Once we get a winning bidding strategy and audience, we test different levers of optimization like:
- DLO vs Non-DLO
- CBO vs Non-CBO
- Multiple Ads per Ad Set vs One Ad Per Ad Set
- Different MinROAS Bidding Levels
- D1 Conversion Window vs D7 Conversion Window
- We also review different breakdowns to determine if top-performing breakdowns could be specifically targeted on a new campaign. These breakdowns include:
- OS Version
- Publisher Platform
While we are testing different campaign builds and audiences, we are consistently putting new creative through Phase 1, Phase 2, and Phase 3 testing. Once we have determined the winners, we introduce those ads into our top-performing campaigns to measure their performance against control creative.
How we manage brand new accounts with no history
Anyone with a new app is in an interesting situation: There is no history. There is nothing to audit. But you can still employ many of the suggestions here to gain an advantage from the start. For example, beef up your competitive analysis if you don’t have any data history of your own. Each of your competitors has invested tens, maybe hundreds of thousands of dollars into creative testing. Learn how to pull insights from their performance to hit the ground running with your new ads.
In terms of campaign structure, if there is not a lot of campaign data, typically we’ll launch a new app with MAI and AEO campaigns to get good data on the audience and creative performance. Then, once we’ve identified top-performing audience groups and creative assets, we’ll pivot into VO bidding and MinROAS bidding to test out what works best there.
Next, with our best bid strategies defined, we’ll review performance across different breakdowns like:
- OS Version
From there, we’ll test
- CBO vs Non-CBO (if we didn’t do this already in earlier phases)
- DLO on top-performing audiences
- New audiences based on top-performing audiences (higher lookalike %s, similar lookalike events, etc.)
- Different bidding levels to determine top performers for MinROAS bidding