July 2020: How do we profitably scale Facebook user acquisition for mobile apps? In this two-part series, we provide a full and detailed explanation of how to achieve ROAS at sustained scale. For context on where Facebook user acquisition stands today, it’s important to note the major strides Facebook made in 2019 and 2020 toward simplifying and streamlining its ad platform. This is overwhelmingly a good thing because it:

  • Allows more people to use the platform effectively, regardless of their advertising skill
  • Saves UA managers valuable and very limited time
  • Produces more consistent results

And while Facebook has automated some levers, other opportunities are opening up. Creative strategy, development, and efficient A/B testing are now the primary drivers of ROAS and scale, and they are still best done by humans. This is where we come in as a trusted advisor in Facebook UA, across both creative and media buying strategy. Below, we explain how automation has affected UA performance and management, and how UA managers should evolve in this very new environment. It’s an exciting time to be in user acquisition, but it demands a great deal of agility.

Scale Facebook User Acquisition (Part I)

When we start working with a new client or creating a new media account, we always begin with a full audit of both the creatives and the media buying account configuration. This includes:

Creative Audit

We review the client’s best and worst-performing creatives and why they performed that way. The goal is not to replicate ideas that have already won or failed but to head in fresh directions and explore new areas. We always begin with a robust review of competitor ads. As the saying often attributed to Pablo Picasso goes, “Good artists copy; great artists steal.” Your competitors’ creative is failing at the same rate as everyone else’s: between 85% and 95% of new concepts fail to outperform the best creative in a portfolio. If an ad can’t outperform your best ad, you lose money running it. However, incorporating your competitors’ best concepts and creative trends gives you an endless supply of ideas they have already tested.
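
The cost of running a sub-par challenger can be sketched with simple spend-weighted arithmetic (the ROAS figures and spend split below are hypothetical, purely for illustration):

```python
# Hypothetical numbers: a control ad returning $1.30 per $1 spent vs. a
# challenger returning $1.05, with 20% of spend diverted to the challenger.
def blended_roas(control_roas, challenger_roas, challenger_share):
    """Spend-weighted ROAS of a portfolio split between two ads."""
    return control_roas * (1 - challenger_share) + challenger_roas * challenger_share

control, challenger, share = 1.30, 1.05, 0.20
print(round(blended_roas(control, challenger, share), 3))  # 1.25
```

Every point of spend shifted to a weaker challenger drags blended ROAS below the control, which is why a challenger that can’t beat the control costs real money.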


  • Special Offer: See your competitors’ top video creative and understand which ads drive their performance. We’re giving away FREE access to over 1 million competitive videos.

We then do a complete review of your assets to understand how easy or complex it will be to pull them apart and recombine the pieces in editing software to create new concepts.

If you have done market segmentation analysis and produced player profiles, we’ll use that information to refine our calls to action to appeal to your best audiences.

If you are looking for a collaborative approach, we offer a premium service called “Collaborative Creative.” Here, we put together a strategic creative plan with mini briefs that contain concept hypotheses and motivations and then walk you through the document for feedback.

Get killer Facebook creative! Facebook and Google AI are automating media buying, making creative the main driver of profitable UA. In our creative inspiration whitepaper, we lay out hundreds of examples of why Creative Trends, Competitive Analysis, Player Profiles, and Creative Testing are critical for UA success. Through extensive research, we have developed a new approach to creative testing that addresses the old adage that “the control always seems to win,” which we reveal in this guide to creative trends for mobile game advertisers.

Media Buying Audit

Review KPIs (Key Performance Indicators) and lifetime data, and verify whether the client is achieving their KPIs. If not, how far off goal are they?

Does the client have an MMP (Mobile Measurement Partner)? If so, does Facebook data align with their first-party data? If not, how much is it off?
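
As a rough sketch, the Facebook-vs-MMP check comes down to a percent-difference calculation against the first-party number (the install counts below are hypothetical):

```python
def attribution_gap(facebook_installs, mmp_installs):
    """Percent difference between Facebook-reported and MMP-reported installs,
    relative to the MMP (first-party) number."""
    return (facebook_installs - mmp_installs) / mmp_installs * 100

# Hypothetical daily numbers: Facebook claims 1,150 installs, the MMP logs 1,000.
gap = attribution_gap(1150, 1000)
print(f"Facebook is over-reporting by {gap:.1f}%")
```

A persistent gap usually points to attribution-window or view-through differences rather than a tracking bug, but either way it should be quantified before optimizing against Facebook’s numbers.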

Review creative, campaign, and audience performance – review which components are achieving or near KPI.

Review top-performing campaigns, audiences, and creative and highlight the top performers. What works and why?

Verify if the client has tested CBO (Campaign Budget Optimization) vs Non-CBO campaigns – confirm the performance of each.
  • CBO Campaigns allow Facebook’s algorithm to split out a set budget between the different ad sets instead of manually inputting these budgets at the ad set level.
Verify if the client is running DLO ads – confirm performance. Do particular languages monetize better, and how does that map to their geography targeting?
  • DLO (Dynamic Language Optimization) allows multiple languages in one ad unit which Facebook dynamically serves users based on their indicated language. 
Review bid types to determine what is working, including VO, MinROAS, AEO, MAI, etc.
  • AEO (App Event Optimization): Instructs Facebook to optimize for users most likely to commit the indicated event (Examples could be: level achieved, add to cart, registration completed, purchase). 
  • VO (Value Optimization): Tells Facebook to optimize toward users who are most likely to spend greater amounts over a longer period of time. Typically used to acquire the highest-LTV (Lifetime Value) users.
  • MinROAS: A function of VO, instructs Facebook to optimize towards users who are likely to generate a specified Return on Ad Spend within a specified timeframe.
  • MAI (Mobile App Install): Tells Facebook to optimize towards users that are most likely to install the app.
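
For intuition, a MinROAS bid is effectively a floor on window revenue divided by spend. A minimal sketch, with hypothetical spend and revenue figures and a 10% target:

```python
# A MinROAS bid of 0.10 (10%) asks Facebook for users expected to return
# at least 10% of spend within the optimization window (e.g. 7 days
# post-install). The numbers here are illustrative placeholders.
def meets_min_roas(spend, window_revenue, min_roas_bid):
    """True if the cohort's ROAS over the window clears the MinROAS floor."""
    return window_revenue / spend >= min_roas_bid

print(meets_min_roas(spend=1000, window_revenue=120, min_roas_bid=0.10))  # True
print(meets_min_roas(spend=1000, window_revenue=80,  min_roas_bid=0.10))  # False
```
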
Review performance by media type to determine whether videos, static images, carousels, or DCO (Dynamic Creative Optimization) ads are performing on the account.
Review operating system performance (Android vs iOS).
Uncover new opportunities where the client is under-testing or not testing at all.


Prepare to normalize account structures, balancing Facebook’s best practices of Structure for Scale (S4S) / Power 5 with our proven methodologies to achieve both scale and ROAS.
  • Structure for Scale’s main strategy is to streamline and minimize the number of campaigns and ad sets, targeting wider-reach audiences so the algorithm can most efficiently drive ROAS, scale, or other desired outcomes.
  • Concentrating spend on fewer ad sets allows Facebook to quickly accumulate events and exit the learning phase.
  • Maximize audience reach so Facebook algorithms find the most qualified users while minimizing the audience overlap. 
  • We minimize changes to campaign/ad set settings to avoid a “significant edit” that returns the ad set to the learning phase. We’ll often launch a new campaign with the desired changes to avoid affecting the original campaign.
  • We tend to follow four of the five “Power 5” best practices:
    • Auto Advanced Matching (typically set by the client if they want to sync customer data)
    • Account Simplification / Structure For Scale
    • Campaign Budget Optimization
    • Automatic Placements (Allows Facebook to choose where the ad will most efficiently be displayed across their ad networks)
    • Dynamic Ads – Infrequently used, but powerful for personalized product retargeting campaigns primarily for e-commerce clients.
Note: if there is not a lot of campaign data, or it’s a brand-new account with no history, we typically kick off with MAI and AEO campaigns to determine audience and creative performance.
  • Once top-performing audience groups and creatives are determined we move into VO bidding and MinROAS bidding.
  • Once bid strategies are determined, we start to review the performance of different demographics:
    • Age/Gender
    • Geos
    • OS Version
  • We will test CBO vs Non-CBO if we haven’t in the initial phase.
  • We’ll test DLO on top-performing audiences.
  • For MinROAS bidding, we start testing out different bidding levels to determine the top performer.
  • We will test new audiences based on top-performing audiences (higher lookalike %, similar lookalike events, etc.).

The Process of Media Buying

The simplest way to improve performance is to reduce daily spend. We generally see a correlation between lower daily spend and stronger ROAS. However, we’re generally tasked with improving ROAS without reducing spend, and the most common ways to do this are by producing new winners through creative testing (described above), audience expansion, changes to targeting, and optimization techniques.

Starting Point

To start, we use the client’s strongest elements (videos, images, ad copy, audiences) to establish baseline performance using our preferred campaign structure. While benchmarking, we write new copy and our creative studio begins ideation following the creative process described above.

Our Preferred Audience Structure for Scale:

We test the client’s available geos – usually Worldwide (WW), Tier 1 (T1), and the US – against broad, interest-group, and lookalike audiences.

Lookalike Audiences

We initially test narrower (higher-quality) 1%, 3%, and 5% audiences, analyze performance, and then expand to wider (less expensive) 10%, 15%, and 20% audiences to balance cost against Return on Ad Spend:

  • Lookalike audiences can range from 1-20% (Typically use 1,3,5,7,10,12,15,20%) 
  • Lookalike audiences are based on seed audiences of spend (value) or events committed that drive client KPIs (monetization, retention, LTV)
  • Seed Audience Examples
    • Purchase
    • Registration
    • Purchase greater than certain $ amount
    • Top 1% Purchasers
    • Most Active users
    • Top 10% Users
    • Most App Launch Users
    • Users who have reached a particular milestone
  • We create “MegaStacks”: groups of similar lookalike audiences in the same percentage range. This lets us build an expanded audience that is similar in intent. A MegaStack can include:
    • Similar audiences (purchases vs top purchasers vs purchases>9.99) 
    • Different lookback windows (7D,30D,90D, etc.)
    • Different Geos (if the audience is WW) 
  • Early Whales: use a revenue value relevant to the particular game, then create lists of users who meet that criterion. The values below are placeholders, but the idea is that the highest amount (in this case $10) may not be achievable for 1-day or even 2-day users, only for 7-day users. Once these lists are built, they are uploaded to Facebook for lookalike audience creation.
  • Ex: 
    • On day 1 all users with at least $2 of revenue
    • Day 2 all users with at least $2 of revenue
    • By day 7 all users with at least $2 of revenue
  • Increase $ amounts
    • On day 1 all users with at least $5 of revenue
    • Day 2 all users with at least $5 of revenue
    • By day 7 all users with at least $5 of revenue
  • Then:
    • On day 1 all users with at least $10 of revenue
    • Day 2 all users with at least $10 of revenue
    • By day 7 all users with at least $10 of revenue
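
The Early Whale tiers above can be sketched as a simple cohort filter over cumulative revenue (the data shape and thresholds are placeholders; real values depend on the game’s monetization curve):

```python
# Placeholder thresholds matching the $2/$5/$10 tiers above.
TIERS = {"whale_2": 2.0, "whale_5": 5.0, "whale_10": 10.0}

def early_whale_lists(users, day):
    """users: list of dicts with cumulative revenue per day, e.g.
    {"id": "u1", "revenue_by_day": {1: 2.5, 7: 12.0}}.
    Returns, per tier, the user ids whose cumulative revenue by `day`
    meets that tier's threshold."""
    lists = {}
    for tier, threshold in TIERS.items():
        lists[tier] = [u["id"] for u in users
                       if u["revenue_by_day"].get(day, 0.0) >= threshold]
    return lists

users = [
    {"id": "u1", "revenue_by_day": {1: 2.5, 7: 12.0}},
    {"id": "u2", "revenue_by_day": {1: 0.0, 7: 6.0}},
]
print(early_whale_lists(users, day=7))
# {'whale_2': ['u1', 'u2'], 'whale_5': ['u1', 'u2'], 'whale_10': ['u1']}
```

Each resulting list becomes a seed audience for a separate lookalike, so Facebook can model on progressively more valuable cohorts.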

Value-Based Manipulated Audiences

The client generates a list of users sorted by revenue, which is then manipulated so values go much higher and much lower relative to the average. This creates clear profiles of “high value” and “low value” users for Facebook.
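
A minimal sketch of that manipulation: stretch each user’s revenue away from the mean, clamping at zero so the uploaded values stay valid. The 3x stretch factor is an arbitrary placeholder:

```python
def stretch_values(revenues, factor=3.0):
    """Exaggerate the gap between high- and low-value users by stretching
    each value away from the mean; clamp at zero since value uploads
    cannot be negative."""
    mean = sum(revenues) / len(revenues)
    return [max(0.0, mean + (r - mean) * factor) for r in revenues]

revenues = [1.0, 2.0, 3.0, 10.0]   # mean = 4.0
print(stretch_values(revenues))     # [0.0, 0.0, 1.0, 22.0]
```

The stretched list gives Facebook a sharper signal about which users matter when it builds a value-based lookalike.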

  • Interest Groups – programmatically generated groupings of Facebook interest categories, games, products, pages, etc.
  • Broad Targeting – Unrestricted targeting of all users in the geographic area. This gives Facebook the most reach in identifying quality users but may be too wide to control costs.
  • Testing AEO and VO against the above audiences to determine which bidding strategy produces the best results based on client KPIs.
  • VO+MinROAS. We follow a set of best practices:
    • Start with 1% bids (unless the client has a very high ROAS goal) or a range of bids
    • Adjust the bid higher or lower depending on audience performance
    • Increase the bid if quality is too low
    • Decrease the bid if scale is too low
    • If performance is very high, decrease the bid to increase scale
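
Those rules can be sketched as a small adjustment function. With MinROAS, a higher bid tightens the ROAS floor (better quality, less scale) and a lower bid loosens it; the 10% step size is an assumption, as real moves depend on how far off goal the ad set is:

```python
def adjust_min_roas_bid(bid, quality_low=False, scale_low=False, performance_high=False):
    """Raising a MinROAS bid tightens the ROAS floor (better quality,
    less scale); lowering it loosens the floor (more volume)."""
    step = 0.10  # assumed adjustment step
    if quality_low:
        return round(bid * (1 + step), 4)   # tighten the floor
    if scale_low or performance_high:
        return round(bid * (1 - step), 4)   # loosen the floor to buy volume
    return bid

print(adjust_min_roas_bid(0.10, quality_low=True))   # 0.11
print(adjust_min_roas_bid(0.10, scale_low=True))     # 0.09
```
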
  • Our preferred audience/campaign structure allows us to quickly determine which Geos/Audiences and Bidding optimizations will achieve client goals from both a cost and scalability perspective. 
  • Our structure delivers more precise results with fewer variables within each campaign. Other agencies/media buyers may change bid strategy or audiences on the fly within campaigns as a quick fix, whereas we split out variables to identify true performance.
    • Split audiences based on similar events (purchases, top purchases) 
    • Separate broad and interest campaigns
    • Divide out campaigns from VO/AEO/MinROAS/MAI
    • Split out different country targeting
    • Separate out different conversion window targeting
    • All so that we can understand specifically what causes a campaign to perform well


Tips for Additional Audience Expansion

By reaching net-new users through audience expansion, we can significantly improve performance. Outside of creative testing, this is the most common method of improving KPIs. In today’s market, lookalike audiences generated from custom audiences commonly outperform interest groups and usually broad targeting (for IAP games), and there is no limit to the number of lookalike audiences we can create. For instance, we can build custom and lookalike audiences from different events like app starts, purchases, tutorial completions, and revenue.

Also, for each event, we can create custom and lookalike audiences based on the top 1% of users, the top 10% of users, the top 25% of users, etc. In addition, for each custom audience, we can create different lookalikes for users in the past 7 days, past 30 days, past 60 days, etc. And finally, for each custom audience, we can create lookalikes that are top 1% affinity, top 2% affinity, top 3% affinity, etc. We see strong performance when creating a highly diverse set of custom audiences and then targeting the top 1% affinity across associated lookalike audiences.
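
The combinatorics work in our favor: a few small lists of events, top-user tiers, lookback windows, and affinity tiers multiply into a large audience pool. A sketch with illustrative values (the real lists come from whichever custom audiences exist in the ad account):

```python
from itertools import product

# Illustrative dimensions, mirroring the examples in the text.
events = ["app_start", "purchase", "tutorial_complete"]
top_user_pcts = [1, 10, 25]       # top X% of users for each event
lookbacks_days = [7, 30, 60]      # recency windows for the seed audience
affinity_pcts = [1, 2, 3]         # lookalike affinity tiers

audiences = [
    f"LAL{aff}%_{event}_top{top}%_{days}D"
    for event, top, days, aff in product(events, top_user_pcts, lookbacks_days, affinity_pcts)
]

print(len(audiences))  # 81 distinct lookalike audiences from four short lists
```

Even these modest lists yield 81 candidate audiences, which is why a diverse custom-audience set plus top-affinity targeting rarely runs out of things to test.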

Audience Expansion Through FB Analytics

In addition to providing app insights, Facebook Analytics allows for the creation of “non-standard” audiences, one example being “rule-based” audiences. Rule-based audiences can be more precisely defined than standard audiences because of the specific actions you can target on the Facebook Analytics platform. The following example shows iOS users who launched the app more than 20 times and who made a purchase within the last 28 days. Currently, AdRules does not support all of the options listed in Facebook Analytics, so there could be large performance gains from getting creative with these types of audiences.
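
The rule described above (iOS users, more than 20 app launches, a purchase in the last 28 days) can be sketched as a filter over an event export; the data shape here is a stand-in for whatever analytics export is available:

```python
from datetime import date, timedelta

def rule_based_audience(users, today):
    """Return ids of iOS users with >20 app launches and at least one
    purchase within the last 28 days."""
    cutoff = today - timedelta(days=28)
    return [
        u["id"] for u in users
        if u["os"] == "iOS"
        and u["app_launches"] > 20
        and any(p >= cutoff for p in u["purchase_dates"])
    ]

users = [
    {"id": "a", "os": "iOS", "app_launches": 35,
     "purchase_dates": [date(2020, 7, 1)]},
    {"id": "b", "os": "Android", "app_launches": 50,
     "purchase_dates": [date(2020, 7, 1)]},
    {"id": "c", "os": "iOS", "app_launches": 12,
     "purchase_dates": [date(2020, 7, 1)]},
]
print(rule_based_audience(users, today=date(2020, 7, 15)))  # ['a']
```
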


Then, we optimize the media buying account for its monetization strategy – based on Ads (IAA) vs Purchases (IAP):


(IAA) apps monetize with in-app ads. Generally, the longer a user remains in the game, the more revenue (ad views) they generate, so the goal is to find high-retention users at the lowest possible acquisition cost. Targeting is designed to be low-cost / low-CPM – usually App Install optimization with broad audiences, wide lookalikes (10%–20%), and interest groups. Because basic App Install campaigns lack optimization levers (AEO, VO, etc.), we instead optimize on top-performing age, gender, geo, language, device (Android/iOS), and platform placement (IG, FB, FAN, etc.).


(IAP) apps monetize with in-app purchases. The goal is to acquire high-ROAS users, typically by pairing AEO and VO bidding with lookalike campaigns.

Testing Structure For Scale Campaigns
  • We kick off campaign creation by testing broad campaigns and lookalike campaigns
    • Initially, if there is enough data, we test AEO and VO against each other to find the better performer
    • If we start to see strong performance in VO, we start testing MinROAS bidding
  • We kick off testing with WW and US campaigns. Typically this is because the US has historically been a consistent performer, and WW campaigns give us data on other countries for further testing.
  • As we continue to run campaigns and identify top-performing countries, we create lookalikes for those specific countries and test them against worldwide.
Testing Structure For Scale Optimization
  • Once we get a winning bidding strategy and audience, we test different levers of optimization like:
    • DLO vs Non-DLO
    • CBO vs Non-CBO
    • Multiple Ads per Ad Set vs One Ad Per Ad Set
    • Different MinROAS Bidding Levels
    • D1 Conversion Window vs D7 Conversion Window 
  • We also review different breakdowns to determine if top-performing breakdowns could be specifically targeted on a new campaign. These breakdowns include:
    • Age
    • Gender
    • Geographies 
    • OS Version
    • Publisher Platform
  • While we are testing different campaign builds and audiences, we are consistently putting new creative through Phase 1, Phase 2, and Phase 3 testing. Once we have determined winners, we introduce those ads into our top-performing campaigns to measure their performance against control creative.

Fluctuations in ROAS and Scale

You’ve tried everything but are still having issues! Check out how we diagnose performance fluctuations.

With Facebook advertising, the only constant is change. Performance commonly fluctuates as creative fatigues, audiences saturate, marketplace conditions change, and Facebook updates algorithms. When we notice an account’s performance fluctuating, the next step is to determine why performance is fluctuating.

While each performance fluctuation is unique, there are four common questions we can ask in the process of attempting to diagnose the cause of performance fluctuation:

1. What changes did we make that could have caused volatility?

When performance fluctuates, the first question we need to answer is whether we made any changes that would cause performance fluctuation. The common changes that drive significant fluctuation are new ad launches and major shifts to traffic from pausing ads or adjusting budgets.

To easily identify whether new ad builds are the cause of volatility, we can view ad build performance in advanced reporting and then filter out recent ad builds to determine if performance was “normal” if we ignore the recent ad launches. Outside of understanding the impact of recent ad builds, we can compare different date ranges in our reports for any object to identify major shifts in traffic allocation. For instance, we can compare video performance for yesterday vs. two days ago to quickly determine whether any major shifts in traffic distribution by video have occurred. 

2. What changes occurred within traffic distribution?

If we can’t identify any major shifts in traffic that we caused, the next step is to identify if any shifts in traffic were caused by changes outside of our control. For instance, our top ads may become disapproved and stop spending, which could reduce the volume and hurt performance. The common shifts in contribution % that significantly impact performance are shifts by the ad, by creative, and by the audience. It’s helpful to use the date comparison function in advanced reporting to compare traffic contribution % over different time periods for ads, creative, audiences, and demographics.
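
The date-comparison check can be sketched as a contribution-share diff by ad; the 10-point alert threshold below is an arbitrary placeholder:

```python
def contribution_shift(spend_then, spend_now, threshold_pts=10.0):
    """Compare each ad's share of total spend across two periods and flag
    ads whose contribution % moved by at least `threshold_pts` points."""
    total_then, total_now = sum(spend_then.values()), sum(spend_now.values())
    shifts = {}
    for ad in sorted(set(spend_then) | set(spend_now)):
        pct_then = spend_then.get(ad, 0) / total_then * 100
        pct_now = spend_now.get(ad, 0) / total_now * 100
        if abs(pct_now - pct_then) >= threshold_pts:
            shifts[ad] = round(pct_now - pct_then, 1)
    return shifts

yesterday = {"ad_1": 500, "ad_2": 500}
today = {"ad_1": 900, "ad_2": 100}   # ad_1 jumped from 50% to 90% of spend
print(contribution_shift(yesterday, today))  # {'ad_1': 40.0, 'ad_2': -40.0}
```

A large swing like this, with no change on our side, points to an external cause such as a disapproved ad or an auction shift.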

3. Assuming we can’t point to any major shifts in traffic allocation within Facebook, were there any product or tracking changes on the client-side?

If we can’t find any major shifts in Facebook traffic distribution, and performance is fluctuating for all or most high-volume ads, then we need to understand whether anything changed with the client’s product (app or conversion funnel) or tracking. With apps, we can check the version history in the app store or on sites like App Annie to determine whether any releases correlate with the performance fluctuation. Because tracking changes can be made without an app store release, release history is not definitive, so it’s appropriate to ask the client whether anything changed even if we see no release that correlates with the fluctuation.

For web clients, the first step should be to visit the destination URL and look for any obvious changes to the site or to the Facebook pixel. The Facebook Pixel Helper Chrome extension is very useful here. If we notice app store releases or changes to a website, we should discuss them with the client and determine whether they are the cause of the performance fluctuation. Comparing Facebook performance to performance across other high-volume traffic sources can help identify whether the issue is global and appears to be caused by a product change.

4. Assuming we cannot point to any changes on Facebook or the product, is performance fluctuation caused by macro events that are outside of our control?

If we’re unable to identify shifts in Facebook traffic or in the client’s product, we may be dealing with macro events outside of our control. For instance, competition on Facebook may intensify at the end of the month, the quarter, or the year. We also see major shifts in performance around national holidays and seasonal events like summer, when school is out and behavior changes. There are also events like the start of the NFL season, when daily fantasy sports companies disrupt the Facebook marketplace by increasing their aggressiveness. We can also look across our portfolio to determine whether the fluctuation is client-specific or global. If we see performance fluctuation across most or all clients during the same time frame, we can be more confident that the cause is out of our control.

Lastly, competitor app releases can also be a large factor in performance fluctuations. For example, performance for an FPS mobile game may dip after the release of a major new title. If we have completed a thorough analysis of Facebook changes and product changes and still can’t explain the fluctuation, we need to work with the client to decide whether to temporarily reduce spend until the fluctuation normalizes, or to increase testing in an effort to produce a win that offsets it. In most cases, it makes sense to temporarily reduce spend, because new ads launched into a volatile marketplace often perform poorly regardless of creative and audience.

Still looking for more Facebook & Google user acquisition info?

To learn more about how Consumer Acquisition can support your creative and media buying needs, contact us: https://www.consumeracquisition.com/contact-us/

As part of Brainlabs, we now offer: Paid Search | SEO | Programmatic


Read our Creative & UA Best Practices for Facebook, Google, TikTok & Snap ads.
