Although automation has its place in everyday UA operations, there are still many distinct and advanced levers to pull. Creative strategy and manipulating lookalike audiences have become the primary drivers of ROAS.
And, as advertisers grow their business and exceed $300,000 per month, we have found that advanced UA techniques change quite drastically. Here we provide our best practices to achieve both UA scale and profitability. We like to call this our “UA Masterclass”.
The best way to improve performance is to test new media buying ideas and structures, and you should be running multiple tests simultaneously. Some tests can be launched immediately, while other tests (like producing new video concepts) require time and creative resources.
Here’s a list of items you can test right now, without waiting for help from anyone else.
Text, headlines, and calls-to-action can be tested immediately without creative resources:
Audience testing has a significant impact on performance and there are always audience tests that we can run:
By creating new custom audiences, we can reach net new users through retargeting and acquisition
There are many different ways to create new lookalike audiences
Interest groups commonly perform worse than lookalike audiences, but they should be tested for all clients because some interest groups perform well
Similar to interest groups, Facebook allows for targeting certain behaviors and job titles. These also generally perform worse than LAL audiences but should be tested.
Facebook generally charges a lower CPM for larger audiences. So running ads with no interest groups or lookalike audiences can drive down costs
Facebook generally charges a lower CPM for larger audiences. So running ads with broader age ranges and both genders can outperform ads that are more narrowly targeted
Depending on the client, targeting newer vs. older hardware and software could boost performance, due to differing technological requirements between apps.
For clients that run in multiple countries, there are many ways to target locations, and performance varies by client.
Currently, Facebook is able to whitelist platform-specific targeting for mobile gaming clients on a case-by-case basis.
Sample language and country targeting test for a gaming client:
Ad copy has a significant performance impact and is also a quick process. So the ROI from time spent is often very strong. Effective ad copy generally tells the story clearly and succinctly, but many different writing styles are effective. When writing ad copy, it’s helpful to first outline the themes of ad copy you want to test. Then write variations for each theme with different writing styles. By testing a variety of themes, and then testing a variety of variations of writing styles for each theme, you can quickly learn which themes perform best before optimizing messaging within a theme.
To identify appropriate themes to test, look at the messaging in the conversion funnel (landing pages, app store pages) for inspiration. There are some themes that can generally be applied to all businesses. Also, some themes will be genre-specific. Below is a list of themes that can be generally applied to all genres of businesses:
For each theme, you should test multiple variations of ad copy, using various writing styles, and spinning out slight variations. The best writing style is dependent on the audience, so many styles should be tested.
Use one or more ad copy styles for each ad copy variation. These are common styles that perform well:
Building custom and lookalike audiences is a constant part of audience expansion. By reaching net new users through audience expansion, you can significantly improve CPA / ROAS. Outside of creative testing, this is the most common method of significantly improving CPA / ROAS.
There is no limit to the volume of custom and lookalike audiences you can create. You can create custom audiences based on different events like app starts, purchases, tutorial completions, revenue, etc. For each event:
We see strong performance when creating a highly diverse set of custom audiences and then targeting the top 1% – 3% affinity across the various audiences. For audiences with an overlap above 40%, it can be beneficial to group them in a single ad set, creating a “lookalike stack.” For lookalike audiences without high overlap, they should be tested as individual audiences as well as lookalike stacks.
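The stacking logic above can be sketched as a simple greedy pass over pairwise overlaps. The audience names and overlap figures below are hypothetical examples; in practice, Facebook's Audience Overlap tool would supply the real percentages.

```python
# Greedy grouping of lookalike audiences into "stacks" when pairwise
# overlap exceeds a threshold (40% per the guideline above).
# Overlaps are fractions, e.g. 0.55 == 55% overlap.

def build_lookalike_stacks(audiences, overlap, threshold=0.40):
    """Group audiences whose pairwise overlap exceeds `threshold`.

    audiences: list of audience names
    overlap:   dict mapping frozenset({a, b}) -> overlap fraction
    Returns a list of stacks; audiences with no high-overlap partner
    end up as single-audience stacks to be tested individually.
    """
    stacks = []
    for aud in audiences:
        placed = False
        for stack in stacks:
            # Join an existing stack only if this audience overlaps
            # heavily with every member already in it.
            if all(overlap.get(frozenset({aud, m}), 0.0) > threshold for m in stack):
                stack.append(aud)
                placed = True
                break
        if not placed:
            stacks.append([aud])
    return stacks

# Hypothetical overlaps between three lookalike audiences:
overlaps = {
    frozenset({"purchasers_1pct", "revenue_top25_1pct"}): 0.55,
    frozenset({"purchasers_1pct", "installs_1pct"}): 0.20,
    frozenset({"revenue_top25_1pct", "installs_1pct"}): 0.15,
}
stacks = build_lookalike_stacks(
    ["purchasers_1pct", "revenue_top25_1pct", "installs_1pct"], overlaps
)
```

Here the two purchase-based audiences overlap at 55% and would be grouped into one stack, while the install lookalike stays on its own.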
For location, worldwide audiences generally perform well whether we're targeting individual countries, groups of countries, or worldwide. Country-specific audiences generally perform well for the countries they were built from, but they generally do not perform well when targeting other countries or worldwide.
Value-based audiences work well (e.g. top 25% of purchasers), and you can manipulate purchase values to create different lookalike audiences. The theory is that Facebook has an easier time finding whales when there's a greater variance between the highest-value and lowest-value purchasers. We manipulate audiences by increasing revenue values for top payers and decreasing revenue values for low-value payers. Below is an example of 12 different custom audiences that were generated with various revenue increases and decreases for high-value and low-value payers.
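The value-manipulation idea can be sketched as a simple transform over per-user revenue before the audience is uploaded. The percentile cutoffs and multipliers below are hypothetical, not prescribed values; the 12-audience example above would vary exactly these parameters.

```python
# Widen the variance between high- and low-value payers before building
# a value-based custom audience: boost revenue for top payers and
# shrink it for low-value payers. Cutoffs/multipliers are examples.

def reweight_revenue(user_revenue, top_multiplier=2.0, low_multiplier=0.5,
                     top_cutoff=0.75, low_cutoff=0.25):
    """Return {user: adjusted_revenue} with top payers boosted and
    low payers discounted, based on revenue-rank percentiles."""
    ranked = sorted(user_revenue, key=user_revenue.get)  # low to high
    n = len(ranked)
    adjusted = {}
    for i, user in enumerate(ranked):
        pct = (i + 1) / n  # revenue-rank percentile
        rev = user_revenue[user]
        if pct > top_cutoff:
            adjusted[user] = rev * top_multiplier   # exaggerate whales
        elif pct <= low_cutoff:
            adjusted[user] = rev * low_multiplier   # dampen minnows
        else:
            adjusted[user] = rev
    return adjusted

# Hypothetical per-user revenue:
revenue = {"user_a": 1.0, "user_b": 5.0, "user_c": 10.0, "user_d": 100.0}
adjusted = reweight_revenue(revenue)
```

Generating several audiences from the same purchase data with different multipliers gives Facebook distinct value distributions to seed lookalikes from.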
You can also create additional types of audiences through Facebook Analytics. This can be useful for 1) new audiences to test that may not be available through other tools and 2) analytics to gain insights into FB-specific traffic quality.
The simplest way to improve CPA and/or ROAS is to reduce daily spend, as we generally see a correlation between lower daily spend and stronger CPA / ROAS. However, UA teams are generally tasked with improving CPA / ROAS without reducing spend, and the most common ways to do this are by producing new winners through creative testing, audience expansion, changes to targeting, and optimization techniques.
Testing entirely new concepts with a different look and feel can improve ROAS massively.
By reaching net new users from audience expansion, we are able to significantly improve performance. Outside of creative testing, this is the most common method of improving KPIs. In today’s market, lookalike audiences that are generated from custom audiences commonly outperform interest groups and usually broad targeting (IAP games), and there is no limit to the volume of lookalike audiences we can create.
In addition to leveraging Facebook Analytics to gather app insights, Facebook also allows for the creation of “non-standard” audiences through Facebook Analytics. One example of this can be seen through the creation of “rule-based” audiences. Rule-based audiences can be more defined than standard audiences due to the specific actions one can target on the FB Analytics platform. The following example shows data for iOS users who launched this app more than 20 times and also made a purchase within the last 28 days.
Changes to age, gender, location, placements, and devices can all have positive impacts on CPA / ROAS, and targeting tests should be run early so that future creative testing uses efficient targeting.
Facebook has rolled out a variety of new products over the past couple of years. The performance of these products is often inconsistent. Due to performance variance, these items should be tested periodically, but established best practices should generally be used as a starting point.
App event optimization is used to optimize for the lowest cost per app event, where value optimization is used to optimize for the highest revenue per event. Value optimization generally carries both a higher CPA and a higher ROAS than app event optimization, as Facebook is effective at identifying “whales” (high-revenue purchasers) through value optimization targeting.
Currently, Facebook offers advertisers the option to optimize toward a 1 day or 7-day post-click conversion window for AEO and VO campaigns. Generally, 1-day conversion windows net out higher ROAS with higher costs, while 7-day conversion windows typically bring in lower ROAS with higher volume. However, it is important to test both options occasionally to validate account-specific performance.
Min ROAS bidding is available only for VO. Min ROAS bids allow the advertiser to “bid” preferred ROAS percentages, according to the conversion window selected during ad set setup. Bidding here is slightly different in that the advertiser is choosing the preferred return on spend instead of a “cost per X.” Generally, it is recommended to cast a wide net of bids on each conversion window to optimize toward a “sweet spot” of quality and volume.
Dynamic language optimization allows us to input ads for various languages in the same ad, and Facebook will dynamically serve ads with the appropriate language to the appropriate audience.
Campaign budget optimization allows us to use a single campaign budget that governs all ad sets within a campaign. When using a single CBO campaign budget, Facebook then adjusts budgets for each ad set and shifts more spend to ad sets with better performance. This is different from non-CBO where each individual ad set has its own budget that is managed separately.
Dynamic creative optimization allows us to input multiple variations of each creative element (video, text, headline, etc.). Facebook will automatically randomize the creative combinations and begin serving more impressions to the creative combinations that perform best.
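DCO is effectively a combinatorial test across creative elements: the number of combinations Facebook can serve is the product of the variation counts per element. A minimal sketch (the creative names below are made up for illustration):

```python
from itertools import product

# Dynamic creative takes several variations of each element and lets
# Facebook serve every combination, shifting impressions to winners.
# Enumerating the combinations shows how quickly the test space grows.

def creative_combinations(videos, texts, headlines):
    """Return every (video, text, headline) combination DCO could serve."""
    return list(product(videos, texts, headlines))

combos = creative_combinations(
    videos=["gameplay_15s", "testimonial_30s"],
    texts=["Play free today", "Join 1M players"],
    headlines=["Download now", "Limited-time event"],
)
# 2 videos x 2 texts x 2 headlines = 8 combinations
```

Even modest variation counts multiply quickly, which is why DCO is useful: it explores this space without requiring a separate ad per combination.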
By pausing underperforming ads, we can shift spend to stronger ads and portfolio CPA / ROAS will generally improve. To offset volume losses by pausing underperforming ads, we need to either increase budgets for active ads or launch new ads.
By adjusting budgets to reduce spend from underperforming ads and shift spend to top-performing ads, we can balance a portfolio of ads to achieve better results. With each budget adjustment, a significant edit is triggered and the learning phase is reset. So there is some risk to increasing budgets for top ad sets. Making smaller changes in absolute dollars and percentages will help limit the negative impact of triggering a significant edit.
For ads that are using manual bids, we can change bids to improve CPA / ROAS. Bids tend to alter volume more than CPA; for instance, a bid decrease from $12 to $10 may cause spend to fall by 50% while CPA falls by only 10%. We generally recommend running higher bids, to gain access to the highest-quality inventory, and then using budgets to control CPA / ROAS. With a high bid in place, we would expect lower budgets to carry better CPA / ROAS than higher budgets. It’s counter-intuitive, but low bids could reduce access to high-quality impressions and actually hurt CPA / ROAS.
There is a general rule that CPA will increase and/or ROAS will decrease as we scale spend on Facebook. This is true both for aggregate spend over time (audiences get saturated and creative fatigues as overall spend increases), and for scaling spend overnight (Facebook often reaches into lower-quality inventory to fulfill inventory for incremental budgets). We are able to gain efficiencies vs. the market by taking intelligent approaches to scale spend with the goal of protecting CPA/ROAS.
The two most common methods of scaling spend overnight are increasing budgets for existing ads and launching new ads.
When increasing budgets for existing ads, there are two primary reasons why CPA increases / ROAS decreases. The first reason is that anytime a budget is adjusted, a “significant edit” is triggered, and a significant edit causes ads to go into the “learning phase.” When an ad re-enters the learning phase, backend history is reset, and the ad faces temporary volatility as that history builds back up. Because of this, our goal is to limit the frequency and volume of significant edits. The second reason is that Facebook will reach into lower-quality inventory to fulfill incremental budgets, so the quality of users generally drops as individual ad set or campaign budgets are increased. We generally see better performance when running a higher volume of ad sets at a lower average budget per ad set.
While making significant edits can cause performance volatility, it’s a necessary part of scaling, and it’s OK to make significant edits as long as you limit their frequency and analyze their impact. We often need to decide whether to make a significant edit to a top ad vs. increasing volume by launching new ads, and the best approach can vary depending on the account. For instance, if a relatively low volume of ads is responsible for most of the account’s strong performance, we may decide not to edit these ads (to protect their performance) and instead focus on launching new ads to capture more volume.
Or, if there is a high volume of ads performing well, there’s less risk to portfolio performance in triggering a significant edit for a single ad, so we would be more willing to trigger one for a top ad. When making a significant edit, the best practice is to not change budgets or bids by more than 30% at one time.
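The 30% guideline can be enforced with a small helper that clamps any proposed budget or bid change. This is a sketch; the threshold comes from the rule above and could be tuned per account.

```python
# Clamp a proposed budget or bid change to at most +/-30% of the
# current value, per the significant-edit guideline above.

def clamp_change(current, proposed, max_pct=0.30):
    """Return `proposed` limited to within max_pct of `current`."""
    upper = current * (1 + max_pct)
    lower = current * (1 - max_pct)
    return max(lower, min(proposed, upper))
```

For example, doubling a $100 daily budget in one step would be clamped to $130, spreading the scale-up over several smaller significant edits instead of one large one.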
New ads begin in the learning phase and carry a higher CPM than mature ads, so launching a high volume of new ads can hurt performance. Launching new ads generally carries more risk than editing existing ads, but the rewards can be much greater. For instance, if new ad launches are focused on expanding into new audiences and reaching net-new users, we may see CPA / ROAS improve in aggregate for new ads. Or, if new ad launches are focused on creative testing and we produce a winner, then CPA / ROAS may improve in aggregate for new ads. In general, new ad launches should be focused on doing something different, and most commonly this would be different audiences, different creative, different targeting, or different campaign structure/budget/bid strategies.
Maintaining performance at a high scale has different challenges than scaling spend overnight, but the optimization techniques are similar. Creative fatigue and audience saturation are the main drivers of performance degradation as advertiser spend increases, and these challenges are more pronounced at a high scale than when increasing spend overnight.
Users repeatedly see the same creative over time, and click-through rates / general performance decline as the frequency of impressions per user increases. Fatigue occurs at a faster rate when spend is higher and when we have fewer high-performing creative assets. For instance, if we spend $1,000,000 with 3 high-performing video concepts for client A and $1,000,000 with 6 high-performing video concepts for client B, client A’s creative will fatigue roughly twice as fast as client B’s, because the impression volume for each of client A’s creative concepts will be double that of client B’s.
There is a benefit to having a higher volume of high-performing creative, and this means that we need to increase the frequency and volume of creative testing as we increase spend. A client spending $1,000,000 per month needs roughly 10X the creative testing of a client at $100,000 per month to maintain the same rate of creative fatigue.
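The arithmetic behind the fatigue comparison is straightforward: impressions per concept scale with spend divided by the number of high-performing concepts. The CPM below is a hypothetical constant used only to make the example concrete.

```python
# Impressions per creative concept as a rough proxy for fatigue rate:
# the same spend spread over fewer concepts fatigues each concept faster.

def impressions_per_concept(spend, num_concepts, cpm=10.0):
    """Impressions each concept absorbs, assuming spend is split evenly.

    cpm is cost per 1,000 impressions (a hypothetical constant here).
    """
    total_impressions = spend / cpm * 1000
    return total_impressions / num_concepts

client_a = impressions_per_concept(1_000_000, 3)  # 3 winning concepts
client_b = impressions_per_concept(1_000_000, 6)  # 6 winning concepts
```

At equal spend, each of client A's concepts absorbs twice the impressions of client B's, which is the "fatigues roughly twice as fast" claim above in numeric form.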
Users are more likely to click when they see an ad for a product for the first time, so performance is strongest the first time we show ads to a new audience. As the frequency of impressions increases for an audience, users become less likely to click, and simultaneously the highest-value users are effectively removed from our audiences as they convert. So there are multiple reasons why performance drops as audience impression frequency increases.
We combat the performance degradation from audience saturation by continually testing new audiences that are designed to reach net-new users that have not seen our ads. Similar to the benefit of having a higher volume of high-performing creative assets, we see a slower rate of fatigue for clients that have a higher volume of high-performing audiences. With twice as many high-performing audiences, we would expect the performance degradation to occur at roughly 50% of the rate, depending on the amount of overlap between audiences.
With Facebook advertising, the only constant is change. Performance commonly fluctuates as creative fatigues, audiences saturate, marketplace conditions change, and Facebook updates algorithms. When we notice an account’s performance fluctuating, the next step is to determine why performance is fluctuating. While each performance fluctuation is unique, there are four common questions we can ask in the process of attempting to diagnose the cause of performance fluctuation:
The common changes that drive significant fluctuation are new ad launches and major shifts to traffic from pausing ads or adjusting budgets. To easily identify whether new ad builds are the cause of volatility, we can view ad build performance in advanced reporting and then filter out recent ad builds to determine if performance was “normal” if we ignore the recent ad launches. Outside of understanding the impact of recent ad builds, we can compare different date ranges in our reports for any object to identify major shifts in traffic allocation. For instance, we can compare video performance for yesterday vs. two days ago to quickly determine whether any major shifts in traffic distribution by video have occurred.
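The date-range comparison described above can be sketched without any special tooling: compute each video's share of spend on two days and flag large shifts. The video names, spend figures, and the 10-point threshold below are made-up examples.

```python
# Compare each video's share of spend between two days and flag any
# video whose contribution shifted by more than a chosen threshold.

def contribution_shift(day1_spend, day2_spend, threshold=0.10):
    """Return {video: (share_day1, share_day2)} for videos whose share
    of total spend moved by more than `threshold` (absolute points)."""
    total1 = sum(day1_spend.values())
    total2 = sum(day2_spend.values())
    videos = set(day1_spend) | set(day2_spend)
    shifts = {}
    for v in videos:
        s1 = day1_spend.get(v, 0) / total1
        s2 = day2_spend.get(v, 0) / total2
        if abs(s2 - s1) > threshold:
            shifts[v] = (round(s1, 2), round(s2, 2))
    return shifts

# Hypothetical spend by video for two consecutive days:
day1 = {"video_a": 500, "video_b": 500}
day2 = {"video_a": 800, "video_b": 200}
shifts = contribution_shift(day1, day2)
```

A flagged video tells us traffic distribution changed, which narrows the diagnosis before looking for external causes.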
If we can’t identify any major shifts in traffic that we caused, the next step is to identify whether any shifts in traffic were caused by changes outside of our control. For instance, our top ads may become disapproved and stop spending, which could reduce volume and hurt performance. The common shifts in contribution % that significantly impact performance are shifts by ad, by creative, and by audience. It’s helpful to use the date comparison function in advanced reporting to compare traffic contribution % over different time periods for ads, creative, audiences, and demographics.
If we can’t find any major shifts in Facebook traffic distribution, and performance is fluctuating for all or most high-volume ads, then we need to understand whether anything changed with a client’s product (app or conversion funnel) or whether any changes to tracking were made. With apps, we can see the version history in the app store or on sites like App Annie and determine whether any releases correlate with performance fluctuations. Because changes to an app’s tracking can be made without an app store release, releases are not a definitive answer for whether changes were made, so it’s appropriate to ask the client if anything changed even if we don’t see any releases that correlate with the fluctuation.
For web clients, the first step should be to visit the destination URL and look for any obvious changes to the site or to the Facebook pixel; the Facebook Pixel Helper Chrome extension is very useful here. If we notice app store releases or changes to a website, we should discuss the changes with the client and determine whether they are the cause of the performance fluctuation. Comparing Facebook performance to performance across other high-volume traffic sources can help identify whether the issue is global and appears to be caused by a product change.
If we’re unable to identify shifts in Facebook traffic or a client’s product, then we may be dealing with macro events that are outside of our control. For instance, competition on Facebook may shift at the end of the month, quarter, and year. We also see major shifts in performance around national holidays and seasonal events, like summer when school is out and behavior changes.
There are also events like the start of NFL season when daily fantasy sports companies disrupt the Facebook marketplace by increasing aggressiveness. We can also look across our portfolio to determine if the fluctuation is client-specific or global. If we see performance fluctuation with most / all clients during the same time frame, then we will have higher confidence that we’re not in control of the fluctuation.
Lastly, the release of competitor apps can also be a large factor in performance fluctuations. If we’ve completed a thorough analysis of Facebook changes and product changes and still can’t explain the fluctuation, then we need to work with clients to determine whether we should temporarily reduce spend until the fluctuation normalizes, or whether we should increase testing in an effort to produce a win that offsets it. In most cases, it makes sense to temporarily reduce spend, because new ads launched during volatile marketplaces often perform poorly, regardless of creative and audiences.
Artificial intelligence will play a key role (if not THE role) in the future of user acquisition.
Even now, AI can run many key components of UA more effectively and more efficiently than humans can. So instead of spending time and overhead on quantitative tasks, UA managers should pivot into creative strategy, development, and testing if they want to keep their jobs.
This can be an exciting shift into UA management 2.0 if you’re agile enough to keep pace with all the changes.
Note that we do anticipate AI will eventually scale creative production beyond human capacity, learning to create videos and develop copy in greater volume than people can. But we are still years away from that reality. At least for now, creative is king, and humans can still do it best.