Advanced UA Techniques For Facebook (Part II)
- by Brian Bowman | July 29, 2020
This is the second post in our two-part Facebook user acquisition series. Consider this our Facebook UA Master Class! We are sharing our advanced UA techniques and best practices for achieving scale and profitability. If you missed our first post on how to profitably scale Facebook UA – here ya go!
User acquisition has been a numbers game for a long time. Back in the old days, like 2017, the numbers were largely managed by people. UA managers would edit bids, budgets, and placements and tweak audience targeting numerous times a day to game the algorithm to maximize return on advertising spend (ROAS).
All that changed in February 2018. Almost overnight, Google took back control all at once, then doled a little bit back to us. Facebook took a different approach, starting small, and has been incrementally nudging us all toward full automation.
This has had a profound effect on mobile app advertising and user acquisition managers. We have far fewer levers left to outperform the AI algorithms than we used to have. And yes, the automation has simplified account management for budgets less than $300,000 per month.
But while some things are being taken away, other opportunities are opening. Creative strategy and manipulating lookalike audiences have become the primary drivers of ROAS.
And, as advertisers grow their business and exceed $300,000 per month, we have found that advanced UA techniques change quite drastically. Here we provide our best practices to achieve both UA scale and profitability. We like to call this our “UA Masterclass”.
Advanced Testing Strategies When Resources are Limited
The best way to improve performance is to test new media buying ideas and structures, and you should be running multiple tests simultaneously. Some tests can be launched immediately, while others (like producing new video concepts) require time and creative resources.
Here is a list of items you can test right now, without waiting for help from anyone else:
Text, headlines, and calls-to-action can be tested immediately without creative resources.
The text has a strong impact on performance and should be tested regularly.
Headlines have an impact on performance, but the impact is not as strong as the impact of the text. They should be tested at a 1:3 ratio vs text. This means we should test 10 headlines for every 30 text variations that are tested.
Call-to-action buttons are predefined by Facebook, so the options are limited. CTAs should be tested early for each client, but infrequently once a clear winner is established.
Audience testing has a significant impact on performance and there are always audience tests that we can run.
By creating new custom audiences, we can reach net new users through retargeting and acquisition.
- For retargeting
- For lookalike creation
There are many ways to create new lookalike audiences.
- Test stacks of lookalikes
- Expand into a high volume of seeds
- Test country-specific vs. worldwide
- Expand into a broader affinity %
- Test nested vs. not nested
Interest groups commonly perform worse than lookalike audiences but should be tested for all clients because some interest groups perform well.
- Aiming for demographics
Like interest groups, Facebook allows for targeting certain behaviors and job titles. These also generally perform worse than LAL audiences but should be tested.
Facebook generally charges a lower CPM for larger audiences. So running ads with no interest groups or lookalike audiences can drive down costs.
- No interest groups or lookalikes
For the same reason, running ads with broader age ranges and both genders can outperform ads that are more narrowly targeted.
Depending on the client, targeting newer vs. older hardware and software can boost performance, since apps differ in their technical requirements.
- OS version
- Device type (phone, tablet, iPod)
For clients that run in multiple countries, there are many ways to target locations, and performance varies by client.
- Worldwide with/without various country exclusions
- Individual countries
- Groups of countries
- Facebook country groupings
- Country tiers
Currently, Facebook can whitelist platform-specific targeting for mobile gaming clients on a case-by-case basis.
- Instagram only
- FAN only
- Messenger only
- Facebook Feed only
Sample language and country targeting test for a gaming client:
Writing Ad Copy for ROI
Ad copy has a significant performance impact. It is also a quick process, so the ROI from time spent is often very strong. Effective ad copy generally tells the story clearly and succinctly. But many different writing styles are effective. When writing ad copy, it is helpful to first outline the themes of ad copy you want to test. Then write variations for each theme with different writing styles. By testing a variety of themes, and then testing a variety of variations of writing styles for each theme, you can quickly learn which themes perform best before optimizing messaging within a theme.
To identify appropriate themes to test, we can look at the messaging in the conversion funnel (landing pages, app store pages) for inspiration. There are some themes that can generally be applied to all clients, and some themes that will be client-specific. Below is a list of themes that can be generally applied to most clients.
- How it works
- Why use this product?
- Why play this game?
- Special events
- Welcome bonuses
FUD: fear / uncertainty / doubt
- Do not miss out (FoMo)
- Do not take the risk
Specific to a video or image
- Not all ad copy is appropriate for all videos and images. Some images or videos will carry their own themes.
- Keywords from landing page headlines
- Keywords from the first 100 words of app store descriptions
- Words that appear on app store images or in-app store videos
- Relax (this is a common theme for anyone playing casual games)
- “Best” game ever (within the genre, i.e. “this is the best solitaire game ever!”)
- “Only 1% of users can beat this game”
- “Me vs. my grandma/mom/boyfriend/girlfriend/etc.”
- “Can you do better?”
- Quoting actual 5-star reviews and other positive feedback for the game
- Fake testimonials and/or fake quotes (if the client is ok with this)
- Leveraging popular and/or relevant app genre emojis
- “Stacking” long ad copy (multiple lines) while mixing in emojis and/or challenges, questions, benefits
For each theme, we should test multiple variations of ad copy, using various writing styles, and spinning out slight variations. The best writing style is dependent on the client/audience, so many styles should be tested for each client.
We can use one or many ad copy styles for each ad copy variation. These are common styles that perform well.
- Short copy (a few words up to a couple of sentences)
- Long copy (paragraphs)
- Ask questions
- Use emoji
- Use bullet points (with hyphens or emojis)
- Change 1 word
- Change the order of words/phrases
Next Level Custom and Lookalike Audience Creation
Building custom and lookalike audiences is a constant part of audience expansion. By reaching net new users through audience expansion, you can significantly improve CPA / ROAS; outside of creative testing, this is the most common way to do so.
There is no limit to the volume of custom and lookalike audiences you can create. You can create custom audiences based on different events like app starts, purchases, tutorial completions, revenue, etc. For example, for each event you can:
- Create custom audiences based on the top 1% of users, the top 10% of users, the top 25% of users, etc.
- Create different custom audiences for users in the past 7 days, past 30 days, past 60 days, etc.
- For each event and time range, target a different location like worldwide or the United States
- For each custom audience, create lookalikes that are top 1% affinity, top 2% affinity, top 3% affinity, etc.
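The combinations above multiply quickly. As a rough sketch (the event names, percentages, and windows below are hypothetical examples, not Facebook API values), enumerating the full matrix looks like this:

```python
from itertools import product

# Hypothetical inputs: events, top-user percentages, lookback windows,
# locations, and lookalike affinity percentages to combine into audiences.
events = ["app_start", "purchase", "tutorial_complete"]
top_user_pcts = [1, 10, 25]      # top % of users by the event's value
lookbacks_days = [7, 30, 60]     # recency windows
locations = ["worldwide", "US"]
affinities = [1, 2, 3]           # lookalike affinity %

def audience_matrix():
    """Enumerate every custom-audience / lookalike combination."""
    combos = []
    for event, pct, days, loc, aff in product(
        events, top_user_pcts, lookbacks_days, locations, affinities
    ):
        combos.append({
            "seed": f"top{pct}pct_{event}_last{days}d_{loc}",
            "lookalike": f"lal_{aff}pct",
        })
    return combos

combos = audience_matrix()
print(len(combos))  # 3 * 3 * 3 * 2 * 3 = 162 audience definitions
```

Even this small set of inputs yields 162 distinct audience definitions, which is why there is effectively no limit to the audiences you can create and test.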
We see strong performance when creating a highly diverse set of custom audiences and then targeting the top 1% – 3% affinity across the various audiences. For audiences with an overlap above 40%, it can be beneficial to group them into a single ad set, creating a “lookalike stack”. However, for accounts spending more than $300,000 per month, the concern with overlapping audiences seems lower. Lookalike audiences without high overlap should be tested both as individual audiences and as lookalike stacks.
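The 40% grouping rule can be sketched as a greedy pass over the audiences. This is a toy illustration with made-up audience names and user sets, and the overlap metric (share of the smaller audience that appears in the larger one) is an assumption modeled on how Facebook's audience-overlap tool reports percentages:

```python
def overlap(a: set, b: set) -> float:
    """Share of the smaller audience also present in the larger one."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def build_stacks(audiences: dict, threshold: float = 0.40) -> list:
    """Greedily group audiences whose pairwise overlap exceeds `threshold`
    into a single ad set (a "lookalike stack"); others stay standalone."""
    stacks = []
    for name, users in audiences.items():
        for stack in stacks:
            if any(overlap(users, audiences[m]) > threshold for m in stack):
                stack.append(name)
                break
        else:
            stacks.append([name])
    return stacks

# Hypothetical audiences represented as sets of user IDs:
audiences = {
    "lal_1pct_purchasers": {1, 2, 3, 4, 5},
    "lal_1pct_payers_30d": {1, 2, 3, 4, 9},   # 80% overlap with the first
    "lal_1pct_tutorial":   {20, 21, 22, 23},  # no overlap
}
print(build_stacks(audiences))
# → [['lal_1pct_purchasers', 'lal_1pct_payers_30d'], ['lal_1pct_tutorial']]
```

In practice you would pull overlap numbers from Facebook's audience-overlap tool rather than raw user sets, but the grouping logic is the same.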
As for location, worldwide audiences generally perform well whether we’re targeting individual countries, groups of countries, or worldwide. Country-specific audiences generally perform well for the countries they were built from, but not when targeting other countries or worldwide.
Value-based audiences work well (i.e. top 25% of purchasers), and you can manipulate purchase values to create different lookalike audiences. The theory is that Facebook has an easier time finding whales when there’s a greater variance between the highest-value and lowest-value purchasers. We manipulate audiences by increasing revenue for top payers, and by decreasing revenue values for low-value payers. Below is an example of 12 different custom audiences that were generated with various revenue increases and decreases for high-value and low-value payers.
Manipulated Audience Values:
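As an illustration of the value-manipulation idea (the multipliers and purchaser data below are hypothetical, not the values from the example above), widening the variance before uploading a seed audience might look like:

```python
def manipulate_values(purchasers: dict, top_pct: float = 0.25,
                      boost: float = 2.0, damp: float = 0.5) -> dict:
    """Widen the gap between high- and low-value payers before uploading a
    value-based seed audience. `boost`/`damp` are illustrative multipliers."""
    ranked = sorted(purchasers.items(), key=lambda kv: kv[1], reverse=True)
    cutoff = max(1, int(len(ranked) * top_pct))
    adjusted = {}
    for i, (user, revenue) in enumerate(ranked):
        factor = boost if i < cutoff else damp  # inflate whales, deflate minnows
        adjusted[user] = round(revenue * factor, 2)
    return adjusted

# Hypothetical lifetime revenue per purchaser:
purchasers = {"u1": 200.0, "u2": 50.0, "u3": 10.0, "u4": 5.0}
print(manipulate_values(purchasers))
# → {'u1': 400.0, 'u2': 25.0, 'u3': 5.0, 'u4': 2.5}
```

Varying `top_pct`, `boost`, and `damp` is one way to generate many distinct value-based seed audiences from the same purchaser data.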
You can also create additional types of audiences through Facebook Analytics. This can be useful for 1) new audiences to test that may not be available through other tools and 2) analytics to gain insights into FB-specific traffic quality.
Improving CPA and/or ROAS without reducing spend
The simplest way to improve CPA and/or ROAS is to reduce daily spend, as we generally see a correlation between lower daily spend and stronger CPA / ROAS. However, UA teams are generally tasked with improving CPA / ROAS without reducing spend, and the most common ways to do this are by producing new winners through creative testing, audience expansion, changes to targeting, and optimization techniques.
Testing entirely new concepts with a different look and feel can improve ROAS massively.
By reaching net new users from audience expansion, we can significantly improve performance. Outside of creative testing, this is the most common method of improving KPIs. In today’s market, lookalike audiences that are generated from custom audiences commonly outperform interest groups and usually broad targeting (IAP games), and there is no limit to the volume of lookalike audiences we can create.
Audience Expansion Through FB Analytics
In addition to leveraging Facebook Analytics to gather app insights, Facebook also allows for the creation of “non-standard” audiences through Facebook Analytics. One example of this can be seen through the creation of “rule-based” audiences. Rule-based audiences can be more defined than standard audiences due to the specific actions one can target on the FB Analytics platform. The following example shows data for iOS users who launched this app more than 20 times and made a purchase within the last 28 days.
Changes To Targeting
Changes to age, gender, location, placements, and devices can all have positive impacts on CPA / ROAS, and targeting tests should be run early so that future creative testing uses efficient targeting.
Campaign Structure and Optimization Techniques
Facebook has rolled out a variety of new products over the past couple of years, and their performance is often inconsistent. Due to this variance, these items should be tested periodically, but best practices should generally be used as a starting point.
AEO vs VO
App event optimization is used to optimize for the lowest cost per app event, where value optimization is used to optimize for the highest revenue per event. Value optimization generally carries both a higher CPA and a higher ROAS than app event optimization, as Facebook is effective at identifying “whales” (high-revenue purchasers) through value optimization targeting.
Currently, Facebook offers advertisers the option to optimize toward a 1 day or 7-day post-click conversion window for AEO and VO campaigns. Generally, 1-day conversion windows net out higher ROAS with higher costs, while 7-day conversion windows typically bring in lower ROAS with higher volume. However, it is important to test both options occasionally to validate account-specific performance.
Min ROAS (VO)
Available only for VO, Min ROAS bids allow the advertiser to “bid” a preferred ROAS percentage, according to the conversion window selected during ad set setup. Bidding here is slightly different because the advertiser is choosing a preferred return on spend instead of a “cost per X”. Generally, it is recommended to cast a wide net of bids on each conversion window to optimize toward a “sweet spot” of quality and volume.
Dynamic language optimization allows us to input ads for various languages in the same ad, and Facebook will dynamically serve ads with the appropriate language to the appropriate audience.
Campaign budget optimization allows us to use a single campaign budget that governs all ad sets within a campaign. When using a single CBO campaign budget, Facebook then adjusts budgets for each ad set and shifts more spend to ad sets with better performance. This is different from non-CBO where each individual ad set has its own budget that is managed separately.
Dynamic creative optimization allows us to input multiple variations of each creative element (video, text, headline, etc.) and Facebook will automatically randomize the creative combinations and begin serving more impressions to the creative combinations that perform best.
By pausing underperforming ads, we can shift spend to stronger ads and portfolio CPA / ROAS will generally improve. To offset volume losses by pausing underperforming ads, we need to either increase budgets for active ads or launch new ads.
By adjusting budgets to reduce spend from underperforming ads and shift spend to top-performing ads, we can balance a portfolio of ads to achieve better results. With each budget adjustment, a significant edit is triggered, and the learning phase is reset, so there is some risk to increasing budgets for top ad sets. Making smaller changes in absolute dollars and percentages will help limit the negative impact of triggering a significant edit.
For ads that are using manual bids, we can change bids to improve CPA / ROAS. Bids tend to alter volume more than CPA; for instance, a bid decrease from $12 to $10 may cause spend to fall by 50% where CPA only falls by 10%. We generally recommend running higher bids, to gain access to the highest-quality inventory, and then using budgets to control CPA / ROAS. With a high bid in place, we would expect lower budgets to carry better CPA / ROAS than higher budgets. It’s counter-intuitive, but low bids could reduce access to high-quality impressions and actually hurt CPA / ROAS.
Scaling Spend and Maintaining Performance at High Scale
There is a general rule that CPA will increase and/or ROAS will decrease as we scale spend on Facebook. This is true both for aggregate spend over time (audiences get saturated and creative fatigues as overall spend increases), and for scaling spend overnight (Facebook often reaches into lower-quality inventory to fulfill inventory for incremental budgets). We are able to gain efficiencies vs. the market by taking intelligent approaches to scale spend with the goal of protecting CPA / ROAS.
Scaling Spend Overnight
The two most common methods of scaling spend overnight are increasing budgets for existing ads and launching new ads.
Increasing Budgets for Existing Ads
When increasing budgets for existing ads, there are two primary reasons why CPA increases / ROAS decreases. The first is that anytime a budget is adjusted, a “significant edit” is triggered, which pushes the ad back into the “learning phase.” While an ad is in the learning phase, its backend history is reset, and the ad faces temporary volatility as that history builds back up. Because of this, our goal is to limit the frequency and volume of significant edits. The second is that Facebook will reach into lower-quality inventory to fulfill incremental budgets, so the quality of users generally drops as individual ad set or campaign budgets are increased. We generally see better performance when running a higher volume of ad sets at a lower average budget per ad set.
Optimizing for Significant Edits
While making significant edits can cause performance volatility, it’s a necessary part of scaling, and it’s OK to make significant edits as long as you limit their frequency and analyze the impact. We often need to decide whether to make a significant edit to a top ad vs. increasing volume by launching new ads, and the best approach varies by account. For instance, if a relatively low volume of ads is responsible for most of the account’s strong performance, we may decide not to edit those ads (to protect their performance) and instead focus on launching new ads to capture more volume.
Or, if there is a high volume of ads performing well, there’s less risk to portfolio performance by triggering a significant edit for a single ad, so we would be more willing to trigger a significant edit for a top ad since the risk is lower. When taking a significant edit, the best practice is to not change budgets or bids more than 30% at one time.
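The 30% rule can be wrapped in a small helper that moves a budget toward a target without ever exceeding the cap in a single edit. This is a sketch of the rule of thumb above, not a Facebook API call:

```python
def next_budget(current: float, target: float, max_step: float = 0.30) -> float:
    """Move a budget toward `target` without changing it by more than
    `max_step` (30% by default) in a single significant edit."""
    ceiling = current * (1 + max_step)
    floor = current * (1 - max_step)
    return round(min(max(target, floor), ceiling), 2)

print(next_budget(1000, 2000))  # 1300.0: capped at +30%
print(next_budget(1000, 400))   # 700.0: capped at -30%
print(next_budget(1000, 1100))  # 1100.0: within the 30% band
```

Reaching a distant target this way takes several edits over several days, which is exactly the point: each step stays small enough to limit learning-phase resets.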
Launching New Ads
New ads begin in the learning phase and carry a higher CPM than mature ads, so launching a high volume of new ads can hurt performance. Launching new ads generally carries more risk than editing existing ads, but the rewards can be much greater. For instance, if new ad launches are focused on expanding into new audiences and reaching net-new users, we may see CPA / ROAS improve in aggregate for new ads. Or, if new ad launches are focused on creative testing and we produce a winner, then CPA / ROAS may improve in aggregate for new ads. In general, new ad launches should be focused on doing something different: most commonly different audiences, different creative, different targeting, or different campaign structure / budget / bid strategies.
Maintaining performance at a high scale has different challenges than scaling spend overnight, but the optimization techniques are similar. Creative fatigue and audience saturation are the main drivers of performance degradation as advertiser spend increases, and these challenges are more pronounced at a high scale than when increasing spend overnight.
Users repeatedly see the same creative over time, and click-through rates and overall performance decline as the frequency of impressions per user increases. Creative fatigue occurs faster when spend is higher and when we have fewer high-performing creative assets. For instance, if we spend $1,000,000 with 3 high-performing video concepts for client A and $1,000,000 with 6 high-performing video concepts for client B, client A’s creative will fatigue roughly twice as fast as client B’s, because the impression volume for each of client A’s concepts will be double that of client B’s.
There is a benefit to having a higher volume of high-performing creative, and this means we need to increase the frequency and volume of creative testing as we increase spend. A client spending $1,000,000 per month needs roughly 10X the creative testing of a client at $100,000 per month to maintain the same rate of creative fatigue.
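The fatigue arithmetic behind the client A / client B example is simple to verify. The $10 CPM below is an assumed figure for illustration; only the ratio matters:

```python
def impressions_per_concept(spend: float, cpm: float, concepts: int) -> float:
    """Rough impressions served per creative concept: spend / CPM gives
    impressions in thousands, divided evenly across concepts."""
    return (spend / cpm) * 1000 / concepts

# Matching the example in the text, at a hypothetical $10 CPM:
client_a = impressions_per_concept(1_000_000, 10.0, 3)  # 3 concepts
client_b = impressions_per_concept(1_000_000, 10.0, 6)  # 6 concepts
print(client_a / client_b)  # 2.0: client A's concepts fatigue ~2x as fast
```

The CPM cancels out of the ratio, so the "twice as fast" conclusion holds regardless of the actual CPM, as long as spend is split evenly across concepts.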
Users are more likely to click when they see an ad for a product for the first time, so performance is strongest the first time we show ads to a new audience. As the frequency of impressions increases, users become less likely to click, and simultaneously the highest-value users are effectively removed from our audiences as they convert. So there are multiple reasons why performance drops as audience impression frequency increases.
We combat the performance degradation from audience saturation by continually testing new audiences designed to reach net-new users who have not seen our ads. Similar to the benefit of having a higher volume of strong-performing creative assets, we see a slower rate of fatigue for clients with a higher volume of high-performing audiences. With twice as many high-performing audiences, we would expect performance degradation to occur at roughly 50% the rate, depending on the amount of overlap between audiences.
Diagnosing Performance Fluctuations
With Facebook advertising, the only constant is change. Performance commonly fluctuates as creative fatigues, audiences saturate, marketplace conditions change, and Facebook updates algorithms. When we notice an account’s performance fluctuating, the next step is to determine why performance is fluctuating. While each performance fluctuation is unique, there are four common questions we can ask in the process of attempting to diagnose the cause of performance fluctuation:
What changes did we make that could have caused volatility?
The common changes that drive significant fluctuation are new ad launches and major shifts to traffic from pausing ads or adjusting budgets. To identify whether new ad builds are the cause of volatility, we can view ad build performance in advanced reporting and then filter out recent builds to determine whether performance would look “normal” if we ignored the recent launches. Beyond that, we can compare different date ranges in our reports for any object to identify major shifts in traffic allocation. For instance, we can compare video performance for yesterday vs. two days ago to quickly determine whether any major shifts in traffic distribution by video have occurred.
What changes occurred within traffic distribution?
If we can’t identify any major shifts in traffic that we caused, the next step is to determine whether any shifts were caused by changes outside of our control. For instance, our top ads may be disapproved and stop spending, which could reduce volume and hurt performance. The common shifts in contribution % that significantly impact performance are shifts by ad, by creative, and by audience. It’s helpful to use the date comparison function in advanced reporting to compare traffic contribution % over different time periods for ads, creative, audiences, and demographics.
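The contribution-% comparison can be automated on exported spend data. This is a sketch with made-up numbers; the 5-point flag threshold is an arbitrary assumption you would tune per account:

```python
def contribution_shift(spend_today: dict, spend_prior: dict,
                       min_pts: float = 5.0) -> dict:
    """Compare each object's share of total spend across two periods and
    flag shifts larger than `min_pts` percentage points."""
    def shares(spend):
        total = sum(spend.values()) or 1.0
        return {k: 100.0 * v / total for k, v in spend.items()}
    today, prior = shares(spend_today), shares(spend_prior)
    flags = {}
    for key in sorted(set(today) | set(prior)):
        delta = today.get(key, 0.0) - prior.get(key, 0.0)
        if abs(delta) >= min_pts:
            flags[key] = round(delta, 1)
    return flags

# Hypothetical spend by video for yesterday vs. two days ago:
yesterday = {"video_a": 6000, "video_b": 3000, "video_c": 1000}
two_days_ago = {"video_a": 4000, "video_b": 4000, "video_c": 2000}
print(contribution_shift(yesterday, two_days_ago))
# → {'video_a': 20.0, 'video_b': -10.0, 'video_c': -10.0}
```

Here video_a's share jumped 20 points overnight, which is exactly the kind of traffic redistribution worth investigating before blaming creative or the algorithm.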
Assuming we can’t point to any major shifts in traffic allocation within Facebook, were there any product or tracking changes on the client-side?
If we can’t find any major shifts in Facebook traffic distribution, and performance is fluctuating for all or most high-volume ads, then we need to understand whether anything changed with the client’s product (app or conversion funnel) or tracking. With apps, we can see the version history in the app store or on sites like App Annie and determine whether any releases correlate with the performance fluctuations. However, tracking changes can be made without an app store release, so releases are not a definitive answer; it’s appropriate to ask the client whether anything changed even if no releases correlate with the fluctuation.
For web clients, the first step is to visit the destination URL and look for any obvious changes to the site or to the Facebook pixel. The Facebook Pixel Helper Chrome extension is very useful here. If we notice app store releases or changes to a website, we should discuss them with the client and determine whether they caused the fluctuation. Comparing Facebook performance to performance across other high-volume traffic sources can help identify whether the issue is global and appears to be caused by a product change.
Assuming we can’t point to any changes on Facebook or the product, performance fluctuation may be caused by macro events that are outside of our control.
If we’re unable to identify shifts in Facebook traffic or a client’s product, then we may be dealing with macro events that are outside of our control. For instance, competition on Facebook may shift at the end of the month, quarter, and year. We also see major shifts in performance around national holidays and seasonal events, like summer when school is out and human behavior changes.
There are also events like the start of NFL season when daily fantasy sports companies disrupt the Facebook marketplace by increasing aggressiveness. We can also look across our portfolio to determine if the fluctuation is client-specific or global. If we see performance fluctuation with most / all clients during the same time frame, then we will have higher confidence that we’re not in control of the fluctuation.
Lastly, the release of competitor apps can also be a large factor in performance fluctuations. If we’ve completed a thorough analysis of Facebook changes and product changes and still can’t explain the fluctuation, then we need to work with clients to determine whether we should temporarily reduce spend until the fluctuation normalizes, or increase testing in an effort to produce a win that offsets it. In most cases, it makes sense to temporarily reduce spend, because new ads launched during volatile marketplaces often perform poorly, regardless of creative and audiences.
Final Thoughts for UA Masters
Artificial intelligence will play a key role (if not THE role) in the future of user acquisition. Even now, AI can run many key components of UA more effectively and more efficiently than humans can. We recommend UA managers plan to pivot into growth analytics, creative strategy, audience expansion, and A/B testing if they want to keep their jobs.
This can be an exciting shift into UA 2.0 if you’re agile enough to keep pace with all the changes.
It’s also an opportunity to elevate your level of advanced UA techniques to become UA masters.