Facebook’s Structure for Scale: How to Prepare for Automated Media Buying

It is time to rethink how we manage Facebook advertising campaigns, and Facebook has a new framework to do it with: Structure for Scale. Learn how to prepare for automated media buying within this new framework.

Structure for Scale is designed to optimize campaign setup and management in a post-AI advertising world. Basically, it’s a set of best practices advertisers should adopt if they want the Facebook advertising algorithm to operate at peak performance.

Best Practices for Automated Media Buying

  • Reducing the number of campaigns and ad sets to prioritize quality over quantity
  • Setting campaign budget optimization (CBO) so each ad set can achieve 50 unique conversions per week
  • Moving optimization events closer to the beginning of the sales funnel
  • Reducing changes so campaigns and ad sets spend less time in the learning phase and more time in optimized delivery
  • Increasing reach and minimizing audience overlap so the algorithm can more efficiently find conversions, and thus has enough data to further calibrate and optimize all other campaign settings
  • Bidding aggressively enough to maintain delivery
  • Using creative asset customization and automatic placements to let the algorithm figure out what works best

If you pay close attention to Facebook advertising and the announcements and best practices they publish, you’ll remember the Power5 recommendations they announced this summer. Structure for Scale has a lot of similarities to the Power5 best practices, but it is a further evolution of how to run Facebook ads right now.

It also appears to work. That might seem like an odd thing to say, but we sometimes hear advertisers express suspicion about the best practices Facebook recommends. There is a sentiment of “well, using those tactics may be best for Facebook, but it’s not necessarily best for me.”

Facebook is building up a library of case studies to answer the naysayers. You can review several dozen examples of how effective the Structure for Scale best practices are.

Here are just a few examples:

  • The National Holistic Institute applied Structure for Scale by consolidating their account’s ad sets and turning on campaign budget optimization and automatic placements. It got them a 77% decrease in cost per lead, 4.9 times more leads, and a 230% increase in school enrollment.
  • Bombas simplified its account structure and got a 2x increase in purchases.
  • Mino Games simplified its ad campaign structure and ran Facebook video ads with automatic placements. This increased installs and in-app actions, earning them a 90–95% higher average revenue per player on day 7.
  • Jet.com expanded its placements to Facebook, Instagram, and the Audience Network. That got them a 334% lower cost per purchase and an 86% lower cost for traffic on the Audience Network. 

 

The Key Principles Behind Structure for Scale

So the benefits of Structure for Scale are clear. Here’s how to apply them to your own account or to your clients’ accounts. 

 

Simplify Your Campaign Structure

This is a recommended best practice you may have heard of before, but it’s at the crux of this new approach to Facebook ads. 

Basically, Facebook wants you to KonMari your campaigns and ad sets. To go from this:

[Image: a complex account with many campaigns and ad sets]

To this:

[Image: a simplified account with consolidated ad sets]

Why? Because it gives the algorithm more room to learn and efficiently optimize.

To work well, the Facebook advertising algorithm needs the correct amount of data to crunch. This is core to how machine learning works, and it’s a concept you must grok if you want to advertise successfully now and in the future.

Before we had an algorithm to manage campaigns, it made sense for human ad managers to create many (sometimes hundreds) of campaigns to control all sorts of variables – bids, targeting, audiences, placements. This lets us test settings, isolate audiences, and manage many other levers of campaign management. 

But now that the algorithm manages so many of those things, creating lots of campaigns just gives the algorithm less data to work with. And with less data, it doesn’t work as well.

To understand how the algorithm works, it helps to know which data signals it works from. Here are just a few of the signals used for the kinds of goals an eCommerce site might optimize for:

  • Ad engagement or video viewed
  • Add to cart actions 
  • First day on brand site before purchase
  • Purchase device
  • Number of pages viewed each day on brand site
  • Number of days on brand site before purchase

That’s some very detailed information. When you create a lot of campaigns, you fragment that data, limiting how much the algorithm can use within each campaign to optimize your ads. As a result, campaign performance suffers.

Facebook is calling for advertisers to increase “liquidity (flexibility) by removing constraints” on their campaigns. One way to remove constraints is to open up your account structure by minimizing how many campaigns and ad sets you have. Another way to improve “flexibility” is to use Campaign Budget Optimization to free up how your ad budget is spent. And yet another way to give the algorithm the flexibility it needs is to allow it to pick ad placements so it can test your ads across the 14 different placements available.

This means, of course, you will be giving up lots of control. Some advertisers do not like that. But here’s how Facebook views this “control” versus “algorithmic optimization” quandary:

[Image: Facebook chart comparing conversions under more control vs. more algorithmic optimization]

Which would you want? More conversions, or more control? Personally, if I could be assured the value of those conversions is as good as what I was getting before, I’d go with more conversions.

But what happens if you can’t cede control? Your ads will stay in what’s called “the Learning Phase” or “Ad Set Calibration” longer because the algorithm can’t crunch data in the way it was built to do.

Staying in the Learning Phase any longer than necessary is not a good thing. It suppresses return on ad spend by anywhere from 20-40%. To shift out of it, each ad set needs to generate approximately 50 unique conversions per week, and to stay out of it, advertisers need to avoid making any “significant edits.”

So as we walk through each aspect of Structure for Scale, understand that the overarching principle is to remove limitations on the algorithm in order to gain conversions. That means setting campaigns up in a way that allows the algorithm to do its work, and minimizing tinkering with the system once it’s running, lest you trigger any “significant edits” that force the algorithm to recalibrate again.

Ultimately, Facebook just wants you to adjust a few key levers so the algorithm can do its job.

[Image: the key levers of automated media buying]

 

Lever #1: Increase Audience Size

Facebook recommends four ways to do this:

  • Increase retargeting windows. Many advertisers have tight windows, like one to seven days. That only works if you have substantial website traffic. So, to expand your audience and give the algorithm more room to work, try increasing your retargeting window and see if you don’t get better results.
  • Merge your Lookalike audiences into larger groups. Facebook recommends 0-1%, 1-2%, 3-5%, and 5-10%.
  • Group interest and behavior targets with high overlap together. Just make sure the creative strategy is the same for the groups you’re merging.
  • Minimize the audience overlap. Get smart about audience exclusions, including screening out past purchasers. 

 

Lever #2: Combine Placements

Facebook has 14 ad placements across its family of apps right now. Trying to manage that is complex, to say the least. It’s really better to let the algorithm manage placements, especially when you add in other performance factors like ad sizes. So select Automatic Placements and turn your focus to other things, like doing competitive analysis or enhancing your creative strategy.

Still don’t want to give up control? Then consider this: Facebook says shifting to automatic placements reduces the cost per conversion by 71%. That’s an awful lot of freed-up ad budget. And if you just really must have control over where certain ads appear, use “Asset customization.” It will let you choose which images or videos people see in your ads depending on where your ads appear. 

[Image: combined placements across Facebook’s family of apps]

 

Lever #3: Increase Budget Liquidity

Here are a few ways to give the power of the purse strings to the algorithm. Because even if you minimize control on every other aspect of campaign management, without freed-up budgets, the party can’t really get started. 

  • Increase budget-to-bid ratio. Calculate your daily budget based on the 50-conversions-per-week threshold needed to get out of the Learning Phase. So if you expect to pay $1 per conversion, set your budget for at least $50 a week (and probably about 10-15% higher, because bids usually run that much above what you’ll actually pay).
  • Use Campaign Budget Optimization. As Facebook says, “Free the budget!”
  • Test creative at the ad level. Don’t create multiple ad sets, each with one piece of creative, and then run the ad sets against each other. Put the different pieces of creative all within one ad set.
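As a back-of-the-envelope check on the budget math above, here is a minimal sketch (the function name and the 10-15% headroom default are this article’s rule of thumb, not a Facebook formula):

```python
def weekly_test_budget(target_cpa: float,
                       conversions_needed: int = 50,
                       bid_headroom: float = 0.15) -> float:
    """Minimum weekly budget to clear the ~50-conversion Learning Phase threshold.

    bid_headroom pads the budget by 10-15% because bids usually run
    higher than the price you actually end up paying in the auction.
    """
    return target_cpa * conversions_needed * (1 + bid_headroom)

budget = weekly_test_budget(1.00)  # $1 target cost per conversion
print(f"${budget:.2f}/week, ${budget / 7:.2f}/day")
```

For a $1 target CPA this works out to roughly $57.50 a week, or about $8.21 a day per ad set.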

 

Lever #4: Bid High Enough, and With the Right Strategy

The days of human-managed bidding are over. It’s vastly better to let machines manage the complexities of changing bids. But choosing a bid strategy is still best left to humans… if humans take the time to think about it carefully.

Facebook breaks out three types of bid strategy: 

  • Lowest cost. Want maximum conversions from your budget? Then pick the lowest cost. Use this especially if you don’t know lifetime value.
  • Target cost. This aims for a consistent average cost per result. If you want consistency, go with target cost.
  • Lowest cost with a bid cap. Got a broad audience that’s not as likely to convert? This is a good option to control costs. Or, if you are willing to spend up to a certain amount for a given conversion, but no more… then choose the lowest cost with a bid cap.

Two extra pieces of advice:

  • If at all possible, bid according to lifetime value. It makes no sense to bid like all your prospects are created equal. They aren’t.
  • If you are using a bid cap, make sure that the cap is high enough. Otherwise, you’ll be severely limiting how often the ad gets shown, and thus you’ll be limiting the amount of data Facebook gets to work with, and… you know the rest. 
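The decision rules above can be sketched as a tiny helper (the strategy names are Facebook’s; the branching logic is just this article’s rule of thumb, not an official decision tree):

```python
def choose_bid_strategy(want_consistent_cost: bool,
                        have_max_price: bool) -> str:
    """Map the article's rules of thumb to Facebook's three bid strategies."""
    if have_max_price:
        # Willing to spend up to a ceiling per conversion, but no more
        return "lowest cost with bid cap"
    if want_consistent_cost:
        # Holds a stable average cost per result
        return "target cost"
    # Default: maximum conversions from budget (good if LTV is unknown)
    return "lowest cost"

print(choose_bid_strategy(want_consistent_cost=False, have_max_price=False))
# -> lowest cost
```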

 

Lever #5: Optimize for the right event

For niche or higher-value products with fewer purchases, try moving your conversion event up toward the beginning of the funnel. This usually gives the algorithm more data points to work with, because each step of the funnel closer to the purchase has fewer conversion events.

Here’s how this can work: When we first launch a game/app, we’ll optimize for App Installs. These clearly aren’t as valuable as purchases, but if we haven’t generated enough purchases (or aren’t generating enough purchases), we can optimize the overall system much faster by focusing on App Installs. As soon as we’ve got enough App Installs to move down the funnel, we’ll optimize for App Events like purchases.

Facebook Advertising: Past, Present, and Future

Facebook and Google have made major strides in the last year toward simplifying and streamlining their ad platforms. This is overwhelmingly a good thing because it:

  • Allows more people to effectively use the platforms, regardless of their advertising skills
  • Saves ad managers’ valuable and very limited time
  • Gets better results

However… if you’re an experienced, proactive, and performance-driven advertiser, letting go of all that control is tough. It means completely rethinking how you advertise. It may also mean updating your skills in automated media buying, because the algorithm can now do many Facebook ad management tasks better than people can. We recommend shifting your time to creative strategy and competitive analysis.


5 Facebook Best Practices For A|B Testing New Ad Creative

Video ads work. Really well. We have been watching them outperform still image ads for years now. If you have been holding back from using more video ads because they are expensive, stop. You might be reducing costs by avoiding video ads, but you are also giving up their upside, which is significantly better performance. The performance improvements from video ads are so significant that if you are not running them, you are losing money.

Or maybe you have been avoiding video ads because they are hard to create. There are solutions to that problem. You can either outsource video production, or you can make it easier to create video ads by leveraging Facebook’s Video Creation Kit, or by using some of our tricks for creating video ads from still images.

But maybe you know how to do all of that. Maybe you’re running several video ads right now, and you’ve already got a nice system set up to generate and test new video ads every week.

Great start. But you’re not done. Because no matter how much testing you’re doing, we recommend you do more. 

A lot more.


Here’s why: From what we’ve seen after working with nearly a thousand companies and managing more than $3 billion worth of ad spend (much of it through video ads), we believe creative testing is the highest-return activity in user acquisition. It beats campaign structure, audience expansion, and ad copy testing – all of it.

Simply put, creative testing separates the winners and the losers in UA management right now.

So with creative testing being that important and that powerful… how do you do it with video ads? And if you’re already testing a lot, how can you do it better?

Here’s how:

 

1. Use Quantitative Creative Testing

If you’ve been doing standard A/B split-testing up until now, Quantitative Creative Testing may blow your mind. It’s something we designed expressly for the performance UA ad testing environment. It gets around many of the problems with traditional A/B split-testing, like the expense and time required to reach statistical relevance.

Quantitative Creative Testing operates on the principle of earthquakes over tremors. It works because most ads don’t perform. Maybe some ads do okay, but they don’t perform like breakout, 100x ads. And it’s the 100x ads you need if you want to compete today.

There’s an illustration below of what earthquakes look like in ad performance. We allocate ad spend based on an ad’s performance. So when you look at the chart below, you’re looking at the distribution of ad spend for over 520 winning ads. That slim orange column on the far right of the chart represents the tiny handful of ads that performed well enough to eat 80% of the budget. It’s that tiny fraction of all the ads we tested that Quantitative Creative Testing is designed to find.

[Image: distribution of ad spend across tested ads]

This is what we mean when we say we’re looking for earthquakes, not tremors. We do not focus on ads that perform 5% or 10% better than the portfolio. We do not care about the middle of that chart. We’re looking for new winners – super-high performing ads that are good enough to replace the current winning ad before creative fatigue sets in. Typically, we have to test twenty ads to find one that’s good enough to beat the old control. Having run and tested well over 300,000 videos, on average we see a 5% success rate in finding new winning ads.

Quantitative Creative Testing uses two types of creative: “Concepts” and “Variations.”

Concepts are completely new, out-of-the-box creative approaches. They usually fail, but when they win, they tend to win big. About 20% of the ads we test are Concepts. Variations are small tweaks made to Concepts. We take the big ideas and modify them just a bit to see if we can make them work better. 80% of what we test are variations.

We test a lot of Concepts quickly, accruing just enough exposure for these ads to know whether they could be big winners or not. Often, these ads are only 80% brand-compliant. We like to play a bit fast and loose with these early ads, both in terms of statistics and branding, so we can find the big winners fast. This saves time, saves a ton of ad spend, and gets us just “good enough” results to go into the next phase.

We’ll take the winning ads and – if they’re not brand-compliant – tweak them ever so slightly so they meet brand requirements, but still perform well. We’ll also take the ads that performed almost well enough to be winners and retool them a bit to see if we can’t get them to do better. We take the winners from the first round and start running them against our current best-performing ads.

This keeps us several steps ahead of creative fatigue, and – combined with careful audience targeting – lets us extend the life of our best creative. It is a constant process. We don’t just find a new winner and then twiddle our thumbs until its performance starts to drop a week or two later. We have a constant pipeline of creative being developed and tested.

 

2. Create a System for Creative Refreshes

Staying ahead of creative fatigue is challenging. The better your ads perform, the more likely you are to want to pour more ad budget onto them. And the more ad budget you spend on them, the faster they fatigue.

There are several ways to beat fatigue (we just mentioned our favorite, Quantitative Creative Testing). But one other way to extend the life of creative is to develop a system for Creative Refreshes.

We developed a Creative Refresh strategy for Solitaire Tri Peaks that leveraged our Creative Studio, competitive research, and gaming design best practices to deliver five new high-performance videos every week. This allowed Tri Peaks to get much more ROI out of their existing creative assets, all while maintaining (and even improving) ad performance and ROAS.

There are three flavors of creative refreshes: Iterations, Revisions, and Simple Changes. Each of these approaches uses the original ad much like a template but switches out one or two key aspects. After testing thousands of ads like this, we’ve figured out which elements of an ad tend to improve performance the most, so we know which variables to play with.

Iterations

Add or remove one element from the original high-performing ad. In the ad examples below, we’ve iterated the original “New Concept” by adding a second, smaller brown cat to the iterated ad.

Revisions

Revisions give us room to make several changes to the template. Typical tweaks include resizing one or more elements, changing the header, and/or swapping out the music.

Simple Changes

Typically have one significant change, like making a localized version of the ad, changing the CTA, adding or swapping Start and End Cards, or changing the text in some way. In the examples below, the Simple Change was just to switch the ad’s language from English to German.

These three refresh types are our current approach to a la carte testing. It’s a process we are constantly testing and changing. Even three months from now, we’ll probably be using it a bit differently.

 

3. Give the Algorithms What They Want

If you’re a creative who hasn’t kept up with how the advertising algorithms at Facebook and Google work now, you need to fix that. Like yesterday. Because any UA team that wants to succeed now needs to let the algorithms figure out ad placements. The machines do this work more efficiently than humans can, and not handing over this part of campaign management in Q4 2019 is going to hurt campaign performance.

This full shift over to algorithm-managed bids, placements, and audience selection actually has big consequences for how we design video ads. We’re now basically just telling the ad platforms what we want and then sitting back and letting them deliver those requests, so we need to give the algorithms enough raw creative assets to do their jobs.


At the very least, create a version of each ad for each of the viewing ratios shown below.

[Image: the viewing ratios to create for each ad]

You have to do this because humans aren’t controlling ad placements anymore. The algorithms do that now. If you give the algorithms only one video ratio, they’ll be severely limited as to where they can show your ads. You don’t want that – it will cripple your ads’ performance.

If that’s more work than you want to do, try this trick: Don’t make ads in all three ratios at first. Instead, run the first versions of all your new ads at the image ratio used in the newsfeed: 16:9. This will allow you to do a quick but effective test in a controlled “apples to apples” way. Then take the stand-out ads from that first round of testing and make them into the other two ratios. This saves a ton of work, as you only have to make three versions of winning ads, not three versions of every single ad you test.

By the way… just making your winning ads into three versions with three different ratios isn’t enough. At least not according to Google. They recommend you also give their platform three different lengths for each of your videos: 10 seconds, 15 seconds, and 30 seconds. Depending on where the algorithm shows your ad, those different lengths could result in very different performance metrics. And again, because the algorithm – not a human – is the entity figuring out optimal placement and audience combinations, we need to give it every opportunity to find the sweet spots.

If just the idea of making nine different versions of every winning video makes you feel tired (three ratios times three lengths), no worries. Our Creative Studio can make those versions for you.

 

4. Do Regular Competitive Analyses of Your Competitors’ Ads

Now that the advertising algorithms have taken some work off your plate, we highly recommend you spend more time doing competitive analysis. And do it in a systematic, documented way. We recommend using SensorTower’s share of voice feature to help you uncover the advertisers and creative that matter the most. It can also give you hundreds of ideas for your own ads.

[Image: SensorTower share of voice report]

Here are a few possible columns for a competitive analysis spreadsheet:

  • Advertiser
  • Title 
  • Date
  • Platform/Network
  • Length in seconds
  • Call to action
  • Main Colors
  • Mood
  • Text density (the amount and size of copy)
  • Which words they emphasize
  • Screenshot or recorded video
  • Logo placement
  • Similar to other ads they’ve run… how?
  • What to test in your own ads based on this ad
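If you prefer to keep this log in code rather than a spreadsheet, one row might look like the sketch below (all field names are illustrative, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class CompetitorAd:
    """One row of a competitive-analysis log (illustrative fields)."""
    advertiser: str
    title: str
    date: str                      # e.g. "2019-11-04"
    platform: str                  # Facebook, Instagram, Audience Network...
    length_sec: int
    call_to_action: str
    main_colors: list = field(default_factory=list)
    mood: str = ""
    text_density: str = ""         # amount and size of on-screen copy
    emphasized_words: list = field(default_factory=list)
    logo_placement: str = ""
    what_to_test: str = ""         # ideas for your own ads based on this one

ad = CompetitorAd("AcmeGames", "Spring Promo", "2019-11-04",
                  "Facebook", 15, "Play Now")
print(ad.advertiser, ad.length_sec)
```

Structured rows like this make it easy to sort and filter by length, platform, or CTA once the log grows past a few dozen ads.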

As you review your competitors’ ads, also look for:

  • Storytelling tactics
  • Messaging
  • Which emotions they’re trying to evoke from the viewer
  • How the product is used in the ad
  • Colors and fonts

There’s a real art to doing competitive analysis. To do it well requires someone with both a creative and analytical mindset. So if you’ve been worried about keeping your skills current through the massive changes going on in UA, learning how to do great competitive analyses would be a really smart way to keep your skills up.

 

5. Test Ad Features

There’s an old concept in testing that describes testing different levels of things as testing “leaves” versus “trees” versus “forests.”

For instance, you can test call to action button colors. That’s an easy, simple, and very popular thing to test.

But it’s only testing leaves. It’s not going to get you very far. Sure, button color can get you a lift. A little lift. But most testing experts will roll their eyes at button color tests and beg you to think bigger. They’d want you to test “trees” and “forests,” not tiny, somewhat insignificant things like button colors.

Start and end cards are a great example of this. These two frames – at the beginning and the end of ads – can make a major difference in performance. They help lure viewers in, and they can leave viewers with a strong call to action at the end of the ad.

But if you’re caught up in testing button colors, you aren’t going to have time to test big things like Start and End Cards. Or any of the other powerful ad features Google and Facebook are rolling out practically every week.

So be choosy about what you test. Test to find earthquake-level improvements whenever you can, not just tiny improvements.

This approach is, of course, very similar to Quantitative Creative Testing and the idea of concepts versus variations. But it refers more to the structure and features of the ad vehicles themselves. Google and Facebook are giving us some really cool toys to play with these days. Go try them out… even if it means you can’t test ten different shades of red on a button.

 

BONUS: Creative Testing Best Practices

Once you have creative ready to test, we recommend the following Creative Testing Best Practices.  While you can test all your ads to statistical significance (StatSig), we don’t recommend this approach. StatSig is an extremely expensive and slow process. Instead, we recommend testing as follows: 

  • Test radically different concepts and only modify winning concepts.
  • Restrict Targeting: US, Facebook Newsfeed, iOS, or Android. The Newsfeed, Instagram, and the Audience Network all have wildly different performance KPIs. We recommend isolating your testing to reduce variables.
  • Target 100 Installs. For most games/apps, getting 100 installs is enough to get a read on IPM (installs per thousand impressions), so use this as your initial KPI to determine “potential winners.”
  • Always run a Facebook Split Test (3-5 videos + Winning Video).
  • For the three to five concepts you should test at a time, the goal is to kill the losers quickly and inexpensively. You will get both fast negatives and false positives on the margins, but you’ll know an earthquake when you see it – 100X ads stand out. The ads that do well, but don’t blow the doors off, are what we call “potential winners.”
  • ROAS Test: Take your potential winners and drop them into ad sets with your winning creative. Then let the gladiator battle begin. If the new video gets a majority of impressions, Facebook has named it the new winner. If it gets some impressions but it stops delivering quickly, we recommend re-testing/enhancing creative if it is within 10% of the winning video’s performance.
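To make the triage above concrete, here is a rough sketch of an IPM read (the function names and the “earthquake”/“potential winner” thresholds are illustrative, not Facebook metrics):

```python
def ipm(installs: int, impressions: int) -> float:
    """Installs per thousand impressions."""
    return installs / impressions * 1000

def quick_read(installs: int, impressions: int,
               control_ipm: float, min_installs: int = 100) -> str:
    """Rough triage of a test ad against the current winning creative.

    Thresholds are illustrative: far above control reads as an
    'earthquake', near control as a 'potential winner', well below
    as a 'kill' -- lose the losers quickly and inexpensively.
    """
    if installs < min_installs:
        return "keep testing"            # not enough data for an IPM read
    ratio = ipm(installs, impressions) / control_ipm
    if ratio >= 2.0:
        return "earthquake"
    if ratio >= 0.9:
        return "potential winner"        # promote to a ROAS test
    return "kill"

print(quick_read(120, 10_000, control_ipm=10.0))  # IPM 12 vs. control 10
```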

[Image: creative testing process]

 

Testing New Ad Creative Conclusion

The smartest performance advertisers have always known creative testing is where the bulk of performance gains are to be found. It was true back when Claude Hopkins published Scientific Advertising in 1923, and it’s true now.

If you really want results, test. Then test again. Then test more. Test strategically and repeatedly. Even in late 2019, the vast majority of advertisers aren’t doing anywhere near enough creative testing.

Fortunately, that is really good news for you.

 

 

2020 Playbook: Soft Launch To Worldwide Launch Strategy for New Games Using Facebook Ads

If you have a new mobile game and you want to bring it to a worldwide market, you will need a new launch strategy with effective user acquisition at its core.

Launching a game is more than getting it into the stores and managing user feedback in the forums. Success largely hinges on attracting and retaining the right users for your game in each market.

Here, we’re focusing on how Facebook can help you achieve this goal. What’s particularly challenging is that user acquisition advertising evolves fast. What worked for Facebook user acquisition advertising in 2019 won’t work nearly as well in 2020, and here, we’re going to help you focus on the future.

 

Worldwide Launch Strategy in 2020

Facebook pivoted toward algorithm-driven advertising last February and never looked back. Their new requirement for Campaign Budget Optimization is more evidence of the algorithm taking over, and Facebook’s Power5 recommendations for advertising best practices just drove the point home further. Both Facebook and Google have simplified and automated a lot of the levers user acquisition managers used to rely on. That trend will continue and accelerate in 2020. Now is the time for you to prepare your accounts for that acceleration by structuring them for scale with the best practices and new launch strategy we will describe here.

The big takeaway of Facebook UA advertising in 2019 is that it’s best to let the Facebook algorithm do what it does best: automated bid and budget management and automatic ad placements. Let humans do things the algorithm can’t do well (yet), like optimizing creative strategy.

If you want to take your game from soft launch to worldwide launch strategy using Facebook’s 2020 best practices, we have created this three-part series with our recommendations. 

These are the launch strategy best practices we have developed by working with hundreds of clients, profitably managing over $1.5 billion in social ad spend. Based on an awful lot of trial and error (and quite a lot of successes, too), this is how to get the best return on ad spend possible and to launch your app efficiently.

It breaks out into three phases:

  1. Early Creative Testing in the Soft Launch
  2. Taking Your Campaign to the Next Level in the Worldwide Launch Strategy
  3. Scaling Worldwide Through Optimizations

 

Phase One: Early Creative Testing in the Soft Launch Strategy

We have been doing well with “soft” launches. They give us a great opportunity to pre-test creative, test campaign structures, identify audiences, and evaluate our client’s monetization strategy and LTV model. By the time we’re ready for the worldwide launch, we’ll have found several winning creatives and have a strong sense of the KPIs necessary to achieve and sustain profitable UA at scale.

Soft launches tend to work best if we focus on a limited international market. Usually, we will pre-launch in an English-speaking country outside of the US and Europe; Canada, New Zealand, and Australia are ideal picks for this. Choosing countries like these lets us conduct testing in markets that are representative of the US, but without touching the US market. Because we are not launching in the US, it will not spoil our chances of being featured by Apple or Google.

Once we’ve got the market selected, we pivot to:

 

Identifying the most efficient campaign setup

A simplified account structure, rooted in auction and delivery best practices, will enable you to efficiently scale across the Facebook family of apps. We typically hear things like “My performance is extremely volatile.” “My ad sets are under-delivering.” “My CPAs were too high, so I turned off my campaign.” “I’ve heard I need to use super-granular targeting and placements to find pockets of efficiency.” The best way to avoid these problems is to structure your account for scale based on Facebook’s best practices. Facebook defined those best practices in its Power5 recommendations earlier this year. But – in evidence of how rapidly the platform evolves – they recently fine-tuned them again in the Structure for Scale methodology.

Structure for Scale

The gist of Structure for Scale – and of what Facebook wants advertisers to do now – is to radically simplify campaign structures, minimize the amount of creative you’re testing, and use targeting options like Value Bidding and App Event Bidding to control bids, placements, and audience selection for you. Facebook is building up a considerable body of evidence that this approach results in significant campaign performance improvements, though if you’re a UA manager who likes control, it can be an adjustment.

Complement the Algorithm

The underlying driver of all these new recommendations from Facebook is that we need to build and manage our campaigns to complement the algorithm – not fight it. One of the key benefits of adopting the new best practices is minimizing Facebook’s Learning Phase. Ad sets in the learning phase are not yet delivering efficiently, and often underperform by as much as 20-40%. To minimize this, structure your account to give the algorithm the “maximum signal” it needs to get you out of the Learning Phase faster.

Results During the Learning Phase

Expect somewhat volatile results during this exploration period (aka the Learning Phase) as the system calibrates to deliver the best results for your desired outcome. Generally, the more conversions the system has, the more accurate Facebook’s estimated action rates will be.  At around 50 conversions per week, the system is well-calibrated. It will shift from the exploration stage to optimizing for the best results given the audience and the optimization goals you’ve set.

Through all of this, keep in mind that Facebook has built its prediction system to use as much data as possible. When it predicts the conversion rate for an ad, it takes into consideration the ad's history, the campaign's history, the account's history, and the user's history.

When the system says that an ad is in the Learning Phase, it's only a warning that the ad has not yet had enough conversions for the algorithm to be confident that its predictions are as good as they will be later. The standard threshold for confidence is 50 conversions, but having 51 conversions is not that much different from having 49. The more conversions you give the system, the better its predictions will be.

While it is best practice to let the algorithm manage placements and bids, we do still have quite a lot of levers of control over specific parts of campaign management.

 

Lever #1: Increase Audience Size

 

  • Increase retargeting windows beyond 1 day, 3 days, or 7 days and make sure retargeting increments align with website traffic volume.
  • Bucket Lookalike audiences into larger groups. For example: 0-1%, 1-2%, 2-5%, 5-10%.
  • Group interest and behavior targets that have high overlap together, but make sure your creative strategy is the same for each segment.
  • Minimize the audience overlap. Use proper audience exclusions and make sure you are excluding past purchasers.
  • Increasing audience size can help us gather more data and prevent inefficiencies caused by targeting the same audience across multiple ad sets.
  • Exclude past purchasers and website traffic from prospecting campaigns. This allows us to better track KPIs and to ensure we reach new users, not those who have recently purchased and are no longer in the market for your app or offer.
  • Structure initial launch audiences for maximum performance. Here’s an example of how we do it:

Launch Strategy App Installs

  • Once you have about 10,000 installs you can move to AEO (App Event Optimization for purchases). Then your audience structure can shift to something more like this:

AEO Purchase

  • And once you have about 1,000 purchases, you can move to Value Bidding and reselect your audiences again:

Value Optimization

Lever #2: Combine Placements: Select automatic placements for better results.

 

  • The more placements your ads appear in, the more opportunities you have to reach or convert someone, and the better your results can be. And you won't get penalized for letting the algorithm test new placements: after the Learning Phase, it simply stops showing your ads where they don't perform. It can do the placement testing for you.
  • Facebook’s system of Discount Bidding (also known as “Best Response Bidding”) will always try to find the lowest-cost results based on a campaign’s objective and within the audience constraints set by the advertiser. But if you’re willing to widen the delivery pool by including additional placements, you’re giving the algorithm more to work with. That gives it a better shot at finding lower-cost results and delivering more results for the same budget.

 

Asset customization gives you control over placements

 

  • The ad sizes and ratios you use, of course, determine which placements those ads can appear in. So you’ll want to choose the images or videos people see in your ads based on where those ads may appear.
  • If you elect to manually select placements, use asset customization. It will let you specify what ads are shown for specific placements to ensure your ad displays the way you want. Asset customization also allows organizations to easily choose the ideal image or video for some placements within one ad set. If you have a content strategy that requires specific assets to appear in specific placements, this option is your best bet.

 

Lever #3: Increase Budget Liquidity

 

  • Increase your campaigns’ budget-to-bid ratios. Calculate daily budgets based on the cost to achieve Facebook’s 50 conversions per week threshold.
  • Use Campaign Budget Optimization. Our current best practice for CBO is to separate prospecting, retargeting, and retention into separate campaigns. Otherwise, CBO will push the budget toward retargeting and retention. Focus on a split of roughly 70% prospecting, 20% retargeting, and 10% loyalty (loyalty is optional). Segment your budget at this high level, then let CBO do the work within those objectives.
  • Test creative at the ad level instead of creating separate ad sets for individual creative assets. Some clients will have one ad set for each piece of creative they want to test. This isn’t the best practice because they are likely targeting the same audience within each ad set (which creates 100% audience overlap) and each ad set only has one ad. Instead, set up multiple ads with different creatives in a single ad set. It’s a fast, streamlined way to test how multiple creatives will perform. 
  • Use Placement Asset Customization. This is the setting to use if you want to build complementary messages across platforms and benefit from utilizing placement optimization, but you want to be able to specify which creative asset is used for each platform or placement type.
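To make the budget arithmetic in the bullets above concrete, here's a minimal Python sketch. The 50-conversions-per-week threshold and the 70/20/10 split come from the recommendations themselves; the function names and the example CPA are our own illustrative assumptions:

```python
# Sketch: sizing daily budgets around the 50-conversions-per-week
# learning-phase threshold, and splitting a daily budget 70/20/10
# across prospecting, retargeting, and loyalty campaigns.

WEEKLY_CONVERSION_TARGET = 50  # rule of thumb per ad set

def min_daily_budget(expected_cpa: float) -> float:
    """Daily budget needed for one ad set to reach ~50 conversions/week."""
    return expected_cpa * WEEKLY_CONVERSION_TARGET / 7

def split_budget(total_daily: float) -> dict:
    """Rough 70/20/10 prospecting/retargeting/loyalty split."""
    return {
        "prospecting": round(total_daily * 0.70, 2),
        "retargeting": round(total_daily * 0.20, 2),
        "loyalty": round(total_daily * 0.10, 2),
    }

if __name__ == "__main__":
    cpa = 12.50  # hypothetical cost per purchase
    print(f"Min daily budget per ad set: ${min_daily_budget(cpa):.2f}")  # $89.29
    print(split_budget(1000))
```

If the implied minimum budget is more than you can spend, that's a signal to consolidate ad sets rather than starve several of them below the threshold.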

 

Lever #4: Bid smarter.

Never underestimate the power of choosing the right bid strategy. Make your pick carefully (and test it) based on your campaigns’ goals and cost requirements. Whatever bid strategy you pick is basically giving the Facebook algorithm instructions on how it should go about reaching your business goals. Here are a few things to consider:

  • Lowest Cost: Directs the Facebook algorithm to bid so you achieve maximum results for your budget. Use the Lowest Cost when:
    • You value the volume of conversions over a strict efficiency goal.
    • You have certain audiences you just want to get in front of, and the conversion rate is high enough to justify the spend.
    • You’re unsure of the LTV of a conversion.
    • You’re already using the lowest cost bidding and are satisfied with the cost per result.
  • Target Cost: This aims to achieve a cost per result on average. So even if cheaper conversions exist, Facebook will optimize for the specified cost per result. Use it when:
    • You want a volume of results at a specific cost per result on average, and you want consistency at this cost.
    • You’re willing to sacrifice some efficiency for consistency.
  • Lowest Cost with Bid Cap: Sets a limit on how high Facebook will bid for an incremental conversion. Use it when:
    • You know the maximum amount you can bid per incremental result, and any incremental conversion above this value would be unprofitable and unwanted.
    • You’re targeting a broader audience with a lower likelihood to convert, so you want to appropriately manage costs.
    • You have a highly segmented audience with a defined LTV for each segment, and you understand the associated bid.

Whenever possible, assign a value to your audience. Ignoring LTV when you bid just doesn’t make sense long-term.

If you’re using a bid cap, make sure the cap is high enough. We suggest setting your cap higher than your actual goal. Still not sure what’s high enough? The average cost per optimization event your ad set was getting when you weren’t using a cap can be a useful starting point. Just keep in mind that bids are often higher than costs, so setting your bid cap at your average cost per optimization event could result in your ads winning fewer auctions.
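As a rough sketch of that starting point, here's a small Python helper. The buffer multipliers are illustrative assumptions on our part, not Facebook-published figures:

```python
# Sketch: picking a starting bid cap from the historical average cost
# per optimization event, padded upward because winning bids typically
# run higher than the costs you ultimately pay.

def starting_bid_cap(avg_cost_per_event: float, buffer: float = 1.5) -> float:
    """Start above the historical average cost to avoid losing auctions."""
    return round(avg_cost_per_event * buffer, 2)

# If your uncapped ad set averaged $8 per purchase event, capping at
# exactly $8 would likely throttle delivery; padding keeps ads winning.
print(starting_bid_cap(8.00))        # 12.0
print(starting_bid_cap(8.00, 1.25))  # 10.0
```

From there, test downward in small steps rather than starting tight and wondering why delivery stalled.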

 

A word about campaign structure

Campaign setup matters. A lot. We need to figure out which campaign structure is going to work best for the particular app we’re launching. That usually means using Campaign Budget Optimization settings, but we also have to decide if we want to initially optimize for Mobile App Installs (MAI) or App Event Optimization (AEO).

Typically, if we don’t already have a large database of similar payers, we will need to start with a limited launch using Mobile App Installs (MAI) as our campaign optimization objective until we’ve got enough data to shift to App Event Optimization (AEO). For initial testing, we like to buy 10,000 installs to allow for testing of game dynamics, KPIs, and creative.

Once we have 2-3 rounds of initial testing and data complete, we recommend switching UA strategies to focus on purchases using AEO and eventually VO to drive higher value users. This one-two punch of AEO and then VO is a great solution that allows ROAS to start to flow through the system for LTV modeling and tuning. Ultimately, what we’re doing is training Facebook’s algorithm for maximum efficiency and testing monetization and game dynamic assumptions.

Audience Demographics

Audience demographics get a lot of attention at this phase, too. We’ll review the performance of our campaigns at various age and gender thresholds, first using broad audience selections to build up a pool for evaluation and eventually allowing Facebook to focus on AEO/VO audiences to test monetization.

Facebook’s recommendation is to start as broad as possible and run without targeting, and we agree with this. Also, keep your account structure simple by using one or two campaigns and minimal ad sets where you reduce or eliminate audience overlap and set your budgets to allow for 50 conversions per week per ad set. Facebook refers to this as “Structure for Scale” and it gives their algorithm the best opportunity to learn and adapt to the audience you’re seeking. It will help get your ads out of the Learning Phase and into the optimized mode as quickly as possible.

Testing and optimizing creative

We believe creative is still the best competitive advantage available to advertisers. Because of that, we relentlessly test creative until we find break-out ads. Historically, this focus on creative has delivered most of the performance improvements we’ve made. But we’ve also found that new creative concepts have about a 5% chance of being successful. So we usually develop and test at least twenty new and unique creative concepts before we uncover a winning concept.
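The math behind that twenty-concept rule of thumb is simple binomial reasoning. Here's an illustrative sketch; the 5% hit rate is the figure cited above, and treating each concept as an independent trial is a simplification:

```python
# Sketch: with a ~5% hit rate per creative concept, testing n concepts
# yields an expected n * 0.05 winners and a 1 - 0.95**n chance of
# finding at least one.

P_HIT = 0.05  # observed share of new concepts that become winners

def expected_winners(n_concepts: int) -> float:
    return n_concepts * P_HIT

def prob_at_least_one(n_concepts: int) -> float:
    return 1 - (1 - P_HIT) ** n_concepts

for n in (10, 20, 40):
    print(n, expected_winners(n), round(prob_at_least_one(n), 2))
```

Twenty concepts gets you to one expected winner, but still only roughly a two-in-three chance of at least one, which is why the testing pipeline has to keep running continuously.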

That’s far more work than most advertisers put in, so to stay efficient we’ve developed a methodology for testing creative that we call Quantitative Creative Testing. QCT, combined with some creative best practices, allows us to develop the new high-performance creative concepts that clients need to dramatically improve their return on ad spend (ROAS) and to sustain profitability over time.

Our overarching goal with all this pre-launch creative is to stockpile a variety of winning creative concepts (videos, text, and headlines) so we’re ready for the worldwide launch and can launch in the US and other Tier 1 countries with optimized creatives, audiences, and a fine-tuned Facebook algorithm.

Collect lifetime value data

This is where optimizing the game’s monetization comes in. While we’re working on campaign structure, what to optimize for, and developing creative, we’re also collecting lifetime value data. This helps us meet the client’s early ROAS targets based on their payback objectives. Most mature gaming companies want a payback window of one to three years, which is pretty easy to attain if all the other aspects of a campaign are on track.

Low expectations for pre-launch

Post-launch metrics tend to be noticeably stronger than pre-launch metrics. Several factors contribute to this:

  • The same creative we tested at pre-launch will usually perform better when it’s used for the worldwide launch.
  • The US audience we held back from advertising during pre-launch will ultimately make up about 40% of the total ad spend once the worldwide rollout is underway. This gives us a large pool of potential users to go after in the most cost-effective ways.
  • We tend to see a correlation between higher reach and higher ROAS on Facebook. So the worldwide targeting we use post-launch also gets a boost in performance over the limited market targeting we did pre-launch.

 

Shifting Towards The Worldwide Launch Strategy

Pre-launch campaigns can run from anywhere between a week to a month. They are an investment, but they let us hit the ground running with proven creative, an efficient campaign structure, and a monetization strategy that further boosts profitability. For advertisers who want to scale fast, this is absolutely the way to go.

 

Phase Two: Taking Your Campaign to the Next Level in the Worldwide Launch Strategy

Testing creative early and often was one of the key takeaways from the first post in this three-part series, where we discussed how to complete the pre-launch phase of launching a gaming app and how to structure your account.

Now, we’re ready to gear up for the worldwide launch because we have tested creative, optimal campaign structure, and a monetization strategy that gives you the payback window you want.

To prepare for this global launch, we typically start by casting a wide net with different campaign structures so we can identify top-performers and scale them quickly.

We also focus on:

Which geographies to use

We’ll test Worldwide, the United States only, and Tier 1 minus the United States to see which performs best. Then we’ll drill down further as soon as we have enough data to decide which option to prioritize.

Testing audiences

We’ll test different interest groups, and we’ll also do a ton of work with lookalike audiences as soon as we’ve got enough purchases to start working with that data. We do so much work with audience selection that we built a tool to make it easier. Now our Audience Builder Express tool lets us create hundreds of super-highly targeted audiences with just a few clicks.

Which optimization goal works best

We did this in the pre-launch, but it has to be re-tested again now that we’re advertising in dramatically larger markets. Typically we’ll choose Mobile App Installs (MAI), App Event Optimization (AEO), or Value Optimization (VO).

Developing a campaign structure grid

These are spreadsheets that block out campaign structure and different campaign settings, including ad sets, the budgets for each campaign, and more. They are basically a blueprint of the entire launch strategy.

Here’s what one section of a campaign structure grid might look like:

Launch Strategy Campaign Structure Grid

Sometimes we’ll have two campaign structure grids – one from our team, and one from Facebook. Generally, Facebook’s recommended best practices are the right way to go. Those are well summed up in the first post in this series, in their Power5 recommendations, and reviewed in detail in their Blueprint certification training.

We agree with this approach, but every company is a little bit different. So while we usually follow (and always endorse) Facebook’s best practices, it’s critical to understand the backstory and the technical side of why those best practices work. When you look at the underlying principles and the new features we have to work with, every so often, for a particular client situation, we’ll bend those best practices a bit.

For example, a heavy push on Mobile App Installs is recommended for the first week of launch. We’ve seen success with this strategy, but we’ve also seen some games scale more profitably with Value Optimization in week one than they did with Mobile App Installs. This is why we recommend casting a wide net instead of exclusively optimizing for Mobile App Installs.

The Learning Phase

We also want to get the campaigns out of the learning phase as quickly as possible because it tends to suppress ROAS by anywhere from 20 to 40%. Getting out of the learning phase typically requires 50 conversions per ad set per week. Once we’re out of the learning phase, we’ll also avoid any “significant edits” to top-performing campaigns and ad sets, as those would put those campaigns back into the learning phase. Facebook’s system defines a “significant edit” as a campaign budget change of 40% or more or any bid change greater than 30%.
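As an illustration, here's a tiny helper that flags whether a planned edit crosses the significant-edit thresholds cited above. The function name and exact comparison semantics are our own sketch, not Facebook's implementation:

```python
# Sketch: flagging "significant edits" that would reset the learning
# phase, using the thresholds cited above (a budget change of 40% or
# more, or a bid change greater than 30%).

def resets_learning_phase(old_budget: float, new_budget: float,
                          old_bid: float, new_bid: float) -> bool:
    budget_change = abs(new_budget - old_budget) / old_budget
    bid_change = abs(new_bid - old_bid) / old_bid
    return budget_change >= 0.40 or bid_change > 0.30

# Raising a $100 daily budget to $130 with bids untouched stays safe...
print(resets_learning_phase(100, 130, 10, 10))  # False
# ...but jumping to $150 would restart learning.
print(resets_learning_phase(100, 150, 10, 10))  # True
```

In practice this means scaling winners with several smaller budget bumps rather than one large jump.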

Then there’s the issue of budgets. We aim to balance budgets within one to three days of launch so we can then shift spend to top-performing segments. We do that by first reducing the spend from underperforming geographies and optimization goals, and then reallocating it to top-performing geographies and optimization goals.

Once that’s all balanced out, we can safely increase the budgets for CBO campaigns and ad set budgets. We can also launch new campaigns with these same optimized settings.

At this point, we will have achieved a successful worldwide launch: the exposure has gone global, and the campaigns are profitable and operating with the best efficiency we can deliver for now.

The next step is to fine-tune that efficiency and try to scale up further with audience expansion.

 

Phase 3: Scaling Worldwide Launch Strategy Through Optimization

In the first and second segments of this series, we did our pre-launch strategy work and successfully launched a global campaign with positive ROAS. Now it’s time to optimize what we’ve got and make it even better.

The launch strategy shown below is a snapshot of everything we’ve done so far. It summarizes bid strategies and optimization goals, geographic roll-out, budgets, placements, and which audiences we’re targeting. Everything, basically. It’s not the sort of thing you would want one of your competitors to get hold of.

Launch Strategy Plan

Audience Expansion, Creative Testing, and Creative Refresh

These are the three fronts we will tackle to optimize this campaign launch strategy.

1. Audience Expansion

Once again, we’ll fire up our Audience Builder Express tool and start isolating specific audiences. In addition to the standard in-app event audiences (registrations, payers, etc), we’ll build and test audiences based on these KPIs:

  • Spend – all time
  • Last 7 days spend
  • Last 30 days spend
  • First activity date
  • Last activity date
  • Last spend date
  • First spend date
  • Spend in first 7 days
  • Spend in first 30 days
  • Highest level
  • Value-based audience by setting a minimum value 
  • Split by Android and iOS and select individual countries

This is what it looks like as you build up your payer base. As the data accrue, you’ll be able to build finely segmented audiences like this:

Launch Strategy Custom Audience List

A little bit later on, as your title continues to grow, you can build even more audiences focused on users who pay early and pay a lot:

 

The other benefit to audience expansion

Great creative takes a lot of work to develop, so we want it to last as long as possible. We also want to find every single person on the planet who could be a high-value customer. So we very carefully expand audiences to avoid audience fatigue as much as we do it to avoid creative fatigue.

But being able to tier these audiences and rotate through them means our creative lasts significantly longer. It allows us to find a huge potential customer base, and we get thousands of conversions we might never have found or would have spent way too much to get.

Being able to control and expand audiences like this will also be valuable later on, as we scale up and the spending goes up because audiences burn out even faster as campaigns scale. Exploiting every possible audience expansion trick, at every step of the campaign, is critical. Being able to do it efficiently and effectively is a massive competitive advantage.

2. Creative Testing

Just as audiences burn out faster with high-velocity campaigns, creative burns out faster, too, of course. So we have to be aggressively testing all the time. And we do: We are constantly rotating through new creative.

To find creative that performs at the level we need, we usually have to test twenty ads to find one piece of creative good enough to replace the control. That means we need a constant stream of new creative – both new “concepts” (completely new, out-of-the-box creative approaches) and new ad variations. Our creative development work is about 20% concepts and 80% variations.

This creative testing machine is running all the time, fueled by creative from our Creative Studio. It has enough capacity to easily deliver the 20 ads we need per week and can handle delivering up to hundreds of ads every week.

Because the game is global by this point, we’ll also need localized creative assets. Creative has to be in the right language and may even be optimized for localized placements or cultures.

So we don’t need just one winning ad every week. We need that winning ad cloned into every language, optimized for every region, produced at the right aspect ratios, and adapted for Facebook’s 14 different ad placements. That’s when creative development gets really work-intensive. But the Creative Studio can handle that. They’re adept at creating all those variations efficiently.

Dynamic Language Optimization

That’s the creative development side. There’s also a huge amount of testing strategy required to grow campaigns like this. First, we have to decide when and how we’re going to use Dynamic Language Optimization and Direct Language Targeting. These two levers can make a nice difference in campaign performance, but they don’t always work, or sometimes they need tweaks to work well.

We’ll also test worldwide versus country clusters, optimizing for large populations based on the dominant language of those populations. With dozens of countries and at least a dozen languages in play, this gets complicated. Fortunately, we have tools that make sorting all these inputs easy. And it is worth the work. Matching the right ad, language, and country cluster can improve performance by 20% or more.

3. Creative Refresh

Even with all this optimization, from creative development to audience expansion tricks, creative is still going to get stale. So we have to be slowly rotating new, high-performance creative into ad sets all the time. We don’t just stop showing one piece of creative and jump over to the new ad. As you probably know, even if a new ad tests well, that doesn’t mean it will beat the control. So we do careful “apples to apples” creative tests to ramp up new assets.

New Concepts

  • Change many elements
  • Large changes & impact
  • Low success rate (~5%)

Variations

  • Change main content
  • Keep header & footer
  • Keeps winners alive forever

Creative Refresh

  • Change only one element
  • Use A/B testing methods
  • Small change & impact

Worldwide Launch Strategy Conclusion

So that is how we’re launching new gaming apps on Facebook using their newest best practice, Structure for Scale. This process is working well, and we’re constantly testing, tweaking, and enhancing it.

That’s the fun thing about user acquisition on Facebook: It’s constantly evolving. The article we’ll write for how to launch a gaming app in 2020 Q4 (and even Q1) will be different from what you’ve just read.
