“Structure for Scale” is Facebook’s set of best practices advertisers should adopt if they want the Facebook advertising algorithm to operate at peak performance.
Here are more details on this approach for your reference:
1. Why do 50 conversions per week matter? Is there a performance benefit above 50 conversions per week? For instance, if there are 5 ad sets each delivering 50 conversions per week, would performance improve if they were consolidated to 1 ad set with 250 conversions per week?
Each time an ad is shown, Facebook’s ads delivery system learns more about the best people and places to show the ad. The more an ad is shown, the better the delivery system becomes at optimizing the ad’s performance. By this logic, it would be better to have 250 conversions for an ad set, rather than just 50 conversions, but in reality, Facebook learns the most about an ad during the first ~50 conversions. 50 conversions is an estimate of how much data Facebook needs to stabilize and exit the learning phase. During the learning phase, the delivery system is exploring the best way to deliver your ad set — so performance is less stable and CPA is usually worse. Once you’ve exited the learning phase, Facebook is still getting incremental learnings, but they are not as critical as the initial learnings.
Again, 50 conversions per ad set per week is just a good rule of thumb, but it represents the initial learnings Facebook's delivery system needs to be efficient. In reality, 51 conversions are better than 50, 50 better than 49, and so on, up until a certain point where enough has been learned and additional data does not change much. If the campaign has CBO turned on, delivery can flow to the most efficient ad set, so 250 conversions spread among 5 ad sets are about the same as 250 conversions for one ad set. One more thing to consider is audience overlap: if there is high audience overlap among the 5 ad sets, it's better to consolidate into fewer ad sets with lower overlap between them.
More information on this can be found here: https://www.facebook.com/business/help/112167992830700?id=561906377587030
2. If there isn’t enough budget to get to 50 conversions (or purchases), and there are no other events being tracked, does Facebook recommend moving from AEO or VO to MAI? And if so, does the install event still have 50 conversions per week goal to get out of the learning phase?
Yes, the goal is still 50 conversions per ad set per week for the install event.
3. If advertisers never have enough budget to get to 50 conversions per week for AEO or VO, does that mean they should completely abandon AEO and VO?
If you can hit the 50 conversions with MAI, then you can move back to AEO or VO once you’ve hit the threshold with MAI. It’s not that you have to stick with MAI forever, think of it as a fluid process where you are moving “up funnel” to get more learnings and then moving back “down funnel” with those learnings to hopefully get enough conversions with the additional learnings and improve performance.
4. Which manual changes trigger a significant edit and reset the learning phase?
Every edit you make (during the learning phase or after it) has some effect on delivery, but not every edit causes the ad set to reenter the learning phase. Only a significant edit causes an ad set to reenter the learning phase.
The following are considered significant edits: changes to targeting, changes to ad creative, changes to the optimization event, adding a new ad to an ad set, or pausing an ad set for seven or more days.
Changes to bid or budget may or may not be significant, depending on the magnitude of the change:
For example, if you increase your budget from $100 to $101, that isn’t likely to cause one or more ad sets to reenter the learning phase. However, if you change your budget from $100 to $1000, one or more ad sets may reenter the learning phase.
For more information: https://www.facebook.com/business/help/316478108955072?id=561906377587030
5. Is there a recommended frequency and number of manual changes? What would be considered "too frequent" or "too many" changes?
There isn't a set number of manual changes. This could be something to test in a conversion lift test to see what makes sense for your clients. It's also important to weigh the time and effort needed to make changes against whether they are worthwhile.
6. Is the 50 conversions per week goal counted in non-unique or unique conversions? And is it just the conversion event being used for the optimization goal, or all conversion events throughout the conversion funnel?
Unique conversions, and just the conversion event that's being used for the optimization goal.
7. What is the expected performance benefit of being out of the learning phase vs. being in the learning phase?
Once you are out of the learning phase, you can expect less volatility and more stable and predictable results, in terms of both CPAs/ROAS and the number of impressions served.
8. Is the efficiency gain occurring over time as one gets to 50 conversions, or does it only come when 50 conversions are achieved?
It comes over time, with each conversion up to 50 contributing to increased efficiency. For example, 30 conversions are better than 20, 50 are better than 40, etc. 51 conversions are better than 50 and so on, but efficiency gains tend to level out around 50 conversions.
9. What is the % threshold that triggers a reset of the learning phase for bids, budgets, and the amount of time paused? For instance, if the budget is increased by 20% or 70%, would 20% avoid a significant edit while 70% would trigger a significant edit? Or if there’s a pause for 1 hour vs. 1 month, is the learning phase reset different?
There isn't a specific threshold, as it depends on many factors, such as the overall budget for the campaign and the number of simultaneous edits (are you changing just the budget, or the budget and the bid?). In general, a 20% bid or budget change is less likely to register as a significant edit than a 70% change. If the goal is not to reset the learning phase, it's also better to change either the bid or the budget rather than both at the same time. It's impossible to give an exact percentage that works for all campaigns at all times, but as a rule of thumb, avoid changes larger than 30%. Interestingly, some internal research found that the smaller the overall budget, the larger a change has to be to count as significant (which seems counterintuitive).
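The rule of thumb above can be captured in a small helper. This is only a sketch of the heuristic described in the answer — the 30% cutoff is a rule of thumb, not an official Facebook threshold, and the function name is my own.

```python
def likely_significant(old_value: float, new_value: float,
                       threshold: float = 0.30) -> bool:
    """Heuristic: flag a bid or budget change as likely to register as a
    significant edit when its relative magnitude exceeds ~30%.

    The 30% threshold is the rule of thumb from the answer above, not an
    official cutoff -- the real determination also depends on the overall
    budget and how many settings change at once.
    """
    if old_value <= 0:
        raise ValueError("old_value must be positive")
    change = abs(new_value - old_value) / old_value
    return change > threshold
```

Under this heuristic, a $100 to $120 budget change (20%) passes quietly, while $100 to $170 (70%) or $100 to $1,000 would be flagged as likely to reset the learning phase.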
10. What is the ideal reach/audience size?
A good rule of thumb for gaming clients is to keep audience sizes 5M+ for AEO and VO and 2M+ for MAI. However, it's hard to make a broad recommendation; it depends on the goals of the campaign, performance, etc. It's best to form hypotheses and run tests to find the ideal reach/audience size for each client.
11. What would be considered “too low” reach?
For gaming clients, anything below 5M for AEO and VO, or below 2M for MAI, would be considered too low.
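The audience floors above can be expressed as a simple lookup. This is a sketch of the rule of thumb for gaming clients only — the numbers are the heuristics from the answer, not official platform minimums.

```python
# Rule-of-thumb audience floors for gaming clients, per the answer above.
# These are heuristics, not official platform minimums.
MIN_AUDIENCE = {
    "AEO": 5_000_000,
    "VO": 5_000_000,
    "MAI": 2_000_000,
}

def audience_too_low(optimization: str, audience_size: int) -> bool:
    """Return True when the audience is below the rule-of-thumb floor
    for the given optimization type ("AEO", "VO", or "MAI")."""
    return audience_size < MIN_AUDIENCE[optimization]
```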
12. Is it possible to have “too high” reach, like broad targeting?
When you have a strong signal, e.g. seeds for lookalike audiences, these should be used in tandem with broad targeting. You can go too broad in a sense by not taking advantage of the information Facebook has about valuable audiences.
13. What if the client is running in limited geography like the SF bay area only?
When targeting a specific region or making decisions that will severely limit the audience, try to go broad where you can to achieve 5M+ AEO/VO audiences and 2M+ MAI audiences.
14. How do advertisers know if bids are aggressive enough? Or too aggressive?
If you are hitting your performance targets/KPIs at your current bid, you are bidding appropriately. We also recommend leveraging the bid strategy section within Delivery View: https://www.facebook.com/business/m/one-sheeters/deliveryview
15. How frequently should advertisers be making bid adjustments?
It depends on performance. If performance is not where you want it to be, try making a few small, isolated changes to bidding to see if that improves it.
16. Is auto bid considered an aggressive bid?
Autobid is aggressive in the sense that it tells Facebook's system to find as many conversions as possible, without a cap on efficiency or the price of a conversion. It's good when you want to spend your full budget: it will reach all the lowest-cost opportunities while spending that budget. However, costs can rise as you exhaust the least expensive opportunities or as you increase the budget. So it is aggressive, but it isn't the best approach if you have performance targets or ROAS goals. If you do have one of these, use a cost cap for a target CPA or min ROAS bidding for ROAS goals. These are favored in the auction because including these parameters gives Facebook's system more signal about what is important to you and allows it to make more informed decisions.
17. How is auto bid treated in the auction? For instance, is auto bid VO the same as a min-ROAS bid of 0.001?
Autobid tells the Facebook auction to spend the full budget, even if opportunities get more expensive.
18. Does FB recommend auto or manual bidding?
When you have a max bid (bid cap), want to maximize cost efficiency (cost cap), want to maintain a consistent cost (target cost), or when ROAS is the primary measure of success (min ROAS), use manual bidding options instead of auto. When you have this information, it's better to incorporate it into your bidding strategy, since it gives an additional signal to Facebook's auction. One note about cost cap: it can be more volatile if there are fewer optimization events or frequent changes made to the campaign, and it aims to average the CPA over a 14-day period.
19. Is there a particular time of day or day of the week that is best to adjust bids?
No, not in particular. For more helpful information on bidding, please see: https://www.facebook.com/business/m/one-sheeters/facebook-bid-strategy-guide
20. How much overlap is too much?
There isn't a set percentage; it depends on a number of factors. 20% is a good threshold to try to stay under, although it can vary by advertiser, auction conditions, etc. It also depends on CBO usage: audience overlap can be tolerated if CBO is turned on, since Facebook can determine where to spend the budget (but the overlapping ad sets need to be in the same campaign; separate campaigns with overlap will not be helped by CBO). You can use the Auction Overlap section in Delivery View to determine if poor performance or under-delivery is related to overlap: facebook.com/business/m/one-sheeters/deliveryview
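The 20% rule of thumb can be checked with basic set arithmetic if you have the audience membership lists. This is an illustrative sketch only: I'm assuming overlap is measured as a share of the smaller audience (a common convention, similar in spirit to what the Auction Overlap tool reports), and the function names are my own.

```python
def overlap_pct(audience_a: set, audience_b: set) -> float:
    """Overlap measured as the share of the smaller audience that also
    appears in the other (an assumed convention for this sketch)."""
    smaller = min(len(audience_a), len(audience_b))
    if smaller == 0:
        return 0.0
    return len(audience_a & audience_b) / smaller

def needs_consolidation(audience_a: set, audience_b: set,
                        threshold: float = 0.20) -> bool:
    """Flag ad-set pairs above the ~20% rule-of-thumb overlap threshold
    from the answer above."""
    return overlap_pct(audience_a, audience_b) > threshold
```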
21. How should advertisers manage audience exclusions when running lookalikes vs. interests vs. broad targeting?
It depends on how the campaigns are set up. For example, if you're running one campaign with a custom-audience ad set and one broad/interest ad set, exclude the custom audience from the broader ad set. Lookalikes are mutually exclusive from their seed audiences, so there's no need to add exclusions in, say, a campaign with a custom-audience ad set and a lookalike ad set. If you are going very broad, consider adding exclusions such as website visitors to exclude current customers. If using interest targeting, always enable targeting expansion so Facebook can serve to additional audiences when it finds that efficient.
22. If we’re managing multiple accounts for a client, or if they also run an internal account, do advertisers need to consider audience overlap across all accounts or only within each individual account?
Audiences are deduped across ad accounts if the accounts use the same page or app ID, since Facebook can then determine it's the same advertiser. If they use different page or app IDs, the ads will compete against each other.
23. How does an advertiser determine compliance with structure for scale? Is there a scorecard or anything to access? What is the internal definition Facebook uses for its dashboards?
Delivery View, which is only available in Ads Manager (not yet available in the API), is a good measure of success. Please see: https://www.facebook.com/business/m/one-sheeters/deliveryview
Also, here is Facebook’s internal definition generally used for dashboards: An advertiser has 40 or fewer ad sets OR 2 or fewer ad sets per campaign, AND less than 50% of ad sets in the learning phase OR less than 50% of 28d revenue is in the learning phase.
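The internal dashboard definition above is a compound boolean rule, so it is easy to get the grouping wrong when reading it as prose. This sketch encodes one reading of it; the grouping of the OR/AND clauses is my interpretation, and the function name is hypothetical.

```python
def meets_dashboard_definition(total_ad_sets: int,
                               max_ad_sets_per_campaign: int,
                               pct_ad_sets_in_learning: float,
                               pct_28d_revenue_in_learning: float) -> bool:
    """One reading of the internal dashboard definition quoted above:
    (<= 40 ad sets OR <= 2 ad sets per campaign) AND
    (< 50% of ad sets in learning OR < 50% of 28d revenue in learning).
    Percentages are fractions in [0, 1]."""
    structure_ok = (total_ad_sets <= 40
                    or max_ad_sets_per_campaign <= 2)
    learning_ok = (pct_ad_sets_in_learning < 0.5
                   or pct_28d_revenue_in_learning < 0.5)
    return structure_ok and learning_ok
```

For example, an account with 60 ad sets still qualifies under this reading if no campaign holds more than 2 ad sets and fewer than half of its ad sets are in the learning phase.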
24. Are there different best practices for apps that monetize primarily through IAP and run AEO / VO, vs. apps that monetize primarily through ad revenue and run MAI?
If most of the revenue is IAP, run a majority of AEO and VO, weighted toward VO first, then AEO, with some MAI, but only about 10%. If monetizing only on ads, MAI is really the only option. For AEO and VO, keep audiences at 5M+, and for MAI at 2M+. From a structure standpoint, use one campaign per optimization: for purchase AEO, for example, use one campaign with no other optimizations. Keep the structure the same if the app event changes, and separate iOS and Android at the campaign level.
25. How does this factor into the structure for scale best practices, and do incremental campaigns from split tests hurt account performance?
I know you walked me through these, but rereading it, I'm not sure what this question is asking. In any case, split testing should not affect structure for scale performance.
26. As gaming performance deteriorates during the holiday shopping season, what’s the best way to protect client performance while also staying within best practices for structure for scale? For instance, should advertisers reduce bids and fall below 50 conversions per week? Or is it better to reduce # of campaigns to allow 50 conversions per week per ad set, while still bidding aggressively?
Focusing on a consolidated/sound structure should put you in the best position to have the best performance possible during harsher auction conditions.
It’s better to ensure all ad sets reach 50 conversions/week and exit the learning phase through aggressive bidding, even if you have to reduce the overall number of ad sets/campaigns running.
27. How should advertisers handle clients in a soft launch that have specific volume goals and tests per geo, where 50 conversions per week are not achievable? For instance, should advertisers do this in a different account than the account advertisers will use for scaling worldwide after soft launch?
Consolidate where possible to get to fewer ad sets, and/or move the optimization event up the funnel. As for using a separate account, there isn't a performance difference either way.
28. Is there a recommended minimum frequency of changes?
It depends on performance, etc. You could run a 3-cell Conversion Lift test to determine what frequency of changes works best for your advertisers: one cell with no edits, one cell with a few edits, and one cell with many edits.
29. Is there a benefit from taking significant edits at some point?
Yes, dependent on the situation. If performance is bad, for example, and you know a new creative is performing well in other campaigns, it probably makes sense to take a significant edit to add it in.
30. Is it a good idea to reset the learning phase if performance is dropping as ads fatigue?
It can be if you have a lot of new assets, audiences, etc. that you think will improve performance. Adding in new creatives will reset the learning phase but could be more beneficial in the long run. It’s about being intentional about when and why you want to reset the learning phase.
31. What are the options for working with Facebook to reduce the risk for testing structure for scale?
Utilize lift testing, work closely with AMs and CSMs to develop a testing structure that everyone agrees on from the onset, address any questions upfront, understand how teams will measure success, etc.
32. How can advertisers test structure for scale, for large spenders, without going through a full rebuild?
Start with one or a few initiatives, e.g. aim to get 50% or more of all campaigns out of the learning phase. Make small, incremental changes if it’s too risky to fully rebuild the account.
33. Is it possible for FB to construct a backend split where advertisers take a small % of the total budget and invest in a split test to get clean data for different campaign structures?
I don’t think this is possible. The best solution might be to test out of a separate ad account.