We’ve been doing well with “soft” launches. A soft launch gives us a great opportunity to pre-test creative, test campaign structures, identify audiences, and evaluate our client’s monetization strategy and LTV model. By the time we’re ready for the worldwide launch, we’ll have found several winning creatives and have a strong sense of the KPIs necessary to achieve and sustain profitable UA at scale.
Soft launches tend to work best if we focus on a limited international market. Usually, we’ll pre-launch in an English-speaking country outside the US and Europe; Canada, New Zealand, and Australia are ideal picks for this. Choosing countries like these lets us conduct testing in markets that are representative of the US, but without touching the US market itself. Because we are not launching in the US, the soft launch won’t spoil our chances of being featured by Apple or Google.
Once we’ve selected the market, we pivot to:
A simplified account structure, rooted in auction and delivery best practices, will enable you to efficiently scale across the Facebook family of apps. We typically hear things like “My performance is extremely volatile,” “My ad sets are under-delivering,” “My CPAs were too high, so I turned off my campaign,” and “I’ve heard I need to use super-granular targeting and placements to find pockets of efficiency.” The best way to avoid these problems is to structure your account for scale based on Facebook’s best practices. Facebook laid out those best practices in its Power 5 recommendations earlier this year. But, in evidence of how rapidly the platform evolves, it recently fine-tuned them again in its Structure for Scale methodology.
The gist of Structure for Scale, and of what Facebook wants advertisers to do now, is to radically simplify campaign structures, minimize the amount of creative you’re testing, and use automated options like Value Optimization and App Event Optimization to handle bids, placements, and audience selection for you. Facebook is building up a considerable body of evidence that this approach delivers significant campaign performance improvements, though if you’re a UA manager who likes control, it can be an adjustment.
The underlying driver of all these new recommendations from Facebook is that we need to build and manage our campaigns to complement the algorithm, not fight it. One of the key benefits of adopting the new best practices is minimizing time spent in Facebook’s Learning Phase. Ad sets in the Learning Phase are not yet delivering efficiently, and often underperform by as much as 20-40%. To minimize this, structure your account to give the algorithm the “maximum signal” it needs to get you out of the Learning Phase faster.
Expect somewhat volatile results during this exploration period (aka the Learning Phase) as the system calibrates to deliver the best results for your desired outcome. Generally, the more conversions the system has, the more accurate Facebook’s estimated action rates will be. At around 50 conversions per week, the system is well-calibrated. It will shift from the exploration stage to optimizing for the best results given the audience and the optimization goals you’ve set.
Through all of this, keep in mind that Facebook has built its prediction system to use as much data as possible. When it predicts the conversion rate for an ad, it takes into consideration the ad’s history, the campaign’s history, the account’s history, and the user’s history.
When the system says that an ad is in Learning Phase, it’s only a warning that the ad has not yet had enough conversions for the algorithm to be confident that its predictions are as good as they will be later. The standard threshold for confidence is 50 conversions, but having 51 conversions is not that much different from having 49. The more conversions you give the system, the better its predictions will be.
While it is best practice to let the algorithm manage placements and bids, we still retain quite a few levers of control over specific parts of campaign management.
Never underestimate the power of choosing the right bid strategy. Make your pick carefully (and test it) based on your campaigns’ goals and cost requirements. Whatever bid strategy you pick is essentially a set of instructions telling the Facebook algorithm how to go about reaching your business goals. Here are a few things to consider:
Whenever possible, assign a value to your audience. Ignoring LTV when you bid just doesn’t make sense long-term.
If you’re using a bid cap, make sure the cap is high enough. We suggest setting it higher than your actual goal. Still not sure what’s high enough? The average cost per optimization event your ad set achieved when you weren’t using a cap can be a useful starting point. Just keep in mind that bids are often higher than costs, so setting your bid cap at your average cost per optimization event could cause your ads to win fewer auctions.
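To make this concrete, here’s a minimal sketch of deriving a starting bid cap from the uncapped average cost per optimization event. The numbers and the 30% headroom multiplier are our illustrative assumptions, not a Facebook rule; test your own multiplier.

```python
# Sketch: deriving a starting bid cap from historical cost data.
# The 1.3 headroom multiplier is an illustrative assumption, not a Facebook rule.

def suggested_bid_cap(avg_cost_per_event: float, headroom: float = 1.3) -> float:
    """Return a starting bid cap set above the observed average cost.

    Bids are often higher than costs, so capping at the average cost
    per optimization event tends to lose auctions; add headroom instead.
    """
    return round(avg_cost_per_event * headroom, 2)

# Example: an ad set averaged $4.00 per optimization event without a cap.
print(suggested_bid_cap(4.00))  # 5.2
```

From there, adjust the cap up or down based on delivery rather than reflexively pausing the campaign when CPAs drift.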
Campaign setup matters. A lot. We need to figure out which campaign structure is going to work best for the particular app we’re launching. That usually means using Campaign Budget Optimization settings, but we also have to decide if we want to initially optimize for Mobile App Installs (MAI) or App Event Optimization (AEO).
Typically, if we don’t already have a large database of similar payers, we will need to start with a limited launch using Mobile App Installs (MAI) as our campaign optimization objective until we’ve got enough data to shift to App Event Optimization (AEO). For initial testing, we like to buy 10,000 installs to allow for testing of game dynamics, KPIs, and creative.
Once we’ve completed 2-3 rounds of initial testing, we recommend shifting the UA strategy to focus on purchases using AEO and eventually Value Optimization (VO) to drive higher-value users. This one-two punch of AEO followed by VO lets ROAS data start to flow through the system for LTV modeling and tuning. Ultimately, what we’re doing is training Facebook’s algorithm for maximum efficiency while testing monetization and game-dynamic assumptions.
Audience demographics get a lot of attention at this phase, too. We’ll review the performance of our campaigns at various age and gender thresholds, first using broad audience selections to build up a pool for evaluation and eventually allowing Facebook to focus on AEO/VO audiences to test monetization.
Facebook’s recommendation is to start as broad as possible and run without targeting, and we agree with this. Also, keep your account structure simple by using one or two campaigns and minimal ad sets where you reduce or eliminate audience overlap and set your budgets to allow for 50 conversions per week per ad set. Facebook refers to this as “Structure for Scale” and it gives their algorithm the best opportunity to learn and adapt to the audience you’re seeking. It will help get your ads out of the Learning Phase and into the optimized mode as quickly as possible.
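As a rough sanity check on budgeting for that 50-conversions-per-week threshold, the minimum weekly spend per ad set is just your expected CPA times 50. The $6 CPA below is an illustrative assumption:

```python
# Sketch: minimum budget per ad set to reach ~50 conversions per week,
# the threshold at which the algorithm is considered well-calibrated.

def min_weekly_budget(expected_cpa: float, conversions: int = 50) -> float:
    """Weekly spend needed to afford the target number of conversions."""
    return expected_cpa * conversions

# Example: at an assumed $6.00 expected CPA, each ad set needs about
# $300/week (roughly $43/day) to exit the Learning Phase on schedule.
weekly = min_weekly_budget(6.00)
print(weekly, round(weekly / 7, 2))  # 300.0 42.86
```

If the implied budget across all ad sets exceeds what you can spend, that’s a signal to consolidate ad sets rather than starve each one below the threshold.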
We believe creative is still the best competitive advantage available to advertisers. Because of that, we relentlessly test creative until we find break-out ads. Historically, this focus on creative has delivered most of the performance improvements we’ve made. But we’ve also found that new creative concepts have about a 5% chance of being successful. So we usually develop and test at least twenty new and unique creative concepts before we uncover a winning concept.
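The arithmetic behind the twenty-concept rule: if each new concept has an independent ~5% chance of winning (a simplification, since real concepts aren’t independent), twenty concepts only get you to about a 64% chance of at least one winner:

```python
# Sketch: probability of at least one winning creative, assuming each
# concept independently succeeds with probability p (a simplification).

def p_at_least_one_winner(p: float, n_concepts: int) -> float:
    return 1 - (1 - p) ** n_concepts

print(round(p_at_least_one_winner(0.05, 20), 2))  # 0.64
print(round(p_at_least_one_winner(0.05, 45), 2))  # 0.9
```

This is why twenty concepts is a floor, not a ceiling: at a 5% hit rate it takes roughly 45 concepts before the odds of finding a winner reach 90%.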
That’s far more work than most advertisers put in, so to stay efficient we’ve developed a methodology for testing creative that we call Quantitative Creative Testing. QCT, combined with some creative best practices, allows us to develop the new high-performance creative concepts that clients need to dramatically improve their return on ad spend (ROAS) and to sustain profitability over time.
Our overarching goal with all this pre-launch creative is to stockpile a variety of winning creative concepts (videos, text, and headlines) so we’re ready for the worldwide launch and can launch in the US and other Tier 1 countries with optimized creatives, audiences, and a fine-tuned Facebook algorithm.
This is where optimizing the game’s monetization comes in. While we’re working on campaign structure, what to optimize for, and developing creative, we’re also collecting lifetime value data. This helps us meet the client’s early ROAS targets based on their payback objectives. Most mature gaming companies want a payback window of one to three years, which is pretty easy to attain if all the other aspects of a campaign are on track.
Post-launch metrics tend to be noticeably stronger than pre-launch metrics. Several factors contribute to this:
Pre-launch campaigns can run anywhere from a week to a month. They are an investment, but they let us hit the ground running with proven creative, an efficient campaign structure, and a monetization strategy that further boosts profitability. For advertisers who want to scale fast, this is absolutely the way to go.