Many advertisers come to us with well-developed user acquisition programs. They’ve got established advertising accounts and an extensive history of which creative approaches have worked.
That’s not the case if you’re launching an app. So when a brand new advertiser approaches us wanting to advertise a brand new app, this is how we do it.
Our process breaks out into three phases:
We’ve been doing well with “soft” launches. A soft launch gives us an opportunity to pre-test creative, test campaign structures, identify audiences, and help evaluate our client’s monetization strategy and LTV model. By the time we’re ready for the worldwide launch, we’ll have found several winning creatives and have a strong sense of the KPIs necessary to achieve and sustain profitable UA at scale.
Soft launches tend to work best if we focus on a limited international market. Usually, we’ll pre-launch in an English-speaking country outside of the US and Europe. Canada, New Zealand, and Australia are ideal picks. Choosing countries like those lets us conduct testing in markets that are representative of the US, but without touching the US market itself. And because we aren’t launching in the US, we don’t spoil our chances of being featured by Apple or Google.
Once we’ve got the market selected, we pivot to:
The gist of Structure for Scale – and of what Facebook wants advertisers to do now – is to radically simplify campaign structures, minimize the amount of creative you’re testing, and to use targeting options like Value Bidding and App Event Bidding to control bids, placements, and audience selection for you. Facebook is building up a considerable body of evidence that this approach results in significant campaign performance improvements, though if you’re a UA manager who likes control, it can be an adjustment.
The underlying driver of all these new recommendations from Facebook is that we need to build and manage our campaigns to complement the algorithm – not to fight it. One of the key benefits of adopting the new best practices is minimizing Facebook’s Learning Phase. Ad sets in the Learning Phase are not yet delivering efficiently and often underperform by as much as 20-40%. To minimize this, structure your account to give the algorithm the “maximum signal” it needs to get you out of the Learning Phase faster.
Expect somewhat volatile results during this exploration period (aka the Learning Phase) as the system calibrates to deliver the best results for your desired outcome. Generally, the more conversions the system has, the more accurate Facebook’s estimated action rates will be.
At around 50 conversions per week, the system is well-calibrated. It will shift from the exploration stage to optimizing for the best results given the audience and the optimization goals you’ve set.
Through all of this, keep in mind that Facebook has built its prediction system to use as much data as possible. When it predicts the conversion rate for an ad, it takes into consideration the ad’s history, the campaign’s history, the account’s history, and the user’s history.
When the system says that an ad is in Learning Phase, it’s only a warning that the ad has not yet had enough conversions for the algorithm to be confident that its predictions are as good as they will be later. The standard threshold for confidence is 50 conversions.
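The Learning Phase rule above reduces to a simple check. This is an illustrative sketch only – not a Facebook API – using the 50-conversion confidence threshold stated in the text:

```python
# Illustrative only. The 50-conversion threshold comes from the text above;
# "calibrated"/"learning" are our own labels, not Facebook terminology.
LEARNING_PHASE_THRESHOLD = 50  # conversions before the algorithm is confident

def learning_phase_status(recent_conversions: int) -> str:
    """Classify an ad set by the ~50-conversion rule described above."""
    if recent_conversions >= LEARNING_PHASE_THRESHOLD:
        return "calibrated"  # exploration ends; delivery optimizes for your goal
    return "learning"        # expect volatile, less efficient delivery
```

Once an ad set crosses that threshold, the system shifts from exploration to optimizing for your stated goal.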
While it is best practice to let the algorithm manage placements and bids, we still have plenty of levers to control specific parts of campaign management.
Increasing audience size can help us gather more data and prevent inefficiencies caused by targeting the same audience across multiple ad sets. We’ll follow the standard best practices for increasing audience size that we’ve mentioned before, like:
Then we’ll also:
Once you have about 10,000 installs, you can move to AEO (App Event Optimization for purchases). Then your audience structure can shift to something more like this:
When you have about 1,000 purchases, you can move to Value Bidding and reselect your audiences again:
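The staged progression described above – MAI until roughly 10,000 installs, AEO until roughly 1,000 purchases, then Value Bidding – can be sketched as a decision rule. The thresholds are the approximate figures from the text, not Facebook-documented cutoffs:

```python
def optimization_stage(installs: int, purchases: int) -> str:
    """Pick an optimization goal from the rough milestones above."""
    if purchases >= 1_000:
        return "VO"   # Value Bidding / Value Optimization
    if installs >= 10_000:
        return "AEO"  # App Event Optimization for purchases
    return "MAI"      # Mobile App Installs
```

In practice the transition is a judgment call, but the ordering – installs first, purchase events second, purchase value last – is the point.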
The more placements your ads appear in, the more opportunities you have to reach or convert someone. As a result, the more placements your ads are in, the better your results can be – and you won’t get penalized for letting the algorithm test new placements.
After the Learning Phase, the algorithm will just not show your ads where they don’t perform. It can do the placement testing for you.
Keep in mind that Facebook’s system of Discount Bidding (also known as “Best Response Bidding”) will always try to find the lowest-cost results based on a campaign’s objective and within the audience constraints set by the advertiser. But if you’re willing to widen the delivery pool by including additional placements, you’re giving the algorithm more to work with. That gives it a better shot at finding lower-cost results and delivering more results for the same budget.
The ad sizes and ratios you use, of course, determine which placements those ads can appear in. So you’ll want to choose the images or videos people see in your ads based on where those ads may appear.
If you elect to manually select placements, use asset customization. It will let you specify what ads are shown for specific placements to ensure your ad displays the way you want. Asset customization also allows organizations to easily choose the ideal image or video for some placements within one ad set. If you have a content strategy that requires specific assets to appear in specific placements, this option is your best bet.
Never underestimate the power of choosing the right bid strategy. Make your pick carefully (and test it) based on your campaigns’ goals and cost requirements. Whatever bid strategy you pick is basically giving the Facebook algorithm instructions on how it should go about reaching your business goals. Here are your options:
This directs the Facebook algorithm to bid so you achieve maximum results for your budget.
Use the Lowest Cost when:
This aims to hold your average cost per result at a specified amount. So even if cheaper conversions exist, Facebook will optimize toward the specified cost per result.
Use it when:
Sets a limit on how high Facebook will bid for an incremental conversion.
Use it when:
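One way to summarize the three options is as a decision rule based on your constraints. This is our illustrative heuristic, not official Facebook guidance:

```python
def choose_bid_strategy(needs_hard_bid_limit: bool, has_cost_target: bool) -> str:
    """Map campaign constraints to one of the three bid strategies above.
    Illustrative heuristic only; test against your own campaigns' goals."""
    if needs_hard_bid_limit:
        return "Bid Cap"      # caps how high Facebook will bid per conversion
    if has_cost_target:
        return "Cost Cap"     # holds the average cost per result near a target
    return "Lowest Cost"      # maximum results for the budget
```

If neither constraint applies, Lowest Cost is the default that gives the algorithm the most freedom.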
Campaign setup matters. A lot. We need to figure out which campaign structure will work best for the particular app we’re launching. That usually means using Campaign Budget Optimization settings, but we also have to decide if we want to initially optimize for Mobile App Installs (MAI) or App Event Optimization (AEO).
Typically, if we don’t already have a large database of similar payers, we’ll start with a limited launch using Mobile App Install (MAI) campaigns until we’ve got enough data to shift to AEO (App Event Optimization). For initial testing, we like to buy 10,000 installs to allow for testing of game dynamics, KPIs, and creative.
Once we have 2-3 rounds of initial testing and data complete, we recommend switching strategy to focus on purchases using AEO and eventually VO to drive higher value users. This one-two punch of AEO and then VO is a great solution that allows return on ad spend to start to flow through the system for lifetime value modeling and tuning.
Ultimately, what we’re doing is:
Audience demographics get a lot of attention at this phase, too. We’ll review the performance of our campaigns at various age and gender thresholds, first using broad audience selections to build up a pool for evaluation and eventually allowing Facebook to focus on AEO/VO audiences to test monetization.
Our overarching goal with all this pre-launch creative is to stockpile a variety of winning creative concepts including videos, text, and headlines so we’re ready for the worldwide launch and can launch in the US and other Tier 1 countries with optimized creatives, audiences, and a fine-tuned Facebook algorithm.
We’ll follow all the best practices for creative testing that we outlined in the earlier creative testing section of this white paper.
This is where optimizing the game’s monetization comes in. While we’ve been working on campaign structure, what to optimize for, and developing creative, we’ve also been collecting lifetime value data. This helps us meet the client’s early ROAS targets based on their payback objectives. Most mature gaming companies want a payback window of one to three years, which is pretty easy to attain if all the other aspects of a campaign are on track.
Post-launch metrics tend to be noticeably better than pre-launch metrics. Here’s why:
Pre-launch campaigns can run from anywhere between a week to a month. They are an investment, but they let us hit the ground running with proven creative, an efficient campaign structure, and a monetization strategy that further boosts profitability. For advertisers who want to scale fast, this is absolutely the way to go.
Now we’re ready to gear up for the worldwide launch because we have tested creative, optimal campaign structure, and a monetization strategy that gives you the payback window you want. To prepare for this global launch, we typically start by casting a wide net with different campaign structures so we can identify top-performers and scale them quickly.
We also focus on:
We’ll test Worldwide, the United States only, and Tier 1 minus the United States to see which performs best. Then we’ll drill down further as soon as we have enough data to decide which option to prioritize.
We’ll test different interest groups, and we’ll also do a ton of work with lookalike audiences as soon as we’ve got enough purchases to start working with that data. We do so much work with audience selection that we built a tool to make it easier. Now our Audience Builder Express tool lets us create hundreds of super-highly targeted audiences with just a few clicks.
We did this in the pre-launch, but it has to be re-tested again now that we’re advertising in dramatically larger markets. Typically we’ll choose Mobile App Installs (MAI), App Event Optimization (AEO), or Value Optimization (VO).
These are spreadsheets that block out campaign structure and different campaign settings, including ad sets, the budgets for each campaign, and more. They’re basically a blueprint of the entire launch.
Here’s what one section of a campaign structure grid might look like:
Sometimes we’ll have two campaign structure grids – one from our team, and one from Facebook. Generally, Facebook’s recommended best practices are the right way to go.
While we agree with Facebook’s approach, every company is a little different. We usually follow (and always endorse) Facebook’s best practices, but it’s critical to understand the backstory and the technical reasons why those best practices work. When we look at the underlying principles and the new features available for each client, every so often we’ll bend those best practices a bit to fit a particular client’s situation.
For example, heavy mobile app installs are recommended for the first week of launch. We’ve seen success with this strategy, and we’ve also seen some games scale more profitably with Value Optimization in week one than they did with Mobile App Installs. This is why we recommend casting a wide net instead of exclusively optimizing for Mobile App Installs.
We also want to get the campaigns out of the learning phase as quickly as possible. Once we’re out of the learning phase, we’ll also avoid any “significant edits” to top-performing campaigns and ad sets, as those would put those campaigns back into the learning phase. Facebook’s system defines a “significant edit” as a campaign budget change of 40% or more or any bid change greater than 30%.
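The “significant edit” thresholds quoted above can be expressed as a pre-flight check before touching a winning campaign. The percentages are the ones stated in the text:

```python
def is_significant_edit(old_budget: float, new_budget: float,
                        old_bid: float, new_bid: float) -> bool:
    """True if the change would reset the Learning Phase:
    a budget change of 40% or more, or a bid change greater than 30%.
    Illustrative check based on the thresholds described in the text."""
    budget_change = abs(new_budget - old_budget) / old_budget
    bid_change = abs(new_bid - old_bid) / old_bid
    return budget_change >= 0.40 or bid_change > 0.30
```

Anything that trips this check on a top performer is worth deferring, batching, or applying to a duplicate campaign instead.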
Then there’s the issue of budgets. We aim to balance budgets within one to three days of launch so we can then shift spend to top-performing segments. We do that by first reducing the spend from underperforming geographies and optimization goals, and then reallocating it to top-performing geographies and optimization goals.
Once that’s all balanced out, we can safely increase the budgets for CBO campaigns and ad set budgets. We can also launch new campaigns with these same optimized settings.
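A minimal sketch of that rebalancing step, assuming per-segment ROAS as the performance signal and a 50% haircut on underperformers – both are our illustrative assumptions, not figures from the text:

```python
def rebalance_budgets(budgets: dict, roas: dict, target_roas: float) -> dict:
    """Cut spend on segments (geos, optimization goals) below the ROAS
    target and reallocate it pro-rata to segments at or above the target.
    Total spend is preserved as long as at least one segment is performing."""
    winners = [seg for seg in budgets if roas[seg] >= target_roas]
    new_budgets = {}
    freed = 0.0
    for seg, budget in budgets.items():
        if roas[seg] < target_roas:
            cut = budget * 0.5          # assumed 50% haircut on underperformers
            new_budgets[seg] = budget - cut
            freed += cut
        else:
            new_budgets[seg] = budget
    winner_total = sum(budgets[seg] for seg in winners)
    for seg in winners:                 # pro-rata reallocation to top performers
        new_budgets[seg] += freed * budgets[seg] / winner_total
    return new_budgets
```

In practice we do this gradually over one to three days, respecting the “significant edit” thresholds so rebalancing doesn’t throw campaigns back into the Learning Phase.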
We’ll have achieved a successful worldwide launch – the exposure will have gone global. The campaigns will be profitable and operating with the best efficiency we can deliver for now. The next step is to fine-tune that efficiency and try to scale up further with audience expansion.
In the first and second segments of this series, we did our pre-launch work and successfully launched a global campaign with positive ROAS. Now it’s time to optimize what we’ve got and make it even better.
The launch plan shown below is a snapshot of everything we’ve done so far. It summarizes bid strategies and optimization goals, geographic roll-out, budgets, placements, and which audiences we’re targeting. Everything, basically. It’s not the sort of thing you’d want one of your competitors to get hold of.
These are the three fronts we’ll tackle to optimize this campaign.
Once again, we’ll fire up our Audience Builder Express tool and start isolating specific audiences. In addition to the standard in-app event audiences (registrations, payers, etc.), we’ll build and test audiences based on these KPIs:
This is what it looks like as you build up your payer base. As the data accrue, you’ll be able to build segmented audiences like this:
A little bit later on, as your title continues to grow, you can build even more audiences focused on users who pay early and pay a lot:
Great creative takes a lot of work to develop, so we want it to last as long as possible. We also want to find every single person on the planet who could be a high-value customer. So we very carefully expand audiences to avoid audience fatigue as much as we do it to avoid creative fatigue.
But being able to tier these audiences and rotate through them means our creative lasts significantly longer. It allows us to find a huge potential customer base, and we get thousands of conversions we might never have found or would have spent way too much to get.
Being able to control and expand audiences like this will also be valuable later on: as we scale up and spending rises, audiences burn out even faster. Exploiting every possible audience-expansion trick, at every step of the campaign, is critical. Being able to do it efficiently and effectively is a massive competitive advantage.
Just as audiences burn out faster with high-velocity campaigns, creative burns out faster, too, of course. So we have to be aggressively testing all the time. And we do: We are constantly rotating through new creative.
To find creative that performs at the level we need, we usually have to test twenty ads to find one piece of creative good enough to replace control. That means we need a constant stream of new creative – both new “concepts” (completely new, out-of-the-box creative approaches) and new ad variations. Our creative development work is about 20% concepts and 80% variations.
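The volume implied by those numbers – a roughly 1-in-20 hit rate and a 20/80 concept-to-variation split – works out like this (illustrative arithmetic only):

```python
def weekly_creative_pipeline(winners_needed: int, hit_rate: float = 1 / 20) -> dict:
    """How many ads to produce per week, given the ~1-in-20 hit rate
    and the 20% concepts / 80% variations split described above."""
    total_ads = round(winners_needed / hit_rate)
    return {
        "total_ads": total_ads,
        "new_concepts": round(total_ads * 0.20),  # out-of-the-box approaches
        "variations": round(total_ads * 0.80),    # iterations on existing ads
    }
```

One replacement-grade winner per week means a steady pipeline of about 20 new ads per week, most of them variations on what already works.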
This creative testing machine is running all the time, fueled by creative from our Creative Studio. It has enough capacity to easily deliver the 20 ads we need per week and can handle delivering up to hundreds of ads every week.
Because the game is global by this point, we’ll also need localized creative assets. Creative has to be in the right language and may even be optimized for localized placements or cultures.
So we don’t need just one winning ad every week: We need that winning ad cloned into every language and optimized for every region. Of course, all those ads also have to be at the right aspect ratios and optimized for Facebook’s 14 different ad placements. That’s when creative development gets really work-intensive. But the Creative Studio can handle that. They’re adept at creating all those variations efficiently.
That’s the creative development side. There’s also a huge amount of testing strategy required to grow campaigns like this. First, we have to decide when and how we’re going to use Dynamic Language Optimization, and when and how we’ll use Direct Language Targeting. These two levers can make a nice difference in campaign performance, but they don’t always work. Or sometimes they need tweaks to work well.
We’ll also test worldwide versus country clusters, optimizing for large populations based on the dominant language of those populations. With dozens of countries and at least a dozen languages in play, this gets complicated.
Fortunately, we have tools that make sorting all these inputs easy. And it is worth the work. Matching the right ad, language, and country cluster can improve performance by 20% or more.
Even with all the optimization like creative development and audience expansion tricks, creative is still going to get stale.
So we have to be slowly rotating new, high-performance creative into ad sets all the time. We don’t just stop showing one piece of creative and jump over to the new ad.

So that’s how we’re launching new gaming apps on Facebook right now. This process is working well, but we’re constantly testing, tweaking, and enhancing it.
That’s the fun thing about user acquisition on Facebook: It’s constantly evolving. The article we’ll write for how to launch a gaming app in Q4 2020 will be different from what you’ve just read.
The algorithms at Google and Facebook may be able to handle most of the quantitative side of UA campaign management now, but they still can’t develop effective creative. They can’t do competitive analysis, either. And they can’t plan out a coherent creative strategy, or intelligently apply player profile data to that strategy.
Creative development, testing, and strategy are still best done by human beings.
If you’re an acquisition manager, we recommend you focus on expanding your creative testing and competitive analysis skills in 2020. And no matter who you are, or how much creative testing you’re doing, do more of it. It’s the single best way to improve the ROAS for your accounts.
You don’t necessarily have to become a creative yourself, but you do need to show creatives how to become data-driven. You need to be able to distill and interpret data for them so they can deliver better results. If you want to thrive in this environment, you’ll need to synthesize data in both left-brained and right-brained ways.
But also keep your eye out for new opportunities. Machine learning is a powerful tool, but if you ask it to solve problems it hasn’t encountered before, it flunks. For example, if a machine learning algorithm was asked to optimize an ad for augmented reality, it wouldn’t perform well. A human, however – a smart user acquisition manager – might be able to take what they’ve learned in other contexts and apply that knowledge successfully to the new situation.
Humans have always been good at this. We’re adaptable. And in this environment, adaptability may be the best skill anyone can have.