Facebook, TikTok & Google Creative Testing Best Practices: Post-IDFA Loss is our Q4 2021 update to our wildly successful whitepaper. IDFA loss, SKAN requirements, and media buying automation through Facebook AAA or Google UAC are tough challenges facing user acquisition and creative teams right now. We have new best practices with updated recommendations, including the following:

  • Facebook, TikTok & Google creative testing best practices
  • iOS A/B creative testing without IDFA
  • Updated Android A/B creative testing best practices
  • Analysis of the evolving mobile app ad ecosystem
  • Creative testing recommendations for IAP and IAA apps
  • Automated ad buying benefits and drawbacks


Facebook, TikTok & Google Creative Testing Best Practices: Updated For IDFA Loss

Section 1: Creative Testing Today


With the loss of IDFA and the increase in automated ad buying, user acquisition and creative teams must adapt to survive in the volatile mobile app advertising ecosystem. The devastating impact of IDFA loss and the erosion of lookalike audience targeting is affecting revenue across the mobile app industry. However, persona-led creative is what Facebook calls a “future-proof solution” for the evolving ad environment.

Creative diversification based on motivation is critical, according to The Big Catch from Facebook: “it’s time to make different creatives inspired by these motivators. The more unique these are, the easier it will be to ultimately determine what has attracted its audience.” It seems obvious that ads should be tailored to appeal to different audiences’ motivations and interests. Yet too many UA managers rely on the crutch of behavioral targeting to prop up one-size-fits-all, middle-of-the-pack creative.

Lookalike Targeting: Before and After

Prior to the loss of IDFA, lookalike audience targeting was reliable, effective, and efficient; it offered limitless opportunities to slice and dice revenue events to uncover new high-value users. A strong lookalike audience could scale and run for a month or more. Now, ROAS may be 0.5% where it was previously 15%. As a result, advertisers have been decreasing social ad spend for iOS by 40-50% each month since ATT enforcement and consistently shifting more ad spend to Android.

As reliance on upper-funnel campaigns increases, ad creative optimized to appeal to discrete personas is the most efficient lever for sustained, profitable user acquisition. Without deterministic tracking, understanding user motivation is critical to attracting high-quality top-of-funnel installs. Beyond the install, the first 48 hours of app usage should identify users who show a propensity to monetize. We recommend streamlining and instrumenting your onboarding flows and events to capture these early monetization signals. With persona-led creative paired with onboarding events, the algorithms will learn what changes to make to deliver better audiences.

UA teams that understand their target personas can scale efficiently, even without IDFA. While much of the industry has been focused on user behavior, the deterioration of lookalike audiences and the black box of deeper funnel events means a user’s declared interests are critically important for insights into motivation and intent.

Sample User Motivations



At scale, agile persona-led creative is the most efficient way to support the ongoing experimentation now required for profitable user acquisition. 85-95% of new creative concepts fail to outperform the best ad, so 20-50 new original concepts are necessary to find a new winner. Winning ads last only 10 weeks, then fatigue and die, requiring fresh creative concepts.
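Given those failure rates, the math behind the 20-50 concept pipeline is straightforward; a quick sketch using the failure rates cited above:

```python
# Sketch: why 20-50 new concepts are needed when 85-95% of concepts
# fail to beat the current best ad. The chance that at least one of
# n independent concepts wins is 1 - fail_rate**n.

def p_at_least_one_winner(fail_rate: float, n_concepts: int) -> float:
    return 1 - fail_rate ** n_concepts

# At a 95% failure rate, 50 concepts give a ~92% chance of a winner;
# at an 85% failure rate, 20 concepts give a ~96% chance.
print(round(p_at_least_one_winner(0.95, 50), 3))
print(round(p_at_least_one_winner(0.85, 20), 3))
```

In other words, the 20-50 concept range is roughly what it takes to push the odds of finding a fresh winner above 90%.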

To maintain advertising efficiency, you need a steady pipeline of new creative ideas and content to test. When successful, a new creative concept can lift performance by 200% or more. This performance increase is worth the cost, time, and risk that’s inherent with testing new concepts.

creative testing best practices



We test a lot of ad creative. We produce and test more than 100,000 videos and images yearly for our clients, and we have performed over 25,000 A/B and multivariate tests on Facebook, Google, TikTok, and Snap. Our industry expertise comes from managing over $3 billion in creative and paid social spend for the world’s largest mobile apps and performance advertisers. We focus on gaming, e-commerce, entertainment, automotive, D2C, eSports, digital subscriptions, financial services, and lead generation.

Consumer Acquisition runs our tests using our software, AdRules, via the Facebook, Google, and TikTok APIs. Our process is designed to save time and money by killing losing creatives quickly and significantly reducing non-converting spend. It does not necessarily follow the published Facebook, Google, or TikTok guidance of running a split test to reach statistical significance before moving into the optimized phase. Our insights are specific to the scenarios above, not a representation of how all testing on all platforms operates.



In classic testing, a 95% confidence rate is ideal to declare a winner, exit the learning phase, and reach statistical significance (StatSig). However, that ideal 95% confidence rate for in-app purchases may end up costing an advertiser $20,000 per creative variation.

Here is an example scenario: To reach a 95% confidence level, you’ll need about 100 purchases. With a 1% purchase rate (typical for gaming apps), and a $200 cost per purchase, you will end up spending $20,000 for each variation to accrue enough data for that 95% confidence rate. There are not a lot of advertisers who can afford to spend $20,000 per variation, especially if 95% of new creative fails to beat the control.

With a cost of $20,000 per variation and 20 variations to find a winner with a 95% failure rate, it would cost $400,000 just to find a new control.
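The arithmetic above, as a quick sketch:

```python
# Sketch of the testing-cost arithmetic: ~100 purchases per variation
# for a 95% confidence read, at a 1% purchase rate and $200 per purchase.

purchases_needed = 100       # roughly what a 95% confidence read requires
purchase_rate = 0.01         # typical for gaming apps
cost_per_purchase = 200      # dollars

installs_needed = purchases_needed / purchase_rate          # 10,000 installs
cost_per_variation = purchases_needed * cost_per_purchase   # $20,000

# With ~20 variations needed to find a winner at a 95% failure rate:
cost_to_find_control = 20 * cost_per_variation              # $400,000
print(installs_needed, cost_per_variation, cost_to_find_control)
```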



To avoid such high testing costs, we move the conversion event we are targeting up, toward the beginning of the sales funnel. For mobile apps, instead of optimizing for purchases, we optimize for installs per thousand impressions (IPM). For websites, we optimize for an impression-to-top-funnel conversion rate. Again, this is not a Facebook-recommended best practice. This is our own methodology, designed to allow advertisers to find new, top-performing creative in the most cost-efficient and reliable way.

This process does pose a risk: high CTRs and high conversion rates on top-funnel events may not translate into down-funnel conversions and ROI/ROAS. But optimizing for IPMs is far more efficient than optimizing for purchases, so we accept that risk to save the time and expense of reaching StatSig on bottom-funnel metrics. Perhaps most importantly, this method means you can run more tests for less money per variation. For many advertisers, that alone can make testing financially viable: $200 per variation versus $20,000 is the difference between a couple of tests and an ongoing, robust testing program.

Because the outcomes of our tests have consequences, we also test our creative testing methodology. That might sound a little “meta,” but it is essential for us to validate and challenge our methodologies, assumptions, and results. The outcome of every test shapes our evolving creative strategy, so making the wrong call means incremental changes that could have significant consequences. When we choose a winning ad out of a pack of competing ads, we ensure it’s the right decision.



Section 2: Our Foundational Testing Approach


When testing creative on Facebook, we typically test three to six videos along with a control video using Facebook’s split test feature. We show these ads to broad or 5-10% lookalike audiences. We restrict distribution to the Facebook newsfeed only, Android only, and use mobile app install bidding (MAI) to get about 100-250 installs. If one of those new “challenger” ads beats the control video’s IPM or comes within 10%-15% of its performance, we launch those potential new winning videos into the ad sets with the control video and let them fight it out to generate ROAS.
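As a rough sketch (illustrative only, not the AdRules implementation), the promotion rule above looks like this:

```python
# Illustrative sketch of the promotion rule: a challenger moves on if
# its IPM beats the control or comes within a tolerance (10-15%) of it.

def promote_challenger(challenger_ipm: float, control_ipm: float,
                       tolerance: float = 0.15) -> bool:
    return challenger_ipm >= control_ipm * (1 - tolerance)

# A challenger at IPM 7.0 vs a control at 8.0 is within 15%, so it
# would be launched against the control to fight it out on ROAS.
print(promote_challenger(7.0, 8.0))   # True
print(promote_challenger(6.0, 8.0))   # False
```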

In November and December 2019, we produced 60 new video concepts for a client. All of these creatives failed to beat the control video’s IPM, which was statistically improbable: at our expected 5% win rate, 60 concepts should have produced roughly three fresh winners. Confident in our creative expertise and execution, we looked deeper into the testing methods.
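A quick check of that expectation (a 5% win rate over 60 concepts):

```python
# Sketch of why zero winners out of 60 was so surprising: at a 5% win
# rate the expected number of winners is 3, and the chance of seeing
# none at all is under 5%.

expected_winners = 60 * 0.05        # 3.0
p_zero_winners = 0.95 ** 60         # ~0.046

print(expected_winners, round(p_zero_winners, 3))
```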

Based on numerous ad accounts and confirmations from 7-figure spending advertisers, the following ad performance scenario has been common. We’ve anonymized the scenario to share critical performance insights.



To ensure a methodology is sound, you must test your testing system through an A/A test. Instead of testing multiple creatives as you would with an A/B test, A/A tests run the same creative in each “slot” of the test.

If your testing system is working as expected and you are close to statistical significance, all “variations” in an A/A test should produce similar results. If your testing system concludes that one creative variation significantly outperforms or underperforms compared to the others, there could be an issue with the testing method or data.
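One standard way to quantify "similar results" is a two-proportion z-test on install rates. This is a generic statistical sketch with hypothetical install and impression counts, not our production tooling:

```python
# Minimal two-proportion z-test comparing the install rates of two
# "identical" A/A variations. The install/impression counts below are
# hypothetical, purely for illustration.
from math import sqrt

def two_proportion_z(installs_a, impressions_a, installs_b, impressions_b):
    p_a = installs_a / impressions_a
    p_b = installs_b / impressions_b
    pooled = (installs_a + installs_b) / (impressions_a + impressions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    return (p_a - p_b) / se

# |z| > 1.96 would be significant at 95%; in an A/A test that signals
# a problem with the testing method or the data, not the creative.
z = two_proportion_z(90, 10_000, 50, 10_000)
print(round(z, 2))   # ~3.39, far beyond chance for "identical" ads
```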

We set up an A/A test to validate our non-standard approach to Facebook testing and to understand if Facebook maintains a creative history for the control. If so, the control video would get a performance boost making it very difficult to beat.

A/A Test 1

We copied the control video four times and added one black pixel in different locations in each of the new “variations.” This allowed us to run what would look like the same video to humans but would be different videos in the eyes of the testing platform. The goal was to get Facebook to assign new hash IDs for each cloned video and then test them all together and observe their IPMs.
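The trick relies on content hashing: changing a single byte (one pixel) of a file yields an entirely different hash, so a platform that identifies creatives by content hash treats each clone as a brand-new video. Facebook’s internal ID scheme is not public, so SHA-256 below is purely illustrative:

```python
# Sketch: a one-byte difference (like one black pixel) produces a
# completely different content hash, so hash-based deduplication sees
# the clone as a new creative. SHA-256 is illustrative only; Facebook's
# actual hashing scheme is internal.
import hashlib

original = b"...video bytes..."
clone = b"...video bytes..!"   # one byte changed, like one black pixel

print(hashlib.sha256(original).hexdigest()[:12])
print(hashlib.sha256(clone).hexdigest()[:12])
print(hashlib.sha256(original).digest() != hashlib.sha256(clone).digest())
```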

For anonymity purposes, we’ve replaced the actual ad creative with this dapper dachshund. The far-right ad in the blue square is the control creative and all others are clones of the control with one black pixel added. The IPMs for each ad are on the far right of the image.

First A/A test of video creative

The far-left ad/clone outperformed the control by 149%. As described earlier, a difference like that should not happen if the platform were truly variation agnostic.

We ran this test for 100 installs, our standard operating procedure for creative testing, without following Facebook’s recommendation to let the ad set(s) exit the learning phase. We analyzed the results, then scaled up to 500 installs to get closer to statistical significance. Our remaining question was whether more data would normalize the IPMs (in other words, whether the results would settle back down to more even performance across the variations). They did not: the second run matched the first. Note: the ad set(s) never exited the learning phase, so this still did not follow Facebook’s best practice.

The results of this first test, while not statistically significant, were surprising enough to merit additional tests. So, we tested further!

A/A Test 2

Test 2 included six videos. Four videos were controls with different headers; two videos were new concepts that were very similar to the control. Again, the actual ad creative has been replaced with cute animals.

The IPMs for all ads ranged between 7-11, even for the new ads that did not share a thumbnail with the control. The IPMs for each ad are at the far right of the image.

Second A/A test of video creative

A/A Test 3

Test 3 included six videos: one control, four variations visually similar to the control, and one that looked very different to a human viewer. IPMs ranged between 5-10 and are shown at the far right of the image.

Third A/A test of video creative

A/A Test 4

This was when we had our “ah-ha!” moment. We tested six very different video concepts: the one control video and five brand new ideas, all of which were visually very different from the control video and did not share the same thumbnail.

The control’s IPM was consistent in the 8-9 range, but the IPMs for the new visual concepts ranged between 0-2. The IPMs for each ad are at the far right of the image.

Fourth A/A test of video creative


  • Facebook’s split tests maintain creative history for the control video. This gives the control an advantage with our IPM testing methodology.
  • We are unclear if Facebook can group variations with a similar look and feel to the control. If it can, similar-looking ads could also start with a higher IPM based on influence from the control — or perhaps similar thumbnails influence non-statistically relevant IPM.
  • Creative concepts that are visually very different from the control appear to not share a creative history. IPMs for these variations are independent of the control.
  • New, “out of the box” visual concepts appear to require more impressions than the control to quantify their performance.
  • Our IPM testing methodology appears to be valid if we do NOT use a control video as the benchmark for winning.
  • The line graphs below show the IPMs from the second, third, and fourth tests.


creative testing best practices IPMs


We do all our testing on Facebook because of the granular controls we have for bidding, creative reporting, and targeting. While Facebook’s Automated App Ads (AAA) reduce visibility into ad campaigns, there are some clear advantages to using the system.

  • You get creative performance reporting on images, videos, and text, and you can get that tied to standard events. This reporting persists whereas Google’s doesn’t.
  • You have a choice between an auto bid and a bid cap.
  • There are fewer restrictions on the text. You can use CAPS, exclamation points, emojis, etc.
  • Flexibility with creative assets. You can basically do whatever you want with 50 assets, and the reporting will persist.
  • You can assume there are CPM and CPI discounts. Whenever you use solutions that Facebook is encouraging, the platform will give you a discount.
  • Android and iOS performance can be comparable to standard campaigns.
  • 4:5 and square aspect ratios tend to do best on Facebook, though in some placements a full portrait (9:16) is better. Be careful: Facebook will let you run creative assets that don’t entirely fit a given placement and will simply shave off the top and bottom of your ads in some cases, which could chop off the ad’s CTA or other critical information.
  • Note: We also suspect that Facebook will soon give advertisers even more flexibility with how many creative assets they can use. They know that creative testing is the best competitive advantage left for advertisers, and they want to give us the tools to aggressively optimize our ads.




Section 3: Post-IDFA Creative Testing Recommendations



Our creative testing process was built on deterministic tracking and 1:1 asset-level reporting across a creative’s multi-stage testing lifecycle, from IPM during the learning phase to ROAS in the eventual optimized phase. A/B creative test reporting is coupled with client-provided revenue targets, frequently sourced from an MMP (AppsFlyer, Adjust, Singular, Kochava).

Even prior to IDFA loss, we tested on Android because it was less expensive and the results translated easily to iOS. We saved iOS testing for the rare clients without an Android app or those solely targeting iOS users. Testing creative for both IAP and IAA titles (outlined below) continues to work effectively on Android. In that sense, IDFA loss has validated our cost-effective strategies.

Fortunately, advertisers with Android apps who are NOT using Facebook’s AAA algorithm can follow our best practices and maintain their deterministic efficiency. You’ll maintain your ability to A/B test and see results at the individual asset level. Once a winning creative is identified, move it to iOS or other paid social platforms.



Creative Testing Best Practices IAP

Phase 1: IPM Test

  • No control videos
  • Create a new split test campaign using 3-6 new creatives (no control)
    • Set up the campaign structure for a basic app install (no event optimization or value optimization)
  • Spend an equal amount on each creative. Ex: one ad per ad set.
  • Budget for at least 100 installs per creative
    • $200-$400 spend per ad is recommended (based on a CPI of $2-$4) in a T1 English-speaking country
    • $20-$40 spend per ad/ad set when testing in India (based on a $0.20-$0.40 CPI)
  • US Phase 1 testing
    • 10-15% LAL with a seed audience such as past 90-day installers or past 90-day payers
  • Non-US Phase 1 testing
    • Use broad targeting & English speakers only
    • If not available in India, try other English-speaking countries with lower CPMs than the U.S. and similar results. Ex: ZA, CA, IE, AU, PH, etc.
  • Use the OS (iOS or Android) you intend to scale in production
  • Use one body text
  • Headline is optional
  • FB Newsfeed or Facebook Audience Network placement only (not both, and not auto placements)
  • Be sure the winner has 100+ installs (50 installs acceptable in high-CPI scenarios)
    • 100 installs: 70% confidence with 5% margin of error
    • 160 installs: 80% confidence with 5% margin of error
    • 270 installs: 90% confidence with 5% margin of error
  • IAP titles: kill losers; the top 1-3 winners go to Phase 2
  • IAA titles: kill losers; allow the top 1-3 “possible winners” to exit the learning phase, then put them into the control’s campaign
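The install counts and confidence levels above follow from the standard normal-approximation sample size for a proportion at the worst case p = 0.5; a quick sketch:

```python
# Sketch of where the install counts come from: normal-approximation
# sample size for a proportion, n = (z / margin)^2 * p * (1 - p),
# evaluated at the worst case p = 0.5.
from math import ceil
from statistics import NormalDist

def installs_needed(confidence: float, margin: float = 0.05) -> int:
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided z
    return ceil((z / margin) ** 2 * 0.25)

for conf in (0.70, 0.80, 0.90):
    # ~108, ~165, ~271 installs, matching the rounded 100/160/270 above
    print(conf, installs_needed(conf))
```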

Which Creatives Move from Phase 1 > Phase 2?


How To Pick a Phase 1 IPM Winner?

  • IPMs may range broadly or be clumped together
  • Goal: kill obvious losers and test the remaining ads in Phase 2
  • Example: the two ads (blue) with IPMs of 6.77 & 6.34 move to Phase 2
  • If all ads are very close (e.g., within 5%), increase the budget
  • For IAA (in-app ad) titles, you may need more LTV data before scaling
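A hypothetical helper (not the AdRules implementation) expressing the Phase 1 decision rule above; the 15% advance threshold is our assumption, echoing the 10-15% rule used elsewhere in this paper:

```python
# Illustrative Phase 1 selection: kill obvious losers, advance the top
# 1-3 IPMs, and flag the test for more budget when everything is within
# ~5% of the best. The 15% advance threshold is an assumption.

def pick_phase1_winners(ipms, top_n=3, tolerance=0.15, close_band=0.05):
    best = max(ipms.values())
    # Too close to call: every ad is within the close band of the best.
    if all(v >= best * (1 - close_band) for v in ipms.values()):
        return "increase budget"
    ranked = sorted(ipms, key=ipms.get, reverse=True)
    # Keep only ads within tolerance of the best, capped at top_n.
    return [ad for ad in ranked if ipms[ad] >= best * (1 - tolerance)][:top_n]

print(pick_phase1_winners({"ad_a": 6.77, "ad_b": 6.34, "ad_c": 3.10}))
```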

Phase 2: Initial ROAS

  • No control videos
  • Create a new campaign with AEO or VO optimization
  • Place all creatives into a single ad set (Multi Ads Per Ad Set)
  • Use IPM winner(s) from Phase 1 (you can combine winners from multiple Phase 1 tests into a single Phase 2 test)
  • OS – Android or iOS. 5-10% LALs from top seeds (purchases, frequent users + purchase) + Auto Placements
  • Testing can be done at a lower cost by running this campaign in countries where ROAS is similar or higher but CPMs are much lower than in the US (e.g., South Africa, Ireland, Canada)
  • Lifetime budget of $3,500-$4,900, or daily budgets of $500-$750 over 4-6 days (depending on your $/purchase)
  • WARNING! Skipping this step is highly likely to result in one of the following scenarios:
    • The challenger immediately kills the champion/control but hasn’t achieved enough statistical relevance or exited the learning phase, so its ROAS/KPI may not be sustained.
    • The champion/control video has far more statistical history and relevance, has most likely exited the learning phase, and may immediately kill the challenger before it gets enough data to properly fight for ROAS.
  • The latest change to our Phase 2 strategy: we have become more focused on retention than we were a few months ago. Before, we looked only at pure ROAS. Now we pay attention to retention data as it comes in on Day 1 through Day 3 or so. This is usually possible because Phase 2 ads run for about 5 days, and retention is worth weighing when choosing which creative moves to Phase 3.

Phase 3: ROAS Scale

  • No control videos
  • Use a strong CBO campaign
  • Choose winner(s) from Phase 2 with good/decent ROAS
    • You have proven the ad has great IPM and “can monetize”
    • To win this phase, it must hit KPIs (D7 ROAS, etc.)
  • Create a copy of an existing ad set
    • Delete old ads and replace them with your Phase 2 winner(s)
    • Allows new ads to spend in a competitive environment
  • Then, create a new ad set and roll it out to target audiences with solid ROAS/KPIs
  • CBO controls budgets between ad sets with control creatives and ad sets with new creative winners.
    • Intervene with ad set min/max spend control only if new creatives do not receive spend from CBO.
  • Require challenger to exit the learning phase before moving to challenge the control “Gladiator” video.
  • Once the challenger has exited the learning phase, allow CBO to change budget distribution between challenger and champion.


Our 3-Step Creative Testing Process for IAA (In-App-Ads)

Creative Testing Best Practices IAA


Phase 1: IPM Test

  • We use the same approach for IAA ads as we do for IAP ads in Phase 1.

Phase 2: Initial RPM

  • Look at your MMP. Find whichever countries/regions/geos are paying out a good RPM, and just go with those.
  • No control videos. Multiple ads per ad set. We call this “the gladiator battle.”
  • Create a new campaign with AEO (not VO), but instead of optimizing for a purchase event, optimize for an event that can only occur after someone has played the game for a long time, such as reaching a certain level.
  • For AEO, with a nonpurchase event, you can lower your budget down to even $250 per day.
  • Auto placements are okay if that’s what you typically use for the account. Generally, just use the placements you already know will work. No need to reinvent the wheel here. Just run your default best setup.
  • For audiences, just run your top two to three audiences. If you are concerned about budget, stack your audiences so you only have one ad set. This will get the ad set out of the learning phase faster and save you some money. (Keep in mind that you may not get out of the Learning Phase at all sometimes in Phase 2 testing.)

Phase 3: ROAS Scale

For IAA, there really is not a Phase 3, but our recommendations for what to do at this stage depend on your budget:

  • If your budget is small, you are not going to know performance for several weeks and so you might as well just roll your best-performing ads out into production.
  • If your budget is large, do a scaled-down Phase 3 structure as we suggest for IAP advertising. This is especially important if you have control ads that are still doing well. Roll out strong Phase 2 performers whenever you need a win.



The loss of IDFA has profoundly impacted creative testing on iOS and requires a new strategy for creative optimization. Unfortunately, the trifecta of IDFA loss, the account simplification required by SKAN, and media buying automation through Facebook’s AAA or Google UAC has had an immediate impact on creative testing and creative strategy.

If you don’t have an Android app or you have embraced Facebook’s AAA algorithm or Google’s UAC, get ready for a different way to A/B test: ASSET FEEDS!

  • Most major platforms (Facebook, Google, TikTok, Snap) will have limited account configurations due to iOS 14 SKAN tracking limitations. iOS 14 accounts will be restricted to 9-11 campaigns with 5 ad sets per campaign, meaning you’ll have roughly 45-55 permutations. It will be difficult to justify burning these ad sets on A/B testing. Therefore, new concepts will be the most important lever for your UA team.
  • Facebook (XML feed spec) and TikTok (XML feed spec) have published solutions for asset-level reporting data tied to dynamic asset-feeds for creative. The solutions attempt to:
    • Allow creative partners to tag, track and measure the performance of individual media.
    • Allow basic dynamic reporting (e.g., CTR, spend/asset, clicks, impressions) for an individual asset in the ad set. However, they do not appear to allow multivariate-level reporting on combinations of ad copy, headline, and creative.
    • Note that the current platform specs do not allow MMP data to be joined to asset feeds. This may concern companies that rely on that reporting to make creative or financial decisions from A/B testing.
    • Help with fatigue identification through Google’s introduction of asset performance labels (Best/Good/Poor…). This will aid in asset feed performance diagnosis and is a starting point to provide simple suggestions for what asset to optimize or replace.

From small changes to wholly new concepts

Due to limited testing slots and opaque creative-level reporting, creative strategy is shifting from optimizing with small changes to testing entirely new concepts. When creative optimization moves into asset feeds, the performance results of each creative are blended together into a kind of creative blob. Unfortunately, it becomes difficult to know the contribution of each optimized element. What we’re seeing:

  • Creative iterations and variations based on the most effective asset in a portfolio provide a higher likelihood for success but a much lower lift in performance (think < 5%).
  • Creative optimization is very likely to shift toward new concepts that take 5-20x as long to conceive and execute, but they offer a much higher potential for success (think 20% to 500%) and a correspondingly large risk of failure.
  • On average, we see a 5-15% success rate for new creative concepts, but when they succeed, the results can be a massive increase in KPI performance.
  • As creative becomes a targeting mechanism and user-intent filter, the new demand for fresh creative concepts vs iterations is certain to put a large strain on internal creative teams.


Google UAC Creative Testing

The loss of Apple’s IDFA has a cascade of consequences including how ad creative is used on Google. If you are running an asset feed ad like Google UAC, you need plenty of creative combinations within the feed to be truly effective.

With Google App Campaigns’ split-testing feature not yet widely available, we recommend a phased Evergreen > Motivation ad group test plan.

Account Creative Cycling Structure

Week 1

  • Launch with one ad group (evergreen)
    • Fully loaded: 20 video assets, a landscape image, and 10 text assets

Week 2

  • Open 3-4 ad groups with distinct text assets only, centered on product motivations
    • Social, Exploration, Achievement, etc.

Week 3

  • Match the top motivation text group with corresponding video assets and iterate
    • Kill underperforming motivation groups
    • Repeat the process and iterate on the winning motivation concept


  • Avoid relying solely on Google’s Best/Good/Low creative score for optimization; these scores are relative to other comparably sized assets in the ad group and do not always align with KPIs
  • Instead, identify creatives with scale potential at KPI and consider creative metrics (IPM/CTR) along with CPM


Google UAC Advantages

  • Flexibility in campaign bidding structure.
  • You can have multiple campaigns running in any geo. You can have different CPAs, campaigns just running for installs, etc.
  • Better transparency for performance with 1:1 event reconciling via MMP data.
  • You can see the performance by traffic source.
  • Using multiple Ad Groups you can put different creative approaches into separate “buckets” and see how their performance compares. This also means you can run seasonal or ad hoc events without disturbing evergreen campaigns.
  • Being able to create multiple Ad Groups also means you can test a lot more creative than you can on Facebook. Google does cap your assets at 20 videos and 10 images, but if you need more you can just create more Ad Groups.
  • Portrait (9:16) and landscape (16:9) aspect ratios are the best bet right now for videos on UAC.



  • In Google, you can have up to 20 videos and 10 images. Those are hard caps; you cannot trade one for the other and run, say, 21 videos and 9 images. If you swap assets out to add new ones, you will not be able to see the stats for the removed creative assets. You can get the stats back by re-adding the videos, so the data isn’t lost, just not shown.
  • Reporting on creative performance is harder to get.
  • You cannot control bid settings (like an auto bid and bid cap).
  • You cannot control where your traffic is coming from, or how it is allocated throughout your campaigns, Ad Groups, and ads. This means you cannot really do a proper split-test. You can have one creative in one Ad Group, and then create as many Ad Groups as creative variations, but that is not a proper split-test.
  • Google is more restrictive about the text. It will not allow CAPS, exclamation points, emojis, etc.
  • There are no standard campaign options on Google. There’s only UAC.




TikTok’s A/B testing capabilities are still rapidly evolving so we recommend A/B testing on Facebook Android. If you can only test on TikTok, here are our suggested best practices:

  • Creative testing on TikTok is limited to running two creatives at a time, due to the limitation of two split audiences
  • Broad targeting is recommended for testing to keep CPMs low
  • Each ad set within a split test must spend at least $20/day in order to run
  • TikTok automates split tests to run for 7 days, but this can be shortened
  • Tests can be run with an optimization goal of Clicks, Installs, or In-App Events
  • Available bidding: Cost Cap Bidding or Lowest Cost Bidding. The lowest cost is recommended to maximize your number of results.
  • Both Standard Ads and Spark Ads (Organic Posts) can be split tested.
  • Musical iterations of ads can be easily created and tested using the TikTok Video Editor
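Those constraints imply a small minimum budget for a full-length TikTok split test:

```python
# Quick arithmetic for the constraints above: a two-creative TikTok
# split test at the $20/day-per-ad-set floor over the default 7 days.

ad_sets = 2          # split tests are limited to two creatives
min_daily = 20       # dollars per ad set per day
default_days = 7

minimum_budget = ad_sets * min_daily * default_days
print(minimum_budget)   # 280 dollars for a full-length test
```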



To maximize creative testing, run the optimal aspect ratios and media types for each platform across Facebook, Google App Campaigns, TikTok, and Snap. However, because the failure rate of new creative is 95%, you will also want to minimize the number of sizes you produce until you have uncovered a fresh winning creative. Below is our recommendation to maximize distribution while minimizing creative production.

creative testing best practices

Section 4: A Final Word On Creative Testing

IDFA loss, SKAN requirements, and media buying automation through Facebook’s AAA or Google UAC are tough challenges facing user acquisition and creative teams.

But don’t despair. Dean Takahashi from GamesBeat says, “Advertisers aren’t helpless. In a post-IDFA world, Facebook, Branch, and Consumer Acquisition recommend focusing on ‘persona-led creative’ (or marketing to a type of person) to regain efficiency by allowing paid social algorithms to cluster users based on behaviors and creative trends.”

Tactics we recommend throughout these pages may not be strictly aligned with publicly available platform best practices. But we know they work. Our market insights and creative expertise come from managing over $3 billion in creative and social ad spend for the world’s largest mobile apps and web-based performance advertisers. Since 2013 we’ve worked with Roblox, Glu Mobile, Disney, SuperHuman, Rovio, Jam City, Wooga, NBA, MLB, Ford, Sun Basket, Lion Studios, MobilityWare, and many others. We provide end-to-end creative and user acquisition services for mobile app marketers via performance-oriented creative storytelling, integrated UA, and creative optimization.

Contact sales@consumeracquisition.com to discuss our user acquisition and creative services.

