On Facebook, Creative is King
As Facebook and Google App Campaigns have automated their bidding, creative has quickly become the primary driver of financial performance. Creative can now be optimized to match the user's experience, location, and device. Video templates assist with the creation of generic creative and dynamic pricing; however, non-templated creative cannot yet be generated with AI or machine learning!
This raises the question: how much creative do you need? Our experience spending over $3 billion indicates that 95% of direct-response creative fails to outperform a portfolio's best ad. A large volume of quality creative is therefore needed to find the 5% of winning creative that achieves and sustains ROAS. Because creative fatigues rapidly as spend and audience reach increase, constant creative testing is necessary to produce new wins that offset fatigue.
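The volume argument can be made concrete with back-of-envelope math. Taking the article's 5% hit rate at face value, and assuming (simplistically) that each new creative is an independent trial, the chance of finding at least one winner grows quickly with the number of creatives tested:

```python
# Illustrative math only: the 5% hit rate is the article's figure, and
# treating each creative test as independent is a simplifying assumption.
def p_at_least_one_winner(hit_rate: float, n_creatives: int) -> float:
    """Probability that at least one of n tests beats the portfolio's best ad."""
    return 1 - (1 - hit_rate) ** n_creatives

for n in (5, 14, 45, 90):
    print(f"{n} creatives -> {p_at_least_one_winner(0.05, n):.0%} chance of a winner")
```

Under these assumptions, testing a handful of creatives leaves the odds against you, while testing dozens makes a winner very likely, which is why sustained creative volume matters.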
Dynamic Creative Optimization and Split Testing
Facebook has released two new features to help determine which creative is best. DCO selects the best elements to assemble into an ad based on audience segments, delivering in real time the right ad, copy, audience, time, language, placement, and device. DCO offers both creative delivery at scale and testing with endless experimentation, with no human intervention required to drive continuous testing and optimization. You can also enable language testing through DCO to deliver the most efficient language based on performance.
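To see why DCO's multivariate approach scales better than building ads by hand, consider the combination space a few assets generate. The asset names below are placeholders, and the actual element selection happens inside Facebook's delivery system; this sketch only enumerates the variants DCO has to work with:

```python
# Placeholder assets: a few videos, headlines, and calls-to-action.
# DCO would test combinations of these automatically; here we just count them.
from itertools import product

videos = ["video_a", "video_b", "video_c"]
headlines = ["headline_1", "headline_2"]
ctas = ["Install Now", "Learn More"]

combos = list(product(videos, headlines, ctas))
print(len(combos))  # 3 x 2 x 2 = 12 ad variants from a handful of assets
```

Building and trafficking those 12 ads manually per ad set is tedious; with more asset slots the count multiplies further, which is the scale advantage the paragraph above describes.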
The second innovation is a simple split test that allows advertisers to efficiently run A|B comparisons between videos, images, ad copy, and more. The real innovation here is that Facebook splits the traffic on the back end to avoid audience overlap. For example, if you have an audience of 1,000,000 and want to test five videos, Facebook will show each video to a distinct audience of up to 200,000 people. This feature has radically simplified creative testing on Facebook and allows for much more efficient media spend.
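The even, non-overlapping division in that example can be sketched in a few lines. This is an illustrative helper, not the Facebook API; it simply shows the arithmetic behind the 1,000,000-into-five-cells example:

```python
# Hypothetical helper mirroring how a split test divides an audience into
# even, non-overlapping cells (names are illustrative, not Facebook's API).
def split_audience(audience_size: int, n_variants: int) -> list[int]:
    base, remainder = divmod(audience_size, n_variants)
    # Distribute any remainder so cell sizes differ by at most one person.
    return [base + (1 if i < remainder else 0) for i in range(n_variants)]

print(split_audience(1_000_000, 5))  # five non-overlapping cells of 200,000
```

The point of the backend split is that each cell sees exactly one variant, so the variants never compete for the same impressions in the auction.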
We recommend configuring split tests in a dedicated campaign to avoid triggering a significant edit on your regular campaign structure. Once a winner is identified, roll it out and scale it using the ad sets in your dedicated scale campaigns. If you are testing new audiences and an ad set performs well, keep the ad set running and add new ad creative over time to offset audience fatigue. If the audience does not perform, pause and abandon it.
All of your split tests should have an end date. Facebook now recommends using daily budgets instead of lifetime budgets. Once a winner is identified, we recommend pausing the split-test contestants and re-launching the winner in an ad set with a dedicated, unique audience.
When a split test ends and ads keep running, does the backend split still apply after the split test end date?
Broad vs. lookalike (LAL)? Does the audience choice matter for the integrity of results, or just for moving faster? Note: broad targeting carries lower CPMs and CPIs, which means you can reach 50 conversions more quickly and, in theory, learn faster and at lower financial risk than with higher-CPA audiences.
You should always be A|B testing your campaign types to reduce wasteful ad spend:
Campaign Budget Optimization
CBO vs. non-CBO: By setting a single campaign-level budget and allowing Facebook to dynamically adjust ad set budgets, you hand budget optimization over to Facebook. This also allows top ad sets to scale volume without triggering a significant edit, putting less risk on high-value ad sets.
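One intuition for what CBO does with that single budget can be shown with a toy model. Facebook's actual allocation logic is proprietary; this sketch simply assumes spend shifts toward ad sets with better returns, here proportionally to ROAS:

```python
# Toy model only: assumes budget is shared in proportion to each ad set's
# ROAS. Facebook's real CBO logic is proprietary and far more dynamic.
def reallocate(budget: float, roas_by_adset: dict[str, float]) -> dict[str, float]:
    """Split one campaign budget across ad sets, weighted by ROAS."""
    total = sum(roas_by_adset.values())
    return {name: budget * r / total for name, r in roas_by_adset.items()}

print(reallocate(1000.0, {"adset_a": 2.0, "adset_b": 1.0, "adset_c": 1.0}))
```

In this toy version the strongest ad set absorbs half the budget without anyone editing ad set budgets by hand, which is the "scale without a significant edit" benefit described above.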
How do you scale CBO campaigns? If you edit campaign budget and trigger a significant edit, is it more efficient to just launch a new CBO campaign to avoid the risk of significant edits?
Dynamic Language Optimization (DLO)
DLO vs. non-DLO: DLO is particularly useful for targeting multilingual audiences in a single location. By allowing Facebook to determine which ad and which language to serve, you hand more control to Facebook to surface the winning ads automatically.
It’s important to note that DLO performance seems to fluctuate significantly, and non-DLO ads can outperform DLO ads in any given month. In general, language targeting requires continual testing with a focus on refreshing copy, creative, and audience. Country targeting also appears to make a significant difference: for example, targeting worldwide French speakers may produce different results than targeting French speakers in France, Canada, etc.
If you run DLO, you cannot also run DCO at the same time. Does the value of DLO outweigh the value of DCO plus multiple creative variations?
Dynamic Creative Optimization (DCO)
DCO vs. non-DCO: DCO automatically optimizes Facebook ad creative based on multivariate testing. By running DCO, Facebook has control over which creative is served and can more quickly optimize a high volume of creative combinations versus just running many ads per ad set.
Is DCO a more efficient form of A|B testing new creative than split testing? Is there more value in Facebook quickly finding the top creative combination with DCO, or in getting even impression splits and backend audience-overlap prevention from the A|B testing feature?
Split Testing vs. CBO vs. Neither: Use split testing for creative, audience, and placement testing to optimize your campaigns.
When a split test “ends” and the ads keep running, does the backend split remain, or does each ad set now have access to the entire population? Is it better to test within CBO and skip split testing so that winners are easier to scale? Keep in mind that, if testing directly in CBO, launching new ads in the existing campaign will trigger a significant edit.