How to Produce Data-Driven Ads and Adhere to Brand Guidelines

  • by Brian Bowman | February 15, 2019
  • Facebook Advertising

There are two types of marketers: brand marketers and data-driven marketers. Brand marketers spend their time thinking about visibility and reputation, while data-driven marketers care about just one thing: measurable results.

Generally, both of these marketers agree that it’s good to have brand guidelines. The company should have a consistent look, and that look should be clearly defined so that everyone in the company can understand and replicate it.

You can see the importance of brand marketing exhibited across large, multinational companies. For example, Coca-Cola has a look that differs slightly between Spain and the U.S., but their products have to be instantly recognizable in both countries.

Data-driven marketers deserve a lot of respect, too. They’re focused on measurable and testable results. Thanks to the rise of digital marketing, this group has come a long way from 50 years ago, when it took six to eight weeks to get split test results.

While both types of marketers are necessary, sometimes disputes arise from seemingly small things.

Take, for example, Facebook ad testing.


Facebook Ad Testing: Brand vs. Data-Driven Marketers

A company with a strong brand identity needs to bring in new users on a regular basis. This company started Facebook advertising efforts a couple of years ago. In the beginning, there was low competition and success seemed to be easy. Fast forward to a few years later, and Facebook advertising had become progressively more competitive and costly, though it was still worth investing in. So, the company continued to run ads on the platform, with a monthly spend of $1 million or more. Facebook ads became a critical stream of new customers for them.

The company began to rely on that continuous stream of new users, and their advertising evolved. They figured out everything: all the optimal settings and technical tricks. Their ads were running more efficiently than ever.

Unfortunately, their competitors figured it out too, and the company needed to find a new edge. So, they decided to invest in ad tech.

Investing in Ad Tech

Ad tech showered them with new insights and capabilities. They saw improved advertising at scale and discovered what they could do with the result of that advertising (converting clicks into paid purchases, attracting repeat customers).

All was good for a while until eventually, the company’s competitors began to use the same ad tech products. The company again lost its competitive edge and began to notice their results plateauing.

Luckily, the majority of their user acquisition team was made up of data-driven marketers. The team went back to the drawing board, analyzed their data and Facebook advertising reports, and discovered that the big issue was the ads themselves.

Ads containing breakout creative had been responsible for the majority of the company’s success. Those fresh ads drove most of their conversions.

The user acquisition team then refocused on creative while still utilizing ad tech. They determined that their competitive edge lay in the development of breakout creative, and they had to use that to their advantage.

Staying Ahead of Creative Fatigue

To create more breakout creative and stay ahead of creative fatigue, the company had to build an ad creation and testing machine that could deliver a successful, breakout ad weekly. They ramped up their creative development capabilities by hiring more creative personnel and by partnering with a creative studio.

They established a way to produce a high volume of quality creative so that even if 95 percent of what they generated failed (and it did), they were still able to produce new breakout creative every week.
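That 95 percent failure rate implies a concrete weekly volume target. A quick sketch (the helper function is hypothetical, not part of the source):

```python
import math

# Rough sketch: weekly creative volume needed when ~95 percent of
# tested ads fail to beat the control (the failure rate cited above).
def ads_needed_per_week(breakouts_wanted=1, failure_rate=0.95):
    """Expected number of new ads needed to average the desired breakouts."""
    return math.ceil(breakouts_wanted / (1 - failure_rate))

print(ads_needed_per_week())   # 20 new ads to expect one breakout per week
print(ads_needed_per_week(2))  # 40 for two
```

In other words, a one-breakout-per-week goal means roughly twenty fresh ads entering the pipeline every week.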

Of course, they also had to test this creative. So, the team created a testing system to assess the new creative and deliver a breakout, control-beating ad every week.

This time, they had their competitive edge back.

Picture a digital assembly line: the creative machine churned out fresh ads, and the testing engine evaluated them. It was perfect. They could now consistently generate a breakout ad every week, exceeding their KPI goals.

Then a VP noticed that one of the ads they were running was not brand-compliant.

The system came to a dead halt.

We’ve seen this situation play out many times within the cycle of creative development and testing.

After long hours of our own trial and error, we’ve found that prototype ads resolve this situation best.


The Reason Prototype Ads Work

Simply put, prototype ads are concept ads. They are often broken into two categories: concepts and variations. Concepts are big-picture ideas that are fresh and different from the ads you’ve run in the past. Concepts take a lot of brainstorming and refining, but because they are so distinctive, they are often the source of breakout ads. Variations are used to test individual elements within a concept.

Prototype ads are generally successful because they allow your team to make data-driven decisions while still largely adhering to brand-driven rules.


Why Prototype Ads Should Meet 60% of Brand Guidelines

Prototype ads require some input from brand marketers. These out-of-the-box ideas can be hard to digest for people who have memorized their brand guidelines. However, we’ve found that ads that are at least 60 percent compliant with brand guidelines will not damage the brand image. With this heightened freedom, the creative team is able to develop ads more quickly, which is essential given the volume of creative required to scale success.

If ads must be even 10 percent more guideline-compliant (meaning they meet 70 percent of brand requirements), the creation of new ads becomes much more difficult. It slows down ad creation and makes it a far more expensive process.

If a prototype ad survives round one of testing, it can be reworked to better adhere to brand guidelines. We’ve found that it’s easier and more efficient to tweak a winning ad and make it more brand-compliant than it is to take a brand-compliant ad that is underperforming and incrementally test it until it performs well (if it ever does).


Prototype Ads: The “Fail Fast” Testing Method

Round one of prototype ad testing is merciless. Each ad gets only about 10,000 impressions to prove itself.

There are three key benefits to this method:

  1. You can test a large number of ads rapidly. Since 95 percent of tested ads will fail to beat the control, it is crucial to be able to test quickly and dig through all of the underperforming ads to uncover the breakout gem.
  2. If an ad happens to violate brand guidelines, this method limits how many times it will be seen. Ten thousand impressions is not enough to damage your brand, especially if the ad is at least 60 percent guideline-compliant.
  3. Wasted ad spend is minimized. Prototype ads get only 10,000 impressions to prove themselves, which translates to just $15 to $20 in ad spend. Say goodbye to the days of spending $500 on each underperforming ad.
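The spend figures in point 3 imply a CPM of roughly $1.50 to $2.00. A back-of-the-envelope budget sketch under that assumption (the helper and the CPM values are illustrative, not quoted rates):

```python
def round_one_budget(num_ads, impressions_per_ad=10_000, cpm=1.50):
    """Total spend to give each prototype ad its round-one impressions.

    cpm (cost per 1,000 impressions) is an assumption implied by the
    $15-$20 per-ad figure above, not a quoted rate.
    """
    return num_ads * (impressions_per_ad / 1_000) * cpm

print(f"${round_one_budget(20):.2f}")            # $300.00 for 20 ads at $1.50 CPM
print(f"${round_one_budget(20, cpm=2.00):.2f}")  # $400.00 at $2.00 CPM
```

Even screening twenty prototype ads a week stays in the low hundreds of dollars, a fraction of the old $500-per-ad approach.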


Understanding Statistical Significance

You may be thinking, “10,000 impressions is not enough to achieve statistical significance.” And you might be right – if we were running a typical A/B split test.

But we’re not doing an A/B test. We’re not looking for small 5 to 10 percent improvements. We’re hunting for the one breakout ad that will outperform 95 percent of other ads.

Here’s an example:

The graphic below shows a dynamic view of the Visual Website Optimizer A/B Test Statistical Significance Calculator. 

In the first view, the variation gets only 15 conversions from 1,000 visitors. Unfortunately, this isn’t enough for the test to be statistically significant. However, if the test variation does well and gets 20 conversions from those same 1,000 visitors, it achieves statistical significance.

20 conversions is an “earthquake”—the type of performance breakout ads can produce. The first test falls short with just 15 conversions and does not perform well enough to be considered a breakout ad.
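To make the calculator example concrete, here is a minimal one-tailed two-proportion z-test sketch. The 10-conversions-per-1,000-visitors control baseline is an assumption for illustration only; the source gives numbers just for the variation:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """One-tailed two-proportion z-test: does variation B beat control A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))   # one-tailed normal tail
    return z, p_value

# Hypothetical control: 10 conversions per 1,000 visitors (an assumption).
z1, p1 = two_proportion_z(10, 1000, 15, 1000)  # 15 conversions: p ≈ 0.16
z2, p2 = two_proportion_z(10, 1000, 20, 1000)  # 20 conversions: p ≈ 0.03
print(f"15 conversions: z={z1:.2f}, p={p1:.3f}")
print(f"20 conversions: z={z2:.2f}, p={p2:.3f}")
```

Under that assumed baseline, 15 conversions fails the conventional one-tailed 95 percent bar while 20 conversions clears it – the “earthquake” is big enough to register even on a small sample.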

For this reason, results from prototype ad tests can be trusted with far less data. Prototype ad tests are fundamentally different from A/B split tests. The test is not looking for incremental improvements – it’s looking for winners in the form of breakout results. And though it’s necessary to test many ads to find the breakout winners, remember that we have an ad testing machine built to manage that process.


Final Thoughts

By utilizing these tips, data-driven Facebook advertisers can fulfill their need for high-performing ads while still complying with brand-driven guidelines.

Though prototype ads may not always adhere perfectly to brand guidelines, they can be made to fit very closely. The added performance delivered by prototype ads is a good trade for bending branding rules, even if only a little.


Want to learn more? Discover our definitive guide to Facebook advertising best practices.