Your analysis provides clear data that the campaign was a (glorious) failure.
It could not be clearer.
The KPI you chose for your brand campaign was Trust, with a pre-set target of +5. The post-campaign analysis comparing performance across Test & Control cells shows that Trust did not move at all. (Suspiciously, there are indications that in a handful of Test DMAs it might have gone down!)
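If you want to see the Test vs. Control arithmetic written out, here is a minimal sketch with entirely hypothetical DMA-level survey numbers (the DMA names, the scores, and the simple mean-of-deltas approach are all illustrative assumptions, not the actual analysis):

```python
# Minimal sketch: compare a brand KPI (Trust) across Test & Control DMAs.
# All DMA names and scores below are hypothetical illustrations.
import statistics

# (pre, post) Trust scores from brand surveys, per DMA, by cell.
test_dmas = {"dma_501": (62, 61), "dma_502": (58, 58), "dma_503": (64, 63)}
control_dmas = {"dma_601": (60, 60), "dma_602": (59, 59), "dma_603": (63, 62)}

def mean_delta(cells):
    """Average post-minus-pre change in Trust across a cell's DMAs."""
    return statistics.mean(post - pre for pre, post in cells.values())

# Lift is the Test change net of the Control change.
lift = mean_delta(test_dmas) - mean_delta(control_dmas)
target = 5.0  # the pre-set Success KPI target of +5 points

print(f"Trust lift (Test net of Control): {lift:+.2f} points vs. target +{target:.0f}")
```

With these made-up numbers the lift comes out slightly negative, which is exactly the "did not move at all (or worse)" scenario above.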
Every so often, the story is just as simple as that.
You do the best you can with a marketing campaign (creative, audience, targeting, channels, media plan elements like duration, reach, and frequency, media delivery quality elements like AVOC, Viewability, etc.), and sometimes the dice do not roll your way when you measure impact.
You would be surprised to know just how frequently the causes of failure have nothing to do with the elements I mentioned above. In future Premium editions we’ll cover a bunch of these causes; today I want to cover one that is in your control and yet is often a root cause of failure:
Judging a fish by its ability to climb a tree!
AKA: You picked the wrong KPI for the campaign.
[Note 1: I’m going to use the phrase Success KPI a lot. To ensure clear focus, clear postmortems, and clear accountability, I recommend identifying one single solitary metric as the Success KPI for the initiative. You can measure seven additional metrics (say, for diagnostic purposes), but there has to be just one Success KPI. Close accountability escape hatches.]
[Note 2: Although the guidance in this article applies to companies/analytics teams of all sizes, it applies in particular to larger companies and large agencies. It is there that the
Robust Experimentation and Testing | Reasons for Failure!
Since you’re reading a blog on advanced analytics, I’m going to assume that you have been exposed to the magical and amazing awesomeness of experimentation and testing.
It truly is the bee’s knees.
You are likely aware that there are entire sub-cultures (and their attendant Substacks) dedicated to the most granular ideas around experimentation (usually of the landing page optimization variety). There are fat books to teach you how to experiment (or die!). People have become Gurus hawking it.
The magnificent cherry on this delicious cake: It is super easy to get started. There are free tools that are really easy to use, and paid tools that are extremely affordable and also good.
ALL IT TAKES IS FIVE MINUTES!!!
And yet, chances are you really don’t know anyone directly who uses experimentation as a part of their regular business practice.
Wah wah wah waaah.
How is this possible?
It turns out experimentation, even of the simple landing page variety, is insanely difficult for reasons that have nothing to do with the capacity of tools, or the brilliance of the individual or the team sitting behind the tool (you!).
It is everything else:
Company. Processes. Ideas. Creatives. Speed. Insights worth testing. Public relations. HiPPOs. Business complexity. Execution. And more.
Today, from my blood, sweat and tears shed working on the front lines, a set of two reflections:
1. What does a robust experimentation program contain?
2. Why do so many experimentation programs end in disappointing failure?
My hope is that these reflections will inspire a stronger assessment of your company, culture, and people, which will, in turn, trigger corrective steps resulting in a regular, robust, remarkable testing program.
First, a little step back to imagine the bigger picture.
The Most Important Business KPIs. (Spoiler: Not Conversion Rate!)
I was reading a paper by a respected industry body that started by flagging head fake KPIs. I love that moniker, head fake.
Likes. Sentiment/Comments. Shares. Yada, yada, yada.
This is great. We can all use the head fake moniker to call out useless activity metrics.
[I would add other head fake KPIs to the list: Impressions. Reach. CPM. Cost Per View. Others of the same ilk. None of them are KPIs; most barely qualify as metrics because of the profoundly questionable measurement behind them.]
The respected industry body quickly pivoted to lamenting its finding that eight of the top 12 KPIs used to measure media effectiveness are exposure-counting KPIs.
A very good lament.
But then they quickly pivot to making the case that the Most Important KPIs for Media are ROAS, Exposed ROAS, “Direct Online Sales Conversions from Site Visit” (what?!), Conversion Rate, IVT Rate (invalid traffic rate), etc.
Wait a minute.
ROAS?
Most important KPI?
No siree, Bob! No way.
Take IVT as an example. It is such a niche obsession.
Consider that Display advertising is a tiny part of your budget. A tiny part of that tiny part is likely invalid. It is not a leap to suggest that anointing this barely-a-metric as a KPI is a big distraction from what’s important. Oh, and if your display traffic were so stuffed with invalid traffic that it became a burning platform requiring executive attention… Any outcome KPI you are measuring (even something as basic as Conversion Rate) would have told you that already!
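To make “a tiny part of a tiny part” concrete, here is a back-of-the-envelope sketch; every number in it is made up for illustration:

```python
# Back-of-the-envelope: how much budget does IVT actually touch?
# The figures below are made-up illustrations, not benchmarks.
total_budget = 10_000_000   # annual media budget, in dollars
display_share = 0.10        # display as a fraction of total spend
ivt_rate = 0.05             # fraction of display traffic that is invalid

wasted = total_budget * display_share * ivt_rate
print(f"Budget touched by IVT: ${wasted:,.0f} "
      f"({display_share * ivt_rate:.1%} of total spend)")
# -> $50,000, i.e. 0.5% of total spend: real money, but hardly
#    the Most Important KPI for the business.
```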
Conversion Rate obviously is a fine metric. Occasionally, I might call it a KPI, but I have never anointed it as the Most Important KPI.
In my experience, Most Important
Increase Analytics Influence: Leverage Predictive Metrics!
Almost all the metrics you currently use have one common thread: They are backward-looking.
If you want to deepen the influence of data in your organization – and your personal influence – 30% of your analytics efforts should be centered around the use of forward-looking metrics.
Predictive metrics!
But first, let’s take a small step back. What is a metric?
Here’s the definition of a metric from my first book:
A metric is a number.
Simple enough.
Conversion Rate. Number of Users. Bounce Rate. All metrics.
[Note: Bounce Rate has been banished from Google Analytics 4 and replaced with a compound metric called Engaged Sessions: the number of sessions that lasted 10 seconds or longer, had 1 or more conversion events, or had 2 or more page views.]
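To see that compound rule written out, here is a minimal sketch (the session fields are hypothetical simplifications; GA4 computes this for you):

```python
from dataclasses import dataclass

@dataclass
class Session:
    # Hypothetical, simplified session fields for illustration only.
    duration_seconds: float
    conversion_events: int
    page_views: int

def is_engaged(s: Session) -> bool:
    """Engaged Session rule: >= 10 seconds, or >= 1 conversion event,
    or >= 2 page views."""
    return (s.duration_seconds >= 10
            or s.conversion_events >= 1
            or s.page_views >= 2)

sessions = [Session(4, 0, 1), Session(45, 0, 1), Session(3, 1, 1)]
engaged = sum(is_engaged(s) for s in sessions)
print(f"Engaged Sessions: {engaged} of {len(sessions)}")  # -> 2 of 3
```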
The three metrics above are backward-looking. They tell us what happened in the past. You’ll recognize that this is true for almost everything you are reporting (if not everything).
But, who does not want to see the future?
Yes. I see your hand up.
The problem is that the future is hard to predict. What’s the quote… No one went broke predicting the past. 🙂
Why use Predictive Metrics? As Analysts, we convert data into insights every day. Awesome. Only some of those insights get transformed into action – for any number of reasons (your influence, quality of insights, incomplete stories, etc. etc.). Sad face.
One of the most effective ways of ensuring your insights will be converted into high-impact business actions is to predict the future.
Consider this insight derived from data:
The Conversion Rate from our Email campaigns is 4.5%, 2x that of Google Search.
Now consider this one:
The Conversion Rate from our Email campaign is
Marketing Analytics: Attribution Is Not Incrementality
One of the business side effects of the pandemic is that it has put a very sharp light on Marketing budgets. This is a very good thing under all circumstances, but particularly beneficial in times when most companies are not doing so well financially.
There is a sharper focus on Revenue/Profit.
From there, it is a hop, skip, and a jump to, hey, am I getting all the credit I should for the Conversions being driven by my marketing tactics? AKA: Attribution!
Right then and there, your VP of Finance steps in with a, hey, how many of these conversions that you are claiming are ones that we would not have gotten anyway? AKA Incrementality!
Two of the holiest of holy grails in Marketing: Attribution, Incrementality.
Analysts have died in their quests to get to those two answers. So much sand, so little water.
Hence, you can imagine how irritated I was when someone said:
Yes, we know the incrementality of Marketing. We are doing attribution analysis.
NO!
You did not just say that.
I’m not so much upset as I’m just disappointed.
Attribution and Incrementality are not the same thing. Chalk and cheese.
Incrementality identifies the Conversions that would not have occurred without various marketing tactics.
Attribution is simply the science (sometimes, wrongly, art) of distributing credit for Conversions.
None of those Conversions might have been incremental. Correction: It is almost always true that a very, very large percentage of the Conversions driven by your Paid Media efforts are not incremental.
Attribution ≠ Incrementality.
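Here is a minimal sketch of the difference, with entirely made-up numbers: attribution hands out 100% of the credit for observed conversions no matter what, while incrementality comes from a holdout comparison:

```python
# Attribution: split 1,000 observed conversions across channels
# using some credit model (here, simple fractional weights).
# All numbers and weights are hypothetical illustrations.
observed_conversions = 1_000
credit_weights = {"email": 0.30, "search": 0.45, "display": 0.25}
attributed = {ch: observed_conversions * w for ch, w in credit_weights.items()}
# Note: the weights sum to 1.0; attribution always hands out 100%
# of the credit, whether or not any of it was incremental.

# Incrementality: a holdout experiment. The control group saw no ads.
test_group_conversions = 1_000      # exposed audience
control_group_conversions = 870     # comparable unexposed audience
incremental = test_group_conversions - control_group_conversions

print(f"Attributed credit: {attributed}")
print(f"Incremental conversions: {incremental}")  # -> 130
print(f"Share truly incremental: {incremental / test_group_conversions:.0%}")
```

With these toy numbers, attribution happily distributes credit for all 1,000 conversions even though the holdout suggests only 130 of them were truly incremental.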
In my newsletter, TMAI Premium, we’ve covered how to solve the immense challenge of identifying the true incrementality delivered by your Marketing budget. (Sign up, then email me for a link to that newsletter.)
Today, let me unpack the crucial
The Power of Exponential Growth | Data Viz to Simplify Complexity
There has been a lot of heartbreak around the world with the Covid-19 pandemic.
This chart, from NPR, illustrates some cause for optimism. It shows the 7-day average new cases per day across the world.
It is crucial to acknowledge what’s hidden in the aggregated trend above: The impact on individual countries is variable.
A large percentage of humans on the planet remain under threat. We don’t have nearly enough vaccines finding arms. We have to remain vigilant, and commit to getting the entire planet vaccinated.
Recent worries about Covid were increased by the proliferation of virus variants around the world. Variant B.1.1.7 was first identified in the UK. Variant B.1.351 was first identified in South Africa. Variant P.1 in Brazil has 17 unique mutations. The variant identified in India, B.1.617.2, had a particularly devastating impact (see the blue spike above). There are multiple “variants of interest” in the United States, Philippines, Vietnam, and other countries.
A particularly dangerous thing about variants is that they are highly transmissible (evolution, sadly, in action).
Some journalists rush to point out, hey, the death rate remains the same.
I believe this is a mistake, possibly due to a lack of mathematical savvy. It imprecisely minimizes the danger, and leaves some of our fellow humans with a false sense of hope.
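Here is a toy compounding example of why; all numbers are illustrative, not epidemiology:

```python
# Toy model of why higher transmissibility dominates an unchanged
# death rate. Every number here is illustrative, not epidemiology.
initial_cases = 1_000
death_rate = 0.01          # unchanged across both scenarios
generations = 10

for label, r in [("original strain", 1.1), ("more transmissible variant", 1.6)]:
    cases = initial_cases * r ** generations
    print(f"{label}: ~{cases:,.0f} cases, ~{cases * death_rate:,.0f} deaths")
# Same per-case death rate, vastly more deaths: exponential growth
# in cases swamps the constant per-case risk.
```

Same death rate in both rows; roughly 40x the deaths, because the case count compounds.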
As Analysts, you can appreciate that a lay individual might not quite understand the complexity behind infection rates and their impact on death rates. At the same time, all of us, journalists and Analysts alike, have to figure out how to communicate this type of insight in a way that everyone can understand.
This reality is similar to what we face in our business environment every single
The Impact Matrix | A Digital Analytics Strategic Framework
The universe of digital analytics is massive and can seem as complex as the cosmic universe.
With such big, complicated subjects, we can get lost in the vast wilderness or become trapped in a silo. We can wander aimlessly, or feel a false sense of either accomplishment or frustration. Consequently, we lose sight of where we are, how we are doing and which direction is true north.
I have experienced these challenges on numerous occasions myself. Even simple questions like “How effective is our analytics strategy?” elicit a complicated set of answers, instead of a simple picture the CxO can internalize. That’s because we have to talk about tools (so many!), work (collection, processing, reporting, analysis), processes, org structure, governance models, last-mile gaps, metrics ladders of awesomeness, and… so… much… more.
Soon, your digital analytics strategic framework that you hoped would provide a true north to the analytics strategy question looks like this…
The frameworks above cover just one dimension of the assessment (!). There is another critical framework to figure out how you can take your analytics sophistication from wherever it is at the moment to nirvanaland.
A quick search will illustrate that it looks something like this…
It is important to stress that none of these frameworks/answers exist in a vacuum.
Both pictures above are frighteningly complex because the analytics world we occupy is complex. Remember, tools, work, processes, org structure, governance models, last-mile gaps, metrics ladders of awesomeness, and… so… much… more.
The Implications of Complexity.
There are two deeply painful outcomes of the approaches you see in the pictures above (in which you’ll also see my work represented).
1. Obvious:
No CxO understands the
Closing Data's Last-Mile Gap: Visualizing For Impact!
I worry about data’s last-mile gap a lot. As a lover of data-influenced decision making, perhaps you worry as well.
A lot of hard work has gone into collecting the requirements and implementing them. An additional massive investment was made in the effort to perform ninja-like analysis. The end result was a collection of trends and insights.
The last-mile gap is the distance between your trends and getting an influential company leader to take action.
Your biggest asset in closing that last-mile gap is the way you present the data.
On a slide. On a dashboard in Google Data Studio. Or simply something you plan to sketch on a whiteboard. This presentation of the data will decide whether your trends and insights are understood and accepted, and whether the right inferences are drawn about what action should be taken.
If your data presentation is good, you reduce the last-mile gap. If your data presentation is confusing/complex/wild, all the hard work that went into collecting the data, analyzing it, and digging for context will be for naught.
With the benefits so obvious, you might imagine that the last-mile gap is not a widely prevalent issue. I’m afraid that is not true. I see reports, dashboards, and presentations with wide gaps. It breaks my heart, because I can truly appreciate all the hard work that went into analysis that ultimately resulted in no data influence.
Hence today, one more look at this pernicious problem and a collection of principles you can apply to close the last-mile gap that exists at your work.
For our lessons today, I’m using an example that comes from analysis delivered by the collective efforts of a top American university, a top 5 global consulting company, and
Five Strategies for Slaying the Data Puking Dragon.
If you bring sharp focus, you increase the chances of attention being directed to the right places. That in turn will drive smarter questions, which will elicit thoughtful answers from available data. The result will be data-influenced actions that deliver a long-term strategic advantage.
It all starts with sharp focus.
Consider these three scenarios…
Your boss is waiting for you to present results on quarterly marketing performance, and you have 75 dense slides. In your heart you know this is crazy; she won’t understand a fraction of it. What do you do?
Your recent audit of the output of your analytics organization found that 160 analytics reports are delivered every month. You know this is way too many, way too often. How do you cull?
Your digital performance dashboard has 16 metrics along 9 dimensions, and you know that the font-size 6 text and sparkline sized charts make them incomprehensible. What’s the way forward?
If you find yourself in any of these scenarios, and your inner analysis ninja feels more like a reporting squirrel, it is ok. The first step is realizing that data is being used only to resolve the fear that not enough data is available. It’s not being selected strategically for the most meaningful and actionable insights.
As you accumulate more experience in your career, you’ll discover there is a cluster of simple strategies you can follow to ruthlessly eliminate the riffraff and focus on the critical view. Here are five that I tend to use a lot. They are easy to internalize and take sustained passion to execute, but they always yield delightful results…
1. Focus only on KPIs, eliminate metrics.
Here are the definitions you’ll find in my books:
Metric: A metric
Smarter Career Choices #3: Solve for the Global Maxima!
Today, a simple lesson that so many of us miss at great peril. In fact, in your role, at this very moment, your company is making a mistake in how it values your impact on the business.
The lesson is about the limitation of optimizing for a local maximum, usually in a silo.
We are going to internalize this lesson by learning from Microsoft. It is a company I love (am typing this on my beloved ThinkPad X1 Carbon Gen 5, using Windows Live Writer blogging software!). I bumped into the lesson thanks to their NFL sponsorship.
If you were watching the Oakland Raiders beating the hapless New York Giants (so sad about Eli) this past Sunday, you surely saw a scene like this one:
Quarterback Geno Smith using his Microsoft Surface tablet to figure out how he added two more fumbles to his career total of 43. Or maybe it was him replaying the 360-degree view of the three times he was sacked during the game.
The Surface tablet is everywhere in an NFL game. Microsoft paid $400 million for four years for the rights, and just renewed the deal for another year (for an as yet undisclosed sum).
For all this expense, you’ll see players and coaches using the tablets during the game (as above). The Surface branding also gets placement on the sidelines – on benches, on movable trolleys, and more. It is all quite prominent.
Here’s one more example: Beast mode!
I adore Mr. Lynch’s passion. Oh, and did you notice the Surface branding?
Now, let’s talk analytics and accountability.
NFL ratings are down, but an average game still gets between 15 and 20 million viewers. That is