If there is one thing the universe agrees on, it is that you should just provide data… You should provide INSIGHTS!!!
In the 807,150 (!) words I’ve written on this blog thus far, at least 400,000 have been dedicated to helping you find insights.
In posts about advanced segmentation, in posts about how to build strategic dashboards that don’t suck, in encouraging you to reimagine how you pick metrics to obsess about using the magnificent Impact Matrix, and on and on and on.
Go for insights!
In time, I’ve come to hate the word insights.
In our world – marketing research and analytics – that word has come to represent data puking.
It has come to represent telling people, with dozens of reports or eighty slides, that water is wet.
I’ve observed, during my work across the world, that when we deliver insights, we mostly deliver to our audiences things in-sight – things they can already see!
As in: the blue line is 20% above the red line. I CAN SEE THAT! Or: the lifetime value of California purchasers is 3x that of purchasers who reside in Georgia. Oh, please, I can see that in the table with my own eyes.
This, unsurprisingly, ends up being a massive waste of your incredible talent, and an insult to the intelligence of our audience (the people who pay your salary).
This blog post was originally published as an edition of my newsletter TMAI Premium. It is published 50x/year, and shares bleeding-edge thinking about Marketing, Analytics, and Leadership. You can sign up here – all revenues are donated to charity.
The last time I changed jobs, I wanted to change the aspiration of what our talented team and I should shoot for.
Since you’re reading a blog on advanced analytics, I’m going to assume that you have been exposed to the magical and amazing awesomeness of experimentation and testing.
It truly is the bee’s knees.
You are likely aware that there are entire sub-cultures (and their attendant Substacks) dedicated to the most granular ideas around experimentation (usually of the landing page optimization variety). There are fat books to teach you how to experiment (or die!). People have become Gurus hawking it.
The magnificent cherry on this delicious cake: it is super easy to get started. There are really easy-to-use free tools, and tools that are extremely affordable and also good.
ALL IT TAKES IS FIVE MINUTES!!!
And yet, chances are you really don’t know anyone directly who uses experimentation as a part of their regular business practice.
Wah wah wah waaah.
How is this possible?
It turns out experimentation, even of the simple landing page variety, is insanely difficult for reasons that have nothing to do with the capacity of tools, or the brilliance of the individual or the team sitting behind the tool (you!).
It is everything else:
Company. Processes. Ideas. Creatives. Speed. Insights worth testing. Public relations. HiPPOs. Business complexity. Execution. And more.
Today, from my blood, sweat and tears shed working on the front lines, a set of two reflections:
1. What does a robust experimentation program contain?
2. Why do so many experimentation programs end in disappointing failure?
My hope is that these reflections will inspire a stronger assessment of your company, culture, and people, which will, in turn, trigger corrective steps resulting in a regular, robust, remarkable testing program.
First, a little step back to imagine the bigger picture.
I was reading a paper by a respected industry body that started by flagging head fake KPIs. I love that moniker, head fake.
Likes. Sentiment/Comments. Shares. Yada, yada, yada.
This is great. We can all use the head fake label to call out useless activity metrics.
[I would add other head fake KPIs to the list: Impressions. Reach. CPM. Cost Per View. Others of the same ilk. None of them are KPIs, most barely qualify to be a metric because of the profoundly questionable measurement behind them.]
The respected industry body quickly pivoted to lamenting their findings that demonstrate eight of the top 12 KPIs being used to measure media effectiveness are exposure-counting KPIs.
A very good lament.
But then they quickly pivot to making the case that the Most Important KPIs for Media are ROAS, Exposed ROAS, “Direct Online Sales Conversions from Site Visit” (what?!), Conversion Rate, IVT Rate (invalid traffic rate), etc.
Wait a minute.
Most important KPI?
No siree, Bob! No way.
Take IVT as an example. It is such a niche obsession.
Consider that Display advertising is a tiny part of your budget, and a tiny part of that tiny part is likely invalid. It is not a leap to suggest that anointing this barely-a-metric as a KPI is a big distraction from what’s important. Oh, and if your display traffic were so stuffed with invalid traffic that it became a burning platform requiring executive attention… any outcome KPI you are measuring (even something as basic as Conversion Rate) would have told you that already!
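To make the “tiny part of a tiny part” point concrete, here is a back-of-the-envelope calculation; every number in it is hypothetical, for illustration only:

```python
# Back-of-the-envelope: what share of total media spend does IVT even touch?
# All numbers below are hypothetical.
total_budget = 1_000_000   # annual media budget, $
display_share = 0.10       # display as a fraction of total spend
ivt_rate = 0.02            # fraction of display traffic that is invalid

wasted_spend = total_budget * display_share * ivt_rate
wasted_share = display_share * ivt_rate

print(f"Spend touched by IVT: ${wasted_spend:,.0f} "
      f"({wasted_share:.1%} of the total budget)")
```

Two-tenths of one percent of the budget. A metric worth monitoring, perhaps; not a Most Important KPI.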
Conversion Rate obviously is a fine metric. Occasionally, I might call it a KPI, but I have never anointed it as the Most Important KPI.
In my experience, Most Important
Almost all metrics you currently use have one common thread: They are almost all backward-looking.
If you want to deepen the influence of data in your organization – and your personal influence – 30% of your analytics efforts should be centered around the use of forward-looking metrics.
But first, let’s take a small step back. What is a metric?
Here’s the definition of a metric from my first book:
A metric is a number.
Conversion Rate. Number of Users. Bounce Rate. All metrics.
[Note: Bounce Rate has been banished from Google Analytics 4 and replaced with a compound metric called Engaged Sessions – the number of sessions that lasted 10 seconds or longer, or had 1 or more conversion events or 2 or more page views.]
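That GA4 definition is just an OR of three conditions. A minimal sketch of the logic, with hypothetical session fields (this is not the GA4 API, just the rule expressed in code):

```python
def is_engaged(duration_seconds: float, conversion_events: int, page_views: int) -> bool:
    """GA4-style 'engaged session': lasted 10+ seconds, OR had 1 or more
    conversion events, OR had 2 or more page views."""
    return (duration_seconds >= 10
            or conversion_events >= 1
            or page_views >= 2)

# Hypothetical sessions, for illustration.
sessions = [
    {"duration_seconds": 4,  "conversion_events": 0, "page_views": 1},  # bounce
    {"duration_seconds": 4,  "conversion_events": 1, "page_views": 1},  # converted fast
    {"duration_seconds": 35, "conversion_events": 0, "page_views": 1},  # lingered
]
engaged = sum(is_engaged(**s) for s in sessions)
print(f"Engaged sessions: {engaged} of {len(sessions)}")
```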
The three metrics above are backward-looking. They tell us what happened in the past. You’ll recognize that this is true for almost everything you are reporting (if not everything).
But, who does not want to see the future?
Yes. I see your hand up.
The problem is that the future is hard to predict. What’s the quote… No one went broke predicting the past. 🙂
Why use Predictive Metrics? As Analysts, we convert data into insights every day. Awesome. Only some of those insights get transformed into action – for any number of reasons (your influence, quality of insights, incomplete stories, etc. etc.). Sad face.
One of the most effective ways of ensuring your insights will be converted into high-impact business actions is to predict the future.
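As a sketch of what “predicting the future” can look like at its very simplest, here is a least-squares trend projection over made-up monthly conversion rates. This is one of many possible methods, offered as an illustration rather than a recommendation:

```python
# A minimal forward-looking metric: extrapolate next month's conversion rate
# from a recent monthly trend with an ordinary least-squares line.
def forecast_next(values):
    """Fit y = a + b*x over the observed months and project one step ahead."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    b = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
         / sum((x - x_mean) ** 2 for x in xs))
    a = y_mean - b * x_mean
    return a + b * n  # projection for the next (unobserved) month

monthly_cr = [3.9, 4.1, 4.2, 4.5]   # hypothetical email conversion rate, % per month
print(f"Projected next-month conversion rate: {forecast_next(monthly_cr):.2f}%")
```

A real program would use a proper forecasting model with uncertainty intervals; the point is that the output now speaks about next month, not last month.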
Consider this insight derived from data:
The Conversion Rate from our Email campaigns is 4.5%, 2x of Google Search.
Now consider this one:
The Conversion Rate from our Email campaign is
One of the business side effects of the pandemic is that it has shone a very sharp light on Marketing budgets. This is a very good thing under all circumstances, but particularly beneficial in times when most companies are not doing so well financially.
There is a sharper focus on Revenue/Profit.
From there, it is a hop, skip, and a jump to, hey, am I getting all the credit I should for the Conversions being driven by my marketing tactics? AKA: Attribution!
Right then and there, your VP of Finance steps in with a, hey, how many of these conversions that you are claiming are ones that we would not have gotten anyway? AKA Incrementality!
Two of the holiest of holy grails in Marketing: Attribution, Incrementality.
Analysts have died in their quests to get to those two answers. So much sand, so little water.
Hence, you can imagine how irritated I was when someone said:
Yes, we know the incrementality of Marketing. We are doing attribution analysis.
You did not just say that.
I’m not so much upset as I’m just disappointed.
Attribution and Incrementality are not the same thing. Chalk and cheese.
Incrementality identifies the Conversions that would not have occurred without various marketing tactics.
Attribution is simply the science (sometimes, wrongly, art) of distributing credit for Conversions.
None of those Conversions might have been incremental. Correction: it is almost always true that a very, very large percentage of the Conversions driven by your Paid Media efforts are not incremental.
Attribution ≠ Incrementality.
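The chalk-and-cheese difference shows up even in toy calculations. Below, a naive last-touch attribution sits next to a holdout-based incrementality estimate; all paths and rates are invented for illustration:

```python
# Two different questions, two different calculations (illustrative numbers).

# Attribution: distribute credit for observed conversions across channels,
# here with a naive last-touch rule. Every conversion gets credited somewhere.
conversion_paths = [
    ["search", "email"],
    ["display", "search"],
    ["email"],
]
credit = {}
for path in conversion_paths:
    last = path[-1]
    credit[last] = credit.get(last, 0) + 1
print("Last-touch credit:", credit)

# Incrementality: compare a treated group against a holdout that saw no ads.
treated_cr = 0.050   # conversion rate with marketing (hypothetical)
holdout_cr = 0.042   # conversion rate without marketing (hypothetical)
incremental_lift = treated_cr - holdout_cr
print(f"Incremental conversions per user: {incremental_lift:.3f} "
      f"({incremental_lift / treated_cr:.0%} of treated conversions)")
```

Note that the attribution sum always equals total conversions, while the incrementality estimate can be a small fraction of them. Same data, very different answers.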
In my newsletter, TMAI Premium, we’ve covered how to solve the immense challenge of identifying the true incrementality delivered by your Marketing budget. (Signup, email me for a link to that newsletter.)
Today, let me unpack the crucial difference between the two.
There has been a lot of heartbreak around the world with the CV-19 pandemic.
This chart, from NPR, illustrates some cause for optimism. It shows the 7-day average new cases per day across the world.
It is crucial to acknowledge what’s hidden in the aggregated trend above: The impact on individual countries is variable.
A large percentage of humans on the planet remain under threat. We don’t nearly have enough vaccines finding arms. We have to remain vigilant, and commit to getting the entire planet vaccinated.
Recent worries about Covid were increased by the proliferation of virus variants around the world. Variant B.1.1.7 was first identified in the UK. Variant B.1.351 was first identified in South Africa. Variant P.1 in Brazil has 17 unique mutations. The variant identified in India, B.1.617.2, had a particularly devastating impact (see the blue spike above). There are multiple “variants of interest” in the United States, Philippines, Vietnam, and other countries.
A particularly dangerous thing about variants is that they are highly transmissible (evolution, sadly, in action).
Some journalists rush to point out, hey, the death rate remains the same.
I believe this is a mistake. It minimizes the danger, and leaves some of our fellow humans with a false sense of hope – possibly due to a lack of mathematical savvy.
As Analysts, you can appreciate that a lay individual might not quite understand the complexity behind infection rates and their impact on death rates. At the same time, all of us, journalists and Analysts alike, have to figure out how to communicate this type of insight in a way everyone can understand.
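The underlying arithmetic is simple enough to sketch: hold the case fatality rate constant, let a more transmissible variant multiply cases, and deaths multiply right along with them. Every number below is hypothetical:

```python
# Why "the death rate is the same" is cold comfort: with a constant case
# fatality rate, a more transmissible variant multiplies cases, and
# therefore deaths. All numbers are hypothetical.
fatality_rate = 0.01      # deaths per confirmed case (held constant)
initial_cases = 1_000

def cases_after(generations, r):
    """Cases after n transmission generations at reproduction number r."""
    return initial_cases * r ** generations

for r in (1.1, 1.5):      # baseline vs a more transmissible variant
    cases = cases_after(10, r)
    print(f"R={r}: ~{cases:,.0f} cases, ~{cases * fatality_rate:,.0f} deaths")
```

The rate never changed; the deaths still grew more than twenty-fold. That is the insight a lay reader needs communicated.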
This reality is similar to what we face in our business environment every single day.
Like you, I consume a whole lot of reports every day – company data, public data.
Many are acceptable, some are very good and all the rest leave me extremely frustrated with both the ink and the think.
People make so many obvious mistakes. Sometimes repeatedly.
Just yesterday I was quietly seething because none of the visuals included in the report contained any context to help me understand whether the performance I was looking at was good or bad.
The graph could be going up, down, or all around, and I, as the consumer, had the job of figuring out whether something was good, bad, or worth ignoring.
The heartbreaking part is that most executives will take a look, realize the difficulty in interpretation in 15 – 20 seconds, and go back to shooting from the gut. Even if the report has hidden gold.
In a move that might not surprise you, I sat down with the person for 90 minutes going visual by visual, table by table, directing changes that would ensure everything had context.
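A context check can be as mechanical as never showing a number without its benchmark. Here is a toy helper that forces that discipline; the function name and thresholds are my own, for illustration only:

```python
# A tiny helper that forces context into a report line: every number is shown
# against a benchmark (a target, a prior period, or a segment average).
def with_context(name, value, benchmark, tolerance=0.05):
    """Label a metric GOOD/BAD/FLAT relative to its benchmark."""
    delta = (value - benchmark) / benchmark
    if delta > tolerance:
        verdict = "GOOD"
    elif delta < -tolerance:
        verdict = "BAD"
    else:
        verdict = "FLAT"
    return f"{name}: {value} vs benchmark {benchmark} ({delta:+.0%}, {verdict})"

print(with_context("Conversion Rate", 4.5, 4.0))     # hypothetical numbers
print(with_context("Sessions", 62_000, 60_000))
```

Even this crude version answers the 15-to-20-second question an executive actually has: is this good or bad?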
A report usually has a hard time explaining why something is going awry or going really well. (That is why you have job security as an Analyst!)
A report can usually be very good at clearly highlighting what is going well or badly.
Your #1 job is to make sure your reports don’t fail at this straightforward responsibility.
So today a simple collection of tips that you can use to up-level your reports – to allow them to speak with a clear, and influential, voice.
For many of you a reminder of what you might have let slip, for others a set of new things to implement as you aim for your next promotion.
#1. Context, Context, Context.
The gap between a bad and good data visualization is small.
The gap between a good and great data visualization is a vast chasm!
The challenge is that we, and our HiPPOs, bring opinions and feelings and our perceptions of what will go viral to the conversation. This is entirely counterproductive to distinguishing between bad, good, and great.
What we need instead is a rock solid understanding of the updraft we face in our quest for greatness, and a standard framework that can help us dispassionately assess quality.
Let’s do that today. Learn how to separate bad from good and good from great, and do so using examples that we can all relate to instantly.
We’ll start by looking at the two sets of humans who are at the root of the conflict of obsessions and then learn to assess how effective any data visualization is in an entirely new way. If you adopt it, I guarantee the impact on your work will be transformative.
The Conflict of Obsessions.
There are two parties involved in any data visualization.
1. Analyst/Data Visualizer. As I’ve passionately shared frequently on this blog, we, Analysts, are all in the business of persuasion. Yet we often work against that desired outcome, because when we create a data visualization, these are our top-of-mind concerns/desires/perspectives:
How can I cram as much as I can into the graphic?
What can I include to ensure everyone clearly gets just how much work I did?
How much of my agenda do I need to make overt, and how much can I make covert?
Is there something I can add to increase the chances that this will go viral and result in fame and glory?
Ok. I’m only teasing.
But, as an
Some moments in time are perfect to reflect on where you are, what your priorities are, and then consider what you should start-stop-continue. In those moments, you are not thinking of delivering incremental change… You are driven by a desire to deliver a step change (a large or sudden discontinuous change, especially one that makes things better – I’m borrowing the concept from mathematics and technology, from “step function”).
In those moments – common around new years or new annual planning cycles – the difference between delivering an incremental change vs. a step change is the quality of ideas you are considering. In this post, my hope is to both enrich your consideration set and encourage the breadth of your goals.
My professional areas of interest cover Customer Service, User Experience and Finance, though here on Occam’s Razor my focus is on influencing incredible Marketing through the use of innovative Analytics. To help kick-start your 2019 step change, I’ve written two “Top 10” lists, one for Marketing and one for Analytics – consisting of things I recommend you obsess about.
Each chosen obsession is very much in the spirit of my beloved principle of the aggregation of marginal gains. My recommendation is that you deeply reflect on the impact of the 10 x 2 obsessions in your unique circumstance, and then distill the ten you’ll focus on in the next twelve months. Regardless of the ten you choose, I’m confident you’ll end up working on challenging things that will push your professional growth forward and bring new joy from the work you do for your employer.
First… The Analytics top ten things to focus on to elevate your game this year…
I worry about data’s last-mile gap a lot. As a lover of data-influenced decision making, perhaps you worry as well.
A lot of hard work has gone into gathering the requirements and the implementation. An additional massive investment was made in performing ninja-like analysis. The end result was a collection of trends and insights.
The last-mile gap is the distance between your trends and getting an influential company leader to take action.
Your biggest asset in closing that last-mile gap is the way you present the data.
On a slide. On a dashboard in Google Data Studio. Or simply something you plan to sketch on a whiteboard. This presentation of the data will decide if your trends and insights are understood, accepted and inferences drawn as to what action should be taken.
If your data presentation is good, you reduce the last-mile gap. If your data presentation is confusing/complex/wild, all the hard work that went into collecting the data, analyzing it, and digging for context will be for naught.
With the benefits so obvious, you might imagine that the last-mile gap is not a widely prevalent issue. I’m afraid that is not true. I see reports, dashboards, presentations with wide gaps. It breaks my heart, because I can truly appreciate all that hard work that went into creating work that resulted in no data-influence.
Hence today, one more look at this pernicious problem and a collection of principles you can apply to close the last-mile gap that exists at your work.
For our lessons today, I’m using an example that comes from analysis delivered by the collective efforts of a top American university, a top 5 global consulting company, and