Turns out, just about all the time. Whenever you look at your metrics, consider that a metric is, in virtually every case, designed to measure a specific outcome. It may be a positive outcome, one that is beneficial to your company: more sales, more customers, less inventory shrink, and so on. Or it may be a negative effect, something that impedes your ability to do business: more churn, fewer repeat customers, higher costs. It doesn't take a rocket scientist to figure out you want "more" of the former and "less" of the latter.
Either way, the most insightful metrics are the ones that lead to actionability, and that comes from isolating one end of the above spectrum. A metric that is good at indicating positive benefit is generally not going to be useful for identifying detrimental effects, and vice versa. In fact, a single metric that attempts to imply anything about both ends of such a spectrum is almost always useless. Oh, you might get a few nuggets now and then, but those nuggets will only be nuggets because some other metric(s) were involved in isolating them.
Why should this be?
Of course, part of the answer lies in our never-ending human fetish that "more numbers mean more insight" (the current buzzword for this is "big data," by the way). But from a more mathematical perspective, a metric that's designed to measure one sort of outcome cannot tell you much about the anti-outcome, particularly when what you're looking at is based on human behavior rather than a physical system.
Knowing this, you can learn a lot more from your metrics when you cast them as 1's and 0's. For every positive outcome you want more of (or, conversely, every negative outcome you want to minimize), count inputs to the metric that result in what you want more (less) of as one, and everything else as zero. There's a good reason to do so: in a system based on human responses and behavior, zero now takes on a more useful meaning: "I don't know." It's not a binary physical system, where 0 is the opposite of 1; instead, the zero indicates an absence of any knowledge in the context of the behavioral outcome you measured (the "one").
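As a minimal sketch of that recasting (the records and field names here are purely hypothetical, not from any real mailing list), the 1/0 encoding might look like:

```python
# Hypothetical behavioral records: we measured one outcome, "responded".
records = [
    {"customer": "A", "responded": True},
    {"customer": "B", "responded": False},
    {"customer": "C", "responded": False},
    {"customer": "D", "responded": False},
]

# Cast the metric as 1's and 0's: 1 = the positive outcome we want more of,
# 0 = everything else. In a behavioral system the 0 means "I don't know",
# not "the opposite of 1".
outcomes = [1 if r["responded"] else 0 for r in records]

response_rate = sum(outcomes) / len(outcomes)
print(outcomes)        # [1, 0, 0, 0]
print(response_rate)   # 0.25
```

The point of the explicit encoding is that the "one" isolates a single measured behavior; everything that falls into the zero bucket is simply unknown with respect to that behavior.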
Consider direct mail. The average response rate for direct mail is about 2%. Which, of course, means 98% of people didn't respond. You'd probably be ecstatic if you could get your DM response rate up to, say, 2.5%. You'd scrub your mailing list for all sorts of characteristics (demographics, previous behavior, psychographics, etc.) to figure out how to better identify more of the sort of people who were like the 2%. If you were building a model for this (perhaps for predictive purposes), you'd want as rich a data stream as possible about the people on the list who responded positively. Not those other folks.
In other words, if I'm building a model, or even a simple metric upon which to act, I don't need to do a hell of a lot of work to predict what most people on the list will do… I already have the remarkably well-correlated insight to know what 98% of them will do! Which is, "not respond". This in itself is fairly valuable. But I'm not looking for anti-responders. Instead I want to be looking at the characteristics of those who complete the action I want, and to derive what goes into getting more of those. Once I've done that, I acquire more traffic through that filter and my response rate goes up.
Of course, this doesn't mean this is the only way to skin this cat. It's simply the best way to do it in the context of the metric we set up, which was to look at the response rate and the characteristics of people who exhibit that behavior, and proactively identify it in others. But you can also approach the same goal by looking at the opposite behavior: "those who didn't respond". Now the "one" becomes how we identify non-responders, and our zero becomes not positive responders but "any behavior that doesn't inform non-responsiveness". Our goal remains the same (more customers from a higher response rate), but our strategy has changed to identifying characteristics of those who do not respond and cutting those sorts of people out of our future mailings. Having culled the non-responders, our response rate will naturally go up, without our even having acted to better identify positive responders.
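A back-of-the-envelope sketch of the culling arithmetic (the list size and the model's cull/miss rates below are illustrative assumptions, not real DM benchmarks):

```python
# Hypothetical mailing list: 100,000 recipients at the 2% base response rate.
list_size = 100_000
responders = int(list_size * 0.02)             # 2,000 expected responders

# Assume a non-responder model lets us cull 30% of the list, and that
# only 5% of true responders get caught up in the cull by mistake.
culled = int(list_size * 0.30)                 # 30,000 removed
responders_lost = int(responders * 0.05)       # 100 responders removed too

new_list_size = list_size - culled             # 70,000 mailed
new_responders = responders - responders_lost  # 1,900 respond
new_rate = new_responders / new_list_size

print(f"{new_rate:.2%}")  # about 2.71%, up from 2.00%
```

Notice the rate rises without identifying a single additional positive responder; we only removed people the model flagged as non-responders.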
This sort of “opposite pole” approach might remind you of the old joke about the fellow who goes to the doctor, moves his arm up and down, and complains “doc, my arm hurts when I do this!”. To which the doctor responds, “Well, don’t do that”. The take-away from the joke is to do the anti-behavior. The real take-away is that sometimes it’s easier to get more of what you want by getting less of what you don’t want.
Of course, you can combine both of the above techniques. Keep in mind, however, that you cannot combine them unless you can apply each technique separately. Start off with the wrong metrics, and you can create a phenomenally well-fitted model that predicts a behavior you're not even looking for.
More on this topic in the future, but I'd like to leave you with a quote from Niels Bohr, the famous Danish physicist:
“The opposite of a correct statement is a false statement. But the opposite of a profound truth may well be another profound truth.”
Don't muddy your insight with a mishmash of metrics that obscures the profound truths in your analytics data. Aim for simple, focused metrics that measure one thing very well.
[This article is Cross-posted to my monthly column at MarketingLand, which is a great place to read all sorts of interesting content.]