This continues my thread on negative metrics, which covers how to think about your negative metrics and how to set them up as part of your testing and optimization process. It builds on earlier work I’ve published on the value of nothing and on treating your conversion symptoms but not the disease.
Now let’s turn our attention to setting up negative metrics for your tests. This, too, I’ll cover with a specific example from real life.
One of my larger clients earns a significant percentage of its revenue from ad impressions on its site. It’s considered a “trusted source” in its niche, and people visit this site all the time for information, reviews, and fair-market price comparisons.
Because ad impressions drive significant revenue, one of the success metrics they use is “pages per session”, and one of the metrics they try to minimize is “bounce rate”. Both are completely rational ways to measure success. In a recent test they substantially reduced bounce rate on a given landing page while page views per session held approximately constant, so at first blush this looks like a success. “But what about the negative metric?”, I asked. Reader, can you think of what the negative metric might be here? Or at least how it would manifest?
What if solving the bounce rate problem simply causes the visitor to “pogo stick” around on the ensuing pages? Pogo sticking means more page views, but very short, rapid changes of page that indicate non-engagement. That isn’t good: you’ve simply moved the problem somewhere else, as I mentioned in my earlier post on conversion symptoms versus conversion disease. So in this test, bounce went down, which is good, but the net effect for the company isn’t good.
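To make the idea concrete, here is a minimal sketch (with hypothetical thresholds and data shapes, not anything from the client’s actual analytics) of how you might flag a pogo-sticking session from per-page dwell times:

```python
def is_pogo_sticking(dwell_times, min_views=4, short_secs=10, share=0.75):
    """dwell_times: per-page dwell times (seconds) for one session.

    A session counts as pogo sticking when it has at least `min_views`
    page views and at least `share` of them last under `short_secs`.
    All cutoffs here are illustrative assumptions, not standard values.
    """
    if len(dwell_times) < min_views:
        return False
    short = sum(1 for t in dwell_times if t < short_secs)
    return short / len(dwell_times) >= share

# Many rapid page changes: flagged as pogo sticking.
print(is_pogo_sticking([5, 3, 7, 4, 6]))   # True
# A few long, engaged reads: not flagged.
print(is_pogo_sticking([45, 120, 90]))     # False
```

The exact cutoffs would need tuning against your own traffic; the point is simply that “more page views” and “engaged page views” are distinguishable once you look at dwell time.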
What about page views per session, this test’s other success metric? If someone is pogo sticking, these should be expected to go up. Great for ad impressions (maybe), but probably not for ad viewing (and surely not what the person who bought the ad impressions wants!). Pogo sticking also means the visitor isn’t achieving what they came for, and that’s not an effect the company wants either.
So what we’re actually looking for is an additional metric, this time a negative metric, to indicate whether our success metrics of bounce rate and page views per session are fooling us. In this case a time component would help: perhaps “average time on page per session”. Right? If the visitor is pogo sticking, this metric will drop sharply relative to the control, even if bounce on the landing page improves and page views go up. It helps indicate whether the test is truly a success or not. If this number stays relatively constant while we improve bounce and page views at the same time, then we really do know we have a success.
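This negative metric can be sketched in a few lines. The numbers below are made up for illustration (hypothetical dwell times in seconds, grouped by session), but they show the signature to look for: the variant “wins” on page views while its time-on-page collapses.

```python
def avg_time_on_page_per_session(sessions):
    """sessions: list of sessions, each a list of per-page dwell
    times in seconds. Returns the mean over sessions of each
    session's average time on page."""
    per_session = [sum(s) / len(s) for s in sessions if s]
    return sum(per_session) / len(per_session)

# Control arm: engaged visitors reading a few pages (illustrative data).
control = [[45, 120, 90], [200, 60], [150]]
# Variant arm: bounce is "fixed", but visitors pogo-stick through
# many short views -- more page views, far less time per page.
variant = [[5, 4, 6, 3, 7, 5], [4, 6, 5, 3], [8, 2, 4, 5]]

print(round(avg_time_on_page_per_session(control), 2))  # 121.67
print(round(avg_time_on_page_per_session(variant), 2))  # 4.75
```

The variant shows more page views per session than the control, yet the guardrail metric falls off a cliff, which is exactly the warning sign the success metrics alone would have hidden.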
Take-away: every time you define a success metric for your testing and optimization efforts, consider what the negative metric should be. In what ways could your test show a false success … and how would you know?