Analytics and Optimization: Understanding the Difference

**Frank’s Folly**

After 12 long years of managing his plant’s material inventory, Frank was rewarded with a long-awaited promotion to general manager, where he would oversee all plant operations, including sales. Just two months into his tenure, Frank returned from lunch to find the company’s global sales manager waiting in his office. The sales manager wasted no time with polite conversation. He was there to let Frank know that profits were sagging and that Frank would need to increase the plant’s net income by 15% in the 4th quarter if he wanted to still be employed in January.

The sales manager left the office abruptly, and Frank felt his pulse racing through his temples. As he pondered the situation, he drew upon his long experience managing the plant’s inventory. Frank remembered that in the months when profits were highest, inventory purchases also tended to be high. Instantly Frank knew what he had to do. He spent the afternoon on the phone with his top suppliers, directing each of them to deliver 20% more materials than forecasted for the 4th quarter. Frank sat back and waited for profitability to surge.

Two things should be apparent from this example: one, Frank should start polishing his resume; and two, correlation is not causation. Unfortunately, the fundamental assumption Frank makes here rests on the same fallacy that plagues web analytics and web optimization analysts every day. Analytics and web optimization are distinct disciplines, and they should not be approached in the same way.

**The Difference between Analytics and Optimization**

Analytics is very much about identifying correlations and trends in your data. Here is an example: when page depth during a single visit is greater than three, visitors are 40% more likely to return two or more times per month and 20% more likely to purchase an offline subscription.
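That kind of correlational segment comparison can be sketched in a few lines. The visit records and the resulting rates below are made up for illustration; a real analysis would pull this from your analytics data.

```python
# Hypothetical visit records: (page_depth, monthly_return_visits, bought_subscription)
visits = [
    (1, 1, False), (2, 1, False), (2, 2, False), (3, 2, False),
    (4, 2, True),  (5, 3, False), (4, 3, True),  (6, 2, False),
    (2, 1, False), (7, 4, True),
]

# Segment visits by page depth, then compare return rates across segments
deep = [v for v in visits if v[0] > 3]
shallow = [v for v in visits if v[0] <= 3]

rate = lambda seg, pred: sum(pred(v) for v in seg) / len(seg)

returning_deep = rate(deep, lambda v: v[1] >= 2)       # return rate, depth > 3
returning_shallow = rate(shallow, lambda v: v[1] >= 2) # return rate, depth <= 3
print("return rate, depth > 3: ", returning_deep)
print("return rate, depth <= 3:", returning_shallow)
print("lift: %.0f%%" % (100 * (returning_deep / returning_shallow - 1)))
```

Note that this only describes the two segments as they exist; it says nothing about what would happen if you pushed shallow visitors to view more pages.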

Optimization is much more about cause and effect: by changing X, we saw a measured response in Y. With optimization it is not enough that a change to the layout of the article page template coincided with an increase in visitors’ time on site. Time on site may generally correlate with page consumption, but that correlative leap is not causation. With correlation alone you will never know whether spending more time on the site causes visitors to consume more page views, or whether the people who look at more pages simply happen to spend more time. As a result, you cannot conclude that a change in one will cause a corresponding change in the other.

What if the layout change made the article page more difficult to navigate? If so, it may take visitors longer to find what they are looking for, increasing time on site, even as more visitors leave the site after fewer pages, driving page and ad consumption down. This highlights the need to go beyond correlation in testing and actually prove causation. Proving causation requires you to pick your metrics carefully.
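The navigation scenario above can be sketched as a toy simulation, with every number hypothetical: even though pages and time on site rise and fall together across visitors, a change that makes navigation harder pushes time on site up while page consumption falls.

```python
import random

random.seed(42)

# Toy model (all numbers hypothetical): engaged visitors view more pages
# AND spend more time, so the two metrics correlate. A layout change that
# makes navigation harder breaks that relationship.
def simulate_visit(harder_navigation):
    engagement = random.uniform(0, 1)          # latent visitor interest
    pages = max(1, round(engagement * 10))     # engaged visitors view more pages
    seconds_per_page = 30
    if harder_navigation:
        pages = max(1, pages - 2)              # visitors give up after fewer pages...
        seconds_per_page = 55                  # ...but each page takes longer to navigate
    return pages, pages * seconds_per_page

control = [simulate_visit(False) for _ in range(10_000)]
variant = [simulate_visit(True) for _ in range(10_000)]

avg = lambda xs: sum(xs) / len(xs)
print(f"control: {avg([p for p, _ in control]):.1f} pages, {avg([t for _, t in control]):.0f}s on site")
print(f"variant: {avg([p for p, _ in variant]):.1f} pages, {avg([t for _, t in variant]):.0f}s on site")
```

An analyst judging this test by time on site alone would call the variant a winner while page and ad consumption quietly drop.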

**Pick Metrics that Truly Represent Revenue**

If you pick optimization success metrics simply because they line up with your analytics dashboard, you are dangerously close to repeating Frank’s folly. Bounce rate and click-through rate are two prime examples. While bounce rate has a key role in the correlation-driven analytics world, it is rarely a good fit for determining the winner of a test, where causation is what matters. Decreasing the number of visitors who bounce on their initial page load may correlate with visitors who ultimately purchase a product or consume more content, but that is merely correlation.

The problem arises from using a correlative measure as a proxy for something you can, and should, measure directly. This is no different from checking the weather by zip code on your phone to see if it is raining instead of simply looking out the window. Why rely on a proxy when you can track the real thing? In this case, bounce rate may be standing in for downstream page consumption (pages consumed after seeing the test experience). It may be a good proxy, but it may also be a terrible one. A tested change could produce a higher bounce rate precisely because it does a better job of filtering out less-qualified visitors: fewer visitors remain in the path, but those who do are much more qualified, and conversion ultimately improves. If you stop at bounce rate, you could misjudge the test.
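That filtering scenario can also be sketched as a toy simulation. All of the traffic shares, bounce probabilities, conversion rates, and order values below are hypothetical; the point is only the direction of the two metrics.

```python
import random

random.seed(7)

# Toy model (all numbers hypothetical): the variant's blunter messaging
# drives away unqualified visitors (raising bounce rate) but converts the
# qualified visitors who remain at a higher rate (raising revenue/visit).
def simulate_visitor(variant):
    qualified = random.random() < 0.30            # assumed share of qualified traffic
    if qualified:
        bounce_p = 0.20
    else:
        bounce_p = 0.90 if variant else 0.50      # variant filters out the unqualified
    if random.random() < bounce_p:
        return True, 0.0                          # bounced, no revenue
    conv_p = (0.50 if variant else 0.35) if qualified else 0.0
    revenue = 50.0 if random.random() < conv_p else 0.0
    return False, revenue

def summarize(variant, n=50_000):
    visits = [simulate_visitor(variant) for _ in range(n)]
    bounce_rate = sum(b for b, _ in visits) / n
    revenue_per_visit = sum(r for _, r in visits) / n
    return bounce_rate, revenue_per_visit

for name, flag in (("control", False), ("variant", True)):
    b, r = summarize(flag)
    print(f"{name}: bounce rate {b:.0%}, revenue per visit ${r:.2f}")
```

Judged on bounce rate, the variant loses; judged on revenue per visit, the metric that actually matters, it wins.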

Do not focus on metrics that correlate with revenue. Instead, pick metrics that are revenue.

If you are lucky enough to find yourself among the rising ranks of analysts with responsibilities in both the analytics and optimization spheres, be careful not to repeat Frank’s folly. Do not let a correlative metric take the place of a truly monetized metric in your testing. Just as correlation is not causation, there is a fundamental difference between the disciplines of analytics and optimization. Be careful not to confuse the two.