5: Preparing to Be Data-Driven

This chapter provides an outline of a specific methodology for implementing Web analytics. The author discloses that it is the methodology used by his own firm, and suggests that the plan should be considered a "basic roadmap" that may need to be adapted to the needs of a specific firm.

(EN: Also, I'd beware of it being overly one-sided. Thus far, the author has avoided salesmanship more than some I have read, so I'm not overly concerned about it being a sales pitch to hire his firm. Still, it's worth keeping the potential for bias in mind.)

There are four basic components or steps to the methodology the author proposes: business metrics, reports, analysis, and action - which, like so many models, are shown to be a cycle that repeats rather than a once-and-done process.

This process is not a major shift, but indicates the steps most businesses go through when planning any operation: determining what measurable phenomena will indicate success, determining how to collect the data, determining how to analyze it, and planning actions to take based on the outcome.

Defining Business Metrics (KPIs)

Determining the metrics sounds daunting, but it comes down to figuring out what you're trying to achieve, what must happen for you to succeed, and how these actions can be monitored as they occur. For example, the desire for advertising to create more revenue can be reasoned as being measurable by the number of people who see your ad, then click your ad, then enter your site, then view a product, then make a purchase. These are the critical metrics.

Granted, other metrics may exist: the number of hits per visitor, the average visit length, etc. are all measurable, but they are not directly related to the goal (there may be an indirect relation, or a strong correlation, but it is not the success-path to which you want to drive user behavior).

Ultimately, this behavior can be monetized, to determine the value of getting customers through the funnel. Ideally, each behavior in the chain can be distilled to a figure. The author suggests more details will be given in the next couple of chapters.
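The funnel described above can be sketched in a few lines of code. This is an illustrative sketch only (the counts and the average order value are hypothetical, not the author's figures): it computes the step-to-step conversion rates and then monetizes the whole chain back to a per-impression value.

```python
# Hypothetical funnel counts for the ad -> purchase chain described above.
funnel = [
    ("ad impressions", 100_000),
    ("ad clicks", 2_000),
    ("site entries", 1_800),
    ("product views", 900),
    ("purchases", 90),
]

average_order_value = 40.00  # assumed figure for illustration

# Conversion rate at each step of the chain.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {next_count / count:.1%}")

# Distilling the behavior to a figure: revenue attributable to each
# impression at the top of the funnel.
revenue = funnel[-1][1] * average_order_value
per_impression = revenue / funnel[0][1]
print(f"Revenue per impression: ${per_impression:.4f}")
```

With these assumed numbers, 90 purchases at $40 each against 100,000 impressions values each impression at a fraction of a cent, which is the kind of distilled figure the author describes.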


Once the key metrics are identified, you have to determine how to collect the data related to those metrics. On the surface, it seems fairly simple - and it is fairly simple to gather the basics, though it may take some consideration to ensure that all the data is collected as accurately as possible.

For example, if you set out to know how many people "saw an ad", you can report the number of times a graphic was loaded - though you might seek to segment this (what sites was it seen on, what times of day was it seen, etc.) to facilitate more meaningful analysis, as well as considering what might interfere with the analysis (if it was "seen" by a crawler or robot, if the same person saw it multiple times, if they clicked the ad today but didn't buy until tomorrow, etc.) to have a more accurate sense.

There is also the potential for other factors that have not been identified, so it's worthwhile not to put on blinders to phenomena outside the path (what pages were viewed between the time they clicked through the ad and the time they purchased, whether there is a difference in pages viewed by buyers versus non-buyers, etc.), or you may miss factors that seem only marginally related but may have a causal connection to the ultimate behavior you're attempting to drive (product purchase, regardless of what came first).
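The cleanup concerns above (crawlers, repeat views) can be sketched as a small filtering pass. This is a minimal sketch under assumed conditions - the record layout and bot detection are invented for illustration and are not from the book:

```python
# Hypothetical raw impression records: who loaded the ad graphic, and
# what user agent their browser reported.
impressions = [
    {"visitor": "v1", "user_agent": "Mozilla/5.0"},
    {"visitor": "v1", "user_agent": "Mozilla/5.0"},    # same person, repeat view
    {"visitor": "v2", "user_agent": "Googlebot/2.1"},  # crawler, not a person
    {"visitor": "v3", "user_agent": "Mozilla/5.0"},
]

# Crude marker-based bot detection, for illustration only.
BOT_MARKERS = ("bot", "crawler", "spider")

def is_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)

human_hits = [r for r in impressions if not is_bot(r["user_agent"])]
raw_views = len(human_hits)                               # ad loads by humans
unique_viewers = len({r["visitor"] for r in human_hits})  # distinct people

print(raw_views, unique_viewers)  # 3 2
```

The point is that "how many people saw the ad" yields two different numbers (three loads, two people) depending on how the question is refined, which is exactly the kind of consideration the author flags.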


Analysis of data also seems simple - it comes down to counting clicks and calculating percentages. This enables you to monitor behavior, which is important, but it's a step short of the potential.

Ultimately, your goal is not merely to analyze what people are doing, but to discover the reason why they are doing it (or why they are not doing it, if it's what you want them to do). This is critical to success: to stand idly by and watch as people walk into a store and back out again without making a purchase tells you that there is some kind of problem (from the perspective of an operator who wants them to purchase), but it does not provide much indication as to the reason they aren't buying, nor any clue about what can be done to get them to behave otherwise.

(EN: no indication of how this is done, but a note that this will be considered in later chapters, particularly nine and ten.)


The final step in the process is to take action on the information yielded by analysis - unless action is taken, the firm is just a bystander, and there's some validity to the question of why collect the information at all. Oddly enough, this is typical of many firms: they collect and analyze the data, compile reports and make recommendations, but in the end, nothing is done.

Most commonly, "action" takes the form of optimization - identifying and removing inefficiencies and obstacles - but in some instances, it requires a more significant rewiring, or even a completely different approach. This is fairly common in the usability process, where analytics are applied to a simple A/B test, but it tends to be rare when analytics occur "in the wild" - even if a problem is apparent, there is no plan in place to react to the findings.

Results and Starting Again

Ideally, the methodology is done in a cyclical fashion, or at least a recurring one, as the business metrics will change over time and your experience will identify different or additional metrics. If the metrics are not updated as the business evolves, they will become less relevant and less useful over time.

The nature of business tends to be repetitive: the same activities repeat daily, monthly, quarterly, or annually; even a "one-time" project will be repeated if it was at all successful (or, if it failed, someone will consider a different way to achieve the goal); and the lessons carried forward can be drawn from analysis of the past.