4 - Create Hypotheses
Optimizing conversion rate is done in order to dramatically improve business results. The first step to doing so is admitting that things are not presently as effective or efficient as they could be, and the second is being open to the possibility that changes can result in improvements. This would seem to be implicit, but cannot be taken for granted.
It is generally easier to admit when the problems are very serious and customers are leaving in droves. In such a desperate situation, it cannot be denied that there is a problem and there is great enthusiasm for seeking it out and fixing it. But even when things seem to be going well, they could be better, and there are very few instances in which a company would not welcome a 10% or even 1% increase in its business.
Methodology is More Valuable than Tips
Too often, companies seek to fix their problems by seeking the advice of a consultant who provides helpful tips from outside sources. Consultants are guided by theoretical notions, the whims of fashion, and historical evidence that doesn't quite fit. And they can be quite arrogant in their expertise. The author recounts dealing with one such person who flatly suggested that it was "pointless to test something when you already know it's going to win."
The problem is, nobody has that level of certainty. Consultants are good at presenting themselves as experts, but they charge for giving advice regardless of whether it actually does any good - there are few who are paid based on results, and none who offer to compensate for damages if following their advice does more harm than good. That is to say that even the best advice, well grounded in theory, popular with all the smart kids, and with a historical record of success at other businesses, may not pan out. Even a "sure thing" ought to be tested.
The author reports, as can anyone with much experience in testing, that the "winner" of a test doesn't always match common beliefs and preconceptions.
For that reason, the author suggests that the method, a well-structured testing process, is more valuable than "best practices" - though it also seems that everyone is looking for free advice and a quick fix to their problems rather than taking a more methodical approach.
That is not to say that best practices should be blithely ignored - just that they should not be blithely accepted without testing them against alternative approaches.
The author belabors the psychological observation that people tend to have selective attention: when they are interested in one thing, everything else is ignored or dismissed. The same is true of people - executives and designers - who become infatuated with their first idea and fail to give adequate consideration to others: their minds are occupied with deciding how to implement it, imagining the results they will achieve, and so on (EN: or, in my experience, curtly dismissing any suggestion after a "good enough" one has been presented).
The testing model maintains not only that a good idea should be tested, but that it should be tested against other ideas. Doing so means that creative thinking and brainstorming are not cut short at the first thing that comes to mind - testing two or more alternatives encourages (or requires) thinking of several other approaches and evaluating them more thoroughly.
The author has identified six factors that affect conversion rates (EN: Chances are this is not comprehensive, but the list does seem to include the most common factors). Consider each factor when identifying possible obstacles to conversion.
These are ranked in order of importance to the prospective customer.
(EN: I sense I will likely do a lot of augmentation and rewriting in the sections that follow - as the author is definitely onto something here but his focus seems a bit blurred.)
The value of taking an action to the user includes the benefits and the costs: what will I get and is it worth the amount of effort it will take to get it?
It is not merely that they will get a product for a price. A product is only meaningful because it delivers benefits to the user, and the price is only one part of the cost of getting that benefit, as there are other actions the user must take.
A common mistake among businesses is to assume customers assess value in relative terms: whether your product is cheaper or better than the competition's. This is considered, but the first thing on the customer's mind is absolute value - is it worth it at all, regardless of other options? Only when they have decided that it is will they consider the alternatives.
A product that offers no value to a customer, or is a significantly worse value than competitors' offers, is doomed and the design of your Web site cannot save it.
There is also the notion of conversion-rate elasticity - the degree to which you will gain more customers, or make more sales to existing ones, by making changes. The greater the elasticity, the greater the benefit you will get from making an improvement.
This notion derives from price elasticity and it has to do with market saturation. No matter how cheap or easy to get something is, only a certain number of people will be interested and they will only consume so much.
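As a rough illustration (the function and every number below are hypothetical, mine rather than the author's), elasticity is computed the same way in either case: the percentage change in the outcome divided by the percentage change in whatever was altered.

```python
def elasticity(q_before: float, q_after: float,
               x_before: float, x_after: float) -> float:
    """Arc (midpoint) elasticity: % change in quantity per % change in x.

    For price elasticity, x is the price; by analogy, conversion-rate
    elasticity would compare conversions before and after a site change.
    """
    pct_q = (q_after - q_before) / ((q_after + q_before) / 2)
    pct_x = (x_after - x_before) / ((x_after + x_before) / 2)
    return pct_q / pct_x

# Hypothetical numbers: a price cut from $10 to $9 lifts weekly sales
# from 100 to 115 units.
e = elasticity(100, 115, 10, 9)
print(round(e, 2))  # -1.33: quantity rises as price falls
```

The greater the magnitude of the result, the more the market responds to the change - and, per the author's point about saturation, the number flattens toward zero as demand is exhausted.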
(EN: It is however unlikely that conversion-rate elasticity will have the same issues as price elasticity. For example, consider stockpiling - a low price will get people to buy more now and less later. The same may be true of short supply. I tend to doubt there are very many optimization improvements that will elicit the same behavior.)
The author mischaracterizes relevance as expectations: whether the page shows the visitor what they thought they were going to see.
For example, if a user expects to be able to search for products and choose among several options, a Web site that takes them from a category menu directly to a specific product would be an unexpected result, and the user may feel that the one product is the only option that site offers and seek other options elsewhere. They will not be ready to see a single product until they expect to be choosing one.
(EN: This is interesting in that it would seem reasonable that if you have only one product in a given category, skipping a "search results" page with only one match saves the user a click ... but violates their expectations. Worth testing.)
A common practice, which testing often bears out, is that the user needs context and cues that they are on the right path to get to where they wish to go. The author refers to research (Chi and Pirolli) that compares human behavior to food-gathering behavior in animals: they follow a scent trail and seek the fastest source. If they lose the scent, they drop off.
(EN: back to my opening remark about the author mischaracterizing relevance ... relevance pertains to the user's needs and interests. A product may deliver a good benefit for a cost, but the person does not feel that they really need the benefit, which is particularly pronounced when someone is buying an item for someone else's use. There is also the notion of time-sensitivity - I may need it, but I don't need it right now, though his later obstacle of "urgency" may address that. Finally, there is relevance to present interests, in that the person may be in pursuit of something they feel to be more important and do not wish to even consider your offering at the present time. That, too, may fall under urgency. I'm picking at this a lot because relevance is misunderstood and undervalued, and characterizing it as relating to immediate expectations in an online experience is selling it short.)
Clarity has to do with the user's ability to understand what they see. It may be clarity of text (the language of the page is vague or incomprehensible) or it may be the visual clarity of the design (the user can readily see where to click to get to the next step).
The author then strays into the realm of efficiency: using more text or art than necessary creates clutter that can cause the user to feel lost or overwhelmed, or they may give their attention to something unnecessary and lose interest or run out of time.
The author likewise offers a hazy description of anxiety as "any uncertainty in your prospect's mind about completing the conversion" and identifies it first as deriving from the credibility of the brand and the site.
He mentions the brand in other regards as a contributor to or detractor from the way people feel about using your Web site (EN: the site can do little to overcome reluctance due to a damaged brand, but much to damage a credible one).
He also mentions the anxiety people feel when they are confronted by a demand for information (such as asking for e-mail address, name, and phone number to see information).
(EN: He does not mention anxiety as a result of the transaction itself. Some people are still not comfortable performing some transactions online, which is often a matter of their trust in the technology or their own abilities rather than the credibility of the brand. It's also notable that lack of clarity can cause anxiety, as can too much urgency. The author's notion of relevance also touches on anxiety, when people feel lost in a flow, not knowing what's next or where they have been. Even something as simple as asking the same question twice makes people anxious and uncertain.)
The author's concept of distraction is similar to the notion of clarity, above, in that it deals with instances in which something on the page redirects attention from the primary call to action. (EN: Arguably, I may be distracted by my own emotions, but that might better be considered an anxiety factor rather than a distraction, and I may be distracted by my environment, which the site cannot control, but can predict - particularly in mobile and tablet channels that are not used in the serene environment of a home office.)
Too many options can become a distraction - which is particularly bad when marketers get hold of a page and wish to fill any white space with a competing product offer, or aren't sure how to target the messaging and want to barf four messages at the user in hope they will respond to something.
Sometimes distraction is the result of proliferation - too many choices can be paralyzing (the mental block is better characterized as anxiety), and the design may make it difficult to find the right choice (which is a matter of clarity). There's a reference to research (Ariely's 2009 "jam" studies) that suggests too many choices are harmful.
(EN: Ariely's experiment showed that a wider array of options causes more people to stop at the display but a lower percentage to purchase, but I would be reluctant to trust that entirely because a higher conversion of a smaller audience may not be better than a lower conversion of a larger one. I also suspect that the larger variety may have attracted more first-time buyers and the smaller variety attracted people who were more likely to purchase anyway, but the data could not be sliced quite that way. Said another way, Ariely's research was very interesting, but should not be taken as a rule. The question of the number of options should be tested for any given market and vendor with an eye toward the desired outcome.)
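The arithmetic behind that caveat is simple to sketch. With purely hypothetical numbers (not data from the study), total purchases depend on both the rate at which passers-by stop and the rate at which stoppers buy - so a display with the lower conversion rate can still produce more sales:

```python
def purchases(traffic: int, stop_rate: float, buy_rate: float) -> float:
    """Expected purchases: passers-by who stop, times the share who buy."""
    return traffic * stop_rate * buy_rate

# Hypothetical numbers: the large display stops more people (60% vs 40%)
# but converts a smaller share of them (8% vs 10%).
large = purchases(1000, 0.60, 0.08)  # 48.0
small = purchases(1000, 0.40, 0.10)  # 40.0
print(large > small)  # True: the "worse" conversion rate wins on volume
```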
It's well established that if a shopper has the option to do something later rather than immediately, she generally will delay taking action. The (powerful) attraction of a "sale" event is that it is a limited-time offer: the prospect of losing the opportunity to purchase at a given price draws people to brick-and-mortar stores in droves. In this sense, creating a little anxiety can improve performance.
(EN: The opposite, however, has also been demonstrated. Vendors who regularly discount something find that shoppers do not buy certain items unless they are on sale. As such, a customer might purposefully delay buying an item or even visiting the store with the expectation that there will be a discount later.)
There's a brief mention of "internal urgency" that arises from the customer's needs (which is part of the time-relevance of an offer). A shopper with a pressing need to have something soon will generally buy from the first source they find without shopping around - and conversely, if they do not need it soon, they will take their time. A marketer has little influence over the urgency that arises due to events in a prospect's life, but may be able to react to them - consider seasonality or event-driven marketing as examples. However, it must be noted that conversion-rate elasticity will be low when urgency is high: the customer will pay a higher price and soldier through inconvenience if they have a pressing need.
External (artificial) urgency can be influenced by the marketer through limited-time offers, or by playing on the audience's fears in the copy (every day you don't have our product is a day you are at risk for X and are not benefitting from Y).
(EN: What also comes to mind is the question of whether enabling a user to "save" in the middle of a lengthy process and return later is a convenience that improves conversion or a detraction that decreases it. It could be argued either way - and likely differs by customer and situation, which is to say it merits testing.)
Convergence of Factors
The author notes that each customer has their own "tipping point" where the factors interact with one another to get them to move forward or abandon, and further suggests that strength in one area may compensate for weaknesses in others.
It's also suggested that customers will react differently to stimuli: a change that gives one customer a sense of urgency that will lead them to convert might also give another customer a sense of anxiety that will lead them to bail out. (EN: Which implies that segmentation testing may be worthwhile.)
He carries on with a metaphor to illustrate this, but it's tedious and adds no value.
Create Valid Hypotheses
The author also belabors the notion of creating test hypotheses, and botches it a bit. Valid points are:
- The basic structure is "We will test to determine whether changing [this] to [that] will [result]."
- This structure avoids open-ended questions that cannot be tested for lack of specificity
- It also avoids vague descriptions such as "changing the font" (without specifying what it will be changed to)
- The result must also be specific, such as "increase the number of visitors who submit their contact information"
- The result should focus on a valued outcome, rather than a step in the process (to increase the number of buyers, not just the number of people who click a button and bail before buying)
- The hypothesis doesn't indicate a reason it is assumed to be true - because the reason cannot be proven, only the outcome can (changing a button from blue to green may increase sales, but not necessarily for the supposed reason)
- The hypothesis does not indicate a specific number - it may be that if the increase is not at least 5% the firm will deem that it is not worth pursuing, but this is external to the test
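To make the structure concrete, here is a minimal sketch (the function and all numbers are mine, not from the book) of how such a hypothesis - "changing the button from blue to green will increase the number of visitors who submit their contact information" - might be evaluated after the test runs, using a standard two-proportion z-test:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic comparing the conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: blue converts 200 of 5,000 visitors (4.0%),
# green converts 260 of 5,000 (5.2%).
z = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2))  # 2.86 - beyond 1.96, significant at the 5% level
```

The external decision mentioned in the last point - e.g. only acting on the change if the lift is at least 5% - would be applied to the measured rates afterward, not baked into the hypothesis itself.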
The author then goes on to state that a "great" hypothesis does not only serve to answer a question pertaining to a small task, but provides broader insights to marketing. (EN: this contradicts his earlier point against consultants - that success in one situation does not guarantee success in others, such that offers that work online do not necessarily work in direct mail. A broad insight suggests they might, but there would be a need to test them in the mail channel specifically rather than taking it for granted.)