jim.shamlin.com

5: The Science of Sales Training

Companies are estimated to spend $4 to $7 billion per year on sales training, about 60% of which is internal (product updates), but the training provided is largely ineffective. The same source reports that 90% of companies say training doesn't provide any "lasting value." The authors suggest that their scientific approach to training will yield better results.

Why Sales Training Fails

Sales training is done very poorly. Most commonly, it merely orients the trainees to the products and has little to do with selling to customers. In other instances, the training may be a motivational speech. Or it may focus on incidental behaviors such as the way a salesman should dress, how he should speak, even his nonverbal gestures.

The solution selling model, while seemingly improved, focuses on entirely ritual behaviors: learning the company's process and procedures and memorizing scripted dialog in he-says-you-say pairs.

The rituals in which salesmen are trained are often quite bad. The author notes with some irony that many of the phrases used in sales training were lifted directly from actor W.C. Fields, whose salesman character was meant as a farce, ridiculous and over-the-top. That is, he meant to portray a bad salesman, yet many of his lines appear in training and literature for salesmen.

It's also noted that much sales training and literature has a "distinctly religious flavor" based on hope and faith that things will work out. Some of the most recognized names in sales training (Dale Carnegie, Napoleon Hill, and Norman Vincent Peale) all make heavy use of religious themes.

On a more serious note, the patterns in which salesmen are coached are not based on research, but on supposition about what ought to work, and take a one-size-fits-all approach that is neither natural nor effective for customers. Even when an effective pattern is recognized, there's no consideration of the specific circumstances under which it worked.

The fact that these sales rituals failed to produce results did not discourage companies from continuing to use them: the notion was that the rituals needed to be better, or that salesmen needed to stop going off-script.

Another issue was that customers began to recognize sales rituals, and could easily detect when a salesman was on-script because the conversation was unnatural. The result is that the customer feels pushed into an awkward pattern, on top of being pushed toward an unsuitable product. The fact that many sales managers focus on helping their people deal with rejection, and engage motivational speakers to "keep spirits up while sales are down," is a recognition that the practice of sales was designed to fail.

There is also the impact on the morale of salesmen: reducing the job to ritual behaviors made sales a "futile and depressing" job, which led many individuals to avoid entering a career in sales. This wasn't entirely off base: a salesman was held personally accountable for making quotas, at the same time encouraged or compelled to follow rituals that were clearly ineffective.

In pop culture, plays such as "Death of a Salesman" and "Glengarry Glen Ross" depict the utter misery of the sales profession - and while they are works of fiction, many salesmen will attest that they are fairly accurate depictions of their profession. And yet, sales training constantly recycles the same tired material - the same cartoonish notions that Elmer Wheeler trotted out in the 1940s are still being taught today.

It's also mentioned that sales training is highly faddish. The author describes a scenario in which a sales executive picks up a book at the airport, reads it on the flight, and immediately decides that what he has just read is exactly the thing his sales team needs to be successful. When that doesn't work, the executive does the same thing over, with a different book, resulting in a "flavor of the month" approach to training that leaves salesmen frustrated and with the sense that training is a complete waste of time.

Sales training has great potential to have a positive impact, but it must be done properly - and it is not.

All of these factors lead to training that does not match the needs of the sales team, which yields marginal improvement (if any at all) that does not recover the expense of training.

Why Sales Training Doesn't Get Measured

In general, training suffers from a lack of scientific measurement. The only assessment is whether order volume increased - did we get more sales after training than before - and the answer is often "no." If sales happen to go up, the training course is credited; if they do not, the training is blamed - in both cases regardless of any other factor that might have impacted sales.

As indicated above, there is no assessment before training to determine which specific skills are in greatest need of improvement - which means that not only does the company lack the ability to select and implement an effective training program, but there is also no detailed baseline against which the results of training can be assessed afterward.

Alternatively, evaluation is done based on compliance: checking to see whether the employees who received the training are actually following the practices and procedures taught by the course. That is not a bad idea, but it has nothing to do with determining whether the training had a positive effect.

The use of course evaluations has been attempted as a means to determine whether a course was good - but the responses gathered have more to do with whether the trainees enjoyed the training experience. An engaging instructor and entertaining class exercises will result in a high positive rating, regardless of whether the trainees learned any skills.

The most valid form of measurement involves a control group: splitting the sales team into groups that receive training and others that do not, and comparing their sales results afterward (given the same market and similar levels of experience). In practice, however, this is seldom done - either because it is difficult to find groups that are readily comparable, or because there is pressure (from the sales trainers and often from sales management) to train everybody.
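The control-group comparison described here can be sketched in a few lines. All figures below are invented for illustration; the point is that subtracting the control group's lift removes market factors that affected both groups equally:

```python
# Hypothetical sketch: comparing post-training sales lift between a trained
# group and an untrained control group with similar experience and market.
# All sales figures are illustrative, not from the book.

def pct_change(before, after):
    """Percent change in average sales from the pre- to post-training period."""
    avg_before = sum(before) / len(before)
    avg_after = sum(after) / len(after)
    return (avg_after - avg_before) / avg_before * 100

# Quarterly sales per rep (before, after) for each group
trained_before = [410, 395, 430, 400]
trained_after  = [465, 440, 470, 452]
control_before = [405, 420, 390, 415]
control_after  = [418, 430, 398, 422]

trained_lift = pct_change(trained_before, trained_after)
control_lift = pct_change(control_before, control_after)

# The training effect is the lift beyond what the untrained group achieved,
# which controls for market conditions common to both groups.
training_effect = trained_lift - control_lift
print(f"Trained group lift:  {trained_lift:.1f}%")
print(f"Control group lift:  {control_lift:.1f}%")
print(f"Estimated training effect: {training_effect:.1f} points")
```

With these invented numbers, the trained group's lift exceeds the control group's, and the difference - not the raw lift - is the estimate of what the training contributed.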

All of this represents short-term thinking that does long-term damage. If scientific measurement were the rule rather than the exception, it would very quickly become clear which training courses were effective and which were not. Over the long term, this would enable trainers and management to choose effective courses, improve ineffective ones, and develop an effective training program. Until measurement is applied, companies will continue to take a blind approach to training and achieve poor results.

Scientific Measurement versus Theoretical Inference

Many training programs are based on theoretical inference rather than scientific measurement: a trainer speaks to a practice that "should work" based on assumptions that do not prove to be true.

One particular example is companies that offer computer-based training modules (a fad recently revived with the popularity of smart phones and tablet computing): the theory is that busy sales professionals have no time for classroom training and that training will be more efficient or effective if it is delivered in smaller chunks that they can peruse at their convenience. That seems entirely reasonable based on theoretical inference - but is utter nonsense.

The adoption and success rates of computer-based training have been, and continue to be, abysmal regardless of how good the idea seems in theory. The author refers to a study of sales training by the Aberdeen Group, which found that companies that used classroom-based training showed 14% greater success (measured by the number of sales reps who achieved their annual quota) compared to firms that relied on CBT.

While this study demonstrates the need for scientific measurement to decide, develop, and deliver on effective training, its methodology is based on a broader scale than most companies need or are capable of implementing. However, the basic principles can be applied within a smaller firm or even a small sales team.

Applying Science to Sales Training

The first step in applying science to sales training is customer research: you must be aware of consumer behavior (how your customers prefer to purchase) to determine what sales tactics are likely to be effective. Without knowing this, training will teach ineffective and inappropriate tactics, and the outcome will be failure and potential damage to your reputation in the market.

It's noted that consumer behavior tends to be consistent - it is not unchanging, but buyers generally follow the same patterns to solve problems that have been effective in the past. Attempting to impose an unfamiliar process upon them increases discomfort with the buying process.

This research must also be based on the customer's insights, not internal assumptions, and should consider a range of situations rather than assuming that one approach will work with all customers.

The second step involves understanding the current skills of each sales representative. This assessment helps to define the training that is not needed (it's pointless, expensive, and demoralizing to lecture people about things they already know and do well) and better identify areas in which skills improvement can have a significant impact on performance.

This assessment should also be granular: often the only assessment of sales skill is sales results, but this provides no insight and tends to reinforce the notion that sales is an innate ability rather than a trainable skill - that a salesman whose conversion rate is high must be good at absolutely everything and that one whose conversion rate is low must be bad at absolutely everything.

A scientific approach to training requires a firm to benchmark each salesman's present skills before training is delivered, and to reassess those skills afterward to determine whether the training has been effective. The author's own assessment test (SSAT) is an instrument for performing this kind of analysis to gather objective and actionable data.

The net effect of assessment is to demystify the sales process - to abandon the inaccuracies of theoretical guesswork and self-aggrandizing case studies and focus instead on a scientific approach.

The authors present the training process as a sequence of eight steps:

  1. Administer a skill benchmark to determine present skills
  2. Select training that will address the need identified by that benchmark
  3. Consult with salesmen individually to determine whether the training is a good match for their individual needs
  4. Conduct the training for the entire team
  5. Coach individual employees after training
  6. Reassess skills (6-12 months later) to determine if the training was effective
  7. Provide additional training or coaching
  8. Conduct a second reassessment (12-24 months later) to track progress and identify needs for future training

Many firms skip the analysis and personal coaching. Where analysis is skipped, there is only a vague sense (based on sales figures) whether the training was effective. Where personal coaching is absent, there is only a vague sense of what seems to have worked for the group.

The author presents an anecdote about a sales manager who was looking to provide training on presentation skills. When asked how he came to the conclusion that this skill was lacking, he replied "that's what they asked for." When the authors administered their SSAT skills assessment, it was found that presentation was among the strongest skills the team already had, whereas investigation (asking questions of prospects) and confirmation ("getting a yes") were very low.

Not only had the team misidentified its weakness, but there was an underlying cause for the sense that their presentation skills were lacking. That is, they were quite capable of presenting, but the presentation content was not targeted to the customer and did not lead to an effective sales proposition. In short, presentations were well delivered, but poorly designed.

In this situation, the trainers developed a course that was based on delivering sales presentations, but the content of which was focused more on investigation and confirmation in the context of a sales presentation. The outcome of this was an 18% increase in business in the following year, even in a very tough economy.

If training had focused on presentation skills alone, it is doubtful the firm would have achieved these results.

Customizing Sales Training

Most sales training companies provide a standard course, particularly when a sales course is offered to students from a variety of firms. Even those who claim to have a customized offering will change some of the details in their standard course to appear customized, but the fundamental theories and principles taught remain unchanged.

At best, the audience of such a course must translate the information into something that is useful to their situation; at worst they will assume that it is directly applicable without modification.

Genuine customization requires a course to be tailored to the needs and situation of its audience: it does not merely use different case studies, but adapts the core lesson to the specific needs of trainees. The goal of customization is to ensure that the training is relevant to the students, such that every minute of their time in training is well spent and they can apply the lessons immediately to their day-to-day behavior.

The authors present a five-step process for delivering customized training:

  1. Skills Assessment. Prior to training, the instructor should administer a survey to the students to discover their existing skills, with a goal of discovering what the students actually need to learn. This is done for each individual, though the results may be aggregated to the group (as a whole, or the students that will attend one session).
  2. Research Interviews. Following the survey, interviews can be conducted with four or five students for about 20 minutes to more thoroughly explore the topic being taught. These should involve a cross-section of high, average, and low performers. An interview provides a more flexible means to gather detailed information, such as the students' current situation, the barriers they perceive, and their perspective on what they hope to gain.
  3. Topic Survey. The author suggests a second survey instrument that focuses on the topic, rather than the skills: the survey quantifies the qualitative information gathered in interviews. Ideally, this is done in an anonymous manner to ensure that the staff does not merely agree with the assumptions of management.
  4. Consolidate Findings. The data gathered in the first three steps is consolidated to determine what skills need to be taught. It should provide highly relevant information, but will also uncover information that leads to actions other than training: the data may indicate, for example, that there is a major issue with morale around compensation, and training salesmen to close more deals will not make them feel fairly compensated.
  5. Create Case Scenarios. To develop course content, it may be necessary to conduct additional interviews to develop client-specific case scenarios that will be recognized as immediately relevant. This pertains to the details, rather than the core skills, but using highly relevant details increases student engagement and information retention.

The author follows with two case studies of training, but they are vague and superficial and do not provide any additional insight.