
3: Expert Failure

The author opens with a few incidents in which a group of uninformed individuals came to a more accurate decision than people who are regarded as experts.

What Experts Do Poorly, and What They Do Well

As the amount of data collected and the computing power to process it grow, it's being discovered that expertise isn't all it's cracked up to be. Cases like these demonstrate that aggregating the opinions of the masses can be more accurate than relying on a single person with a deep level of knowledge.

In general, experts are valuable in areas in which the principles are unclear and information is lacking - but in instances in which there are limited inputs and limited outcomes, statistical models are more reliable. (EN: What I find a bit ironic is that people, apparently including this author, fail to consider that statistical models are built by experts - gathering and analyzing information is not an alternative to expertise, but a method through which expertise is applied.)

The author interjects another anecdote: Harrah's Casino performed a detailed analysis of its revenues and discovered that it is not the high-stakes gamblers who drive its profits, as is often assumed in the gaming industry - instead, most of its profit was generated by the pensioners at the slot machines. While they made lower wagers, there were far more of them and they visited far more often, so they contributed the most over time. This enabled Harrah's to target the low end of the market while its competitors remained focused on the more glamorous but less profitable whales.
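
The arithmetic behind this insight is simple. A back-of-the-envelope sketch, using entirely invented figures (the book does not supply Harrah's actual numbers), shows how stake size can be swamped by headcount and visit frequency:

# Invented figures: annual contribution is
# players x average loss per visit x visits per year.
whales = 500 * 10_000 * 4        # few players, large losses, rare visits
pensioners = 200_000 * 50 * 30   # many players, small losses, frequent visits
print(whales)                    # 20,000,000
print(pensioners)                # 300,000,000 -- the "low end" dominates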

The folly of expertise can be seen in the regularity with which experts disagree vehemently. Experts at the extremes of an argument will come out with wildly different forecasts: if their expertise were valid, there would be far less variance. Experts are often enamored of their own theories and out of touch with reality.

And yet, experts do well with rules-based problems that have a wide range of outcomes, presumably because they are better than computers at applying organic logic to eliminate bad choices and at perceiving creative connections between seemingly unrelated bits of information. They also fare far better than computers in decisions that require subject-matter knowledge.

The author allows that human experts may be better than "computers and crowds" in three capacities.

(EN: This seems to follow specious logic - that a hypothesis must be accepted as true unless you can prove otherwise. It would be interesting to see it turned around, specifically, if the author were to assume expertise is valuable except in situations where there is a rational argument for computers or crowds. Such an analysis is not performed, however. So while I'm inclined to agree with this argument, I do think that this particular "proof" is based on a logical fallacy, and as such it cannot be assumed that these are the only three situations in which human expertise surpasses statistics and surveys.)

The Wisdom of the Crowd

It's suggested that surveying large numbers of people can produce a more accurate result than asking a single expert - but crowds can also be very wrong.

The author cites an experiment in which students were asked to guess the number of jellybeans in a jar, and the average of the students' guesses was off by only three percent. However, the average individual guess was off by 63% - the errors of the high and low guesses simply evened out.

This is the "diversity prediction theorem," which maintains that a group will always predict more accurately than its average member. The author stresses "always," but then concedes that in the jellybean experiment, about 3% of participants were more accurate than the group average.
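
The theorem is an algebraic identity: the squared error of the crowd's average equals the average individual squared error minus the diversity (variance) of the guesses, so the crowd can never do worse than its average member. A minimal sketch, with invented guesses and an invented bean count, verifies the identity:

# (crowd error)^2 = (average individual squared error) - (diversity of guesses)
guesses = [400, 650, 700, 900, 1100, 1500]  # hypothetical individual guesses
truth = 850                                 # hypothetical actual bean count

n = len(guesses)
crowd = sum(guesses) / n                                # collective prediction
crowd_error = (crowd - truth) ** 2                      # squared crowd error
avg_error = sum((g - truth) ** 2 for g in guesses) / n  # mean squared individual error
diversity = sum((g - crowd) ** 2 for g in guesses) / n  # variance of guesses around the mean

# The identity holds exactly: the crowd never does worse than its average
# member, though any single member may still beat the crowd.
assert abs(crowd_error - (avg_error - diversity)) < 1e-6
print(crowd_error, avg_error, diversity)  # 625.0, ~125416.7, ~124791.7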

(EN: What's lacking in the jellybean experiment is a control and test group - take the guesses of a group of average people and compare them to the average guesses of an equally sized group of mathematicians. If the mathematicians were no better than average, the theory would hold more water.)

However, this only holds true when the subject of their estimation is relatively familiar to them - guessing the number of beans in a jar requires no specialized knowledge and is a skill that most adults in the US have to some degree.

It's ultimately conceded that collectives cannot solve all problems - if your plumbing system needs repair, you are better off with a plumber than with the guesses of a group of people who know nothing about plumbing.

Trusting Intuition

The author takes a dim view of the present fad of managing by intuition. "Intuition does not work all the time," and in some instances it can be dangerous to rely on it.

Intuition is largely based on our experiences and knowledge - it is automated decision-making that performs well in rigid systems with well-defined parameters. As a specific example, consider the master chess player: his intuition does not arise from thin air, but from past experience of the game - knowing various patterns that he has seen again and again, or may even have memorized by rote. He is acting on this knowledge and matching it to a set of circumstances.

This is very different from the process by which the mind handles new information and new situations, which requires a decision-maker to be more observant and deliberate - in some instances abandoning established rules that would otherwise lead to a solution that ignores the reality of the present situation.

Intuition falters in unfamiliar territory - and especially in the current environment of constant change, it is "losing relevance in an increasingly complex world."

Systematic Mass Failure

While the author has spent this chapter making a case for statistics and crowds, he concedes that "They do not warrant blind faith."

In terms of statistics, the author refers to the mismatch problem, in which factors that appear highly correlated are not at all causally related; without human intervention, blind faith in the numbers can lead to very bad outcomes.
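
A minimal simulation (invented data, not the author's) shows how such a mismatch can arise: two measures driven by a common hidden factor correlate strongly even though neither causes the other.

import random

random.seed(1)
hidden = [random.gauss(0, 1) for _ in range(10_000)]  # unobserved common factor
score = [h + random.gauss(0, 0.3) for h in hidden]    # hypothetical test score
result = [h + random.gauss(0, 0.3) for h in hidden]   # hypothetical outcome

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    var_x = sum((a - mx) ** 2 for a in x) / n
    var_y = sum((b - my) ** 2 for b in y) / n
    return cov / (var_x * var_y) ** 0.5

print(corr(score, result))  # roughly 0.9, with no causal link between them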

One example is the process by which sports teams (particularly in the NFL) rank and rate college players based on their statistical similarity to professional players, rendering "combine" scores based on various abilities - yet no dependable algorithm has been found that can predict their professional performance. It's alleged that hockey and baseball combines have suffered the same lack of effectiveness. Similar uses of quantitative analysis in other professions (assessing educators, attorneys, and policemen) also fail. These tests meet all the standards of quantitative analysis, but they simply measure the wrong things.

Likewise, unchecked devotion to the wisdom of crowds is a dangerous folly when collective errors do not average out but instead magnify one another. Human behavior in economic markets demonstrates the effect of collective error - stock market crises and financial system collapses are functions of crowd behavior - and anyone who has ever been part of a committee or working team can speak of the inefficiency and compromise that occur when people work at odds with one another.
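
The statistical point can be made concrete with simulated numbers (again mine, not the author's): independent errors shrink as the crowd grows, but a bias shared by the whole herd survives aggregation at any scale.

import random

random.seed(2)
truth = 100.0
n = 10_000

independent = [truth + random.gauss(0, 20) for _ in range(n)]
herded = [truth + 15.0 + random.gauss(0, 20) for _ in range(n)]  # shared +15 bias

print(abs(sum(independent) / n - truth))  # ~0.2: individual errors cancel out
print(abs(sum(herded) / n - truth))       # ~15: the collective error persists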

Steps to Consider

No methodology is infallible in all situations, so you must match the problem you face with the most appropriate solution, carefully considering what kind of decision you are making and whether it is suited to a particular approach. Or better still, use multiple approaches.

Question the methodology. What distinguishes an expert opinion from an inexpert one is not who the person is, but how they think and the methodology used to formulate their prediction. Be acutely attuned to the difference between deep expertise (knowing one thing well but lacking breadth of knowledge) and broad expertise (knowing a lot of different things with deep knowledge of none). It's suggested that breadth of knowledge usually leads to more accurate predictive ability.

Leverage data and technology whenever they are available and applicable. Most organizations have vast storehouses of information that is already available and highly relevant, but they fail to leverage it. This becomes obvious in the wake of a disaster, when it is discovered that the information that would have prevented it, or enabled a faster and better response, was simply not considered.