1: The Outside View
The chapter introduction goes into considerable detail about "Big Brown," a horse that seemed certain to win at Belmont after winning a couple of other major races, but that came in dead last. His owner, trainer, sponsor, and many fans were extremely confident - but handicappers who used a statistical model (the Beyer Speed Figure, which considers track, weather, and time of day) accurately predicted the outcome, booked as many bets as people cared to make, and raked in a major profit.
Overconfidence is a common problem: people think a great deal of themselves. Consider a 1976 survey of college students: 85% considered themselves better than average at getting along with others, 70% above average in their ability to lead, 60% above average at sports, and 80% better-than-average drivers. A separate study asked subjects to predict how well they would score on a test - and those who ended up in the bottom quartile dramatically overstated their ability beforehand.
People also tend to think optimistically about the future, assuming that what happens to them (good or bad) will be better than average over their lifetimes. (EN: There may be some covariance here, as people tend to regard the future as controllable, or at least influenced by themselves, rather than driven by random events.)
Finally, there is the illusion of control. Even when people are in a situation where the outcome is completely random, they hold the most absurd superstitions. Returning to the casino example, consider the peculiar behaviors and taboos around a craps table. More specifically, an experiment was done in which some participants in a lottery were permitted to choose specific numbers while others were assigned random numbers. The former group expressed a greater sense of their chances of winning; the latter showed no appreciable bias.
The author's own career experience in investment management is likely an excellent example of the illusion of control in the professional world. In repeated studies, it has been demonstrated that actively managed portfolios do not perform as well as market indexes. In spite of years of evidence, there is a constant belief that with the right research and theories an investor can regularly "beat" the market. Financial professionals believe in their ability to do this, and investors who pay their fees implicitly agree.
Belief in Success Where Others Have Failed
People in a wide range of professions lean on the "inside view" to make decisions that get poor results - not because they are negligent or malicious, but because the three illusions lead them to believe they are making the right decisions. It's important to be aware of your biases, and to consider their tendency to influence your perception.
Consider mergers and acquisitions: an acquisition is a very significant decision, and no one goes into it thinking it will be harmful - yet in two-thirds of cases the acquirer's stock price goes down shortly afterward, just when insider optimism is at its peak. Even those who are aware of this believe they can beat the odds, or that something is different about their particular situation.
Basic math explains why most companies fail to add value when they acquire another firm: even when the deal works, the increase in cash flow is often outweighed by the premium paid for the target. The acquirer's stockholders expect to earn more from the combined company than they would have earned by holding shares in the two separate entities, while the target's shareholders who sell before the merger capture the premium as profit - so unless the gains exceed the premium, the premium constitutes a loss in market value for the acquirer. This loss may be recovered in time, but the short-term impact is consistently negative.
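A minimal back-of-the-envelope sketch of that math, in Python, using purely hypothetical figures for the target's standalone value, the premium paid, and the expected synergies:

    # Hypothetical acquisition arithmetic (all figures invented for illustration, in $M).
    target_value  = 1_000                 # target's standalone market value
    premium_paid  = 0.30 * target_value   # 30% premium paid above that market value
    synergy_value = 200                   # present value of the expected increase in cash flow

    # The target's shareholders capture the premium the moment they sell.
    seller_gain = premium_paid

    # The acquirer creates value only if the synergies exceed the premium.
    acquirer_gain = synergy_value - premium_paid

    print(f"Seller gain:   {seller_gain} $M")    # +300
    print(f"Acquirer gain: {acquirer_gain} $M")  # -100: value destroyed despite real synergies

Even with genuine gains in cash flow, the acquirer's shareholders come out behind whenever the premium exceeds those gains - which matches the short-term loss described above.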
Mistaking Anecdote for Evidence
Anecdotes are often presented in place of evidence where hard evidence does not exist or is inconclusive. The author uses the example of specious medical treatments, which rely on reviews and testimonials to suggest their validity - the companies that offer such treatments believe, rightly, that desperate patients will be willing to try anything and will take any omen as proof.
The author describes a study in which people were presented with treatments for a fictitious disease. For one treatment, they were told it was 50% effective; for the other, they were given testimonials that combined positive, neutral, or negative anecdotes. Nearly 80% of subjects preferred a treatment accompanied by a positive anecdote over one that was known to be 50% effective.
The healthcare profession is concerned about patients who are misinformed - whether by family, friends, media, or the Internet. Doctors sometimes find anecdotes to be an effective way to get the point across to patients without using statistics and jargon, but patients are often moved by anecdotes even when they come from dodgy sources.
Late and Over Budget
People find it hard to estimate how long a job will take and how much it will cost - and they consistently underestimate. This also goes back to the fallacies of self-confidence and optimism; "the planning fallacy" is a subject of much psychological study. What's been found is that only about 25% of people consider data at all (base-rate data from their own experience or that of others who have done similar tasks); most simply go on their expectations - and they expect they will work efficiently and encounter no obstacles.
One experiment (Buehler) asked students how long it would take to complete an assignment at three levels of confidence - e.g., a 50% chance of being done by Monday, 75% by Wednesday, and 99% by Friday. The outcome was that only 13% finished by their first estimate, 19% by the second, and 45% by the third. The study concluded that students' overconfidence was a problem even when they made a highly conservative forecast.
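To see how large the gap was, a small illustrative sketch comparing the stated confidence levels with the completion rates summarized above (the percentages come from the paragraph; everything else is just presentation):

    # Stated confidence vs. actual completion rates, as summarized from the Buehler study above.
    forecasts = [
        ("done by Monday",    0.50, 0.13),
        ("done by Wednesday", 0.75, 0.19),
        ("done by Friday",    0.99, 0.45),
    ]

    for label, stated_confidence, actual_rate in forecasts:
        gap = stated_confidence - actual_rate
        print(f"{label}: stated {stated_confidence:.0%}, actual {actual_rate:.0%}, gap {gap:.0%}")

Even the 99% forecast - meant to be nearly a sure thing - was met less than half the time.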
An interesting twist: while people are poor at guessing their own completion rate, they tend to be pretty good at assessing that of other people. They are more likely to look to data sources and consider the task carefully when assessing how other people work. (EN: No source or statistics here - it seems reasonable but anecdotal.)
(EN: My sense is a larger problem with estimating projects - people are trained or encouraged to lowball projects to get a sponsor or customer to sign off, and then come back later once money has already been spent and ask for more, knowing the person will feel a sense of commitment. In this way, you can get people to accept an outlay that might have scared them off if they knew in advance how much it would really cost. It's disingenuous, but it is a very common practice.)
How to Incorporate the Outside View into Your Decisions
Getting an "outside view" can be helpful in making estimates more accurate.
A first step in doing so is to find a reference class, a group of situations that are similar to the decision that you face. Chances are, someone has done something similar to what you are about to do, and getting data from them can help set expectations that are more reasonable.
Next, assess the distribution of outcomes. You will likely find a wide range - and rather than distill it to an average, consider the extremes of success and failure. One example is survival rates for diseases such as mesothelioma: a report might indicate that the average patient lives only eight months after being diagnosed, but that figure fails to consider the age of the patient and how far the condition had progressed by the time of diagnosis. A young person whose disease is caught in the early stages should not be misled into thinking that the average is a reasonable expectation for himself.
Next, make a prediction. With the data you have gathered and an awareness of the factors that influence the range of outcomes, you are in a better position to make an accurate forecast. Express your chances of success as a range of outcomes rather than a single prediction.
Finally, assess the reliability of your prediction and fine-tune it. In some instances, the predictive model is very weak - e.g., weather forecasters are generally accurate in predicting temperature, though cloud cover and precipitation still seem to escape them. The less accurate the predictive model, the more the forecast should be adjusted toward the mean or other relevant statistical measures.
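Putting the four steps together, a minimal sketch in Python - the reference-class data, the reliability weight, and the project-duration framing are all invented for illustration, not taken from the book:

    import statistics

    # Step 1: reference class - durations (in weeks) of similar past projects (hypothetical data).
    reference = sorted([6, 7, 8, 9, 10, 10, 11, 12, 14, 18, 24, 30])

    # Step 2: assess the distribution of outcomes, not just the average.
    base_mean = statistics.mean(reference)
    low, high = reference[1], reference[-2]   # rough stand-ins for the 10th/90th percentiles
    print(f"Reference class: mean {base_mean:.1f} weeks, median {statistics.median(reference)} weeks")

    # Step 3: express the prediction as a range rather than a single number.
    print(f"Plausible range based on similar projects: {low} to {high} weeks")

    # Step 4: shrink the optimistic inside-view estimate toward the reference-class mean,
    # in proportion to how reliable the forecasting model has proven to be.
    inside_estimate = 6    # the gut-feel "inside view" guess
    reliability = 0.3      # weak track record -> lean heavily on the base rate
    adjusted = base_mean + reliability * (inside_estimate - base_mean)
    print(f"Adjusted point forecast: {adjusted:.1f} weeks")

The weaker the model's track record (the lower the reliability weight), the more the final forecast collapses toward the reference-class mean, which is exactly the adjustment described in the last step.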
The main lesson to learn from the difference between the inside view and an outside one is that many decision makers tend to assume uniqueness, whereas the most accurate prediction models are often based on the assumption of sameness. While we should not become slaves to statistics and attempt to follow the most risk-free decision models, the numbers should not be ignored.