
7: Critical Points, Extremes, and Surprises

The chapter opens with the opening-day incident of the Millennium Bridge across the Thames, a grand public embarrassment for all involved. A great deal of care and attention was given to its construction, but the architects failed to consider that the horizontal force a person exerts when walking, negligible for an individual, becomes significant when many people are walking at once.

As such, when the crowd began to cross the bridge, it shifted as much as seven centimeters side-to-side, just enough for people to feel unsteady: they widened their gait and started matching the pace of others, both of which magnified the effect. No serious injuries resulted and the bridge stood, but it had to be closed immediately for retrofitting to dampen the lateral sway, a major embarrassment for the architects and the British government, especially given the media attention they had drummed up for the grand opening.

Feedback and Resonance

There are fluctuations in most complex systems; the positive and negative forces usually balance out, resulting in overall stability. For example, people walking over a bridge, each at their own pace, exert horizontal forces that cancel one another out and the bridge remains stable; but if they march in time, the forces work in unison rather than against one another, and the results can be disastrous.
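
EN: The phase-cancellation point can be made concrete with a small simulation, which is mine rather than the book's. The walker count, per-step force, and pacing frequency below are invented numbers, chosen only to show how random footsteps largely cancel while synchronized footsteps add up.

import numpy as np

rng = np.random.default_rng(0)
n_walkers = 160                # hypothetical crowd size
force_per_step = 25.0          # hypothetical lateral force per walker, in newtons
freq = 1.0                     # hypothetical lateral forcing frequency, in Hz
t = np.linspace(0, 10, 1000)   # ten seconds of walking

def peak_net_force(phases):
    # each walker contributes a small sinusoidal lateral force with its own phase
    forces = force_per_step * np.sin(2 * np.pi * freq * t[:, None] + phases)
    return np.abs(forces.sum(axis=1)).max()

random_phases = rng.uniform(0, 2 * np.pi, n_walkers)
synced_phases = np.zeros(n_walkers)

print(f"random phases: peak net lateral force ~ {peak_net_force(random_phases):,.0f} N")
print(f"synchronized:  peak net lateral force ~ {peak_net_force(synced_phases):,.0f} N")

With random phases the net force grows only roughly with the square root of the crowd size; with synchronized phases it grows in direct proportion, which is the difference between a stable bridge and a swaying one.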

Consider that every financial market fluctuates because of the optimism and pessimism of buyers and sellers - one person seeks to buy a stock because he expects it to increase in value, another is willing to sell it because he believes it will decrease. The motion of the market, up and down, reflects the net effect of these opposing forces. But when a large majority tends to one side, rapid rises or falls can occur - a preponderance of pessimism can crash a given stock, or an entire market.

The same can also be seen in the consumer markets, in terms of fads and fashions - consider that every season there is at least one hot item (Power Rangers, Cabbage Patch dolls, Tickle Me Elmo, etc.) that all shoppers seem to want, and retailers, unable to predict which items will be hot, are unable to satisfy the sudden surge in demand.

The author mentions the freezing effect in liquids - water remains a liquid if you drop the temperature from 34 degrees Fahrenheit to 33, but if you then decrease the temperature one more degree, it freezes solid. This is no great surprise because we know that water freezes at exactly 32 degrees, but a person unaware of that fact might be astonished that a decrease of only one degree caused this dramatic change when earlier one-degree decreases had not.

In many instances, we are in that position: we are completely unaware of the breaking point until we encounter it - or, at best, we have a sense that there is such a point, but no specific knowledge of where it might be. How much you can raise prices or reduce quality before customers stop buying is likewise a matter of degrees: changes can be made slowly, over time, until you reach a point at which there is a mass exodus.

Going back to the Millennium Bridge, the engineering firm tested the bridge after the fact: 156 pedestrians produced no detectable sway and little sense of hazard, but adding just ten more immediately produced a dramatic effect. The way people reacted to the sense of instability only made matters worse - and this reaction could not have been predicted from the physical properties of the bridge itself.

Extremes

With many phenomena, it is presumed that outcomes do not stray far from the average, but the extremes can be significant.

Consider human height: 95% of people are within six inches of the average height - but at the extremes, the tallest person on record was 8'11" and the shortest only 1'10". Any discussion of height generally includes race and nationality, as there are locations where the distribution is skewed as a result of genetics and/or nutrition.

Consider city sizes: Beijing, China, is the world's largest city with nearly 18 million citizens, but there are also cities (legally defined government entities) with populations of less than fifty. This is a much wider range of outcomes.
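
EN: A rough sketch, not from the book, of the difference between these two kinds of distributions: heights behave like a thin-tailed normal distribution, city sizes like a heavy-tailed one. The parameters below (a normal with mean 170 cm, and a Pareto tail) are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# thin-tailed: height-like values from a normal distribution (assumed mean 170 cm, sd 8 cm)
heights = rng.normal(loc=170, scale=8, size=n)
# heavy-tailed: city-size-like values from a Pareto distribution (assumed shape parameter 1.1)
city_sizes = (rng.pareto(a=1.1, size=n) + 1) * 1_000

for name, sample in [("heights (cm)", heights), ("city sizes", city_sizes)]:
    print(f"{name:12s} mean={sample.mean():>12,.0f}  max={sample.max():>14,.0f}  "
          f"max/mean={sample.max() / sample.mean():>10,.1f}")

The normal sample's maximum stays close to its mean, while the heavy-tailed sample routinely produces maxima many orders of magnitude larger than its mean - the same shape of contrast as tallest-person versus largest-city.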

This echoes the point of diversity in crowd intelligence - while the guesses of individuals can be off, the average of all their guesses is often accurate under certain conditions.

But it's also worth noting that diversity fails where humans are involved. While any group of people begins as a diverse group, it is not long before members begin to imitate one another - between the desire to control and the desire to conform - and diversity is replaced, by degrees, with uniformity. At some point, it ceases to be a diverse group of individuals and becomes a homogeneous culture.

There's some experimentation that suggests the diversity in crowds waxes and wanes over time, particularly when they are doing the same thing repetitively and are motivated to achieve a certain goal. An example is the behavior within professions: stock market traders begin with their own ideas and gravitate toward following a strategy that seems successful ... but when that strategy fails, their strategies become diverse again.

The Problem of Induction, Reductive Bias, and Bad Predictions

Induction, reductive bias, and bad predictions are three other problems that arise when dealing with complex systems in uncontrolled circumstances.

Induction

Induction is a method of reasoning that looks to past circumstances to predict what will happen in the future, as opposed to deductive reasoning, which looks to the past to explain why an outcome occurred. Many philosophers have pointed out the problems with this kind of reasoning, but it remains human nature to do it anyway.

Bertrand Russell's example of a chicken illustrates this problem: the chicken is well tended and even pampered for a long period of time, up until the day it is slaughtered. Inductive reasoning would lead the animal to make assumptions about its status based on its treatment, and it would not foresee the conclusion.

Deductive reasoning would first consider the outcome, then seek out its causes, and consider whether present conditions will lead to the same end. (EN: While it's generally agreed to be superior, deductive reasoning is not infallible. The chicken may observe that the lamb is fattened before slaughter and recognize the connection, or it may observe that the horse is well tended and does not meet the same bitter end, and draw the wrong conclusion.)

The plight of the chicken - a catastrophe that follows a long period of prosperity - has occurred repeatedly in the business world. Consider that the companies that crashed during the recent financial crisis had been doing quite well for a long period before and were caught quite unawares, expecting the good times would last forever. The good outcomes reinforced their perceptions that their strategies were sound and everything was fine. Until it wasn't.

In general, people are quicker to verify their own ideas than to falsify them, and tend to ignore observations that do not agree with their preconceptions. The fact that black swans exist does not cause us to abandon the belief that all swans are white - we see them as outliers. It is still correct to say that most swans are white, and to maintain that the chances of any given swan being white are very high, but we cannot completely dismiss the possibility that it may not be.

Another theory from psychology is that people tend to identify things in a certain way and assume they are limited to their intended function. We consider a paper clip to be a device for holding paper together, and are mentally blocked from discovering the myriad other uses it would have if it were regarded merely as a metal wire that can be straightened and reshaped. Those who find "ingenious" uses for things are merely turning off their preconceptions.

Reductive Bias

Reductive bias is another common mistake when dealing with complex systems. This is the tendency of people to oversimplify things - to suggest that "X causes Y" without considering any other factor that might contribute to the occurrence of Y, or any other outcome that might arise as a consequence of doing X. Many phenomena, especially those involving human beings, are complex and nonlinear, and reducing them to simple linear models turns a blind eye to critical factors.

Any system that imposes a bell curve on natural phenomena brings with it a reductive bias, as it overlooks the skew and kurtosis that often occur in nature. Financial analysis suffers greatly from this: many analysts speak of alpha, beta, standard deviation, and other statistical measures that assume a bell curve - and many disastrous investment decisions are the result of using a bell curve to apply a convenient but unrealistic model to reality.
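
EN: A small sketch of this point, mine rather than the author's. It assumes, for illustration only, that daily returns follow a fat-tailed Student-t distribution, then compares how often four-sigma days actually occur in that sample with what a bell-curve model would predict.

import math
import numpy as np

rng = np.random.default_rng(2)
n_days = 250 * 40                                    # roughly forty years of trading days
returns = rng.standard_t(df=3, size=n_days) * 0.01   # assumed fat-tailed daily returns

sigma = returns.std()
observed = np.mean(np.abs(returns) > 4 * sigma)      # how often a 4-sigma day actually occurs
bell_curve = math.erfc(4 / math.sqrt(2))             # P(|Z| > 4) if returns were truly normal

print(f"observed frequency of 4-sigma days: {observed:.3%}")
print(f"bell-curve prediction:              {bell_curve:.5%}")

In runs like this the fat-tailed sample produces extreme days orders of magnitude more often than the bell curve allows for - the gap between the convenient model and the world it is meant to describe.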

People cling strongly to convenient models that do not work. It's noted that renowned mathematician Benoit Mandelbrot published an insightful argument in 1964 about the misuse of bell curves in financial analysis. Leading economists at the time panicked because his findings meant "all of our statistical tools are obsolete" - they did not find fault with his argument. To this day, financial management classes are still being taught using bell-curve analytics, not because they are accurate, but because they simplify the world and make the math more tractable.

Bad Predictions

The third mistake is blind belief in predictions - specifically, the tendency to accept or reject a prediction wholesale: if there is a 51% chance something is true, we act as if it will certainly be true, and fail to set contingencies for how to adjust the plan if it turns out not to be.
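
EN: A back-of-the-envelope illustration, with hypothetical payoffs, of why a 51% prediction should not be treated as a certainty.

# hypothetical payoffs, chosen only to illustrate the point
p_up = 0.51                                    # the prediction: 51% chance things go our way

all_in = p_up * 100 + (1 - p_up) * (-100)      # commit fully: big win if right, big loss if wrong
hedged = p_up * 60 + (1 - p_up) * (-10)        # keep a contingency: smaller win, capped loss

print(f"expected value of acting as if the prediction were certain: {all_in:+.1f}")
print(f"expected value of the plan with a contingency:              {hedged:+.1f}")

With these made-up numbers, the all-in plan barely breaks even on average, while the hedged plan does far better - the 49% case matters.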

Social Influence

The author departs from the notion of bad predictions abruptly, turning to an experiment with a Web site called "Music World" in which participants were able to listen to and rate 48 songs by unknown bands, with an option to download the ones they liked. Over 14,000 people participated (mainly teens in the US). 20% of the participants were placed in an environment where they were unable to see the ratings other people had given, and were considered a "control group" whose ratings were uninfluenced by others. The other 80% were broken into eight groups that could see what other people were saying.

As expected, there were huge fluctuations as a result of social influence: a participant was more likely to give a rating in line with those other raters had already given. Moreover, each of the eight groups skewed differently: a song rated near the middle by the control group (26th of 48) was the number-one song in one group and number forty in another.
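
EN: A toy simulation, not the researchers' actual experiment, of how social influence alone can produce this kind of divergence: if listeners in each independent "world" favor whatever already looks popular, identical songs end up with very different winners purely because of early random luck.

import random

def run_world(n_songs=48, n_listeners=1750, seed=0):
    rng = random.Random(seed)
    downloads = [1] * n_songs                  # every (identical-quality) song starts equal
    for _ in range(n_listeners):
        # each listener favors songs that already look popular
        song = rng.choices(range(n_songs), weights=downloads)[0]
        downloads[song] += 1
    return downloads

for world in range(3):
    counts = run_world(seed=world)
    top = max(range(len(counts)), key=counts.__getitem__)
    print(f"world {world}: top song is #{top} with {counts[top]} downloads")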

Pattern Biases

There was also some skew as a result of which songs a person listened to first - which is also a common phenomenon. The author mentions Polya's urn (EN: which is mathematical rather than psychological), an experiment in which an urn begins with two balls, one red and one blue, and the subject draws a ball at random to determine what color to add, repeating until the urn is full. Naturally, the outcome is skewed toward the color of the first ball drawn (if you draw red first, the next draw is made at random from one blue and two red, making it 67% likely the next will be red as well).
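
EN: A quick simulation of the urn just described (my own sketch, not from the book), showing how strongly the early draws lock in the final mix - the final fraction of red varies wildly from one run to the next.

import random

def polya_urn(n_draws=1000, seed=0):
    rng = random.Random(seed)
    urn = ["red", "blue"]
    for _ in range(n_draws):
        urn.append(rng.choice(urn))            # draw a ball at random, add another of the same color
    return urn.count("red") / len(urn)

for seed in range(5):
    print(f"run {seed}: final fraction of red balls = {polya_urn(seed=seed):.2f}")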

Granted, the urn experiment is purely random and emotionless - people are far more emotional, but even emotions tend to follow trends as people get in a groove. In the Music World experiment, people formed a judgment of what they liked or disliked based on the first few songs they heard and carried these perceptions forward, evaluating additional options under the influence of their early experiences.

The author ties this back to market behavior in which a clearly inferior technology becomes the market favorite: people prefer QWERTY keyboards to Dvorak, VHS to Betamax, IBM to Apple, Blu-ray to HD DVD, etc., because of their early experiences. You cannot predict which will win based on product quality, history, or even market research - when it hits the market, things will go as they will.

Suggestions

Do not rely on statistical measures without inspecting the histogram. It's not sufficient to know the average or standard deviation - you must also consider the shape of the curve and the outliers, in order to sense whether the bell-curve assumption holds true and whether the outliers will break your predictive model.

Be aware of social influences. If you assume that every person will behave randomly, and that positives and negatives will cancel one another out, you may be setting yourself up for a catastrophe when people begin marching in time with one another - as is their tendency in any situation where one person can observe the behavior of others.

Beware of forecasters. We generally feel safe following someone else's prediction because our reliance on their expertise gives us someone else to blame when things go wrong. In the final analysis, the fault lies in failing to consider the accuracy of the prediction (and the consequences often fall on those who relied on a bad prediction, not on those who made it).

Consider the full range of outcomes. We are inclined to plan for a single, expected, average scenario and to fail to plan for any other outcome. A good plan must also provide contingencies that will mitigate the downside and exploit the upside, should those outcomes occur. Betting too much on the expected outcome, and devoting too many resources to making expectations come true, has proven disastrous.

Also consider the negative consequences of a highly positive outcome. Those who provide contingencies are often focused on the pessimistic "worst case" scenario and assume that if things turn out better, it is a pure windfall. Consider Christmas 2000, when a handful of aggressive start-ups (eToys, toys.com, and a few others) had done so well at advertising that they were overwhelmed with orders they could not fill, and consumer disappointment and outrage resulted. The wonderful surge in popularity ended in bankruptcy in 1Q2001 because they failed to plan for a better-than-predicted response.