Chapter 12: Decision Making and Reasoning
Some basic definitions:
- Judgment is the process of evaluating choices or opportunities with the intent of taking action
- Reasoning is the process of evaluating information to assess a fact or perceive a relationship, either deductively (from principles) or inductively (from evidence)
Judgment and Decision Making
It is said that the rational mind is mankind's primary means for survival - but more specifically, it is the ability to exercise judgment and make decisions that guides our course of action. Our welfare, and in some instances our survival, depends on these abilities.
Classical Decision Theory
The classical model of decision making derives chiefly from economic decisions, and as such tends to demonstrate the strengths of that perspective.
Classical decision-making relies on some basic assumptions:
- Decision makers are aware of all possible options
- They are aware in advance of the outcome of each decision
- They are sensitive to subtle distinctions among decision options
- They are fully rational in regard to their choice of options (i.e., they choose to maximize value)
Decisions are largely boiled down to simple mathematics: the value of the outcome times the probability of achieving that outcome is seen to be the value of taking the option, and the option with the highest value (less cost) is the correct decision.
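The arithmetic above can be sketched directly. This is a minimal illustration of the classical model, not any particular author's formula; the option names and figures are hypothetical.

```python
# Classical decision model: worth of an option = outcome value times the
# probability of achieving it, less the cost of pursuing the option.
# The options and figures below are hypothetical.
options = {
    "option_a": {"value": 1000.0, "probability": 0.50, "cost": 100.0},
    "option_b": {"value": 400.0,  "probability": 0.90, "cost": 50.0},
}

def expected_worth(opt):
    """Expected value of the outcome, net of the cost of taking the option."""
    return opt["value"] * opt["probability"] - opt["cost"]

# The "correct" decision under the classical model is the maximum:
# option_a: 1000 * 0.5 - 100 = 400; option_b: 400 * 0.9 - 50 = 310
best = max(options, key=lambda name: expected_worth(options[name]))
```

The model's assumptions are doing all the work here: the values, probabilities, and costs must all be known in advance for the maximization to be meaningful.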
Where the outcomes in question cannot be monetized, the classical model turns to "subjective expected utility theory." The "utility" of an outcome can vary according to the decision maker, but generally is guided by increasing pleasure while minimizing pain.
Additionally, where probabilities cannot be objectively quantified, the same theory deals with subjective probability, which is an individual's assessment of probability (regardless of the statistical probability of outcomes).
For example, consider the options available to a job seeker. The value of salary offers from different prospective employers can be assessed mathematically, but the job seeker might also wish to consider the location, some of which might be quantifiable (the cost of living, precise distance from existing friends and family) but much of which is not (whether the candidate likes living in a given climate, culture, density of population, etc.)
The same can be said of consumers in purchasing goods or services: the cheapest alternative that satisfies their basic functional need is seldom the one they choose - but instead they evaluate unquantifiable benefits (their trust for the vendor, their appreciation of a product's style, etc.)
For most decisions, there is no single option that is perfect for all people; even when decision makers are entirely rational, their circumstances will make one option better suited to them than another.
There has been some attempt to assess individual decision-making based on a few simplistic factors:
- The consideration of known alternatives (given that they do not know of all possible alternatives)
- The limited amount of information about the options and their outcomes
- The degree to which they value specific features and benefits of each alternative
- The individual's willingness to pay a certain cost and assume a certain amount of risk
- The subjective assessment of the probability of each option's success
- The degree to which they are willing to invest the time in considering these factors in making a decision
While all of this sounds great in theory, there are actually few decisions to which individuals apply a meticulous process of reasoning.
"Satisficing" is a portmanteau of "satisfying" and "sacrificing" - in one sense, it entails making the best of limited resources, and in another sense, it means accepting that our choice of one option often precludes our choice of other options. In general, satisficing means selecting an option that meets our basic requirements to an adequate degree.
Aside from limited resources and exclusive opportunity, this may also be the result of a truncated decision-making process: rather than considering all options, we consider only a few and exercise the first option that seems acceptable (even though, had we invested more time in deliberating, we might have been more pleased by another option).
In that sense, satisficing is an energy-saving endeavor that focuses on minimizing the cost of a decision rather than maximizing the benefit.
Elimination by Aspects
Another approach to decision making involves considering the aspects of a decision or an outcome to eliminate choices that are not appealing in terms of the most important aspect or aspects.
In essence, it is criteria-based decision making, in which we define the various criteria we wish to satisfy and eliminate options that do not meet them to a sufficient degree, in the hope that the process of elimination will leave us with a single option, which makes the decision clear.
This is another form of cognitive shortcutting in that it disregards the possibility that an option may be desirable because it satisfies other criteria, or satisfies a "lesser" criterion to a greater degree than those we consider to be important.
(EN: My sense is the energy-savings occurs in the evaluation of criteria, arbitrarily accepting that something is important while failing to consider why it is important. When you shop for a vehicle, you may initially want one that gets 20 miles to the gallon, but fail to consider whether mileage is really important to you, or whether a vehicle that gets 19 mpg might be better if it is superior in other regards.)
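The elimination-by-aspects procedure can be sketched as a sequence of filters applied in order of importance. The vehicles, attributes, and thresholds below are hypothetical illustrations, not data from the text.

```python
# Elimination by aspects: criteria are applied in order of importance,
# and any option failing a criterion is discarded. The process stops
# as soon as a single option remains. All data here is hypothetical.
vehicles = [
    {"name": "sedan",  "mpg": 32, "price": 24000, "seats": 5},
    {"name": "truck",  "mpg": 19, "price": 31000, "seats": 3},
    {"name": "hybrid", "mpg": 48, "price": 27000, "seats": 5},
]

# Each criterion is a predicate; list order encodes importance.
criteria = [
    lambda v: v["mpg"] >= 20,       # most important aspect first
    lambda v: v["price"] <= 25000,
    lambda v: v["seats"] >= 5,
]

remaining = vehicles
for meets in criteria:
    remaining = [v for v in remaining if meets(v)]
    if len(remaining) == 1:         # elimination ends once one option is left
        break
```

Note how the shortcut described in the editorial note is visible in the code: the truck is eliminated by the mileage criterion alone, regardless of how well it might score on every other aspect.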
Heuristics and Biases
In general, people seek to conserve energy in the decision-making process, sacrificing the efficiency and effectiveness of the outcome of a decision even though they are aware that they are taking shortcuts (heuristics) and that there are flaws in their reasoning (biases).
Representativeness is a heuristic that gauges probability according to the similarity of a sample to a population and the degree to which we recognize the salient features of a process. Consider how, in a series of coin tosses, we expect tails to occur after a string of three heads: we understand the probability is 50/50 on each toss, yet feel that the next few tosses are more likely to come up tails in order to balance the equation. Likewise, we tend to believe that in any group of 366 people, two of them will have the same birthday - which is certain, and in fact such a group will typically contain many matching pairs, not just one, unless it is purposefully constructed.
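The birthday intuition can be checked with elementary probability: the chance that a group of n people contains at least one shared birthday, assuming 365 equally likely days and ignoring leap days.

```python
# Probability that at least two of n people share a birthday
# (365 equally likely days assumed, leap days ignored).
def p_shared_birthday(n):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

# Far smaller groups than 366 are likely to contain a match:
# p_shared_birthday(23) is already above 0.5, and at n = 366 the
# pigeonhole principle makes a match certain.
```

The representativeness heuristic leads people to dramatically overestimate the group size needed, since a group of 23 does not "look like" a population in which duplicates are likely.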
This is similar in nature to the gambler's fallacy, that independent events are in fact related, though there is less of an assessment of statistical probabilities. In a casino, all games are rigged to have an advantage to the house, but gamblers ignore the advantage and proceed as if the odds were fair.
There is a related "hot hand" fallacy that believes that a person who has a string of successes has a higher probability of remaining successful in the attempts that follow. In some instances, the skill of an individual does in fact mean that they will be more successful, and the experience gained in previous attempts will improve their likelihood of success. But for many events, previous successes do not increase the likelihood of future endeavors ending in success (in the game of craps, a shooter does not "learn" or develop skill from successfully rolling a given number with a pair of dice).
A common problem with the representative heuristic is small sample size. Going back to the coin toss, the number of heads and tails thrown are indeed likely to balance out over the course of a hundred thousand throws, but not over the course of ten.
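The small-sample point is easy to demonstrate by simulation: the proportion of heads converges toward 0.5 over many throws, but can deviate widely over ten. The seed value is arbitrary.

```python
import random

# Fraction of heads in n simulated fair coin tosses.
def heads_fraction(n, rng):
    return sum(rng.random() < 0.5 for _ in range(n)) / n

rng = random.Random(42)        # fixed seed for reproducibility
small = heads_fraction(10, rng)       # can easily land far from 0.5
large = heads_fraction(100_000, rng)  # reliably very close to 0.5
```

Over a hundred thousand throws the fraction sits within a fraction of a percent of 0.5; over ten throws, outcomes like 0.2 or 0.8 are entirely ordinary.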
The author also mentions the "man who" heuristic, in which an anecdote about a single incident or individual that defied statistical probability primes the decision maker to overestimate the possibility of the same unusual outcome in an event that will take place in the near future. "I knew a man who ate a stick of butter every day and lived to be 100 years old" is not evidence that there is a causal relationship between the two behaviors, even if the story happens to be true (and in the case of urban legends, it often is not).
There is also the problem of considering a sample to be representative of a population when it is not. If you consider that the base rate of heart attacks in 40-year-old men is lower than the base rate in 60-year-old men, then you recognize that the two groups should not be treated as one set, nor should you expect what is true of one group to apply to the other.
The availability heuristic causes us to assign a higher probability to the things we perceive and remember. In the most direct sense, this means that if we have witnessed something happen, we assume it is more likely to happen again, even if the incident itself was highly unusual or unlikely.
This also occurs independent of experience: if a person is asked whether there are more words in the English language that begin with the letter R or have R as their third letter, they will find that it's easier to call to mind words that begin with the letter (when in fact, there are more words with R as their third letter - which is also true of K, L, N, and V).
It's noted that heuristics and other forms of mental shortcut can lead to wrong answers in some situations - but they are relied upon because they are very often reliable. That is to say that in some instances the experience we have had, particularly when we have considerable experience or exposure to a phenomenon, gives us an accurate perspective rather than a biased one.
Other Judgment Phenomena
Anchoring is a phenomenon that causes a person to base an estimate on the first information they encounter. For example, when asked to estimate the outcome of an equation without being given time to actually calculate, subjects will provide a higher estimate if the first numbers in the equation are large (8x7x3x2) than if the first numbers are small (2x3x7x8), the reason being that the first few numbers give them a sense that the total will be large.
There are also "framing effects" that skew judgment to our expectations based on the way a problem is presented. Particularly in financial decisions, we are more likely to feel comfortable with a proposition that provides a low rate of return rather than a high rate, even if no other information is known, because of risk aversion and the assumption that higher gains entail higher risk (even without knowing the details).
"Illusory correlation" leads us to be predisposed to believe that there is a connection between events that we believe to be correlated, even to the point of having a causal connection, where no such relationship exists. For example, we have fixed beliefs about behavior correlated to race, gender, age, and religion that cause us to expect people who have certain characteristics to hold certain beliefs and engage in certain behaviors - and our perception of actual behavior is filtered accordingly, such that when we witness something that does not correspond to our beliefs, we tend to ignore it. This similarly affects professional judgment: doctors who repeatedly witness a given phenomenon in patients who have a condition tend to assume the phenomenon is a symptom of that condition, and base later diagnoses upon it.
Overconfidence is another judgment error in which a person assumes that he knows something and ignores any evidence to the contrary. One study (Kahneman) suggests that people who are 100% confident in their ability turn out to be correct only about 80% of the time. In this instance it is certainty in their own expertise that causes them to proceed without giving due consideration. The author suggests that overconfidence occurs when:
- A person assumes his knowledge to be sufficient (he doesn't realize how much he does not know)
- The knowledge they do have leads them to rest on assumptions rather than giving due consideration
- The knowledge they have comes from unreliable or incorrect sources
- People assume that they are correct, and prefer not to think that they might be wrong.
(EN: My sense is the author is speaking purely of cognitive errors in judgment, but I have the distinct sense that overconfidence is often more of a social error. People attempting to "sell" their ideas express confidence even when they do not have it, and once they have expressed an idea they fear losing esteem if they later admit to being wrong. As such, they will project confidence in situations where they are not confident and defend a bad decision even when they realize they were wrong.)
The "sunk cost fallacy" is a decision to continue with a wrong course of action because of the resources that have been committed in the past, rather than the chance of achieving a desired outcome. One example of this is the way in which a person will spend thousands of dollars on repairs for a defective vehicle: a person faced with a single expensive repair will consider whether it would be better simply to buy a new vehicle than perform the repair, but a person who has already paid for multiple expensive repairs will feel that the prospective repair is necessary to get their money's worth from the repairs they have performed in the past. In the same way, a person who pays in advance for something feels committed: fewer travellers cancel their plans when airline tickets are non-refundable because they are seeking to get value out of a cost paid in the past.
Another judgment error is failure to consider associated or opportunity costs. For example, a person will accept a job in a major city because the pay is higher than a similar position in a small town, failing to consider that the cost of living will more than consume the difference in salary (associated cost). Likewise, a manufacturer will tend to operate a factory at a loss rather than closing down the facility and selling off the equipment, even when the latter is more profitable in the long term, because he is focused on one possible way of making profit (running the business rather than selling it) to the exclusion of all others (opportunity cost).
"Hindsight bias" involves skewing our perception of a past event to match a general assessment. If we have a general positive sense, we will accentuate positive outcomes and ignore negative ones, and vice-versa. This will cause our judgment of future events to be unrealistic. It's also found that when people are asked to predict the chances of success without knowing the outcome, their estimates are not accurate - but when told the outcome of a similar situation in the past, they feel that the outcome was "obvious" and should have been easy to predict.
Studies of Judgment
Most of the work on judgment and decision-making focuses on errors and dysfunction, with the goal of finding information that will equip people to make better decisions. This tends to mischaracterize decision-making as bad, when in reality most of the decisions people make turn out for the better. In effect, research in this area is itself biased to presume (and prove) the negative.
A careful decision is often a sound decision. When we have done adequate research from qualified sources, considered the decision making strategies, planned a course of action, and monitored the effects as the plan was being executed, we succeed far more often than we fail.
What is striking about judgment biases is that people with higher intelligence and more extensive domain knowledge are just as prone to errors in judgment as people with less (although a lack of intellect and expertise does increase the errors of non-experts). It is a matter of careful decision making: people fail to fully apply their competence and expertise to decisions, and instead fall into habits or patterns of behavior that emphasize the efficiency rather than the effectiveness of decision-making.
Judgment vs Reasoning
A distinction is drawn between judgment and reasoning:
- Judgment is evaluative - it assesses the likelihood that an action will have the intended results or an item is suitable to a given purpose.
- Reasoning is predictive - it applies principles based on evidence to determine whether something is to be held true, or will truly come to pass.
A key differentiator is that, at the outset of a decision, judgment is used when the factors and outcome are presumed to be known, and reasoning is used when they are unknown and must be discovered.
Deductive reasoning is a method of evaluation that begins with broad principles to arrive at a conclusion about a specific instance.
Propositions form the basis of deductive reasoning: a statement such as "all dogs have fur" is a statement that is taken to be true, and is applied as a premise in the course of analyzing the instance: in identifying a creature we have encountered, we consider that it might be a dog if it has fur.
Naturally, this is complicated because few propositions are universally true (some dogs do not have fur) and because a single condition is often insufficient to arrive at a conclusion (creatures other than dogs also have fur), such that proof by deduction is often a complex and error-prone process.
Conditional reasoning evaluates propositions to arrive at a conclusion, suggesting that if one or more conditions are satisfied, then it follows that another statement must be true. For example, "if students study, then they get better grades."
Deductive validity does not correspond perfectly with truth: a false premise, a bad interpretation, missing information, etc. can undermine the accuracy of a deductive proof.
The "modus ponens" argument takes a rigid form:
- If X is true then Y is true
- We can observe that X is true
- Therefore Y is True
If the second statement instead finds that Y is not true, and the conclusion is that X is therefore not true, it is a "modus tollens" argument - though it has also been shown (Kirby) that the choice of approach depends on the initial reaction to the problem situation - i.e., if people expected the outcome to be negative, they used the tollens approach.
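Both argument forms can be checked mechanically by enumerating truth values: an argument is valid when no assignment makes all premises true and the conclusion false. This is a generic validity checker, not anything specific to the studies cited.

```python
from itertools import product

# An argument is valid when every truth assignment that satisfies all
# premises also satisfies the conclusion.
def valid(premises, conclusion):
    return all(
        conclusion(x, y)
        for x, y in product([True, False], repeat=2)
        if all(p(x, y) for p in premises)
    )

implies = lambda x, y: (not x) or y    # "if X then Y"

# modus ponens: (X -> Y), X  |=  Y
ponens = valid([implies, lambda x, y: x], lambda x, y: y)
# modus tollens: (X -> Y), not Y  |=  not X
tollens = valid([implies, lambda x, y: not y], lambda x, y: not x)
# affirming the consequent: (X -> Y), Y  |=  X  -- a fallacy
fallacy = valid([implies, lambda x, y: y], lambda x, y: x)
```

Here `ponens` and `tollens` come out True while `fallacy` comes out False, which is precisely the distinction subjects in the reasoning experiments struggle to apply.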
Details are provided about an experiment (Wason) that tested the ability to apply ponens/tollens arguments in which subjects were asked to test a statement about a set of cards (if the number shown is even, the letter on the other side of the card is a vowel). The results demonstrated that people can "more easily" (EN: Assuming time and accuracy were considered) qualify a ponens argument than a tollens one. Further analysis (Cheng) also indicates that even individuals who have taken a course in formal logic fail to demonstrate reasoning across various situations.
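The logic of the card task can be made concrete: for the rule "if the visible number is even, the letter on the other side is a vowel," the only cards worth turning are those that could falsify the rule. The specific card faces below are hypothetical examples in the style of the experiment.

```python
# Wason selection task logic for the rule:
# "if the number shown is even, the letter on the other side is a vowel".
# A card must be turned only if what it shows could falsify the rule.
def must_turn(face):
    if face.isdigit():
        return int(face) % 2 == 0          # even number: ponens check
    return face.upper() not in "AEIOU"     # consonant: tollens check

faces = ["4", "7", "A", "K"]               # hypothetical visible faces
to_turn = [f for f in faces if must_turn(f)]   # ["4", "K"]
```

Subjects readily choose the even number (the ponens case) but commonly fail to choose the consonant (the tollens case), and often wrongly choose the vowel, which cannot falsify the rule.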
In more practical situations, people do not isolate their reasoning to the factors given in a problem. Given the statement that a manufacturer would provide a rebate if a person purchases an item, participants expressed doubt that the premise would hold true - expecting that there were unspoken conditions that would need to be satisfied in order to receive the rebate. Others suggested that there were also ways to receive the rebate without purchasing the item.
Further research suggests that individuals use pragmatic reasoning schemas that define the conditions upon which a conclusion may be drawn as a basis for making evaluations - and often misapply the pragmatic rules relating to the use of language to general logic. Which is to say that an individual's understanding of language influences his application of logic. This is evident in differing results that are derived when a problem is simply rephrased, which causes the facts presented to be interpreted differently even by the same subject.
Perspective can also be a significant factor. Given a word problem involving police breaking up a party, people come to different conclusions and apply entirely different pragma to decisions depending on whether they are asked to take the perspective of a police officer or a partygoer.
Deductive reasoning is not only used to evaluate fact, but to consider taking a given course of action. A proposition that "If you do X then Y will be the result" serves as the basis of a decision as to whether to undertake the action X - whether the subject desires to accomplish outcome Y or assesses the probability of the condition (doing X may not result in Y).
The evaluation of a condition is largely based on experience, and can be viewed in an evolutionary manner. If condition X has been successful in achieving outcome Y in the past, a subject will consider the condition to be true and will tend to apply the condition to a broader set of circumstances. This is generally functional, though it does lead to fallacies where condition X is not sufficient to cause Y, condition Z also occurred and was not considered in the pragma, etc.
A syllogism is a form of logical proof in which two statements (the major and minor premises) are considered to yield a conclusion, provided the premises themselves are taken as true:
- You are smarter than your brother
- Your brother is smarter than your uncle
- Therefore, you are smarter than your uncle
The example above is a "linear syllogism" in that the quality being considered (intelligence) presents three points on a quantifiable scale and the comparison is greater/less. So long as the linear relationship runs in the same direction (greater/greater or less/less), a conclusion will stand. However, if you change the second premise to "your uncle is smarter than your brother," then a conclusion cannot be drawn with confidence.
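The linear syllogism amounts to chaining a transitive ordering, which can be sketched directly. The names are taken from the example; the representation of each premise as a (greater, lesser) pair is an illustrative choice.

```python
# A linear syllogism as transitive chaining: each premise is a
# (greater, lesser) pair, and a conclusion follows only when the
# middle term links the two premises in the same direction.
def conclude(premises):
    (a, b), (c, d) = premises
    if b == c:
        return (a, d)      # a > b and b > d, therefore a > d
    return None            # e.g. "uncle > brother" breaks the chain

aligned = conclude([("you", "brother"), ("brother", "uncle")])
# aligned == ("you", "uncle"): you are smarter than your uncle
broken = conclude([("you", "brother"), ("uncle", "brother")])
# broken is None: no confident conclusion when the direction reverses
```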
There are various theories as to the way in which people think through a linear syllogism, none of which strike the author as being "quite right": laying out the terms on a continuum, or visualizing the relationships by mapping abstract qualities such as intelligence onto observable ones such as height. There is no single method that all subjects report, nor even a method that a single subject uses consistently. As such, these methods are likely a secondary process, or an analogy that describes a process that is ineffable.
Another common syllogism is a categorical syllogism, whose premises indicate inclusion or exclusion from a group:
- All dogs are mammals
- All mammals are animals
- Therefore, all dogs are animals
This functions similarly to the linear syllogism, in that the largest category contains the middle category and the middle category contains the smallest - yielding a Venn diagram that resembles an archery target. Were it not so, no relationship between the large and small categories could be assumed.
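The categorical syllogism maps naturally onto set inclusion, which makes the "archery target" structure testable. The membership lists below are illustrative, not exhaustive.

```python
# A categorical syllogism as set containment. Members are illustrative.
dogs = {"beagle", "collie"}
mammals = dogs | {"cat", "whale"}       # all dogs are mammals
animals = mammals | {"frog", "snake"}   # all mammals are animals

premises_hold = dogs <= mammals and mammals <= animals
conclusion_holds = dogs <= animals      # all dogs are animals
```

When the containment runs the other way in one premise (say, "all mammals are dogs"), the chain breaks and the conclusion no longer follows from set inclusion.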
The author likewise describes a number of theories for the way in which people think through a categorical syllogism, but arrives at the same conclusion: there is no single theory that is used by all people in all situations, and the theories themselves represent tactics that may not reflect the actual mental processes engaged in deriving a solution.
There is an aside about the way in which syllogisms demonstrate the function of short-term memory: in order to consider the conclusion a person must hold the two premises in mind and identify their relationship. Even young children show the ability to test a conclusion against two premises.
Some deductive-reasoning problems, and a great many real-world situations, have more than two premises, and there is no tenable theory as to the way in which such problems are approached, except to distill them into a sequence of syllogisms - that is, to determine if D can be derived from premises A, B, and C, it is necessary to dissect the relationship into three syllogisms to test AB, AC, and BC against D. Again, there is no credible or consistent evidence that the human mind actually works that way.
Another observation is about the manner in which the mind deals with abstraction - which is to say, it does so poorly - as evidenced by the efficiency with which subjects are able to confirm a syllogism that deals with qualities that can be represented visually versus those that can only be understood conceptually.
Aids and Obstacles to Deductive Reasoning
Deductive reasoning is subject to the same flaws as any cognitive process, which can lead to inaccurate conclusions; these tend to stem from poor consideration of premises (accepting as true a statement that is false, or not completely true) or a mischaracterization of the relationship between the premises and the conclusion.
The foreclosure effect occurs when we fail to consider all possibilities before accepting a premise as true or an outcome as given. The way in which a premise is phrased may lead us to assume a quality that is false or exclude one that is true.
A confirmation bias occurs when we assume the outcome before fully considering the premises - which leads us to make errors in evaluating the syllogism or to choose statements that will support an expected outcome.
Giving attention or precedence to irrelevant data is another obstacle to deductive reasoning that leads us to focus on incidentals that have nothing to do with the identity or causal relationships among factors.
Emotion can also be a factor in evaluating logical issues. Melancholy seems to be the most productive mode: people who are experiencing strong emotions, positive or negative, tend to make more mistakes than those who are emotionally detached, though it is suggested (Schwarz) that a slightly "sad" mood leads people to be more attentive and meticulous in problem solving.
Whereas deductive reasoning applies an existing principle to interpret incidental information, inductive reasoning considers incidental information and attempts to define a principle. The inductive approach is generally useful in situations where known principles do not seem to apply.
(EN: This would seem to imply a relationship between the two - that inductive reasoning must be done to define principles for later use in deductive reasoning, but that is not necessarily so: a principle can be adopted from another source or simply imagined without much consideration.)
Inductive reasoning is prone to lead us astray when we assume a given observation to be universal: if we notice that every person in a restaurant is wearing a coat and tie, we may assume that there is a dress code that requires it, when such may not be the case.
Inductive reasoning is an attempt to predict the future from observations of the past. Consider logic puzzles that present a sequence of numbers and require the subject to indicate the following number. For example, given the sequence 2-4-6, a subject would likely be inclined to say that 8 will be the next number, based on the observation that it is a procession of even numbers. He may not immediately recognize that the next number might instead be 10, if each number is assumed to be the sum of the previous two.
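The 2-4-6 puzzle illustrates why induction underdetermines the rule: more than one rule fits the observed data, so the "next number" depends on which rule the observer has induced.

```python
# Two different rules that both fit the observed sequence 2, 4, 6.
def next_even(seq):
    return seq[-1] + 2            # rule: a procession of even numbers

def next_sum(seq):
    return seq[-2] + seq[-1]      # rule: each number is the sum of the previous two

observed = [2, 4, 6]
# next_even(observed) == 8, next_sum(observed) == 10:
# the same evidence supports incompatible predictions.
```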
Inductive reasoning is based on subjective experience: when an individual observes that all incidents he has witnessed follow a pattern, he assumes that pattern will be followed in future. There may be limitations to his observation (there are instances that do not follow his "rule" that he has not observed) as well as mistakes in his analysis.
As such, inductive conclusions are often hedged by qualifiers. A person who has only seen black swans would suggest that "most" or "many" swans are black to hedge against the possibility that a swan of another color might exist.
Probability is another common hedge, though it presents data about observations in a mathematical manner. A person who has seen ten swans and nine of them were black might conclude that 90% of swans are black - but the statistic he presents is based on his limited observation. It would be more accurate to state "90% of the swans I have seen are black" to acknowledge that the entire population has not been observed and the sample on which the induction is based may not be representative.
Cognitive psychology acknowledges two functions of inductive reasoning:
- It enables people to feel confident about their knowledge
- It enables people to apply past knowledge to future events
In essence, inductive reasoning is the same as generalization: we recognize when we see something that we saw before, and when our experience supports a given conclusion, we adopt it as a principle.
A "causal inference" is the assumption that a link exists between observed phenomena, such that we assume that one thing occurred only because something else has - and would not have occurred if the event on which it is dependent had not, and can be expected to occur in future if the other event does.
The author refers to John Stuart Mill's "canons" of causality, which largely pertain to the coincidence of two events, such that causality is likely when:
- An effect always occurs after its cause occurs
- An effect never occurs unless its cause occurs
- There is no other event that would cause the "cause" to occur
(EN: Mill has a panoply of methods for sorting out causality when multiple events seem to occur - to isolate a single cause-effect pair, or to indicate when there are multiple causes for an effect, etc. The system gets entirely too punctilious for the present topic of study.)
There may be various biases that lead us to make bad inferences, such as ignoring any evidence that does not support a conclusion, focusing too narrowly on one thing as the cause of another, etc. The notion of "confirmation" (demonstrating that any new experience agrees with our previous ideas) is useful in reinforcing our confidence in logic, but also serves to prejudice us.
In some instances, our devotion to preserving inductions is carried to such an extreme that it constitutes a mental disorder. And, ironically enough, the misdiagnosis of mental disorders is often the result of the diagnostician's desire to defend his assumptions about a given patient.
(EN: I don't think that the author has yet defined the notion of "disorder." If memory serves, a condition is not considered to be a disorder until it becomes debilitating to the subject. As such, the irrational tendencies of many individuals are not disorders, merely mistakes or bad habits, until they become sufficiently detrimental to the subject or others that it merits interference to address.)
There's a brief mention of inference that deals with categorization (which items belong in certain groups) rather than causation. From multiple observations, a person begins to form a prototype of how things are categorized, and eventually amasses enough confidence to treat this prototype as a categorization schema.
The problem of subjectivity and limited information still apply: a person who arrives at the decision that "there are only six kinds of X" is basing his logic on personal observation and is ignoring that some phenomenon not yet observed may not fall neatly into the schema to which he has limited himself.
Analogical reasoning considers the similarities among the relationship of things, often in category schema. A common textbook representation takes a formulaic approach: "DOE is to DEER as MARE is to _____" - with the answer being "HORSE" (the first being the female term for an animal of a given type).
The value of analogical reasoning lies in the ability to recognize relationships and apply them to a wide array of phenomena. Given that the example is based on the principle that there is a specific name for the female of a species, the subject then expects that such a name exists for every species, and understanding this facilitates learning those terms.
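The "A is to B as C is to ?" form can be sketched as identifying a relation from the first pair and applying it to the third term. The relation table is a small illustrative sample, not a complete lexicon.

```python
# Analogy solving via a shared relation: "female term for a species".
# The relation table is illustrative, not exhaustive.
female_of_species = {"deer": "doe", "horse": "mare", "fox": "vixen"}
species_of_female = {v: k for k, v in female_of_species.items()}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' when a names the female of species b."""
    if species_of_female.get(a) == b:
        return species_of_female.get(c)
    return None               # the relation does not hold, or c is unknown

answer = analogy("doe", "deer", "mare")   # "horse"
```

The interesting cognitive step is not the lookup but inducing which relation links the first pair; once that relation is recognized, it generalizes to species never encountered in an analogy before.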
Development of Inductive Reasoning
Inductive reasoning skills develop with age, and develop to different degrees in different subjects.
One experiment (Carey) found that a four-year-old who accepts that dogs and worms have stomachs will conclude that animals similar in appearance to those two (cats and snakes) also have stomachs, but will not recognize that animals dissimilar to them (frogs, apes, horses, etc.) do as well - whereas ten-year-old subjects recognize the broader implication that any animal has a stomach.
It is likewise commonly observed that children below the age of five apply inductive principles broadly. A toddler who learns the name for a dog will call any small, furry animal "doggie", and a parent's attempts to correct them will generally not work until they are old enough to recognize finer distinctions.
It cannot be discerned whether the inability of young children to make these connections derives from their basic cognitive processes or from their familiarity with language; however, the two are likely very closely linked.
(EN: It occurs to me that "education" in most cultures does not begin before the age of five, likely because the human mind is unable to recognize and apply general principles before that age and reacts to stimuli in a disconnected and isolated manner.)
Even in the present day, cognitive psychologists disagree, sometimes quite vehemently, about the mechanisms of human reason. It's obvious we haven't quite got it sorted out, but theories in general fall into the two areas of associative and rules-based systems.
An associative system is based on the observation of patterns and general tendencies. It is a very loose system by which observation leads to a hasty conclusion that is tested against future observations until a level of certainty is reached in a given assumption. Likely the reason that people living in proximity develop the same systems of beliefs is that they are exposed to the same or highly similar stimuli. Hence a system of cultural beliefs is strongly based on common experiences.
The rules-based system suggests more deliberate procedures for reaching conclusions: rather than relying on casual observation, much deliberation is done to reason about the theoretical nature of things and the relationships among them, to derive rules in advance of experience. Even when experience is gained, observations are contemplated and schematized into the existing system of rules that preceded experience.
Meanwhile, the "connectionist" model suggests that experiences and thoughts are not distinguished - both casual observation and deliberate consideration combine in the mental framework to guide us in interpreting future experiences. As experience is gained, the confirmation or contradiction of these connections is tested and serves to reinforce or diminish confidence in a given connection.