Considering the Alternative

“The quality of any given option cannot be assessed in isolation from its alternatives. One of the “costs” of making a selection is losing the opportunities that a different option would have afforded. Thus, an opportunity cost of vacationing on the beach in Cape Cod might be missing the fabulous restaurants in the Napa Valley.”

-Barry Schwartz[1]

If you were faced with a choice, and receiving $100 was one option, would you choose it? If the other choice was to receive $1000, you probably wouldn’t; if the alternative was getting kicked in the shin, you’d probably take the hundred bucks. In decision making, different options cannot be rationally evaluated on their own, only compared to each other. What matters are the differences between the expected outcomes of the choices. The best choice is the one that tends to produce the best outcome relative to the other options.
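
To make this concrete, here is a minimal sketch in Python of what it means to evaluate options relative to each other (the payoffs and probabilities are purely illustrative, not from any study): each option gets an expected outcome, but “best” only emerges from the comparison itself.

```python
# Compare options by their expected outcomes. An option's value is only
# meaningful relative to the alternatives on offer.
# (Payoffs and probabilities are purely illustrative.)

options = {
    "take the $100":       [(1.0, 100)],             # a certain $100
    "coin flip for $1000": [(0.5, 1000), (0.5, 0)],  # a 50/50 gamble
    "kick in the shin":    [(1.0, -50)],             # a certain loss
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected outcome = {expected_value(outcomes):+.0f}")

# "Best" is defined only by the comparison:
best = max(options, key=lambda name: expected_value(options[name]))
print("best option:", best)
```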

In evaluating evidence too, it is differences that are important. When an observation counts as evidence for some theory about the world, it is because the observation would tend to be different depending on whether that theory is true or false. If we want to know whether the theory “it is raining” is true, evidence could be seeing water falling outside. Alternatively, it could be something indirectly related to the rain, like a weather report, or noticing that your basement is flooded. However, if you use a coin toss to inform a belief that it is raining, that belief will not be reliable (even though it may occasionally be correct just by coincidence). You cannot gain any information about the rain by flipping a coin because the outcome of a coin flip is independent of the state of the rain. The flip will not tend to come out any differently depending on the rain, so it provides no information about rain and constitutes no evidence that it is raining. In the language of information theory, we say there is zero mutual information between the rain and the coin flip.
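
We can check this claim directly. The sketch below computes mutual information from a joint probability table; the numbers are invented for illustration. A fair coin flip carries zero bits of information about the rain, while looking out the window carries some.

```python
import math

def mutual_information(joint):
    """I(X;Y) = sum over x,y of p(x,y) * log2(p(x,y) / (p(x) * p(y)))."""
    px = {x: sum(ys.values()) for x, ys in joint.items()}
    py = {}
    for ys in joint.values():
        for y, p in ys.items():
            py[y] = py.get(y, 0.0) + p
    total = 0.0
    for x, ys in joint.items():
        for y, p in ys.items():
            if p > 0:
                total += p * math.log2(p / (px[x] * py[y]))
    return total

# joint[weather][observation] = probability; invented numbers.
# A fair coin lands the same way whether or not it is raining:
coin_flip = {
    "rain":    {"heads": 0.15, "tails": 0.15},  # P(rain) = 0.3
    "no rain": {"heads": 0.35, "tails": 0.35},  # P(no rain) = 0.7
}

# Seeing water falling outside is strongly (if imperfectly) tied to rain:
look_outside = {
    "rain":    {"water": 0.29, "no water": 0.01},
    "no rain": {"water": 0.02, "no water": 0.68},
}

print(mutual_information(coin_flip))     # ~0.0 bits: no evidence at all
print(mutual_information(look_outside))  # > 0 bits: genuine evidence
```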

If your theory says event X is likely, and the competing theories say event X is unlikely, and you then observe event X, your theory becomes more plausible and the alternatives become less plausible. To know how strongly an observation supports a particular theory you need two independent pieces of information: the probability of the observation if the theory were true, and the probability of the observation if the theory were false. As long as these probabilities are different, the observation is evidence for or against the theory; it gives you information about how likely the theory is to be true.
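
This is just Bayes’ rule at work: those two probabilities, P(observation | theory) and P(observation | alternative), are all you need to update your belief. A minimal sketch with made-up numbers:

```python
def posterior(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: probability the theory is true after the observation."""
    evidence_true = prior * p_obs_if_true
    evidence_false = (1 - prior) * p_obs_if_false
    return evidence_true / (evidence_true + evidence_false)

# Your theory says X is likely (80%); the alternatives say X is
# unlikely (10%). You then observe X. (All numbers are made up.)
print(posterior(prior=0.5, p_obs_if_true=0.8, p_obs_if_false=0.1))  # ~0.89

# If the two probabilities are equal, observing X is no evidence at all:
print(posterior(prior=0.5, p_obs_if_true=0.4, p_obs_if_false=0.4))  # 0.5
```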

Failing to consider the alternative to a theory is a common error in human reasoning because an instinct to evaluate theories relative to their alternatives does not appear to be built into our brains’ reasoning software. This is shown rather dramatically by a reasoning problem invented by cognitive psychologist Peter Wason called the 2-4-6 task[2]. In this problem subjects are told that the experimenter has a rule that applies to sequences of three numbers, and that “2-4-6” is one example of a sequence that satisfies this rule. The subjects are then asked to invent other sequences of three numbers, and they are told whether each new sequence satisfies the rule. The goal of the subject is to eventually guess what the rule is.

Most subjects quickly formulate a theory about what the rule is and then try asking the experimenter about sequences that follow that rule. For example, a subject might suspect the rule is “three consecutive even numbers”, and so they might try asking if sequences like “54-56-58”, “20-22-24” and “998-1000-1002” satisfy the rule. When they are told that these sequences do indeed satisfy the rule, they become increasingly confident that they have discovered the correct rule. After trying several more sequences (to “make sure”), they announce their theory, “the rule is three consecutive even numbers”. At this point they are informed that they are wrong.

The experimenter’s actual rule is “any three increasing numbers”. If the subjects had simply tried a sequence such as “1-2-3”, they would have been told that this sequence also satisfies the rule, and so they would have discovered immediately that “three consecutive even numbers” cannot possibly be the experimenter’s rule. The mistake here is that people tend to focus only on seeking confirmation of their theory, even when it would be more efficient to ask questions that could falsify their theory. This tendency to seek only confirmatory evidence and to ignore falsifying evidence is known as confirmation bias.
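
The asymmetry is easy to see if we play the game programmatically. The sketch below is my own illustration of the task, not anything from the original experiment: every confirmation-seeking test passes under both the subject’s hypothesis and the real rule, so it teaches us nothing, while the single test “1-2-3” separates them immediately.

```python
# The experimenter's actual rule: any three increasing numbers.
def actual_rule(a, b, c):
    return a < b < c

# The subject's hypothesis: three consecutive even numbers.
def hypothesis(a, b, c):
    return a % 2 == 0 and b == a + 2 and c == b + 2

# Confirmation-seeking tests: sequences chosen because the hypothesis
# says "yes". Both rules agree on these, so the answers teach us nothing.
for seq in [(54, 56, 58), (20, 22, 24), (998, 1000, 1002)]:
    print(seq, "hypothesis:", hypothesis(*seq), "actual:", actual_rule(*seq))

# A falsification test: a sequence where the hypothesis says "no".
seq = (1, 2, 3)
print(seq, "hypothesis:", hypothesis(*seq), "actual:", actual_rule(*seq))
# The hypothesis predicted "no" but the rule says "yes", so
# "three consecutive even numbers" is refuted by a single question.
```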

In a variation of this task, subjects are instead told that there are two types of number sequences: a sequence is either “DAX” or “MED”. The rule is the same as before; “DAX” sequences are any three increasing numbers, and “MED” sequences are everything else. The subject is given the same task, to determine the rule that decides which sequences are “DAX” and which are “MED”. When the task is presented like this, subjects find the correct rule much more easily. This is because each time they try to confirm a theory about what “DAX” means, they are also trying to falsify a theory about what “MED” means, and vice versa. In an experiment conducted at Bowling Green State University[3], undergraduate psychology students presented with this version of the problem solved it on the first try 60% of the time, versus just 11% when presented with the first version. Simply by making the alternative theory more salient, people test their theories much more efficiently.

In decision making too, it is important to consider alternatives relative to each other, rather than in isolation. Failing to think about the differences between options, and instead examining each possibility on its own, often leads to serious errors. For example, people opposed to the construction of wind turbines often cite the number of birds killed by the turbines. Depending on whose study you read, wind turbines are estimated to kill somewhere in the range of tens to hundreds of thousands of birds every year in the US alone[4][5]. When used as an argument against wind power, this figure is usually intended to evoke a mental image of some poor bird being smashed out of the sky mid-flight, only to fall onto an enormous pile of other dead birds. How could we allow wind turbines to be built if this is the consequence?

But so far we have entirely ignored the alternatives. The statistic above may trigger a compelling emotional response, but it’s not a rational argument against wind power unless it is evaluated relative to the other real-world options we have. When we ask how this compares to other ways we could generate electricity, the picture changes dramatically. Fossil fuel power plants cause around 15 million bird deaths per year[4]. Even if we use the highest estimates for the number of birds killed by wind turbines, fossil fuel power kills vastly more birds. Even if we look at the number of birds killed per unit of energy generated, which is the relevant statistic after all, fossil fuels still kill more birds. Fossil fuels simply kill birds in an indirect, less vividly imaginable way that doesn’t evoke dramatic mental images of violent deaths. But if what we truly care about is the number of birds being killed by our energy infrastructure, then wind power is a better choice than burning fossil fuels. If we let the emotional argument override the rational one by failing to consider the alternative choice, we may end up preventing our own goal from being achieved.[6]
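
Here is a rough back-of-the-envelope version of that per-unit-of-energy comparison. The death tolls are the figures cited above; the annual generation numbers are placeholders I have assumed purely for illustration, so substitute real data before drawing any conclusions.

```python
# Bird deaths per unit of energy: the comparison that actually matters.
# Death tolls are the figures cited above; the generation numbers are
# ASSUMED placeholders for illustration only -- substitute real data.

deaths_per_year = {
    "wind":        200_000,      # high end of the cited wind estimates
    "fossil fuel": 15_000_000,   # cited fossil fuel figure
}

generation_twh_per_year = {      # assumed, illustrative US-scale figures
    "wind":        150,
    "fossil fuel": 2_700,
}

for source in deaths_per_year:
    rate = deaths_per_year[source] / generation_twh_per_year[source]
    print(f"{source}: ~{rate:,.0f} bird deaths per TWh")

# Even granting wind its highest death estimate, fossil fuels still
# come out several times worse per unit of energy under these numbers.
```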

Experimental psychologists refer to choices presented in isolation as single evaluation and choices presented alongside their alternatives as joint evaluation. Experiments have shown that people often make the opposite choice under single evaluation than they do when given the same choice alongside the alternatives. For example, Christopher Hsee conducted an experiment[7] in which subjects were presented with a description of a dictionary and asked what price they were willing to pay for it. Each subject was given one of the following descriptions:

Dictionary A
Year of publication: 1993
Number of entries: 10,000
Condition: Like new.

Dictionary B
Year of publication: 1993
Number of entries: 20,000
Condition: Cover is torn; otherwise like new.

On average, people were willing to pay $24 for Dictionary A and $20 for Dictionary B when they were shown only one description. Another set of subjects were shown both descriptions at the same time, and this group was willing to pay only $19 for Dictionary A but $27 for Dictionary B. The subjects’ preferences were reversed when the choice was presented as a joint evaluation!

Considering the alternative choices can make a significant difference to the quality of our decisions. In most cases we are better off making decisions under joint evaluation than single evaluation, but as Nobel-prize-winning psychologist Daniel Kahneman points out, there are a few cases where it’s wise to be wary of joint evaluation:

Rationality is better served by broader and more comprehensive frames, and joint evaluation is obviously more comprehensive than single evaluation. Of course, you should be wary of joint evaluation when someone who controls what you see has a vested interest in what you choose. Salespeople quickly realize that manipulation of the context in which customers see a good can profoundly influence preferences.

-Daniel Kahneman[8]

In both decision making and belief formation it is important to consider ideas not on their own, but relative to each other. When we evaluate evidence without considering the alternative hypothesis, we can fall victim to confirmation bias. When we make choices without considering the alternative option, we can fall victim to framing effects. If we train our thinking to avoid these pitfalls, we can improve the quality of both our choices and beliefs.
