Several years ago, I delivered a presentation to the economic society at my university entitled “Why Psychologists know more about Economic Behavior than Economists.”
It was provocative, so the society attended expecting to eat me for breakfast. They left looking somewhat morose, so I like to think I won that round. The gem of the argument was a
New York cab driver study conducted in 2000 (Camerer, Babcock, Loewenstein & Thaler). Although rates per mile are set by law, on busy days (caused by bad weather, subway breakdowns, holidays or conventions) drivers spend far less time searching for customers and therefore earn a higher hourly wage.
New York drivers don’t own their cabs, but hire them for 12-hour shifts, though of course they can quit earlier if they wish. Economic theory predicts drivers should work longer on busy days and less on slow days. Yet, the study showed that drivers quit earlier on high-wage days, such as when it rained, and drive longer on low-wage, sunny days.
Without working a minute more, drivers could increase income by an average of 7.8% simply by changing the times at which they quit. Instead, they act against their economic self-interest and, incidentally, make it even harder for the rest of us to get cabs when more people need them – so everyone loses out. Behavioral finance suggested that the drivers’ behavior would be more akin to daily targeting: that they would set a target for the money they wanted to earn each day and quit once they reached it. The results bore this out.
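The arithmetic behind daily targeting is easy to sketch. The numbers below are invented purely for illustration (they are not from the study): a driver who quits at a fixed daily target works long hours on slow days and short hours on busy days, while a driver who works the same total hours but shifts them toward the busy day earns more.

```python
# Hypothetical effective hourly wages (invented, not from the study).
wage = {"rainy": 30.0, "sunny": 20.0}  # dollars/hour, assumed

# Daily-targeting driver: quits once a fixed daily target is earned.
target = 240.0
hours_target = {day: target / w for day, w in wage.items()}  # rainy: 8h, sunny: 12h
income_target = target * 2
total_hours = sum(hours_target.values())  # 20 hours across the two days

# Income-maximizing driver: the same 20 hours, but shifted toward the
# rainy day (capped at the 12-hour shift length).
hours_max = {"rainy": 12.0, "sunny": total_hours - 12.0}
income_max = sum(hours_max[day] * wage[day] for day in wage)

print(income_target)  # 480.0
print(income_max)     # 12*30 + 8*20 = 520.0
```

Same hours on the clock, yet reallocating them toward the high-wage day raises income – which is exactly the adjustment the drivers in the study failed to make.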
I saw reflected in the faces of my audience that this study was not vulnerable to their standard criticisms of behavioral finance: that it was not “real,” or that the subjects did not understand the experimental task or lacked proper incentives. These were data reflecting real, economically incentivized behavior from the real world, and they didn’t make a scrap of sense in classical economic terms. For adherents to the efficient market hypothesis, evidence that the world isn’t perfect can be distressing. Really, it shouldn’t be.
Peter Wason was an English cognitive psychologist who challenged the contemporary thinking of his day. This was dominated by the theories of Piaget (1896-1980), who believed humans reasoned by logical analysis. Wason (1924-2003) argued that this was wrong and showed the way we tackled problems was often illogical and irrational.
Visual illusions, such as the Müller-Lyer illusion, had already revealed that our perceptual process is susceptible to errors, although these errors are, in effect, results of intelligence, not stupidity. Wason was one of the first to discover “cognitive illusions,” illusions of thought, rather than vision.
He developed a set of simple logical experiments that are still used today. These include his 1966 “selection task” – often described as the “four card problem” – in which people are shown four cards and asked which ones they must turn over to test a simple rule. What is surprising is that over 90% of people get this problem wrong.
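In one common version of the task (the exact materials vary between studies), each card has a letter on one side and a number on the other, the visible faces are A, K, 4 and 7, and the rule to test is “if a card has a vowel on one side, it has an even number on the other.” The sketch below works out which cards could actually falsify the rule – most people pick A and 4, but the logically correct choice is A and 7:

```python
# Which cards must be turned over to test the rule
# "if a card has a vowel on one side, it has an even number on the other"?
# A card is worth turning only if its hidden side could reveal a violation.

def is_vowel(face):
    return face in "AEIOU"

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def must_turn(visible):
    # A visible vowel could hide an odd number (a violation), and a visible
    # odd number could hide a vowel (also a violation). A consonant or an
    # even number can never falsify the rule, whatever is on the back.
    return is_vowel(visible) or is_odd_number(visible)

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # ['A', '7'] -- not the popular A and 4
```

Turning the 4 is the confirmation-seeking move: whatever is on its back, the rule survives, so the card carries no test value.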
In the beloved children’s book Winnie-the-Pooh, Pooh and Piglet dig a trap to catch a heffalump, drawn as a type of elephant. No heffalumps are caught, but in another adventure, Pooh and Piglet fall into their own trap. The term has become shorthand for a large, obvious trap, particularly a self-made one, that no one should fall for but many actually do.
Indeed, Wason was fond of pointing out that even people with PhDs in logic, who should perhaps know better, regularly fell into this “heffalump” trap. Wason interpreted this as a “confirmation bias,” that is, when confronted by a logical problem, people fix on a hypothesis and try to confirm it rather than look for instances where the hypothesis could be false. The results were a shock because the assumption in psychology until then, stemming from Piaget, was that we all have logic within our heads in some shape or form. Yet, it emerged that people were illogical in their approach – which was startling. Some used this to argue that people are, in some fundamental way, irrational.
Indeed, certain commentators described people as “cognitive cripples,” unable to think correctly about even basic principles of inference. There is still ongoing controversy about this issue. I tell my students that, before they come to the conclusion that this is a failing of the human brain, they should consider visual illusions. The fact that there are hundreds of visual illusions doesn’t mean we need a guide dog and a white stick before we venture out into the world.
The work of Wason overlapped at the end of the 1960s with the early work of Tversky (1937-1996) and Kahneman. They focused not so much on logic, but rather on how people judge probability. Their argument was that humans judge the likelihood of events or outcomes by using rules of thumb called heuristics.
A heuristic is a simple rule that guides problem solving, processing information to generate quick solutions. Tversky and Kahneman identified three main heuristics underlying human judgment. One was “availability”, where you judge probability by the ease with which instances can be brought to mind: the more often you hear about something, the more common you judge it to be.
For example, in the United States, when people are asked whether they believe suicide or homicide is more common, most answer homicide, although the opposite is actually the case. Our judgment can be influenced by other factors, such as sensationalized news reporting and the prevalence of fictional detectives.
In the main, availability works well over your lifetime because judgments are based on memory and this is a better tool for decision-making than taking a random guess every time. However, this doesn’t mean you are invulnerable to illusions.
The second is the “representativeness” heuristic, a bias where a situation is judged by similarity to previous experiences or events. This can be useful for making a quick decision, but it can also be limiting as it may lead to close-mindedness, such as falling back on stereotypes.
In a 1983 study, participants were asked to judge the probability that a woman named Linda (who had liberal interests as a student) was a feminist, a bank teller, or both a bank teller and a feminist. Probability dictates that the likelihood of her being both a bank teller and a feminist cannot exceed the likelihood of her being either one alone. However, many participants judged that she was more likely to be a bank teller and a feminist than a bank teller alone. Effects of the representativeness heuristic are plentiful and include, among others, the “Gambler’s Fallacy,” “Hot Hand Fallacy” and “Base Rate Fallacy.”
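The underlying conjunction rule is simple: for any two events, the probability of both occurring together can never exceed the probability of either one alone. A minimal sketch with invented probabilities (not taken from the study):

```python
# The conjunction rule behind the "Linda problem": P(A and B) <= P(A).
# The probabilities below are invented purely for illustration.
p_teller = 0.05                  # P(Linda is a bank teller), assumed
p_feminist_given_teller = 0.8    # P(feminist | bank teller), assumed

# P(teller AND feminist) = P(teller) * P(feminist | teller)
p_both = p_teller * p_feminist_given_teller

# The conjunction can never be the more probable event.
assert p_both <= p_teller
print(p_both, p_teller)
```

However liberal Linda’s student interests, “bank teller and feminist” describes a strict subset of “bank teller” – the vivid, representative description simply crowds out the arithmetic.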
The third heuristic is “anchor and adjust.” The idea is that if you are trying to estimate an unknown quantity, such as next year’s sales numbers or the price of Microsoft stocks next week, you anchor on one value and adjust that as new data or evidence comes to light. This is perfectly reasonable, but people are often influenced by inappropriate anchors. They can also fail to adjust sufficiently.
In one study, Tversky and Kahneman spun a wheel of fortune numbered 1 to 100 in a subject’s presence. They then asked the subject a question, such as to estimate the percentage of African nations in the United Nations. Although the number on the wheel was clearly random, the estimates given were influenced by it: those who saw low numbers gave low estimates and those who saw high numbers gave high estimates. In this case, people were influenced by inappropriate anchors; in other cases, they adjust insufficiently.
In another study (Northcraft and Neale, 1987), real estate agents were asked to estimate the value of a piece of property on the market. The agents were given information about the property, including the list price (initial value). Although all agents denied that their valuation was influenced by it, those who received a higher list price (for the same property) invariably gave a higher appraisal. The study showed they were in fact strongly anchored to the list price: they adjusted downwards from it, but the final figure they provided was still biased in the direction of the initial anchor.
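Insufficient adjustment can be sketched with a toy model (all numbers below are hypothetical, not from the study): the estimate starts at the anchor and moves only part of the way toward the value the evidence supports, so different anchors yield different valuations of the same property.

```python
# A toy model of "anchor and adjust" with insufficient adjustment.
# All numbers are hypothetical illustrations.

def anchored_estimate(anchor, evidence_value, adjustment=0.6):
    # adjustment < 1.0: the estimator moves only part of the way from the
    # anchor toward the evidence, leaving a bias toward the anchor.
    return anchor + adjustment * (evidence_value - anchor)

evidence_value = 100_000  # appraisal the property details support, assumed

low_anchor = anchored_estimate(80_000, evidence_value)    # 92000.0
high_anchor = anchored_estimate(130_000, evidence_value)  # 112000.0

# Same property, same evidence -- but the higher list price produces the
# higher valuation, as with Northcraft and Neale's agents.
print(low_anchor, high_anchor)
```

With full adjustment (a factor of 1.0) both estimates would collapse to the evidence-based value; the anchor matters only because the adjustment stops short.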
What you see may not be what you get
Tversky and Kahneman created the basis of a powerful body of work that has caused a revolution in economics and psychology. For a long time, it was dismissed by many economists, but when Kahneman won the Nobel Prize in Economics in 2002, the economics establishment clearly acknowledged a sea change. There is certainly increasing acknowledgment among economists that there is a real point to all this behavioral work. Many economists are becoming comfortable with the notion that humans compensate for limited brain capacity by using heuristics and, therefore, cannot be perfectly rational decision-makers. Resistance now centers on defining what the consequences are.
Some economists still say that, on aggregate, markets remain efficient and rational, but this position is coming under threat from anomalies in the market. It is becoming increasingly obvious that we have left behind the world in which the assumptions of classical economics hold true in all cases. Replacing it is a realization that economics is only one branch of the social sciences that teach us about the complexities and contradictions inherent in humanity, and that all of them together can help us gain a unified view of human behavior.