Symposium Book Club – Conversation with Sanda Kaufman about Kahneman’s Book, Thinking, Fast and Slow

This is another part of the “virtual book club” discussing readings for the symposium at the University of Missouri on October 7:  Moving Negotiation Theory from the Tower of Babel Toward a World of Mutual Understanding.

Sanda Kaufman suggested that we read Daniel Kahneman’s book, Thinking, Fast and Slow (2011).  Here’s her summary:  Kahneman is one of my favorite writers for his substance and style.  In this book, he brings some results of his research to bear on why we might make thinking mistakes and how we might lessen their impact when negotiating.

John:  Thanks for this suggestion, Sanda.  This book is clearly written, with easily digestible examples to illustrate the problems he discusses.  In addition to being a summary of his work, it’s a delightful memoir and homage to his longtime collaborator and very dear friend, Amos Tversky, who died in 1996 at the age of 59.  Kahneman, a psychologist, won the Nobel Prize for economics in 2002 for his foundational work in “behavioral economics.” 

I previously read bits and pieces about people’s common cognitive errors but I kept thinking, “People often make systematic mistakes.  So what?”  This book helps answer that question for me.

I will start with a few quibbles, then highlight some points, and then ask for your reactions.

Quibbles

I found the title confusing (though the book is a best-seller).  It might have been clearer if the title had been “Why Our Brains Keep Doing Dumb Things” or “Economists are Wrong, Wrong, Wrong.”

The book entertainingly illustrates a long list of common cognitive errors.  As I read the book, I had a hard time figuring out how everything fit together and where the argument was going.  Hopefully this conversation will help people who want to read it.

I kept wondering how people could avoid or correct for the cognitive errors, and he didn’t say much about that.  Indeed, at the very end, he briefly said that there isn’t much one can do to correct for one’s own errors and that we may need outside observers to help us see the errors of our ways.  A practical takeaway is that DR professionals can help clients simply by helping them recognize problematic ideas and thus think more clearly (though we professionals have our own biases).

Summary of Some Key Points

As you noted in an earlier post, the book begins by distinguishing two parts of our mind, which he calls “System 1” and “System 2.”  System 1 is automatic, unconscious, and fast.  It often leads to good judgments but can be easily fooled.  Characteristics of System 1 include that it distinguishes the surprising from the normal, infers and invents causes and intentions, neglects ambiguity and suppresses doubt, is biased to believe and confirm, exaggerates emotional consistency, ignores absent evidence, substitutes easier questions for harder ones, and overweights low probabilities, among other quirks.  (P. 105).

System 2 is conscious and slow and is often called on to solve problems that System 1 can’t handle.  It can even “train” System 1 to avoid some problems.  System 2 requires real effort and has limited capacity.  For example, our analytic capabilities are reduced when we are tired, so it’s harder to think well in that situation.

System 2 also can be fooled in numerous ways.  For example, System 2 can overgeneralize, especially based on limited information, focus too much (or “anchor”) on irrelevant and highly vivid information, make erroneous causal inferences, make poor analyses due to hindsight bias, be overly confident, etc.

Kahneman distinguishes what Richard Thaler called “Econs” and “Humans.”  Econs are the beings of classical economic theory, who are rational and selfish, whereas Humans are the beings of psychological theory, who are neither completely rational nor completely selfish.  In this telling, Econs operate solely through System 2 after thoroughly investigating and analyzing situations.  Kahneman says that it is as if the two disciplines were studying two different species.

Kahneman and Tversky are particularly well known for their “prospect theory,” which differs from economic “expected utility theory.”  Under the latter, it is assumed that people faced with a choice between two options (such as whether to settle or go to trial) calculate the expected outcome of each option and choose the more favorable one.  Prospect theory illustrates common situations in which most people choose options that, upon analysis, are not the best options – but people often don’t do careful analysis.
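
To make that comparison concrete, here is a small sketch (mine, with invented numbers – not an example from the book) of the calculation that expected utility theory assumes parties make:

```python
# Hypothetical settle-or-go-to-trial numbers, purely for illustration.

def expected_value(prob_win, award_if_win, award_if_lose=0.0):
    """Probability-weighted average of the possible trial outcomes."""
    return prob_win * award_if_win + (1 - prob_win) * award_if_lose

settlement_offer = 80_000                # a sure settlement of $80,000
trial = expected_value(0.5, 200_000)     # a 50% chance of a $200,000 verdict

# An "Econ" simply compares the two numbers and picks the larger one:
print(trial, settlement_offer)           # 100000.0 vs 80000 -> the Econ goes to trial
# Prospect theory predicts that many "Humans" take the sure $80,000 anyway.
```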

Perhaps the most compelling illustration of prospect theory is that people make judgments in relation to particular reference standards, which often are quite arbitrary and irrelevant to “Econs.”  One of the best-known examples is that people will demand a higher price to sell an object they own than they would pay to buy it.  This makes no logical economic sense but it is perfectly logical to most “Humans” under prospect theory.

This is related to a status quo bias in which people value the status quo (such as an existing contract) more than an equivalent situation that would involve a change. [Sanda notes:  This is similar to the “status quo inertia” observed and discussed by both Larry Susskind and Judy Layzer with respect to public decisions.]

A related dynamic – loss aversion – is that people generally will take more risks about decisions framed as losses (i.e., by reference to the status quo) than about decisions framed as gains of the same amount.  This means that when they stand to gain something, they would rather take a smaller gain for sure than gamble for more.  By contrast, if they stand to lose something, they would rather gamble on a potentially greater loss than accept a smaller loss for sure.

Classical economic theory assumes that individuals are risk-neutral and react to uncertain gains and losses in the same way.  Kahneman and Tversky noticed that, beyond this difference in outlook, you could actually fool people with the mere wording used to describe a situation. They end up making different choices depending on whether you frame the same problem as a choice between gains or between losses. [Sanda notes:  That is scary!!!]
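
Here is a small numerical sketch (again with invented numbers, not Kahneman’s own examples) of the reversal described in the last two paragraphs:

```python
# The options within each frame have identical expected values.

gain_frame = {
    "sure gain": 900,                     # gain $900 for certain
    "gamble":    0.9 * 1000 + 0.1 * 0,    # 90% chance to gain $1,000
}
loss_frame = {
    "sure loss": -900,                    # lose $900 for certain
    "gamble":    0.9 * -1000 + 0.1 * 0,   # 90% chance to lose $1,000
}

# An Econ is indifferent within each frame (900 vs. 900, -900 vs. -900).
# Humans typically take the sure thing in the gain frame (risk-averse) and
# the gamble in the loss frame (risk-seeking), which is why wording matters.
print(gain_frame, loss_frame)
```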

Kahneman writes, “Negotiations over a shrinking pie are especially difficult, because they require an allocation of losses [to both sides].  People tend to be much more easygoing when they bargain over an expanding pie.”  (P. 304).  [Sanda comments:  So true!  Getting something you didn’t have before is so much nicer than losing something you had. This too relates to public decisions and the inertia of the status quo, since any proposed change contains loss for some, who strenuously oppose the move.]

The significance of reference points is illustrated in patterns of exchanges of offers.  Parties may “anchor” their reference points to the initial demands and offers even when those opening numbers are absurd.

Citing research by our colleague Chris Guthrie, Kahneman applies this logic of risk aversion about gains and risk-seeking about losses to decisions about whether to settle or go to trial.  In these situations, perceptions about whether events are high- or low-probability interact with perceptions of gains and losses.

If a plaintiff has a 95% chance of winning at trial, she is likely to accept an offer somewhat below the expected value of trial to avoid the risk of losing.  By contrast, if the plaintiff rejects a reasonable offer, the defendant may be willing to go to trial in the hope of beating the odds.

Conversely, if the plaintiff has only a 5% chance of winning at trial and the defendant offers very little, the plaintiff is likely to go to trial because she has little to lose and the long-shot chance of winning looms larger than the small settlement she would give up.  By the same token, the defendant may be willing to make a substantial nuisance payment to avoid the risk of losing at trial, even though that outcome is unlikely.  This set of perspectives is commonly known as the “fourfold pattern” of decisions (pp. 316-321).
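
Laid out with invented numbers (the stakes and probabilities below are hypothetical, not from the book), the four cells look roughly like this:

```python
def trial_ev(p_win, stake):
    """Expected value of trial for the party who stands to win the stake."""
    return p_win * stake

stake = 100_000   # hypothetical amount at issue

fourfold = [
    ("95% chance of a gain (strong plaintiff)",  trial_ev(0.95, stake),
     "risk-averse: accepts a settlement somewhat below expected value"),
    ("95% chance of a loss (weak defendant)",   -trial_ev(0.95, stake),
     "risk-seeking: gambles on trial, hoping to beat the odds"),
    ("5% chance of a gain (weak plaintiff)",     trial_ev(0.05, stake),
     "risk-seeking: rejects a small offer and goes to trial"),
    ("5% chance of a loss (strong defendant)",  -trial_ev(0.05, stake),
     "risk-averse: pays a nuisance settlement above expected value"),
]

for label, ev, prediction in fourfold:
    print(f"{label}: expected value {ev:+,.0f} -> {prediction}")
```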

Kahneman concludes by advocating for “libertarian paternalism,” which takes advantage of behavioral economics insights about the real-life behavior of “Humans” to set policies that encourage wise decisions while leaving people the power to make their own choices.

A common example is an employer policy that makes contributing to a retirement plan the default, while employees may opt out if they want.  Research shows that because of the status quo bias, people are more likely to save (a good thing) with this opt-out system than with one that requires them to make an affirmative decision to opt in.

He touts the book, Nudge, by Richard Thaler and Cass Sunstein, which is the bible of this approach.  (Missouri will hold a symposium on Oct. 21, 2016, “Evaluating Nudge: A Decade of Libertarian Paternalism,” where Sunstein will be a speaker along with some speakers who will critique this approach.)

I assume that this is an accurate summary as far as it goes.  Would you agree?  Are there important concepts or principles in the book that this summary omits?

Most importantly, given this summary and anything you would add to it, can you say more why this book appeals to you so much?

Sanda:  First, I want to say that I am in awe of your ability to summarize stuff that is often very difficult to get across.

Second, I want to propose that Kahneman and others (Herbert Simon and Dietrich Dörner, among others) account for why we keep doing seemingly “dumb things,” as you called them at the outset.

We could talk about that at length.  But essentially, it is because a lot of the time our shortcuts (heuristics) serve us pretty well and are less costly than other thinking strategies.  The trouble is that we get used to them and apply them even when we should be more thoughtful.

In fact, Steven Pinker says that some of our thinking behaviors that predictably get us in trouble apparently are on our “hard disk” (i.e., we are born with what he calls “gadgets”).  One such gadget is our tendency to seek causal relationships in our observations.  Needless to say, at times we find them where there are none, and then proceed to fix the world according to those mistaken linkages.

Third, I would like to briefly argue for Econs, even if I am the last person on Earth to do it.  I do so because I live with a physicist who talks to me every day about complexity and how one can model and make sense of it.  I will go out on a limb and be “incorrect” to an extent (i.e., not fitting the current take) and contend that economists are not entirely wrong with their assumptions (though it is popular to believe that their assumptions are wrong).  Lawyers, remember you don’t have many fans either, which, in my opinion, is also based largely on misconceptions.  So I like to defend both you and the economists.

Economists would be wrong only if they contended that any individual really behaves as they assume, which we know is not so.  But good economists would remind us that their models are based on large numbers of “Econs.”  That means their predictions of outcomes, based on the numerous choices Econs make, are as good as if the assumptions fit Humans.

In a counter-intuitive way, this does not require any given Human to be an Econ and actually behave in that way.  It only means that when you aggregate the behaviors of large numbers of people, your outcome predictions can be pretty good (i.e., “as if” the Econ assumptions were correct).

This is also true in other domains where, regardless of individual quirks, the aggregate results of numerous observations fit assumptions that may not be true for individual actors.  To predict outcomes of complex situations, it sometimes helps to go up a few steps on the scale ladder.  When you look from “above,” you may pick up patterns and regularities that are not apparent when looking at individuals at the scale where they operate.
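
A tiny simulation sketch (my own, with invented numbers) may make this concrete:

```python
import random
random.seed(0)

ECON_VALUATION = 50.0   # what a textbook Econ would pay for some good

def human_valuation():
    """One Human's valuation: noisy, and occasionally anchored on something irrelevant."""
    value = random.gauss(ECON_VALUATION, 15)
    if random.random() < 0.1:               # 10% of people latch onto a stray anchor
        value += random.choice([-25, 25])
    return value

valuations = [human_valuation() for _ in range(100_000)]
average = sum(valuations) / len(valuations)

print(f"Econ prediction: {ECON_VALUATION:.1f}, aggregate of Humans: {average:.1f}")
# Hardly any individual valuation equals 50, but the population average comes
# very close, which is why "as if" models can predict aggregate outcomes well.
```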

In my “incorrect,” anachronistic view of economics, we should not expect each individual to be an Econ and behave according to the economic assumptions.  If we did expect that, we would readily notice that Humans mostly don’t behave that way, and we would then be likely to throw out the good information we can obtain by aggregating large numbers of observations.  The resulting systemic picture is valuable and often correct even though Humans make their decisions differently than Econs.

Kahneman actually has offered fixes for the failures of our thinking, and so has Dörner before him.  Steven Pinker, who also addressed some of these failures from an evolutionary genetics perspective, can also be a source of good thinking practices.

First, they would all recommend a dose of humility: simple awareness that we could be wrong in our assumptions, supposed causal links, and ensuing conclusions.  We are arguably an arrogant culture, though other cultures are not much better at decision making under uncertainty.

We tend to believe we understand and can handle situations, whereas other cultures may believe that outcomes are out of their hands or even pre-ordained.  We also tend to look down on hesitation and prefer bold, decisive leaders – thus rewarding System 1 over System 2 thinking.  Counseling humility may not go far, but it is nevertheless necessary, especially in high-stakes situations.

As I mentioned in another post, the nature of the stakes in a situation matters in many ways.  One is how much biological energy (as I believe Jared Diamond calls it) we want to expend thinking about a specific decision. Thinking is hard, time-consuming and costly in energy.  Piddly decisions are not worth it.  Highly consequential ones surely are.

For such decisions, Kahneman has proposed pre-mortem analyses to counter several of our bad thinking habits – confirmation bias, hindsight bias, and our tendency to overconfidence (which paradoxically increases the less we know about a subject).

Kahneman applied his idea to military strategy decisions but it works in many other instances where mistakes may be very costly.  His solution is simple to describe but mentally taxing to perform.  It is to spend time before implementing a decision to think of all the ways in which it could fail.  How many of us do that with any regularity?!

The idea is to identify weak points and strengthen our solution against them, or even abandon a decision that seems to have too many ways to fail.  Interestingly, this strategy relates to the literature on robust decision making and even to Nassim Nicholas Taleb’s black swans.  (This is by no means accidental – it stems from the fact that they all deal with decision making under uncertainty). This reminds me that I should have put Taleb’s books on our list…

Humility is the right attitude for countering – or at least tempering – our own biases.  But it is also useful to be aware that just as we are subject to a large number of thinking failures, all others are too.  Therefore, we can exploit – yes, we do that all the time – others’ weaknesses.

Applying prospect theory lessons, we can frame our arguments in ways that predictably sway others to make choices we prefer them to make. For example, we can choose to describe a problem in terms of expected gains or losses from specific solutions, depending on the results we seek.

Before I dig myself into an even deeper hole (after having defended economists and lawyers and acknowledged that we manipulate each other in negotiations), I want to caution against thinking that our negotiation counterparts are stupid, or believing we have discerned irrationality in their choices.  They surely make the mistakes Kahneman described.  But we may not be great at recognizing them correctly.

At times, we mistake others’ choices as irrational because they do not comport with their declared interests or with our expectations of what would be rational for them to do.  But we need to remember that people (including ourselves) may have socially undesirable or unacceptable interests they would not declare out loud.  As a consequence, what people choose to do may not look consistent with what they say they want.  This is not necessarily because people made mistakes or are irrational but because they haven’t disclosed all their interests.

I am guessing that if we asked Kahneman, he would counsel that it is safer to assume that our opponents are at least as intelligent as we are and perfectly rational, even if we don’t see it.  The mistakes we would make based on this assumption may be less costly to us than giving in to hubris.

John:  Thanks again, Sanda, for your extremely helpful analysis of complex material.

I just want to note some examples of mistakes that many of us DR folks make in our attributions about others.  We often assume that people are irrational to use positional negotiation instead of interest-based negotiation.  Similarly, Peter Benner and I found that many people in businesses find it reasonable, from their perspective, to continue litigation-as-usual patterns even when adopting planned early negotiation systems would seem like a no-brainer to people in our field.

2 thoughts on “Symposium Book Club – Conversation with Sanda Kaufman about Kahneman’s Book, Thinking, Fast and Slow”

  1. In this post, I wrote that Daniel Kahneman’s book, Thinking, Fast and Slow, is a “delightful memoir and homage to his longtime collaborator and very dear friend, Amos Tversky.”

    I just read a review of a new book by Michael Lewis about their friendship and collaboration, which sounds fascinating. According to the review, The Undoing Project: A Friendship That Changed Our Minds provides the back stories of their academic work on behavioral economics and also tells “one hell of a love story, and a tragic one at that. The book is particularly good at capturing the agony of the one who loves the more. . . . [L]ike so many romances, it was fragile — and ultimately painful. As often happens in collaborations, one person, fairly or unfairly, wound up getting more credit for the work, and in this case, it was Dr. Tversky.”

  2. Inductive reasoning (System 1) is so fast and easy for humans. It provides the right answer an astonishing amount of the time, despite the inherent logical flaws. Deductive reasoning (System 2) lacks those logical flaws, but requires the proper time and resources to think the situation all the way through. A little caution can go a long way, though, when you understand the inherent flaws in human cognition.
