Storytelling, Trust and Bias

One intellectual rule of thumb on which I rely is that one disagrees with Tyler Cowen at one’s peril. Cowen is an economist at George Mason, and is one of the two bloggers at Marginal Revolution, a very popular blog on economics and culture. So while I won’t call what I’m about to write a disagreement, I do want to offer thoughts on Cowen’s TEDx talk on storytelling. (The talk is from 2009, but I just wandered across a transcript of it for the first time today.) Here’s Cowen’s typically interesting premise:

So what are the problems of relying too heavily on stories? …I think of a few major problems when we think too much in terms of narrative. First, narratives tend to be too simple. The point of a narrative is to strip it away, not just into 18 minutes, but most narratives you could present in a sentence or two. So when you strip away detail, you tend to tell stories in terms of good vs. evil, whether it’s a story about your own life or a story about politics. Now, some things actually are good vs. evil. We all know this, right? But I think, as a general rule, we’re too inclined to tell the good vs. evil story. As a simple rule of thumb, just imagine every time you’re telling a good vs. evil story, you’re basically lowering your IQ by ten points or more. If you just adopt that as a kind of inner mental habit, it’s, in my view, one way to get a lot smarter pretty quickly. You don’t have to read any books. Just imagine yourself pressing a button every time you tell the good vs. evil story, and by pressing that button you’re lowering your IQ by ten points or more.

Another set of stories that are popular – if you know Oliver Stone movies or Michael Moore movies. You can’t make a movie and say, “It was all a big accident.” No, it has to be a conspiracy, people plotting together, because a story is about intention. A story is not about spontaneous order or complex human institutions which are the product of human action but not of human design. No, a story is about evil people plotting together. So you hear stories about plots, or even stories about good people plotting things together, just like when you’re watching movies. This, again, is reason to be suspicious.

It is certainly true that we rely heavily on stories. And it’s further true that in doing so we tend towards oversimplification. The human mind doesn’t do well with uncertainty; we seek narrative coherence even where it isn’t justified. And yet stories are deeply useful. They help us process information more easily. In his recent book Thinking, Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman relied on a kind of story to relate the way we think, through the use of the “characters” System 1 and System 2. The former is the intuitive mind, making quick decisions below the level of consciousness. The latter is what we think of as the rational mind, coming to our aid when we consciously reason through something. The dichotomy has some acceptance in the psychology literature (not as a distinction within the brain, but as a theoretical distinction for studying thinking), but some of Kahneman’s colleagues objected to his personification of these “characters.” Here’s Kahneman explaining his conceit:

System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters. Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home. You may well ask: What is the point of introducing fictitious characters with ugly names into a serious book? The answer is that the characters are useful because of some quirks of our minds, yours and mine. A sentence is understood more easily if it describes what an agent (System 2) does than if it describes what something is, what properties it has. In other words, “System 2” is a better subject for a sentence than “mental arithmetic.” The mind – especially System 1 – appears to have a special aptitude for the construction and interpretation of stories about active agents, who have personalities, habits, and abilities. (p. 29)

I believe this is a better way of thinking about stories. It may be true that embedding information in stories generally leads to oversimplification or avoidance of uncertainty. On the other hand, plenty of stories can be nuanced and accurately relate complicated information. But even if storytelling does sacrifice something, it gives us the ability to digest and remember information much more quickly and easily. And the fact is that, in practice, most of us are working with very limited resources (most notably time). We need all the help we can get to process information. If stories can help, that’s generally a good thing.

Yet I generally agree with Cowen’s point about stories that seem too convenient (especially Michael Moore movies!). But I’d like to propose that, rather than setting up a mental filter to resist certain types of stories, we focus our efforts on evaluating the sources of those stories.

Here’s where trust and credibility come in. When Kahneman says he’s going to tell me a story about two characters that make up the mind, I trust that he won’t mislead me, that he won’t overstate his case, or eschew complexity so completely that I’m left with a misguided impression. I believe that he’s trying to help me get a basic grip on very complicated information as best he can given the time I’ve allotted to learn it. That’s because he comes recommended by lots of thinkers whom I respect, and because he’s extraordinarily well credentialed. I find him credible and so I trust him.

I’d urge us all to spend more effort evaluating whom we trust – whose stories we’ll buy and whose we’ll treat with Cowen-esque skepticism. And perhaps one metric for assessing credibility would in fact be to apply Cowen’s criteria (does so-and-so constantly tell black-and-white stories?). This seems a more promising path. After all, at this point if either Cowen or Kahneman told me a good vs. evil story I’d believe him.



Algorithms and the future of divorce

In Chapter 21 of Thinking, Fast and Slow Dan Kahneman discusses the frequent superiority of algorithms over intuition. He documents a wide range of studies showing that algorithms tend to beat expert intuition in areas such as medicine, business, career satisfaction and more. In general, the value of algorithms tends to be in “low-validity environments” which are characterized by “a significant degree of uncertainty and unpredictability.”*

Further, says Kahneman, the algorithms in question need not be complex:

…it is possible to develop useful algorithms without any prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula:

frequency of lovemaking minus frequency of quarrels

You don’t want your result to be a negative number.
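To make the formula concrete, here’s a minimal sketch in Python. The frequencies are invented numbers, and the only real requirement is that the two counts use the same time window:

```python
def marital_stability_score(lovemaking_per_month, quarrels_per_month):
    """Dawes's equally weighted formula: lovemaking minus quarrels."""
    return lovemaking_per_month - quarrels_per_month

# Invented example frequencies, both counted over the same month.
score = marital_stability_score(lovemaking_per_month=10, quarrels_per_month=4)
print(score)      # 6: positive, which is what you want
print(score < 0)  # False; a negative score is the warning sign
```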

Kahneman concludes the chapter with an example of how this might be used practically: hiring someone at work.

A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as “I looked into his eyes and liked what I saw.”
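As I recall, the procedure Kahneman has in mind is roughly this: pick a handful of independent traits that matter for the job, score each candidate on each trait (say, 1 to 5) based on factual questions, and commit in advance to hiring the highest total rather than overriding it with a gut feel. Here’s a sketch under those assumptions; the trait names and ratings are invented:

```python
# Equally weighted hiring score: rate each trait 1-5, sum, pick the max.
# Traits and ratings are invented for illustration.
TRAITS = ["technical skill", "reliability", "communication",
          "initiative", "teamwork", "domain knowledge"]

def hiring_score(ratings):
    """Sum of equally weighted 1-5 ratings, one per trait."""
    assert set(ratings) == set(TRAITS), "rate every trait, and nothing else"
    assert all(1 <= r <= 5 for r in ratings.values()), "ratings are 1-5"
    return sum(ratings.values())

candidates = {
    "A": dict(zip(TRAITS, [4, 3, 5, 4, 3, 4])),
    "B": dict(zip(TRAITS, [5, 2, 3, 5, 4, 3])),
}
best = max(candidates, key=lambda name: hiring_score(candidates[name]))
print(best, hiring_score(candidates[best]))  # A 23: the total decides, not the eyes
```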

All of this makes me think of online dating. This is an area where we are transitioning from almost pure intuition to a mixture of algorithms and intuition. Though algorithms aren’t making any final decisions, they are increasingly playing a major role in shaping people’s dating activity. If Kahneman is right, and if finding a significant other is a “low-validity environment”, will our increased use of algorithms lead to better outcomes? What truly excites me about this is that we should be able to measure it. Of course, doing so will require very careful attention to the various confounding variables, but I can’t help but wonder: will couples that meet online have a lower divorce rate in 20 years than couples that didn’t? Will individuals who spent significant time dating online be less likely to have been divorced than those who never tried it?
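To sketch what “measuring it” might look like: set aside the (very real) confounding problem, and the core comparison is just two divorce rates. A minimal version in Python, with invented cohort counts standing in for data nobody has yet:

```python
import math

def compare_divorce_rates(divorced_online, total_online,
                          divorced_offline, total_offline):
    """Crude two-proportion z-test on divorce rates; ignores all confounders."""
    p1 = divorced_online / total_online
    p2 = divorced_offline / total_offline
    pooled = (divorced_online + divorced_offline) / (total_online + total_offline)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_online + 1 / total_offline))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return p1, p2, p_value

# Invented numbers: 2,000 couples per cohort, tracked for 20 years.
print(compare_divorce_rates(300, 2000, 360, 2000))
```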

*One might reasonably object that this definition stacks the deck against intuition, and I think this aspect of the debate deserved a mention in the chapter. A focus on “low-validity environments” is, by construction, a focus on the areas where intuition is lousy. So how shocking is it that these are the cases where other methods do better? And yet the conclusions here are extremely valuable. Even though we know that these “low-validity” scenarios are tough to predict, we still tend to overrate our ability to predict via intuition and underrate the value of simple algorithms. So in the end this caveat – while worth making – doesn’t really take away from Kahneman’s point.


Fight bias with math

I just finished the chapter in Kahneman’s book on reasoning that dealt with “taming intuitive predictions.” Basically, we make predictions that are too extreme: we ignore regression to the mean, assume the evidence is stronger than it is, and carry the intensity of the evidence straight over into the prediction through a phenomenon called “intensity matching.”

Here’s an example (not from the book; made up by me):

Jane is a ferociously hard-working student who always completes her work well ahead of time.

What GPA do you think she graduates college with? Formulate an actual number in your mind.

So Kahneman explains “intensity matching” as being able to toggle back and forth intuitively between scales. If it sounds like Jane is in the top 10% in motivation/work ethic, she must be in the top 10% in GPA. And our mind is pretty good at adjusting between those two. I’m going to pick 3.7 as the intuitive GPA number; if yours is different you can substitute it in below.
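Mechanically, intensity matching is just carrying a percentile from one scale to another. Here’s a minimal sketch of that translation; the GPA mean and spread are assumptions I’m making up for the example:

```python
from statistics import NormalDist

# Intensity matching as percentile matching: "top 10% in work ethic"
# becomes "top 10% in GPA." The GPA mean and spread are my assumptions.
gpa_dist = NormalDist(mu=2.5, sigma=0.9)

work_ethic_percentile = 0.90            # "ferociously hard-working"
matched_gpa = gpa_dist.inv_cdf(work_ethic_percentile)
print(round(matched_gpa, 2))            # 3.65, close to my 3.7 guess
```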

Kahneman says this is biased because you’re ignoring regression to the mean, another way of saying that GPA and work ethic aren’t perfectly correlated. So here’s a model for using Kahneman’s trick to tame your prediction.

GPA = work ethic + other factors

What is the correlation between work ethic and GPA? Let’s guess 0.3 (it can be whatever you think is most accurate).

Now, what is the average GPA of college students? Let’s say 2.5 (again, the exact number doesn’t matter).

Here’s Kahneman’s formula for taming your intuitive predictions:

statistically reasonable prediction = 2.5 + 0.3(3.7 - 2.5) = 2.86

So apply the correlation between GPA and work ethic to the difference between your intuitive prediction and the mean, and then go from the mean in the direction of your intuition by that amount.
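That recipe is one line of arithmetic, so here it is as a tiny Python function with my numbers from above plugged in:

```python
def tame_prediction(intuition, mean, correlation):
    """Regress an intuitive prediction toward the mean.

    Start at the mean and move toward the intuition by the fraction
    of the distance given by the correlation.
    """
    return mean + correlation * (intuition - mean)

# My numbers from above: intuition 3.7, average GPA 2.5, correlation 0.3.
print(round(tame_prediction(intuition=3.7, mean=2.5, correlation=0.3), 2))  # 2.86
```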

I played around with some different examples here because my intuition was grappling with some issues around luck vs. static variables, but those aside, this is a neat way to counter one’s bias in the face of limited information.

I can’t help but wonder, though, if the knowledge that this exercise was designed to counter bias led anyone to avoid or at least temper intensity matching. In other words, what were your intuitions for the GPA she’d have after just reading the description of her hard work? Did the knowledge that you were biased lead you to a lower score than the one I mentioned?

Here’s what I’m getting at: if it’s possible (and this is just me riffing right now) to dial down your biases, consciously or not, when the issue of bias is on your mind, then one’s intuitions could be dialed down going into this exercise, at the point of the original GPA intuition, which could ruin the outcome. Put another way, the math above relies on accurate intensity matching, which is itself a bias! If someone came into this with that bias dialed down, they might actually end up with a worse prediction if they also applied Kahneman’s suggested correction.