Storytelling, Trust and Bias

One intellectual rule of thumb on which I rely is that one disagrees with Tyler Cowen at one’s peril. Cowen is an economist at George Mason University and one of the two bloggers at Marginal Revolution, a very popular blog on economics and culture. So while I won’t call what I’m about to write a disagreement, I do want to offer some thoughts on Cowen’s TEDx talk on storytelling. (The talk is from 2009, but I came across a transcript of it for the first time today.) Here’s Cowen’s typically interesting premise:

So what are the problems of relying too heavily on stories? …I think of a few major problems when we think too much in terms of narrative. First, narratives tend to be too simple. The point of a narrative is to strip it away, not just into 18 minutes, but most narratives you could present in a sentence or two. So when you strip away detail, you tend to tell stories in terms of good vs. evil, whether it’s a story about your own life or a story about politics. Now, some things actually are good vs. evil. We all know this, right? But I think, as a general rule, we’re too inclined to tell the good vs. evil story. As a simple rule of thumb, just imagine every time you’re telling a good vs. evil story, you’re basically lowering your IQ by ten points or more. If you just adopt that as a kind of inner mental habit, it’s, in my view, one way to get a lot smarter pretty quickly. You don’t have to read any books. Just imagine yourself pressing a button every time you tell the good vs. evil story, and by pressing that button you’re lowering your IQ by ten points or more.

Another set of stories that are popular – if you know Oliver Stone movies or Michael Moore movies. You can’t make a movie and say, “It was all a big accident.” No, it has to be a conspiracy, people plotting together, because a story is about intention. A story is not about spontaneous order or complex human institutions which are the product of human action but not of human design. No, a story is about evil people plotting together. So you hear stories about plots, or even stories about good people plotting things together, just like when you’re watching movies. This, again, is reason to be suspicious.

It is certainly true that we rely heavily on stories. And it’s further true that in doing so we tend towards oversimplification. The human mind doesn’t do well with uncertainty; we seek narrative coherence even where it isn’t justified. And yet stories are deeply useful. They help us process information more easily. In his recent book Thinking, Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman relies on a kind of story to relate the way we think, through the use of the “characters” System 1 and System 2. The former is the intuitive mind, making quick decisions below the level of consciousness. The latter is what we think of as the rational mind, coming to our aid when we consciously reason through something. The dichotomy has some acceptance in the psychology literature (not as a distinction within the brain, but as a theoretical distinction for studying thinking), but some of Kahneman’s colleagues objected to his personification of these “characters.” Here’s Kahneman explaining his conceit:

System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters. Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home. You may well ask: What is the point of introducing fictitious characters with ugly names into a serious book? The answer is that the characters are useful because of some quirks of our minds, yours and mine. A sentence is understood more easily if it describes what an agent (System 2) does than if it describes what something is, what properties it has. In other words, “System 2” is a better subject for a sentence than “mental arithmetic.” The mind – especially System 1 – appears to have a special aptitude for the construction and interpretation of stories about active agents, who have personalities, habits, and abilities. (p. 29)

I believe this is a better way of thinking about stories. It may be true that embedding information in stories generally leads to oversimplification or avoidance of uncertainty. On the other hand, plenty of stories can be nuanced and accurately relate complicated information. But even if storytelling does sacrifice something, it gives us the ability to digest and remember information much more quickly and easily. And the fact is that, in practice, most of us are working with very limited resources (most notably time). We need all the help we can get to process information. If stories can help, that’s generally a good thing.

Yet I generally agree with Cowen’s point about stories that seem too convenient (especially Michael Moore movies!). But I’d like to propose that, rather than setting up a mental filter to resist certain types of stories, we focus our efforts on evaluating the sources of those stories.

Here’s where trust and credibility come in. When Kahneman says he’s going to tell me a story about two characters that make up the mind, I trust that he won’t mislead me, overstate his case, or eschew complexity so completely that I’m left with a mistaken impression. I believe that he’s trying to help me get a basic grip on very complicated information as best he can, given the time I’ve allotted to learn it. That’s because he comes recommended by lots of thinkers whom I respect, and because he’s extraordinarily well credentialed. I find him credible, and so I trust him.

I’d urge us all to spend more effort evaluating whom we trust – whose stories we’ll buy and whose we’ll treat with Cowen-esque skepticism. And perhaps one metric for assessing credibility would in fact be to apply Cowen’s criteria (does so-and-so constantly tell black-and-white stories?). This seems a more promising path. After all, at this point, if either Cowen or Kahneman told me a good vs. evil story, I’d believe him.


Google News: Trust the algorithm

I’ve written about the potential dangers of Google and Facebook using algorithms to recommend news, with the basic fear being that they’ll recommend stories that confirm my biases rather than “feed me my vegetables.” But Nieman Lab has an interview with Krishna Bharat, the founder of Google News, who has quite a different take on what he’s doing:

“Over time,” he replied, “I realized that there is value in the fact that individual editors have a point of view. If everybody tried to be super-objective, then you’d get a watered-down, bland discussion,” he notes. But “you actually benefit from the fact that each publication has its own style, it has its own point of view, and it can articulate a point of view very strongly.” Provided that perspective can be balanced with another — one that, basically, speaks for another audience — that kind of algorithmic objectivity allows for a more nuanced take on news stories than you’d get from individual editors trying, individually, to strike a balance. “You really want the most articulate and passionate people arguing both sides of the equation,” Bharat says. Then, technology can step in to smooth out the edges and locate consensus. “From the synthesis of diametrically opposing points of view,” in other words, “you can get a better experience than requiring each of them to provide a completely balanced viewpoint.”

“That is the opportunity that having an objective, algorithmic intermediary provides you,” Bharat says. “If you trust the algorithm to do a fair job and really share these viewpoints, then you can allow these viewpoints to be quite biased if they want to be.”

[emphasis from Nieman Lab]
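To make Bharat’s idea a bit more concrete, here’s a minimal sketch of what “synthesis of diametrically opposing points of view” might look like in code. To be clear, everything here is invented for illustration: the Article fields (a per-piece stance score and quality score) and the pair_opposing helper are my assumptions, not Google News’s actual data model or ranking algorithm.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    source: str
    stance: float   # hypothetical stance score in [-1, 1]; sign encodes which side of the debate
    quality: float  # hypothetical editorial-quality score in [0, 1]

def pair_opposing(articles, min_strength=0.3):
    """Return the strongest voice from each side of a story, or None if there's no real debate.

    Illustrative only: rather than asking any single source to be balanced,
    pick the highest-quality, most clearly argued article on each side.
    """
    one_side = [a for a in articles if a.stance <= -min_strength]
    other_side = [a for a in articles if a.stance >= min_strength]
    if not one_side or not other_side:
        return None  # only one viewpoint present; nothing to synthesize

    def strongest(side):
        # Reward articles that are both well made and clearly argued.
        return max(side, key=lambda a: a.quality * abs(a.stance))

    return strongest(one_side), strongest(other_side)

if __name__ == "__main__":
    story = [
        Article("New rules will save the industry", "Outlet A", -0.8, 0.90),
        Article("New rules will strangle the industry", "Outlet B", 0.7, 0.85),
        Article("Lawmakers meet to discuss rules", "Wire C", 0.0, 0.95),
    ]
    print(pair_opposing(story))  # pairs Outlet A's take with Outlet B's
```

The design choice worth noticing: no individual source is asked to be balanced. Each article can be “quite biased if it wants to be”; fairness lives entirely in the pairing step, which is exactly why the question of whether you can trust the algorithm carries so much weight.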

A few thoughts:

1. It is very encouraging that Krishna Bharat is thinking about this, even if only as a piece of what he’s doing.

2. He’s right that whether you can trust the algorithm matters tremendously.

3. I remain skeptical that there’s any incentive for the algorithm to challenge me. Does he believe that challenging me will give me something I want and am more likely to click on, so that this vision fits nicely with Google’s bottom line? Or is he suggesting that he and his team are worried about more than the bottom line?

Bottom line: it’s great that he’s thinking about this, but if he wants us to truly trust the algorithm, he needs to explain why we should really believe that challenging readers is a priority.