Why I think the Vox crew is too cynical about bias and belief

Will this post convince anyone? I’m optimistic.

The Vox policy podcast, The Weeds, had a long segment on bias and political belief this week, which was excellent. I didn’t disagree with anything Ezra Klein, Sarah Kliff, and Matthew Yglesias said, but I think they left out some reasons for optimism. If you can only tell one story about bias, the one they told is the right one. People are really biased, and most of us struggle to interpret new information that goes against our existing beliefs. Motivated reasoning and identity-protective cognition are the norm.

All true. But there are other veins of research that paint a slightly more optimistic picture. First, we’re not all equally biased. Second, it actually is possible to make people less biased, at least in certain circumstances. And third, just because I can’t resist, algorithms can help us be less biased, if we’d just learn to trust them.

(Quick aside: Bias can refer to a lot of things. In this post I’m thinking only about a specific type: habits of thought that prevent people from reaching empirically correct beliefs about reality.)

We’re not all equally biased. Here I’m thinking of two lines of research. The first is about geopolitical forecasting, by Philip Tetlock, Barbara Mellers, Don Moore, and others, mostly at the University of Pennsylvania and Berkeley. Tetlock is famous for his 2005 book on political forecasting, but he’s done equally interesting work since then, summarized in a new popular book Superforecasting. I’ve written about that work here and here.

Basically, lots of people, including many experts, are really bad at making predictions about the world. But some people are much better than others. Some of what separates these “superforecasters” from everyone else are things like knowledge and intelligence. But some of it is also about their style of thinking. Good forecasters are open-minded, and tend to avoid using a single mental model to think about the future. Instead, they sort of “average” together multiple mental models. This is all covered in Tetlock’s 2005 book.

What Tetlock and company have shown in their more recent research is just how good these superforecasters are at taking new information and adjusting their beliefs accordingly. They change their mind frequently and subtly in ways that demonstrably correspond to more accurate beliefs about the state of the world. They really don’t look like the standard story about bias and identity protection.
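To give a sense of what that kind of frequent, subtle updating looks like, here’s a toy Bayesian-update sketch. This is my illustration only, not anything Tetlock’s forecasters literally do: the point is that a modest piece of news should move a 60% forecast by a few points, not flip it.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of the event after seeing one piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

prior = 0.60  # current estimate that the event will happen
posterior = bayes_update(
    prior,
    likelihood_if_true=0.70,   # chance of seeing this news if the event really is coming
    likelihood_if_false=0.50,  # chance of seeing the same news anyway
)
print(round(posterior, 2))  # 0.68 -- a small, deliberate revision, not a wholesale flip
```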

Another line of research in this same vein comes from Keith Stanovich at the University of Toronto, who has studied the idea of rationality, and written extensively about how to not only define it but identify it. He also finds that people with certain characteristics — open-minded personality, knowledge of probability — are less prone to common cognitive biases.

There are ways to make people less biased. When I first started reading and writing about bias it seemed hard to find proven ways to get around it. Just telling people to be more open-minded, for instance, doesn’t work. But even then there did seem to be one path: I latched on to the research on self-affirmation, which showed that if you had people focus on an element of their identity unrelated to politics, it made them more likely to accept countervailing evidence. Having been primed to think about their self-worth in a non-political context meant that new political knowledge was less threatening.

That method is in line with the research that the Vox crew discussed — it’s sort of a jujitsu move that turns our weird irrationality against itself, de-biasing via emotional priming.

But we now know that’s not the only way. I mentioned that Stanovich has documented that knowledge of probability helps people avoid certain biases. Tetlock has found something similar, and has proven that you don’t need to put people through a full course in the subject to get the effect. As I summarized earlier this year at HBR:

Training in probability can guard against bias. Some of the forecasters were given training in “probabilistic reasoning,” which basically means they were told to look for data on how similar cases had turned out in the past before trying to predict the future. Humans are surprisingly bad at this, and tend to overestimate the chances that the future will be different than the past. The forecasters who received this training performed better than those who did not.
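To make the quoted idea concrete, here’s a minimal sketch of reference-class thinking, with entirely made-up numbers (nothing here comes from the HBR piece or the forecasting tournaments): before predicting a specific case, start from how often similar cases turned out that way in the past.

```python
# Hypothetical reference class: outcomes of ten past cases that resemble the one
# being forecast (1 = the event happened, 0 = it didn't). The numbers are made up.
past_cases = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

base_rate = sum(past_cases) / len(past_cases)
print(f"Starting forecast (base rate): {base_rate:.0%}")  # 70%

# Case-specific details should only nudge the forecast away from this anchor --
# starting from the specifics instead of the base rate is where bias creeps in.
```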

There are other de-biasing techniques that can work, too. Putting people in groups can help under some circumstances, Tetlock found. And Cass Sunstein and Reid Hastie have outlined ways to help groups get past their own biases. Francesca Gino and John Beshears offer their own list of ways to address bias here.

None of this is to say it’s easy to become less biased, but it is at least sometimes possible. (Much of the work I’ve cited isn’t necessarily about politics, a particularly hard area, but recall that Tetlock’s work is on geopolitical events.)

Identifying and training rationality. So we know some people are more biased than others, and that bias can be mitigated to at least some extent through training, structured decision-making, etc. But how many organizations specifically set out to determine during the hiring process how biased someone is? How many explicitly build de-biasing into their work?

Both of these things are possible. Tetlock and his colleagues have shown that prediction tournaments work quite well at identifying who’s more and less biased. I believe Stanovich is working on ways to test for rationality. Tetlock has published an entire course on good forecasting (which is basically about being less biased) on Edge.org.

Again, I don’t really think any of this refutes what the Vox team covered. But it’s an important part of the story. Writers, political analysts, and citizens can be more or less biased, based on temperament, education, context, and training. There actually is a lot we can do to address the systematic effects of cognitive bias in political life.

If all that doesn’t work, there’s always algorithms. I mostly kid, at least in the context of politics, where values are central. But algorithms already are way less biased than people in a lot of circumstances (though in many cases they can totally have biases of their own) and they’re only likely to improve.

Of course, being humans, we also have an irrational bias against deferring to algorithms, even when we know they’re more likely to be right. But as I’ve written about, research has identified de-biasing tricks that help us overcome our bias for human judgment, too.


¿Cómo se dice… bias?

One of the coolest paper abstracts I’ve read, via MR, presented without comment:

Would you make the same decisions in a foreign language as you would in your native tongue? It may be intuitive that people would make the same choices regardless of the language they are using, or that the difficulty of using a foreign language would make decisions less systematic. We discovered, however, that the opposite is true: Using a foreign language reduces decision-making biases. Four experiments show that the framing effect disappears when choices are presented in a foreign tongue. Whereas people were risk averse for gains and risk seeking for losses when choices were presented in their native tongue, they were not influenced by this framing manipulation in a foreign language. Two additional experiments show that using a foreign language reduces loss aversion, increasing the acceptance of both hypothetical and real bets with positive expected value. We propose that these effects arise because a foreign language provides greater cognitive and emotional distance than a native tongue does.


Gizmodo’s take on objectivity is regrettable

I’ve got plenty of complaints regarding the sort of he-said-she-said faux objectivity that has overwhelmed much of the traditional media, and I’ve written as much here on the blog. It’s Jay Rosen’s “quest for innocence” – the desire to be blameless that drives impartiality off the deep end to the point where it hurts readers. And at the philosophical level I recognize that “objectivity” in the abstract is impossible.

Fine. But the basic premise behind journalistic objectivity still has tremendous value. So it’s a shame to see Gizmodo editor Matt Buchanan trashing it in a post today. Here’s the opening:

Gizmodo is not objective. It never has been, I don’t think. And I hope it never will be. Because the point isn’t to be something as meaningless—and frankly, false—as objective. The point is to tell the truth.

As long as we’re going to get philosophical, the whole notion of “the truth” is itself problematic. But in the context of journalism the point of “objectivity” is to tell the truth. And to appreciate that the truth is often very tricky, so a special ethic is required that is deeply skeptical of truth claims and devoted to exploring competing truth claims.

But Gizmodo ignores this and says:

But objectivity, very often, is bullshit. Even science, which proclaims to be objective more than any other discipline, is very often not, unable to decide whether or not coffee will kill you—or more tragically, has been systematically deployed over and over in history in the service of racism and misogyny. Objectively speaking, the earth was flat and the center of the universe, for a very long time.

Presumably some sort of honest Gizmodo ethic that just calls ’em like it sees ’em would have totally gotten that whole round earth thing right off the bat.

It gets worse.

Oh, and then there’s “bias.” What we hear about the most. That we’re biased about one product or another. What is an “unbiased” review of technology, or assessment of anything? A list of specifications, numbers jammed together with acronyms? What good does that do anybody?

I take from this that the author just doesn’t really spend much time thinking about bias. Here again it’s just not that hard, and it goes back to the point about telling the truth. Bias is about people making clearly false judgments in a systematic way. It’s measurable against broadly agreed upon truth claims, and in some cases it can be tempered by good practices of thought.

Journalistic objectivity is about a deep commitment to truth-telling paired with an acknowledgement of the pervasive power of bias that then leads to a skepticism of truth claims, which naturally breeds some interest in competing truth claims.

How does that work in practice when the quest for innocence is removed? I continue to go back to this great Jim Henley post describing the “blog-reporter ethos,” which he sees as basically the same as that of a magazine writer:

* original reporting on first-hand sources
* a frankly stated point-of-view
* tempered by a scrupulous concern for fact
* an effort to include a fair account of differing perspectives
* ending in a willingness to plainly state conclusions about the subject

This isn’t that hard (well it’s hard to do, but it’s not that hard to grasp as a concept). We don’t just need transparency. And we don’t need the concept of objectivity to be so trashed that we can’t rebuild it.


Being told “Be Rational” doesn’t de-bias

More bias research. I’ve been digging in pretty deeply on interventions that help mitigate motivated reasoning, and the results aren’t great. There’s self-affirmation, which I discussed in my Atlantic piece, but beyond that it’s pretty slim pickings. Motivated reasoning doesn’t track significantly with open-mindedness, and interventions urging participants to be rational seem to have little to no effect. I’d like to see more work on this because I can imagine better pleas (like explaining the pervasiveness of bias, or prompting in-group loyalty toward those who consider opposing arguments), but for what it’s worth, here is a bit of a paper measuring self-affirmation that also included rationality prompts:

It is of further theoretical relevance that the self-affirmation manipulation used in the present research and the identity buffering it provided exerted no effect on open-mindedness or willingness to compromise in situations heightening the importance of being rational and pragmatic. This lack of impact of self-affirmation, we argue, reflects the fact that the identity-relevant goal of demonstrating rationality (in contrast with that of demonstrating one’s ideological fidelity or of demonstrating one’s open-mindedness and flexibility) is not necessarily compromised either by accepting counterattitudinal arguments or by rejecting them. Both responses are consistent with one’s identity as a rational individual, provided that such acceptance or rejection is perceived to be warranted by the quality of those arguments. The pragmatic implication of the latter finding is worth emphasizing. It suggests that rhetorical exhortations to be rational or accusations of irrationality may succeed in heightening the individual’s commitment to act in accord with his or her identity as a rational person but fail to facilitate open-mindedness and compromise. Indeed, if one’s arguments or proposals are less than compelling, such appeals to rationality may be counterproductive. Simple pleas for open-mindedness, in the absence of addressing the identity stakes for the recipient of one’s arguments and proposals, are similarly likely to be unproductive or even counterproductive. A better strategy, our findings suggest, would be to provide the recipient with a prior opportunity for self-affirmation in a domain irrelevant to the issue under consideration and then (counterintuitively) to heighten the salience of the recipient’s partisan identity.

More discussion of this phenomenon:

Why did a focus on rationality or pragmatism alone prove a less effective debiasing strategy than the combination of identity salience and affirmation—the combination that, across all studies, proved the most effective at combating bias and closed-mindedness? Two accounts seem plausible. First, the goals of rationality and pragmatism may not fully discourage the application of prior beliefs. Because people assume their own beliefs to be more valid and objective than alternative beliefs (Armor, 1999; Lord et al., 1979; Pronin, Gilovich, & Ross, 2004; Ross & Ward, 1995), telling them to be rational may constitute a suggestion that they should continue to use their existing beliefs in evaluating the validity of new information (Lord, Lepper, & Preston, 1984). Second, making individuals’ political identity or their identity-linked convictions salient may increase the perceived significance of the political issue under debate or negotiation. Because identities are tied to long-held values (Cohen, 2003; Turner, 1991), making those identities salient or relevant to an issue may elicit moral concern, at least when people’s self-integrity no longer depends on prevailing over the other party.


Don’t blog on an empty stomach

(The clip above covers some basics of mental energy and depletion.)

The alternative title for this post was “I’m hungry; you’re wrong.” I’m not sure which is better… In any case, consider this bit from Kahneman:

Resisting this large collection of potential availability biases is possible, but tiresome. You must make the effort to reconsider your intuitions… Maintaining one’s vigilance against biases is a chore — but the chance to avoid a costly mistake is sometimes worth the effort.

Now as I understand it, this is basically a function of self-control. By taxing your brain to counteract biases, you’re drawing on a finite pool of mental energy. We know from studies of willpower that doing so can cause problems. As John Tierney reported in an excellent NYT Magazine piece on decision fatigue:

Decision fatigue helps explain why ordinarily sensible people get angry at colleagues and families, splurge on clothes, buy junk food at the supermarket and can’t resist the dealer’s offer to rustproof their new car. No matter how rational and high-minded you try to be, you can’t make decision after decision without paying a biological price. It’s different from ordinary physical fatigue — you’re not consciously aware of being tired — but you’re low on mental energy.

He also relates a fascinating study of Israeli parole hearings:

There was a pattern to the parole board’s decisions, but it wasn’t related to the men’s ethnic backgrounds, crimes or sentences. It was all about timing, as researchers discovered by analyzing more than 1,100 decisions over the course of a year. Judges, who would hear the prisoners’ appeals and then get advice from the other members of the board, approved parole in about a third of the cases, but the probability of being paroled fluctuated wildly throughout the day. Prisoners who appeared early in the morning received parole about 70 percent of the time, while those who appeared late in the day were paroled less than 10 percent of the time.

It gets more interesting:

As the body uses up glucose, it looks for a quick way to replenish the fuel, leading to a craving for sugar… The benefits of glucose were unmistakable in the study of the Israeli parole board. In midmorning, usually a little before 10:30, the parole board would take a break, and the judges would be served a sandwich and a piece of fruit. The prisoners who appeared just before the break had only about a 20 percent chance of getting parole, but the ones appearing right after had around a 65 percent chance. The odds dropped again as the morning wore on, and prisoners really didn’t want to appear just before lunch: the chance of getting parole at that time was only 10 percent. After lunch it soared up to 60 percent, but only briefly.

So, returning to the Kahneman bit, I wonder if we might observe a similar phenomenon with respect to political bloggers. Would ad hominem attacks follow the same pattern throughout the day? Might bloggers who had just eaten have the mental energy to counter their biases, to treat opponents with respect, etc.? And might that ability be depleted as the time between meals wears on and their mental energy is lowered? This could be tested pretty easily by analyzing the frequency of certain ad hominem clues like, say, the use of the word “idiot”, and then checking frequency against time of day. I’d love to see this data, and not just because I want an excuse to snack while I write.
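For what it’s worth, here’s a rough sketch of how that test might look, assuming a hypothetical corpus of posts with publication timestamps and text (the posts, words, and numbers below are all made up for illustration):

```python
from collections import Counter
from datetime import datetime

# Hypothetical corpus: (publication timestamp, post text) pairs.
posts = [
    ("2015-11-20 09:15", "This proposal deserves a careful look."),
    ("2015-11-20 11:45", "Only an idiot would believe this."),
    ("2015-11-20 17:50", "These idiots have no idea what they're doing."),
]

AD_HOMINEM_CLUES = {"idiot", "idiots", "moron", "hack"}

insults_by_hour = Counter()
posts_by_hour = Counter()

for timestamp, text in posts:
    hour = datetime.strptime(timestamp, "%Y-%m-%d %H:%M").hour
    words = (word.strip(".,!?") for word in text.lower().split())
    insults_by_hour[hour] += sum(word in AD_HOMINEM_CLUES for word in words)
    posts_by_hour[hour] += 1

# Ad hominem words per post, by hour of publication.
for hour in sorted(posts_by_hour):
    rate = insults_by_hour[hour] / posts_by_hour[hour]
    print(f"{hour:02d}:00 -- {rate:.2f} ad hominem words per post")
```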


Fight bias with math

I just finished the chapter in Kahneman’s book on reasoning that dealt with “taming intuitive predictions.” Basically, we make predictions that are too extreme, ignoring regression to the mean, assuming the evidence to be stronger than it is, and ignoring other variables through a phenomenon called “intensity matching.” 

Here’s an example (not from the book; made up by me):

Jane is a ferociously hard-working student who always completes her work well ahead of time.

What GPA do you think she graduates college with? Formulate it in your mind, an actual number.

So Kahneman explains “intensity matching” as being able to toggle back and forth intuitively between variables. If it sounds like Jane is in the top 10% in motivation/work ethic, she must be in the top 10% in GPA. And our mind is pretty good at adjusting between those two. I’m going to pick 3.7 as the intuitive GPA number; if yours is different you can substitute it in below.

Kahneman says this is biased because you’re ignoring regression to the mean, which is another way of saying that GPA and work ethic aren’t perfectly correlated. So here’s a model for using Kahneman’s trick to tame your prediction.

GPA = work ethic + other factors

What is the correlation between work ethic and GPA? Let’s guess 0.3 (it can be whatever you think is most accurate).

Now what is the average GPA of college students? Let’s say 2.5. (Again, it doesn’t really matter.)

Here’s Kahneman’s formula for taming your intuitive predictions:

0.3 × (3.7 − 2.5) + 2.5 = 2.86, a statistically reasonable prediction

So apply the correlation between GPA and work ethic to the difference between your intuitive prediction and the mean, and then go from the mean in the direction of your intuition by that amount.
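Here’s that recipe as a couple of lines of code, just a sketch of the formula above using the same made-up numbers; swap in your own intuition, baseline, and correlation:

```python
def tame_prediction(intuitive, baseline, correlation):
    """Shrink an intuitive prediction toward the baseline in proportion to the correlation."""
    return baseline + correlation * (intuitive - baseline)

# Jane's GPA: intuitive guess 3.7, average GPA 2.5, assumed correlation 0.3
print(tame_prediction(intuitive=3.7, baseline=2.5, correlation=0.3))  # 2.86
```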

I played around with some different examples here because my intuition was grappling with some issues around luck vs. static variables, but those aside, this is a neat way to counter one’s bias in the face of limited information.

I can’t help but wonder, though, if the knowledge that this exercise was designed to counter bias led anyone to avoid or at least temper intensity matching. In other words, what were your intuitions for the GPA she’d have after just reading the description of her hard work? Did the knowledge that you were biased lead you to a lower score than the one I mentioned?

Here’s what I’m getting at… If it’s possible (and this is just me riffing right now) to dial down your biases (either consciously or not) when the issue of bias is on your mind, it would seem possible that one’s intuitions could be dialed down going into this exercise, at the point of the original GPA intuition, which could ruin the outcome. Put another way, the math above relies on accurate intensity matching, which is itself a bias! If someone were able to come into this with that bias dialed down, they might actually end up with a worse prediction if they also did Kahneman’s suggested process.


Why we need journalists (good ones)

I’m in the middle of Daniel Kahneman’s Thinking, Fast and Slow. From Chapter 16:

Nisbett and Borgida found that when they presented their students with a surprising statistical fact, the students managed to learn nothing at all. But when the students were surprised by individual cases – two nice people who had not helped – they immediately made the generalization and inferred that helping is more difficult than they thought. Nisbett and Borgida summarize the results in a memorable sentence:

Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular.

Consider this the psychological case for man-on-the-street stories. Humanizing data with individual examples is essential to helping people absorb information.

But journalists aren’t themselves immune to this phenomenon. It’s essential that reporters assess the evidence behind their stories, and consciously try to overcome their bias to react more strongly to individual anecdotes than to data. But if journalists are able to overcome this bias and base their stories on good data, then their ability to apply individual cases to explain larger trends can be a crucial mechanism for informing the public.



So you’re smart, but are you reasonable?

I was searching for this phantom post pointing to research on verbal reasoning scores and bias (I swear I saw it!) when I came across a fascinating 1997 paper titled Reasoning Independently of Prior Belief and Individual Differences in Actively Open-Minded Thinking. It’s got some neat if perhaps not totally surprising conclusions.

First, a quick disclosure: I don’t know this research area at all. If this paper got trashed by all its peers or if its results haven’t held up over time, I wouldn’t know. I’ve looked at the authors’ faculty pages and it looks like they’ve done more recent work that I’ll dig into at some point soon.

OK, so what’s the point of this research: (apologies for the screenshots; it’s a non-searchable PDF)

I’ll skip how they did the experiment and go right to findings. Read for yourself if you’re interested.

From the discussion:

The first question this raises in my mind is the extent to which this sort of reasoning style is alterable, both in the short term and in the long term. To the extent that it can change over the long term, this has implications for education and beyond. (Perhaps it’s possible to teach someone to reason outside of their priors, and we do so in high school and college, but they lapse over time? Just one potential example of an implication.) In the short term this interests me because the priming of epistemic goals could be a central feature of better media design aimed at negating bias. I’m looking forward to reading more on this topic.


Overcoming bias means better social processes

I know I’ve already written twice about the Mercier/Sperber argumentation research, but this NYT piece brings to mind one more point to make. Mercier and Sperber argue that we evolved our capacity for reason largely to convince one another. They make the related point that reasoning is a social rather than an individual process. Regardless of whether they’re right about the evolutionary roots of reasoning, the latter point is critical to discussions of bias. The NYT piece talks about the research with regard to the peer review process:

Doesn’t the ideal of scientific reasoning call for pure, dispassionate curiosity? Doesn’t it positively shun the ego-driven desire to prevail over our critics and the prejudicial urge to support our social values (like opposition to the death penalty)?

Perhaps not. Some academics have recently suggested that a scientist’s pigheadedness and social prejudices can peacefully coexist with — and may even facilitate — the pursuit of scientific knowledge…

…It’s salvation of a kind: our apparently irrational quirks start to make sense when we think of reasoning as serving the purpose of persuading others to accept our point of view. And by way of positive side effect, these heated social interactions, when they occur within a scientific community, can lead to the discovery of the truth.

The point I want to make here is simple and perhaps even obvious. As science illuminates various shortcomings in our ability to reason, our best hope is to design better social processes to account for them. We already do this. From the courtroom to the newsroom, we structure our intellectual processes to help overcome our own individual shortcomings. But with increasingly sophisticated research into how we think, and with the digital public sphere providing both massive amounts of data on how we communicate and the opportunity to constantly redesign our media environment, we have the chance to design better processes that allow us to overcome our individual faults and reason better.