Anti-bias calisthenics

I highly recommend “The Brain on Trial,” an excellent piece from The Atlantic’s July/August issue. This bit stood out as relevant to some of the writing I’ve been doing on overcoming bias:

We may be on the cusp of finding new rehabilitative strategies as well, affording people better control of their behavior, even in the absence of external authority. To help a citizen reintegrate into society, the ethical goal is to change him as little as possible while bringing his behavior into line with society’s needs. My colleagues and I are proposing a new approach, one that grows from the understanding that the brain operates like a team of rivals, with different neural populations competing to control the single output channel of behavior. Because it’s a competition, the outcome can be tipped. I call the approach “the prefrontal workout.”

The basic idea is to give the frontal lobes practice in squelching the short-term brain circuits. To this end, my colleagues Stephen LaConte and Pearl Chiu have begun providing real-time feedback to people during brain scanning. Imagine that you’d like to quit smoking cigarettes. In this experiment, you look at pictures of cigarettes during brain imaging, and the experimenters measure which regions of your brain are involved in the craving. Then they show you the activity in those networks, represented by a vertical bar on a computer screen, while you look at more cigarette pictures. The bar acts as a thermometer for your craving: if your craving networks are revving high, the bar is high; if you’re suppressing your craving, the bar is low. Your job is to make the bar go down. Perhaps you have insight into what you’re doing to resist the craving; perhaps the mechanism is inaccessible. In any case, you try out different mental avenues until the bar begins to slowly sink. When it goes all the way down, that means you’ve successfully recruited frontal circuitry to squelch the activity in the networks involved in impulsive craving. The goal is for the long term to trump the short term. Still looking at pictures of cigarettes, you practice making the bar go down over and over, until you’ve strengthened those frontal circuits. By this method, you’re able to visualize the activity in the parts of your brain that need modulation, and you can witness the effects of different mental approaches you might take.

If these tactics can be successful for controlling the impulse to have a cigarette, could something similar work to counter one’s biases? Could we train the mental levers that are responsible for motivated reasoning? In some sense this – like much in bias research – should be intuitive. Take a deep breath. Count to 10. Keep your mind open. We know our susceptibility to bias isn’t static. What can neuroscience tell us about how to train ourselves to think more rationally?

(Note: this isn’t emotion vs. reason; by now we grasp that emotion is central to all reasoning, biased or not.)
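Just to make the mechanics concrete, here’s a rough sketch of that feedback loop in code. This is purely illustrative and not anything from the researchers: readSignal and renderBar are hypothetical stand-ins for the real-time fMRI measurement and the on-screen bar.

```typescript
// Minimal sketch of the feedback loop described above. readSignal() and
// renderBar() are hypothetical stand-ins; the real experiment reads
// craving-network activity from fMRI in real time.
type SignalSource = () => number; // normalized activity, 0..1

function runFeedbackSession(
  readSignal: SignalSource,
  renderBar: (level: number) => void,
  targetLevel = 0.1,
  maxTicks = 600,
): boolean {
  for (let tick = 0; tick < maxTicks; tick++) {
    const level = readSignal(); // current activity in the monitored network
    renderBar(level);           // the "thermometer" the subject watches
    if (level <= targetLevel) {
      return true; // the subject has squelched the signal
    }
  }
  return false; // session ended before the bar went all the way down
}

// Stand-in signal that decays as the subject "finds a mental avenue" that works.
let activity = 0.9;
const fakeSignal: SignalSource = () => (activity = Math.max(0, activity - 0.01));
const succeeded = runFeedbackSession(fakeSignal, (level) =>
  console.log("craving: " + "#".repeat(Math.round(level * 20))),
);
console.log(succeeded ? "bar is down" : "keep practicing");
```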


Two more items on bias interventions

I have a couple of quick items to post that relate to my last Atlantic post on embedding bias-correcting interventions in our media. One of them is quite belated; the other I just came across. First, here’s the gist of my initial post:


Context can affect bias, and on the Web — if I can riff on Lessig — code is context. So why not design media that accounts for the user’s biases and helps him or her overcome them?

There is some evidence, for instance, that “self-affirmation” exercises can limit our susceptibility to motivated reasoning. Our political beliefs reflect our conception of who we are and what we stand for. Therefore, information that runs counter to those beliefs threatens our perceived self-worth. Multiple studies have shown that having participants reaffirm their self-worth outside of politics reduces their vulnerability to motivated reasoning. (The exercises took the form of writing about a personal value unrelated to politics.)

How might I react if the pop-up at my friend’s site prompted me to write a few sentences reaffirming my value outside of politics?

(I did a follow-up post here on my blog clarifying some things and offering other examples.)

Shortly after the initial post went up I heard via Twitter from Scott Clifford, a political science grad student. He pointed out that self-affirmation exercises are less effective if participants are aware of what’s going on, and directed me to this paper, the abstract of which I’ve copied below:

Three studies investigated whether self-affirmation can proceed without awareness, whether people are aware of the influence of experimental self-affirmations, and whether such awareness facilitates or undermines the self-affirmation process. The authors found that self-affirmation effects could proceed without awareness, as implicit self-affirming primes (utilizing sentence-unscrambling procedures) produced standard self-affirmation effects (Studies 1 and 3). People were generally unaware of self-affirmation’s influence, and self-reported awareness was associated with decreased impact of the affirmation (Studies 1 and 2). Finally, affirmation effects were attenuated when people learned that self-affirmation was designed to boost self-esteem (Study 2) or told of a potential link between self-affirmation and evaluations of threatening information (Study 3). Together, these studies suggest not only that affirmation processes can proceed without awareness but also that increased awareness of the affirmation may diminish its impact.

I don’t have anything novel to add here, but Scott is right that this is a critical issue for what I’m proposing. So that’s the long-overdue item.

The next item is via Freakonomics. It’s a rather amusing example of the general point that interventions can predictably alter political beliefs:

As if we needed more evidence that people often fail to practice rational, thoughtful analysis in making a decision: a new study by Travis Carter at the Center for Decision Research at the University of Chicago’s Booth School finds that people who are briefly exposed to the American flag shift toward Republican beliefs.


More on the evolution of argument

Thanks to Edge, I posted about the new research into the evolutionary basis of reason and argument well before The New York Times picked it up. But here, as a follow-up to that NYT piece, is another post that clarifies the authors’ position. Turns out it’s right in line with what I expected. Here’s what I wrote in my previous post:

The first question that comes to mind for me is this: Why, if reasoning isn’t based at least in part on developing correct beliefs, would reasons be useful for convincing others? In other words, if I’m not using reasoning in the traditional enlightenment sense then why would I treat reasons as useful input when someone else tries to convince me? Reasons would seem to be more useful tools for convincing in a world where individuals were also using them as tools for obtaining correct beliefs.

I take that to be what the authors are saying in the NYT follow-up:

We do not claim that reasoning has nothing to do with the truth. We claim that reasoning did not evolve to allow the lone reasoner to find the truth. We think it evolved to argue. But arguing is not only about trying to convince other people; it’s also about listening to their arguments. So reasoning is two-sided. On the one hand, it is used to produce arguments. Here its goal is to convince people. Accordingly, it displays a strong confirmation bias — what people see as the “rhetoric” side of reasoning. On the other hand, reasoning is also used to evaluate arguments. Here its goal is to tease out good arguments from bad ones so as to accept warranted conclusions and, if things go well, get better beliefs and make better decisions in the end.

Also, apologies for the light blogging lately. I’ve been writing a bunch about clean energy over at the NECEC blog in the last few days, so if you’re really desperate to read something of mine, you’ll find new posts over there.


Examples of how media could help overcome bias

I have a piece up at The Atlantic (went up Friday) titled “The Future of Media Bias” that I hope you’ll read. I suppose the title is deliberately misleading, since the topic isn’t media bias in the typical sense. Here’s the premise:

Context can affect bias, and on the Web — if I can riff on Lessig — code is context. So why not design media that accounts for the user’s biases and helps him overcome them?

Head over to The Atlantic to read it. In the meantime, I want to expand a bit on some of my ideas.

1) This is not just about pop-up ads. The conceit of the post is visiting a conservative friend’s site and being hit with red-meat pop-ups that act as priming mechanisms. But that was just a way of introducing my point. (Evidence from comments and Twitter suggests this may have distracted some.) So while pop-ups can illustrate the premise, the premise is in no way restricted to pop-ups, either as they exist in practice to sell ads or as they might be used in theory.

2) More on self-affirmation. I might have been clearer on how self-affirmation exercises work, though they were not described in detail in either paper I referenced. Here’s how I understand them: you’re asked to select a value that is important to your self-worth – maybe something like honesty – and then you write a few sentences about how you live by that value. Writing out a few sentences about what an honest and therefore valuable person you are makes you less worried about information threatening your self-worth.
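To make that concrete, here’s a rough sketch of what the exercise might look like as a web prompt. The value list, the word minimum, and all the names are my own invention, not anything from the papers; and per the awareness research above, the prompt deliberately says nothing about bias or politics.

```typescript
// Hypothetical sketch of the self-affirmation exercise as a web prompt.
interface AffirmationEntry {
  value: string;      // the personal value the reader picked, e.g. "honesty"
  reflection: string; // a few sentences on how they live by that value
  completedAt: Date;
}

// Invented list; the studies had participants pick a value unrelated to politics.
const PERSONAL_VALUES = ["honesty", "creativity", "family", "generosity"];

function collectAffirmation(value: string, reflection: string): AffirmationEntry {
  if (!PERSONAL_VALUES.includes(value)) {
    throw new Error(`pick one of: ${PERSONAL_VALUES.join(", ")}`);
  }
  // Require a few full sentences, not a one-word answer.
  if (reflection.trim().split(/\s+/).length < 20) {
    throw new Error("please write a few full sentences");
  }
  return { value, reflection, completedAt: new Date() };
}
```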

I want to address a few potential objections to embedding an exercise like this in media. One might argue that no one would complete the exercise (I’m imagining it as a pop-up right now). Perhaps. But you could incentivize it: anyone who completes it gets their comments displayed higher, or something like that. Build incentives into a community reputation system. A second objection is that maybe you could get people to complete it once, but it’s impractical to think anyone would do it before every article. Fair point. But perhaps you just need people to do it once, and then it’s displayed alongside or above the content for the reader to view, to prime them. Finally, I want to note that this is just one example, and I don’t think my argument rides too much on it. I used it because a) there was lots of good research behind it and b) it fit nicely with the pop-up conceit of the post.
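Here’s a sketch of those two ideas together: a one-time completion that earns a reputation boost and gets re-displayed as a prime on later visits. All the shapes and field names are hypothetical.

```typescript
// Hypothetical reader profile tying the exercise to a reputation system.
interface ReaderProfile {
  id: string;
  reputationBoost: number; // e.g. used to rank the reader's comments higher
  affirmation?: { value: string; reflection: string }; // saved from the prompt
}

function recordCompletion(
  profile: ReaderProfile,
  value: string,
  reflection: string,
): ReaderProfile {
  return {
    ...profile,
    affirmation: { value, reflection },
    reputationBoost: profile.reputationBoost + 1, // the incentive
  };
}

function primeForArticle(profile: ReaderProfile): string | null {
  // Show the reader their own words alongside the content, rather than
  // asking them to redo the exercise before every article.
  return profile.affirmation
    ? `You wrote about ${profile.affirmation.value}: "${profile.affirmation.reflection}"`
    : null;
}
```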

3) More examples. One paper I referenced re: global warming suggests that the headline can affect susceptibility to confirmation/disconfirmation biases. So what if the headline changed depending on the user’s biases? This would be tricky in various ways, but it’s hardly inconceivable. In fact, I wish I’d mentioned it, since in some ways it seems more practical than the self-affirmation exercises. It would, however, introduce a lot of new difficulties into the headline-writing process.
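Purely as a sketch, bias-aware headline selection might look something like this. Neither the profile shape nor the two framings come from the paper I cited; its finding is only that framline framing affects susceptibility, so the specifics below are invented.

```typescript
// Hypothetical bias-aware headline selection.
type Stance = "receptive" | "resistant"; // reader's predicted stance toward the claim

interface HeadlineVariants {
  direct: string; // for receptive readers
  hedged: string; // softer framing for readers likely to disconfirm
}

function pickHeadline(variants: HeadlineVariants, stance: Stance): string {
  return stance === "resistant" ? variants.hedged : variants.direct;
}

// Example usage with invented framings.
console.log(
  pickHeadline(
    {
      direct: "New data confirm the warming trend",
      hedged: "What the latest temperature data show, and what they don't",
    },
    "resistant",
  ),
);
```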

Another thing I might have mentioned is the ordering of content. Imagine you’re looking at a Room for Debate feature at the NYT. Which post should you see first? In the course of researching the Atlantic piece, I came across some evidence that the order in which you receive information matters (with the first piece being privileged), but I’m having trouble finding where I saw that now. And it’s not obvious that that kind of effect would persist for political information. Still, there may well be room to explore ordering as a mechanism for dealing with bias.
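For what it’s worth, here’s one possible ordering policy, assuming the primacy effect I just hedged. The predictedAgreement score is made up, and whether this particular ordering actually helps is exactly what would need testing.

```typescript
// Hypothetical per-reader ordering of a multi-author debate feature.
interface DebatePost {
  author: string;
  title: string;
  predictedAgreement: number; // 0..1, for this particular reader
}

function orderForReader(posts: DebatePost[]): DebatePost[] {
  // One possible policy: lead with the most congenial post, so the
  // privileged first impression comes from a voice the reader trusts.
  return [...posts].sort((a, b) => b.predictedAgreement - a.predictedAgreement);
}
```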

Finally – and at this point I’m working off of no research, just thinking out loud – what if you established the author’s credibility by showing the work the user was most likely to agree with, in cases of disconfirmation bias (and the reverse in confirmation bias cases)? So, say I’m reading about climate change and you know I’d be biased against evidence for it. But the author making the case for that evidence wrote something last week that I do agree with, that does fit my worldview. What if a teaser for that piece were displayed alongside the global warming content? Would that help?
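A quick sketch of that teaser logic, with invented thresholds and shapes:

```typescript
// Hypothetical credibility teaser: next to a piece the reader is predicted
// to resist, surface an earlier piece by the same author that the reader is
// predicted to agree with.
interface Piece {
  author: string;
  title: string;
  predictedAgreement: number; // 0..1, for this particular reader
}

function pickTeaser(current: Piece, archive: Piece[]): Piece | null {
  if (current.predictedAgreement >= 0.5) return null; // reader isn't resistant
  const congenial = archive
    .filter((p) => p.author === current.author && p.title !== current.title)
    .sort((a, b) => b.predictedAgreement - a.predictedAgreement)[0];
  // Only tease when the author has something genuinely congenial on record.
  return congenial && congenial.predictedAgreement > 0.7 ? congenial : null;
}
```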

I’ve also wondered if asking users to solve fairly simple math problems would prime them to think more rationally, but again, that’s not anything based on research; just a thought.

So that’s it. A few clarifications and some extra thoughts. To me, the hope of this piece is to inject the basic idea into the dialogue, so that researchers start to think of media as an avenue for testing their theories, and so that designers, coders, journalists, etc. start thinking of this research as input for innovation in how they create new media.

UPDATE: One more cool one I forgot to mention… there’s some evidence that presenting information graphically makes a point so forcefully that it basically takes too much mental energy to rationalize around it. You can read more about that here. This column in the Boston Globe refers to a study concluding – in the columnist’s words – that “people will actually update their beliefs if you hit them ‘between the eyes’ with bluntly presented, objective facts that contradict their preconceived ideas.” This strikes me as along the same lines as the graph experiment. So these are things to keep in mind as well.


Your memories are bought and paid for

I’ve been reading a lot about cognitive biases lately for a piece I recently finished (which hopefully will be published soon), and I wanted to share something only slightly related to that topic that didn’t make it in. Jonah Lehrer has a characteristically fascinating post at Wired on how ads implant false memories. You really should read it all. But here’s a bit that struck me, from the perspective of someone interested in media:

A new study, published in The Journal of Consumer Research, helps explain both the success of this marketing strategy and my flawed nostalgia for Coke. It turns out that vivid commercials are incredibly good at tricking the hippocampus (a center of long-term memory in the brain) into believing that the scene we just watched on television actually happened. And it happened to us.

The experiment went like this: 100 undergraduates were introduced to a new popcorn product called “Orville Redenbacher’s Gourmet Fresh Microwave Popcorn.” (No such product exists, but that’s the point.) Then, the students were randomly assigned to various advertisement conditions. Some subjects viewed low-imagery text ads, which described the delicious taste of this new snack food. Others watched a high-imagery commercial, in which they watched all sorts of happy people enjoying this popcorn in their living room. After viewing the ads, the students were then assigned to one of two rooms. In one room, they were given an unrelated survey. In the other room, however, they were given a sample of this fictional new popcorn to taste. (A different Orville Redenbacher popcorn was actually used.)
One week later, all the subjects were quizzed about their memory of the product. Here’s where things get disturbing: While students who saw the low-imagery ad were extremely unlikely to report having tried the popcorn, those who watched the slick commercial were just as likely to have said they tried the popcorn as those who actually did. Furthermore, their ratings of the product were as favorable as those who sampled the salty, buttery treat. Most troubling, perhaps, is that these subjects were extremely confident in these made-up memories. The delusion felt true. They didn’t like the popcorn because they’d seen a good ad. They liked the popcorn because it was delicious.

Read the whole post to learn more about the science. But isn’t the text vs. video variable fascinating?


After the rapture

How will people like this respond when Judgement Day / the rapture doesn’t arrive tomorrow?

Here’s a clue:

“A MAN WITH A CONVICTION is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point.” So wrote the celebrated Stanford University psychologist Leon Festinger (PDF), in a passage that might have been referring to climate change denial—the persistent rejection, on the part of so many Americans today, of what we know about global warming and its human causes. But it was too early for that—this was the 1950s—and Festinger was actually describing a famous case study in psychology.

Festinger and several of his colleagues had infiltrated the Seekers, a small Chicago-area cult whose members thought they were communicating with aliens—including one, “Sananda,” who they believed was the astral incarnation of Jesus Christ. The group was led by Dorothy Martin, a Dianetics devotee who transcribed the interstellar messages through automatic writing.

Through her, the aliens had given the precise date of an Earth-rending cataclysm: December 21, 1954. Some of Martin’s followers quit their jobs and sold their property, expecting to be rescued by a flying saucer when the continent split asunder and a new sea swallowed much of the United States. The disciples even went so far as to remove brassieres and rip zippers out of their trousers—the metal, they believed, would pose a danger on the spacecraft.

Festinger and his team were with the cult when the prophecy failed. First, the “boys upstairs” (as the aliens were sometimes called) did not show up and rescue the Seekers. Then December 21 arrived without incident. It was the moment Festinger had been waiting for: How would people so emotionally invested in a belief system react, now that it had been soundly refuted?

At first, the group struggled for an explanation. But then rationalization set in. A new message arrived, announcing that they’d all been spared at the last minute. Festinger summarized the extraterrestrials’ new pronouncement: “The little group, sitting all night long, had spread so much light that God had saved the world from destruction.” Their willingness to believe in the prophecy had saved Earth from the prophecy!

From that day forward, the Seekers, previously shy of the press and indifferent toward evangelizing, began to proselytize. “Their sense of urgency was enormous,” wrote Festinger. The devastation of all they had believed had made them even more certain of their beliefs.

That’s from Chris Mooney’s excellent article on the science of denial.


Can your newspaper make you less biased?

Chris Mooney had a great piece at Mother Jones recently that has been making the rounds. The title is “The Science of Why We Don’t Believe in Science” and it’s a good primer on some of the literature on how we rationalize to protect our biases and more generally our worldview. If you haven’t read it yet I highly recommend it. Here’s the gist:

Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. “They retrieve thoughts that are consistent with their previous beliefs,” says Taber, “and that will lead them to build an argument and challenge what they’re hearing.”

In other words, when we think we’re reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we’re being scientists, but we’re actually being lawyers (PDF). Our “reasoning” is a means to a predetermined end—winning our “case”—and is shot through with biases. They include “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.

This is related to the concept of “sacred beliefs” that I’ve been harping on lately. The general point I want to make is that the problem Mooney sketches out can be viewed as a challenge that media can help overcome. What if the media you’re consuming knew from your history and profile that you had certain biases, and therefore presented the information in a way that made it easier for you to overcome those biases?

There are clearly a number of challenges here. Determining bias is tricky in the first place, because it has to be done in reference to some “truth”, which, given the nature of the problem, is likely controversial. And even once that is done there would need to be a way to measure progress in overcoming biases. But we take for granted that digital media offers the opportunity to design experiences that are customized and interactive in a way newspapers and other “old media” are not. Why not focus on cognitive personalization aimed at helping us think more rationally?
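To make the idea less abstract, here’s a speculative sketch of what such a profile and render-time check might look like. Everything in it is invented, and estimating bias against some ground truth is exactly the hard part I just flagged.

```typescript
// Hypothetical per-reader bias profile, inferred from reading history.
interface BiasProfile {
  readerId: string;
  // topics where history suggests motivated reasoning, with a rough
  // strength estimate (0..1); how to estimate this is the open problem
  resistantTopics: Map<string, number>;
}

interface Presentation {
  showAffirmationPrime: boolean; // see the self-affirmation posts above
  useHedgedHeadline: boolean;
}

function presentFor(profile: BiasProfile, topic: string): Presentation {
  const resistance = profile.resistantTopics.get(topic) ?? 0;
  // Invented thresholds: stronger predicted resistance triggers more
  // of the interventions discussed in earlier posts.
  return {
    showAffirmationPrime: resistance > 0.5,
    useHedgedHeadline: resistance > 0.3,
  };
}
```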


Exposing sacred arguments

Moral psychologist Jonathan Haidt gave a talk in February arguing that the social psychology field was a “moral community” by virtue of its political liberalism, and that this was compromising its ability to do good science. I want to use one piece of his argument as a jumping off point to discuss what I see as one of the biggest obstacles to productive public discussion. Haidt:

Sacredness is a central and subtle concept in sociology and anthropology, but we can get a simple working definition of it from Phil Tetlock [a social psychologist at the University of Pennsylvania]. Tetlock defines a sacred value as “any value that a moral community implicitly or explicitly treats as possessing infinite or transcendental significance …” If something is a sacred value, you can’t make utilitarian tradeoffs; you can’t think in a utilitarian way. You can’t sell a little piece of it for a lot of money, for example. Sacredness precludes tradeoffs. When sacred values are threatened, we turn into “intuitive theologians.” That is, we use our reasoning not to find the truth, but to find ways to defend what we hold sacred…

…Sacralizing distorts thinking. These distortions are easy for outsiders to see, but they are invisible to those inside the force field.

For the most part there’s nothing wrong with sacredness, per se. The problem arises when the sacred principle is challenged by someone outside the moral community. As Haidt notes, the result is that reasoning comes to the aid of justifying a principle, and that leads to sloppy arguments. If your commitment to the principle of nonviolence is challenged, for instance, you may start arguing about the ineffectiveness of military interventions. But what’s really driving that argument isn’t the facts; it’s the desire to defend a principle that in your moral vision really doesn’t even need defending. If you’re a pacifist, that’s fine. What’s not fine is marshalling weak arguments when a sacred view is challenged.

Now, in practice, my guess is that few things are held as entirely sacred, but many things are held nearly sacred. By that I mean that for most people, few beliefs are beyond any tradeoffs, but quite a few principles are sacred enough to require that an exceptionally high bar be cleared before they’re willing to start trading them away.

There’s been some good back-and-forth in the libertarian blogosphere recently on the extent to which policy differences between liberals and libertarians are caused by different opinions on empirical matters, versus different values or principles. Ilya Somin at Volokh Conspiracy is thinking along the same lines as I am, writing:

Within political philosophy, many scholars are either pure utilitarian consequentialists (thinkers who believe that we are justified in doing whatever it takes to maximize happiness) or pure deontologists (people who argue that we must respect certain rights absolutely, regardless of consequences)… Outside philosophy departments, however, few people endorse either of these positions.

So sacredness in practice is probably a matter of extent. But that does nothing to detract from its importance in public debate. If someone is arguing in favor of a principle they hold sacred I want to know. If you’ve written an op-ed detailing all the reasons military intervention in Libya would be ill-advised, the fact that you’re a pacifist – that nonviolence is a sacred principle for you – is extremely relevant.

I see the identification of sacredness as a crucial challenge in the public sphere, and therefore a crucial challenge within media. As I’ve mentioned before, there’s lots of talk about the importance of transparency in the brave new world of online media, and I’m in favor of that. But transparency means a lot of things (again, as I’ve discussed before). It’s easy to say “I’m a liberal, I generally favor x, y and z and am a fan of these thinkers or politicians. I voted for so-and-so for president.” That’s one kind of transparency. But it’s a very thin transparency. I’d love to see some media experiments that go further and try to identify sacred principles. Let’s play around with ways of telling me the author is a pacifist.
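As a starting point, here’s a sketch of what that thicker transparency could look like as structured data attached to a byline. The schema is entirely my invention, and eliciting these principles honestly is the hard part, as the next paragraph admits.

```typescript
// Hypothetical structured disclosure a byline could carry.
interface AuthorDisclosure {
  name: string;
  thin: string[]; // the easy kind: "liberal", "voted for so-and-so"
  principles: {
    principle: string; // e.g. "nonviolence"
    extent: "near-sacred" | "sacred"; // sacredness as a matter of degree
  }[];
}

const disclosure: AuthorDisclosure = {
  name: "Example Op-Ed Author",
  thin: ["generally favors x, y and z"],
  principles: [{ principle: "nonviolence", extent: "sacred" }],
};

// An op-ed on Libya by this author could then display alongside the text:
// "The author holds nonviolence as a sacred principle."
console.log(disclosure.principles.map((p) => p.principle).join(", "));
```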

This is a hard problem because most of us have a rough time identifying what we consider sacred. As Haidt notes, it’s often obvious only to outsiders. And once extents are thrown into the mix, things get even messier. In a way, the blogosphere offers a rudimentary partial fix just by removing word/page limits: when there’s no limit on length, you can talk endlessly about the authors’ underlying principles, as the libertarian discussion makes clear. But I think we can do better. I don’t have many good specifics on how just yet, but it’s something I think about. Ideas?