Bylines and “cultural credibility”

My latest Atlantic post is up:

As I wrote in a previous story, media outlets have an opportunity to design media that accounts for users’ biases. Author bios present such a chance. Without any change to the authors or their content, bios could be constructed in a way that maximizes cultural credibility by tapping into the social graph.

Please go read the rest!


Examples of how media could help overcome bias

I have a piece up at The Atlantic (went up Friday) titled “The Future of Media Bias” that I hope you’ll read. I suppose the title is deliberately misleading, since the topic isn’t media bias in the typical sense. Here’s the premise:

Context can affect bias, and on the Web — if I can riff on Lessig — code is context. So why not design media that accounts for the user’s biases and helps him overcome them?

Head over to The Atlantic to read it. In the meantime, I want to expand a bit on some of my ideas.

1) This is not just about pop-up ads. The conceit of the post is visiting a conservative friend’s site and being hit with red-meat pop-ups that act as priming mechanisms. But that was just a way of introducing my point. (Evidence from comments and Twitter suggests it may have distracted some readers.) So while pop-ups can illustrate the premise above, the premise isn’t restricted to pop-ups, either as they exist in practice to sell ads or as they might be used in theory.

2) More on self-affirmation. I might have been clearer about how self-affirmation exercises work; they aren’t described in detail in either paper I referenced. Here’s how I understand them: you’re asked to select a value that is important to your self-worth – maybe something like honesty – and then you write a few sentences about how you live by that value. Writing out a few sentences about what an honest and therefore valuable person you are makes you less worried about information that threatens your self-worth.

I want to address a few potential objections to embedding an exercise like this in media. One might argue that no one would complete the exercise (I’m imagining it as a pop-up right now). Perhaps. But you could incentivize it: anyone who completes it gets their comments displayed higher, or something like that – build the incentive into a community reputation system. A second objection is that maybe you could get people to complete it once, but it’s impractical to think anyone would do it before every article. Fair point. But perhaps you only need people to do it once, after which their affirmation is displayed alongside or above the content, where viewing it serves as the prime. Finally, I want to note that this is just one example, and I don’t think my argument rides too much on it. I used it because a) there was lots of good research behind it and b) it fit nicely with the pop-up conceit of the post.
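To make the incentive idea concrete, here’s a minimal sketch in TypeScript of how a site might tie a one-time affirmation exercise to a comment-reputation system. Everything in it – the record type, the score boost, the banner – is hypothetical, just one way the pieces could fit together:

```typescript
// Hypothetical sketch: a one-time self-affirmation exercise whose completion
// feeds a community reputation system. All names are illustrative.

interface AffirmationRecord {
  userId: string;
  value: string;      // the value the user selected, e.g. "honesty"
  statement: string;  // the few sentences they wrote about living by it
  completedAt: Date;
}

const affirmations = new Map<string, AffirmationRecord>();

function recordAffirmation(userId: string, value: string, statement: string): void {
  affirmations.set(userId, { userId, value, statement, completedAt: new Date() });
}

// Incentive: comments from users who completed the exercise rank a bit higher.
function commentScore(baseScore: number, userId: string): number {
  return affirmations.has(userId) ? baseScore * 1.1 : baseScore;
}

// Re-display the user's own affirmation above an article, so the prime
// recurs without redoing the exercise before every story.
function affirmationBanner(userId: string): string | null {
  const record = affirmations.get(userId);
  return record ? `You value ${record.value}: “${record.statement}”` : null;
}
```

The key design choice is that the exercise happens once; afterward the reader’s own words are simply re-displayed as the prime.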

3) More examples. One paper I referenced re: global warming suggests that the headline can affect susceptibility to confirmation/disconfirmation biases. So what if the headline changed depending on the user’s biases? This would be tricky in various ways, but it’s hardly inconceivable. In fact, I wish I’d mentioned it, since in some ways it seems more practical than the self-affirmation exercises. It would, however, introduce a lot of new difficulties into the headline-writing process.
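As a sketch of what bias-dependent headlines might look like in practice: editors would still write every variant, and code would only select among them based on a coarse guess at the reader’s stance. The `Stance` type and the variant names here are my inventions, not anything from the paper:

```typescript
// Hypothetical sketch: select among editor-written headline variants based on
// a coarse estimate of the reader's prior stance toward the story's claim.

type Stance = "skeptical" | "neutral" | "convinced";

interface HeadlineVariants {
  skeptical: string; // framed to lower a resistant reader's guard
  neutral: string;
  convinced: string;
}

function pickHeadline(variants: HeadlineVariants, readerStance: Stance): string {
  return variants[readerStance];
}

// Usage: every variant is still written by a human editor.
const climateStory: HeadlineVariants = {
  skeptical: "What the Latest Climate Data Does – and Doesn't – Show",
  neutral: "New Study Examines Two Decades of Climate Data",
  convinced: "Study Adds to the Evidence of a Warming Planet",
};
console.log(pickHeadline(climateStory, "skeptical"));
```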

Another thing I might have mentioned is the ordering of content. Imagine you’re looking at a Room for Debate feature at the NYT. Which post should you see first? In the course of researching the Atlantic piece, I came across some evidence that the order in which you receive information matters (with the first piece being privileged), though I’m having trouble finding where I saw that now. And it’s not obvious that that kind of effect would persist in cases of political information. Still, there may well be room to explore ordering as a mechanism for dealing with bias.
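If the primacy effect did hold up, the mechanics would be simple – a sort keyed to some estimate of how far each post sits from the reader’s prior view. The distance score below is pure hand-waving on my part; estimating it is the real problem:

```typescript
// Hypothetical sketch: order a multi-post debate so the piece closest to the
// reader's prior view appears first, on the theory that the first item read
// is privileged (a primacy effect).

interface DebatePost {
  title: string;
  author: string;
  distanceFromReader: number; // 0 = matches the reader's prior, 1 = opposes it
}

function orderForReader(posts: DebatePost[]): DebatePost[] {
  // Copy before sorting so the canonical ordering is left untouched.
  return [...posts].sort((a, b) => a.distanceFromReader - b.distanceFromReader);
}
```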

Finally – and at this point I’m working off of no research, just thinking out loud – what if you established the author’s credibility by showing the work the user is most likely to agree with, in cases of disconfirmation bias (and the reverse in confirmation-bias cases)? Say I’m reading about climate change, and you know I’m biased against the evidence for it. But the author making the case for that evidence wrote something last week that I do agree with, that does fit my worldview. What if a teaser for that piece were displayed alongside the global-warming content? Would that help?
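Sketched in code, the mechanism is just a lookup over the author’s archive: surface the piece this reader is most likely to agree with. As with the ordering example, the agreement estimate is assumed, not solved:

```typescript
// Hypothetical sketch: next to a story the reader is predisposed to resist,
// show a teaser for the same author's piece the reader most likely agrees with.

interface ArchivePiece {
  title: string;
  url: string;
  predictedAgreement: number; // 0..1, estimated fit with this reader's worldview
}

function credibilityTeaser(authorArchive: ArchivePiece[]): ArchivePiece | null {
  if (authorArchive.length === 0) return null;
  return authorArchive.reduce((best, piece) =>
    piece.predictedAgreement > best.predictedAgreement ? piece : best
  );
}
```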

I’ve also wondered if asking users to solve fairly simple math problems would prime them to think more rationally, but again, that’s not anything based on research; just a thought.

So that’s it: a few clarifications and some extra thoughts. My hope for the piece is to inject the basic idea into the dialogue, so that researchers start to think of media as an avenue for testing their theories, and so that designers, coders, journalists, etc. start thinking of this research as input for innovation in how they create new media.

UPDATE: One more cool one I forgot to mention… there’s some evidence that presenting information graphically makes a point forcefully enough that it would take too much mental energy to rationalize around it. You can read more about that here. This column in the Boston Globe refers to a study concluding – in the columnist’s words – that “people will actually update their beliefs if you hit them ‘between the eyes’ with bluntly presented, objective facts that contradict their preconceived ideas.” This strikes me as along the same lines as the graph experiment. So these are things to keep in mind as well.


Exposing sacred arguments

Moral psychologist Jonathan Haidt gave a talk in February arguing that the social psychology field was a “moral community” by virtue of its political liberalism, and that this was compromising its ability to do good science. I want to use one piece of his argument as a jumping-off point to discuss what I see as one of the biggest obstacles to productive public discussion. Haidt:

Sacredness is a central and subtle concept in sociology and anthropology, but we can get a simple working definition of it from Phil Tetlock [a social psychologist at the University of Pennsylvania]. Tetlock defines a sacred value as “any value that a moral community implicitly or explicitly treats as possessing infinite or transcendental significance …” If something is a sacred value, you can’t make utilitarian tradeoffs; you can’t think in a utilitarian way. You can’t sell a little piece of it for a lot of money, for example. Sacredness precludes tradeoffs. When sacred values are threatened, we turn into “intuitive theologians.” That is, we use our reasoning not to find the truth, but to find ways to defend what we hold sacred…

…Sacralizing distorts thinking. These distortions are easy for outsiders to see, but they are invisible to those inside the force field.

For the most part there’s nothing wrong with sacredness, per se. The problem arises when the sacred principle is challenged by someone outside the moral community. As Haidt notes, the result is that reasoning comes to the aid of justifying a principle, and that leads to sloppy arguments. If your commitment to the principle of nonviolence is challenged, for instance, you may start arguing about the ineffectiveness of military interventions. But what’s really driving that argument isn’t the facts; it’s the desire to defend a principle that in your moral vision really doesn’t even need defending. If you’re a pacifist, that’s fine. What’s not fine is marshalling weak arguments when a sacred view is challenged.

Now in practice my guess is that few things are held as entirely sacred, but many things are held nearly sacred. By that I mean that for most people, few beliefs are beyond any tradeoffs, but quite a few principles are sacred enough to require an exceptionally high bar be cleared before they’re willing to start trading them away.

There’s been some good back-and-forth in the libertarian blogosphere recently on the extent to which policy differences between liberals and libertarians are caused by different opinions on empirical matters, versus different values or principles. Ilya Somin at Volokh Conspiracy is thinking along the same lines as I am, writing:

Within political philosophy, many scholars are either pure utilitarian consequentialists (thinkers who believe that we are justified in doing whatever it takes to maximize happiness) or pure deontologists (people who argue that we must respect certain rights absolutely, regardless of consequences)… Outside philosophy departments, however, few people endorse either of these positions.

So sacredness in practice is probably a matter of degree. But that does nothing to detract from its importance in public debate. If someone is arguing in favor of a principle they hold sacred, I want to know. If you’ve written an op-ed detailing all the reasons military intervention in Libya would be ill-advised, the fact that you’re a pacifist – that nonviolence is a sacred principle for you – is extremely relevant.

I see the identification of sacredness as a crucial challenge in the public sphere, and therefore a crucial challenge within media. As I’ve mentioned before, there’s lots of talk about the importance of transparency in the brave new world of online media, and I’m in favor of that. But transparency means a lot of things (again, as I’ve discussed before). It’s easy to say “I’m a liberal, I generally favor x, y and z and am a fan of these thinkers or politicians. I voted for so-and-so for president.” That’s one kind of transparency. But it’s a very thin transparency. I’d love to see some media experiments that go further and try to identify sacred principles. Let’s play around with ways of telling me the author is a pacifist.
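As a thin starting point, imagine a structured author disclosure that records declared, near-sacred principles alongside the usual ideological labels, and surfaces the relevant ones next to each piece. The schema below is entirely hypothetical – eliciting the principles is the hard, human part; this only shows where they would live:

```typescript
// Hypothetical sketch: a structured author disclosure that records declared
// (near-)sacred principles, not just party or voting history.

interface SacredPrinciple {
  principle: string;      // e.g. "nonviolence"
  strength: number;       // 0..1 – how close to absolutely sacred it is held
  relevantTopics: string[];
}

interface AuthorDisclosure {
  name: string;
  politicalLean: string;  // the "thin" transparency we already have
  principles: SacredPrinciple[];
}

// Surface the author's sacred commitments that bear on a given article topic.
function relevantPrinciples(author: AuthorDisclosure, topic: string): SacredPrinciple[] {
  return author.principles.filter((p) => p.relevantTopics.includes(topic));
}

// Usage: an op-ed opposing intervention in Libya would surface the author's
// pacifism alongside the argument.
const author: AuthorDisclosure = {
  name: "Example Author",
  politicalLean: "liberal",
  principles: [
    { principle: "nonviolence", strength: 0.9, relevantTopics: ["war", "intervention"] },
  ],
};
console.log(relevantPrinciples(author, "intervention"));
```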

This is a hard problem because most of us have a rough time identifying what we consider sacred. As Haidt notes, it’s often obvious only to outsiders. And once degrees are thrown into the mix, things get even messier. In a way, the blogosphere offers a really rudimentary partial fix just by removing word and page limits. When there’s no limit on length, you can talk endlessly about the principles motivating the authors, as the libertarian discussion makes clear. But I think we can do better. I don’t have many good specifics on how just yet, but it’s something I think about. Ideas?