Partisan Fact-checkers: Motivated But Counterproductive

This past week I got a pitch in my inbox about a group that would be fact-checking half of the Scott Brown / Elizabeth Warren Senate debate. And by half I mean the Scott Brown half, because the organization pitching me was a progressive advocacy group. Of course, when I tweeted that partisans are the wrong group to be doing the fact-checking, a friend replied with the hashtag #NoShit, but hey, I never claimed to be insightful. Just correct.

It’d be nice if partisans worked as fact-checkers because, after all, they’re quite motivated. Yes, they’ll only check the opposing side, but get enough partisans from enough sides and the whole thing might work. Except it doesn’t. And not just because partisan fact-checkers bend the truth, although that happens too. Even a partisan fact-checker like Media Matters, which has built up some reputation for accuracy and is forthcoming about the selective nature of what it checks, ultimately can’t replace genuine nonpartisan, independent fact-checking.

A New York Times op-ed by Cass Sunstein this past week explained why: the speaker matters.

People tend to dismiss information that would falsify their convictions. But they may reconsider if the information comes from a source they cannot dismiss. People are most likely to find a source credible if they closely identify with it or begin in essential agreement with it. In such cases, their reaction is not, “how predictable and uninformative that someone like that would think something so evil and foolish,” but instead, “if someone like that disagrees with me, maybe I had better rethink.”

What does this mean for fact-checkers? I’m not sure. But what if FactCheck.org and PolitiFact signed on figures from the left and the right who would be alerted when their own party’s candidate had uttered a false or misleading statement, and urged to share the correction with their networks? Could that make a dent?
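Thinking about what such a system would actually need makes the idea more concrete. Here’s a minimal sketch in Python; everything in it is hypothetical (the Subscriber and FactCheck types and the routing rule are my own invention, not anything FactCheck.org or PolitiFact has built). The one interesting design choice is the filter: following Sunstein, the alert goes to the speaker’s own side, not to the opposition, which would happily amplify it anyway.

```python
from dataclasses import dataclass

# Everything here is hypothetical: a sketch of the alert idea,
# not anything FactCheck.org or PolitiFact has actually built.

@dataclass
class Subscriber:
    name: str
    party: str            # e.g. "D" or "R"
    network_size: int     # rough reach, for prioritizing outreach

@dataclass
class FactCheck:
    speaker: str
    speaker_party: str
    claim: str
    verdict: str          # e.g. "false" or "misleading"

def route_alert(check: FactCheck, subscribers: list[Subscriber]) -> list[Subscriber]:
    """Alert only co-partisans of the speaker: per Sunstein, a correction
    is most persuasive coming from a source the audience identifies with."""
    if check.verdict not in ("false", "misleading"):
        return []
    same_side = [s for s in subscribers if s.party == check.speaker_party]
    # Reach the largest networks first.
    return sorted(same_side, key=lambda s: s.network_size, reverse=True)
```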

Who knows. But we do know that partisans stepping into the fact-checking arena are no substitute for the real thing.

Me at Nieman Lab: Hacking Consensus

For the past few years I’ve bent quite a few ears about how much better arguments could be online. The earliest of these ear-bendings (that I can remember) was in Q1 of 2008. Since then I’ve talked to policy wonks, developers, journalists, and plenty of friends and family about how I think the basic op-ed model should be improved. Four years after that first conversation, I finally have an essay on the topic that I feel comfortable standing behind.

My piece is up at Harvard’s Nieman Journalism Lab and I hope you’ll give it a read. I’d appreciate any feedback you have. Here’s the intro:

In a recent New York Times column, Paul Krugman argued that we should impose a tax on financial transactions, citing the need to reduce budget deficits, the dubious value of much financial trading, and the literature on economic growth. So should we? Assuming for a moment that you’re not deeply versed in financial economics, on what basis can you evaluate this argument? You can ask yourself whether you trust Krugman. Perhaps you can call to mind other articles you’ve seen that mentioned the need to cut the deficit or questioned the value of Wall Street trading. But without independent knowledge — and with no external links — evaluating the strength of Krugman’s argument is quite difficult.

It doesn’t have to be. The Internet makes it possible for readers to research what they read more easily than ever before, provided they have both the time and the ability to filter reliable sources from unreliable ones. But why not make it even easier for them? By re-imagining the way arguments are presented, journalism can provide content that is dramatically more useful than the standard op-ed, or even than the various “debate” formats employed at places like the Times or The Economist.

To do so, publishers should experiment in three directions: acknowledging the structure of the argument in the presentation of the content; aggregating evidence for and against each claim; and providing a credible assessment of each claim’s reliability. If all this sounds elaborate, bear in mind that each of these steps is already being taken by a variety of entrepreneurial organizations and individuals.

Please read the rest!
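For the curious, the essay’s three directions can be read as a rough data model. Below is a minimal sketch of what I have in mind; the class names, fields, and the example citation are my own illustration, not a spec from the essay or from any existing system.

```python
from dataclasses import dataclass, field

# A toy data model for a "structured argument": my own illustration
# of the essay's three directions, not a spec from any existing system.

@dataclass
class Evidence:
    source: str    # a citation or URL
    supports: bool # True = evidence for the claim, False = against

@dataclass
class Claim:
    text: str
    evidence: list[Evidence] = field(default_factory=list)
    reliability: str = "unassessed"  # e.g. "well supported", "disputed"
    subclaims: list["Claim"] = field(default_factory=list)

# Direction 1: make the argument's structure explicit.
tax_argument = Claim(
    text="We should impose a tax on financial transactions.",
    subclaims=[
        Claim(text="Budget deficits need to be reduced."),
        Claim(text="Much financial trading is of dubious value."),
    ],
)

# Direction 2: aggregate evidence for and against each claim.
tax_argument.subclaims[0].evidence.append(
    Evidence(source="https://example.org/deficit-analysis", supports=True)
)

# Direction 3: attach a credible assessment of each claim's reliability.
tax_argument.subclaims[0].reliability = "disputed"
```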

Who Wikipedia Trusts

Lots of digital ink has been spilled on the trustworthiness of Wikipedia, and the circumstances in which it’s appropriate to use it as a source. Much more interesting, in my view, is the opposite question: which sources does Wikipedia trust? In our age of Truthiness, sorting good information from bad may be more critical than ever. It’s for that reason that fact-checkers seem to be making a comeback. So how, exactly, does Wikipedia manage that sorting process? Do they differentiate between The New York Times and National Review? Does the Congressional Budget Office count as more reliable than the Heritage Foundation? Than the Brookings Institution?

To try to find out, I visited Wikipedia’s Identifying Reliable Sources page. And while it didn’t answer many of my questions, I gleaned several interesting nuggets about Wikipedians’ idea of reliability. For instance:

  • “In general, the more people engaged in checking facts, analyzing legal issues, and scrutinizing the writing, the more reliable the publication.”
  • “When available, academic and peer-reviewed publications, scholarly monographs, and textbooks are usually the most reliable sources.”
  • “Mainstream news sources are generally considered to be reliable. However, even the most reputable news outlets occasionally contain errors. Whether a specific news story is reliable for a specific fact or statement in a Wikipedia article is something that must be assessed on a case by case basis. When using news sources, care should be taken to distinguish opinion columns from news reporting.”
  • “The statement that all or most scientists or scholars hold a certain view requires reliable sourcing that directly says that all or most scientists or scholars hold that view. Otherwise, individual opinions should be identified as those of particular, named sources… Stated simply, any statement in Wikipedia that academic consensus exists on a topic must be sourced rather than being based on the opinion or assessment of editors.”
  • “Anyone can create a website or pay to have a book published, then claim to be an expert in a certain field. For that reason self-published media—whether books, newsletters, personal websites, open wikis, blogs, personal pages on social networking sites, Internet forum postings, or tweets—are largely not acceptable. This includes any website whose content is largely user-generated, including the Internet Movie Database, Cracked.com, CBDB.com, and so forth, with the exception of material on such sites that is labeled as originating from credentialed members of the sites’ editorial staff, rather than users.”

More than anything, I was struck by how conservative these guidelines are. Wikipedia wouldn’t trust itself as a source, for instance, since it’s a user-generated project. On the one hand, that should put many of its more traditionally minded critics at ease. On the other, it offers few new ideas about reliability.
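In fact, read as rules, the guidelines mostly reduce to a crude hierarchy of source types. A toy encoding (my own reading, not anything the Reliable Sources page publishes) makes the point:

```python
# A toy encoding of the guidelines above: my own reading,
# not anything Wikipedia's Reliable Sources page publishes.

TIERS = {
    "peer_reviewed": 3,    # academic publications, scholarly monographs, textbooks
    "mainstream_news": 2,  # generally reliable, but case-by-case for specific facts
    "opinion_column": 1,   # to be distinguished from news reporting
    "self_published": 0,   # blogs, forums, tweets, open wikis: largely not acceptable
}

def reliability_tier(source_type: str) -> int:
    """Rank a source by its type alone. Note what's missing: nothing here
    distinguishes The New York Times from National Review, or the CBO from
    the Heritage Foundation. That's exactly the question the guidelines
    leave open."""
    return TIERS.get(source_type, 0)
```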

Wikipedia’s reliability guidelines raise as many questions as they answer. But I think it’s important, as we all struggle to determine what’s reliable and what’s not, to look to innovative and successful collaborative projects like Wikipedia for guidance. There may not be much new there, but it’s not a bad starting point for discussion.

One point of interest to me: not only do Wikipedians maintain significant skepticism towards the press, they specifically don’t go in for the classic reporter’s line “many economists think”.

Bonus: PolitiFact has a post outlining their fact-checking system here. I wish they had gone further in explaining how they deal with the reliability of sources. Perhaps I’ll ask them more about it.