Feb 27, 2011
 

Lots of digital ink has been spilled on the trustworthiness of Wikipedia, and the circumstances in which it’s appropriate to use it as a source. Much more interesting, in my view, is the opposite question: what sources does Wikipedia trust? In our age of Truthiness, sorting good information from bad may be more critical than ever. It’s for that reason that fact-checkers seem to be making a comeback. So how, exactly, does Wikipedia manage that sorting process? Do they differentiate between The New York Times and the National Review? Does the Congressional Budget Office count as more reliable than the Heritage Foundation? Than the Brookings Institution?

To try and find out, I visited Wikipedia’s Identifying Reliable Sources page. And while it didn’t answer many of my questions, I gleaned several interesting nuggets about Wikipedians’ idea of reliability. For instance:

  • “In general, the more people engaged in checking facts, analyzing legal issues, and scrutinizing the writing, the more reliable the publication.”
  • “When available, academic and peer-reviewed publications, scholarly monographs, and textbooks are usually the most reliable sources.”
  • “Mainstream news sources are generally considered to be reliable. However, even the most reputable news outlets occasionally contain errors. Whether a specific news story is reliable for a specific fact or statement in a Wikipedia article is something that must be assessed on a case by case basis. When using news sources, care should be taken to distinguish opinion columns from news reporting.”
  • “The statement that all or most scientists or scholars hold a certain view requires reliable sourcing that directly says that all or most scientists or scholars hold that view. Otherwise, individual opinions should be identified as those of particular, named sources… Stated simply, any statement in Wikipedia that academic consensus exists on a topic must be sourced rather than being based on the opinion or assessment of editors.”
  • “Anyone can create a website or pay to have a book published, then claim to be an expert in a certain field. For that reason self-published media—whether books, newsletters, personal websites, open wikis, blogs, personal pages on social networking sites, Internet forum postings, or tweets—are largely not acceptable. This includes any website whose content is largely user-generated, including the Internet Movie Database, Cracked.com, CBDB.com, and so forth, with the exception of material on such sites that is labeled as originating from credentialed members of the sites’ editorial staff, rather than users.”

More than anything, I was struck by how conservative these guidelines are. Wikipedia, being a user-generated project, wouldn't even trust itself as a source. On the one hand, that should put many of its more traditionally minded critics at ease. On the other, it offers few new ideas about reliability.

Wikipedia’s reliability guidelines raise as many questions as they answer. But I think it’s important, as we all struggle to determine what’s reliable and what’s not, to look to innovative and successful collaborative projects like Wikipedia for guidance. There may not be much new there, but it’s not a bad starting point for discussion.

One point of interest to me: not only do Wikipedians maintain significant skepticism towards the press, they specifically don’t go in for the classic reporter’s line “many economists think”.

Bonus: PolitiFact has a post outlining their fact-checking system here. I wish they had gone further in explaining how they assess the reliability of sources. Perhaps I'll ask them more about it.

Feb 22, 2011
 

Photographer John Harrington has a post criticizing Lawrence Lessig’s approach to copyright and urging photographers not to give up their rights, no matter what Lessig and others might recommend. The post represents a dangerous approach to intellectual property firmly at odds with the Constitution.

The Law is only The Law until we change it

“Call me a killjoy,” writes Harrington, “but stealing music is stealing from artists. Period.” But why? Presumably because he believes the appropriation he’s describing is against the law. But why is it against the law? What is the purpose of copyright? I’m glad I asked…

To promote the Progress of Science and useful Arts

The Constitution states clearly what the purpose of copyright is, and it's not to infinitely preserve the natural rights of artists. Its aim is to further progress. Which means we have to ask: is copyright achieving that aim? Are terms too long or too short, by that criterion? Lessig and others have argued persuasively that they are far, far too long. I'm open to debating that as an empirical matter, so long as we agree upon the basic purpose of the law.

Nothing is derivative of nothing

My fear is that Harrington would object to the very criterion by which the Constitution insists we judge copyright. My fear is that he would claim artists simply have certain natural rights to their work, and that regardless of what the law says, stealing is stealing. I could go on for a while about why I find this misguided, but it's hard to have a productive argument about a philosophical tenet. Here's just one reason: despite what Cadillac would have you believe, nothing is derivative of nothing. Ideas, art, and culture all build on what has come before. When thinking about intellectual property, we often focus on output without considering input. (Put another way, everything is a remix, as the video below argues.)

Against artistic rent-seeking

Artists and content creators will always be tempted to favor strong intellectual property regimes. But self-interest is no basis for a moral theory (sorry Ayn). As Lessig has also argued, control over intellectual property has never been easier to exert. Technological limits to near-perfect control are evaporating. If content creators give in to temptation and insist on advocating for an intellectual property regime justified by a conception of natural rights – despite the Constitution’s clear position to the contrary – our culture will suffer. The line between stealing and remixing needs to be considered on the basis of maximizing progress, not just protecting property.

Feb 15, 2011
 

Today I downloaded Tyler Cowen's new e-book The Great Stagnation for $4, along with Amazon's Kindle app for Android. It's got both the publishing and economics blogospheres all aflutter, so I'm looking forward to the read. But what kind of blogger would I be if I didn't comment first, read later?

Cowen had an op-ed a few weeks back in The New York Times that laid out his basic claim. Here’s one bit:

The numbers suggest that for almost 40 years, we’ve had near-universal dissemination of the major innovations stemming from the Industrial Revolution, many of which combined efficient machines with potent fossil fuels. Today, no huge improvement for the automobile or airplane is in sight, and the major struggle is to limit their pollution, not to vastly improve their capabilities.

Although America produces plenty of innovations, most are not geared toward significantly raising the average standard of living…

…Will the Internet usher in a new economic growth explosion? Quite possibly, but it hasn’t delivered very good macroeconomic performance over the last decade.

Cowen is brilliant, and I’m looking forward to the longer version of his argument, but I want to consider a more optimistic take on our economic moment by another brilliant economist, Paul Romer. What follows is from Romer, interviewed by libertarians Arnold Kling and Nick Schulz in their book From Poverty to Prosperity (which I can testify is valuable and interesting even to non-libertarians).

Romer:

…this may be the most important question in human history: why have we had technological change and why has it been speeding up over time? …

…it may be inherent in the process of discovery that the more we learn the faster we can learn. It’s a notion that was captured by Newton when he said that he could see farther because he stood on the shoulders of giants. That was the first model that I tried for the speeding up phenomenon, that the more we learn the more we can learn…

…So it’s that kind of analysis, thinking of ideas as recipes – really, instructions for combining together small numbers of physical objects – that persuades, I think, anybody who works through the logic that the number of things we could have even tried up to this point in time is so small compared with the number of things that are possible, that we’re just extremely early in this discovery process. For as far as you want to project into the future of humans, we won’t run out of new things to discover. And as I conjectured in the beginning, it may even be that the more we learn about this process – the science of DNA, the science of materials, or our understanding of quantum mechanics – the more we learn about this stuff, the better we get at finding new, ever more valuable mixtures…

…The second-round answer, which I think may actually capture more of the truth, is that it may get easier to discover as you learn more things, or it may not, but what we’ve done is created better institutions over time so that we now exploit the opportunities much more effectively than we used to.

I believe Cowen wants to challenge this both on whether technological change has in fact been speeding up, and whether the technological changes we’ve seen recently are the same kind of economic game-changers we saw in the 20th century. I don’t know what role he would attribute to institutions. Which model is more accurate? I have no answer to that. But the Romer interview at least offers an alternative conceptual lens to Cowen’s. (You can read the full interview with Romer here.)
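Romer's claim that we are "just extremely early in this discovery process" is easy to make concrete with back-of-the-envelope arithmetic. The card-deck comparison below is my own illustration, not Romer's, but it captures the combinatorial point: even a trivially small set of ingredients admits more arrangements than anyone could ever try.

# A toy illustration of the combinatorial point: a mere 52 distinct
# "ingredients" (a deck of cards) can be ordered in more ways than
# anyone could sample in the lifetime of the universe.
import math

orderings = math.factorial(52)          # about 8.1 x 10^67 distinct shuffles
seconds_since_big_bang = 4.4e17         # rough estimate (~13.8 billion years)

print(f"Orderings of a 52-card deck: {orderings:.2e}")
print(f"Seconds since the Big Bang:  {seconds_since_big_bang:.1e}")
print(f"Shuffles per second needed to try them all: "
      f"{orderings / seconds_since_big_bang:.2e}")

Economically useful recipes draw on far more than 52 elements, which is why the space of untried combinations dwarfs everything we've explored so far.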

Cowen also argues that America has eaten up a lot of the “low-hanging fruit”, in education for instance, as described in the clip below:

It’s an interesting point. But once again for a more optimistic take I’ll point to Romer:

We’re going through this shift in the economy where the fraction of human effort that goes into actually physically rearranging things – bending metal, doing manufacturing, and so on – is going down over time and the fraction that’s going into discovering the right formula or recipe is going up over time. And that’s a really good thing, because all of the value really comes from finding the new recipes. If you picture the innovative activity of one hundred years ago and you think of it as U.S. Steel, most of the workers there were involved in literally bending or melting metal – doing the physical rearrangement – and a relatively small number of people were coming up with, say, better ways to extract iron from ore…

…So we’re going through this transformation where a larger and larger fraction of the labor force is engaged in problem-solving, sifting through possible ideas, and a relatively small fraction of people actually stamps out the pills or bends the metal.

(That bit, combined with some of his other comments, offers an interesting take on Julian Simon's work on population and innovation, partly approving and partly critical.)

So in education the question may be whether we can innovate fast enough to outpace declining marginal student aptitude. Romer's model suggests that as more minds spend time thinking about how we educate, rather than grading tests and passing out papers, innovation could speed up. Cowen's suggests that each student we attempt to get through school is likely to be harder to educate than the last. Will more minds working on better educational recipes mean better education, and therefore more minds devoted to finding other useful recipes? Or have we already eaten all the low-hanging fruit?

Perhaps I’ll have more to say after I finish Cowen’s book.

Feb 6, 2011
 

Last year The Atlantic Wire ran a terrific series in which it asked various thinkers and bloggers to write up a description of “What I Read.” From Clay Shirky to David Brooks, every response was interesting in its own way. Go read them all.

I’ve been meaning for a while to write my own answer, and my media diet has shifted a bit lately, so now is as good a time as any. You’ll notice I titled this post “How” rather than “What”. My aim is to focus more on the technologies and strategies than on the content (though there will be plenty of that baked in). That way, you can take any suggestions you like and plug in your own content based on your own interests.

So here’s how my information diet works…

NPR Hourly News Summary

As I leave my house for work every morning, I plug headphones into my Android phone, open the (free) NPR app, and tap "Hourly News Summary". This gives me a five-minute rundown of the biggest news items and corresponds perfectly with the length of my walk to the subway (up here in Boston, we call it the T).
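For anyone without the app, the same hourly newscast is also distributed as a podcast feed, so you can grab it yourself. A rough Python sketch follows; the feed URL is my guess at the current address, and the third-party feedparser and requests libraries are assumed to be installed.

# Sketch: save the latest NPR hourly newscast for offline listening.
# The feed URL below is an assumption -- check npr.org for the current one.
import feedparser
import requests

FEED_URL = "https://feeds.npr.org/500005/podcast.xml"  # assumed NPR newscast feed

feed = feedparser.parse(FEED_URL)
latest = feed.entries[0]

# Podcast entries carry the audio file as an enclosure.
audio_url = latest.enclosures[0].href

audio = requests.get(audio_url, timeout=30)
with open("npr_hourly_summary.mp3", "wb") as f:
    f.write(audio.content)

print(f"Saved: {latest.title}")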

Morning Email Newsletters

I have about a 20-25 minute subway trip to work, and here’s why that’s tricky: I’m underground almost the whole way, without 3G. So I need ways to read that don’t require my mobile browser.

For that reason, I subscribe to a couple of daily newsletters. Normally, I HATE mixing my info diet with my inbox, but I make a couple of exceptions for the sake of my commute.

I start by reading Ezra Klein’s Wonkbook, a morning roundup of public policy-related news and analysis. I just added The Boston Globe’s top headlines email, since everyone I work with will have seen the Globe’s front page by the time I get to the office. I’m considering adding Politico’s Morning Energy to my inbox, since I work in clean energy; right now it’s in my RSS reader.
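Since the whole point is having these readable underground, one way to guarantee the newsletters are downloaded before I leave the house would be to pull them over IMAP. Here's a minimal sketch using Python's standard imaplib; the server, account, and sender search string are all placeholders, not my actual setup.

# Sketch: fetch this morning's unread newsletter emails so they're cached
# locally before the commute. Server, credentials, and sender are placeholders.
import email
import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("me@example.com", "app-password")
conn.select("INBOX")

# Find unread messages from the newsletter sender (e.g. Wonkbook).
_, data = conn.search(None, '(UNSEEN FROM "wonkbook")')
for num in data[0].split():
    _, msg_data = conn.fetch(num, "(RFC822)")
    message = email.message_from_bytes(msg_data[0][1])
    print(message["Subject"])

conn.logout()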

Once I get through the newsletters, I usually go to Everpaper on my phone. More on that later.

Twitter

As soon as I get to work, I fire up both Twitter and Google Reader. For a lot of people, Twitter is their primary news source. I'm not there yet. I follow maybe 150 people through two different accounts, and have them broken into lists like "Friends", "Professional Contacts", and "Influencers." I follow these lists with TweetDeck.

I use Twitter to share lots of what I read, to comment on stories, and inevitably to find links. Who I follow on Twitter is biased towards who I actually interact and converse with, like David Roberts at Grist or Nick Jackson at The Atlantic Tech.

I also occasionally use Twitter to try out new bloggers or news sources, to see if I want to make them a staple of my reading. Which gets me to a major point: I try not to ever rely on Twitter for my reading. I’ll explain what I mean in the next section.

Google Reader

I'm a big RSS addict, and Google Reader is still far and away the backbone of my reading. But even I spend less time in it than I used to, due to Twitter. So while I don't necessarily read every item in my Reader every day, I think of it as my backstop at this point. If I'm busy at work, I may miss most of what's being shared on Twitter; my RSS reader is where I go to catch up on whatever I may have missed.

At this point I’m picky about the feeds I subscribe to for that reason. My RSS Reader is my “bare minimum” reading, the stuff I’ll make sure to get to at least once a week, whether or not I have time to take in all the stuff that passes through my Twitter feed.

If you do care about the "What" in addition to the "How", here are my feeds by topic area: General, Politics/Policy, Internet/Media/Tech, New York Times, and Climate/Energy (that last one is way more than bare minimum, since I rely on it for work). I also subscribe to some Boston-specific news.
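If you want to replicate the "backstop" idea programmatically, here's a minimal sketch of a topic-grouped feed check in Python using the feedparser library. The topic names mirror my categories above, but the feed URLs are placeholders, not my actual subscriptions.

# Minimal sketch of an RSS "backstop": group feeds by topic and list the
# most recent items in each. Requires: pip install feedparser
# The URLs below are placeholders -- substitute your own subscriptions.
import feedparser

FEEDS_BY_TOPIC = {
    "Politics/Policy": ["https://example.com/policy/feed"],
    "Internet/Media/Tech": ["https://example.com/tech/feed"],
    "Climate/Energy": ["https://example.com/energy/feed"],
}

for topic, urls in FEEDS_BY_TOPIC.items():
    print(f"== {topic} ==")
    for url in urls:
        parsed = feedparser.parse(url)
        # Show the five most recent items; a real backstop would also track
        # which items have already been read.
        for entry in parsed.entries[:5]:
            print(f"  {entry.get('title', '(untitled)')} -> {entry.get('link', '')}")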

Instapaper

With no 3G on the subway, Instapaper is a lifesaver. (Actually, since I have an Android not an iPhone, I use Instapaper+Everpaper, which works well.) I come across a lot of articles throughout the day via Twitter, RSS or email that are too long to read on the spot, so I add them to Instapaper, using a plugin for my browser. Instapaper saves all the text from the article and then Everpaper downloads the articles directly to my phone. So on the way home from work, I usually dig into a magazine-length piece I found earlier in the week.
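If you'd rather script the saving step than rely on a browser plugin, Instapaper also exposes a simple "add" endpoint. The sketch below reflects my understanding of that endpoint; verify the details against Instapaper's own API documentation before depending on it.

# Sketch: push a URL into an Instapaper account via its Simple API.
# Endpoint and parameters are as I understand them -- check Instapaper's
# API documentation before relying on this. Requires: pip install requests
import requests

def save_to_instapaper(username, password, url, title=None):
    """Add an article to Instapaper; returns True on success."""
    payload = {"username": username, "password": password, "url": url}
    if title:
        payload["title"] = title
    response = requests.post(
        "https://www.instapaper.com/api/add", data=payload, timeout=15
    )
    # The Simple API signals success with HTTP 201 Created.
    return response.status_code == 201

# Example usage (credentials and URL are placeholders):
# save_to_instapaper("me@example.com", "secret", "https://example.com/long-read")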

Also…

I’d be remiss if I declined to mention that I find a lot of great stuff via friends on email and Facebook. Though, truth be told, that’s secondary to my evolving diet of RSS and Twitter feeds. Also, I swear I do read books, although almost exclusively nonfiction. If I’m in a good book, I usually read that on my way home from work, instead of a magazine article on my phone.

How do you read?

I really do recommend reading The Atlantic Wire series. But I also recommend sharing your own methods. How do you manage your intake of information? I’m always adjusting my process (podcasts and email newsletters are both relatively new for me) and I’d love to hear how others read. Feel free to share tips in the comments, or if you write your own post, leave me a link to it. Or let me know on Twitter (@wfrick).
