Initial thoughts on Eli Pariser

Eli Pariser, president of the board at MoveOn.org, has a new book out called The Filter Bubble, and based on his recent NYT op-ed and some interviews he’s done I’m extremely excited to read it. Pariser hits on one of my pet issues: the danger of Facebook, Google, etc. personalizing our news feeds in a way that limits our exposure to news and analysis that challenges us. (I’ve written about that here, here, and here.) In this interview with Mashable he even uses the same metaphor of feeding users their vegetables!

The Filter Bubble - Eli Pariser

So, thus far my opinion of Pariser’s work is very high. But what kind of blogger would I be if I didn’t quibble? So here goes…

From the Mashable interview (the question is Mashable’s; the answer is Pariser’s):

Isn’t seeking out a diversity of information a personal responsibility? And haven’t citizens always lived in bubbles of their own making by watching a single news network or subscribing to a single newspaper?

There are a few important ways that the new filtering regime differs from the old one. First, it’s invisible — most people aren’t aware that their Google search results, Yahoo News links, or Facebook feed is being tailored in this way.

When you turn on Fox News, you know what the editing rule is — what kind of information is likely to get through and what kind is likely to be left out. But you don’t know who Google thinks you are or on what basis it’s editing your results, and therefore you don’t know what you’re missing.

I’m just not sure that this is true. I completely recognize the importance of algorithmic transparency, given the terrific power these algorithms have over our lives. But it’s not obvious to me that we’re living in a less transparent world. Do we really know more about how Fox’s process works than we do about how Google’s does? It seems to me that in each case we have a rough sketch of the primary factors that drive decisions, but in neither do we have perfect information.

But to me there is an important difference: Google knows how its own process works better than Fox knows how its process works. Such is the nature of algorithmic decision-making. At least to the people who can see the algorithm, it’s quite easy to tell how the filter works. This seems fundamentally different from the Fox newsroom, where even those involved probably have imperfect knowledge of the filtering process.
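To make that concrete, here is a toy sketch of what an explicit personalization filter looks like in code. It is purely illustrative, not Google’s or Facebook’s actual algorithm; the fields, weights, and profile format are all invented. The point is simply that the “editing rule” is written down, and anyone who can read the code can see exactly what gets through and why.

```python
from dataclasses import dataclass

@dataclass
class Story:
    topic: str
    source: str

# NOTE: an invented toy model, not any real company's ranking code.
def score(story: Story, profile: dict) -> float:
    """Score a story by how well it matches a user's inferred interests."""
    topic_affinity = profile.get("topics", {}).get(story.topic, 0.0)
    source_affinity = profile.get("sources", {}).get(story.source, 0.0)
    return 0.7 * topic_affinity + 0.3 * source_affinity  # made-up weights

def personalize(stories: list[Story], profile: dict, n: int = 5) -> list[Story]:
    """Return the n highest-scoring stories; everything else is filtered out."""
    return sorted(stories, key=lambda s: score(s, profile), reverse=True)[:n]

# A user whose clicks suggest a heavy interest in tech will simply never
# see the lower-scoring politics story in their top results.
profile = {"topics": {"tech": 0.9, "politics": 0.1}, "sources": {"wired": 0.8}}
stories = [Story("tech", "wired"), Story("politics", "nyt"), Story("tech", "ars")]
print(personalize(stories, profile, n=2))
```

A real system obviously has vastly more signals and machine-learned weights, but the asymmetry holds: the filter is perfectly legible to whoever owns it, even if it stays opaque to the rest of us.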

Life offline might feel transparent, but I’m not sure it is. Back in November I wrote a post responding to The Atlantic’s Alexis Madrigal and a piece he’d written on algorithms and online dating. Here was my argument then:

Madrigal points out that dating algorithms are 1) not transparent and 2) can accelerate disturbing social phenomena, like racial inequity.

True enough, but is this any different from offline dating?  The social phenomena in question are presumably the result of the state of the offline world, so the issue then is primarily transparency.

Does offline dating foster transparency in a way online dating does not?  I’m not sure.  Think about the circumstances by which you might meet someone offline.  Perhaps a friend’s party.  How much information do you really have about the people you’re seeing?  You know a little, certainly.  Presumably they are all connected to the host in some way.  But beyond that, it’s not clear that you know much more than you do when you fire up OkCupid.  On what basis were they invited to the party?  Did the host consciously invite certain groups of friends and not others, based on who he or she thought would get along together?

Is it at least possible that, given the complexity of life, we are no more aware of the real-world “algorithms” that shape our lives?

So to conclude… I’m totally sympathetic to Pariser’s focus and can’t wait to read his book. I completely agree that we need to push for greater transparency with regard to the code and the algorithms that increasingly shape our lives. But I hesitate to call a secret algorithm less transparent than the offline world, simply because I’m not convinced anyone really understands how our offline filters work either.


Explainers and Transparency

Since the recent unrest began in the Middle East, Mother Jones has gotten attention for its invaluable explainer posts, like this one on Egypt. These posts do more than report on events. They begin by asking and briefly answering questions like “How did this all start?” and “Why Are Egyptians Unhappy?” It’s a deceptively simple format, but the posts go a long way toward providing some basic context prior to reporting what’s new. The Wall Street Journal has a similar feature today on “How Nuclear Reactors work… And the Dangers When They Don’t”.

There’s a lot of room to experiment with this sort of explainer feature. (Jay Rosen at NYU is leading a project that explores the issue in depth.) But context is arguably trickier than news reporting when it comes to providing some level of “objectivity.” There are often multiple accounts of what happened, and even more of why it happened.

So explainers will need to think extra carefully about how to update “objectivity” – a thorny subject under the best circumstances – to fit these features.

The excellent analysis offered by Mother Jones and the WSJ reminded me of a post I wrote about back in August: the magazine-reporter ethos. The original post is by Jim Henley, and here are his key points:

* original reporting on first-hand sources
* a frankly stated point-of-view
* tempered by a scrupulous concern for fact
* an effort to include a fair account of differing perspectives
* ending in a willingness to plainly state conclusions about the subject

It’s relatively easy to come up with rough guidelines like these, or a set like the following:

1. Keep an open mind
2. Ask the right questions
3. Cross-check
4. Consider the source
5. Weigh the evidence

And explainers would do well to incorporate these guidelines into their efforts. But I’d argue they need to go even further, beyond rough guidelines, and develop more detailed rules and descriptions of their process. There are lots of advocates of transparency in future-of-news circles, often as a substitute for “objectivity.” But too often those calling for transparency focus more on explaining the writer’s perspective – I’m liberal, this is my worldview, etc. – and less on transparency of process – we consider x to be a more reliable source than y and shaped our analysis accordingly, etc. Let’s see more process transparency. And keep up the great explainer experiments.


On Wikileaks

I’ve held off posting anything about Wikileaks, as the subject’s complexity is a little daunting.  I still don’t have any polished thoughts, but I’ll offer a few unpolished ones alongside some reading recommendations:

If you haven’t been following this story, check out The Beginner’s Guide to Wikileaks at The Atlantic.

One of the basic aspects of the story I was missing at first was the extent to which Wikileaks worked with media organizations and even governments to redact the documents and decide what to publish.  To learn more about that, read Glenn Greenwald here.

The one question that consistently hurt my ability to think clearly about this story was “Is Wikileaks good or bad?” or, put another way, “Should these cables have been published?” So my advice is to put that aside for now and focus on a few other interesting aspects of the story, like…

Is Wikileaks a new kind of media organization or a new kind of source? The New York Times treats it as “a source, not a partner”, according to NYT executive editor Bill Keller.  An excellent summary of his comments on Wikileaks is available at the Nieman Lab.  For a different perspective try Matthew Ingram of GigaOM arguing “Like It or Not, WikiLeaks is a Media Organization”. NYT’s David Carr has thoughts here.

Another interesting line of inquiry looks at how governments can exert indirect control over organizations like Wikileaks in cases where they lack the ability to exert direct control.  Henry Farrell at Crooked Timber has a good post on this topic.

Another thing I’ve been pondering is what predispositions predict one’s opinion of Wikileaks.  This post by Tom Slee, which I found via both Clay Shirky and Crooked Timber, puts it this way:

Your answer to “what data should the government make public?” depends not so much on what you think about data, but what you think about the government.

I think he’s only partly right.  What you think about government matters tremendously.  But I wouldn’t downplay data.  I’m finding, in reading and conversations, that your answer also hinges on what you think about technology.  All else equal, if you’re bullish about technology’s prospects for improving the world, you’re more likely to approve of Wikileaks’ data dump.  Ditto if you’re already sympathetic to hacker culture.  Or if you generally view increased access to information as crucial to improving society.

Put these two items – thoughts on government and thoughts on technology – together and I think it explains much of the disconnect between the standard Washingtonian’s view of Wikileaks and the standard geek view.  The latter is dominated by a combination of liberals and libertarians, both of which are likely to harbor deep suspicions about the government’s handling of international affairs.  Add to that a predisposition towards technology – contrasted with a view where tech can cause as many problems as it solves – and a true disconnect is revealed.  For these two reasons, the geek world has a much stronger bias towards transparency than the beltway insider.

I don’t mean “bias” in a pejorative way, and certainly don’t mean to suggest that one or the other view is closer to being right.  My own sympathies in this case are all over the map.  But I’d love to test my theory.  How much power would questions about Iraq and waterboarding have in predicting sympathy to Wikileaks? I imagine quite a lot.  But what about one’s reaction to a statement like “information wants to be free”? I’d bet that has some predictive power as well.
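For what it’s worth, here is a rough sketch of how that test might look. Everything in it is hypothetical: the survey file, the column names, and the coding scheme are placeholders I made up. The idea is simply to fit a basic model predicting sympathy toward Wikileaks from answers to a handful of attitude questions and see how much predictive power they carry.

```python
# Sketch only: the survey file, column names, and coding below are
# hypothetical placeholders, not real data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical survey: one row per respondent, attitude answers coded 1-5
# (strongly disagree .. strongly agree), plus a 0/1 sympathy flag.
survey = pd.read_csv("wikileaks_survey.csv")  # placeholder file name

predictors = [
    "iraq_war_was_a_mistake",        # attitudes toward government conduct
    "waterboarding_is_torture",
    "information_wants_to_be_free",  # attitudes toward technology/openness
    "tech_improves_the_world",
]
X = survey[predictors]
y = survey["sympathetic_to_wikileaks"]

# Cross-validated accuracy gives a crude sense of how much predictive
# power these attitude questions carry.
model = LogisticRegression()
print(cross_val_score(model, X, y, cv=5).mean())

# Coefficients (log-odds) hint at which attitudes matter most.
model.fit(X, y)
for name, coef in zip(predictors, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

If both the government-conduct questions and the technology questions carried weight on their own, that would square with the two-part story above.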

In closing, in place of any master synthesis or confident opinion, I’ll simply link to Clay Shirky’s post on the topic, which I think lays out the issues nicely.

For more reading, The Atlantic has a terrific roundup of reactions here.  My Delicious links on Wikileaks are here.