This is a great scene. Not only is it a funny combination, but if you’re like me you may actually be a bit moved in favor of Timeline. But would Don really favor it? Or would he agree with this guy that forgetting is important? It is, after all, what his life is built around. For now, I’ll stick with my previous take: the important thing is that we get the filters right.
Jonathan Rauch seems to willfully ignore how easy it is to create good filters that locate excellent content:
3) If we had but world enough and time (that’s poetry, btw), we could search for good stuff all day long and the average low quality of the blogosphere might not matter. But average people on average time-budgets have to care if average quality drops, because that’s what they’re dealing with on an average day. (The same will be true, by the way, of historians. Can you imagine the task they’ll face, wading through all that online dreck?)
This is very clearly how he’d prefer things to be. Because if he’s right, things might just go back to the way they were before… or something.
Now I do believe it’s important to think both about the web’s potential uses and about how it’s actually used. So if the argument is that people aren’t using good filters, that’s an argument I actually respect. But the idea that it just takes too long to figure out good filters? Patently absurd. You could set yourself up with a decent filter in five minutes with Twitter or an RSS reader. Perfecting the filter? Sure, that takes time. But that’s a whole other ball of wax.
Eli Pariser, president of the board at MoveOn.org, has a new book out called The Filter Bubble, and based on his recent NYT op-ed and some interviews he’s done I’m extremely excited to read it. Pariser hits on one of my pet issues: the danger of Facebook, Google, etc. personalizing our news feeds in a way that limits our exposure to news and analysis that challenges us. (I’ve written about that here, here, and here.) In this interview with Mashable he even uses the same metaphor of feeding users their vegetables!
So, thus far my opinion of Pariser’s work is very high. But what kind of blogger would I be if I didn’t quibble? So here goes…
From the Mashable interview (Mashable in bold; Pariser non-bold):
Isn’t seeking out a diversity of information a personal responsibility? And haven’t citizens always lived in bubbles of their own making by watching a single news network or subscribing to a single newspaper?
There are a few important ways that the new filtering regime differs from the old one. First, it’s invisible — most people aren’t aware that their Google search results, Yahoo News links, or Facebook feed is being tailored in this way.
When you turn on Fox News, you know what the editing rule is — what kind of information is likely to get through and what kind is likely to be left out. But you don’t know who Google thinks you are or on what basis it’s editing your results, and therefore you don’t know what you’re missing.
I’m just not sure that this is true. I completely recognize the importance of algorithmic transparency, given the terrific power algorithms have over our lives. But it’s not obvious to me that we’re living in a less transparent world. Do we really know more about how Fox’s process works than we do about how Google’s does? It seems to me that in each case we have a rough sketch of the primary factors that drive decisions, but in neither do we have perfect information.
But to me there is an important difference: Google knows how its own process works better than Fox knows how its own process works. Such is the nature of algorithmic decision-making. At least to the people who can see the algorithm, it’s quite easy to tell how the filter works. This seems fundamentally different from the Fox newsroom, where even those involved probably have imperfect knowledge of the filtering process.
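To make that concrete, here’s a minimal sketch of a personalized filter. This is entirely hypothetical (it is not how Google, Facebook, or anyone else actually ranks content); the point is just that for an algorithmic filter, the editing rule is spelled out in the source, so anyone with access to the code can see exactly what gets through and why.

```python
# Hypothetical personalized news filter. The names and scoring rule are
# invented for illustration; the point is that the filtering logic is
# explicit and inspectable by anyone who can read the code.

def score(article, user_interests):
    """Score an article by how many of its topics match the user's interests."""
    return len(set(article["topics"]) & set(user_interests))

def personalize(articles, user_interests, limit=2):
    """Return the titles of the top `limit` articles, best match first."""
    ranked = sorted(articles, key=lambda a: score(a, user_interests), reverse=True)
    return [a["title"] for a in ranked[:limit]]

articles = [
    {"title": "Election analysis", "topics": ["politics", "data"]},
    {"title": "New phone review", "topics": ["tech", "gadgets"]},
    {"title": "Senate vote recap", "topics": ["politics"]},
]

# A politics-focused reader's top-2 feed never includes the gadget review,
# and the code makes it obvious why.
print(personalize(articles, ["politics", "data"]))
# → ['Election analysis', 'Senate vote recap']
```

A human newsroom has no equivalent artifact to inspect: its “editing rule” lives partly in unwritten norms and individual judgment calls.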
Life offline might feel transparent, but I’m not sure it is. Back in November I wrote a post responding to The Atlantic’s Alexis Madrigal and a piece he’d written on algorithms and online dating. Here was my argument then:
Madrigal points out that dating algorithms are 1) not transparent and 2) can accelerate disturbing social phenomena, like racial inequity.
True enough, but is this any different from offline dating? The social phenomena in question are presumably the result of the state of the offline world, so the issue then is primarily transparency.
Does offline dating foster transparency in a way online dating does not? I’m not sure. Think about the circumstances by which you might meet someone offline. Perhaps a friend’s party. How much information do you really have about the people you’re seeing? You know a little, certainly. Presumably they are all connected to the host in some way. But beyond that, it’s not clear that you know much more than you do when you fire up OkCupid. On what basis were they invited to the party? Did the host consciously invite certain groups of friends and not others, based on who he or she thought would get along together?
Is it at least possible that, given the complexity of life, we are no more aware of the real-world “algorithms” that shape our lives?
So to conclude… I’m totally sympathetic to Pariser’s focus and can’t wait to read his book. I completely agree that we need to push for greater transparency with regard to the code and the algorithms that increasingly shape our lives. But I hesitate to call a secret algorithm less transparent than the offline world, simply because I’m not convinced anyone ever really understood how our offline filters worked either.