Will Google Reader save Google+? Or will G+ ruin RSS?

Becca Rosen has an interesting piece at The Atlantic on Google+:

But two changes Google is making may put a bit of life back into the site. First, Google Reader, the company’s RSS aggregator, will soon be better integrated with the site. You’ll be able to share through Google+, not just through your Reader connections. For people who use Reader, the wall of separation between these two services has always been a frustration.

My first thought is that this could get me active on G+. I’m not very social with Reader (on purpose) but I still do use it. Even with Twitter as my main info feed throughout the day, Reader is my backstop; I check it once a week and make sure I don’t miss anything good. It’s still a staple for me. So the integration could in theory make G+ more appealing. I could see using it as a Tumblr-esque spot, for thoughts and interactions too long for Twitter or Facebook but too short or raw for a blog post.

But here’s Dave Winer, creator of RSS, on the dominance of Reader in the shrinking RSS market:

Google seems to have the power to either seriously injure RSS, or perhaps set it free. Not sure which would happen if they radically changed course. I just know that users have made the other RSS reading tools be dependent on it. And that’s not a great way to do things. What makes RSS useful is its power to de-centralize. To re-centralize it for a little convenience is to miss out on the variety that’s possible if you’re willing to suffer a bit. Software is full of tradeoffs.

He reports that there are rumors of big changes coming to Reader, and I’d imagine G+ integration is one of those. So count me as both nervous and curious. I can imagine Reader/G+ integration working for me, but that’s a small issue. More important is that RSS remains a vibrant technology. What Google does with Reader will have a big impact on the use of RSS.



Initial thoughts on Eli Pariser

Eli Pariser, president of the board at MoveOn.org, has a new book out called The Filter Bubble, and based on his recent NYT op-ed and some interviews he’s done I’m extremely excited to read it. Pariser hits on one of my pet issues: the danger of Facebook, Google, etc. personalizing our news feeds in a way that limits our exposure to news and analysis that challenges us. (I’ve written about that here, here, and here.) In this interview with Mashable he even uses the same metaphor of feeding users their vegetables!

The Filter Bubble - Eli Pariser

So, thus far my opinion of Pariser’s work is very high. But what kind of blogger would I be if I didn’t quibble? So here goes…

From the Mashable interview (Mashable’s question first, then Pariser’s answer):

Isn’t seeking out a diversity of information a personal responsibility? And haven’t citizens always lived in bubbles of their own making by watching a single news network or subscribing to a single newspaper?

There are a few important ways that the new filtering regime differs from the old one. First, it’s invisible — most people aren’t aware that their Google search results, Yahoo News links, or Facebook feed is being tailored in this way.

When you turn on Fox News, you know what the editing rule is — what kind of information is likely to get through and what kind is likely to be left out. But you don’t know who Google thinks you are or on what basis it’s editing your results, and therefore you don’t know what you’re missing.

I’m just not sure that this is true. I completely recognize the importance of algorithmic transparency, given the terrific power algorithms have over our lives. But it’s not obvious to me that we’re living in a less transparent world. Do we really know more about how Fox’s process works than we do about how Google’s does? It seems to me that in each case we have a rough sketch of the primary factors that drive decisions, but in neither do we have perfect information.

But to me there is an important difference: Google knows how its process works better than Fox knows how its process works. Such is the nature of algorithmic decision-making. At least to the people who can see the algorithm, it’s quite easy to tell how the filter works. This seems fundamentally different from the Fox newsroom, where even those involved probably have imperfect knowledge of the filtering process.
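To make that concrete, here’s a toy sketch (invented purely for illustration; it resembles no real product, and every feature and weight is made up). The point is that anyone who can read the code can see exactly what the filter boosts and buries:

```python
# Toy, invented ranking rule -- not any real system.
# The "editing rule" is fully explicit to anyone who can read it.

def score(story, user):
    """Score a story for a user; every factor is visible in the source."""
    s = 0.0
    if story["topic"] in user["clicked_topics"]:
        s += 2.0                          # boost topics the user has clicked on
    if story["source"] in user["followed_sources"]:
        s += 1.0                          # boost sources the user follows
    s -= 0.5 * story["age_hours"] / 24.0  # prefer fresher stories
    return s

user = {"clicked_topics": {"baseball"}, "followed_sources": {"NYT"}}
stories = [
    {"title": "Trade rumors", "topic": "baseball", "source": "ESPN", "age_hours": 2},
    {"title": "Budget talks", "topic": "politics", "source": "NYT", "age_hours": 2},
]
ranked = sorted(stories, key=lambda st: score(st, user), reverse=True)
print([st["title"] for st in ranked])  # the clicked-on topic outranks the rest
```

Contrast that with a newsroom, where there is no single artifact you can inspect to recover the editorial rule.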

Life offline might feel transparent, but I’m not sure it is. Back in November I wrote a post responding to The Atlantic’s Alexis Madrigal and a piece he’d written on algorithms and online dating. Here was my argument then:

Madrigal points out that dating algorithms are 1) not transparent and 2) can accelerate disturbing social phenomena, like racial inequity.

True enough, but is this any different from offline dating?  The social phenomena in question are presumably the result of the state of the offline world, so the issue then is primarily transparency.

Does offline dating foster transparency in a way online dating does not?  I’m not sure.  Think about the circumstances by which you might meet someone offline.  Perhaps a friend’s party.  How much information do you really have about the people you’re seeing?  You know a little, certainly.  Presumably they are all connected to the host in some way.  But beyond that, it’s not clear that you know much more than you do when you fire up OkCupid.  On what basis were they invited to the party?  Did the host consciously invite certain groups of friends and not others, based on who he or she thought would get along together?

Is it at least possible that, given the complexity of life, we are no more aware of the real-world “algorithms” that shape our lives?

So to conclude… I’m totally sympathetic to Pariser’s focus and can’t wait to read his book. I completely agree that we need to push for greater transparency with regard to the code and the algorithms that increasingly shape our lives. But I hesitate to call a secret algorithm less transparent than the offline world, simply because I’m not convinced anyone really understands how our offline filters work either.


Google won’t feed me my vegetables

I had a post months back called “Who will feed me my vegetables?” about the dangers of social news feeds. Here was the gist:

Consider politics.  Facebook knows I self-designate as “liberal”.  They know I’m a “fan” of Barack Obama and the Times’ Nick Kristof.  They can see I’m more likely to “like” stories from liberal outlets.  So what kind of political news stories will they send my way?  If the algorithm’s aim is merely to feed me stories I will like then it’s not hard to imagine the feed becoming an echo chamber.

Imagine if Facebook were designing an algorithm to deliver food instead of news.  It wouldn’t be hard to determine the kind of food I enjoy, but if the goal is just to feed me what I like I’d be in trouble.  I’d eat nothing but pizza, burgers and fries.

This is not just idle speculation. Here’s an entry today at the Google News Blog:

Last summer we redesigned Google News with new personalization features that let you tell us which subjects and sources you’d like to see more or less often. Starting today — if you’re logged in — you may also find stories based on articles you’ve clicked on before.

For signed-in users in the Personalized U.S. Edition, “News for You” will now include stories based on your news-related web history. For example, if you click on a lot of articles about baseball, we’ll make sure that you get a chance to see breaking baseball stories. We found in testing that more users clicked on more stories when we added this automatic personalization, sending more traffic to publishers.

In many ways this is obviously useful. But it carries real risks. Note that last line, which reveals the driving force behind these efforts: profit. What you should be reading is nowhere in the equation. Even what you want to read is useful only to the extent that it serves up traffic and ad revenue. Somewhat related: I’m increasingly curious about the possibility of “responsible algorithms” adding a new layer to the web experience for users on an opt-in basis. That’s something I’ll expand on in a future post.
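The echo-chamber dynamic I’m worried about can be sketched in a few lines (a deliberately crude toy; the topics, counts, and ranking rule are all invented). The feed shows whatever the user has clicked most, and every shown story earns another click:

```python
# Toy feedback loop (illustrative only): nothing is ever blocked,
# but "feed me what I like" starves minority topics anyway.

clicks = {"sports": 3, "politics": 2, "science": 2, "arts": 1}

def feed(clicks, slots=5):
    """Fill the feed with the two currently most-clicked topics."""
    ranked = sorted(clicks, key=clicks.get, reverse=True)
    return [ranked[i % 2] for i in range(slots)]

for _ in range(20):                # twenty rounds of personalization
    for topic in feed(clicks):
        clicks[topic] += 1         # each shown story feeds back into the ranking

print(clicks)                      # sports and politics swamp science and arts
```

After twenty rounds, science and arts never surface again, even though the user never asked to have them removed.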


Bad arguments against net neutrality

Net neutrality is an issue where my bias (in favor) is so clear that I try to be extra careful not to stake out a position before I’ve thoroughly researched it.  For that reason I’m still not sure what I think about net neutrality even in principle, much less about the FCC’s recently proposed rules.

So I was interested to see what points libertarian magazine Reason put forth in this anti-net neutrality video:

To start, I want to zero in on this line:

If AT&T DSL blocked your access to Google because they wanted you to use Yahoo, what would you do? Probably cancel your plan and go to a provider that gives you easy access to your favorite sites.

There are multiple things wrong with this statement.

First, this scenario completely ignores the widespread lack of competition among ISPs.  The consumer behavior the video describes only makes sense where internet service is actually competitive.  The libertarian response is that the answer is to reform telecom regulation so that competition can flourish.  But if that’s really your argument, you have to say so, instead of misleading viewers into thinking that competition already exists.  The fact that the scenario Reason describes is only possible if significant reforms pass first seems relevant.

Second, put that aside for a moment and imagine that enough competition existed for this kind of consumer behavior to be possible.  Is the scenario plausible?  The video uses a nice trick to make us think it is: appealing to our universal love of Google.  So let’s flip it around and try that on for size:

If AT&T DSL blocked your access to Yahoo because they wanted you to use Google, what would you do?  Off the top of my head my answer is “Probably nothing.”

The third misleading thing about that example is the choice between incumbents.  Yahoo or Google?  If you tell me I can’t use Google I know enough about how much I love Google to seek out another provider.

But what about Google vs. the next great search company?  If Google can reach into its deep pockets to ensure its searches are delivered faster, that makes it a lot harder for an emerging company/technology to compete for market share.  New entrants not only lack the deep pockets to pay ISPs, they lack the name recognition required to convince consumers to seek out neutral ISPs.  Even if I would switch ISPs to make sure Google isn’t disadvantaged relative to Yahoo, would I actively switch from an ISP that equally prioritized incumbents in order to access new entrants I’d never heard of?

I don’t consider any of these to be case-closed arguments in favor of net neutrality.  But if these are the best arguments against net neutrality, they’re fairly weak.  The libertarian trump card, played at the end of the video, is unforeseen consequences, and I take that point seriously.  That’s one of the reasons I’ve not yet reached a firm position on net neutrality.

But while I may not have a stance on the issue, I do have a starting point, and it’s this: something incredibly important is at stake here.  Both at the software layer and the content layer we are seeing the rise of a fascinating model of information production.  I’ll once again defer on defining that model, except to say that it is significantly non-commercial.  Non-commercial production is threatened by a non-neutral net, even more so than emergent commercial entities.  If you care about preserving that non-commercial aspect, if only to learn more about it and to see its full potential, you should care a lot about net neutrality.