The creative case for work-life balance

A while back I read an interesting NYT piece on how entrepreneurs often exhibit manic tendencies.  Most extreme was SCVNGR CEO Seth Priebatsch:

To keep the pace of his thoughts and conversation at manageable levels, he runs on a track every morning until he literally collapses. He can work 96 hours in a row. He plans to live in his office, crashing in a sleeping bag. He describes anything that distracts him and his future colleagues, even for minutes, as “evil.”

Intense.  After reading this, I began to wonder about how crucial this sort of intensity and stamina is to success.  Is it possible to compete with this personality type while getting 8 hours of sleep every night?  While having a life?

I was reminded of this by a post by Matt Douglas, himself a startup CEO.  Douglas zeroes in on a number of Priebatsch quotes from various sources and argues the merits of work-life balance.  It’s well worth a read.

As I reconsidered Priebatsch’s case, I recalled a line from Steven Johnson’s latest book: Where Good Ideas Come From.

I’ll be posting more about the book – and on topics more relevant to this blog’s core focus – but I wanted to share a bit that I count as an argument against the Priebatsch model.

Writing about the importance of “exapting” ideas from one field to another, Johnson relates the discovery of the double-helix structure of DNA, including this bit:

It is a fitting footnote to the story that Watson and Crick were notorious for taking long, rambling coffee breaks, where they tossed around ideas in a more playful setting outside the lab – a practice that was generally scorned by their more fastidious colleagues.  With their weak-tie connections to disparate fields, and their exaptative intelligence, Watson and Crick worked their way to a Nobel Prize in their own private coffeehouse.

An anecdote like that is hardly compelling evidence on its own, but the lesson here is consistent with the book’s larger thesis.

On the one hand, work-life balance recommends itself and doesn’t need to lean on arguments about fostering innovation.  On the other, I’d sure love to be able to work effectively on 3 or 4 hours of sleep every night.

But just in case other mere mortals are discouraged by stories of the Priebatsches of the world, they ought to take heart: a coffee break, a bit of pleasure reading, perhaps even a bit of day-dreaming can foster creativity. It seems at least possible that the very same focus that is helping Priebatsch succeed could also be holding him back.

Imagine a smart chair

Hearing others’ visions for the future of the Net can be inspiring.  But a lot of the time it’s not.  One thing I’m struck by with the explosion of social media, in particular, is the shallow nature of the industry’s ambition.  For every person writing about how Twitter can enable political change, five others are preparing slide decks on how social media can offset your firm’s direct mail budget.  There’s a place for that, of course.  But one of the great things about the internet is that it invites us to consider more radical possibilities for change.

The Success of Open Source

As I was thinking about this I was reminded of a quote from the end of Steven Weber’s 2004 book The Success of Open Source, and I decided it was worth sharing.

(He’s just finished describing Wired editor Kevin Kelly’s vision of smart objects, priced in real-time.)  Weber:

Imagine a smart chair, connected to a lot of other smart things, with huge bandwidth between them, bringing transaction costs effectively to zero.  Now ask yourself, With all that processing power and all that connectivity, why would a smart chair (or a smart car or a smart person) choose to exchange information with other smart things through the incredibly small aperture of a “price”? A price is a single, mono-layered piece of data that carries extraordinarily little information in and of itself.  (Of course it is a compressed form of lots of other information, but an awful lot is lost in the process of compression.)  The question for the perfect market that I’ve envisioned above is, Why compress?  My point is that even a perfect market exchange is an incredibly thin kind of interaction, whether it happens between chairs or between people, whether it is an exchange of goods, ideas, or political bargains.  I want to emphasize that communities, regimes, and other public spheres can come in many different shapes and forms.  The “marketized” version is only one, and it is in many ways an extraordinarily narrow one that barely makes use of the technology at hand.

So there you are.  The point of this blog, really, is to take the internet up on its invitation, and to think more creatively about society and its future.

Who are you calling reduced?

Zadie Smith has a… I’ll say frustrating… essay in The New York Review of Books about Facebook, The Social Network and Jaron Lanier’s book You Are Not a Gadget.  While she raises some interesting questions, and while I look forward to reading Lanier’s book, there’s a lot I don’t accept.  Over at The Atlantic Alexis Madrigal has a smart and tempered response taking on, among other things, the charge that Facebook promotes homogenization.

Smith’s central point, as I read her, is this:

When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears.

Who among us has lost our messy feelings, our desires and our fears?  Anyone? Bueller?  I thought not.  As I keep pointing out, we use online tools to supplement our offline lives rather than to replace them.

That doesn’t mean that there’s no reason for concern.  Smith quotes Lanier:

Different media designs stimulate different potentials in human nature.

So what potentials are we stimulating with today’s web tools?  Smith seems to think we’re not stimulating much worthwhile. Ezra Klein has a different take:

…if you’re someone who likes to spend Saturday in a quiet room with a good book and a long time to think about it, you might find Facebook unnerving. And Zadie Smith and Ross Douthat do. Sometimes, I’d guess, we all do. Conversely, if you’re someone who likes people but has trouble meeting them, or gets shy in unfamiliar social settings, you probably don’t think the Internet has made you less human.

It’s worth reading Ezra’s whole post.  He references “Alter Ego”, a book matching photos of online gamers with their avatars, and highlights a particularly compelling example of “becoming human” online.

For a more philosophical examination of how the web is contributing to human self-actualization, I recommend Yochai Benkler’s The Wealth of Networks.

Benkler argues persuasively that the ‘net is enhancing our autonomy and enabling our individuality.  This is not guaranteed by the technology, which is why so many of Smith’s concerns matter to ensuring that the net continues to improve human welfare.  Yet we have not been reduced, and little has been lost.  Rather, much has already been gained.

Code is law, and also romance

Alexis Madrigal has an interesting column in this month’s Atlantic on the use of algorithms in online dating.  If data mining and algorithms can help people more efficiently find matches, what could be wrong with that?  Plenty, says Madrigal:

The company can quantify things you could guess but might rather not prove. For instance, all races of women respond better to white men than they should based on the men’s looks. Black women, as a group, are the least likely to have their missives returned, but they are the most likely to respond to messages.

I asked Yagan whether OkCupid might try tailoring its algorithm to surface more statistically successful racial combinations. Such a measure wasn’t out of the question, he said. “Imagine we did a lot of research, and we found that there were certain demographic or psychographic attributes that were predictors of three-ways. Hispanic men and Indian women, say,” Yagan suggested. “If we thought that drove success, we could tweak it so those matches showed up more often. Not because of a social mission, but because if it’s working, there needs to be more of it.”

So perhaps it’s a bit trickier than we might think.  Moreover, it’s hard to disagree with Madrigal’s basic point:

Algorithms are made to restrict the amount of information the user sees—that’s their raison d’être. By drawing on data about the world we live in, they end up reinforcing whatever societal values happen to be dominant, without our even noticing. They are normativity made into code—albeit a code that we barely understand, even as it shapes our lives.

We’re not going to stop using algorithms. They’re too useful. But we need to be more aware of the algorithmic perversity that’s creeping into our lives.

Quite so.  This point is in line with Lawrence Lessig’s argument that “code is law”, and I certainly agree that we need to care, as a society, about the values underlying our code.

That said, Madrigal points out that dating algorithms 1) are not transparent and 2) can accelerate disturbing social phenomena, like racial inequity.

True enough, but is this any different from offline dating?  The social phenomena in question are presumably the result of the state of the offline world, so the issue then is primarily transparency.

Does offline dating foster transparency in a way online dating does not?  I’m not sure.  Think about the circumstances by which you might meet someone offline.  Perhaps a friend’s party.  How much information do you really have about the people you’re seeing?  You know a little, certainly.  Presumably they are all connected to the host in some way.  But beyond that, it’s not clear that you know much more than you do when you fire up OkCupid.  On what basis were they invited to the party?  Did the host consciously invite certain groups of friends and not others, based on who he or she thought would get along together?

Is it at least possible that, given the complexity of life, we are no more aware of the real-world “algorithms” that shape our lives?

None of this takes away from the salience of Madrigal’s point: we should want to know more about the algorithms that dictate our online behavior.  Not because we aren’t used to the opaque complexity of circumstance, but because we are.

(FWIW, I highly recommend OkCupid’s blog, OkTrends.  They put the scary amount of data to which they have access to consistently interesting use.)

Facebook and face-to-face

I’ve blogged about this before, but I wanted to share a great post from Ed Glaeser at NYT’s Economix on how social networking – in this case Facebook – supplements in-person interaction, rather than replacing it:

…it isn’t clear if Facebook will increase or decrease the demand for face-to-face interactions.  When theory is ambiguous, we need to turn to the data, and it seems empirically that Facebook supports, rather than replaces, in-person meetings. For example, surveys of Facebook users have found that the use of “Facebook to meet previously unknown people remained low and stable” and that “students view the primary audience for their profile to be people with whom they share an offline connection.” In other words, Facebook seems to be typically used to connect people who have connected through some other medium, like being in the same class or meeting at a party, which seems to suggest complementarity between meeting face-to-face and connecting on Facebook.

Another paper looks at whether people who are good at face-to-face interactions made greater use of social-networking sites. The study examined a group of 13- to 14-year-olds in 1998-9 and rated their ability to connect well in person with a close friend. In 2006-8, those same people were asked about their involvement with social-networking sites.

The people who were better at interacting face-to-face in adolescence had more friends on social-networking sites as young adults. Again, electronic interactions seem to complement face-to-face connections.

Transparency or objectivity? Yes.

Jay Rosen linked on Twitter to this post by Terry Heaton, a consultant and journalism professor, on new media ethics that frames the subject in a damaging manner:

There are basically two forms of ethical conduct in the press today. One espouses a traditional set of canons and exists with self-restraint as a guide. In this world, objectivity — or attempts at objectivity — are the norm, for balance and fairness are the goals. Here, truth is presented as existing between two or more “sides” to stories. In the second world, however, transparency replaces objectivity in the belief that the audience can determine bias and figure out where the writer is coming from. In this view, objectivity is a farce and truth determination is up to the reader.

I don’t have much to say about the specific example Heaton examines: a gripe against American Express by TechCrunch’s Michael Arrington.

I have to object, though, to the general frame of objectivity vs. transparency.  I do take issue with the press’s misguided preoccupation with objectivity, and I do support greater transparency.  But I’d hate to see the two unnecessarily juxtaposed.

Not either or

For all the problems with today’s impartiality in journalism – and Rosen’s “Cult of Innocence” post is the place to start on that – there is real value in factually accurate reporting that aspires to a certain kind of objectivity.  (Here’s how I think that could work in the internet age.)  And transparency is aiding experiments in how a new kind of objectivity might look.

The best example I’m aware of here is Wikipedia.  Its community aspires to a certain kind of objectivity, based on a neutral point of view, and is heavily reliant on transparency.  While its process is by no means perfect, it offers some insight into the potential interaction between objectivity and transparency.

Transparency has real advantages if your goal is successfully transforming journalistic objectivity.  First, it offers outsiders a chance to help improve the process.  It’s much easier for the average reader to discover shortcomings in Wikipedia’s process, and to suggest improvements, than it is to do the same with a traditional media outlet.

Second, transparency helps build trust.  And producing accurate journalism that aspires to some version of objectivity is meaningless if no one trusts your process.

Transparency is not enough

Transparency may offer journalists the opportunity to reinvent objectivity within the press, yet transparency alone is no cure-all for journalism.  Just as we’re beginning to see innovative experiments in transparency, I hope we’ll continue to see experiments in objectivity building on successes like Wikipedia.

UPDATE: Also via Rosen, I just came across this 2009 David Weinberger post “Transparency is the new objectivity”.  He makes a good case, but I continue to think the juxtaposition will do damage in the long run, if only via those who fail to read past the subject lines.

Who will feed me my vegetables?

Here’s a snippet from a post imagining a news aggregator built into Facebook, which the author refers to as “inevitable”:

Suddenly, Facebook will funnel news to you from a variety of sources based on data it already knows about you and your friends. Whereas Google News (theoretically) knows little about you until you personalize it, Facebook knows your demographic, your interests, stories and pages you’ve liked, your friends and news they’ve read, liked and commented on.

From the perspective of the user, the potential benefits are obvious.  If Facebook can determine from your profile, your friendships and your conversations that you want to read news items about digital music, cars and technology startups, it can save you the time and effort required to customize a news diet as you would through Google News or Google Reader.

But what about the negatives?

Readers won’t realize they’re consuming news from an echo chamber designed by Facebook’s feed algorithm.

This might not matter for certain types of news items, but it matters a lot for others.  Consider politics.  Facebook knows I self-designate as “liberal”.  They know I’m a “fan” of Barack Obama and the Times’ Nick Kristof.  They can see I’m more likely to “like” stories from liberal outlets.

So what kind of political news stories will they send my way?  If the algorithm’s aim is merely to feed me stories I will like then it’s not hard to imagine the feed becoming an echo chamber.

Imagine if Facebook were designing an algorithm to deliver food instead of news.  It wouldn’t be hard to determine the kind of food I enjoy, but if the goal is just to feed me what I like I’d be in trouble.  I’d eat nothing but pizza, burgers and fries.

You might argue that a sophisticated algorithm could identify what we could call “second-order desires”, like wanting to eat healthily or wanting to read balanced news coverage.

Perhaps.  But human willpower is weak.  Just as we’re bad at sticking to our diets, we’re bad at seeking out perspectives with which we disagree.
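To make the echo-chamber worry concrete, here’s a toy sketch of the difference between a feed that maximizes predicted “likes” alone and one that reserves a couple of slots for low-affinity stories.  Everything here is invented for illustration – the category names, the affinity scores, the slot counts – and bears no relation to any real feed algorithm:

```python
# Hypothetical engagement scores: how likely this reader is to "like"
# a story from each category (all names and numbers illustrative).
affinity = {"liberal-oped": 0.9, "tech": 0.8, "conservative-oped": 0.2}

# Ten candidate stories per category.
stories = [(cat, i) for cat in affinity for i in range(10)]

def like_maximizing_feed(stories, n=5):
    # Rank purely by predicted enjoyment: the echo chamber.
    return sorted(stories, key=lambda s: affinity[s[0]], reverse=True)[:n]

def balanced_feed(stories, n=5, veg_slots=2):
    # Reserve a few slots for low-affinity categories: the "vegetables".
    by_like = sorted(stories, key=lambda s: affinity[s[0]], reverse=True)
    veg = [s for s in by_like if affinity[s[0]] < 0.5][:veg_slots]
    rest = [s for s in by_like if s not in veg][: n - len(veg)]
    return rest + veg

print({cat for cat, _ in like_maximizing_feed(stories)})  # one-sided
print({cat for cat, _ in balanced_feed(stories)})         # includes dissent
```

The first feed serves nothing but the highest-affinity category; the second still leans toward what the reader likes but guarantees some exposure to the rest.  The hard part, of course, is that no engagement metric rewards the second design.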

For the sake of the public sphere, we need news diets that insist on feeding us our vegetables.

Hierarchies and/or networks

The world really doesn’t need another response to Malcolm Gladwell’s article on Twitter and social revolutions, so instead of offering my full thoughts, I’ll just make one point.  Gladwell:

Facebook and the like are tools for building networks, which are the opposite, in structure and character, of hierarchies…

There are many things, though, that networks don’t do well. Car companies sensibly use a network to organize their hundreds of suppliers, but not to design their cars...

Because networks don’t have a centralized leadership structure and clear lines of authority, they have real difficulty reaching consensus and setting goals. They can’t think strategically; they are chronically prone to conflict and error.

This brought to mind a similar point from a recent National Journal piece, How Tea Party Organizes Without Leaders:

Headless organizations have other problems. They are much better at mobilizing to stop a proposal or person they dislike than at agreeing on an alternative. They are bad at negotiating and compromising, because no one can speak for them, and many of their members regard compromising as selling out.

Successful open source projects clearly utilize networks effectively.  But that doesn’t mean that they are paralyzed in the face of decisions.  That’s because they employ alternative decision-making structures – including hierarchies – to tackle tasks for which networks are ill-suited.  Take Linux, for instance.  While a global network of programmers contributes to the project, the community employs a hierarchical decision-making structure to handle changes.  (For more on how that works, I recommend this book.)

When considering the implications of networked political movements, it’s worth remembering that this in no way precludes the use of hierarchies for certain tasks.  Gladwell’s own example of car companies demonstrates the viability of a blended approach.  But he somehow chooses to ignore it when criticizing the usefulness of networks in political change.  Successful political movements will increasingly utilize a blend of the two as we’ve seen in the context of open source software.

(If you’re looking for other reactions here’s Tyler Cowen, Henry Farrell, and a bunch more at Nieman Lab.)

Facebook’s “Photo Memories” and filter failure

If you’ve scanned your friends’ photo albums on Facebook recently, you may have noticed a new feature on the right sidebar labeled “Photo Memories.”  This raises an important issue that I’ve been meaning to blog about for a while: our collective digital memory.  It’s the subject of a fairly new book titled Delete: The Virtue of Forgetting in the Digital Age.

I’ve not yet read the book, but I listened to a talk by the author, Viktor Mayer-Schönberger, at Harvard’s Berkman Center, as well as his conversation with Farhad Manjoo of Slate on Bloggingheads TV.  (While the Berkman talk is a more thorough discussion of his ideas, in some ways the Bloggingheads talk is clearer and more illuminating.)

Here’s how Berkman describes the book:

DELETE argues that in our quest for perfect digital memories where we can store everything from recipes and family photographs to work emails and personal information, we’ve put ourselves in danger of losing a very human quality—the ability and privilege of forgetting. Our digital memories have become double-edged swords—we expect people to “remember” information that is stored in their computers, yet we also may find ourselves wishing to “forget” inappropriate pictures and mis-addressed emails. And, as Mayer-Schönberger demonstrates, it is becoming harder and harder to “forget” these things as digital media becomes more accessible and portable and the lines of ownership blur (see the recent Facebook controversy over changes to their user agreement).

Mayer-Schönberger examines the technology that’s facilitating the end of forgetting—digitization, cheap storage and easy retrieval, global access, and increasingly powerful software—and proposes an ingeniously simple solution: expiration dates on information.

The cataloging of our lives online is a relatively new phenomenon, so we haven’t had much time to consider its impact.  But it’s going to be interesting.  Here’s Facebook VP Christopher Cox explaining the potential impact of Facebook Places:

Too many of our human stories are still collecting dust on the shelves of our collections at home…Those stories are going to be pinned to a physical location so that maybe one day in 20 years our children will go to Ocean Beach in San Francisco, and their little magical thing will start to vibrate and say, ‘This is where your parents first kissed.’

Google CEO Eric Schmidt puts it this way:

I don’t believe society understands what happens when everything is available, knowable and recorded by everyone all the time.

According to the Wall Street Journal, Schmidt “predicts, apparently seriously, that every young person one day will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends’ social media sites.”

In case all of this wasn’t tricky enough, applying expiration dates to information, should we want to take Mayer-Schönberger’s advice, is somewhere between difficult and impossible.  Luckily, deleting information isn’t the only way to control our collective memory.  As Clay Shirky says, “There’s no such thing as information overload – only filter failure.”  What we view online is only partially determined by what’s online, because everything’s online.  What we view is determined largely by our filters.  Facebook is a filter, as is Google.  My RSS reader is a filter, as is my email inbox.

Blogging for Foreign Policy, Evgeny Morozov is thinking along the same lines:

So what else could we do, given that expiration-date-technology capable of destroying all copies is not an option? This is an easy one: make offensive information harder to find. After all, it’s the fact that our data is findable – most commonly through search engines – that makes us really concerned.

To apply Shirky’s maxim to the question of digital memory, let’s return to Facebook’s Photo Memories.  The albums of photos of my friends from college have been on Facebook for years.  But, until recently, I had to dig to look at them.  They only registered on my feed if someone tagged someone or commented on them. Typically, as time passed, this happened less and less and then not at all.  And so those photos, though available, were no longer viewed.  And then the filter was changed.  Suddenly, albums from years ago are being thrust in front of me and I’m looking through some of them again.  The point is that digital memory is about more than availability.  In practice, it’s about filters.
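The Photo Memories shift can be sketched in a few lines: the same albums, two different scoring functions, two very different “memories”.  The album names, dates, and scoring rules below are all made up for illustration – this is not Facebook’s actual ranking, just the shape of the idea:

```python
from datetime import date

today = date(2010, 10, 1)

# Hypothetical photo albums: (name, upload date, days since last comment).
albums = [
    ("College graduation", date(2005, 5, 20), 1500),
    ("Last weekend", date(2010, 9, 25), 2),
    ("Road trip '06", date(2006, 7, 4), 1200),
]

def activity_filter(album):
    # Old behavior: surface only what people are actively commenting on.
    _, _, idle_days = album
    return 1.0 / (1 + idle_days)

def memories_filter(album):
    # A "Photo Memories"-style tweak: deliberately boost old, dormant albums.
    _, uploaded, _ = album
    return (today - uploaded).days  # older wins

top_activity = max(albums, key=activity_filter)[0]
top_memories = max(albums, key=memories_filter)[0]
print(top_activity, "|", top_memories)
```

Nothing in the underlying data changed between the two functions.  Swap the filter, and a dormant album from five years ago jumps to the top of the page – which is exactly the sense in which digital memory is a property of the filter, not the archive.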

What do we want to remember and what do we want to forget?  I’m not sure.  I find several of Mayer-Schönberger’s examples of the dangers of remembering to be quite compelling.  But, in general, the availability of more and more information also has plenty of upside.  Obviously, the question is about balancing the two, and I think it’s clear that we don’t yet have any idea where that balance should be struck.

In the meantime, perhaps we should focus on improving the design and governance of our filters.  We should be in favor of openness, transparency, democracy and individual autonomy.  This isn’t the same as saying we want information to be available and transparent.  But if our filters are open, transparent, and democratic, we’ll at least have an easier time evaluating and improving them.

Perhaps we won’t miss forgetting as much as we think.  (Will we wistfully look back at old photos and fondly remember forgetting?)  Yet, there’s reason to think that if we do ever want to forget, the solution lies in filtering the past rather than deleting it.