The Progress of Urban Development

This is a quick departure from the blog’s main topics, but I’ve been writing a series for The Atlantic Cities launch, sponsored by Dow, on “The Progress of Urban Development.” The last post went up today, so all 31 are up here. I like some posts better than others, so I thought I’d share a few of my favorites here:

High-Speed Rail and the New Regional Economics

Aside from the particular merits of any individual project, high-speed rail is arguably a crucial component of an economic strategy centered around mega-regions, defined as large, contiguous economic areas containing multiple cities and their surrounding suburbs. As Atlantic Senior Editor and academic Richard Florida has argued, mega-regions are the relevant economic unit for the 21st century. And even a casual comparison between U.S. mega-regions and proposed high-speed rail lines reveals a connection.

How Cities Innovate

Technology is a major driver of economic growth in the modern world. But technological progress is not equally distributed around the globe. It is no accident that web startups are concentrated in Silicon Valley, biotech firms in Boston, and so on. In fact, approximately 20 metropolitan regions account for most of the world’s technological innovation, as measured by patents (an imperfect but widely used yardstick). It is no surprise, then, that cities drive technological innovation worldwide, or that a couple dozen of them are disproportionately innovative. Understanding why is crucial for promoting economic growth, as well as for informing cities’ strategies for boosting innovation. The economic logic behind this is fairly simple and revolves around the concept of “clusters.”

A World With Fewer Farmers

In 1900, 41% of the U.S. workforce was employed in agriculture. Today, that number is less than 2%. Yet the sector has kept pace with increasing demand for agricultural goods, thanks in large part to advances in mechanization and labor productivity. To many, these trends are inextricably linked to the worst features of industrial agriculture, including unsustainable use of pesticides and commercial fertilizers, or to the decline of the family farm. But the relationship between urbanization and agriculture is more complicated. Understanding it is critical to the promotion of sustainable development throughout the world.

What “Food Miles” Misses

Scholars emphasize that assessing the environmental impact of food is an extraordinarily difficult task. But there is agreement that food miles are just one piece of the puzzle. For city dwellers this offers some relief: eating local is not the be-all and end-all of environmental consciousness. So how can consumers make better food choices? The place to start is agreeing on a common definition for sustainable food production. And life-cycle analysis, though difficult, may offer a path forward. Some have pushed for food labels to include greenhouse gas emissions across the entire life-cycle. Others have emphasized that a price on greenhouse gas emissions would help food prices reflect their environmental impact. What it means to eat sustainably is certainly still up for debate. And asking just how far food traveled to arrive on your plate is still a relevant question. It just isn’t the only one.


Examples of how media could help overcome bias

I have a piece up at The Atlantic (went up Friday) titled “The Future of Media Bias” that I hope you’ll read. I suppose the title is deliberately misleading, since the topic isn’t media bias in the typical sense. Here’s the premise:

Context can affect bias, and on the Web — if I can riff on Lessig — code is context. So why not design media that accounts for the user’s biases and helps him overcome them?

Head over to The Atlantic to read it. In the meantime, I want to expand a bit on some of my ideas.

1) This is not just about pop-up ads. The conceit of the post is visiting a conservative friend’s site and being hit with red-meat pop-ups that act as priming mechanisms. But that was just a way of introducing my point. (Evidence from comments and Twitter suggests this may have distracted some.) So while pop-ups can illustrate the premise above, the premise is in no way restricted to pop-ups, either as they exist in practice to sell ads or as they might be used in theory.

2) More on self-affirmation. I might have been clearer on how self-affirmation exercises work, although they were not described in detail in either paper I referenced. Here’s how I understand them: you’re asked to select a value that is important to your self-worth – maybe something like honesty – and then you write a few sentences about how you live by that value. Writing out a few sentences about what an honest and therefore valuable person you are makes you less worried about information threatening your self-worth.

I want to address a few potential objections to embedding an exercise like this in media. One might argue that no one would complete the exercise (I’m imagining it as a pop-up right now). Perhaps. But you could incentivize it: anyone who completes it gets their comments displayed higher, or something like that. Build incentives into a community reputation system. A second objection is that maybe you could get people to complete it once, but it’s impractical to think anyone would do it before every article. Fair point. But perhaps you just need people to do it once, and from then on it’s simply displayed alongside or above the content, for the reader to view, to prime them. Finally, I want to note that this is just one example, and I don’t think my argument rides too much on it. The reason I used it was a) there was lots of good research behind it and b) it fit nicely with the pop-up conceit of the post.

3) More examples. One paper I referenced re: global warming suggests that the headline can impact susceptibility to confirmation/disconfirmation biases. So what if the headline changed depending on the user’s biases? This would be tricky in various ways, but it’s hardly inconceivable. In fact, I wish I’d mentioned it, since in some ways it seems more practical than the self-affirmation exercises. It would, however, introduce a lot of new difficulties into the headline-writing process.

Another thing I might have mentioned is the ordering of content. Imagine you’re looking at a Room for Debate feature at the NYT. Which post should you see first? In the course of researching the Atlantic piece, I came across some evidence that the order in which you receive information matters (with the first piece being privileged), but I’m having trouble finding where I saw that now. And it’s not obvious that that kind of effect would persist in cases of political information. But, still, there may well be room to explore ordering as a mechanism for dealing with bias.

Finally – and at this point I’m working off of no research and just thinking out loud – what if you established the author’s credibility by showing the work the user was most likely to agree with, in cases of disconfirmation bias (and the reverse in confirmation bias cases)? So, say I’m reading about climate change and you knew I’d be biased against evidence for it. But the author making the case for that evidence wrote something last week that I do agree with, that does fit my worldview. What if a teaser for that piece was displayed alongside the global warming content? Would that help?

I’ve also wondered if asking users to solve fairly simple math problems would prime them to think more rationally, but again, that’s not anything based on research; just a thought.

So that’s it. A few clarifications and some extra thoughts. To me, the hope of this piece would be to inject the basic idea into the dialogue, so that researchers start to think of media as an avenue for testing their theories, and so that designers, coders, journalists, etc. start thinking of this research as input for innovation into how they create new media.

UPDATE: One more cool one I forgot to mention… there’s some evidence that presenting information graphically makes a point forcefully enough that it would take too much mental energy to rationalize around it. You can read more about that here. This column in the Boston Globe refers to a study concluding – in the columnist’s words – “people will actually update their beliefs if you hit them ‘between the eyes’ with bluntly presented, objective facts that contradict their preconceived ideas.” This strikes me as along the same lines as the graph experiment. And so these are things to keep in mind as well.


Reliable sources: An interview with FactCheck.org’s Brooks Jackson

I have a post up at The Atlantic Tech featuring an interview I did with Brooks Jackson, Director of FactCheck.org, about determining reliable sources. FactCheck.org is a terrific resource, and Brooks’ insights are excellent. Please head over to The Atlantic and read the interview. Here’s a taste:

We tend to be more skeptical of assertions that run counter to our existing worldview. How can we adjust for this bias of “motivated skepticism”? In such situations, it seems our reasoning capabilities are acting in the service of our emotions, to ill effect. Is it ever the case that we ought to employ less critical thinking?

In unSpun, Kathleen Jamieson and I argue that to keep from being fooled by this common human tendency, it’s a good idea to keep asking yourself “Am I missing something? Does the other guy have a point here?” It also helps to be aware of this universal psychological tendency, and for teachers to point out examples of it.

Kathleen doesn’t like the term “critical thinking” because it implies to some that they should automatically be critical. We prefer “analytical thinking.” If you look at it that way, I think there’s no danger of being too analytical. I agree that there is a danger of automatically distrusting anything said by people in authority. In that sense, yes, there is a danger of too much “critical” thinking. It’s one thing to be skeptical, which is good. It’s another to be cynical, which is a sort of naive belief that everybody is lying.

For more on this from FactCheck.org, check out their Tools of the Trade:

A Process for Avoiding Deception

1. Keep an open mind. Most of us have biases, and we can easily fool ourselves if we don’t make a conscious effort to keep our minds open to new information. Psychologists have shown over and over again that humans naturally tend to accept any information that supports what they already believe, even if the information isn’t very reliable. And humans also naturally tend to reject information that conflicts with those beliefs, even if the information is solid. These predilections are powerful. Unless we make an active effort to listen to all sides we can become trapped into believing something that isn’t so, and won’t even know it.

2. Ask the right questions. Don’t accept claims at face value; test them by asking a few questions. Who is speaking, and where are they getting their information? How can I validate what they’re saying? What facts would prove this claim wrong? Does the evidence presented really back up what’s being said? If an ad says a product is “better,” for instance, what does that mean? Better than what?

3. Cross-check. Don’t rely on one source or one study, but look to see what others say. When two or three reliable sources independently report the same facts or conclusions, you can be more confident of them. But when two independent sources contradict each other, you know you need to dig more deeply to discover who’s right.

4. Consider the source. Not all sources are equal. As any CSI viewer knows, sometimes physical evidence is a better source than an eyewitness, whose memory can play tricks. And an eyewitness is more credible than somebody telling a story they heard from somebody else. By the same token, an Internet website that offers primary source material is more trustworthy than one that publishes information gained second- or third-hand. For example, official vote totals posted by a county clerk or state election board are more authoritative than election returns reported by a political blog or even a newspaper, which can be out of date or mistaken.

5. Weigh the evidence. Know the difference between random anecdotes and real scientific data from controlled studies. Know how to avoid common errors of reasoning, such as assuming that one thing causes another simply because the two happen one after the other. Does a rooster’s crowing cause the sun to rise? Only a rooster would think so.


The epistemology of Wikipedia

The Atlantic Tech has a great feature for Wikipedia’s 10th anniversary, with thoughts from a number of excellent contributors, including Shirky, Benkler, Zuckerman, Rosen and more.  Check it out.

One point of interest for me was a contrast in epistemologies offered by novelist Jonathan Lethem and Clay Shirky.  Lethem:

Question: hadn’t we more or less come to understand that no piece of extended description of reality is free of agendas or ideologies? This lie, which any Encyclopedia implicitly tells, is cubed by the infinite regress of Wikipedia tinkering-unto-mediocrity. The generation of an infinite number of bogusly ‘objective’ sentences in an English of agonizing patchwork mediocrity is no cause for celebration.

Now compare that to Shirky:

A common complaint about Wikipedia during its first decade is that it is “not authoritative,” as if authority was a thing which Encyclopedia Britannica had and Wikipedia doesn’t. This view, though, hides the awful truth — authority is a social characteristic, not a brute fact.

So far, that’s basically the same critique that Lethem offers.  But unlike Lethem, Shirky offers a pragmatic version of epistemology:

Authoritativeness adheres to persons or institutions who, we jointly agree, have enough of a process for getting things right that we trust them. This bit of epistemological appraisal seems awfully abstract, but it can show up in some pretty concrete cases. DARPA, the Pentagon’s famous R&D lab, launched something in late 2009 called “The Red Balloon Challenge.” They put up ten red weather balloons around the country, and said to contestants “If you can tell us the latitude and longitude of these balloons, within a mile of their actual positions, we’ll give you $40,000.” However, because the Earth is curved, DARPA also had to explain the Haversine formula, which converts latitude and longitude to distance.

Now, did DARPA want to write up a long, technical description of the Haversine formula?  No, they did not; they had better things to do. So they did what you or I would have done: They pointed to Wikipedia. DARPA, in essence, told contestants “If you want to compete for this $40,000, you should understand this formula, and if you don’t, go look at this Wikipedia article.”
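(As an aside, the Haversine formula really is compact enough that pointing to Wikipedia is all the explanation anyone needs. Here’s a minimal Python sketch, purely for illustration; the function name and the Earth-radius constant are my own choices, not anything from DARPA or the challenge rules:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (latitude, longitude) points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    # The haversine term: sin^2(dphi/2) + cos(phi1)*cos(phi2)*sin^2(dlam/2)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * r * asin(sqrt(a))
```

One degree of longitude along the equator works out to roughly 69 miles, which is the sanity check I’d run on it.)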

Shirky’s account strikes me as the kind of pragmatism advocated by Richard Rorty, of whom I’m a big fan.  What makes something true in a post-metaphysical world?  Well, how about whether or not it helps you track down the balloons and win $40K?  Hurray, pragmatism!

I recognize all of the above is less about Wikipedia and more about philosophy…so thanks for indulging me this post. But do go read The Atlantic’s package.  Particularly Benkler’s response.  I’ll leave you with this Benkler nugget:

That, to me, is the biggest gift Wikipedia has given us: a way of looking at the world around us and seeing the possibility of effective human cooperation, on really complex, large projects, without relying on either market or government processes.