May 20, 2011

How will people like this respond when Judgment Day / the Rapture doesn’t arrive tomorrow?

Here’s a clue:

“A MAN WITH A CONVICTION is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point.” So wrote the celebrated Stanford University psychologist Leon Festinger (PDF), in a passage that might have been referring to climate change denial—the persistent rejection, on the part of so many Americans today, of what we know about global warming and its human causes. But it was too early for that—this was the 1950s—and Festinger was actually describing a famous case study in psychology.

Festinger and several of his colleagues had infiltrated the Seekers, a small Chicago-area cult whose members thought they were communicating with aliens—including one, “Sananda,” who they believed was the astral incarnation of Jesus Christ. The group was led by Dorothy Martin, a Dianetics devotee who transcribed the interstellar messages through automatic writing.

Through her, the aliens had given the precise date of an Earth-rending cataclysm: December 21, 1954. Some of Martin’s followers quit their jobs and sold their property, expecting to be rescued by a flying saucer when the continent split asunder and a new sea swallowed much of the United States. The disciples even went so far as to remove brassieres and rip zippers out of their trousers—the metal, they believed, would pose a danger on the spacecraft.

Festinger and his team were with the cult when the prophecy failed. First, the “boys upstairs” (as the aliens were sometimes called) did not show up and rescue the Seekers. Then December 21 arrived without incident. It was the moment Festinger had been waiting for: How would people so emotionally invested in a belief system react, now that it had been soundly refuted?

At first, the group struggled for an explanation. But then rationalization set in. A new message arrived, announcing that they’d all been spared at the last minute. Festinger summarized the extraterrestrials’ new pronouncement: “The little group, sitting all night long, had spread so much light that God had saved the world from destruction.” Their willingness to believe in the prophecy had saved Earth from the prophecy!

From that day forward, the Seekers, previously shy of the press and indifferent toward evangelizing, began to proselytize. “Their sense of urgency was enormous,” wrote Festinger. The devastation of all they had believed had made them even more certain of their beliefs.

That’s from Chris Mooney’s excellent article on the science of denial.

May 18, 2011

By far the most obnoxious line in Bill Keller’s ornery new anti-social-media New York Times Magazine column is this bit:

The shortcomings of social media would not bother me awfully if I did not suspect that Facebook friendship and Twitter chatter are displacing real rapport and real conversation, just as Gutenberg’s device displaced remembering.

This kind of thing is completely forgivable in ordinary conversation; there’s nothing wrong with having this suspicion and bringing it up in a discussion with your friends. But if you’re the editor of the nation’s leading newspaper and are making the merits of digital media your new hobbyhorse, it seems reasonable to ask that you look into your suspicions just a bit. Heck, have an intern do it.

It wouldn’t take long to learn that the best available research cuts against that suspicion. This 2009 survey data from Pew is still some of the best work on this subject. Their conclusions consistently undermine the thesis that social media use leads to isolation. Here’s just one bit from the Executive Summary:

Some have worried that internet use limits people’s participation in their local communities, but we find that most internet activities have little or a positive relationship to local activity. For instance, internet users are as likely as anyone else to visit with their neighbors in person.

There’s more at the link.

The point is that Keller’s “suspicion” doesn’t fit with what data we have. Unless he has some reason to doubt the data, or wants to clarify just why the data doesn’t capture what he’s talking about, he needs to lose the suspicion.

Also, I can’t help but counter his snark that outsourcing memory “frees a lot of gray matter for important pursuits like … ‘Real Housewives’” by reminding readers that the use of digital communication cuts into the time Americans spend watching TV, which I mentioned here.

The one critique of Keller’s that I’d love to learn more about is this:

Robert Bjork, who studies memory and learning at U.C.L.A., has noticed that even very smart students, conversant in the Excel spreadsheet, don’t pick up patterns in data that would be evident if they had not let the program do so much of the work.

“Unless there is some actual problem solving and decision making, very little learning happens,” Bjork e-mailed me. “We are not recording devices.”

But nothing more is said about this, and Bjork’s quote hardly proves the theory. Has he studied this? Does he have data? If so, that would be fascinating to see! But Keller has no time for that in his rush to share the results of a little hashtag experiment. (BREAKING: There are stupid people on Twitter.)

One more bit to point out:

My mistrust of social media is intensified by the ephemeral nature of these communications. They are the epitome of in-one-ear-and-out-the-other, which was my mother’s trope for a failure to connect.

This resonates. With so much information going in one ear, it’s hard to absorb it all. One way I cope is by cataloguing it all on Delicious so I can go back to it. (What good stuff have I come across in the past year on intelligence?) The other way I process it is by blogging. It’s easy to skim an article and let it go in one ear and out the other. The beauty of writing is that it forces you to process what you’ve read, and the post is always there for you should you ultimately forget.

UPDATE: Zeynep Tufekci, a sociology professor at U. Maryland, has a great response on her blog. She makes some fascinating points about oral vs. written cultures (can you guess which one social media fits into?), but here she is on my point above:

Keller argues that “there is something decidedly faux about the camaraderie of Facebook, something illusory about the connectedness of Twitter.” This line of argument, that our social ties are being hollowed out by digital sociality, is also fairly common. I’d like to start by saying that it is not supported by empirical research. Almost all research I have seen shows that people who are social online tend to be social offline, or at most the effect is neutral, and that most people interact socially online with people with whom they also interact offline—i.e. the relationship between online and offline sociality is mostly one of complement and reinforcement rather than displacement and replacement. Increasing numbers of people even make connections online which then they turn into offline connections (See Wang and Wellman, for example), so that even actual “virtual” connections –which I have just argued are less common—are valuable for many communities who otherwise do not have abundant peers around them, say cancer patients or gay youth in small towns.

May 16, 2011

People are bad at making predictions. That’s the conclusion of two New York Times columns from the past week. One explains the pervasiveness of “optimism bias”, which leads us to consistently overestimate our chances of success in our endeavors. The other argues that we tend to underestimate our adaptability and therefore overestimate the importance of various outcomes to our well-being. And both predictive shortcomings have benefits.

On the optimism bias:

We now know that underestimating the obstacles life has in store lowers stress and anxiety, leading to better health and well-being. This is one reason optimists recover faster from illnesses and live longer. Believing a goal is attainable motivates us to get closer to our dreams.

And on adaptability:

…an assistant professor at a distinguished university…agonizes for years about whether he will be promoted. Ultimately, his department turns him down. As anticipated, he’s abjectly miserable — but only for a few months. The next year, he’s settled in a new position at a less selective university, and by all available measures is as happy as he’s ever been…

…According to Charles Darwin, the motivational structures within the human brain were forged by natural selection over millions of years. In his framework, the brain has evolved not to make us happy, but to motivate actions that help push our DNA into the next round. Much of the time, in fact, the brain accomplishes that by making us unhappy.

In other words, we are motivated by 1) believing that our odds of success are higher than they actually are and 2) convincing ourselves that success matters more than it actually does. So we strive, despite the reality that our odds of success aren’t all that high and that success doesn’t matter much in the first place.

It’s a curious puzzle. Are we suckers? It depends on how much weight you place on subjective reports of happiness.

Behavioral economists often note that while people who become physically paralyzed experience the expected emotional devastation immediately after their accidents, they generally bounce back surprisingly quickly. Within six months, many have a daily mix of moods similar to their pre-accident experience.

This finding is often interpreted to mean that becoming physically disabled isn’t as bad as most people imagine it to be. The evidence, however, strongly argues otherwise. Many paraplegics, for instance, say they’d submit to a mobility-restoring operation even if its mortality risk were 50 percent.

The point is that when misfortune befalls us, it’s not helpful to mope around endlessly. It’s far better, of course, to adapt as quickly as possible and to make the best of the new circumstances. And that’s roughly what a brain forged by the ruthless pressures of natural selection urges us to do.

One thing I wonder: yes, our brains evolved this way. And the optimism piece even notes that people coping with depression have unusually accurate predictive capabilities. But I’d still like to know what impact an intervention that conveyed accurate predictions would have on a subject. If you go to an entrepreneur and make the case that the odds of success are quite low, 1) are you able to change his predictive beliefs, and 2) if so, does that impact his ability to succeed?

It seems possible, based on the article, that the answers are 1) likely not and 2) likely yes. But I can’t help but hope that some sort of “compatibilism” exists in this arena, where we can hold accurate predictive beliefs while keeping them more or less separate from our hopes and motivational beliefs. The article seems to be saying this isn’t how it works. But I guess what I’m asking is: how hard have we tried to test this?

F. Scott Fitzgerald once said, “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” Could that be true of the nexus between motivation and prediction?

May 11, 2011

At The Atlantic Tech, David Wheeler has a piece on the decline of witty headline writing:

In a widely circulated 2010 article criticizing SEO practices, Washington Post columnist Gene Weingarten made the same point by citing a Post article about Conan O’Brien’s refusal to accept a later time slot on NBC. The print headline: “Better never than late.” Online: “Conan O’Brien won’t give up ‘Tonight Show’ time slot to make room for Jay Leno.”

The dearth of witty headlines on the Web is enough to make a copy editor cry. But rather than settle for a humorless future, some online editors are fighting back by refusing to embrace SEO guidelines for every story.

Why is headline wordplay under threat? It’s not just “because Google doesn’t laugh,” although that’s a great line. Yes, SEO is obviously a major reason, but I’d argue it’s more than that: it’s about competition.

There is simply more competition for eyeballs than ever before. I know I come across more interesting content each day than I could ever hope to read. I’m constantly bookmarking and instapapering and starring stuff and trying to carve out time to read it all. It’s overwhelming.

So when I’m skimming through Google Reader, I want headlines that make it clear what the story is about. “Better never than late” is likely to just get Marked Read. Who has time to figure out what that’s all about?

I don’t mind the nostalgia here; clever headlines are an art, and I’ll miss them if they do end up disappearing. But I’m also pretty sympathetic to no-nonsense headlines, and not just because of SEO. The bottom line is that they save time. And that matters in the attention economy.

(You may have noticed that my “headlines” on this blog are neither useful nor clever, optimized neither for search nor for wit.)

May 7, 2011

I’ve written about the potential dangers of Google and Facebook using algorithms to recommend news, the basic fear being that they’ll recommend stories that confirm my biases rather than “feed me my vegetables.” But Nieman Lab has an interview with the founder of Google News, who has quite a different take on what he’s doing:

“Over time,” he replied, “I realized that there is value in the fact that individual editors have a point of view. If everybody tried to be super-objective, then you’d get a watered-down, bland discussion,” he notes. But “you actually benefit from the fact that each publication has its own style, it has its own point of view, and it can articulate a point of view very strongly.” Provided that perspective can be balanced with another — one that, basically, speaks for another audience — that kind of algorithmic objectivity allows for a more nuanced take on news stories than you’d get from individual editors trying, individually, to strike a balance. “You really want the most articulate and passionate people arguing both sides of the equation,” Bharat says. Then, technology can step in to smooth out the edges and locate consensus. “From the synthesis of diametrically opposing points of view,” in other words, “you can get a better experience than requiring each of them to provide a completely balanced viewpoint.”

“That is the opportunity that having an objective, algorithmic intermediary provides you,” Bharat says. “If you trust the algorithm to do a fair job and really share these viewpoints, then you can allow these viewpoints to be quite biased if they want to be.”

[emphasis from Nieman Lab]

A few thoughts:

1. It is very encouraging that Krishna Bharat is thinking about this, even if only as a piece of what he’s doing.

2. He’s right that whether or not you can trust the algorithm matters tremendously.

3. I remain skeptical that there’s any incentive for the algorithm to challenge me. Does he believe that challenging me provides something I want and am more likely to click on, such that this vision fits nicely with Google’s bottom line? Or is he suggesting that he and his team are worried about more than the bottom line?

Bottom line: it’s great that he’s thinking about this, but he needs to explain why we should really believe it’s a priority if he wants us to truly trust the algorithm.

May 4, 2011

Edge has a conversation with cognitive scientist Hugo Mercier about a paper he co-wrote with Dan Sperber on “the argumentative theory” of human reasoning. Here’s the gist:

In Western thought, for at least the last couple hundred years, people have thought that reasoning was purely for individual reasons. But Dan challenged this idea and said that it was a purely social phenomenon and that the goal was argumentative, the goal was to convince others and to be careful when others try to convince us.

And the beauty of this theory is that not only is it more evolutionarily plausible, but it also accounts for a wide range of data in psychology. Maybe the most salient of phenomena that the argumentative theory explains is the confirmation bias.

This is a neat idea. The first question that comes to mind for me is this: if reasoning isn’t aimed, at least in part, at developing correct beliefs, why would reasons be useful for convincing others? In other words, if I’m not using reasoning in the traditional Enlightenment sense, then why would I treat reasons as useful input when someone else tries to convince me? Reasons would seem to be more useful tools for persuasion in a world where individuals were also using them as tools for obtaining correct beliefs.

It seems plausible that the argumentative aspect of reasoning is a crucial component evolutionarily, but off the top of my head it seems like a stretch to say it’s the whole enchilada. Of course, I know very little about this area, so that’s nothing more than a thought. Edge notes that this theory has been met with some controversy, although it also seems to be viewed as a credible contribution.

Interesting stuff.

May 1, 2011

Chris Mooney had a great piece at Mother Jones recently that has been making the rounds. The title is “The Science of Why We Don’t Believe in Science,” and it’s a good primer on some of the literature on how we rationalize to protect our biases and, more generally, our worldview. If you haven’t read it yet, I highly recommend it. Here’s the gist:

Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. “They retrieve thoughts that are consistent with their previous beliefs,” says Taber, “and that will lead them to build an argument and challenge what they’re hearing.”

In other words, when we think we’re reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we’re being scientists, but we’re actually being lawyers (PDF). Our “reasoning” is a means to a predetermined end—winning our “case”—and is shot through with biases. They include “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.

This is related to the concept of “sacred beliefs” that I’ve been harping on lately, and the general point I want to make is that the problem Mooney sketches out can be viewed as a challenge that media can help overcome. What if the media you’re consuming knew, from your history and profile, that you had certain biases, and therefore presented information in a way that made it easier for you to overcome those biases?

There are clearly a number of challenges here. Determining bias is tricky in the first place, because it has to be done in reference to some “truth,” which, given the nature of the problem, is likely controversial. And even once that is done, there would need to be a way to measure progress in overcoming biases. But we take for granted that digital media offers the opportunity to design experiences that are customized and interactive in a way newspapers and other “old media” are not. Why not focus on cognitive personalization aimed at helping us think more rationally?
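To make that concrete, here’s a purely speculative sketch of what such a system might do: given an estimate of a reader’s lean on a topic, deliberately reserve part of the feed for counter-attitudinal pieces. Every function, score, and threshold here is hypothetical; nothing like this appears in Mooney’s article:

```python
import random

def build_feed(articles, user_lean, counter_share=0.3, seed=None):
    """Mix counter-attitudinal articles into a personalized feed.

    articles: list of (headline, stance) pairs, stance in [-1, 1]
    user_lean: the reader's estimated position in [-1, 1]
    counter_share: fraction of the feed reserved for the opposing side
    """
    rng = random.Random(seed)
    opposing = [a for a in articles if a[1] * user_lean < 0]
    aligned = [a for a in articles if a[1] * user_lean >= 0]
    quota = round(len(articles) * counter_share)
    # Take as many opposing pieces as available, up to the quota...
    feed = rng.sample(opposing, min(quota, len(opposing)))
    # ...then fill the remainder with aligned pieces and interleave.
    feed += aligned[: len(articles) - len(feed)]
    rng.shuffle(feed)
    return feed

feed = build_feed(
    [("Why X works", 0.7), ("X: a skeptic's view", -0.8), ("X, explained", 0.1)],
    user_lean=0.6,
    seed=42,
)
for headline, stance in feed:
    print(f"{stance:+.1f}  {headline}")
```

The hard parts the paragraph above identifies (estimating user_lean and the stance scores against some contested “truth,” and measuring whether the mix actually reduces bias) are exactly what this sketch assumes away.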
