Storytelling, Trust and Bias

One intellectual rule of thumb on which I rely is that one disagrees with Tyler Cowen at one’s peril. Cowen is an economist at George Mason, and is one of the two bloggers at Marginal Revolution, a very popular blog on economics and culture. So while I won’t call what I’m about to write a disagreement, I do want to offer thoughts on Cowen’s TEDx talk on storytelling. (The talk is from 2009, but I just wandered across a transcript of it for the first time today.) Here’s Cowen’s typically interesting premise:

So what are the problems of relying too heavily on stories? …I think of a few major problems when we think too much in terms of narrative. First, narratives tend to be too simple. The point of a narrative is to strip it away, not just into 18 minutes, but most narratives you could present in a sentence or two. So when you strip away detail, you tend to tell stories in terms of good vs. evil, whether it’s a story about your own life or a story about politics. Now, some things actually are good vs. evil. We all know this, right? But I think, as a general rule, we’re too inclined to tell the good vs. evil story. As a simple rule of thumb, just imagine every time you’re telling a good vs. evil story, you’re basically lowering your IQ by ten points or more. If you just adopt that as a kind of inner mental habit, it’s, in my view, one way to get a lot smarter pretty quickly. You don’t have to read any books. Just imagine yourself pressing a button every time you tell the good vs. evil story, and by pressing that button you’re lowering your IQ by ten points or more.

Another set of stories that are popular – if you know Oliver Stone movies or Michael Moore movies. You can’t make a movie and say, “It was all a big accident.” No, it has to be a conspiracy, people plotting together, because a story is about intention. A story is not about spontaneous order or complex human institutions which are the product of human action but not of human design. No, a story is about evil people plotting together. So you hear stories about plots, or even stories about good people plotting things together, just like when you’re watching movies. This, again, is reason to be suspicious.

It is certainly true that we rely heavily on stories. And it’s further true that in doing so we tend towards oversimplification. The human mind doesn’t do well with uncertainty; we seek narrative coherence even where it isn’t justified. And yet stories are deeply useful. They help us process information more easily. In his recent book Thinking, Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman relies on a kind of story to relate the way we think, through the use of “characters” System 1 and System 2. The former is the intuitive mind, making quick decisions below the level of consciousness. The latter is what we think of as the rational mind, coming to our aid when we consciously reason through something. The dichotomy has some acceptance in the psychology literature (not as a distinction within the brain, but as a theoretical distinction for studying thinking), but some of Kahneman’s colleagues objected to his personification of these “characters.” Here’s Kahneman explaining his conceit:

System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters. Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home. You may well ask: What is the point of introducing fictitious characters with ugly names into a serious book? The answer is that the characters are useful because of some quirks of our minds, yours and mine. A sentence is understood more easily if it describes what an agent (System 2) does than if it describes what something is, what properties it has. In other words, “System 2” is a better subject for a sentence than “mental arithmetic.” The mind – especially System 1 – appears to have a special aptitude for the construction and interpretation of stories about active agents, who have personalities, habits, and abilities. (p. 29)

I believe this is a better way of thinking about stories. It may be true that embedding information in stories generally leads to oversimplification or avoidance of uncertainty. On the other hand, plenty of stories can be nuanced and accurately relate complicated information. But even if storytelling does sacrifice something, it gives us the ability to digest and remember information much more quickly and easily. And the fact is that, in practice, most of us are working with very limited resources (most notably time). We need all the help we can get to process information. If stories can help, that’s generally a good thing.

Yet I generally agree with Cowen’s point about stories that seem too convenient (especially Michael Moore movies!). But I’d like to propose that, rather than setting up a mental filter to resist certain types of stories, we focus our efforts on evaluating the sources of those stories.

Here’s where trust and credibility come in. When Kahneman says he’s going to tell me a story about two characters that make up the mind, I trust that he won’t mislead me, that he won’t overstate his case, or eschew complexity so completely that I’m left with a misguided impression. I believe that he’s trying to help me get a basic grip on very complicated information as best he can given the time I’ve allotted to learn it. That’s because he comes recommended by lots of thinkers whom I respect, and because he’s extraordinarily well credentialed. I find him credible and so I trust him.

I’d urge us all to spend more effort evaluating whom we trust – whose stories we’ll buy and whose we’ll treat with Cowen-esque skepticism. And perhaps one metric for assessing credibility would in fact be to apply Cowen’s criteria (does so-and-so constantly tell black-and-white stories?). This seems a more promising path. After all, at this point if either Cowen or Kahneman told me a good vs. evil story, I’d believe him.



Being told “Be Rational” doesn’t de-bias

More bias research. I’ve been digging in pretty deeply on interventions that help mitigate motivated reasoning, and the results aren’t great. There’s self-affirmation, which I discussed in my Atlantic piece, but beyond that it’s pretty thin pickings. Motivated reasoning doesn’t track significantly with open-mindedness, and interventions urging participants to be rational seem to have little to no effect. I’d like to see more work on this because I can imagine better pleas (like explaining the pervasiveness of bias, or prompting in-group loyalty to those who consider opposing arguments), but for what it’s worth, here is a bit of a paper measuring self-affirmation that also included rationality prompts:

It is of further theoretical relevance that the self-affirmation manipulation used in the present research and the identity buffering it provided exerted no effect on open-mindedness or willingness to compromise in situations heightening the importance of being rational and pragmatic. This lack of impact of self-affirmation, we argue, reflects the fact that the identity-relevant goal of demonstrating rationality (in contrast with that of demonstrating one’s ideological fidelity or of demonstrating one’s open-mindedness and flexibility) is not necessarily compromised either by accepting counterattitudinal arguments or by rejecting them. Both responses are consistent with one’s identity as a rational individual, provided that such acceptance or rejection is perceived to be warranted by the quality of those arguments. The pragmatic implication of the latter finding is worth emphasizing. It suggests that rhetorical exhortations to be rational or accusations of irrationality may succeed in heightening the individual’s commitment to act in accord with his or her identity as a rational person but fail to facilitate open-mindedness and compromise. Indeed, if one’s arguments or proposals are less than compelling, such appeals to rationality may be counterproductive. Simple pleas for open-mindedness, in the absence of addressing the identity stakes for the recipient of one’s arguments and proposals, are similarly likely to be unproductive or even counterproductive. A better strategy, our findings suggest, would be to provide the recipient with a prior opportunity for self-affirmation in a domain irrelevant to the issue under consideration and then (counterintuitively) to heighten the salience of the recipient’s partisan identity.

More discussion of this phenomenon:

Why did a focus on rationality or pragmatism alone prove a less effective debiasing strategy than the combination of identity salience and affirmation—the combination that, across all studies, proved the most effective at combating bias and closed-mindedness? Two accounts seem plausible. First, the goals of rationality and pragmatism may not fully discourage the application of prior beliefs. Because people assume their own beliefs to be more valid and objective than alternative beliefs (Armor, 1999; Lord et al., 1979; Pronin, Gilovich, & Ross, 2004; Ross & Ward, 1995), telling them to be rational may constitute a suggestion that they should continue to use their existing beliefs in evaluating the validity of new information (Lord, Lepper, & Preston, 1984). Second, making individuals’ political identity or their identity-linked convictions salient may increase the perceived significance of the political issue under debate or negotiation. Because identities are tied to long-held values (Cohen, 2003; Turner, 1991), making those identities salient or relevant to an issue may elicit moral concern, at least when people’s self-integrity no longer depends on prevailing over the other party.


Willpower and belief

I’ve blogged a bunch now about Roy Baumeister’s work on self-control, including the idea that willpower is finite in the short term and is depleted throughout the day as you use it. So I feel compelled to post this NYT op-ed claiming something quite different. I don’t know who’s right, but here’s the gist:

In research that we conducted with the psychologist Veronika Job, we confirmed that willpower can indeed be quite limited — but only if you believe it is. When people believe that willpower is fixed and limited, their willpower is easily depleted. But when people believe that willpower is self-renewing — that when you work hard, you’re energized to work more; that when you’ve resisted one temptation, you can better resist the next one — then people successfully exert more willpower. It turns out that willpower is in your head…

…You may contend that these results show only that some people just happen to have more willpower — and know that they do. But on the contrary, we found that anyone can be prompted to think that willpower is not so limited. When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.

I’ll keep my eyes open for a response to this from Baumeister or his colleagues; let me know if you see one. Meanwhile, this reminded me of a similar phenomenon with respect to IQ:

Yet social psychologists Aronson, Fried, and Good (2001) have developed a possible antidote to stereotype threat. They taught African American and European American college students to think of intelligence as changeable, rather than fixed – a lesson that many psychological studies suggest is true. Students in a control group did not receive this message. Those students who learned about IQ’s malleability improved their grades more than did students who did not receive this message, and also saw academics as more important than did students in the control group. Even more exciting was the finding that Black students benefited more from learning about the malleable nature of intelligence than did White students, showing that this intervention may successfully counteract stereotype threat.

Both of these lines of research suggest that belief matters. Fascinating stuff.


Don’t blog on an empty stomach

[A video clip appeared here, covering some basics of mental energy and depletion.]

The alternative title for this post was “I’m hungry; you’re wrong.” I’m not sure which is better… In any case, consider this bit from Kahneman:

Resisting this large collection of potential availability biases is possible, but tiresome. You must make the effort to reconsider your intuitions… Maintaining one’s vigilance against biases is a chore — but the chance to avoid a costly mistake is sometimes worth the effort.

Now as I understand it, this is basically a function of self-control. By taxing your brain to counteract biases, you’re drawing on a finite pool of mental energy. We know from studies of willpower that doing so can cause problems. As John Tierney reported in an excellent NYT Magazine piece on decision fatigue:

Decision fatigue helps explain why ordinarily sensible people get angry at colleagues and families, splurge on clothes, buy junk food at the supermarket and can’t resist the dealer’s offer to rustproof their new car. No matter how rational and high-minded you try to be, you can’t make decision after decision without paying a biological price. It’s different from ordinary physical fatigue — you’re not consciously aware of being tired — but you’re low on mental energy.

He also relates a fascinating study of Israeli parole hearings:

There was a pattern to the parole board’s decisions, but it wasn’t related to the men’s ethnic backgrounds, crimes or sentences. It was all about timing, as researchers discovered by analyzing more than 1,100 decisions over the course of a year. Judges, who would hear the prisoners’ appeals and then get advice from the other members of the board, approved parole in about a third of the cases, but the probability of being paroled fluctuated wildly throughout the day. Prisoners who appeared early in the morning received parole about 70 percent of the time, while those who appeared late in the day were paroled less than 10 percent of the time.

It gets more interesting:

As the body uses up glucose, it looks for a quick way to replenish the fuel, leading to a craving for sugar… The benefits of glucose were unmistakable in the study of the Israeli parole board. In midmorning, usually a little before 10:30, the parole board would take a break, and the judges would be served a sandwich and a piece of fruit. The prisoners who appeared just before the break had only about a 20 percent chance of getting parole, but the ones appearing right after had around a 65 percent chance. The odds dropped again as the morning wore on, and prisoners really didn’t want to appear just before lunch: the chance of getting parole at that time was only 10 percent. After lunch it soared up to 60 percent, but only briefly.

So, returning to the Kahneman bit, I wonder if we might observe a similar phenomenon with respect to political bloggers. Would ad hominem attacks follow the same pattern throughout the day? Might bloggers who had just eaten have the mental energy to counter their biases, to treat opponents with respect, etc.? And might that ability be depleted as the time between meals wears on and their mental energy is lowered? This could be tested pretty easily by analyzing the frequency of certain ad hominem clues like, say, the use of the word “idiot”, and then checking frequency against time of day. I’d love to see this data, and not just because I want an excuse to snack while I write.
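If anyone ever wanted to run that test, the analysis itself is simple. Here’s a minimal sketch in Python; the posts.csv file, its column names, and the cue-word list are all hypothetical stand-ins for whatever corpus of timestamped blog posts you could actually assemble.

```python
import csv
from collections import Counter
from datetime import datetime

# Hypothetical list of ad hominem cue words to count.
AD_HOMINEM_CUES = {"idiot", "moron", "stupid", "fool"}

def hourly_cue_rate(path="posts.csv"):
    """Count cue words per 1,000 words, bucketed by hour of posting.

    Assumes a CSV with 'published_at' (ISO timestamp) and 'body' columns.
    """
    words_by_hour = Counter()
    cues_by_hour = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            hour = datetime.fromisoformat(row["published_at"]).hour
            words = row["body"].lower().split()
            words_by_hour[hour] += len(words)
            cues_by_hour[hour] += sum(w.strip(".,!?") in AD_HOMINEM_CUES for w in words)
    return {h: 1000 * cues_by_hour[h] / words_by_hour[h]
            for h in sorted(words_by_hour) if words_by_hour[h]}

if __name__ == "__main__":
    for hour, rate in hourly_cue_rate().items():
        print(f"{hour:02d}:00  {rate:.2f} cue words per 1,000 words")
```

If the decision-fatigue story holds, you’d expect the rate to creep up in the hours before typical mealtimes and dip just after.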


Algorithms and the future of divorce

In Chapter 21 of Thinking, Fast and Slow, Daniel Kahneman discusses the frequent superiority of algorithms over intuition. He documents a wide range of studies showing that algorithms tend to beat expert intuition in areas such as medicine, business, career satisfaction and more. In general, the value of algorithms tends to be in “low-validity environments,” which are characterized by “a significant degree of uncertainty and unpredictability.”*

Further, says Kahneman, the algorithms in question need not be complex:

…it is possible to develop useful algorithms without any prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes. In a memorable example, Dawes showed that marital stability is well predicted by a formula:

frequency of lovemaking minus frequency of quarrels

You don’t want your result to be a negative number.
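Just to underline how little machinery an “equally weighted formula” involves, here is the quoted Dawes example in code; the weekly counts are made up for illustration.

```python
def marital_stability(lovemaking_per_week: int, quarrels_per_week: int) -> int:
    """Dawes's equally weighted predictor: each variable counts once, no model fitting."""
    return lovemaking_per_week - quarrels_per_week

print(marital_stability(3, 1))   #  2: positive, the good sign
print(marital_stability(1, 4))   # -3: the negative number you don't want
```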

Kahneman concludes the chapter with an example of how this might be used practically: hiring someone at work.

A vast amount of research offers a promise: you are much more likely to find the best candidate if you use this procedure than if you do what people normally do in such situations, which is to go into the interview unprepared and to make choices by an overall intuitive judgment such as “I looked into his eyes and liked what I saw.”

All of this makes me think of online dating. This is an area where we are transitioning from almost pure intuition to a mixture of algorithms and intuition. Though algorithms aren’t making any final decisions, they are increasingly playing a major role in shaping people’s dating activity. If Kahneman is right, and if finding a significant other is a “low-validity environment,” will our increased use of algorithms lead to better outcomes? What truly excites me about this is that we should be able to measure it. Of course, doing so will require very careful attention to the various confounding variables, but I can’t help but wonder: will couples that meet online have a lower divorce rate in 20 years than couples that didn’t? Will individuals who spent significant time dating online be less likely to have been divorced than those that never tried it?
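The eventual comparison itself would be statistically simple; the hard part is the data collection and the confound control. Here’s a sketch of the most basic version, a two-proportion z-test, with invented numbers standing in for a 20-year follow-up:

```python
from math import sqrt, erf

def two_proportion_z(divorced_a, n_a, divorced_b, n_b):
    """Did group A (met online) divorce at a different rate than group B
    (met offline)? Returns the z statistic and a two-sided p-value."""
    p_a, p_b = divorced_a / n_a, divorced_b / n_b
    pooled = (divorced_a + divorced_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return z, p

# Invented numbers: 180 of 1,000 online couples divorced vs. 220 of 1,000 offline.
print(two_proportion_z(180, 1000, 220, 1000))
```

A real study would need to control for age, income, education, religiosity and the rest, so this is only the skeleton of the measurement, not the study design.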

*One might reasonably object that this definition stacks the deck against intuition, and I think this aspect of the debate deserved a mention in the chapter. A focus on “low-validity environments” is, by definition, a focus on areas where intuition is lousy. So how shocking is it that these are cases where other methods do better? And yet the conclusions here are extremely valuable. Even though we know that these “low-validity” scenarios are tough to predict, we still generally tend to overrate our ability to predict via intuition and to underrate the value of simple algorithms. So in the end this caveat – while worth making – doesn’t really take away from Kahneman’s point.


Fight bias with math

I just finished the chapter in Kahneman’s book on reasoning that deals with “taming intuitive predictions.” Basically, we make predictions that are too extreme: we ignore regression to the mean, assume the evidence to be stronger than it is, and ignore other variables through a phenomenon called “intensity matching.”

Here’s an example (not from the book; made up by me):

Jane is a ferociously hard-working student who always completes her work well ahead of time.

What GPA do you think she graduates college with? Formulate an actual number in your mind.

So Kahneman explains “intensity matching” as being able to toggle back and forth intuitively between variables. If it sounds like Jane is in the top 10% in motivation/work ethic, she must be in the top 10% in GPA. And our mind is pretty good at adjusting between those two. I’m going to pick 3.7 as the intuitive GPA number; if yours is different you can substitute it in below.

Kahneman says this is biased because you’re ignoring regression to the mean, which is another way of saying that GPA and work ethic aren’t perfectly correlated. So here’s a model that uses Kahneman’s trick for taming your prediction.

GPA = work ethic + other factors

What is the correlation between work ethic and GPA? Let’s guess 0.3 (it can be whatever you think is most accurate).

Now what is the average GPA of college students? Let’s say 2.5. (Again, it doesn’t matter much.)

Here’s Kahneman’s formula for taming your intuitive predictions:

0.3 × (3.7 − 2.5) + 2.5 = 2.86, the statistically reasonable prediction

So apply the correlation between GPA and work ethic to the difference between your intuitive prediction and the mean, and then go from the mean in the direction of your intuition by that amount.
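Here’s the whole procedure as a small function, using the made-up numbers from above; as noted, the correlation and the baseline are whatever values you find plausible.

```python
def tame_prediction(intuitive, baseline, correlation):
    """Kahneman's correction: start at the baseline (the mean) and move toward
    the intuitive prediction only as far as the correlation justifies."""
    return baseline + correlation * (intuitive - baseline)

# Made-up numbers: intuitive GPA 3.7, average GPA 2.5, correlation 0.3.
print(tame_prediction(3.7, 2.5, 0.3))  # 2.86
```

Note the two limiting cases: with a correlation of 1 you keep your intuitive prediction, and with a correlation of 0 you just predict the mean.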

I played around with some different examples here because my intuition was grappling with some issues around luck vs. static variables, but those aside, this is a neat way to counter one’s bias in the face of limited information.

I can’t help but wonder, though, if the knowledge that this exercise was designed to counter bias led anyone to avoid or at least temper intensity matching. In other words, what were your intuitions for the GPA she’d have after just reading the description of her hard work? Did the knowledge that you were biased lead you to a lower score than the one I mentioned?

Here’s what I’m getting at… If it’s possible (and this is just me riffing right now) to dial down your biases, either consciously or not, when the issue of bias is on your mind, it would seem possible that one’s intuitions could be dialed down going into this exercise, at the point of the original GPA intuition, which could ruin the outcome. Put another way, the math above relies on accurate intensity matching, which is itself a bias! If someone came into this with that bias dialed down, they might actually end up with a worse prediction if they also applied Kahneman’s suggested process.


Poverty, culture, economics

If you’re at all interested in the science of willpower, self-control, or decision-making (and I am) you really should read John Tierney’s excellent NYT Magazine piece on the subject. Here’s one nugget:

Spears and other researchers argue that this sort of decision fatigue is a major — and hitherto ignored — factor in trapping people in poverty. Because their financial situation forces them to make so many trade-offs, they have less willpower to devote to school, work and other activities that might get them into the middle class. It’s hard to know exactly how important this factor is, but there’s no doubt that willpower is a special problem for poor people. Study after study has shown that low self-control correlates with low income as well as with a host of other problems, including poor achievement in school, divorce, crime, alcoholism and poor health. Lapses in self-control have led to the notion of the “undeserving poor” — epitomized by the image of the welfare mom using food stamps to buy junk food — but Spears urges sympathy for someone who makes decisions all day on a tight budget. In one study, he found that when the poor and the rich go shopping, the poor are much more likely to eat during the shopping trip. This might seem like confirmation of their weak character — after all, they could presumably save money and improve their nutrition by eating meals at home instead of buying ready-to-eat snacks like Cinnabons, which contribute to the higher rate of obesity among the poor. But if a trip to the supermarket induces more decision fatigue in the poor than in the rich — because each purchase requires more mental trade-offs — by the time they reach the cash register, they’ll have less willpower left to resist the Mars bars and Skittles. Not for nothing are these items called impulse purchases.

When we talk about poverty, we inevitably talk about various “cultural” issues, by which we mostly mean “non-economic” issues. Economic improvement can’t pull people out of poverty until we solve various cultural issues that are holding people back, or so the story goes. But we should really look at these as all part of the same cycle. Being poor puts you at a distinct and empirically demonstrable disadvantage when it comes to exerting self-control. Lack of self-control tends to play a large role in life outcomes. Much of what we think of as the “culture” of poverty may in fact be very much an economic issue.

So you’re smart, but are you reasonable?

I was searching for this phantom post pointing to research on verbal reasoning scores and bias (I swear I saw it!) when I came across a fascinating 1997 paper titled Reasoning Independently of Prior Belief and Individual Differences in Actively Open-Minded Thinking. It’s got some neat if perhaps not totally surprising conclusions.

First, a quick disclosure: I don’t know this research area at all. If this paper got trashed by all its peers or if its results haven’t held up over time, I wouldn’t know. I’ve looked at the authors’ faculty pages and it looks like they’ve done more recent work that I’ll dig into at some point soon.

OK, so what’s the point of this research? (Apologies for the screenshots; the paper is a non-searchable PDF.) [A screenshot excerpt stating the paper’s aims appeared here.]

I’ll skip how they did the experiment and go right to the findings. Read the paper for yourself if you’re interested.

From the discussion: [A screenshot of the paper’s discussion section appeared here.]

The first question this raises in my mind is the extent to which this sort of reasoning style is alterable, both in the short and the long term. To the extent that it can change over the long term, it has implications for education and beyond. (Perhaps it’s possible to teach someone to reason outside of their priors, and we do so in high school and college, but the skill lapses over time? That’s just one example of a potential implication.) In the short term this interests me because the priming of epistemic goals could be a central feature of better media design aimed at negating bias. I’m looking forward to reading more on this topic.


Examples of how media could help overcome bias

I have a piece up at The Atlantic (went up Friday) titled “The Future of Media Bias” that I hope you’ll read. I suppose the title is deliberately misleading, since the topic isn’t media bias in the typical sense. Here’s the premise:

Context can affect bias, and on the Web — if I can riff on Lessig — code is context. So why not design media that accounts for the user’s biases and helps him overcome them?

Head over to The Atlantic to read it. In the meantime, I want to expand a bit on some of my ideas.

1) This is not just about pop-up ads. The conceit of the post is visiting a conservative friend’s site and being hit with red-meat pop-ups that act as priming mechanisms. But that was just a way of introducing my point. (Evidence from comments and Twitter suggests this may have distracted some.) So while pop-ups can illustrate the above premise, the premise is in no way restricted to the impact of pop-ups, either as they exist in practice to sell ads or as they might be used in theory.

2) More on self-affirmation. I might have been clearer on how self-affirmation exercises work, although they were not described in detail in either paper I referenced. Here’s how I understand them: you’re asked to select a value that is important to your self-worth – maybe something like honesty – and then you write a few sentences about how you live by that value. Writing out a few sentences about what an honest and therefore valuable person you are makes you less worried about information threatening your self-worth.

I want to address a few potential objections to embedding an exercise like this in media. One might argue that no one would complete the exercise (I’m imagining it as a pop-up right now). Perhaps. But you could incentivize it: perhaps anyone who completes it gets their comments displayed higher, or something like that. Build incentives into a community reputation system, as in the sketch below. A second objection is that maybe you could get people to complete it once, but it’s impractical to think anyone would do it before every article. Fair point. But perhaps you just need people to do it once, and then it’s displayed alongside or above the content, for the reader to view, to prime them. Finally, I want to note that this is just one example, and I don’t think my argument rides too much on it. The reasons I used it were a) there was lots of good research behind it and b) it fit nicely with the pop-up conceit of the post.
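To make the incentive idea concrete, here is a minimal sketch; the field names and the boost size are invented for illustration, and the completed_affirmation flag stands in for whatever the pop-up exercise would record.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    upvotes: int
    completed_affirmation: bool  # hypothetical flag set by the affirmation pop-up

def rank_score(comment: Comment, boost: float = 1.25) -> float:
    """Rank by upvotes, multiplied by a boost if the reader did the exercise."""
    score = float(comment.upvotes)
    return score * boost if comment.completed_affirmation else score

comments = [Comment("Nice piece.", 10, False), Comment("I disagree, but…", 9, True)]
for c in sorted(comments, key=rank_score, reverse=True):
    print(f"{rank_score(c):5.2f}  {c.text}")
```

The design choice worth flagging: the boost rewards doing the exercise, not agreeing with anything, so it shouldn’t distort the substance of the discussion.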

3) More examples. One paper I referenced re: global warming suggests that the headline can impact susceptibility to confirmation/disconfirmation biases. So what if the headline changed depending on the user’s biases? This would be tricky in various ways, but it’s hardly inconceivable. In fact, I wish I’d mentioned it, since in some ways it seems more practical than the self-affirmation exercises. It would, however, introduce a lot of new difficulties into the headline-writing process.

Another thing I might have mentioned is the ordering of content. Imagine you’re looking at a Room for Debate at NYT. Which post should you see first? In the course of researching the Atlantic piece, I came across some evidence that the order in which you receive information matters (with the first piece being privileged), but I’m having trouble finding where I saw that now. And it’s not obvious that that kind of effect would persist in cases of political information. But, still, there may well be room to explore ordering as a mechanism for dealing with bias.

Finally – and at this point I’m working off of no research and just thinking out loud – what if you established the author’s credibility by showing the work the user was most likely to agree with, in cases of disconfirmation bias (and the reverse in confirmation bias cases)? So, say I’m reading about climate change and you knew I’d be biased against evidence for it. But the author making the case for that evidence wrote something last week that I do agree with, that does fit my worldview. What if a teaser for that piece was displayed alongside the global warming content? Would that help?

I’ve also wondered if asking users to solve fairly simple math problems would prime them to think more rationally, but again, that’s not anything based on research; just a thought.

So that’s it. A few clarifications and some extra thoughts. To me, the hope of this piece would be to inject the basic idea into the dialogue, so that researchers start to think of media as an avenue for testing their theories, and so that designers, coders, journalists, etc. start thinking of this research as input for innovation into how they create new media.

UPDATE: One more cool one I forgot to mention… there’s some evidence that graphical information makes a point more forcefully, such that it would basically take too much mental energy to rationalize around it. You can read more about that here. This column in the Boston Globe refers to a study concluding – in the columnist’s words – that “people will actually update their beliefs if you hit them ‘between the eyes’ with bluntly presented, objective facts that contradict their preconceived ideas.” This strikes me as along the same lines as the graph experiment. And so these are things to keep in mind as well.


Your memories are bought and paid for

I’ve been reading a lot about cognitive biases lately for a piece I recently finished (which hopefully will be published soon), and I wanted to share something only slightly related to that topic, which didn’t make it into the piece. Jonah Lehrer has a characteristically fascinating post at Wired on how ads implant false memories. You really should read it all. But here’s a bit that struck me, from the perspective of someone interested in media:

A new study, published in The Journal of Consumer Research, helps explain both the success of this marketing strategy and my flawed nostalgia for Coke. It turns out that vivid commercials are incredibly good at tricking the hippocampus (a center of long-term memory in the brain) into believing that the scene we just watched on television actually happened. And it happened to us.

The experiment went like this: 100 undergraduates were introduced to a new popcorn product called “Orville Redenbacher’s Gourmet Fresh Microwave Popcorn.” (No such product exists, but that’s the point.) Then, the students were randomly assigned to various advertisement conditions. Some subjects viewed low-imagery text ads, which described the delicious taste of this new snack food. Others watched a high-imagery commercial, in which they watched all sorts of happy people enjoying this popcorn in their living room. After viewing the ads, the students were then assigned to one of two rooms. In one room, they were given an unrelated survey. In the other room, however, they were given a sample of this fictional new popcorn to taste. (A different Orville Redenbacher popcorn was actually used.)
One week later, all the subjects were quizzed about their memory of the product. Here’s where things get disturbing: While students who saw the low-imagery ad were extremely unlikely to report having tried the popcorn, those who watched the slick commercial were just as likely to have said they tried the popcorn as those who actually did. Furthermore, their ratings of the product were as favorable as those who sampled the salty, buttery treat. Most troubling, perhaps, is that these subjects were extremely confident in these made-up memories. The delusion felt true. They didn’t like the popcorn because they’d seen a good ad. They liked the popcorn because it was delicious.

Read the whole post to learn more about the science here. But isn’t that variable of text vs. video fascinating?