Why did America get rich?

We published a piece last week at HBR.org by Martin Feldstein, the widely respected Harvard economist and conservative, in which he lays out ten reasons why the U.S. remains richer than its peers. It just so happens that at the same time I was reading American Amnesia, a defense of government and the mixed economy, by political scientists Jacob Hacker and Paul Pierson. And they offer their own ten reasons why economies grow. I thought it’d be instructive to compare them.

Here’s Feldstein:

  1. An entrepreneurial culture.

  2. A financial system that supports entrepreneurship.

  3. World-class research universities.

  4. Labor markets that generally link workers and jobs unimpeded by large trade unions, state-owned enterprises, or excessively restrictive labor regulations.

  5. A growing population, including from immigration.

  6. A culture (and a tax system) that encourages hard work and long hours.

  7. A supply of energy that makes North America energy independent.

  8. A favorable regulatory environment.

  9. A smaller size of government than in other industrial countries.

  10. A decentralized political system in which states compete.

Here’s Hacker and Pierson, on what they consider factors for why nations get rich that “would make most analysts’ lists”:

  1. private property rights and legally secure contracts backed up by an independent legal system;

  2. a well-functioning financial system, including a central bank to provide a common currency, manage the macroeconomy, and serve as lender of last resort;

  3. internal markets linked by high-quality communications and transportation infrastructure;

  4. policies supporting and regulating external trade and financial flows;

  5. substantial public investment in R&D and education;

  6. regulation of markets to protect against externalities, such as pollution, and help consumers make informed decisions;

  7. public provision of goods that won’t be provided at all or sufficiently if left to markets, such as public health;

  8. inclusion of all sectors of society in the economy, so that human capital isn’t wasted;

  9. reasonably independent and representative political institutions, so that elite capture and rent seeking aren’t rife; and

  10. reasonably capable and autonomous public administration — including an effective tax system that citizens view as legitimate — so that items 1 through 9 can be carried out in relatively efficient and unbiased ways.

These are very different lists, and they mirror America’s political divisions in fairly predictable ways. They disagree over the size of government, for example, and about unions.

Yet, there are some key themes. Scientific research makes both lists, as does education and the role of human capital in general. Effective government shows up on both lists, though the authors disagree about how to encourage it. A financial system that allocates capital well makes both lists, though again with different points of emphasis. Feldstein would likely agree that private property and secure contracts are critical; and I suspect Hacker and Pierson would agree that entrepreneurship belongs on the list.

A couple of years ago I posted about “How to promote economic growth, in one paragraph” and unsurprisingly there’s considerable overlap between that paragraph — by McAfee and Brynjolfsson — and the one I just wrote. In that paragraph, they emphasized education and investments in technology. Like Hacker and Pierson, they highlighted infrastructure.

It’s sometimes easier to talk about economic growth, I think, if you put size of government to the side to begin with. Economies grow largely because of the spread of new and useful ideas. Those ideas are often produced by government-financed research and disseminated by entrepreneurs. Every part of that process depends on people, which means that investments in education are critical. The role of finance is largely to fund that process.

If we can agree on that much, then we can debate the role of government with reference to it. As the two lists I started with show, there are aspects of economic growth where experts disagree. But it’s easy to overstate the disagreement. In fact, as McAfee and Brynjolfsson wrote in that Foreign Affairs piece a few years back, in terms of promoting growth “there is a near consensus among serious economists about many of the policies that are necessary.”

The status quo gap

Everything in America seems polarized right now, so perhaps it’s no surprise that innovation is as well. In Silicon Valley, a small fraction of the population wants to push every technical boundary, transform every industry, re-imagine every job, and re-engineer the limits of life itself. Meanwhile, much of the rest of the country takes solace in tradition, and seeks to preserve the way things are.

That’s the theme of my column in the latest issue of HBR, about a few new books and a film, all in one way or another about innovation. Here’s the gist:

Wolfe focuses on PayPal’s founder, Peter Thiel, and a cohort of teenagers selected by his foundation to forgo college and start companies, but she also lets us peek into a variety of tech subcultures—from seasteaders to polygamists to those who, like Thiel, chase immortality through investments in life-extending technology. The suggestion isn’t that these pursuits are inherently flawed; it’s that they stem from a single-minded desire to push boundaries—technological and social.

Meanwhile, in the rest of the country, Cowen argues, most “Americans are in fact working much harder than before to postpone change, or to avoid it altogether.”

The column covers The Circle, The Upstarts, Valley of the Gods, and The Complacent Class. But my deadline was in February, and since then I’ve read a couple books that feel relevant to this theme.

I wrote recently about A Culture of Growth, in which economic historian Joel Mokyr argues that the connection between intellectuals and artisans helped propel the Industrial Revolution. The cultural divide between Silicon Valley and the rest of the world doesn’t quite map onto Mokyr’s divide, but his broader point about the economic importance of connections between different parts of society surely applies here. The innovation sector works best when it’s attached to the rest of society — to scientists, intellectuals, politicians, and the public. When the divide between the innovative sector and the rest of society grows too large, bad things can happen.

A version of this shows up in Kurt Vonnegut’s first novel, Player Piano, which anticipates basically all of our current debate over artificial intelligence stealing jobs. In the book, the managers and the engineers are separated from the public, physically, as well as ideologically. Neither side ends up looking good, and the ending is not one we should hope for.

Evidence that too much social media is making us unhappy

We published a piece at HBR this past week on Facebook and well-being, by two researchers writing about their recent study:

Overall, our results showed that, while real-world social networks were positively associated with overall well-being, the use of Facebook was negatively associated with overall well-being. These results were particularly strong for mental health; most measures of Facebook use in one year predicted a decrease in mental health in a later year. We found consistently that both liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.

My view on this has changed over the years, based on both research and experience. In 2012, I was skeptical about an Atlantic piece saying Facebook was making people lonely. It seemed more likely that lonely people were more heavily using Facebook. By 2013, I was at least willing to entertain the idea that social media was bad for us. By 2014, I was willing to say that “it depends,” and to acknowledge that the balance of evidence might be shifting.

Of course, it still depends. The impact of social media can be positive or negative, depending on a host of factors. But what if we’re just communicating with each other too much? At best, it seems social media is running into diminishing marginal returns; at worst, we’re on the other side of an upside-down-U. The internet lets us communicate with each other. As it spreads, people communicate more — email, then Facebook, then Twitter, etc. — and at first welfare improves. Then either welfare plateaus (diminishing returns), or it actually starts going down (upside-down-U).

Sure enough, the research we covered suggests something similar:

Overall our results suggest that well-being declines are also a matter of quantity of use rather than only quality of use. If this is the case, our results contrast with previous research arguing that the quantity of social media interaction is irrelevant, and that only the quality of those interactions matters.

There’s some forthcoming research on the benefit to consumers from various online services. I’ll hopefully cover it when it’s out, but at a glance it suggests diminishing returns: the benefits of email are far larger than from Snapchat, for instance.

That’s starting to look like the best case scenario. Either social media is fine, but each platform isn’t much better than the one that came before, or it’s harmful, at least for the people who use it most.


From Francis Bacon to Bob Langer

My colleague Steve Prokesch had an excellent profile of Bob Langer in the March-April issue of HBR. Langer runs Langer Labs at MIT, which Steve describes as “one of the most productive research facilities in the world.” Steve summarizes Langer’s recipe for success in a paragraph:

He has a five-pronged approach to accelerating the pace of discoveries and ensuring that they make it out of academia and into the real world as products. It includes a focus on high-impact ideas, a process for crossing the proverbial “valley of death” between research and commercial development, methods for facilitating multidisciplinary collaboration, ways to make the constant turnover of researchers and the limited duration of project funding a plus, and a leadership style that balances freedom and support.

It’s the DARPA formula, about which more here.

Langer’s success is extreme, and not every academic aspires to do work that is socially beneficial. Still, it’s notable how uncontroversial much of that formula is. Who wouldn’t want academia to tackle pressing real-world problems?

However, that ideal — that the fruits of academic inquiry actually be applied to create a better future — may explain quite a lot about how the world became more prosperous, according to economic historian Joel Mokyr. In A Culture of Growth, he reminds readers that Francis Bacon’s contribution to science was not just to make the case for data and experimentation:

Bacon’s work reinforced the trend in the West to build bridges between the realm of natural philosophy and that of the artisan and farmer. These bridges are critical to technological progress… One of the most remarkable trends in the cultural development of European intellectuals after 1500 was the slowly ripening notion that ‘intellectuals should involve themselves in practical matters traditionally considered beneath them’ and that their priorities ‘should take artisans newly seriously.’

From our current vantage, Langer is most notable for his particular approach, and his success in commercializing his science. But while the details of how his work gets diffused are interesting, from the broad sweep of history what’s notable is that he and his colleagues seek to make that happen at all.

3 good lines on when to trust experts

When should we defer to experts, and on what grounds? My general view is “more often than most people do.” But it’s a complicated question; as Phil Tetlock has shown, not every credentialed expert is worthy of our deference, for example. And even if you want to defer, summarizing what the experts actually think can be a challenge.

But I’ve come across three separate tidbits, each related in some way to this question, and I thought I’d preserve them here.

Dan Kahneman on deference to scientific consensus

One reason you might not trust certain experts is the ongoing “crisis” in social science. That’s a subject for another post, but here’s the famed psychologist Dan Kahneman responding to a post challenging his belief in the literature on priming:

My position when I wrote “Thinking, Fast and Slow” was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me – it is why I think people should believe in climate change. But the argument only holds when all relevant results are published.

Kahneman writes that belief in scientific conclusions is “not optional” because, I believe, as a student of bias he realizes that the alternative is far worse. At least for most people, not believing what’s been published in peer-reviewed journals doesn’t mean a sort of enlightened skepticism, it means falling back on the hunches you’d always wanted to believe in the first place.

Fact checking and triangulating the truth

This one comes from Lucas Graves’ excellent new book Deciding What’s True, about modern fact-checking. I’m writing more about the book, but for now I want to highlight one phrase and a few sentences that surround it:

The need to consult experts presents a real problem for fact-checkers, pulling them inevitably into the busy national market of data and analysis retailed in the service of ideological agendas. “You’re going to seek independent, nonbiased sources — of which in today’s world there are none,” a PolitiFact editor joked during training…

PolitiFact items often feature analysis from experts or groups with opposing ideologies, a strategy described internally as “triangulating the truth.” “Seek multiple sources,” an editor told new fact-checkers during a training session. “If you can’t get an independent source on something, go to a conservative and go to a liberal and see where they overlap.” Such “triangulation” is not a matter of artificial balance, the editor argued: the point is to make a decisive ruling by forcing these experts to “focus on the facts.” As noted earlier, fact-checkers cannot claim expertise in the complex areas of public policy their work touches on. But they are confident in their ability to choose the right experts and to distill useful information from political arguments.

Bertrand Russell on expert consensus

This one is a wonderful quote from Bertrand Russell, and it comes via an article in the most recent Foreign Affairs called “How America Lost Faith in Experts,” by Tom Nichols, a professor at the U.S. Naval War College. Here’s the Russell quote:

The skepticism that I advocate amounts only to this: (1) that when the experts are agreed, the opposite opinion cannot be held to be certain; (2) that when they are not agreed, no opinion can be regarded as certain by a non-expert; and (3) that when they all hold that no sufficient grounds for a positive opinion exist, the ordinary man would do well to suspend his judgment… These propositions may seem mild, yet, if accepted, they would absolutely revolutionize human life.

The New Industrialism

At the very end of 2012, I wrote a piece for MIT Technology Review (paywall) about an interesting schism in Democratic economic policy circles. Gene Sperling had replaced Larry Summers as director of the National Economic Council in early 2011, and over the next couple of years the Obama administration seemed to talk more about industrial policy, although they didn’t call it that.

The term was “advanced manufacturing.”

But in early 2012, Christina Romer, former chair of Obama’s Council of Economic Advisers, questioned the administration’s manufacturing agenda publicly, writing in The New York Times:

As an economic historian, I appreciate what manufacturing has contributed to the United States. It was the engine of growth that allowed us to win two world wars and provided millions of families with a ticket to the middle class. But public policy needs to go beyond sentiment and history. It should be based on hard evidence of market failures, and reliable data on the proposals’ impact on jobs and income inequality. So far, a persuasive case for a manufacturing policy remains to be made, while that for many other economic policies is well established.

A survey of economists from around the same time confirmed that the conventional wisdom in the field was firmly against intervention to boost manufacturing.

I didn’t get to interview the players in this debate, but my piece highlighted a group of policy wonks willing to defend a certain sort of industrial policy. I talked to Mark Muro of the Brookings Metropolitan Policy Program, which had released research emphasizing the importance of manufacturing, and Rob Atkinson of ITIF, a think tank promoting an aggressive innovation policy agenda.

Not much came of this debate, at least that I could see. But perhaps a broader warming to some revised form of industrial policy is now perceptible?

Enter economist Noah Smith at Bloomberg View, writing about new ideas in economic growth. Neoliberalism still has its adherents, but what’s the competition?

Looking around, I see the glimmer of a new idea forming. I’m tentatively calling it “New Industrialism.” Its sources are varied — they include liberal think tanks, Silicon Valley thought leaders and various economists. But the central idea is to reform the financial system and government policy to boost business investment.

Business investment — buying equipment, building buildings, training employees, doing research, etc. — is key to growth. It’s also the most volatile component of the economy, meaning that when investment booms, everything is good. The problem is that we have very little idea of how to get businesses to invest more.

My Tech Review story called this group the “institutionalists,” which one of my sources coined when pressed to distinguish his faction. But “New Industrialism” is far clearer.

So who’s a part of this agenda? Smith mentions The Roosevelt Institute’s excellent reports on short-termism; ITIF and Brookings Metro belong on the list; we’ve published a lot at HBR that I think would count (a few examples here, here, here, here).

And one could argue that Brad DeLong and Stephen Cohen’s forthcoming book on Hamiltonian economic policy (my employer is the publisher) is in this conversation, except arguing that such an agenda isn’t new at all. Here’s their recent Fortune piece:

Hamilton’s system was constructed of four drivers that reinforced one another, not just economically but politically: high tariffs; high spending on infrastructure; assumption of the states’ debts by the federal government; and a central bank.

As Smith writes,  “New Industrialism… is not yet mainstream,” and frankly there’s still a lot to be fleshed out before we can even ask whether such an agenda is superior to the alternatives. But he concludes that “it could be just the thing we need to fix the holes in our neoliberal growth policy.” He may just be right.

Win-win economics

The assumption that economic growth and equality are necessarily at odds is fading fast.

[Chart: productivity growth and income inequality]

That’s via Greg Ip and The Wall Street Journal.

If inequality were the price we paid for growth, you’d expect productivity and income captured by the rich to go hand-in-hand. Instead, we see virtually the opposite.

Sure enough, the conversation here is changing. Suddenly, you’re no longer considered a Pollyanna for suggesting we can further growth and equality at the same time. For instance…

Christine Lagarde, managing director of the IMF in The Boston Globe:

The traditional argument has been that income inequality is a necessary by-product of growth, that redistributive policies to mitigate excessive inequality hinder growth, or that inequality will solve itself if you sustain growth at any cost.

Based upon world-wide research, the IMF has challenged these notions. In fact, we have found that countries that have managed to reduce excessive inequality have enjoyed both faster and more sustainable growth. In addition, our research shows that when redistributive policies have been well designed and implemented, there has been little adverse effect on growth.

Indeed, low growth and high inequality are two sides of the same coin: Economic policies need to pay attention to both prosperity and equity.

And Larry Summers:

Trade-offs have long been at the center of economics. The aphorism “there is no such thing as a free lunch” captures a central economic idea: You cannot get something for nothing. Among the many trade-offs emphasized in economics courses are guns vs. butter, public vs. private, efficiency vs. equity, environmental protection vs. economic growth, consumption vs. investment, inflation vs. unemployment, quality vs. quantity or cost and short-term vs. long-term performance…

Yet I am increasingly convinced that “no free lunch” oversimplifies matters and makes economics too dismal a science. It would be true in a world where all opportunities to make things better had been fully exploited — where, to use another cliché, there were no $100 bills lying on the street. But recent experience suggests that by improving incentives or making strategic investments, we can achieve apparently conflicting objectives to a greater extent than conventional wisdom would suggest…

A quite different example involves the alleged trade-off between equity and efficiency — specifically, the concern that redistribution hurts economic performance and stymies growth. It is true that tax increases produce at least some adverse incentives and that providing income-based government benefits involves implicit taxes. But matters are much more complex than a simple trade-off. Antitrust laws that attack rent-seeking promote both equity and efficiency, as do measures that increase educational opportunity. The rational strengthening of financial regulation reduces the incidence of financial crises, thus improving economic performance while promoting fairness by helping consumers. In demand-short economies, the greater equity achieved through more progressive taxation means more spending and fuller employment of resources. These examples do not deny trade-offs between equity and efficiency. They do, however, suggest that there is nothing ineluctable about them. Both can be enhanced through proper policy.

And David Wessel, reviewing Robert Gordon’s new book:

Whatever the causes of the distressing slowdown in the growth of productivity (the amount of stuff produced for each hour of work) and the increase in inequality, what policies might both increase productivity and decrease inequality?

Many years ago, economist Art Okun argued that we had to choose between policies that increased efficiency and those that increased equity. Perhaps. But  if there are policies that could achieve both, it’s time to try them.

Tradeoffs remain real, as do unintended consequences. Here are two examples that challenged my biases. Activist hedge funds seem to represent a tradeoff between rising productivity and rising wages. And there’s some reason to think that new overtime rules won’t work the way they’re intended.

Not everything works out the way you’d hope. But sometimes the interests of growth and equality are happily aligned. Which is good, because we could use a whole lot more of both.

What I wrote about in 2015

When you’re writing regularly, even weekly, the stories can start to blur together. For me at least, it can get to the point where it’s hard to answer the question What have you been working on lately? So I decided this year to look back at everything I wrote in 2015. And as I suspected, a couple major themes emerged. I’ve grouped them together here, mostly for my own clarity. Here’s what I wrote about in 2015:

Algorithms, bias, and decision-making

I spent a lot of time reading, writing, and editing about how humans feel about robots and algorithms, and it culminated in this piece for the June issue of HBR on the subject. Long story short, we’re skeptical of algorithms, but give them a voice and put them inside a robot’s body and we start to become more trusting. If you just want to read about the research on our fear of algorithms, I wrote about that here.

If you read too much about algorithms, you can come away believing that people are pretty hopeless at decision-making by comparison. There’s some truth to that. But another theme I covered this year is just how good some people are at making decisions. I wrote about Philip Tetlock’s latest work, I wrote about his work and others on what good thinkers do, I wrote about why people can come to different conclusions about the same data, and then here on this blog I tried to sum it all up and to offer an optimistic view on bias and human belief.

Inequality, wages, and labor

I was excited to write more about inequality this year, but along the way some of the most interesting assignments were about the more fundamental question: how do labor markets work? This piece asked that question from the perspective of a CEO considering raising wages. This one compared skills and market power as explanations for inequality.

I also wrote once more about whether robots are going to take all our jobs.

I also wrote a few narrower pieces about inequality. Could profit sharing help? (Yes.) What about just treating workers better? Amazingly, that can help too.

But it’s not all good news. Here’s why I’m skeptical of new rules to help more people get paid for overtime. And what do you do when finance seems to improve the productivity of businesses, but at the expense of workers?

I also had the chance to interview some thoughtful people on these topics, including Larry Summers, Robert Reich, and Commerce Secretary Penny Pritzker.

Policy

One of my favorites from the past year was this essay for The Atlantic about how the welfare state encourages entrepreneurship. I wrote about some of the research underlying that thesis for HBR, too.

A bunch of other stuff

CEOs are luckier than they are smart, and here’s what interim CEOs do differently from permanent ones.

Sometimes distrust makes you more effective.

Scientists require more money if their employer won’t let them publish.

Regulators go easier on socially responsible firms, and the values on your company’s website may matter after all.

Predicting a startup’s success based on idea alone is easier in some industries than others.

Startup “joiners” are sort of like founders, but different.

Companies in happier cities invest more for the long-term.

Is it smart to talk back to a jerk boss?

People use instant messaging more in a blizzard.

Sometimes multitasking works.

And media aggregation goes back to the very beginning of American journalism.

Explaining expert disagreement

The IGM Forum recently published its latest poll of economists, and it reminds me of one of the reasons that these polls are so interesting. They illustrate that expert disagreement is seldom a 50/50 split between two diametrically opposed viewpoints. And incorporating these more complicated disagreements into journalism isn’t always easy.

One big, well-known challenge in reporting is to try to be fair to various viewpoints without resorting to the most naive type of “false balance,” like including the views of a climate change denier just to make sure you have “the other side of the story.”

But false balance isn’t the only complication when reporting on experts’ views on an empirical topic. How do you portray disagreement? Typically, the easiest way is to quote one source on one side of the disagreement, and another source on the other side. But that assumes the expert disagreement is split down the middle, between only two camps.

Sometimes expert opinion really is symmetrical, like economists on the $15 minimum wage and whether it would decrease total employment:

[Chart: IGM poll of economists on a $15 minimum wage]

This data (from the IGM poll) helps visualize a symmetrical disagreement among experts, arguably the easiest case for reporters to deal with. But even here there’s a subtlety. If you get a source to say a $15 minimum wage will kill jobs, and one to say that it won’t, have you correctly reported on the disagreement among economists?

Sort of. But you’ve left out the single biggest chunk of experts: the ones who aren’t sure. Should you quote an agnostic in your piece? Should you give the agnostic’s arguments more attention, since they represent the most prominent expert viewpoint? I’ve tried writing about stories like this, but making uncertainty into a snappy headline isn’t easy.

Or consider one of my favorite IGM polls, about the long-term effects of the stimulus bill, a subject of endless political debate:

[Chart: IGM poll of economists on the stimulus bill]

You can see here that there is disagreement over the effects of the stimulus. But it’s not that a lot of economists think it was worth it, and a bunch think it wasn’t. It’s that a lot of economists think it was worth it, and a smaller but still significant group just aren’t sure.

What’s neat about this is that if you believe the results*, they really can help guide the way a reporter should balance a story on this topic. Obviously, you’ll want to find someone to explain why the stimulus was a good idea. But when you’re searching for a viewpoint to counter that, you don’t actually want to find an opponent, at least if your goal is to faithfully explain the debate between experts. Instead, you want to find someone who has serious doubts, someone who isn’t sure.

The IGM poll demonstrates the complexity of these disagreements, and it actually serves as a useful heuristic if you’re writing a story about one of these topics. I’m not saying journalists should be devout in allocating space in their stories to exactly match polls of experts — there’s more to good reporting than getting experts’ views right; these polls don’t necessarily do enough to weight the views of specialists in the area of interest; even on economic stories there are other experts to consult beyond just economists; these polls aren’t the final word on what economists think*; journalists should consider evidence directly and not rely exclusively on experts; etc.

Nonetheless, they’re a nice check. If your story gives as much space to stimulus skeptics as to advocates, you’ve probably succumbed to false balance. On the other hand, if it’s only citing stimulus advocates, maybe there’s room to throw some uncertainty into the mix.

I’m thinking of all of this because of the most recent poll, on the Fed and interest rates:

[Chart: IGM poll of economists on the Fed raising interest rates]

The Fed is likely going to raise interest rates. What do experts think of that, and how should you report on it? Well, it’s complicated, but polls like this do give you a rough sense of things.

There’s a lot of support for the move — it’s the single biggest position within this group, so it probably should be represented in your story. But the uncertains and the disagrees are almost as big a group, if they’re combined. That’s probably worth a mention, too.

*After I’d finished a draft of this post, Tyler Cowen posted a spirited critique of the latest IGM results. Worth keeping in mind. If only someone could run a meta-survey of how trustworthy experts deem such results!

Why I think the Vox crew is too cynical about bias and belief

Will this post convince anyone? I’m optimistic.

The Vox policy podcast, The Weeds, had a long segment on bias and political belief this week, which was excellent. I didn’t disagree with anything Ezra Klein, Sarah Kliff, and Matthew Yglesias said, but I think they left out some reasons for optimism. If you can only tell one story about bias, the one they told is the right one. People are really biased, and most of us struggle to interpret new information that goes against our existing beliefs. Motivated reasoning and identity-protective cognition are the norm.

All true. But there are other veins of research that paint a slightly more optimistic picture. First, we’re not all equally biased. Second, it actually is possible to make people less biased, at least in certain circumstances. And third, just because I can’t resist, algorithms can help us be less biased, if we’d just learn to trust them.

(Quick aside: Bias can refer to a lot of things. In this post I’m thinking only about a specific type: habits of thought that prevent people from reaching empirically correct beliefs about reality.)

We’re not all equally biased. Here I’m thinking of two lines of research. The first is about geopolitical forecasting, by Philip Tetlock, Barbara Mellers, Don Moore, and others, mostly at the University of Pennsylvania and Berkeley. Tetlock is famous for his 2005 book on political forecasting, but he’s done equally interesting work since then, summarized in a new popular book Superforecasting. I’ve written about that work here and here.

Basically, lots of people, including many experts, are really bad at making predictions about the world. But some people are much better than others. Some of what separates these “superforecasters” from everyone else are things like knowledge and intelligence. But some of it is also about their style of thinking. Good forecasters are open-minded, and tend to avoid using a single mental model to think about the future. Instead, they sort of “average” together multiple mental models. This is all covered in Tetlock’s 2005 book.
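One loose way to picture “averaging” mental models — my illustration, not Tetlock’s actual procedure — is combining several probability estimates for the same event into a single forecast, rather than betting everything on one model:

```python
# Toy sketch: a forecaster who "averages" mental models can be thought of
# as pooling the probability estimates those models produce.

def combine(forecasts):
    """Average a list of probability estimates (each between 0.0 and 1.0)."""
    return sum(forecasts) / len(forecasts)

# Three hypothetical mental models of the same geopolitical event:
models = [0.2, 0.5, 0.35]
print(round(combine(models), 2))  # → 0.35
```

The pooled estimate hedges against any single model being badly wrong, which is one reason ensemble forecasts tend to outperform their average member.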

What Tetlock and company have shown in their more recent research is just how good these superforecasters are at taking new information and adjusting their beliefs accordingly. They change their mind frequently and subtly in ways that demonstrably correspond to more accurate beliefs about the state of the world. They really don’t fit the standard story about bias and identity protection.

Another line of research in this same vein comes from Keith Stanovich at the University of Toronto, who has studied the idea of rationality, and written extensively about how to not only define it but identify it. He also finds that people with certain characteristics — open-minded personality, knowledge of probability — are less prone to common cognitive biases.

There are ways to make people less biased. When I first started reading and writing about bias it seemed hard to find proven ways to get around it. Just telling people to be more open-minded, for instance, doesn’t work. But even then there did seem to be one path: I latched on to the research on self-affirmation, which showed that if you had people focus on an element of their identity unrelated to politics, it made them more likely to accept countervailing evidence. Having been primed to think about their self-worth in a non-political context meant that new political knowledge was less threatening.

That method is in line with the research that the Vox crew discussed — it’s sort of a jujitsu move that turns our weird irrationality against itself, de-biasing via emotional priming.

But we now know that’s not the only way. I mentioned that Stanovich has documented that knowledge of probability helps people avoid certain biases. Tetlock has found something similar, and has proven that you don’t need to put people through a full course in the subject to get the effect. As I summarized earlier this year at HBR:

Training in probability can guard against bias. Some of the forecasters were given training in “probabilistic reasoning,” which basically means they were told to look for data on how similar cases had turned out in the past before trying to predict the future. Humans are surprisingly bad at this, and tend to overestimate the chances that the future will be different than the past. The forecasters who received this training performed better than those who did not.
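To make the idea concrete — this is my own toy sketch, not the forecasters’ training materials — “looking for data on how similar cases turned out” amounts to blending your gut estimate with the historical base rate, and you can see the payoff in the Brier score (squared error between forecast and outcome, lower is better):

```python
# Toy sketch: anchoring an intuitive forecast on the base rate of
# similar past cases, then scoring both with the Brier score.

def base_rate_forecast(inside_view, base_rate, weight=0.5):
    """Blend an intuitive estimate with the historical base rate."""
    return weight * base_rate + (1 - weight) * inside_view

def brier_score(forecast, outcome):
    """Squared error between a probability forecast and the 0/1 outcome."""
    return (forecast - outcome) ** 2

# Hypothetical: you feel 90% sure a crisis will escalate, but similar
# crises escalated only 30% of the time historically.
blended = base_rate_forecast(0.9, 0.3)  # 0.6

# Suppose the crisis does not escalate (outcome = 0):
print(round(brier_score(0.9, 0), 2), round(brier_score(blended, 0), 2))
# → 0.81 0.36
```

The blended forecast is punished far less when the overconfident inside view turns out wrong, which is the intuition behind training people to consult reference classes first.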

There are other de-biasing techniques that can work, too. Putting people in groups can help under some circumstances, Tetlock found. And Cass Sunstein and Reid Hastie have outlined ways to help groups get past their own biases. Francesca Gino and John Beshears offer their own list of ways to address bias here.

None of this is to say it’s easy to become less biased, but it is at least sometimes possible. (Much of the work I’ve cited isn’t necessarily about politics, a particularly hard area, but recall that Tetlock’s work is on geopolitical events.)

Identifying and training rationality. So we know some people are more biased than others, and that bias can be mitigated to at least some extent through training, structured decision-making, etc. But how many organizations specifically set out to determine during the hiring process how biased someone is? How many explicitly build de-biasing into their work?

Both of these things are possible. Tetlock and his colleagues have shown that prediction tournaments work quite well at identifying who’s more and less biased. I believe Stanovich is working on ways to test for rationality. Tetlock has published an entire course on good forecasting (which is basically about being less biased) on Edge.org.

Again, I don’t really think any of this refutes what the Vox team covered. But it’s an important part of the story. Writers, political analysts, and citizens can be more or less biased, based on temperament, education, context, and training. There actually is a lot we can do to address the systematic effects of cognitive bias in political life.

If all that doesn’t work, there’s always algorithms. I mostly kid, at least in the context of politics, where values are central. But algorithms already are way less biased than people in a lot of circumstances (though in many cases they can totally have biases of their own) and they’re only likely to improve.

Of course, being humans, we also have an irrational bias against deferring to algorithms, even when we know they’re more likely to be right. But as I’ve written about, research has identified de-biasing tricks that help us overcome our bias for human judgment, too.