The New Industrialism

At the very end of 2012, I wrote a piece for MIT Technology Review (paywall) about an interesting schism in Democratic economic policy circles. Gene Sperling had replaced Larry Summers as director of the National Economic Council in early 2011, and over the next couple of years the Obama administration seemed to talk more about industrial policy, although they didn’t call it that.

The term was “advanced manufacturing.”

But in early 2012, Christina Romer, former chair of Obama’s Council of Economic Advisers, questioned the administration’s manufacturing agenda publicly, writing in The New York Times:

As an economic historian, I appreciate what manufacturing has contributed to the United States. It was the engine of growth that allowed us to win two world wars and provided millions of families with a ticket to the middle class. But public policy needs to go beyond sentiment and history. It should be based on hard evidence of market failures, and reliable data on the proposals’ impact on jobs and income inequality. So far, a persuasive case for a manufacturing policy remains to be made, while that for many other economic policies is well established.

A survey of economists from around the same time confirmed that the conventional wisdom in the field was firmly against intervention to boost manufacturing.

I didn’t get to interview the players in this debate, but my piece highlighted a group of policy wonks willing to defend a certain sort of industrial policy. I talked to Mark Muro of the Brookings Metropolitan Policy Program, which had released research emphasizing the importance of manufacturing, and Rob Atkinson of ITIF, a think tank promoting an aggressive innovation policy agenda.

Not much came of this debate, at least that I could see. But perhaps a broader warming to some revised form of industrial policy is now perceptible?

Enter economist Noah Smith at Bloomberg View, writing about new ideas in economic growth. Neoliberalism still has its adherents, but what’s the competition?

Looking around, I see the glimmer of a new idea forming. I’m tentatively calling it “New Industrialism.” Its sources are varied — they include liberal think tanks, Silicon Valley thought leaders and various economists. But the central idea is to reform the financial system and government policy to boost business investment.

Business investment — buying equipment, building buildings, training employees, doing research, etc. — is key to growth. It’s also the most volatile component of the economy, meaning that when investment booms, everything is good. The problem is that we have very little idea of how to get businesses to invest more.

My Tech Review story called this group the “institutionalists,” which one of my sources coined when pressed to distinguish his faction. But “New Industrialism” is far clearer.

So who’s a part of this agenda? Smith mentions The Roosevelt Institute’s excellent reports on short-termism; ITIF and Brookings Metro belong on the list; we’ve published a lot at HBR that I think would count (a few examples here, here, here, here).

And one could argue that Brad DeLong and Stephen Cohen’s forthcoming book on Hamiltonian economic policy (my employer is the publisher) is in this conversation, except arguing that such an agenda isn’t new at all. Here’s their recent Fortune piece:

Hamilton’s system was constructed of four drivers that reinforced one another, not just economically but politically: high tariffs; high spending on infrastructure; assumption of the states’ debts by the federal government; and a central bank.

As Smith writes,  “New Industrialism… is not yet mainstream,” and frankly there’s still a lot to be fleshed out before we can even ask whether such an agenda is superior to the alternatives. But he concludes that “it could be just the thing we need to fix the holes in our neoliberal growth policy.” He may just be right.

Win-win economics

The assumption that economic growth and equality are necessarily at odds is fading fast.

[Chart: productivity growth and the share of income captured by the rich]

That’s via Greg Ip and The Wall Street Journal.

If inequality were the price we paid for growth, you’d expect productivity and income captured by the rich to go hand-in-hand. Instead, we see virtually the opposite.

Sure enough, the conversation here is changing. Suddenly, you’re no longer considered a pollyanna for suggesting we can further growth and equality at the same time. For instance…

Christine Lagarde, managing director of the IMF, in The Boston Globe:

The traditional argument has been that income inequality is a necessary by-product of growth, that redistributive policies to mitigate excessive inequality hinder growth, or that inequality will solve itself if you sustain growth at any cost.

Based upon world-wide research, the IMF has challenged these notions. In fact, we have found that countries that have managed to reduce excessive inequality have enjoyed both faster and more sustainable growth. In addition, our research shows that when redistributive policies have been well designed and implemented, there has been little adverse effect on growth.

Indeed, low growth and high inequality are two sides of the same coin: Economic policies need to pay attention to both prosperity and equity.

And Larry Summers:

Trade-offs have long been at the center of economics. The aphorism “there is no such thing as a free lunch” captures a central economic idea: You cannot get something for nothing. Among the many trade-offs emphasized in economics courses are guns vs. butter, public vs. private, efficiency vs. equity, environmental protection vs. economic growth, consumption vs. investment, inflation vs. unemployment, quality vs. quantity or cost and short-term vs. long-term performance…

Yet I am increasingly convinced that “no free lunch” oversimplifies matters and makes economics too dismal a science. It would be true in a world where all opportunities to make things better had been fully exploited — where, to use another cliché, there were no $100 bills lying on the street. But recent experience suggests that by improving incentives or making strategic investments, we can achieve apparently conflicting objectives to a greater extent than conventional wisdom would suggest…

A quite different example involves the alleged trade-off between equity and efficiency — specifically, the concern that redistribution hurts economic performance and stymies growth. It is true that tax increases produce at least some adverse incentives and that providing income-based government benefits involves implicit taxes. But matters are much more complex than a simple trade-off. Antitrust laws that attack rent-seeking promote both equity and efficiency, as do measures that increase educational opportunity. The rational strengthening of financial regulation reduces the incidence of financial crises, thus improving economic performance while promoting fairness by helping consumers. In demand-short economies, the greater equity achieved through more progressive taxation means more spending and fuller employment of resources. These examples do not deny trade-offs between equity and efficiency. They do, however, suggest that there is nothing ineluctable about them. Both can be enhanced through proper policy.

And David Wessel, reviewing Robert Gordon’s new book:

Whatever the causes of the distressing slowdown in the growth of productivity (the amount of stuff produced for each hour of work) and the increase in inequality, what policies might both increase productivity and decrease inequality?

Many years ago, economist Art Okun argued that we had to choose between policies that increased efficiency and those that increased equity. Perhaps. But  if there are policies that could achieve both, it’s time to try them.

Tradeoffs remain real, as do unintended consequences. Here are two examples that challenged my biases. Activist hedge funds seem to represent a tradeoff between rising productivity and rising wages. And there’s some reason to think that new overtime rules won’t work the way they’re intended.

Not everything works out the way you’d hope. But sometimes the interests of growth and equality are happily aligned. Which is good, because we could use a whole lot more of both.

What I wrote about in 2015

When you’re writing regularly, even weekly, the stories can start to blur together. For me at least, it can get to the point where it’s hard to answer the question What have you been working on lately? So I decided this year to look back at everything I wrote in 2015. And as I suspected, a couple major themes emerged. I’ve grouped them together here, mostly for my own clarity. Here’s what I wrote about in 2015:

Algorithms, bias, and decision-making

I spent a lot of time reading, writing, and editing about how humans feel about robots and algorithms, and it culminated in this piece for the June issue of HBR on the subject. Long story short, we’re skeptical of algorithms, but give them a voice and put them inside a robot’s body and we start to become more trusting. If you just want to read about the research on our fear of algorithms, I wrote about that here.

If you read too much about algorithms, you can come away believing that people are pretty hopeless at decision-making by comparison. There’s some truth to that. But another theme I covered this year is just how good some people are at making decisions. I wrote about Philip Tetlock’s latest work, I wrote about his and others’ work on what good thinkers do, I wrote about why people can come to different conclusions about the same data, and then here on this blog I tried to sum it all up and to offer an optimistic view on bias and human belief.

Inequality, wages, and labor

I was excited to write more about inequality this year, but along the way some of the most interesting assignments were about the more fundamental question: how do labor markets work? This piece asked that question from the perspective of a CEO considering raising wages. This one compared skills and market power as explanations for inequality.

I also wrote once more about whether robots are going to take all our jobs.

I also wrote a few narrower pieces about inequality. Could profit sharing help? (Yes.) What about just treating workers better? Amazingly, that can help too.

But it’s not all good news. Here’s why I’m skeptical of new rules to help more people get paid for overtime. And what do you do when finance seems to improve the productivity of businesses, but at the expense of workers?

I also had the chance to interview some thoughtful people on these topics, including Larry Summers, Robert Reich, and Commerce Secretary Penny Pritzker.


One of my favorites from the past year was this essay for The Atlantic about how the welfare state encourages entrepreneurship. I wrote about some of the research underlying that thesis for HBR, too.

A bunch of other stuff

CEOs are luckier than they are smart, and here’s what interim CEOs do differently from permanent ones.

Sometimes distrust makes you more effective.

Scientists require more money if their employer won’t let them publish.

Regulators go easier on socially responsible firms, and the values on your company’s website may matter after all.

Predicting a startup’s success based on idea alone is easier in some industries than others.

Startup “joiners” are sort of like founders, but different.

Companies in happier cities invest more for the long-term.

Is it smart to talk back to a jerk boss?

People use instant messaging more in a blizzard.

Sometimes multitasking works.

And media aggregation goes back to the very beginning of American journalism.

Explaining expert disagreement

The IGM Forum recently published its latest poll of economists, and it reminds me of one of the reasons that these polls are so interesting. They illustrate that expert disagreement is seldom a 50/50 split between two diametrically opposed viewpoints. And incorporating these more complicated disagreements into journalism isn’t always easy.

One big, well-known challenge in reporting is trying to be fair to various viewpoints without resorting to the most naive type of “false balance,” like including the views of a climate change denier just to make sure you have “the other side of the story.”

But false balance isn’t the only complication when reporting on experts’ views on an empirical topic. How do you portray disagreement? Typically, the easiest way is to quote one source on one side of the disagreement, and another source on the other side. But that assumes the expert disagreement is split down the middle, between only two camps.

Sometimes expert opinion really is symmetrical, like economists on the $15 minimum wage and whether it would decrease total employment:

[Chart: IGM poll of economists on whether a $15 minimum wage would lower employment]


This data (from the IGM poll) helps visualize a symmetrical disagreement among experts, arguably the easiest case for reporters to deal with. But even here there’s a subtlety. If you get a source to say a $15 minimum wage will kill jobs, and one to say that it won’t, have you correctly reported on the disagreement among economists?

Sort of. But you’ve left out the single biggest chunk of experts: the ones who aren’t sure. Should you quote an agnostic in your piece? Should you give the agnostic’s arguments more attention, since they represent the most common expert viewpoint? I’ve tried writing about stories like this, but making uncertainty into a snappy headline isn’t easy.

Or consider one of my favorite IGM polls, about the long-term effects of the stimulus bill, a subject of endless political debate:

[Chart: IGM poll of economists on the long-term effects of the stimulus bill]

You can see here that there is disagreement over the effects of the stimulus. But it’s not that a lot of economists think it was worth it, and a bunch think it wasn’t. It’s that a lot of economists think it was worth it, and a smaller but still significant group just aren’t sure.

What’s neat about this is that if you believe the results*, they really can help guide the way a reporter should balance a story on this topic. Obviously, you’ll want to find someone to explain why the stimulus was a good idea. But when you’re searching for a viewpoint to counter that, you don’t actually want to find an opponent, at least if your goal is to faithfully explain the debate between experts. Instead, you want to find someone who has serious doubts, someone who isn’t sure.

The IGM poll demonstrates the complexity of these disagreements, and it actually serves as a useful heuristic if you’re writing a story about one of these topics. I’m not saying journalists should be strict about allocating space in their stories to exactly match polls of experts — there’s more to good reporting than getting experts’ views right; these polls don’t necessarily do enough to weight the views of specialists in the area of interest; even on economic stories there are other experts to consult beyond just economists; these polls aren’t the final word on what economists think*; journalists should consider evidence directly and not rely exclusively on experts; etc.

Nonetheless, they’re a nice check. If your story gives as much space to stimulus skeptics as to advocates, you’ve probably succumbed to false balance. On the other hand, if it’s only citing stimulus advocates, maybe there’s room to throw some uncertainty into the mix.

I’m thinking of all of this because of the most recent poll, on the Fed and interest rates:

[Chart: IGM poll of economists on the Fed raising interest rates]

The Fed is likely going to raise interest rates. What do experts think of that, and how should you report on it? Well, it’s complicated, but polls like this do give you a rough sense of things.

There’s a lot of support for the move — it’s the single biggest position within this group, so it probably should be represented in your story. But the uncertains and the disagrees are almost as big a group, if they’re combined. That’s probably worth a mention, too.
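
If it helps to make the heuristic concrete, here is a minimal sketch, in Python, of this kind of back-of-the-envelope balance check. The response counts are invented for illustration (they are not the actual IGM numbers); the point is just to see how poll shares might map onto space in a story.

```python
# Toy balance check: turn a hypothetical expert poll into rough shares.
# The counts below are made up, not real IGM results.
from collections import Counter

responses = Counter({
    "strongly agree": 5,
    "agree": 20,
    "uncertain": 12,
    "disagree": 6,
    "strongly disagree": 1,
})

total = sum(responses.values())
for view, count in responses.items():
    print(f"{view:17s} {count / total:6.1%}")

# Combine the unsure and the skeptics: is that bloc big enough
# to deserve a voice in the story?
not_convinced = (responses["uncertain"] + responses["disagree"]
                 + responses["strongly disagree"]) / total
print(f"{'not convinced':17s} {not_convinced:6.1%}")
```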

*After I’d finished a draft of this post, Tyler Cowen posted a spirited critique of the latest IGM results. Worth keeping in mind. If only someone could run a meta-survey of how trustworthy experts deem such results!

Why I think the Vox crew is too cynical about bias and belief

Will this post convince anyone? I’m optimistic.

The Vox policy podcast, The Weeds, had a long segment on bias and political belief this week, which was excellent. I didn’t disagree with anything Ezra Klein, Sarah Kliff, and Matthew Yglesias said, but I think they left out some reasons for optimism. If you can only tell one story about bias, the one they told is the right one. People are really biased, and most of us struggle to interpret new information that goes against our existing beliefs. Motivated reasoning and identity-protective cognition are the norm.

All true. But there are other veins of research that paint a slightly more optimistic picture. First, we’re not all equally biased. Second, it actually is possible to make people less biased, at least in certain circumstances. And third, just because I can’t resist, algorithms can help us be less biased, if we’d just learn to trust them.

(Quick aside: Bias can refer to a lot of things. In this post I’m thinking only about a specific type: habits of thought that prevent people from reaching empirically correct beliefs about reality.)

We’re not all equally biased. Here I’m thinking of two lines of research. The first is about geopolitical forecasting, by Philip Tetlock, Barbara Mellers, Don Moore, and others, mostly at the University of Pennsylvania and Berkeley. Tetlock is famous for his 2005 book on political forecasting, but he’s done equally interesting work since then, summarized in a new popular book Superforecasting. I’ve written about that work here and here.

Basically, lots of people, including many experts, are really bad at making predictions about the world. But some people are much better than others. Some of what separates these “superforecasters” from everyone else are things like knowledge and intelligence. But some of it is also about their style of thinking. Good forecasters are open-minded, and tend to avoid using a single mental model to think about the future. Instead, they sort of “average” together multiple mental models. This is all covered in Tetlock’s 2005 book.

What Tetlock and company have shown in their more recent research is just how good these superforecasters are at taking new information and adjusting their beliefs accordingly. They change their minds frequently and subtly, in ways that demonstrably correspond to more accurate beliefs about the state of the world. They really don’t fit the standard story about bias and identity protection.

Another line of research in this same vein comes from Keith Stanovich at the University of Toronto, who has studied the idea of rationality, and written extensively about how to not only define it but identify it. He also finds that people with certain characteristics — open-minded personality, knowledge of probability — are less prone to common cognitive biases.

There are ways to make people less biased. When I first started reading and writing about bias it seemed hard to find proven ways to get around it. Just telling people to be more open-minded, for instance, doesn’t work. But even then there did seem to be one path: I latched on to the research on self-affirmation, which showed that if you had people focus on an element of their identity unrelated to politics, it made them more likely to accept countervailing evidence. Having been primed to think about their self-worth in a non-political context meant that new political knowledge was less threatening.

That method is in line with the research that the Vox crew discussed — it’s sort of a jujitsu move that turns our weird irrationality against itself, de-biasing via emotional priming.

But we now know that’s not the only way. I mentioned that Stanovich has documented that knowledge of probability helps people avoid certain biases. Tetlock has found something similar, and has proven that you don’t need to put people through a full course in the subject to get the effect. As I summarized earlier this year at HBR:

Training in probability can guard against bias. Some of the forecasters were given training in “probabilistic reasoning,” which basically means they were told to look for data on how similar cases had turned out in the past before trying to predict the future. Humans are surprisingly bad at this, and tend to overestimate the chances that the future will be different than the past. The forecasters who received this training performed better than those who did not.
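
The core of that training is anchoring on a base rate before adjusting. Here is a minimal sketch of the idea in Python; the historical outcomes and the adjustment are invented, and this is an illustration of reference-class thinking, not the Good Judgment Project’s actual procedure.

```python
# Reference-class thinking: start from how often similar past cases
# turned out a certain way, then adjust for case-specific evidence.
# The data and the adjustment below are hypothetical.

past_cases = [True, False, True, True, False, False, True, False, False, False]

base_rate = sum(past_cases) / len(past_cases)
print(f"Base rate from similar past cases: {base_rate:.0%}")

# Nudge the estimate for what is different this time, rather than
# reasoning from the specifics alone.
case_specific_adjustment = 0.10
forecast = min(max(base_rate + case_specific_adjustment, 0.0), 1.0)
print(f"Adjusted forecast: {forecast:.0%}")
```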

There are other de-biasing techniques that can work, too. Putting people in groups can help under some circumstances, Tetlock found. And Cass Sunstein and Reid Hastie have outlined ways to help groups get past their own biases. Francesca Gino and John Beshears offer their own list of ways to address bias here.

None of this is to say it’s easy to become less biased, but it is at least sometimes possible. (Much of the work I’ve cited isn’t necessarily about politics, a particularly hard area, but recall that Tetlock’s work is on geopolitical events.)

Identifying and training rationality. So we know some people are more biased than others, and that bias can be mitigated to at least some extent through training, structured decision-making, etc. But how many organizations specifically set out to determine during the hiring process how biased someone is? How many explicitly build de-biasing into their work?

Both of these things are possible. Tetlock and his colleagues have shown that prediction tournaments work quite well at identifying who’s more and less biased. I believe Stanovich is working on ways to test for rationality. Tetlock has published an entire course on good forecasting (which is basically about being less biased).

Again, I don’t really think any of this refutes what the Vox team covered. But it’s an important part of the story. Writers, political analysts, and citizens can be more or less biased, based on temperament, education, context, and training. There actually is a lot we can do to address the systematic effects of cognitive bias in political life.

If all that doesn’t work there’s always algorithms. I mostly kid, at least in the context of politics, where values are central. But algorithms already are way less biased than people in a lot of circumstances (though in many cases they can totally have biases of their own) and they’re only likely to improve.

Of course, being humans, we also have an irrational bias against deferring to algorithms, even when we know they’re more likely to be right. But as I’ve written about, research has identified de-biasing tricks that help us overcome our bias for human judgment, too.

The history of media aggregation

I recently finished reading Christopher Daly’s Covering America, a narrative history of journalism in America. (I’d recommend it to anyone interested in journalism — you’ll learn a lot, and it’s just a genuinely enjoyable read.) One thing that struck me was how many of our current debates over digital journalism have historical precedents.

For instance, can someone become a journalist just by the act of going out and pursuing journalism? That’s how Edward Murrow, one of the field’s most hallowed names, did it. Murrow got his start at CBS booking radio guests in a role called “Director of Talks.” Even when he took the role of European Director for CBS in 1937, his job was to book experts, not to speak on the air himself. As a result, when he applied to become a member of the foreign correspondents’ association they rejected him. Here was someone the field didn’t consider a journalist, because he didn’t have quite the right job and worked in a newer medium.

But when Hitler annexed Austria all that changed. Murrow headed to Vienna and as Daly writes, “with no training in journalism, either in school or on the job, Murrow plunged in.” In doing so, he and his colleagues essentially invented live war reporting. What mattered wasn’t Murrow’s credentials but his work ethic, talent, and commitment to principles, as well as his access to an audience.

There are a lot of other interesting historical parallels to digital journalism, particularly around what’s worth covering, what isn’t, and who should decide. (Is the fact that the audience can’t get enough sufficient reason to publish?) But there’s one in particular I want to focus on here: aggregation.

Daly himself does not draw this comparison when he talks about digital aggregation at the end of the book, but I was struck by how deep its roots run.

Here’s Daly talking about the seeds of journalism sprouting in the American colonies prior to the revolution:

The beginnings of postal service gave rise to an important new position in society — that of the colonial postmaster. A royal appointee in most cases, the postmaster in any colony had the opportunity to see the people who sent letters and to talk to them as they visited his shop to send or pick up mail. What’s more, many letters were opened in the course of transit, and so the postmasters became masters as well of all sorts of information that was being posted. They also got to read the newspapers arriving from Europe, even before their ultimate recipients did. Thus, in each of the growing port cities, the postmaster was one of the best-informed people in his area… it is no surprise that most postmasters would find a way to peddle the news that was coming their way.

One of those postmasters, John Campbell, was in charge of the mail in Boston. He followed the custom of many other postmasters by periodically writing, in longhand, a summary of the most noteworthy items of information that passed through his post office and circulating that newsletter to a small group of friends. Eventually, drawing on models that he would have been familiar with from his years in London, Campbell took the next logical step and started having his letters printed. In 1704, Campbell launched the first successful English-language newspaper in the New World. He called it the Boston News-Letter.

So the postmaster read all the newspapers before most people had the chance, worked in some tidbits gleaned from conversations with the community, and repackaged it in a useful way. Sound familiar?

Or what about the launch of Time magazine, by two twenty-something Yale graduates? Their idea? As Daly writes:

What if someone summarized news from around the country and the world, wrought each item into a concise, polished story with some history and context, added coverage of the arts and culture, and priced the whole thing at fifteen cents? With such a magazine, people who were too busy to sit down and read three or four newspapers every day could keep up on the important news, or at least feel that they were keeping up…

In editing Time each week, Hadden confronted a fundamental problem that is familiar to anyone who has ever worked on a rewrite desk: how to make information that is stale or dull sound fresh and important. There are essentially two answers to this question: either do more reporting and advance the story somehow, or polish the prose to make it sound punchy or cute or profound. Lacking a reporting staff at this stage (that would come later) Hadden went to work punching up the copy, giving each article that special Time treatment.

Of course that included a healthy dose of opinion disguised as explanation.

I don’t bring all of this up to defend or admonish aggregators. I just want to point out that this debate over how much a new journalistic outlet should be allowed to rely on others’ reporting is not in any way a new one. It’s been going on as long as American journalism has existed.

What we know about jobs and technology

Experts are deeply divided over whether robots are coming for our jobs. Still, there are some things we do know about the relationship between technology, employment, and wages. There are a lot of recent-ish books and essays out about this, and my aim is to briefly summarize their findings here, if only so I don’t forget it all.

Historically, technology has not caused long-term unemployment

As I summarized in a previous post:

Historically, fears of technology-driven unemployment have failed to materialize both because demand for goods and services continued to rise, and because workers learned new skills and so found new work. We might need fewer workers to produce food than we once did, but we’ve developed appetites for bigger houses, faster cars, and more elaborate entertainment that more than make up for the difference. Farmworkers eventually find employment producing those things, and society moves on.

That fear of technology-driven unemployment rests on the “lump of labor” fallacy, the mistaken idea that there is only a fixed amount of work to go around; it’s explained nicely in The Second Machine Age, by Andrew McAfee and Erik Brynjolfsson.

But that process can take a while

In the short term, technology can displace human labor and disrupt many people’s lives. New jobs are created, but even then it can take a while for wages to rise in these new professions, as James Bessen recounts in Learning By Doing. He argues wages don’t start to rise until technology matures, skills become standardized, and labor markets sprout up.

Technology can substitute for human labor, but it can also complement it

MIT’s David Autor puts it simply in a recent paper:

Workers benefit from automation if they supply tasks that are complemented by automation but not if they primarily (or exclusively) supply tasks that are substituted. A worker who knows how to operate a shovel but not an excavator will generally experience falling wages as automation advances.

Luckily, the tasks humans supply aren’t set in stone. As long as there is more work left for humans to do, the question is how quickly workers can transition from supplying skills that have been replaced to supplying ones that are complements.

Technology is not uniformly biased toward skilled or unskilled labor

These days, there’s lots of talk about “skill-biased technological change,” meaning change that favors the educated or highly skilled. Think about the demand for computer programmers vs. the demand for unskilled labor. But it isn’t always that way. Lots of early industrial technology was arguably “deskilling” — think of a skilled artisan vs. a factory where each worker simply completes one part of the process. In The Race Between Education and Technology, Claudia Goldin and Lawrence Katz argue that technology was largely deskilling up to the 20th century, and since then it’s been largely biased toward those with skills and education. Bessen sees the industrial revolution as less deskilling, so none of this is beyond debate. The point is just that different technologies can shape the demand for skill and education differently.

The single biggest way to combat technological unemployment is human capital

Goldin and Katz argue that for most of the 20th century, technology didn’t lead to increased inequality even though the technologies of the time were skill-biased. The reason: America was simultaneously becoming more educated. Bessen emphasizes human capital as well, though he focuses more directly on skill than formal education. Both matter, and the broader point is that the level of human capital in a society shapes the society’s ability to reap the benefits of technology, and to do so in an equitable way. But left to its own devices, the market won’t adequately invest in human capital, as Stiglitz and Greenwald explain in Creating a Learning Society.


In other words, the labor supply matters as much as the demand for skill

In the early 20th century, this worked in society’s favor. Skill was in ever greater demand, but thanks to rising education, there were also more skilled workers. Therefore, inequality didn’t increase significantly and the benefits of technology were realized and broadly shared. Today, something different is happening. The demand for skill is increasing, arguably faster than ever, but supply is not keeping pace. The result is that more and more workers are competing for fewer and fewer unskilled jobs. This dynamic is described by Goldin and Katz and by Autor, and Tyler Cowen’s Average Is Over is excellent on it as well.

Still, the result of all this can be labor market polarization

Can technology be biased toward high- and low-skilled workers simultaneously, at the expense of the middle? Autor argues that’s what’s been happening, as relatively unskilled service jobs have been growing, as have high-skilled technical jobs.

Cumulatively, these two trends of rapid employment growth in both high and low-education jobs have substantially reduced the share of employment accounted for by ‘middle skill’ jobs.

Cowen sketches out something similar, as does Ryan Avent in this post.

What about artificial intelligence? Are all bets off?

So far, most of this is relatively non-controversial. Though technology displaces some workers, it raises the productivity of other workers. People learn new skills and the eventual result is net positive. In the long-term it has not caused unemployment. It can raise wages or lower them, depending on how it impacts the demand for skill and how the labor market responds to those changes. Of all the potential policy responses, investment in human capital is probably the most important.

The one area where current thinkers disagree sharply is how artificial intelligence fits into all of this. McAfee and Brynjolfsson are bullish on its potential, and worried about it. They see AI as accelerating, such that more and more jobs will be taken over by machines. Cowen paints a similar picture. There are two dangers here: first, if the pace of change increases, it might be harder for workers to learn new skills fast enough to keep up. Second, the usual answer to the lump of labor worry relies on there still being things that humans can do but machines can’t. If AI advances as fast as futurists predict, what will be left for humans to do?

Bessen sees technology as moving slower, at least in certain sectors, and so is less worried about AI. Autor argues that AI can only replace tasks that humans fully understand, and since there’s still so much we don’t understand about how we do the things we do, there’s plenty that humans will remain uniquely able to do. But machine learning experts might argue that this isn’t true, and that with enough data AI can learn things that we humans never really understand.

Anyway, the basic point is that the controversy has to do with the pace of change, and the ability of AI to replace the bulk of human labor quickly.

Socialism and data science


Perhaps the best argument for capitalism is that no central planner can adequately understand the entire economy, however good their intentions. This point is made by Hayek, as Cass Sunstein summarizes in his book Infotopia:

Hayek claims that the great advantage of prices is that they aggregate both the information and the tastes of numerous people, incorporating far more material than could possibly be assembled by any central planner or board… For Hayek, the key economics question is how to incorporate that unorganized and dispersed knowledge.  That problem cannot possibly be solved by any particular person or board.  Central planners cannot have access to all of the knowledge held by particular people.  Taken as a whole, the knowledge held by those people is far greater than that held by even the most well-chosen experts.

Free market advocates are prone to take this point too far, overlooking pervasive market failures and the well documented merits of the welfare state and mixed economies. Nonetheless, it remains a central argument for the use of markets to organize much of our economy.

But what if planners could actually aggregate and make sense of all that information?

There’s an interesting piece in The New Yorker on the history of this idea, the interplay between “big data” and socialism. The piece looks at Chile under Allende, and the quest to utilize computer modeling to aid central planning:

At the center of Project Cybersyn (for “cybernetics synergy”) was the Operations Room, where cybernetically sound decisions about the economy were to be made. Those seated in the op room would review critical highlights—helpfully summarized with up and down arrows—from a real-time feed of factory data from around the country… Four screens could show hundreds of pictures and figures at the touch of a button, delivering historical and statistical information about production—the Datafeed… In addition to the Datafeed, there was a screen that simulated the future state of the Chilean economy under various conditions. Before you set prices, established production quotas, or shifted petroleum allocations, you could see how your decision would play out.

As you can imagine, the modeling that was possible in the 1970s wasn’t all that sophisticated, and so it’s no surprise that the system didn’t overcome Hayek’s critique. But the example is a reminder that the efficacy of central planning isn’t necessarily static. With increasingly ubiquitous data collection, ever more advanced data analysis tools, and even artificial intelligence, might planners one day rival the market’s ability to distribute scarce resources?

We’re not nearly there yet. But numerous pieces in recent weeks have detailed government’s increasing interest in data science as a tool for conducting policy. Cities like Chicago are using analytics to improve public health, by better targeting regulators’ interventions based on predictive models. More ambitiously, India now has a dashboard that logs attendance of government workers throughout the country. Perhaps the most aggressive is Singapore, which is collecting a frankly scary amount of data:

Across Singapore’s national ministries and departments today, armies of civil servants use scenario-based planning and big-data analysis from RAHS for a host of applications beyond fending off bombs and bugs. They use it to plan procurement cycles and budgets, make economic forecasts, inform immigration policy, study housing markets, and develop education plans for Singaporean schoolchildren — and they are looking to analyze Facebook posts, Twitter messages, and other social media in an attempt to “gauge the nation’s mood” about everything from government social programs to the potential for civil unrest.

There are any number of objections to raise here, starting with privacy. And The New Yorker piece makes clear that Chile’s fragile political economy did more than any modeling limitations to doom that particular experiment. Public choice critiques of planning, based on bureaucratic incentives, likely remain pertinent even as technical barriers are removed.

And yet in an age where we are able to imagine automated offices, robotic managers, and markets ruled in real-time by algorithms, why not allow ourselves to consider, even briefly, what the same technologies could do for government? Not just to improve a rule here or a program there, but to perhaps revise what the optimal economic system looks like.

My favorite, if oversimplified, description of the choice between markets and central planning comes from political scientist Charles Lindblom. As I’ve written of it previously:

In his 1977 book “Politics and Markets”, political scientist Charles Lindblom describes the “key difference” between markets and central planning as “the role of intellect in social organization” with “on the one side, a confident distinctive view of man using his intelligence in social organization [central planning]; on the other side, a skeptical view of his capacity [markets].”

If this is true, then the belief that ubiquitous data collection, cheap computing power, and machine intelligence are making us smarter over time should be mirrored by a belief that planning is becoming more plausible. Perhaps more techno-utopians ought to be aspiring socialists, too.


How to make better predictions

Over the weekend I argued that people are really quite good at making predictions, when you zoom out and think of all the various ways we do so in science and in everyday life. Talk about how “predictions are hard, especially about the future” tends to concentrate on a narrow band of particularly difficult topics.

But even in those cases there are ways to improve your ability to predict the future. The classic book on the subject is Phil Tetlock’s Expert Political Judgment, which I recommend. And if you want the short version, and happen to have a subscription to The Financial Times, you’re in luck: Tim Harford’s latest column there gives a useful summary of Tetlock’s research.

His early research basically uncovered the role of personality in forecasting accuracy. More open-minded thinkers — prone to caution and the appreciation of uncertainty, who tended to weigh multiple mental models about how the world works against each other — make more accurate predictions than other people. (They still fail to do better than even simple algorithms.)

I’ll excerpt just the last bit, which focuses on Tetlock’s latest project, an ongoing forecasting tournament (I’m participating in the current round; it’s a lot of fun and quite difficult). Here’s the nickel summary of how to be a better forecaster, beyond cultivating open-mindedness:

How to be a superforecaster

Some participants in the Good Judgment Project were given advice on how to transform their knowledge about the world into a probabilistic forecast – and this training, while brief, led to a sharp improvement in forecasting performance.

The advice, a few pages in total, was summarised with the acronym CHAMP:

● Comparisons are important: use relevant comparisons as a starting point;

● Historical trends can help: look at history unless you have a strong reason to expect change;

● Average opinions: experts disagree, so find out what they think and pick a midpoint;

● Mathematical models: when model-based predictions are available, you should take them into account;

● Predictable biases exist and can be allowed for. Don’t let your hopes influence your forecasts, for example; don’t stubbornly cling to old forecasts in the face of news.
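
For what it’s worth, the CHAMP ingredients lend themselves to a rough back-of-the-envelope combination. Here is a minimal sketch; every input and weight is hypothetical, and equal weighting is just one simple default rather than part of Tetlock’s or Harford’s advice.

```python
# Toy combination of the CHAMP ingredients into a single probability.
# All inputs and weights are invented for illustration.

historical_base_rate = 0.30            # how often similar situations turned out this way
expert_opinions = [0.20, 0.45, 0.35]   # a few (hypothetical) expert estimates
model_prediction = 0.40                # output of some statistical model, if one exists

expert_average = sum(expert_opinions) / len(expert_opinions)

# Average the three anchors with equal weight (a simple default).
forecast = (historical_base_rate + expert_average + model_prediction) / 3

# Allow for a predictable bias, e.g. wishful thinking pushing the number up.
wishful_thinking_correction = -0.02
forecast = min(max(forecast + wishful_thinking_correction, 0.0), 1.0)

print(f"Combined forecast: {forecast:.0%}")
```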

Humans are great at prediction

Humans are terrible at making forecasts, we’re often told. Here’s one recent example at Bloomberg View:

I don’t mean to pick on either of those folks; you can randomly name any 10 strategists, forecasters, pundits and commentators and the vast majority of their predictions will be wrong. Not just a little wrong, but wildly, hilariously off.

The author is talking specifically about the economy, and I mostly agree with what I think he’s trying to say. But I’m tired of this framing:

Every now and again, it is worth reminding ourselves just how terrible humans are at predicting what will happen in markets and/or the economy.

Humans are amazing at predicting the future, and yes that includes what will happen in the economy. It’s just that when we sit down to talk about forecasting, for some reason we decide to throw out all the good predictions, and focus on the stuff that’s just hard enough to be beyond our reach.

There are two main avenues through which this happens. The first is that we idolize precision, and ignore the fact that applying a probability distribution to a range of possibilities is a type of prediction. So the piece above is right that it’s incredibly difficult for an economist to predict exactly the number of jobs that will be added in a given month. But experts can assign probabilities to different outcomes. They can say with high confidence, for example, that the unemployment rate for August will be somewhere between, say, 5.5% and 6.5%.

You might think that’s not very impressive. But it’s a prediction, and a useful one. The knowledge that the unemployment rate is unlikely to spike over any given month allows businesses to feel confident in making investments, and workers to feel confident making purchases. I’m not saying we’re perfect at this probabilistic approach — recessions still surprise us. But it’s a legitimate form of prediction at which we do far better than random.
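
To see why a range counts as a forecast, here is a minimal sketch of an interval prediction. It assumes, heroically, that month-to-month changes in the unemployment rate are roughly normally distributed, and every number in it is invented; the point is only that “the rate will very likely stay inside this band” is itself a testable prediction.

```python
# Toy interval forecast: given recent month-to-month changes, how likely
# is next month's unemployment rate to stay inside a band?
# All numbers are hypothetical; normality is a simplifying assumption.
import statistics
from math import erf, sqrt

current_rate = 6.1  # percent
monthly_changes = [0.1, -0.1, 0.0, 0.2, -0.2, 0.0, 0.1, -0.1, 0.0, 0.1]

mu = statistics.mean(monthly_changes)
sigma = statistics.stdev(monthly_changes)

def normal_cdf(x, mean, sd):
    """Cumulative probability of a normal distribution at x."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

low, high = 5.5, 6.5
p_in_band = (normal_cdf(high - current_rate, mu, sigma)
             - normal_cdf(low - current_rate, mu, sigma))
print(f"P(next month's rate stays between {low}% and {high}%): {p_in_band:.0%}")
```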

That example leads me to the second way in which we ignore good predictions. Talk of how terrible we are at forecasting ignores the “easy” cases. Will the sun rise tomorrow? Will Google still be profitable in a week? Will the price of milk triple over the next 10 days? We can answer these questions fairly easily, with high confidence. Yes, they seem easy. But they seem easy precisely because human knowledge and the scientific process have been so successfully incorporated into modern life.

And there are plenty of other predictions between these easy cases and the toughest ones that get thrown around. If you invest in the stock market for the long term, you’re likely to make money. Somewhere around a third of venture-backed startups won’t make it to their 10th birthday. A few years down the line, today’s college graduates will have higher wages on average than their peers without a degree. None of these things are certain. But we can attach probabilities to them far more accurately than a dart-throwing chimp could. Perhaps you’re not impressed, but to me this is the foundation of modern society.

None of this is to say we shouldn’t hold pundits and experts more accountable for their bad predictions, or that we shouldn’t work to improve our predictions where possible. (And research suggests such improvement is often possible.)

But let’s not lose sight of all the ways in which we excel at prediction. Forecasting is a really broad category of thinking that is at the center of modern science. And compared to our ancestors, we’re pretty good at it.