What I wrote about in 2015

When you’re writing regularly, even weekly, the stories can start to blur together. For me at least, it can get to the point where it’s hard to answer the question, “What have you been working on lately?” So I decided this year to look back at everything I wrote in 2015. And as I suspected, a couple of major themes emerged. I’ve grouped them together here, mostly for my own clarity. Here’s what I wrote about in 2015:

Algorithms, bias, and decision-making

I spent a lot of time reading, writing, and editing about how humans feel about robots and algorithms, and it culminated in this piece for the June issue of HBR on the subject. Long story short, we’re skeptical of algorithms, but give them a voice and put them inside a robot’s body and we start to become more trusting. If you just want to read about the research on our fear of algorithms, I wrote about that here.

If you read too much about algorithms, you can come away believing that people are pretty hopeless at decision-making by comparison. There’s some truth to that. But another theme I covered this year is just how good some people are at making decisions. I wrote about Philip Tetlock’s latest work, I wrote about his and others’ research on what good thinkers do, I wrote about why people can come to different conclusions about the same data, and then here on this blog I tried to sum it all up and offer an optimistic view on bias and human belief.

Inequality, wages, and labor

I was excited to write more about inequality this year, but along the way some of the most interesting assignments were about the more fundamental question: how do labor markets work? This piece asked that question from the perspective of a CEO considering raising wages. This one compared skills and market power as explanations for inequality.

I also wrote once more about whether robots are going to take all our jobs.

I also wrote a few narrower pieces about inequality. Could profit sharing help? (Yes.) What about just treating workers better? Amazingly, that can help too.

But it’s not all good news. Here’s why I’m skeptical of new rules to help more people get paid for overtime. And what do you do when finance seems to improve the productivity of businesses, but at the expense of workers?

I also had the chance to interview some thoughtful people on these topics, including Larry Summers, Robert Reich, and Commerce Secretary Penny Pritzker.

Policy

One of my favorites from the past year was this essay for The Atlantic about how the welfare state encourages entrepreneurship. I wrote about some of the research underlying that thesis for HBR, too.

A bunch of other stuff

CEOs are luckier than they are smart, and here’s what interim CEOs do differently from permanent ones.

Sometimes distrust makes you more effective.

Scientists require more money if their employer won’t let them publish.

Regulators go easier on socially responsible firms, and the values on your company’s website may matter after all.

Predicting a startup’s success based on idea alone is easier in some industries than others.

Startup “joiners” are sort of like founders, but different.

Companies in happier cities invest more for the long term.

Is it smart to talk back to a jerk boss?

People use instant messaging more in a blizzard.

Sometimes multitasking works.

And media aggregation goes back to the very beginning of American journalism.

Explaining expert disagreement

The IGM Forum recently published its latest poll of economists, and it reminds me of one of the reasons that these polls are so interesting. They illustrate that expert disagreement is seldom a 50/50 split between two diametrically opposed viewpoints. And incorporating these more complicated disagreements into journalism isn’t always easy.

One big, well-known challenge in reporting is to be fair to various viewpoints without resorting to the most naive type of “false balance,” like including the views of a climate change denier just to make sure you have “the other side of the story.”

But false balance isn’t the only complication when reporting on experts’ views on an empirical topic. How do you portray disagreement? Typically, the easiest way is to quote one source on one side of the disagreement, and another source on the other side. But that assumes the expert disagreement is split down the middle, between only two camps.

Sometimes expert opinion really is symmetrical, like economists on the $15 minimum wage and whether it would decrease total employment:

[IGM poll chart: economists on whether a $15 minimum wage would decrease total employment]

This data (from the IGM poll) helps visualize a symmetrical disagreement among experts, arguably the easiest case for reporters to deal with. But even here there’s a subtlety. If you get a source to say a $15 minimum wage will kill jobs, and one to say that it won’t, have you correctly reported on the disagreement among economists?

Sort of. But you’ve left out the single biggest chunk of experts: the ones who aren’t sure. Should you quote an agnostic in your piece? Should you give the agnostic’s arguments more attention, since they represent the most common expert viewpoint? I’ve tried writing stories like this, but making uncertainty into a snappy headline isn’t easy.

Or consider one of my favorite IGM polls, about the long-term effects of the stimulus bill, a subject of endless political debate:

[IGM poll chart: economists on the long-term effects of the stimulus bill]

You can see here that there is disagreement over the effects of the stimulus. But it’s not that a lot of economists think it was worth it, and a bunch think it wasn’t. It’s that a lot of economists think it was worth it, and a smaller but still significant group just aren’t sure.

What’s neat about this is that if you believe the results*, they really can help guide the way a reporter should balance a story on this topic. Obviously, you’ll want to find someone to explain why the stimulus was a good idea. But when you’re searching for a viewpoint to counter that, you don’t actually want to find an opponent, at least if your goal is to faithfully explain the debate between experts. Instead, you want to find someone who has serious doubts, someone who isn’t sure.

The IGM poll demonstrates the complexity of these disagreements, and it serves as a useful heuristic if you’re writing a story about one of these topics. I’m not saying journalists should slavishly allocate space in their stories to exactly match polls of experts. There’s more to good reporting than getting experts’ views right: these polls don’t necessarily give enough weight to the views of specialists in the area in question; even on economic stories there are other experts to consult beyond economists; these polls aren’t the final word on what economists think*; and journalists should consider evidence directly rather than relying exclusively on experts.

Nonetheless, they’re a nice check. If your story gives as much space to stimulus skeptics as to advocates, you’ve probably succumbed to false balance. On the other hand, if it cites only stimulus advocates, maybe there’s room to throw some uncertainty into the mix.

I’m thinking of all of this because of the most recent poll, on the Fed and interest rates:

[IGM poll chart: economists on the Fed raising interest rates]

The Fed is likely going to raise interest rates. What do experts think of that, and how should you report on it? Well, it’s complicated, but polls like this do give you a rough sense of things.

There’s a lot of support for the move — it’s the single biggest position within this group, so it probably should be represented in your story. But the uncertains and the disagrees are almost as big a group, if they’re combined. That’s probably worth a mention, too.

*After I’d finished a draft of this post, Tyler Cowen posted a spirited critique of the latest IGM results. Worth keeping in mind. If only someone could run a meta-survey of how trustworthy experts deem such results!