Jan 26, 2014

Last month, I wrote a piece at HBR about how humans and algorithms will collaborate, based on the writings of Tyler Cowen and Andrew McAfee. The central tension was whether that collaboration would be set up such that algorithms were providing an input for humans to make decisions, or whether human reasoning would be the input for algorithms to make them.

One thing I thought about in the process of writing that piece but didn’t include was the question of whether one of these two models offered humans a higher value role. In other words, are you more likely to be highly compensated when you’re the decider adding value on top of an algorithm, or when you’re providing input to one?

I was initially leaning toward the former, but I wasn’t sure and so didn’t raise the question in the post. But the more I think about it, the more it seems to me that there will be opportunities for highly paid and poorly paid (or even unpaid) contributions in both cases.

Here’s a quick outline of what I’m thinking:

It seems totally possible for the post-algorithm “decider” to be an extremely low-level, poorly paid contribution. I’m imagining someone whose job is basically just to review algorithmic decisions and make sure nothing is totally out of whack. Think of someone on an assembly line responsible for quality control who pulls the cord if something looks amiss. Just because this position in the algorithmic example is closer to the final decision point doesn’t mean it will be high value or well paid.

Likewise, it’s totally possible to imagine pre-algorithm positions that are high value. Given that the aggregation of expert opinion can often produce a better prediction than any expert on his or her own, you can easily imagine these expert-as-algorithmic-input positions as being relatively high value.

Still, the onus is on the experts to truly prove useful in this scenario. If they’re not adding discernible value, waiting in the wings is the possibility that the algorithm could aggregate an even greater range of human judgment — say via social media — cheaply or even for free.

I’m not sure where this leaves us except to say that I don’t see much reason for us to be “rooting” for algorithms to be inputs to humans or vice versa. In all likelihood that is not the right question. The relevant question, and a harder one, is simply how to apply human judgment in a way that enhances our increasingly impressive computational decision-making powers.


Jan 11, 2014

Steve Jobs Announces the iPhone in 2007

One of the most common responses to my post on middle class incomes was to point out the role of technological progress. If the average American family went back in time to 1989, I wrote, they’d make just as much money but work fewer hours to do it. But, some responded, they wouldn’t have iPhones. That isn’t meant to sound trivial, and as someone optimistic about technology I don’t consider it to be. Improvements in technology are an important piece of any conversation about progress. But do they change the story about middle class incomes?

Yes and no.

Short version: All of the data I included adjusted for inflation, which accounts for certain kinds of technological progress but not others. Some new technologies – like the iPhone – aren’t currently captured in that data. Others are. If new technological inventions like the iPhone could be included in common inflation measures, the incomes of the middle class would indeed look at least a bit higher.

Here’s the long version, starting with a short overview of inflation.

Measures of inflation track the price of goods over time, and although it’s technically an oversimplification, you can think of such measures – like the Consumer Price Index (CPI) – as a proxy for the cost of living. If the stuff you need to get by costs, in total, $100 per week today, but next year that same stuff costs $200 per week, you’d need to be making twice as much money just to keep up. So if you hadn’t gotten a raise over the course of that year, an inflation-adjusted (“real”) accounting of your income would say that your income dropped 50%. Inflation-adjusted income measures account for how much stuff costs.
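The arithmetic of that adjustment can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, not actual CPI data:

```python
def real_income(nominal_income, cpi_then, cpi_now):
    """Express a nominal income in base-period dollars."""
    return nominal_income * cpi_then / cpi_now

# Prices double over the year while nominal income stays at $50,000:
# in real terms, your income has dropped by half.
print(real_income(50_000, cpi_then=100, cpi_now=200))  # 25000.0
```

This is exactly the deflation applied to the income figures discussed throughout this post.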

Prices don’t all change together, of course, so the CPI uses a bunch of “baskets” of goods. Food is one part of that. Let’s say the price of apples goes up, but the price of bananas goes down. If those changes average out, from the CPI’s perspective, “prices” haven’t changed. (If this happened, you might choose to buy only bananas for a while, in order to take advantage of the low prices. So this is an example of when the CPI starts to diverge from cost of living. That’s called the substitution effect, and it’s one of the big challenges in measuring inflation, but it’s a bit outside the scope of this post.)
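To make the basket idea concrete, here’s a toy two-good index with made-up prices and equal spending weights (the real CPI uses measured expenditure weights, but the mechanics are the same):

```python
def price_index(old_prices, new_prices, weights):
    """Weighted average of price relatives, base period = 100."""
    return 100 * sum(w * new / old
                     for old, new, w in zip(old_prices, new_prices, weights))

# Apples rise 20%, bananas fall 20%; with equal weights the index is flat,
# even though a banana-heavy shopper's cost of living actually fell.
idx = price_index(old_prices=[1.00, 1.00], new_prices=[1.20, 0.80],
                  weights=[0.5, 0.5])
print(idx)  # 100.0
```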

In theory, technological improvements should be captured in measures of inflation. Say one of the things most people do is send letters, documents, and other information to each other. It used to require going to Staples, buying envelopes, paying to print, then paying for postage, etc. Now you can just email them from a relatively inexpensive computer in your home. The price of sending all this stuff, one of your regular life activities, just got cheaper. Inflation is about measuring prices, so a measure of inflation should capture this price decrease. If the inflation measure captures it, then inflation-adjusted income (like I used in my previous post) would capture the impact of tech.

But in practice, measures of inflation have a really hard time capturing new technologies. To see when inflation does and doesn’t capture technology, let’s go back to the food example.

The kind of technological change that inflation is relatively well set up to track is the kind that results in decreased prices for an existing good. Say a farmer comes up with a new way to grow apples and the result is that the exact same kind of apple you’re used to buying suddenly costs half as much as it used to. The CPI will capture that decrease, and so inflation-adjusted income will reflect the improvement.

But say an agricultural scientist invents some new health shake, unlike any food out there on the market, which provides all your daily calories and nutrients. This counts as a “new good” and inflation measures don’t really have any way to account for it. In practice, if a bunch of people start buying the health shake, after a while the Bureau of Labor Statistics will decide to add it to the CPI and start tracking changes to its price going forward, but this misses the value of the new invention in two respects.

The first, and simpler, problem is that the BLS only updates the CPI’s “baskets” every four years. And for some technologies, prices can drop a lot over that amount of time. So imagine the health shake debuts at $100 per serving, but four years later, by the time the BLS gets around to counting it, it’s going for $20 per serving. That price decrease will be missed.
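Using the hypothetical shake prices above, the part of the decline the index never sees is easy to quantify:

```python
# Hypothetical prices from the health-shake example.
debut_price = 100.0       # price per serving at launch
price_when_added = 20.0   # price by the time the BLS adds it to the CPI

# The CPI starts tracking at $20, so this entire drop is invisible to it.
missed_decline = 1 - price_when_added / debut_price
print(f"{missed_decline:.0%} of the launch price fell before tracking began")
```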

The second issue is trickier. The very act of invention, if the new product is novel enough, is simply not accounted for at all in inflation statistics. Here’s how a report from The National Academies puts it:

Without an explicit decision to change the list of goods to be priced, standard indexing procedures will not pick up any of the effect of such newly introduced items on consumers’ living standards or costs…

…If significant numbers of new goods are continually invented and successfully marketed, an upward bias will be imparted to the overall price index, relative to an unqualified [Cost of Living Index]…

…Proponents of more traditional price index methodologies argue that it is a perversion of the language to argue that the effect of, say, the introduction of cell phones or the birth control pill is to reduce the price level, a result that comes from confusing the concept of a price level with that of the cost of living. Their position is tempered somewhat by the realization that, outside of price measurement, there is nowhere else in the national accounts for such product quality improvements to be included and, as Nordhaus (1998) and others have argued, real growth in the economy is thereby understated.

How would the introduction of a brand new good be translated into a change in price? The idea here is that sometimes a new good comes to market at a price lower than some consumers would have been willing to pay. Our magic shake example comes to market at $100 per serving, but perhaps some consumers would have been willing to pay $200 per serving for it; they just never got the chance because the technologies that make it possible hadn’t yet been invented. This difference represents value that inflation measures won’t catch. (An interesting note for innovation econ nerds: this is less likely to be a problem to the extent you see technological innovation as a demand or “pull” driven process. It’s really supply shocks that will cause big problems for inflation measures.) There are econometric techniques that some experts believe could be used to capture this value, but they are complex, controversial, and not yet in use.
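A back-of-the-envelope version of that uncounted value, again using the made-up shake numbers (real estimates of willingness to pay are much harder to come by):

```python
# Hypothetical numbers from the magic-shake example.
willing_to_pay = 200.0   # what some consumers would have paid per serving
launch_price = 100.0     # what the good actually debuts at

# Per-serving value created by the invention that no price index records.
uncounted_surplus = willing_to_pay - launch_price
print(uncounted_surplus)  # 100.0
```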

To sum up, here’s how to think about it: when Amazon uses better software to make retail more efficient and therefore makes a bunch of consumer products cheaper, that’s captured in our most common measure of inflation. But when a radically new consumer product — like the iPhone — is introduced, some portion of the new value will go uncounted. If the iPhone gets cheaper over the first few years before it is incorporated into the CPI, that value will be lost. But once it is included, improvements in technology that make the iPhone cheaper will be captured.

The result is that inflation-adjusted income measures do fail to account for certain kinds of technological progress. How big is that bias? Best I can tell, we don’t really know. Some have suggested it is sizable, but there is no consensus.

So as for the response — sure, middle class incomes were the same a decade or two ago, for fewer hours worked, but now we have iPhones — it is on to something. It’s perfectly reasonable to point out that certain new tech products are available now and weren’t then, and that income data doesn’t fully capture that. But be careful with this argument. It’s not all new tech that goes un-captured. Lots of the behind-the-scenes increases in efficiency due to tech that result in lower consumer prices are captured, as is at least a portion of the continuing decreases in price for consumer tech products once they’ve been in the market for a while.

So it’s a good point, but a nuanced one.

UPDATE 2/5/14: Martin Wolf at FT nicely captures this in two sentences: “Its price was infinite. The fall from an infinite to a definite price is not reflected in the price indices.”

Jan 09, 2014

Depending on who you ask, the incomes of the American middle class over the past few decades have either a) risen only a little, b) stagnated, i.e. stayed flat, or c) declined. When President Obama declared in his State of the Union speech that family incomes had “barely budged” from 1979 to 2007, The Washington Post called it inaccurate, noting that median household income increased substantially over that period. And yet barely a day goes by without a story that references stagnating wages for the middle class.

So which is it?

The one thing everyone agrees on is that the rich are getting richer much, much faster than anyone else. And so in one sense, it doesn’t much matter whether the answer is (a), (b), or (c). In any case, rising inequality represents a gross misallocation of the nation’s wealth. Still, the possibility that the average American is worse off economically than one or two generations ago makes the issue feel all the more urgent.

Unfortunately, the seemingly simple question of whether Americans are making more money today than in decades past is a bit tricky. The answer depends on the timeframe and the measure.

Short version: The average American family was making modest gains in income over the past few decades, but was working longer hours to do it. Then the recession happened and set the average family back 10 to 20 years.

Now here’s the full story.

The easiest place to start is household “market income”, which just means how much money a household makes before taxes or government transfers are counted. (Importantly, employer-based health care benefits are included in this measure.) Here, via the Congressional Budget Office, is the snapshot over time:

market income CBO

The thing to note here is that the median household income rose nearly 20 percent between 1979 and 2007. That the mean income rose faster hints at the fact that the rich got richer at a much faster rate (see the chart at the top), but nonetheless, seen at this level the story looks like one of modest progress.

Things actually look better still when you consider the impact of taxes and transfers, shown here again via the CBO:

median after tax income CBO

Once taxes and government transfers are accounted for, the median American’s income rose more than 30 percent from 1979 to 2007, making the story of minor progress a bit more progress-y. (It’s this after-tax measure, accounting for taxes and government transfers, that you saw in the chart of overall income inequality at the top of this post.)

There’s one more upside to note: since the average household is smaller than a few decades ago, these gains are slightly larger when size of family is accounted for. Unfortunately, that’s the beginning, not the end, of the story.

It turns out that the median household’s income has only increased because that household has been working more. The New York Times summarizes data from Brookings from 1975 to 2009:

Median wages for two-parent families have grown 23 percent since 1975, after adjusting for inflation. The collective number of hours worked by both parents over the course of a year, however, has risen 26 percent. That means their wages haven’t even grown as much as their working hours would imply they should.
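The Brookings figures quoted above imply that pay per hour actually fell slightly over the period, which a quick calculation makes explicit:

```python
# Brookings figures, 1975-2009: family wages up 23%, hours worked up 26%.
wage_growth = 1.23
hours_growth = 1.26

# Implied change in pay per hour worked.
hourly_change = wage_growth / hours_growth - 1
print(f"{hourly_change:.1%}")  # -2.4%
```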

The increase in hours worked is largely the impact of women entering the workforce. To make that point a bit more clear, we can look at this chart from Brookings:

hours worked by two-parent families, via Brookings

If modest increases in household income for the median family are the result of more hours worked, what do wages look like on an hourly basis? For that we can turn to the Economic Policy Institute:

real hourly wages over time, via EPI

As you can see, the story is one of stagnation since the 1970s, with a modest boost in the late 1990s. This is what stories about “stagnant wages” are talking about. The average American doesn’t make much more for his or her time than in the 1970s. To bring in more income requires working longer hours.

But here’s where it goes from depressing to downright infuriating. That modest increase in household income that the median family earned by working longer hours? Well, not surprisingly, the Great Recession pretty much wiped it out:

median household income since the recession

As for the recovery? Well, in its first year, from 2009 to 2010, the top 1% captured 93% of the income gains, according to Stanford:

top 1% share of income gains, via Stanford

What about since then? According to Pew, the top 7% saw its wealth (not income) rise 28 percent from 2009 to 2011 while the bottom 93 percent of Americans saw their net worth decrease by 4 percent.

wealth change from 2009 to 2011, via Pew

So there you have it. Wages are flat, incomes were up but only because of more hours worked, and then got hammered by the recession. If the average American family could take a time machine back to 1989 they’d make just as much money, and would work fewer hours to make it.

The typical argument as to why we can’t do anything to fix this claims that intervening would jeopardize economic growth. Even if that were true, what’s the good of growth if it doesn’t make anyone richer except the rich? And let’s be clear: that is what economic growth has done.

Here’s where income growth has gone from 1979 to 2007:

distribution of income growth, 1979 to 2007

But remember: that was the pre-recession distribution. It’s only gotten worse.


Nov 20, 2013

The most intuitive way to structure a paywall with respect to premium content — like, say, a longform reported magazine piece or a Snowfall-style multimedia feature — is to offer the cheaper content for free and put the premium stuff behind the gate. I say intuitive simply because it costs you more to produce that stuff; it makes sense, to the extent you’ve decided to charge at all, that you don’t give it away.

But recently I’ve been thinking about the argument for a simpler metered approach in which all content counts equally. It came to mind thanks to this quote from Nate Silver (who wasn’t talking about paywalls at all). Here he is via Nieman Lab:

On balancing features and blogging-style analysis

We see them as two related, familial, but separate content silos. From a practical economic standpoint, one of the wonderful things about blogs, as they were originally invented, was you had relatively low transaction cost for producing a blog post. Not that it doesn’t have to be high quality — but you’re not necessarily spending as much of an editor’s time on it, you’re not going through multiple iterations. It’s more thinking about things in real time.
So in some ways, we want to, on our blog, get back more to what we think are the core differentiating values of blogging, and not this kind of in-between space a lot of news organizations have wound up in where everything became called a blog, and then it became unfashionable, so nothing gets called a blog.

We do make a distinction based mostly on how quickly the content is turned out. What we call a feature is something where it’s assigned, generally in advance, and goes through at least one, maybe multiple rounds of edits.

A blog is something which still has to be very good — and it’s as hard, relatively, to hire bloggers as it is to hire feature writers. It’s something that might get a quick read, and maybe has a little bit more voice, but also saying “this is my thinking in real time,” or my work in progress. How we’ll flesh that out exactly in practice, I’m not sure, but I feel like there is an important distinction to be made between the two.

This got me thinking about the awesomeness of truly good blogging, the way it makes you want to check in every 10 minutes to see if the author has something new to say. It’s why I still want to read everything Kevin Drum, Matt Yglesias, or Tyler Cowen writes.

Now here’s Silver on balancing between loyal readers and broader traffic:

I think with almost any web product you have two types of audience. You have your core, everyday readers and then you have the people you reach out to from time to time. I think that having the right content mix, where you can have big spikes in content by doing something interesting and different from time to time, but also making sure that people who are reading the site every day feel they’re getting a good healthy breakfast, lunch, and dinner everyday, full of FiveThirtyEight content.

All of which made me suddenly reconsider my intuition on paywalls and premium content. I hear Silver making the case for a metered model that treats everything equally. The high quality stuff can “travel” on social, reaching readers who otherwise wouldn’t stop by, and because they haven’t used up their content quota, they can view it for free. It’s essentially a loss leader that attempts to draw in more regular readers. And what those devotees are paying for isn’t high production value or in-depth reporting so much as immediacy and consistency. They want to read all (or lots) of what you put out.
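The mechanics of that metered model are simple enough to sketch. This is purely illustrative; the quota size and the function names are assumptions, not any publisher’s actual implementation:

```python
# A minimal sketch of a meter: every article counts equally against a
# monthly quota, regardless of how expensive it was to produce.
FREE_LIMIT = 10  # hypothetical monthly free-article quota

def can_read(articles_viewed_this_month: int) -> bool:
    """True while the reader is under the free quota; paywalled after."""
    return articles_viewed_this_month < FREE_LIMIT

print(can_read(3))   # an occasional visitor arriving from social gets in
print(can_read(10))  # a daily reader hits the wall, whatever the content
```

Note that the gate depends only on the count, not the content type, which is exactly what distinguishes this model from putting premium pieces behind the wall.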

We see both models today: The New York Times and The Washington Post are metered; The New Yorker makes it harder to get to its premium magazine content than to its blogs. But when I think about my own willingness to pay (or lack thereof), the metered approach strikes me as a bit more plausible, because it pulls out all the stops to build affinity for the brand. Put another way, you’re making it extra difficult to gain paying customers when you put your best products out of reach.

Nov 16, 2013


I recently read Tyler Cowen’s latest book Average is Over, and I’d recommend it to anyone thinking about technology and the future of the economy. It’s a highly readable vision of what the coming age of ubiquitous intelligent machines will mean for workers and the economy. Here’s a bit from Chapter 1 that captures Cowen’s thinking:

Workers more and more will come to be classified into two categories. The key questions will be: Are you good at working with intelligent machines or not? Are your skills a complement to the skills of the computer, or is the computer doing better without you? … If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch. Ever more people are starting to fall on one side of the divide or the other. That’s why average is over.

To be clear: the book is not about whether this is a good or bad thing, or whether its results will be positive or negative. But his articulation of what the world will look like is bleak:

We will move from a society based on the pretense that everyone is given an okay standard of living to a society in which people are expected to fend for themselves much more than they do now. I imagine a world where, say, 10 to 15 percent of the citizenry is extremely wealthy and has fantastically comfortable and stimulating lives, the equivalent of current-day millionaires albeit with better health care.

Much of the rest of the country will have stagnant or maybe even falling wages in dollar terms, but a lot more opportunities for cheap fun and also cheap education. Many of these people will live quite well, and those will be the people who have the discipline to benefit from all the free or near-free services modern technology has made available. Others will fall by the wayside.

(You can read more about what Cowen thinks this will do to U.S. politics in this excerpt at Politico.)

If we accept for the moment that Cowen’s vision of how machine intelligence will transform the economy is basically right, how might we avoid this sad state of political and distributional affairs?

Enter the minimum income. As The New York Times recently explained:

Every month, every Swiss person would receive a check from the government, no matter how rich or poor, how hardworking or lazy, how old or young. Poverty would disappear. Economists, needless to say, are sharply divided on what would reappear in its place — and whether such a basic-income scheme might have some appeal for other, less socialist countries too.

(You can read Cowen on some of the shortcomings here.)

I see a few reasons why a guaranteed minimum income would fit nicely with the future Cowen describes.

1. Supplement the incomes of those unable to compete in the labor force. This is obvious. But the guaranteed minimum income strikes me as a way to maintain the notion of a guaranteed standard of living for all. Moreover, as intelligent machines put pressure on the labor market, tying that standard to work — as we do through the minimum wage — may make less sense.

2. Incentives to work would matter less than they do today. As the Times notes, one of the biggest concerns around a minimum income is that it would serve as a disincentive to work. But that should matter much less in the world Cowen envisions, where unskilled labor is largely displaced by machines. In that world, the fact that some citizens opt not to work would matter less to GDP. Moreover, it would be unlikely to impact the motivations of higher skilled workers, who would set out to earn far more than the guaranteed income provides.* This logic applies both to citizens working despite the option of a guaranteed minimum income and to the disincentive of higher marginal tax rates on the wealthy to support such a program. Don’t buy this? It would be even more true given #3.

3. Increased cultural emphasis on self-motivation will already be necessary. The world Cowen describes prizes self-motivation above almost everything else. If you’re motivated, you’ll take advantage of cheap education to work your way into the most productive echelons of the labor market. In such a world, a cultural focus on increasing self-motivation would be extremely helpful. Moreover, such a cultural emphasis would serve to bolster my arguments in #2, undercutting the argument that a guaranteed minimum income disincentivizes work. In other words, we would accept that financial incentives to avoid work existed, but overcome them in part by becoming a society obsessed with promoting curiosity-based learning and a quest for mastery or “flow.”

4. Some slack in the economy will be necessary to promote art and entrepreneurship. In this hyper-efficient, ultra-competitive world, the creation of a strong safety net would arguably be even more necessary to promote things like entrepreneurship and the arts. Entrepreneurship is mentioned as an argument for the guaranteed minimum income in the Times piece and I think it is a strong argument in two senses. First, a stronger safety net helps to de-risk entrepreneurship; if you forgo a higher income to found a startup and then fail, you’ll at least fall back on some level of comfort. Second, a minimum income would help to subsidize entrepreneurs’ incubation period, as already happens through “entrepreneur-in-residence” programs. If entrepreneurs can eat and live somewhat comfortably while working on their idea (but before it is at the point of making revenue or being attractive to investors), they’ll be more likely to take the plunge.

As for the arts, the tight labor market envisioned in Cowen’s book would put even more negative pressure on the wages of artists. Huge swaths of those without the skills to succeed in complementing machines would seek to become musicians, actors, painters, etc., bidding down wages in those industries. A minimum income would make it possible for artists to do their work.

For all these reasons, I see a guaranteed minimum income as a natural fit with the world Cowen describes. To be clear, I’m not saying I think we’ll necessarily get that world, or that a minimum income is actually a good idea. But as we ponder a world of machine intelligence and a bifurcated labor market, it’s something to at least consider.

Image via Wikipedia

*Even today, I would argue that the most productive workers are largely not motivated by money. Rather, they’re motivated by status, curiosity, and a sense of mastery. To the extent that money is a motivator, it’s largely as a substitute for status.

Nov 16, 2013


There was a piece in Fortune earlier this month with which I strongly disagreed, on the subject of healthcare, technology, and “gamification”. The post centers on a health tech hackathon and, I think, in dismissing the promise of gamification, misses one of the most promising aspects of health IT. Here’s the gist:

Several months ago, I sat in on a case competition at Boston University’s School of Management. The event played out over two days, during which 15 teams of five students from B-schools all over the world — India, South Korea, Canada, but mostly the U.S. — pitched their ideas for a company, one that would revolutionize health care (the stated goal was particularly jargon filled: “to leverage information technology to transform global health care and create value”)…

Immediately, a theme emerged, and the theme was games. “How do we gamify health care?”… As the day wore on, one of the Merck representatives finally asked, in exasperation, “Why would you make a game out of taking a pill? This will never be fun,” which is true…

I happen to think this is a bit needlessly cynical with respect to drug adherence, but the point I want to make is different. The term “health IT” tends to conjure the thought of medical records and the efficiency of medicine more broadly. But one of the most promising areas in my mind, specifically with mobile technology, is in gamifying health.

If you look at what’s driving U.S. healthcare costs, a huge chunk comes from diseases directly caused by poor health behaviors like smoking, overeating, and lack of exercise. As I put it in a post a little over a year ago:

Want to crack healthcare costs? Help at-risk individuals smoke less, drink less, exercise more and eat better.

This is where the potential for gamification lies. (If you don’t like the buzzword, call it behavior modification.) Think of it like this: using a doctor to treat the fact that you eat too much and don’t get enough exercise is a terribly inefficient health plan. You go in every few months, the doctor scolds you for not sticking to your diet and exercise regimen, you go home and don’t change.

The opportunity is to leverage the fact that we now all carry powerful computers connected to the internet with us at all times (in the form of smartphones) to nudge us toward better behavior. This is by no means easy! And for now it’s way worse than the alternative of relying on a mix of social support from family and friends along with willpower and attempts to form better habits. But is it out of the question to think that mobile technology can supplement those things?

Think about RunKeeper, the running app, or GymPact, the workout commitment app, in this context. They’re both, basically, turning fitness into a kind of game, and they’re both using different motivational levers to try and increase your likelihood of exercising. This kind of thing — the good behavior layer — is where the potential for gamification lies. Not in making it more fun to take your pills or to receive a medical diagnosis.

The area that excites me in terms of health technology isn’t revolutionizing medicine, as big a deal as that may be, but revolutionizing health.

Oct 04, 2013

Contrary to what you’ve heard:

music industry revenues over time, via LSE


That’s via a report from the London School of Economics, which concludes:

The music industry may be stagnating, but the drastic decline in revenues warned of by the lobby associations of record labels is not in evidence.

This isn’t to say all is well in the music industry, and it doesn’t speak to the distribution of revenue between artists, studios, platforms like Spotify, etc. But the report points out that the music industry is making exaggerated claims about the harm that piracy is causing, in order to advocate for stronger intellectual property protection. Whatever the challenges faced by musicians, making it harder to remix and share culture isn’t the answer.

Sep 02, 2013


The questions of whether we can trust economists and what economics is good for are back, thanks to a New York Times post last week making an outrageous claim:

The fact that the discipline of economics hasn’t helped us improve our predictive abilities suggests it is still far from being a science

At Bloomberg, economist Justin Wolfers notes that the post contains no actual evidence for that claim (though Wolfers himself does not offer any). My take is that economics almost certainly has improved our predictive ability, but I won’t attempt to make that case here. (I’ll also note that much research into whether experts predict things well tends to focus on the most difficult cases in the field. Related: Wolfers has documented the strong degree of consensus within the economics profession, which doesn’t always come through in the media.)

But apart from all this, it strikes me that whether we can trust economists is an impoverished debate. How to trust economists – how to interact with their claims, when to trust those claims and when not to – is more interesting, relevant, and important. Yes, economists too often take on the role of philosopher or political scientist. Yes, sometimes their biases shine through in their recommendations. Yes, economics doesn’t have all the answers. And yes, some of those answers even turn out to be wrong *cough* Great Moderation *cough*. But plenty of the time, listening to economists is a good idea!

So I thought I’d transcribe the last page of a book called The Assumptions Economists Make, a very pessimistic tour of economic models throughout history by HBS professor Jonathan Schlefer. It’s an entertaining read; Schlefer has a PhD in Political Science, and his quite critical take focuses on the inconsistencies in economic models over time. (If the book has a central flaw it’s that it tends to do battle with these abstractions in the abstract; their usefulness is mostly secondary to their validity.)

Nonetheless, his concluding recommendations are a must-read for anyone grappling with how to trust economists:

  • Economists should transparently describe critical assumptions. These assumptions should be realistic and pertinent to the situations that a particular model seeks to explain.
  • Economists should explain the structure of their models. The structure of a model constitutes the perspective it sheds on some crucial aspects of an economy. Thoughtful individuals should not believe, and policymakers should not use, an unexplained model.
  • However realistic its assumptions, a model stands an excellent chance of ignoring crucial aspects of an economy because, among other things, incorporating them might well make the model too complex to handle. Think what factors it might miss.
  • There are always conflicting models to explain any given aspect of an economy. In looking for practical conclusions, weigh conflicting models.
  • Macroeconomies are incredibly complex. One of the most useful things economists can do is explain publicly what they do not know.

Though many of these are focused on what economists should do, turn them around and you have a nice list of questions to ask economists about their recommendations. The fourth one is particularly important. I’d add that we should ask what data the model successfully does and does not explain.

Taken together, this provides at least the beginnings of a toolkit for politicians, journalists, and citizens to engage with the recommendations of economists. And engagement, ultimately, will be more fruitful than either blind acceptance or dismissal.

Aug 28 2013

A series of recent blog posts and news items – here, here, here, here – have me worried about how Silicon Valley* and the startup world relate to the rest of culture and society. There’s a reason Wall Street has a bad name; Silicon Valley shouldn’t have to end up like that. Part of it is just not saying really silly, out-of-touch stuff. But an even bigger part has to do with what startup culture is really about.

I’m only an observer, so far be it from me to really define what startup culture is or isn’t. But it seems to me there are a few related factors that are emphasized to different degrees by different people and groups. They are: building new things; adding value; and making money.

In an ideal world, all three fit together. But it should be obvious that it’s possible to make money without really adding any value to the world, just as it’s possible to create something that’s new but doesn’t add value. And it’s the value-add piece of the startup and tech culture that I really think needs to be emphasized. I love hearing that community talk about their obsession with solving difficult problems, with providing utility for their users. And I actually think, considering how much money flows through startup land, the culture there does at least an OK job of not making it all about financial success.**

So what I’m a bit concerned about is a culture that’s overly obsessed with newness. Drop that heuristic – that new is good – into most places of the world and it’d be an improvement. But with the cost of creating something new now so low (at least in the software space), might Silicon Valley culture be over-emphasizing the desirability of merely bringing new things into the world?

A friend asked me recently what the latest wave of internet-enabled technology had actually done to improve the world, and I had to think a bit longer than I was comfortable with. I think there are compelling answers, but we have to provide them, and argue for them. It’s not enough to simply assume that new equals better.

If Silicon Valley wants to avoid sharing Wall Street’s reputation, it’s going to need to think about how it relates to the rest of society. In doing so, I recommend emphasizing a culture of solving problems and adding value, rather than just focusing on what’s new.

*I’m using ‘Silicon Valley’ more to refer to startup culture broadly than to the geography

**Maybe my view from the East Coast is skewed on this one and it’s not as good as I think

Aug 26 2013

In the spirit of blogging more regularly, I just want to quickly flag something Fred Wilson said this month, though I don’t have the time to give it the attention I’d like to. Here he is, via Business Insider:

However, now that Android is winning, he [Wilson] says, “My new worry is that Android could run the table.”

After seeing that Google has 80% of the smartphone market, and more hip tech snoots are into Android, he’s worried that Google will completely control the smartphone market.

He’s hoping Apple releases a truly low-cost iPhone to gain marketshare.

“I find myself rooting hard for Apple now. I sense the danger they are in and I don’t want either smartphone OS to be so dominant that we lose the level playing field we have now,” says Wilson. “It’s very important for startups, innovation, and an open mobile ecosystem for all.”

Business Insider goes on to zero in on the idea that Apple could be in trouble. I want to touch on something else.

I like Wilson’s writing a lot, but I am very confused by the implication, as I read this, that more competition in the smartphone OS market will mean a more open mobile ecosystem. Really? Apple is historically the antithesis of open. And while Android isn’t Linux, there is a very real sense in which it is more open than iOS.

Put another way, “open” and “competitive” are distinct properties, though more of either tends to mean less control by any single company in a given market. “Open” is vague and means different things in different contexts, so let’s go to Tim Wu’s definition:

First, “open” and “closed” can refer to how permissive a tech firm is, with respect to who can partner with or interconnect with its products to reach consumers. We say an operating system like Linux is “open” because anyone can design a device that runs Linux.

We can agree, hopefully, that by this definition more Apple market share is unlikely to mean more “openness” in mobile.

That doesn’t mean there’s nothing good about a more competitive landscape, or nothing to be feared by Google’s dominance. Ideally, I’d like to see relatively more open systems like Android push out more closed systems like iOS, but in a way where no single company has inordinate control of the open system(s) that are left. That’s not how things are looking with Android.

So we can argue about whether we need Apple and iOS as a counterweight to Google, and whether such a counterweight is good for startups and for innovation. But I don’t see the case for an increase in Apple’s market share leading to more openness.