Apr 26, 2014

With the FCC reportedly considering allowing paid “fast lanes” for internet traffic, the principle of net neutrality looks more at risk than ever. One of the big concerns of net neutrality advocates is that its absence might empower incumbent firms over newer, smaller, more innovative ones. That is a very valid and important concern.

But small firms represent only one sort of innovator, and arguably not the one most at risk from pay-to-play operations.

Here are a couple of examples of the protect-the-startups meme in recent coverage. From the NYT:

Consumer groups immediately attacked the proposal, saying that not only would costs rise, but also that big, rich companies with the money to pay large fees to Internet service providers would be favored over small start-ups with innovative business models — stifling the birth of the next Facebook or Twitter.

And here’s an editorial from The Financial Times, arguing that net neutrality may no longer be the right goal:

The fine detail of the FCC’s decision will matter. The regulator will have to ensure its reforms do not create barriers to entry for small and innovative companies – the internet giants of the future.

At The New Yorker, net neutrality advocate and media scholar Tim Wu goes a bit broader:

We take it for granted that bloggers, start-ups, or nonprofits on an open Internet reach their audiences roughly the same way as everyone else. Now they won’t.

To the extent that we can protect innovative new firms from being crushed by incumbents before they get off the ground, that’s great. But the next Facebook or Twitter, while at risk, also has the ability to raise capital and spend it on faster content delivery. (The details of the FCC regulations aren’t yet clear but it sounds like there will be a requirement that similar pay-to-play offers be available to all comers.)

An even bigger risk, then, is to non-professional content producers and to peer-to-peer, commons-based production. The bloggers Wu mentions could fall into this category, though if they’re using a proprietary platform like Tumblr or Medium they might not. A peer-to-peer project like Wikipedia has its nonprofit arm, but little ability to raise the capital necessary to ensure delivery. Less organized peer production efforts would be at even greater risk. A distributed network of independent bloggers might produce great content, but that content will be delivered more slowly than content produced by professionals, or by amateurs who’ve bought into a commercial platform. Suddenly, peer production is at a huge disadvantage relative to commercial production unless it has the weight of a commercial enterprise behind it.

The promise of large-scale production outside of firms or governments, from open source software to Wikipedia to independent blogging, was once one of the greatest promises of the internet. And it is even more at risk from the legalization of pay-to-play than are startups. Sure, incumbents might lean on startups who can’t afford to pay for faster delivery. But just as worrying is the thought that startups might raise venture capital to pay for faster delivery in order to crowd out commons-based peer production.

The net neutrality debate isn’t just about small vs. big. It’s also about commercial vs. the commons.

Apr 5, 2014

NYT Now

There are few if any media outlets that can really go up against the big social networks and have a prayer of stealing away attention. The New York Times might be an exception.

When I first heard about NYT Now I didn’t think twice. It seemed like yet another addition to an already complicated, expensive offering. And its name suggested the reason I didn’t need it: speed is not the primary thing I’m looking for in consuming the Times’ content.

But a piece at Nieman Lab has me rethinking my skepticism:

NYT Now can be seen in part as an Empire Strikes Back play: It aims to take readership back from Twitter and Facebook.

In most cases, for most publications, this will be a losing battle. Still, I can’t help but feel that the moment is ripe for some modest progress here, and The New York Times might be the ones to do it.

I’m not the only one backing away from the social platforms, turned off by the chattering torrent therein. It seems harder every day to maintain a decent signal-to-noise ratio, which in theory is something the platforms themselves could change. They are developing better filters, and will continue to do so. But with their entire business strategies hinging on more eyeballs on more content for longer, they have a hard time actually making progress on this problem. Essentially, I want tools that make it easier for me to spend less time on Twitter and still find all that I want. That’s in conflict with Twitter’s business plan.

Of course, The Times wants eyeballs on its content for as long as possible, too. But the fact that its business model now includes a subscription component helps here. Once I’m paying for the content, the economics of providing a product I’m not constantly, obsessively checking work better. So I’m hopeful that The Times might successfully offer a news feed app that works without being a hopelessly addictive time suck.

A big piece here is that NYT Now plans to include some stories from elsewhere, overcoming one of the biggest barriers to news apps in general, which is that no single publication can ever have all the content you want to read.

There’s also the price. My hesitation in paying for The Times is well documented, and can be summarized as:

1) As much as I like the Times, I don’t need it. And I’d rather be asked to support good journalism than forced to pay for it.

2) If I am going to pay money to support good journalism, I want to know that my money is going directly to that cause.

I wish The Times were structured more like The Guardian, with an endowment funding its efforts. But that’s not the case, and I’m more amenable to paying for it than I was a couple years ago, for various reasons. But at nearly $9/week, the price for the full digital subscription is still high for me.

My basic benchmark in terms of what feels reasonable is Netflix and Spotify: the $7-10/month range. Sure enough, NYT Now falls squarely in that range, at $2/week. That’s getting cheap enough that I might pay merely to support the paper’s mission.

The final reason I’m excited is that I’ve found Circa’s Android app more satisfying than I would have thought, in large part because it is sparing with its notifications. (It seems to push out only truly major news, as opposed to The Times, which pushes alerts about the Final Four.) The presentation in Circa is so clean and condensed that, for the first time, I’m inclined to see real value in a news app above and beyond its content; previously it had always seemed that an RSS reader or Twitter handled the app layer just fine.

For all these reasons, I could see NYT Now working for me (once it comes out for Android). It’s a relatively inexpensive way to support good journalism, and a less noisy way to stay on top of the news than social media. And The Times has finally learned the lesson that aggregation doesn’t dilute the brand. I may finally have found a news app I want to pay for.

(Note: here’s another Nieman review.)

Apr 4, 2014

One of the most interesting bits of the debate over high frequency trading sparked by Michael Lewis’s new book is the question of why we should care that some Wall Street firms are ripping off other ones. One reason might be if Main Street’s money is disproportionately tied up in the funds that are getting ripped off. That raises the question of just who has how much money in the stock market. Not surprisingly, the vast majority of stock is owned by the wealthy.

Start with this overview of the stock market via Business Insider:

(Chart: who owns the stock market, November 2012)

The first thing to note here is how small the pension fund slice is, and how it has shrunk over time. So pensions make up a relatively small slice of stock ownership, but the average American still might be holding stock via a mutual fund or as an individual, right? Well, here’s a look at the percentage of Americans at each income level that own any stock or bonds, including via a mutual fund:

(Chart: percentage of Americans owning stocks or bonds, by income level)

And here’s a look at how much Americans have put away for retirement, by income level:

(Chart: retirement savings by income level)

As for the households piece, fewer and fewer Americans are investing in the stock market:

(Chart: share of Americans invested in the stock market over time)

So can we put this all together to get a sense of how much of the stock market is owned by whom? Here’s one attempt, which goes beyond stock alone, from the Institute for Policy Studies via ThinkProgress:

(Chart: distribution of asset ownership by wealth level, Institute for Policy Studies via ThinkProgress)

And here’s confirmation from The New York Times that the distribution holds for stock specifically:

The richest 10 percent of households own about 90 percent of the stock, expanding both their net worth and their incomes when they cash out or receive dividends.

The point is clear: the vast majority of stock is owned by the rich, even once you take into account retirement funds. Of course, that doesn’t mean we shouldn’t care about the stock market. Among other reasons, the small amount of money the non-rich have invested there represents a decent chunk of their wealth.

But it’s nonetheless worth keeping this distribution in mind when talking about who is ripping off whom in the stock market. For most Americans, the great economic injustice is stagnant wages, not high frequency trading.

UPDATE: Ben Walsh vs. Michael Lewis on stock ownership as it relates to HFT.

 

 

Mar 19, 2014

Interviewed by Goldman Sachs, Eric Posner comes to much the same conclusion as I did in my previous post on the attendant fees:

Eric Posner: I think there could be some advantages of using Bitcoin over existing payment systems, but these advantages are not as obvious as they might seem. For example, probably the most compelling advantage is that Bitcoin transactions seem to be cheaper. Existing payment systems are often quite expensive either because somebody effectively has a monopoly, there are a lot of government regulations that are costly to comply with, or the companies that offer these services provide certain protections that people want and are willing to pay for. In the case of Bitcoin as it stands now, these costs are largely avoided, at least to the extent that you can technically send bitcoins from one wallet to another wallet without incurring fees; no middlemen are required to do this. The problem is that most people will end up relying on intermediaries when they use bitcoin, not in least part due to security concerns around storing bitcoin on hard drives that can crash, be hacked, or, as in one famous case, thrown away. Most people will buy bitcoins from exchanges and use bitcoin service providers like Coinbase or Bitpay to store their bitcoins and transfer money to somebody in another part of the country or the world. Then that person will maintain their bitcoins with a service provider and/or will convert the bitcoins back into the money they use. And perhaps the same or other intermediaries will provide insurance or protection from exchange rate volatility. When you throw in all of these things, the effective price of using bitcoin is going to be greater than zero. Is it going to be as much as it costs right now to use your credit card or a bank wire? Maybe not, but it is too soon to tell.

Feb 28, 2014

As excitement over Bitcoin as a new form of currency has met with strong pushback from the economics world, the smarter commentators have shifted focus to cryptocurrency as a new way to move money around on the Internet. But how would that really work? The first thing you hear in these discussions is that Bitcoin transactions bypass traditional credit card fees. My first question was why? Specifically, isn’t at least part of the point of those fees to cover necessary services like fraud protection that will need to be implemented with Bitcoin as well? I asked Bitcoin entrepreneur Jeremy Allaire about this in an interview for HBR, and he gave a plausible answer. Basically, with Bitcoin no single entity bears the cost of clearing the transaction or maintaining the network to do so. Companies like Allaire’s Circle still need to provide anti-theft protection, and so it’s far from certain that the economics for Bitcoin will be superior, but for the purposes of this piece I’ll grant his assumption. As we’ll see, things only get more complicated from there.

So my starting point is the assumption that a Bitcoin transaction is cheaper than a (non-cash) dollar transaction. Could you build a payment app that just uses Bitcoin to exchange dollars, similar to how Business Insider’s Joe Weisenthal describes, and if so would it be cheaper? Here’s Weisenthal:

Bitcoin is fundamentally a way to make transactions in a fiat currency. If you want to sell me something for $850, I could pay you in cash, credit card, via PayPal, bank wire, or possibly Bitcoin. How many Bitcoins this transaction requires (currently it would be right around one) is a function of fluctuating Bitcoin prices, but essentially we’re carrying out a dollar-priced transaction and using the Bitcoin as the payment system.

You could build such an app, but you’d quickly run up against another kind of fee. If I want to buy $10 worth of something from a merchant with dollars, but using Bitcoin, the app needs to make two currency conversions. First, you need to take my $10 and convert it into Bitcoin, for which you’ll be charged a fee. Then you need to take the x Bitcoin that is transferred to the merchant and convert it back into dollars. So you effectively have two conversions here, each with a fee that is roughly comparable to a credit card transaction’s. So you’re not really saving any money.
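To make that concrete, here is a minimal sketch of the round-trip arithmetic. The 1% conversion fee on each leg is an assumption for illustration, not a quote from any actual exchange or processor.

```python
# Hypothetical round trip: dollars -> bitcoin -> dollars for a $10 purchase.
# Both fee rates are made-up placeholders, not real exchange or processor fees.

def round_trip_fees(amount_usd, buy_fee_rate=0.01, sell_fee_rate=0.01):
    """Total conversion fees on a USD -> BTC -> USD round trip."""
    buy_fee = amount_usd * buy_fee_rate    # my side: converting dollars into bitcoin
    sell_fee = amount_usd * sell_fee_rate  # merchant's side: converting bitcoin back to dollars
    return buy_fee + sell_fee

print(round_trip_fees(10.00))  # 0.2 -- two fees, roughly comparable to a card's percentage cut
```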

You might think that a way around this would be to use batching — as an example, think of the way people put money onto Starbucks gift cards. I put on $50, which requires a one-time conversion fee to turn into Bitcoin, and then I make several purchases with Bitcoin over time. This does save you some money, since the Bitcoin transactions are cheaper than credit card transactions (per our assumption) and since converting all $50 into Bitcoin at once means paying the conversion fee only once. (This is because interchange fees and currency-conversion fees both involve a mix of flat and percentage-based components. So if I do one fee-based transaction in the beginning and then a bunch of cheap or free Bitcoin transactions, I’m saving a bit relative to the flat fees that would be assessed on each transaction done in dollars.)
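A quick sketch of why batching helps: the savings come from paying the flat part of the fee once instead of on every purchase. The $0.30-plus-2.9% structure below is just a stand-in for a typical card-style fee, not anyone’s actual pricing.

```python
# Compare paying a flat-plus-percentage fee on every purchase vs. once on a $50 load.
# The fee structure is an illustrative assumption (card-style: flat + percentage).

FLAT = 0.30   # hypothetical flat fee per fee-bearing transaction
PCT = 0.029   # hypothetical percentage fee

def fee(amount):
    return FLAT + PCT * amount

purchases = [4.50, 3.25, 6.00, 5.75, 4.00, 4.50, 5.00, 2.00, 5.00, 10.00]  # $50 in total

per_purchase_fees = sum(fee(p) for p in purchases)  # fee assessed on each dollar transaction
batched_fee = fee(sum(purchases))                   # one fee to load $50, then cheap/free BTC transfers

print(round(per_purchase_fees, 2))  # 4.45
print(round(batched_fee, 2))        # 1.75 -- the percentage part is unchanged; only the flat fees get lumped
```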

But there’s a problem here, too. Why not just scrap Bitcoin and batch payments in dollars? The savings in the above case are really coming from lumping together transactions to minimize flat fees — but we don’t need Bitcoin to do that. Indeed, the mobile payment startup LevelUp does this, as best I can tell, on a monthly basis. (An email from the company says it’s “to help local businesses save on card processing fees.”) Bitcoin does have the advantage of verifying at each transaction that the buyer actually has the money, so you don’t get to the end of a batch period and find out someone can’t pay. But the fact that LevelUp is using this approach suggests that the advantage might be quite small.

So we’re really back to square one, or at least close. Unless currency conversion fees are lower than interchange fees, I have trouble seeing how Bitcoin amounts to a cheaper way to move dollars around the Internet. In other words, if Bitcoin doesn’t work as a currency, I struggle to see how it will work as a payment protocol. Of course, it’s early days and all that. Mostly, I’d really like to read a detailed account of how Bitcoin-as-payment-protocol would work, so please point me in the right direction.

Addendum

In the process of thinking all this through, I came up with an interesting hypothetical. Assume for a minute that the fixed supply of Bitcoin suggests its value will appreciate over time (and put aside volatility for a moment). Imagine you built a digital payment app that used Bitcoin but didn’t bill itself as being about Bitcoin at all.

Here’s how it would work: I put $20 onto the app, which behind the scenes is immediately exchanged for Bitcoin. But rather than telling me I have x Bitcoin to spend, you simply guarantee I can spend $20. I spend that money over time — say a month — during which the value of Bitcoin appreciates. Because of that, once I’ve spent my $20 there’s still some Bitcoin left over. And because this isn’t a Bitcoin app — I don’t know or care that that’s how you’re processing my payments — you just go ahead and pocket that residual to cover fees and turn a profit.
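A toy version of the arithmetic, with invented prices (and the spending collapsed into a single later date for simplicity):

```python
# Sketch of the "invisible Bitcoin" app: the user loads dollars, the app holds bitcoin,
# and if the price rises before the dollars are spent, the leftover bitcoin is the
# app's margin. Prices here are invented; real spending would happen at many prices.

def residual_btc(deposit_usd, price_at_deposit, price_at_spend):
    btc_bought = deposit_usd / price_at_deposit  # BTC acquired when the user loads $20
    btc_needed = deposit_usd / price_at_spend    # BTC it takes to cover $20 of spending later
    return btc_bought - btc_needed               # leftover BTC the app keeps (negative if the price fell)

leftover = residual_btc(20.00, price_at_deposit=500.00, price_at_spend=550.00)
print(round(leftover, 4))           # ~0.0036 BTC left over
print(round(leftover * 550.00, 2))  # ~2.0 dollars of residual value at the later price
```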

Of course, the company behind the app needs to worry about the loss it would face if the price of Bitcoin goes down over that period. But if you believe there’s a fundamental deflationary bias in Bitcoin, that suggests that to the extent that it is used, its value will appreciate. Moreover, it may be advantageous to have firms rather than individuals taking on the risk around Bitcoin’s price volatility, in both directions. I don’t have to worry (much) since my dollars are guaranteed so long as the app company is solvent.

What really interests me in this scenario is the fact that the deflationary spiral that supposedly occurs when a currency appreciates over time is driven by hoarding. If you know your money will be worth lots more tomorrow, you’re unlikely to spend it. But in this case, I don’t really even know that I’m using Bitcoin so I’m unlikely to hoard it. So demand keeps ticking along. Could such a dynamic help mute the recessionary tendencies of a fixed supply currency?

No doubt there is a reason why this whole thing wouldn’t work — I look forward to hearing folks’ thoughts as to why not.

Feb 26, 2014

It’s been interesting to see the debate over Bitcoin start with currency, move on to payments, and now produce rumblings that its real use isn’t either one. Rather, cryptocurrency could be a way of marking digital goods as unique, making them, in effect, rival goods. Technology Review explains:

Or take digital art. Larry Smith, a partner at the business architecture consultancy The matix and an analyst with long experience in digital advertising and digital finance, asks us to “imagine digital items that can’t be reproduced.” If we attached a coin identifier to a digital image, Smith says, “we could now call that a unique, one-of-a-kind digital entity.” Media on the Internet—where unlimited copying and sharing has become a scourge to rights holders—would suddenly be provably unique, permanently identified, and attached to an unambiguous monetary value.

This would almost certainly be bad news for the Internet. The ability of users to copy digital content has put necessary pressure on rights holders to lower costs, change business models, and innovate. We live under a ludicrous intellectual property regime in which rights holders lobby their way to ever-extending copyright terms. Making it easier for rights holders to enforce copyright online would result in more expensive content and would take the pressure off both improved digital distribution models and intellectual property reform. Put simply, the non-rivalry of digital goods is one of the things that has made the Internet such a boon for consumers. If cryptocurrency undoes that, it would be a shame.

Feb 22, 2014

Recall the classic utilitarian morality puzzle (via Wikipedia):

There is a runaway trolley barrelling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. Unfortunately, you notice that there is one person on the side track. You do not have the ability to operate the lever in a way that would cause the trolley to derail without loss of life (for example, holding the lever in an intermediate position so that the trolley goes between the two sets of tracks, or pulling the lever after the front wheels pass the switch, but before the rear wheels do). You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

How should we program robots to answer this question? Specifically, what about self-driving cars? Should they be programmed to injure or kill their driver in order to save many others? The question is raised at minute three of this short video on robots and ethics. The whole video is worth your time.
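Just to make the programming question concrete, here is a deliberately naive sketch of a purely utilitarian rule. It’s a toy, not a real control policy, and the casualty estimates are invented; the point is that even this one-liner quietly decides that the car’s own occupant counts no more than anyone else, which is exactly the choice the video asks about.

```python
# A naive utilitarian decision rule: pick the action with the fewest expected deaths,
# giving the vehicle's occupant no special weight. Purely illustrative.

def choose_action(expected_deaths):
    """expected_deaths: dict mapping each possible action to its expected death toll."""
    return min(expected_deaths, key=expected_deaths.get)

print(choose_action({"stay_on_course": 5, "swerve_into_barrier": 1}))  # 'swerve_into_barrier'
```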

Jan 30, 2014

It’s no secret that I’m a bit obsessive about my news habits and filters, manifested most obviously in constant tweaking of my RSS feeds but also in who I follow on Twitter, what email newsletters I allow into my inbox, etc. The challenge in these adjustments is always to strike a balance between drinking from a firehose and a strange kind of fear of missing out. Call it the Goldilocks news filter problem: how much is just right?

Today, the challenge is much more about avoiding the allure of too much and too often than of too late or too little. Like BuzzFeed’s Charlie Warzel, I recently went on a big unfollow spree on Twitter, and beyond that I use private lists to further filter out the noise. But more than that, I’ve lately found a real increased affinity for email, as well as an undiminished love for RSS. Specifically, I’m finding that daily is, in almost every case, a regular enough cadence for following the topics I care about.

Part of this is professional. Last summer I moved from a job that included following breaking news and writing about it as quickly as possible to one that is less dependent on the latest developments. But that’s just a piece of it. Even for topics that I don’t write about, I’m finding daily to be roughly the right approach.

Every morning I read three email newsletters and sort through 15 RSS feeds, mostly from media organizations, plus another couple dozen less frequent feeds from individual bloggers. For some items I simply skim the headline and mark them as read; some I open and read; some I save to Pocket for later.

Sure, I browse Twitter at points throughout the day, but less than I used to. Mostly, I’m finding, things can wait until the next day.

Jan 26, 2014

Last month, I wrote a piece at HBR about how humans and algorithms will collaborate, based on the writings of Tyler Cowen and Andrew McAfee. The central tension was whether that collaboration would be set up such that algorithms were providing an input for humans to make decisions, or whether human reasoning would be the input for algorithms to make them.

One thing I thought about in the process of writing that piece but didn’t include was the question of whether one of these two models offered humans a higher value role. In other words, are you more likely to be highly compensated when you’re the decider adding value on top of an algorithm, or when you’re providing input to one?

I was initially leaning toward the former, but I wasn’t sure and so didn’t raise the question in the post. But the more I think about it, the more it seems to me that there will be opportunities for highly paid and poorly paid (or even unpaid) contributions in both cases.

Here’s a quick outline of what I’m thinking:

(Diagram: high- and low-value roles, before and after the algorithm)

It seems totally possible for the post-algorithm “decider” to be an extremely low-level, poorly paid contribution. I’m imagining someone whose job is basically just to review algorithmic decisions and make sure nothing is totally out of whack. Think of someone on an assembly line responsible for quality control who pulls the cord if something looks amiss. Just because this position in the algorithmic example is closer to the final decision point doesn’t mean it will be high value or well paid.

Likewise, it’s totally possible to imagine pre-algorithm positions that are high value. Given that the aggregation of expert opinion can often produce a better prediction than any expert on his or her own, you can easily imagine these expert-as-algorithmic-input positions as being relatively high value.
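As a toy illustration of that aggregation point (entirely simulated numbers, not real forecasts): if each expert’s estimate is noisy but roughly unbiased and the errors are independent, the average of many estimates tends to land much closer to the truth than a typical individual does.

```python
# Simulate many noisy, unbiased expert estimates of the same quantity and compare
# a typical individual's error to the error of the simple average. Synthetic data.

import random

random.seed(0)
TRUTH = 100.0
estimates = [TRUTH + random.gauss(0, 10) for _ in range(50)]  # 50 experts, each noisy but unbiased

typical_individual_error = sum(abs(e - TRUTH) for e in estimates) / len(estimates)
aggregate_error = abs(sum(estimates) / len(estimates) - TRUTH)

print(round(typical_individual_error, 1))  # usually around 8
print(round(aggregate_error, 1))           # usually much closer to 0
```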

Still, the onus is on the experts to truly prove useful in this scenario. Because if they’re not adding discernible value, waiting in the wings is the possibility for the algorithm to aggregate an even greater range of human judgment — say via social media — that could be done cheaply or even for free.

I’m not sure where this leaves us except to say that I don’t see much reason to be “rooting” for algorithms to be inputs to humans or vice versa. In all likelihood that’s not the right question. The relevant question, and a harder one, is simply how to apply human judgment in a way that enhances our increasingly impressive computational decision-making powers.

 

Jan 11, 2014

Steve Jobs Announces the iPhone in 2007

One of the most common responses to my post on middle class incomes was to point out the role of technological progress. If the average American family went back in time to 1989, I wrote, they’d make just as much money but work fewer hours to do it. But, some responded, they wouldn’t have iPhones. That isn’t meant to sound trivial, and as someone optimistic about technology I don’t consider it to be. Improvements in technology are an important piece of any conversation about progress. But do they change the story about middle class incomes?

Yes and no.

Short version: All of the data I included adjusted for inflation, which accounts for certain kinds of technological progress but not others. Some new technologies – like the iPhone – aren’t currently captured in that data. Others are. If new inventions like the iPhone could be included in common inflation measures, the incomes of the middle class would indeed look at least a bit higher.

Here’s the long version, starting with a short overview of inflation.

Measures of inflation track the price of goods over time, and although it’s technically an oversimplification, you can think of such measures – like the Consumer Price Index (CPI) – as a proxy for the cost of living. If the stuff you need to get by costs, in total, $100 per week today, but next year that same stuff costs $200 per week, you’d need to be making twice as much money just to be keeping up. So if you hadn’t gotten any raise over the course of that year, an inflation-adjusted (“real”) accounting of your income would say that your income dropped 50%. Inflation-adjusted income measures account for how much stuff costs.
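In code, the adjustment is just a ratio of price levels; the index values below are invented to match the doubling example above.

```python
# Minimal sketch of inflation adjustment: deflate a nominal income by the change in
# a price index. The index values are hypothetical.

def real_income(nominal_income, cpi_base, cpi_current):
    """Express a nominal income in base-period dollars."""
    return nominal_income * (cpi_base / cpi_current)

# Prices double (index 100 -> 200) while pay stays at $50,000: purchasing power is halved.
print(real_income(50_000, cpi_base=100.0, cpi_current=200.0))  # 25000.0
```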

Prices don’t all change together, of course, so the CPI uses a bunch of “baskets” of goods. Food is one part of that. Let’s say the price of apples goes up, but the price of bananas goes down. If those changes average out, from the CPI’s perspective, “prices” haven’t changed. (If this happened, you might choose to buy only bananas for a while, in order to take advantage of the low prices. So this is an example of when the CPI starts to diverge from the cost of living. That’s called the substitution effect, and it’s one of the big challenges to measuring inflation, but it’s a bit outside the scope of this post.)

In theory, technological improvements should be captured in measures of inflation. Say one of the things most people do is send letters, documents, and other information to each other. It used to require going to Staples, buying envelopes, paying to print, then paying for postage, etc. Now you can just email them from a relatively inexpensive computer in your home. The price of sending all this stuff, one of your regular life activities, just got cheaper. Inflation is about measuring prices, so a measure of inflation should capture this price decrease. And if the inflation measure captures it, then inflation-adjusted income (like the figures I used in my previous post) would capture the impact of tech.

But in practice, measures of inflation have a really hard time capturing new technologies. To see when inflation does and doesn’t capture technology, let’s go back to the food example.

The kind of technological change that inflation is relatively well set up to track is the kind that results in decreased prices for an existing good. Say a farmer comes up with a new way to grow apples and the result is that the exact same kind of apple you’re used to buying suddenly costs half as much as it used to. The CPI will capture that decrease, and so inflation-adjusted income will reflect the improvement.

But say an agricultural scientist invents some new health shake, unlike any food out there on the market, which provides all your daily calories and nutrients. This counts as a “new good” and inflation measures don’t really have any way to account for it. In practice, if a bunch of people start buying the health shake, after a while the Bureau of Labor Statistics will decide to add it to the CPI and start tracking changes to its price going forward, but this misses the value of the new invention in two respects.

The first, and simpler, problem is that the BLS only updates the CPI’s “baskets” every four years. And for some technologies, prices can drop a lot over that amount of time. So imagine the health shake debuts at $100 per serving, but four years later, by the time the BLS gets around to counting it, it’s going for $20 per serving. That price decrease will be missed.
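A tiny sketch of how much gets missed under those hypothetical numbers:

```python
# If the health shake debuts at $100 per serving but isn't added to the CPI until it
# costs $20, the index never sees the early decline. The prices are hypothetical.

debut_price = 100.00
price_when_added = 20.00

missed_decline = (debut_price - price_when_added) / debut_price
print(f"{missed_decline:.0%} of the price drop happens before the index starts tracking it")  # 80%
```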

The second issue is trickier. The very act of invention, if the new product is novel enough, is simply not accounted for at all in inflation statistics. Here’s how a report from The National Academies puts it:

Without an explicit decision to change the list of goods to be priced, standard indexing procedures will not pick up any of the effect of such newly introduced items on consumers’ living standards or costs…

…If significant numbers of new goods are continually invented and successfully marketed, an upward bias will be imparted to the overall price index, relative to an unqualified [Cost of Living Index]…

…Proponents of more traditional price index methodologies argue that it is a perversion of the language to argue that the effect of, say, the introduction of cell phones or the birth control pill is to reduce the price level, a result that comes from confusing the concept of a price level with that of the cost of living. Their position is tempered somewhat by the realization that, outside of price measurement, there is nowhere else in the national accounts for such product quality improvements to be included and, as Nordhaus (1998) and others have argued, real growth in the economy is thereby understated.

How would the introduction of a brand new good be translated into a change in price? The idea here is that sometimes a new good comes to market at a price lower than what some consumers would have been willing to pay. Our magic shake example comes to market at $100 per serving, but perhaps some consumers would have been willing to pay $200 per serving for it; they just never got the chance, because the technologies that make it possible hadn’t yet been invented. This difference represents value that inflation measures won’t catch. (An interesting note for innovation econ nerds: this is less likely to be a problem to the extent you see technological innovation as a demand or “pull” driven process. It’s really supply shocks that will cause big problems for inflation measures.) There are econometric techniques that some experts believe could be used to capture this value, but they are complex, controversial, and not yet in use.

To sum up, here’s how to think about it: when Amazon uses better software to make retail more efficient and therefore makes a bunch of consumer products cheaper, that’s captured in our most common measure of inflation. But when a radically new consumer product — like the iPhone — is introduced, some portion of the new value will go uncounted. If the iPhone gets cheaper over the first few years before it is incorporated into the CPI, that value will be lost. But once it is included, improvements in technology that make the iPhone cheaper will be captured.

The result is that inflation-adjusted income measures do fail to account for certain kinds of technological progress. How big is that bias? Best I can tell, we don’t really know. Some have suggested it is sizable, but there is no consensus.

So as for the response — sure, middle class incomes were the same a decade or two ago, for fewer hours worked, but now we have iPhones — it is on to something. It’s perfectly reasonable to point out that certain new tech products are available now that weren’t then, and that income data doesn’t fully capture that. But be careful with this argument. It’s not all new tech that goes uncaptured. Lots of the behind-the-scenes increases in efficiency due to tech that result in lower consumer prices are captured, as is at least a portion of the continuing decrease in price for consumer tech products once they’ve been in the market for a while.

So it’s a good point, but a nuanced one.

UPDATE 2/5/14: Martin Wolf at FT nicely captures this in two sentences: “Its price was infinite. The fall from an infinite to a definite price is not reflected in the price indices.”
