Eric Posner: I think there could be some advantages of using Bitcoin over existing payment systems, but these advantages are not as obvious as they might seem. For example, probably the most compelling advantage is that Bitcoin transactions seem to be cheaper. Existing payment systems are often quite expensive, either because somebody effectively has a monopoly, there are a lot of government regulations that are costly to comply with, or the companies that offer these services provide certain protections that people want and are willing to pay for. In the case of Bitcoin as it stands now, these costs are largely avoided, at least to the extent that you can technically send bitcoins from one wallet to another wallet without incurring fees; no middlemen are required to do this. The problem is that most people will end up relying on intermediaries when they use bitcoin, in no small part due to security concerns around storing bitcoin on hard drives that can crash, be hacked, or, as in one famous case, be thrown away. Most people will buy bitcoins from exchanges and use bitcoin service providers like Coinbase or Bitpay to store their bitcoins and transfer money to somebody in another part of the country or the world. Then that person will maintain their bitcoins with a service provider and/or will convert the bitcoins back into the money they use. And perhaps the same or other intermediaries will provide insurance or protection from exchange rate volatility. When you throw in all of these things, the effective price of using bitcoin is going to be greater than zero. Is it going to be as much as it costs right now to use your credit card or a bank wire? Maybe not, but it is too soon to tell.
As excitement over Bitcoin as a new form of currency has met with strong pushback from the economics world, the smarter commentators have shifted focus to cryptocurrency as a new way to move money around on the Internet. But how would that really work? The first thing you hear in these discussions is that Bitcoin transactions bypass traditional credit card fees. My first question was why? Specifically, isn’t at least part of the point of those fees to cover necessary services like fraud protection that will need to be implemented with Bitcoin as well? I asked Bitcoin entrepreneur Jeremy Allaire about this in an interview for HBR, and he gave a plausible answer. Basically, with Bitcoin no single entity bears the cost of clearing the transaction or maintaining the network to do so. Companies like Allaire’s Circle still need to provide anti-theft protection, and so it’s far from certain that the economics for Bitcoin will be superior, but for the purposes of this piece I’ll grant his assumption. As we’ll see, things only get more complicated from there.
So my starting point is the assumption that a Bitcoin transaction is cheaper than a (non-cash) dollar transaction. Could you build a payment app that just uses Bitcoin to exchange dollars, along the lines of what Business Insider’s Joe Weisenthal describes, and if so, would it be cheaper? Here’s Weisenthal:
Bitcoin is fundamentally a way to make transactions in a fiat currency. If you want to sell me something for $850, I could pay you in cash, credit card, via PayPal, bank wire, or possibly Bitcoin. How many Bitcoins this transaction requires (currently it would be right around one) is a function of fluctuating Bitcoin prices, but essentially we’re carrying out a dollar-priced transaction and using the Bitcoin as the payment system.
You could build such an app, but you’d quickly run up against another kind of fee. If I want to buy $10 worth of something from a merchant with dollars, but using Bitcoin, the app needs to make two currency conversions. First, you need to take my $10 and convert it into Bitcoin, for which you’ll be charged a fee. And then you need to take the x Bitcoin that is transferred to the merchant and convert it back into dollars. So you effectively have two transactions here, each with a fee that is roughly comparable to a credit card transaction. The upshot is that you’re not really saving any money.
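To put rough numbers on that double conversion, here’s a minimal sketch. The fee rates are assumptions for illustration, not actual exchange or card-network pricing; per the argument above, each conversion is assumed to cost something in the same ballpark as a card fee.

```python
# Illustrative fee assumptions -- not real exchange or card-network pricing.
CONVERT_PCT, CONVERT_FLAT = 0.02, 0.25    # assumed dollar<->bitcoin conversion fee
CARD_PCT, CARD_FLAT = 0.029, 0.30         # often-quoted card processing rate

def convert_fee(amount):
    return amount * CONVERT_PCT + CONVERT_FLAT

def bitcoin_route_cost(amount):
    """Two conversions: buyer's dollars -> BTC, then merchant's BTC -> dollars."""
    return convert_fee(amount) * 2

def card_cost(amount):
    return amount * CARD_PCT + CARD_FLAT

print(round(bitcoin_route_cost(10.0), 2))  # 0.9
print(round(card_cost(10.0), 2))           # 0.59
```

On these assumed numbers the Bitcoin route costs more, simply because paying two conversion fees of card-like size can never beat paying one card fee.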
You might think that a way around this would be to use batching — as an example think of the way people put money onto Starbucks gift cards. I put on $50 which requires a one-time conversion fee to turn into Bitcoin, and then I make several purchases with Bitcoin over time. This does save you some money, since the Bitcoin transactions are cheaper than credit card transactions (per our assumption) and because converting all $50 into Bitcoin at once saves some money. (This is because interchange fees and currency fees both involve a mix of flat and percentage-based fees. So if I only do one fee-based transaction in the beginning and then do a bunch of cheap/free Bitcoin transactions, I’m saving a bit relative to the flat fee that would be assessed doing each transaction in dollars.)
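A quick sketch of that batching arithmetic, with assumed fee numbers (the 2.9% + $0.30 shape is a commonly cited card rate; the conversion fee is a guess):

```python
# Ten $5 purchases: pay card fees each time, or convert $50 to bitcoin once.
# All fee numbers are assumptions for illustration.
CARD_PCT, CARD_FLAT = 0.029, 0.30
CONVERT_PCT, CONVERT_FLAT = 0.02, 0.25

def card_total(purchases):
    # A flat fee is assessed on every single transaction.
    return sum(p * CARD_PCT + CARD_FLAT for p in purchases)

def batched_total(purchases):
    # One conversion fee up front; per the text's assumption, the
    # subsequent bitcoin transfers are roughly free.
    return sum(purchases) * CONVERT_PCT + CONVERT_FLAT

coffees = [5.0] * 10
print(round(card_total(coffees), 2))     # 4.45
print(round(batched_total(coffees), 2))  # 1.25
```

The savings come almost entirely from paying one flat fee instead of ten, which, as the next paragraph notes, doesn’t actually require Bitcoin at all.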
But there’s a problem here, too. Why not just scrap Bitcoin, and batch payments in dollars? The savings in the above case are really coming from lumping together transactions to minimize flat fees — but we don’t need Bitcoin to do that. Indeed, the mobile payment startup LevelUp does this, best I can tell, on a monthly basis. (An email from the company says it’s “to help local businesses save on card processing fees.”) Bitcoin does have the advantage of verifying at each transaction that the buyer actually has the money, so you don’t get to the end of a batch period and find out someone can’t pay. But the fact that LevelUp is using this approach suggests that the advantage might be quite small.
So we’re really back to square one, or at least close. Unless currency conversion fees are lower than interchange fees, I have trouble seeing how Bitcoin amounts to a cheaper way to move dollars around the Internet. In other words, if Bitcoin doesn’t work as a currency, I struggle to see how it will work as a payment protocol. Of course, it’s early days and all that. Mostly, I’d really like to read a detailed account of how Bitcoin-as-payment-protocol would work, so please point me in the right direction.
In the process of thinking all this through, I came up with an interesting hypothetical. Assume for a minute that the fixed supply of Bitcoin suggests its value will appreciate over time (and put aside volatility for a moment). Imagine you built a digital payment app that used Bitcoin but didn’t bill itself as being about Bitcoin at all.
Here’s how it would work: I put $20 onto the app, which behind the scenes is immediately exchanged for Bitcoin. But rather than telling me I have x Bitcoin to spend, you simply guarantee I can spend $20. I spend that money over time — say a month — during which the value of Bitcoin appreciates. Because of that, once I’ve spent my $20 there’s still some Bitcoin left over. And because this isn’t a Bitcoin app — I don’t know/care that that’s how you’re processing my payments — you just go ahead and pocket that residual to cover fees and for a profit.
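Here’s a minimal simulation of that residual. The price path (a steady climb from $800 to $1,000 over a month) and the spending pattern are invented purely for illustration:

```python
# Sketch of the "hidden Bitcoin" app: deposit dollars, spend dollars,
# and the operator keeps whatever BTC is left over if the price rises.
# The price path and spending schedule below are made up for illustration.

def residual_btc(deposit_usd, spend_events, price_path):
    """spend_events: list of (day_index, usd_amount); price_path: USD/BTC by day."""
    btc_balance = deposit_usd / price_path[0]   # convert deposit at day-0 price
    for day, usd in spend_events:
        # Each dollar purchase costs fewer BTC as the price appreciates.
        btc_balance -= usd / price_path[day]
    return btc_balance

prices = [800 + 200 * d / 29 for d in range(30)]     # $800 -> $1,000 over a month
spends = [(0, 5.0), (10, 5.0), (20, 5.0), (29, 5.0)]  # four $5 purchases
leftover = residual_btc(20.0, spends, prices)
print(round(leftover * prices[-1], 2))  # operator's residual, in dollars: 2.67
```

On these invented numbers the operator pockets a couple of dollars on a $20 deposit; with a falling price path the residual goes negative, which is the risk discussed next.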
Of course, the company behind the app needs to worry about the loss it would face if the price of Bitcoin goes down over that period. But if you believe there’s a fundamental deflationary bias in Bitcoin, that suggests that to the extent that it is used, its value will appreciate. Moreover, it may be advantageous to have firms rather than individuals taking on the risk around Bitcoin’s price volatility, in both directions. I don’t have to worry (much) since my dollars are guaranteed so long as the app company is solvent.
What really interests me in this scenario is the fact that the deflationary spiral that supposedly occurs when a currency appreciates over time is driven by hoarding. If you know your money will be worth lots more tomorrow, you’re unlikely to spend it. But in this case, I don’t really even know that I’m using Bitcoin so I’m unlikely to hoard it. So demand keeps ticking along. Could such a dynamic help mute the recessionary tendencies of a fixed supply currency?
No doubt there is a reason why this whole thing wouldn’t work; I look forward to hearing folks’ thoughts as to why not.
It’s been interesting to see the debate over Bitcoin start with currency, move on to payments, and now there are rumblings that its real use isn’t for either. Rather, cryptocurrency could be a way of marking digital goods as unique, making them in effect rival. Technology Review explains:
Or take digital art. Larry Smith, a partner at the business architecture consultancy The matix and an analyst with long experience in digital advertising and digital finance, asks us to “imagine digital items that can’t be reproduced.” If we attached a coin identifier to a digital image, Smith says, “we could now call that a unique, one-of-a-kind digital entity.” Media on the Internet—where unlimited copying and sharing has become a scourge to rights holders—would suddenly be provably unique, permanently identified, and attached to an unambiguous monetary value.
This would almost certainly be bad news for the Internet. The ability of users to copy digital content has put necessary pressure on rights holders to lower costs, change business models, and innovate. We live under a ludicrous intellectual property regime in which rights holders lobby their way to ever-extending copyright terms. Making it easier for rights holders to enforce copyright online would result in more expensive content and would take the pressure off of both improved digital distribution models and intellectual property reform. Put simply, the non-rivalry of digital goods is one of the things that has made the Internet such a boon for consumers. If cryptocurrency undoes that, it would be a shame.
Recall the classic utilitarian morality puzzle (via Wikipedia):
There is a runaway trolley barrelling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. Unfortunately, you notice that there is one person on the side track. You do not have the ability to operate the lever in a way that would cause the trolley to derail without loss of life (for example, holding the lever in an intermediate position so that the trolley goes between the two sets of tracks, or pulling the lever after the front wheels pass the switch, but before the rear wheels do). You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?
How should we program robots to answer this question? Specifically, what about self-driving cars? Should they be programmed to injure or kill their driver in order to save many others? The question is raised at minute three of this short video on robots and ethics. The whole video is worth your time.
It’s no secret that I’m a bit obsessive about my news habits and filters, manifested most obviously in constant tweaking of my RSS feeds but also in who I follow on Twitter, what email newsletters I allow into my inbox, etc. The challenge in these adjustments is always to strike a balance between drinking from a firehose and a strange kind of fear of missing out. Call it the Goldilocks news filter problem: how much is just right?
Today, the challenge is much more about avoiding the allure of too much and too often than of too late or too little. Like BuzzFeed’s Charlie Warzel, I recently went on a big unfollow spree on Twitter, and beyond that I use private lists to further filter out the noise. But more than that, I’m lately finding a real increased affinity for email, as well as an undiminished love for RSS. Specifically, I’m finding that daily is, in almost every case, a regular enough cadence for following the topics I care about.
Part of this is professional. Last summer I moved from a job that included following breaking news and writing about it as quickly as possible to one that is less dependent on the latest developments. But that’s just a piece of it. Even for topics that I don’t write about, I’m finding daily to be roughly the right approach.
Every morning I read three email newsletters and sort through 15 RSS feeds, mostly from media organizations, and another couple dozen feeds that are more infrequent from individual bloggers. Some of it I simply skim the headline and mark as read; some I open and read; some I save to Pocket for later.
Sure, I browse Twitter at points throughout the day, but less than I used to. Mostly, I’m finding, things can wait until the next day.
Last month, I wrote a piece at HBR about how humans and algorithms will collaborate, based on the writings of Tyler Cowen and Andrew McAfee. The central tension was whether that collaboration would be set up such that algorithms were providing an input for humans to make decisions, or whether human reasoning would be the input for algorithms to make them.
One thing I thought about in the process of writing that piece but didn’t include was the question of whether one of these two models offered humans a higher value role. In other words, are you more likely to be highly compensated when you’re the decider adding value on top of an algorithm, or when you’re providing input to one?
I was initially leaning toward the former, but I wasn’t sure and so didn’t raise the question in the post. But the more I think about it, the more it seems to me that there will be opportunities for highly paid and poorly paid (or even unpaid) contributions in both cases.
Here’s a quick outline of what I’m thinking:
It seems totally possible for the post-algorithm “decider” to be an extremely low level, poorly paid contribution. I’m imagining someone whose job is basically just to review algorithmic decisions and make sure nothing is totally out of whack. Think of someone on an assembly line responsible for quality control who pulls the cord if something looks amiss. Just because this position in the algorithmic example is closer to the final decision point doesn’t mean it will be high value or well paid.
Likewise, it’s totally possible to imagine pre-algorithm positions that are high value. Given that the aggregation of expert opinion can often produce a better prediction than any expert on his or her own, you can easily imagine these expert-as-algorithmic-input positions as being relatively high value.
Still, the onus is on the experts to truly prove useful in this scenario. Because if they’re not adding discernible value, waiting in the wings is the possibility for the algorithm to aggregate an even greater range of human judgment — say via social media — that could be done cheaply or even for free.
I’m not sure where this leaves us except to say that I don’t see much reason for us to be “rooting” for algorithms to be inputs to humans or vice versa. In all likelihood this is not the right question. The relevant question, and a harder one, is simply how do we apply human judgment in a way that enhances our increasingly impressive computational decision-making powers.
One of the most common responses to my post on middle class incomes was to point out the role of technological progress. If the average American family went back in time to 1989, I wrote, they’d make just as much money but work fewer hours to do it. But, some responded, they wouldn’t have iPhones. That isn’t meant to sound trivial, and as someone optimistic about technology I don’t consider it to be. Improvements in technology are an important piece of any conversation about progress. But do they change the story about middle class incomes?
Yes and no.
Short version: All of the data I included adjusted for inflation, which accounts for certain kinds of technological progress but not others. Some new technologies – like the iPhone – aren’t currently captured in that data. Others are. If new inventions like the iPhone could be included in common inflation measures, the incomes of the middle class would indeed look at least a bit higher.
Here’s the long version, starting with a short overview of inflation.
Measures of inflation track the price of goods over time, and although it’s technically an oversimplification, you can think of such measures – like the Consumer Price Index (CPI) – as a proxy for the cost of living. If the stuff you need to get by costs, in total, $100 per week today, but next year that same stuff costs $200 per week, you’d need to be making twice as much money just to be keeping up. So if you hadn’t gotten any raise over the course of that year, an inflation-adjusted (“real”) accounting of your income would say that your income dropped 50%. Inflation-adjusted income measures account for how much stuff costs.
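In code, the inflation adjustment is a one-liner; the numbers mirror the doubling example above:

```python
def real_income(nominal, cpi_then, cpi_now):
    """Express a nominal income in base-period dollars."""
    return nominal * cpi_then / cpi_now

# Cost of living doubles (index 100 -> 200) while nominal pay stays flat:
print(real_income(52000, 100, 200))  # 26000.0 -- purchasing power cut in half
```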
Prices don’t all change together of course, so the CPI uses a bunch of “baskets” of goods. Food is one part of that. Let’s say the price of apples goes up, but the price of bananas goes down. If those changes average out, from the CPI’s perspective, “prices” haven’t changed. (If this happened, you might choose to buy only bananas for a while, in order to take advantage of the low prices. So this is an example of when the CPI starts to diverge from cost of living. That’s called the substitution effect and it’s one of the big challenges to measuring inflation, but it’s a bit outside the scope of this post.)
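A toy two-good version of the basket, with made-up prices and equal weights, shows how offsetting moves net out to “no inflation”:

```python
def basket_index(prices, base_prices, weights):
    # Fixed-weight (Laspeyres-style) index, base period = 100.
    return 100 * sum(w * p / bp for p, bp, w in zip(prices, base_prices, weights))

base_prices = [1.00, 1.00]   # apples, bananas last year (made-up)
new_prices = [1.20, 0.80]    # apples up 20%, bananas down 20%
weights = [0.5, 0.5]
print(round(basket_index(new_prices, base_prices, weights), 1))  # 100.0
```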
In theory, technological improvements should be captured in measures of inflation. Say one of the things most people do is send letters, documents, and other information to each other. It used to require going to Staples, buying envelopes, paying to print, then paying for postage, etc. Now you can just email them from a relatively inexpensive computer in your home. The price of sending all this stuff, one of your regular life activities, just got cheaper. Inflation is about measuring prices, so a measure of inflation should capture this price decrease. If the inflation measure captures it, it would mean that inflation-adjusted income (like I used in my previous post) would capture the impact of tech.
But in practice, measures of inflation have a really hard time capturing new technologies. To see when inflation does and doesn’t capture technology, let’s go back to the food example.
The kind of technological change that inflation is relatively well set up to track is the kind that results in decreased prices for an existing good. Say a farmer comes up with a new way to grow apples and the result is that the exact same kind of apple you’re used to buying suddenly costs half as much as it used to. The CPI will capture that decrease, and so inflation-adjusted income will reflect the improvement.
But say an agricultural scientist invents some new health shake, unlike any food out there on the market, which provides all your daily calories and nutrients. This counts as a “new good” and inflation measures don’t really have any way to account for it. In practice, if a bunch of people start buying the health shake, after a while the Bureau of Labor Statistics will decide to add it to the CPI and start tracking changes to its price going forward, but this misses the value of the new invention in two respects.
The first, and simpler, problem is that the BLS only updates the CPI’s “baskets” every four years. And for some technologies, prices can drop a lot over that amount of time. So imagine the health shake debuts at $100 per serving, but four years later, by the time the BLS gets around to counting it, it’s going for $20 per serving. That price decrease will be missed.
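The size of the miss in that example is simple arithmetic:

```python
# The shake debuts at $100; by the time the BLS adds it to the basket
# it sells for $20. The index only sees changes from $20 onward.
launch_price, first_tracked_price = 100.0, 20.0
missed_decline = (launch_price - first_tracked_price) / launch_price
print(f"{missed_decline:.0%} of the price decline never enters the index")  # 80%
```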
The second issue is trickier. The very act of invention, if the new product is novel enough, is simply not accounted for at all in inflation statistics. Here’s how a report from The National Academies puts it:
Without an explicit decision to change the list of goods to be priced, standard indexing procedures will not pick up any of the effect of such newly introduced items on consumers’ living standards or costs…
…If significant numbers of new goods are continually invented and successfully marketed, an upward bias will be imparted to the overall price index, relative to an unqualified [Cost of Living Index]…
…Proponents of more traditional price index methodologies argue that it is a perversion of the language to argue that the effect of, say, the introduction of cell phones or the birth control pill is to reduce the price level, a result that comes from confusing the concept of a price level with that of the cost of living. Their position is tempered somewhat by the realization that, outside of price measurement, there is nowhere else in the national accounts for such product quality improvements to be included and, as Nordhaus (1998) and others have argued, real growth in the economy is thereby understated.
How would the introduction of a brand new good be translated into a change in price? The idea here is that sometimes a new good comes to market at a price lower than some consumers would have been willing to pay. Our health shake example comes to market at $100 per serving, but perhaps some consumers would have been willing to pay $200 per serving; they just never got the chance, because the technologies that make the shake possible hadn’t yet been invented. This difference represents value that inflation measures won’t catch. (An interesting note for innovation econ nerds: this is less likely to be a problem to the extent you see technological innovation as a demand or “pull” driven process. It’s really supply shocks that will cause big problems for inflation measures.) There are econometric techniques that some experts believe could be used to capture this value, but they are complex, controversial, and not yet in use.
To sum up, here’s how to think about it: when Amazon uses better software to make retail more efficient and therefore makes a bunch of consumer products cheaper, that’s captured in our most common measure of inflation. But when a radically new consumer product — like the iPhone — is introduced, some portion of the new value will go uncounted. If the iPhone gets cheaper over the first few years before it is incorporated into the CPI, that value will be lost. But once it is included, improvements in technology that make the iPhone cheaper will be captured.
The result is that inflation-adjusted income measures do fail to account for certain kinds of technological progress. How big is that bias? Best I can tell, we don’t really know. Some have suggested it is sizable, but there is no consensus.
So as for the response — sure, middle class incomes were the same a decade or two ago, for fewer hours worked, but now we have iPhones — it is on to something. It’s perfectly reasonable to point out that certain new tech products are available now and weren’t then, and that income data doesn’t fully capture that. But be careful with this argument. It’s not all new tech that goes un-captured. Lots of the behind-the-scenes increases in efficiency due to tech that result in lower consumer prices are captured, as is at least a portion of the continuing decreases in price for consumer tech products once they’ve been in the market for a while.
So it’s a good point, but a nuanced one.
UPDATE 2/5/14: Martin Wolf at FT nicely captures this in two sentences: “Its price was infinite. The fall from an infinite to a definite price is not reflected in the price indices.”
Depending on who you ask, the incomes of the American middle class over the past few decades have either a) risen only a little, b) stagnated (i.e., stayed flat), or c) declined. When President Obama declared in his State of the Union speech that family incomes had “barely budged” from 1979 to 2007, The Washington Post called it inaccurate, noting that median household income increased substantially over that period. And yet barely a day goes by without a story that references stagnating wages for the middle class.
So which is it?
The one thing everyone agrees on is that the rich are getting richer much, much faster than anyone else. And so in one sense, it doesn’t much matter if the answer is (a), (b), or (c). In any case, rising inequality represents a gross misallocation of the nation’s wealth. Still, the possibility that the average American is worse off economically than one or two generations ago makes the issue feel all the more urgent.
Unfortunately, the seemingly simple question of whether Americans are making more money today than in decades past is a bit tricky. The answer depends on the timeframe and the measure.
Short version: The average American family was making modest gains in income over the past few decades, but was working longer hours to do it. Then the recession happened and set the average family back 10 to 20 years.
Now here’s the full story.
The easiest place to start is household “market income”, which just means how much money a household makes before taxes or government transfers are counted. (Importantly, employer-based health care benefits are included in this measure.) Here, via the Congressional Budget Office, is the snapshot over time:
The thing to note here is that the median household income rose nearly 20 percent between 1979 and 2007. That the mean income rose faster hints at the fact that the rich got richer at a much faster rate (see the chart at the top), but nonetheless, seen at this level the story looks like one of modest progress.
Things actually look better still when you consider the impact of taxes and transfers, shown here again via the CBO:
Once taxes and government transfers are accounted for, the median American’s income has risen more than 30 percent from 1979 to 2007, making the story of minor progress a bit more progress-y. (It’s this after-tax measure, accounting for taxes and government transfers, that you saw in the chart of all income inequality at the top of this post.)
There’s one more upside to note: since the average household is smaller than a few decades ago, these gains are slightly larger when size of family is accounted for. Unfortunately, that’s the beginning, not the end, of the story.
It turns out that the median household’s income has only increased because that household has been working more. The New York Times summarizes data from Brookings from 1975 to 2009:
Median wages for two-parent families have grown 23 percent since 1975, after adjusting for inflation. The collective number of hours worked by both parents over the course of a year, however, has risen 26 percent. That means their wages haven’t even grown as much as their working hours would imply they should.
The increase in hours worked is largely the impact of women entering the workforce. To make that point a bit more clear, we can look at this chart from Brookings:
As you can see, the story is one of stagnation since the ’70s, with a modest boost in the late ’90s. This is what stories about “stagnant wages” are talking about. The average American doesn’t make much more for his or her time than in the 1970s. To bring in more income requires working longer hours.
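Plugging in the two Brookings growth rates quoted above shows that pay per hour actually slipped slightly:

```python
# Median two-parent family income up 23% since 1975; hours worked up 26%
# (figures quoted above). Income per hour worked:
income_growth, hours_growth = 1.23, 1.26
hourly_change = income_growth / hours_growth - 1
print(f"{hourly_change:+.1%}")  # -2.4%
```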
But here’s where it goes from depressing to downright infuriating. That modest increase in household income that the median family earned by working longer hours? Well, not surprisingly, the Great Recession pretty much wiped it out:
So there you have it. Wages are flat, incomes were up but only because of more hours worked, and then got hammered by the recession. If the average American family could take a time machine back to 1989 they’d make just as much money, and would work fewer hours to make it.
The typical argument as to why we can’t do anything to fix this claims that intervening would jeopardize economic growth. Even if that were true, what’s the good of growth if it doesn’t make anyone richer except the rich? And let’s be clear: that is what economic growth has done.
Here’s where income growth has gone from 1979 to 2007:
But remember: that was the pre-recession distribution. It’s only gotten worse.
The most intuitive way to structure a paywall with respect to premium content — like, say, a longform reported magazine piece or a Snowfall-style multimedia feature — is to offer the cheaper content for free and put the premium stuff behind the gate. I say intuitive simply because it costs you more to produce that stuff; it makes sense, to the extent you’ve decided to charge at all, that you don’t give it away.
But recently I’ve been thinking about the argument for a simpler metered approach where all content counts equally, an argument that came to mind thanks to this quote from Nate Silver (who wasn’t talking about paywalls at all). Here he is via Nieman Lab:
On balancing features and blogging-style analysis

We see them as two related, familial, but separate content silos. From a practical economic standpoint, one of the wonderful things about blogs, as they were originally invented, was you had relatively low transaction cost for producing a blog post. Not that it doesn’t have to be high quality — but you’re not necessarily spending as much of an editor’s time on it, you’re not going through multiple iterations. It’s more thinking about things in real time.

So in some ways, we want to, on our blog, get back more to what we think are the core differentiating values of blogging, and not this kind of in-between space a lot of news organizations have wound up in where everything became called a blog, and then it became unfashionable, so nothing gets called a blog.

We do make a distinction based mostly on how quickly the content is turned out. What we call a feature is something where it’s assigned, generally in advance, and goes through at least one, maybe multiple rounds of edits.

A blog is something which still has to be very good — and it’s as hard, relatively, to hire bloggers as it is to hire feature writers. It’s something that might get a quick read, and maybe has a little bit more voice, but also saying “this is my thinking in real time,” or my work in progress. How we’ll flesh that out exactly in practice, I’m not sure, but I feel like there is an important distinction to be made between the two.
This got me thinking about the awesomeness of truly good blogging, the way it makes you want to check in every 10 minutes to see if the author has something new to say. It’s why I still want to read everything Kevin Drum, Matt Yglesias, or Tyler Cowen writes.
Now here’s Silver on balancing between loyal readers and broader traffic:
I think with almost any web product you have two types of audience. You have your core, everyday readers and then you have the people you reach out to from time to time. I think that having the right content mix, where you can have big spikes in content by doing something interesting and different from time to time, but also making sure that people who are reading the site every day feel they’re getting a good healthy breakfast, lunch, and dinner everyday, full of FiveThirtyEight content.
All of which made me suddenly reconsider my intuition on paywalls and premium content. I hear Silver making the case for a metered model that treats everything equally. The high quality stuff can “travel” on social, reaching readers who otherwise wouldn’t stop by, and because they haven’t used up their content quota, they can view it for free. It’s essentially a loss leader that attempts to draw in more regular readers. And what those devotees are paying for isn’t high production value or in depth reporting so much as immediacy and consistency. They want to read all (or lots of) what you put out.
We see both models today: The New York Times and The Washington Post are metered; The New Yorker makes it harder to get to its premium magazine content than to its blogs. But when I think about my own willingness to pay (or lack thereof) the metered approach strikes me as a bit more plausible, because it pulls out all the stops to build an affinity to the brand. Put another way, you’re making it extra difficult to gain paying customers when you put your best products out of reach.
I recently read Tyler Cowen’s latest book Average is Over, and I’d recommend it to anyone thinking about technology and the future of the economy. It’s a highly readable vision of what the coming age of ubiquitous intelligent machines will mean for workers and the economy. Here’s a bit from Chapter 1 that captures Cowen’s thinking:
Workers more and more will come to be classified into two categories. The key questions will be: Are you good at working with intelligent machines or not? Are your skills a complement to the skills of the computer, or is the computer doing better without you? … If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch. Ever more people are starting to fall on one side of the divide or the other. That’s why average is over.
To be clear: the book is not about whether this is a good or bad thing, or whether its results will be positive or negative. But his articulation of what the world will look like is bleak:
We will move from a society based on the pretense that everyone is given an okay standard of living to a society in which people are expected to fend for themselves much more than they do now. I imagine a world where, say, 10 to 15 percent of the citizenry is extremely wealthy and has fantastically comfortable and stimulating lives, the equivalent of current-day millionaires albeit with better health care.
Much of the rest of the country will have stagnant or maybe even falling wages in dollar terms, but a lot more opportunities for cheap fun and also cheap education. Many of these people will live quite well, and those will be the people who have the discipline to benefit from all the free or near-free services modern technology has made available. Others will fall by the wayside.
(You can read more about what Cowen thinks this will do to U.S. politics in this excerpt at Politico.)
If we accept for the moment that Cowen’s vision of how machine intelligence will transform the economy is basically right, how might we avoid this sad state of political and distributional affairs?
Enter the minimum income. As The New York Times recently explained:
Every month, every Swiss person would receive a check from the government, no matter how rich or poor, how hardworking or lazy, how old or young. Poverty would disappear. Economists, needless to say, are sharply divided on what would reappear in its place — and whether such a basic-income scheme might have some appeal for other, less socialist countries too.
(You can read Cowen on some of the shortcomings here.)
I see a few reasons why a guaranteed minimum income would fit nicely with the future Cowen describes.
1. Supplement the incomes of those unable to compete in the labor force. This is obvious. But the guaranteed minimum income strikes me as a way to maintain the notion of a guaranteed standard of living for all. Moreover, as intelligent machines put pressure on the labor market, tying that standard to work — as we do through the minimum wage — may make less sense.
2. Incentives to work would matter less than they do today. As the Times notes, one of the biggest concerns around a minimum income is that it would serve as a disincentive to work. But that should matter much less in the world Cowen envisions, where unskilled labor is largely displaced by machines. In that world, the fact that some citizens opt not to work would matter less to GDP. Moreover, it would be unlikely to affect the motivations of higher-skilled workers, who would set out to earn far more than the guaranteed income provides.* This logic applies both to citizens working despite the option of a guaranteed minimum income and to the disincentive of higher marginal tax rates on the wealthy to support such a program. Don't buy this? It would be even more true given #3.
3. Increased cultural emphasis on self-motivation will already be necessary. The world Cowen describes prizes self-motivation above almost everything else. If you’re motivated, you’ll take advantage of cheap education to work your way into the most productive echelons of the labor market. In such a world, a cultural focus on increasing self-motivation would be extremely helpful. Moreover, such a cultural emphasis would serve to bolster my arguments in #2, undercutting the argument that a guaranteed minimum income disincentivizes work. In other words, we would accept that financial incentives to avoid work existed, but overcome them in part by becoming a society obsessed with promoting curiosity-based learning and a quest for mastery or “flow.”
4. Some slack in the economy will be necessary to promote art and entrepreneurship. In this hyper-efficient, ultra-competitive world, the creation of a strong safety net would arguably be even more necessary to promote things like entrepreneurship and the arts. Entrepreneurship is mentioned as an argument for the guaranteed minimum income in the Times piece, and I think it is a strong argument in two senses. First, a stronger safety net helps to de-risk entrepreneurship; if you forgo a higher income to found a startup and then fail, you'll at least fall back on some level of comfort. Second, a minimum income would help to subsidize entrepreneurs' incubation period, as already happens through "entrepreneur-in-residence" programs. If entrepreneurs can eat and live somewhat comfortably while working on their idea (but before it is at the point of generating revenue or being attractive to investors), they'll be more likely to take the plunge.
As for the arts, the tight labor market envisioned in Cowen's book would put even more downward pressure on the wages of artists. Huge swaths of those without the skills to succeed in complementing machines would seek to become musicians, actors, painters, and so on, bidding down wages in those fields. A minimum income would make it possible for artists to do their work.
For all these reasons, I see a guaranteed minimum income as a natural fit with the world Cowen describes. To be clear, I’m not saying I think we’ll necessarily get that world, or that a minimum income is actually a good idea. But as we ponder a world of machine intelligence and a bifurcated labor market, it’s something to at least consider.
Image via Wikipedia
*Even today, I would argue that the most productive workers are largely not motivated by money. Rather, they’re motivated by status, curiosity, and a sense of mastery. To the extent that money is a motivator, it’s largely as a substitute for status.