Feb 26, 2014
 

It’s been interesting to see the debate over Bitcoin start with currency, move on to payments, and now there are rumblings that its real use isn’t for either. Rather, cryptocurrency could be a way of marking digital goods as unique, making them in effect rival. Technology Review explains:

Or take digital art. Larry Smith, a partner at the business architecture consultancy Thematix and an analyst with long experience in digital advertising and digital finance, asks us to “imagine digital items that can’t be reproduced.” If we attached a coin identifier to a digital image, Smith says, “we could now call that a unique, one-of-a-kind digital entity.” Media on the Internet—where unlimited copying and sharing has become a scourge to rights holders—would suddenly be provably unique, permanently identified, and attached to an unambiguous monetary value.
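Mechanically, the idea Smith describes can be sketched in a few lines. This is a purely illustrative sketch, not any real system: the function name and token format are invented, and actual uniqueness would depend on a Bitcoin-style ledger preventing the token ID from being reused.

```python
import hashlib

# Illustrative sketch of the "coin identifier" idea: pair a digital work's
# content hash with a unique token ID, so one specific copy can be declared
# "the" unique one. (Names here are hypothetical, not a real protocol.)
def register_unique_copy(file_bytes: bytes, token_id: str) -> dict:
    """Return a record marking one copy of a digital work as unique."""
    content_hash = hashlib.sha256(file_bytes).hexdigest()
    return {"token_id": token_id, "content_hash": content_hash}

record = register_unique_copy(b"...image bytes...", token_id="coin-0001")
print(record["token_id"], record["content_hash"][:16])
```

The hash only identifies the content; the scarcity would come entirely from the ledger tracking who holds the token.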

This would almost certainly be bad news for the Internet. The ability of users to copy digital content has put necessary pressure on rights holders to lower costs, change business models, and innovate. We live under a ludicrous intellectual property regime in which rights holders lobby their way to ever-extending copyright terms. Making it easier for rights holders to enforce copyright online would result in more expensive content and would take all the pressure off both for improved digital distribution models and intellectual property reform. Put simply, the non-rivalry of digital goods is one of the things that has made the Internet such a boon for consumers. If cryptocurrency undoes that, it would be a shame.

Feb 22, 2014
 

Recall the classic utilitarian morality puzzle (via Wikipedia):

There is a runaway trolley barrelling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. Unfortunately, you notice that there is one person on the side track. You do not have the ability to operate the lever in a way that would cause the trolley to derail without loss of life (for example, holding the lever in an intermediate position so that the trolley goes between the two sets of tracks, or pulling the lever after the front wheels pass the switch, but before the rear wheels do). You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

How should we program robots to answer this question? Specifically, what about self-driving cars? Should they be programmed to injure or kill their driver in order to save many others? The question is raised at minute three of this short video on robots and ethics. The whole video is worth your time.
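For what it’s worth, the purely utilitarian answer is trivial to write down in code, which is part of what makes the question so uncomfortable. This is an illustrative sketch only, not a claim about how any real self-driving car is or should be programmed.

```python
# A strictly utilitarian rule: pick whichever action is expected to kill
# the fewest people. (Purely illustrative; not any real vehicle's policy.)
def choose_action(expected_deaths: dict) -> str:
    """expected_deaths maps an action name to the deaths it would cause."""
    return min(expected_deaths, key=expected_deaths.get)

# The classic setup: doing nothing kills five, pulling the lever kills one.
print(choose_action({"do_nothing": 5, "pull_lever": 1}))  # pull_lever
```

The discomfort, of course, lies in everything this sketch leaves out: who the one person is, whether the car’s own passengers count differently, and who bears responsibility for the rule.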

Jan 30, 2014
 

It’s no secret that I’m a bit obsessive about my news habits and filters, manifested most obviously in constant tweaking of my RSS feeds but also in who I follow on Twitter, what email newsletters I allow into my inbox, etc. The challenge in these adjustments is always to strike a balance between drinking from a firehose and a strange kind of fear of missing out. Call it the Goldilocks news filter problem: how much is just right?

Today, the challenge is much more about avoiding the allure of too much and too often rather than too late or too little. Like BuzzFeed’s Charlie Warzel, I recently went on a big unfollowing spree on Twitter, and even beyond that I use private lists to further filter out the noise. But more and more, I’m finding a real affinity for email, as well as an undiminished love for RSS. Specifically, I’m finding that daily is, in almost every case, a frequent enough cadence for following the topics I care about.

Part of this is professional. Last summer I moved from a job that included following breaking news and writing about it as quickly as possible to one that is less dependent on the latest developments. But that’s just a piece of it. Even for topics that I don’t write about, I’m finding daily to be roughly the right approach.

Every morning I read three email newsletters and sort through 15 RSS feeds, mostly from media organizations, and another couple dozen feeds that are more infrequent from individual bloggers. Some of it I simply skim the headline and mark as read; some I open and read; some I save to Pocket for later.

Sure, I browse Twitter at points throughout the day, but less than I used to. Mostly, I’m finding, things can wait until the next day.

Jan 26, 2014
 

Last month, I wrote a piece at HBR about how humans and algorithms will collaborate, based on the writings of Tyler Cowen and Andrew McAfee. The central tension was whether that collaboration would be set up such that algorithms were providing an input for humans to make decisions, or whether human reasoning would be the input for algorithms to make them.

One thing I thought about in the process of writing that piece but didn’t include was the question of whether one of these two models offered humans a higher value role. In other words, are you more likely to be highly compensated when you’re the decider adding value on top of an algorithm, or when you’re providing input to one?

I was initially leaning toward the former, but I wasn’t sure and so didn’t raise the question in the post. But the more I think about it, the more it seems to me that there will be opportunities for highly paid and poorly paid (or even unpaid) contributions in both cases.

Here’s a quick outline of what I’m thinking:

It seems totally possible for the post-algorithm “decider” to be an extremely low-level, poorly paid contribution. I’m imagining someone whose job is basically just to review algorithmic decisions and make sure nothing is totally out of whack. Think of someone on an assembly line responsible for quality control who pulls the cord if something looks amiss. Just because this position in the algorithmic example is closer to the final decision point doesn’t mean it will be high value or well paid.

Likewise, it’s totally possible to imagine pre-algorithm positions that are high value. Given that the aggregation of expert opinion can often produce a better prediction than any expert on his or her own, you can easily imagine these expert-as-algorithmic-input positions as being relatively high value.
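The aggregation claim is easy to demonstrate with a toy simulation. In this illustrative sketch (all numbers invented), nine equally noisy “experts” each estimate a true value, and averaging their independent guesses produces a much smaller typical error than any one expert alone.

```python
import random

# Toy simulation of why aggregated expert opinion can beat any individual
# expert: averaging independent noisy estimates cancels out much of the noise.
random.seed(0)
TRUE_VALUE = 100.0
NUM_EXPERTS, NUM_TRIALS = 9, 500

# Each expert's estimate is the truth plus independent Gaussian noise.
experts = [
    [TRUE_VALUE + random.gauss(0, 10) for _ in range(NUM_TRIALS)]
    for _ in range(NUM_EXPERTS)
]

# Typical error of one expert acting alone.
single_error = sum(abs(e - TRUE_VALUE) for e in experts[0]) / NUM_TRIALS

# Typical error of the nine-expert average.
aggregate_error = sum(
    abs(sum(trial) / NUM_EXPERTS - TRUE_VALUE) for trial in zip(*experts)
) / NUM_TRIALS

print(f"one expert alone: {single_error:.1f}")
print(f"nine averaged:    {aggregate_error:.1f}")
```

The averaged error shrinks roughly with the square root of the number of experts, which is exactly why the experts-as-input role can carry real value.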

Still, the onus is on the experts to truly prove useful in this scenario. Because if they’re not adding discernible value, waiting in the wings is the possibility for the algorithm to aggregate an even greater range of human judgment — say via social media — that could be done cheaply or even for free.

I’m not sure where this leaves us except to say that I don’t see much reason for us to be “rooting” for algorithms to be inputs to humans or vice versa. In all likelihood this is not the right question. The relevant question, and a harder one, is simply how we apply human judgment in a way that enhances our increasingly impressive computational decision-making powers.

 

Jan 11, 2014
 

Steve Jobs Announces the iPhone in 2007

One of the most common responses to my post on middle class incomes was to point out the role of technological progress. If the average American family went back in time to 1989, I wrote, they’d make just as much money but work fewer hours to do it. But, some responded, they wouldn’t have iPhones. That isn’t meant to sound trivial, and as someone optimistic about technology I don’t consider it to be. Improvements in technology are an important piece of any conversation about progress. But do they change the story about middle class incomes?

Yes and no.

Short version: All of the data I included was adjusted for inflation, which accounts for certain kinds of technological progress but not others. Some new technologies – like the iPhone – aren’t currently captured in that data. Others are. If new technological inventions like the iPhone could be included in common inflation measures, the incomes of the middle class would indeed look at least a bit higher.

Here’s the long version, starting with a short overview of inflation.

Measures of inflation track the price of goods over time, and although it’s technically an oversimplification, you can think of such measures – like the Consumer Price Index (CPI) – as a proxy for the cost of living. If the stuff you need to get by costs, in total, $100 per week today, but next year that same stuff costs $200 per week, you’d need to be making twice as much money just to keep up. So if you hadn’t gotten any raise over the course of that year, an inflation-adjusted (“real”) accounting of your income would say that your income dropped 50%. Inflation-adjusted income measures account for how much stuff costs.
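That example translates directly into arithmetic. Here is a minimal sketch of the calculation:

```python
# The example above in code: prices double while nominal income stays flat,
# so inflation-adjusted ("real") income falls by half.
def real_income(nominal_income: float, cpi_then: float, cpi_now: float) -> float:
    """Express a nominal income in base-period dollars."""
    return nominal_income * cpi_then / cpi_now

weekly_income = 1000.0                  # nominal income, unchanged (no raise)
basket_then, basket_now = 100.0, 200.0  # the same stuff doubles in price

print(real_income(weekly_income, basket_then, basket_now))  # 500.0, a 50% drop
```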

Prices don’t all change together of course, so the CPI uses a bunch of “baskets” of goods. Food is one part of that. Let’s say the price of apples goes up, but the price of bananas goes down. If those changes average out, from the CPI’s perspective, “prices” haven’t changed. (If this happened, you might choose to buy only bananas for a while, in order to take advantage of the low prices. So this is an example of when the CPI starts to diverge from the cost of living. That’s called the substitution effect and it’s one of the big challenges to measuring inflation, but it’s a bit outside the scope of this post.)

In theory, technological improvements should be captured in measures of inflation. Say one of the things most people do is to send letters, documents, and other information to each other. It used to require going to Staples, buying envelopes, paying to print, then paying for postage, etc. Now you can just email them from a relatively inexpensive computer in your home. The price of sending all this stuff, one of your regular life activities, just got cheaper. Inflation is about measuring prices, so a measure of inflation should capture this price decrease. If the inflation measure captures it, it would mean that inflation-adjusted income (like I used in my previous post) would capture the impact of tech.

But in practice, measures of inflation have a really hard time capturing new technologies. To see when inflation does and doesn’t capture technology, let’s go back to the food example.

The kind of technological change that inflation is relatively well set up to track is the kind that results in decreased prices for an existing good. Say a farmer comes up with a new way to grow apples and the result is that the exact same kind of apple you’re used to buying suddenly costs half as much as it used to. The CPI will capture that decrease, and so inflation-adjusted income will reflect the improvement.

But say an agricultural scientist invents some new health shake, unlike any food out there on the market, which provides all your daily calories and nutrients. This counts as a “new good” and inflation measures don’t really have any way to account for it. In practice, if a bunch of people start buying the health shake, after a while the Bureau of Labor Statistics will decide to add it to the CPI and start tracking changes to its price going forward, but this misses the value of the new invention in two respects.

The first, and simpler, problem is that the BLS only updates the CPI’s “baskets” every four years. And for some technologies, prices can drop a lot over that amount of time. So imagine the health shake debuts at $100 per serving, but four years later, by the time the BLS gets around to counting it, it’s going for $20 per serving. That price decrease will be missed.
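In the hypothetical shake example, the size of that hole is easy to see. This sketch just restates the numbers above:

```python
# The shake example in numbers: the price drop that happens before the good
# enters the CPI basket is never recorded by the index.
debut_price = 100.0        # price when the shake launches
price_at_inclusion = 20.0  # price four years later, when the BLS adds it

missed_decline = (price_at_inclusion - debut_price) / debut_price
print(f"{missed_decline:.0%}")  # -80%, a price drop the index never sees
```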

The second issue is trickier. The very act of invention, if the new product is novel enough, is simply not accounted for at all in inflation statistics. Here’s how a report from The National Academies puts it:

Without an explicit decision to change the list of goods to be priced, standard indexing procedures will not pick up any of the effect of such newly introduced items on consumers’ living standards or costs…

…If significant numbers of new goods are continually invented and successfully marketed, an upward bias will be imparted to the overall price index, relative to an unqualified [Cost of Living Index]…

…Proponents of more traditional price index methodologies argue that it is a perversion of the language to argue that the effect of, say, the introduction of cell phones or the birth control pill is to reduce the price level, a result that comes from confusing the concept of a price level with that of the cost of living. Their position is tempered somewhat by the realization that, outside of price measurement, there is nowhere else in the national accounts for such product quality improvements to be included and, as Nordhaus (1998) and others have argued, real growth in the economy is thereby understated.

How would the introduction of a brand new good be translated into a change in price? The idea here is that sometimes a new good comes to market at a price lower than some consumers would have been willing to pay. Our magic shake example comes to market at $100 per serving, but perhaps some consumers would have been willing to pay $200 per serving for it; they just never got the chance because the technologies that make it possible hadn’t yet been invented. This difference represents value that inflation measures won’t catch. (An interesting note for innovation econ nerds: this is less likely to be a problem to the extent you see technological innovation as a demand or “pull” driven process. It’s really supply shocks that will cause big problems for inflation measures.) There are econometric techniques that some experts believe could be used to capture this value, but they are complex, controversial, and not yet in use.
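In the shake example, the uncounted value is simply the gap between the (hypothetical) reservation price and the market price:

```python
# The new-goods gap: a consumer who would have paid up to $200 (a
# hypothetical reservation price) gets the shake for $100, and no price
# index records that $100 of value.
reservation_price = 200.0  # the most this consumer would have paid
market_price = 100.0       # the price the shake actually debuts at

uncounted_value = reservation_price - market_price
print(uncounted_value)  # 100.0
```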

To sum up, here’s how to think about it: when Amazon uses better software to make retail more efficient and therefore makes a bunch of consumer products cheaper, that’s captured in our most common measure of inflation. But when a radically new consumer product — like the iPhone — is introduced, some portion of the new value will go uncounted. If the iPhone gets cheaper over the first few years before it is incorporated into the CPI, that value will be lost. But once it is included, improvements in technology that make the iPhone cheaper will be captured.

The result is that inflation-adjusted income measures do fail to account for certain kinds of technological progress. How big is that bias? Best I can tell, we don’t really know. Some have suggested it is sizable, but there is no consensus.

So as for the response — sure, middle class incomes were the same a decade or two ago, for fewer hours worked, but now we have iPhones — it is on to something. It’s perfectly reasonable to point out that certain new tech products are available now and weren’t then, and that income data doesn’t fully capture that. But be careful with this argument. It’s not all new tech that goes un-captured. Lots of the behind-the-scenes increases in efficiency due to tech that result in lower consumer prices are captured, as is at least a portion of the continuing decreases in price for consumer tech products once they’ve been in the market for a while.

So it’s a good point, but a nuanced one.

UPDATE 2/5/14: Martin Wolf at FT nicely captures this in two sentences: “Its price was infinite. The fall from an infinite to a definite price is not reflected in the price indices.”

Jan 9, 2014
 

Depending on who you ask, the incomes of the American middle class over the past few decades have either a) risen only a little, b) stagnated, i.e., stayed flat, or c) declined. When President Obama declared in his State of the Union speech that family incomes had “barely budged” from 1979 to 2007, The Washington Post called it inaccurate, noting that median household income increased substantially over that period. And yet barely a day goes by without a story that references stagnating wages for the middle class.

So which is it?

The one thing everyone agrees on is the fact that the rich are getting richer much, much faster than anyone else. And so in one sense, it doesn’t much matter whether the answer is (a), (b), or (c). In any case, rising inequality represents a gross misallocation of the nation’s wealth. Still, the possibility that the average American is worse off economically than one or two generations ago makes the issue feel all the more urgent.

Unfortunately, the seemingly simple question of whether Americans are making more money today than in decades past is a bit tricky. The answer depends on the timeframe and the measure.

Short version: The average American family was making modest gains in income over the past few decades, but was working longer hours to do it. Then the recession happened and set the average family back 10 to 20 years.

Now here’s the full story.

The easiest place to start is household “market income”, which just means how much money a household makes before taxes or government transfers are counted. (Importantly, employer-based health care benefits are included in this measure.) Here, via the Congressional Budget Office, is the snapshot over time:

[Chart: household market income over time, via CBO]

The thing to note here is that the median household income rose nearly 20 percent between 1979 and 2007. That the mean income rose faster hints at the fact that the rich got richer at a much faster rate (see the chart at the top), but nonetheless, seen at this level the story looks like one of modest progress.

Things actually look better still when you consider the impact of taxes and transfers, shown here again via the CBO:

[Chart: median after-tax income, via CBO]

Once taxes and government transfers are accounted for, the median American’s income has risen more than 30 percent from 1979 to 2007, making the story of minor progress a bit more progress-y. (It’s this after-tax measure, accounting for taxes and government transfers, that you saw in the chart of all income inequality at the top of this post.)

There’s one more upside to note: since the average household is smaller than a few decades ago, these gains are slightly larger when size of family is accounted for. Unfortunately, that’s the beginning, not the end, of the story.

It turns out that the median household’s income has only increased because that household has been working more. The New York Times summarizes data from Brookings from 1975 to 2009:

Median wages for two-parent families have grown 23 percent since 1975, after adjusting for inflation. The collective number of hours worked by both parents over the course of a year, however, has risen 26 percent. That means their wages haven’t even grown as much as their working hours would imply they should.
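Those two numbers imply that income per hour worked actually fell slightly, which a line of arithmetic makes explicit:

```python
# The Brookings figures as arithmetic: income up 23%, hours up 26%, so
# income per hour worked is slightly below its 1975 level.
income_growth, hours_growth = 0.23, 0.26

hourly_ratio = (1 + income_growth) / (1 + hours_growth)
print(f"{hourly_ratio - 1:+.2%}")  # about -2.38% per hour worked
```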

The increase in hours worked is largely the impact of women entering the workforce. To make that point a bit more clear, we can look at this chart from Brookings:

If modest increases in household income for the median family are the result of more hours worked, what do wages look like on an hourly basis? For that we can turn to the Economic Policy Institute:


As you can see, the story is one of stagnation since the 70’s, with a modest boost in the late 90’s. This is what stories about “stagnant wages” are talking about. The average American doesn’t make much more for his or her time than in the 1970’s. To bring in more income requires working longer hours.

But here’s where it goes from depressing to downright infuriating. That modest increase in household income that the median family earned by working longer hours? Well, not surprisingly, the Great Recession pretty much wiped it out:

[Chart: median household income since the recession]
As for the recovery? Well, in its first year, from 2009 to 2010, the top 1% captured 93% of the income gains, according to Stanford:
What about since then? According to Pew, the top 7% saw its wealth (not income) rise 28 percent from 2009 to 2011 while the bottom 93 percent of Americans saw their net worth decrease by 4 percent.

So there you have it. Wages are flat, incomes were up but only because of more hours worked, and then got hammered by the recession. If the average American family could take a time machine back to 1989 they’d make just as much money, and would work fewer hours to make it.

The typical argument as to why we can’t do anything to fix this claims that intervening would jeopardize economic growth. Even if that were true, what’s the good of growth if it doesn’t make anyone richer except the rich? And let’s be clear: that is what economic growth has done.

Here’s where income growth has gone from 1979 to 2007:


But remember: that was the pre-recession distribution. It’s only gotten worse.

 

Nov 20, 2013
 

The most intuitive way to structure a paywall with respect to premium content — like, say, a longform reported magazine piece or a Snowfall-style multimedia feature — is to offer the cheaper content for free and put the premium stuff behind the gate. I say intuitive simply because it costs you more to produce that stuff; it makes sense, to the extent you’ve decided to charge at all, that you don’t give it away.

But recently I’ve been thinking about the argument for a simpler metered approach in which all content counts equally, an argument that came to mind thanks to this quote from Nate Silver (who wasn’t talking about paywalls at all). Here he is, via Nieman Lab:

On balancing features and blogging-style analysis

We see them as two related, familial, but separate content silos. From a practical economic standpoint, one of the wonderful things about blogs, as they were originally invented, was you had relatively low transaction cost for producing a blog post. Not that it doesn’t have to be high quality — but you’re not necessarily spending as much of an editor’s time on it, you’re not going through multiple iterations. It’s more thinking about things in real time.
So in some ways, we want to, on our blog, get back more to what we think are the core differentiating values of blogging, and not this kind of in-between space a lot of news organizations have wound up in where everything became called a blog, and then it became unfashionable, so nothing gets called a blog.

We do make a distinction based mostly on how quickly the content is turned out. What we call a feature is something where it’s assigned, generally in advance, and goes through at least one, maybe multiple rounds of edits.

A blog is something which still has to be very good — and it’s as hard, relatively, to hire bloggers as it is to hire feature writers. It’s something that might get a quick read, and maybe has a little bit more voice, but also saying “this is my thinking in real time,” or my work in progress. How we’ll flesh that out exactly in practice, I’m not sure, but I feel like there is an important distinction to be made between the two.

This got me thinking about the awesomeness of truly good blogging, the way it makes you want to check in every 10 minutes to see if the author has something new to say. It’s why I still want to read everything Kevin Drum, Matt Yglesias, or Tyler Cowen writes.

Now here’s Silver on balancing between loyal readers and broader traffic:

I think with almost any web product you have two types of audience. You have your core, everyday readers and then you have the people you reach out to from time to time. I think that having the right content mix, where you can have big spikes in content by doing something interesting and different from time to time, but also making sure that people who are reading the site every day feel they’re getting a good healthy breakfast, lunch, and dinner everyday, full of FiveThirtyEight content.

All of which made me suddenly reconsider my intuition on paywalls and premium content. I hear Silver making the case for a metered model that treats everything equally. The high quality stuff can “travel” on social, reaching readers who otherwise wouldn’t stop by, and because they haven’t used up their content quota, they can view it for free. It’s essentially a loss leader that attempts to draw in more regular readers. And what those devotees are paying for isn’t high production value or in depth reporting so much as immediacy and consistency. They want to read all (or lots of) what you put out.

We see both models today: The New York Times and The Washington Post are metered; The New Yorker makes it harder to get to its premium magazine content than to its blogs. But when I think about my own willingness to pay (or lack thereof) the metered approach strikes me as a bit more plausible, because it pulls out all the stops to build an affinity to the brand. Put another way, you’re making it extra difficult to gain paying customers when you put your best products out of reach.
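The metered model is mechanically simple, which may be part of its appeal. Here’s a toy sketch (the class, names, and quota are invented for illustration, not any publisher’s actual system): every article counts equally against a monthly quota, whether it’s a quick blog post or an expensive multimedia feature.

```python
# Toy sketch of a metered paywall: all content counts equally toward a
# reader's monthly quota of free articles.
class Meter:
    def __init__(self, free_articles_per_month: int = 10) -> None:
        self.quota = free_articles_per_month
        self.reads = {}  # reader id -> articles read this month

    def can_read(self, reader: str) -> bool:
        return self.reads.get(reader, 0) < self.quota

    def record_read(self, reader: str) -> None:
        self.reads[reader] = self.reads.get(reader, 0) + 1

meter = Meter(free_articles_per_month=2)
meter.record_read("visitor")
meter.record_read("visitor")
print(meter.can_read("visitor"))  # False: quota used up, paywall appears
```

Under this scheme the premium feature that arrives via a social share is free to a first-time visitor, whose quota is untouched, while the devoted daily reader is the one asked to pay.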

Nov 16, 2013
 

[Image: Watson on Jeopardy!]

I recently read Tyler Cowen’s latest book Average is Over, and I’d recommend it to anyone thinking about technology and the future of the economy. It’s a highly readable vision of what the coming age of ubiquitous intelligent machines will mean for workers and the economy. Here’s a bit from Chapter 1 that captures Cowen’s thinking:

Workers more and more will come to be classified into two categories. The key questions will be: Are you good at working with intelligent machines or not? Are your skills a complement to the skills of the computer, or is the computer doing better without you? … If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch. Ever more people are starting to fall on one side of the divide or the other. That’s why average is over.

To be clear: the book is not about whether this is a good or bad thing, or whether its results will be positive or negative. But his articulation of what the world will look like is bleak:

We will move from a society based on the pretense that everyone is given an okay standard of living to a society in which people are expected to fend for themselves much more than they do now. I imagine a world where, say, 10 to 15 percent of the citizenry is extremely wealthy and has fantastically comfortable and stimulating lives, the equivalent of current-day millionaires albeit with better health care.

Much of the rest of the country will have stagnant or maybe even falling wages in dollar terms, but a lot more opportunities for cheap fun and also cheap education. Many of these people will live quite well, and those will be the people who have the discipline to benefit from all the free or near-free services modern technology has made available. Others will fall by the wayside.

(You can read more about what Cowen thinks this will do to U.S. politics in this excerpt at Politico.)

If we accept for the moment that Cowen’s vision of how machine intelligence will transform the economy is basically right, how might we avoid this sad state of political and distributional affairs?

Enter the minimum income. As The New York Times recently explained:

Every month, every Swiss person would receive a check from the government, no matter how rich or poor, how hardworking or lazy, how old or young. Poverty would disappear. Economists, needless to say, are sharply divided on what would reappear in its place — and whether such a basic-income scheme might have some appeal for other, less socialist countries too.

(You can read Cowen on some of the shortcomings here.)

I see a few reasons why a guaranteed minimum income would fit nicely with the future Cowen describes.

1. Supplement the incomes of those unable to compete in the labor force. This is obvious. But the guaranteed minimum income strikes me as a way to maintain the notion of a guaranteed standard of living for all. Moreover, as intelligent machines put pressure on the labor market, tying that standard to work — as we do through the minimum wage — may make less sense.

2. Incentives to work would matter less than they do today. As the Times notes, one of the biggest concerns around a minimum income is that it would serve as a disincentive to work. But that should matter much less in the world Cowen envisions, where unskilled labor is largely displaced by machines. In that world, the fact that some citizens opt not to work would matter less to GDP. Moreover, it would be unlikely to impact the motivations of higher skilled workers, who would set out to earn far more than the guaranteed income provides.* This logic applies both to citizens working despite the option of a guaranteed minimum income and to the disincentive of higher marginal tax rates on the wealthy to support such a program. Don’t buy this? It would be even more true given #3.

3. Increased cultural emphasis on self-motivation will already be necessary. The world Cowen describes prizes self-motivation above almost everything else. If you’re motivated, you’ll take advantage of cheap education to work your way into the most productive echelons of the labor market. In such a world, a cultural focus on increasing self-motivation would be extremely helpful. Moreover, such a cultural emphasis would serve to bolster my arguments in #2, undercutting the argument that a guaranteed minimum income disincentivizes work. In other words, we would accept that financial incentives to avoid work existed, but overcome them in part by becoming a society obsessed with promoting curiosity-based learning and a quest for mastery or “flow.”

4. Some slack in the economy will be necessary to promote art and entrepreneurship. In this hyper-efficient, ultra-competitive world, the creation of a strong safety net would arguably be even more necessary to promote things like entrepreneurship and the arts. Entrepreneurship is mentioned as an argument for the guaranteed minimum income in the Times piece and I think it is a strong argument in two senses. First, a stronger safety net helps to de-risk entrepreneurship; if you forgo a higher income to found a startup and then fail, you’ll at least fall back on some level of comfort. Second, a minimum income would help to subsidize entrepreneurs’ incubation period, as already happens through “entrepreneur-in-residence” programs. If entrepreneurs can eat and live somewhat comfortably while working on their idea (but before it is at the point of making revenue or being attractive to investors), they’ll be more likely to take the plunge.

As for the arts, the tight labor market envisioned in Cowen’s book would put even more negative pressure on the wages of artists. Huge swaths of those without the skills to succeed in complementing machines would seek to become musicians, actors, painters, etc., bidding down wages in those industries. A minimum income would make it possible for artists to do their work.

For all these reasons, I see a guaranteed minimum income as a natural fit with the world Cowen describes. To be clear, I’m not saying I think we’ll necessarily get that world, or that a minimum income is actually a good idea. But as we ponder a world of machine intelligence and a bifurcated labor market, it’s something to at least consider.

Image via Wikipedia

*Even today, I would argue that the most productive workers are largely not motivated by money. Rather, they’re motivated by status, curiosity, and a sense of mastery. To the extent that money is a motivator, it’s largely as a substitute for status.

Nov 16, 2013
 

[Image: Tetris]

There was a piece in Fortune earlier this month with which I strongly disagreed, on the subject of healthcare, technology, and “gamification”. The post centers on a health tech case competition and, in dismissing the promise of gamification, misses what I think is one of the most promising aspects of health IT. Here’s the gist:

Several months ago, I sat in on a case competition at Boston University’s School of Management. The event played out over two days, during which 15 teams of five students from B-schools all over the world — India, South Korea, Canada, but mostly the U.S. — pitched their ideas for a company, one that would revolutionize health care (the stated goal was particularly jargon filled: “to leverage information technology to transform global health care and create value”)…

Immediately, a theme emerged, and the theme was games. “How do we gamify health care?”… As the day wore on, one of the Merck representatives finally asked, in exasperation, “Why would you make a game out of taking a pill? This will never be fun,” which is true…

I happen to think this is a bit needlessly cynical with respect to drug adherence, but the point I want to make is different. The term “health IT” tends to conjure the thought of medical records and the efficiency of medicine more broadly. But one of the most promising areas in my mind, specifically with mobile technology, is in gamifying health.

If you look at what’s driving U.S. healthcare costs, a huge chunk is driven by diseases directly caused by poor health behaviors like smoking, overeating, and lack of exercise. As I put it in a post a little over a year ago:

Want to crack healthcare costs? Help at-risk individuals smoke less, drink less, exercise more and eat better.

This is where the potential for gamification lies. (If you don’t like the buzzword, call it behavior modification.) Think of it like this: using a doctor to treat the fact that you eat too much and don’t get enough exercise is a terribly inefficient health plan. You go in every few months, the doctor scolds you for not sticking to your diet and exercise regimen, you go home and don’t change.

The opportunity is to leverage the fact that we now all carry powerful computers connected to the internet with us at all times (in the form of smartphones) to nudge us toward better behavior. This is by no means easy! And for now it’s way worse than the alternative of relying on a mix of social support from family and friends along with willpower and attempts to form better habits. But is it out of the question to think that mobile technology can supplement those things?

Think about RunKeeper, the running app, or GymPact, the workout commitment app, in this context. They’re both, basically, turning fitness into a kind of game, and they’re both using different motivational levers to try and increase your likelihood of exercising. This kind of thing — the good behavior layer — is where the potential for gamification lies. Not in making it more fun to take your pills or to receive a medical diagnosis.

The area that excites me in terms of health technology isn’t revolutionizing medicine, as big a deal as that may be, but revolutionizing health.

Oct 4, 2013
 

Contrary to what you’ve heard:

[Chart: music industry revenues over time]

 

That’s via a report from the London School of Economics, which concludes:

The music industry may be stagnating, but the drastic decline in revenues warned of by the lobby associations of record labels is not in evidence.

This isn’t to say all is well in the music industry, and it doesn’t speak to the distribution of revenue between artists, studios, platforms like Spotify, etc. But the report points out that the music industry is making exaggerated claims about the harm that piracy is causing, in order to advocate for stronger intellectual property protection. Whatever the challenges faced by musicians, making it harder to remix and share culture isn’t the answer.
