Was ‘Shareholder Value’ ever a good idea?

Joe Nocera had a great column recently on shareholder value, based in part on a feature we published in HBR. In it, he pauses to consider why the idea of putting shareholders first might ever have made sense to anyone other than investors:

It seemed like such a good idea at the time, back in the late 1970s and 1980s. For too long, the compensation of top executives was disconnected from any performance criteria, including whether they made money for shareholders. CEOs did pretty much whatever they wanted, with no fear of consequences. Thus, companies that needed to slim down, wouldn’t. Companies that needed to deploy capital more intelligently, didn’t. Executives who should have been fired, weren’t.

But, as Nocera writes, be careful what you wish for.

In “The Error at the Heart of Corporate Leadership” in HBR, Joseph Bower and Lynn Paine take issue with shareholder primacy, writing:

We are concerned that the agency-based model of governance and management is being practiced in ways that are weakening companies and—if applied even more widely, as experts predict—could be damaging to the broader economy. In particular we are concerned about the effects on corporate strategy and resource allocation. Over the past few decades the agency model has provided the rationale for a variety of changes in governance and management practices that, taken together, have increased the power and influence of certain types of shareholders over other types and further elevated the claims of shareholders over those of other important constituencies—without establishing any corresponding responsibility or accountability on the part of shareholders who exercise that power. As a result, managers are under increasing pressure to deliver ever faster and more predictable returns and to curtail riskier investments aimed at meeting future needs and finding creative solutions to the problems facing people around the world.

Instead, they put forward a “company-centered” model:

A better model, we submit, would have at its core the health of the enterprise rather than near-term returns to its shareholders. Such a model would start by recognizing that corporations are independent entities endowed by law with the potential for indefinite life. With the right leadership, they can be managed to serve markets and society over long periods of time.

It is helpful to remember, as Nocera does, why shareholder primacy might ever have been appealing. But I’m squarely with Bower and Paine that it’s gone much too far. (Whether agency theory has been a productive academic framework for studying firms is a separate question.) I’m a bit less convinced that activist investors are the primary culprit (see here, here, here, here), and the question of whether firms are too short-term focused remains a matter of debate (see here, here, here), but these are quibbles. Shareholders are taking home a larger and larger share of the economic pie, and that’s a deeply troubling phenomenon.

My favorite formulation in the Bower/Paine piece is this one:

It is important to note that much of what activists call value creation is more accurately described as value transfer. When cash is paid out to shareholders rather than used to fund research, launch new ventures, or grow existing businesses, value has not been created. Nothing has been created. Rather, cash that would have been invested to generate future returns is simply being paid out to current shareholders.

This is the right way to think about companies’ job in the economy: to create real economic value, not just paper value, and not just to transfer value from one group to another. The main way to create value is through innovation. But would a move away from shareholder primacy help or hurt?

In one view, a move away from shareholder primacy helps innovation, as companies feel free to reinvest profits into R&D and to spend more on training workers. In another view, absent pressure from shareholders, incumbents get too cozy, and a “company-centered” model prioritizes the needs of existing organizations over the creation of new ones. The latter would be a problem, since we have good evidence linking firm entry and exit with economic growth and job creation.

I see no reason why a move away from shareholder primacy should lead to a less dynamic economy; the opposite seems more likely. You can imagine a 2×2 matrix, with Dynamic/Complacent on one axis and Shareholders-First/Balanced on the other. Over the past 50 years, the U.S. economy has moved from the Balanced side of the grid toward Shareholders-First. There’s at least some evidence that the economy became less dynamic over that same period.

Dynamic+Balanced clearly seems like the best part of the grid to be in. Putting the interests of shareholders on a more equal footing with the interests of employees, customers, and broader society seems like a step in the right direction.

More thoughts on Snap and digital strategy

I published a piece at HBR about Snap’s inability to keep Facebook/Instagram from copying its best features. It was pretty pessimistic about Snap’s “strategy” of building great, creative products. And I argued that CEO Evan Spiegel drew the wrong lessons from Google’s success in search.

That got me thinking about what the best strategic case for Snap would be. Before getting to that, here are a few other good reads on Snap and strategy. Joshua Gans on the tradeoffs between execution and control; Nathan Furr on turning products into platforms; and Ben Thompson on Snap and Apple.

So what would the best case be?

Architectural innovation

Here’s another quote from Spiegel on the first Snap earnings call:

“When Google came along, everyone really felt like they needed a search strategy. When Facebook came along, everyone felt they needed a social strategy. And now I think with Snap, with our company, we believe that everyone is going to develop a camera strategy.”

One (perhaps generous) reading of this would be that the shift to camera-based social applications represents a shift in the architecture of social products, not just in their components. In 1990, Rebecca Henderson (then of MIT, now of Harvard Business School) and Kim Clark (also of HBS) published a key paper on innovation. In it, they suggested that “incremental” and “radical” innovation were insufficient categories. Instead, they differentiated between changes to the individual components of a product and changes to its architecture.

As they described it:

For example, a room fan’s major components include the blade, the motor that drives it, the blade guard, the control system, and the mechanical housing. The overall architecture of the product lays out how the components will work together. Taken together, a fan’s architecture and its components create a system for moving air in a room.

A component is defined here as a physically distinct portion of the product that embodies a core design concept (Clark, 1985) and performs a well-defined function. In the fan, a particular motor is a component of the design that delivers power to turn the fan. There are several design concepts one could use to deliver power. The choice of one of them — the decision to use an electric motor, for example — establishes a core concept of the design. The actual component — the electric motor — is then a physical implementation of this design concept.

The distinction between the product as a system and the product as a set of components underscores the idea that successful product development requires two types of knowledge. First, it requires component knowledge, or knowledge about each of the core design concepts and the way in which they are implemented in a particular component. Second, it requires architectural knowledge or knowledge about the ways in which the components are integrated and linked together into a coherent whole. The distinction between architectural and component knowledge, or between the components themselves and the links between them, is a source of insight into the ways in which innovations differ from each other.
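A loose software analogy, mine rather than Henderson and Clark’s, may make the distinction concrete: improving how a single component is implemented is component innovation, while changing how the components are linked together is architectural innovation. A minimal sketch, with invented class names:

    # Components: each embodies a core design concept (how to deliver power, how to move air).
    class ElectricMotor:
        def power(self):
            return "torque from electricity"

    class Blade:
        def move_air(self, power):
            return f"air moved by a blade spun with {power}"

    # Architecture: how the components are linked together into a working whole.
    class RoomFan:
        def __init__(self, motor, blade):
            self.motor = motor
            self.blade = blade

        def run(self):
            return self.blade.move_air(self.motor.power())

    # Component innovation: swap in a quieter motor; RoomFan's wiring is untouched.
    # Architectural innovation: change how the pieces are linked (say, one motor
    # driving blades mounted in the ceiling), which makes RoomFan itself obsolete.
    print(RoomFan(ElectricMotor(), Blade()).run())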

In a recent book and HBR article, Joshua Gans of the University of Toronto has dubbed this theory of architectural innovation “supply-side disruption.”

Henderson and Clark argue that architectural innovation creates problems for established firms:

Architectural innovation presents established firms with a more subtle challenge. Much of what the firm knows is useful and needs to be applied in the new product, but some of what it knows is not only not useful but may actually handicap the firm. Recognizing what is useful and what is not, and acquiring and applying new knowledge when necessary, may be quite difficult for an established firm because of the way knowledge — particularly architectural knowledge — is organized and managed.

Perhaps the move to make smartphone cameras the center of social applications represents such an architectural innovation. If that’s true, maybe it’s enough to trip up incumbent players, offering Snap the chance to become the market leader.

This doesn’t seem likely, for two reasons. First, Instagram is already a camera-first company. Second, Facebook showed with its shift to mobile that it is both willing and able to adjust to architectural innovation.

The Snap paradox

Another possibility, mentioned in my original piece, is segmentation. Maybe Snap can be the market leader among a subset of users. That’s totally plausible, but at odds with Spiegel’s view that “Longer term, obviously, we really believe that Snapchat is for everyone.” He could argue that buy-in from a certain segment will allow him to get there — maybe you win millennials first, and later everyone else follows them — but that argument is self-defeating. That’s what Facebook did, and if that advantage is so fleeting that Snap can come along and do the same thing a few years later, what does that say about the value of Snap?

That is the Snap paradox. If it shows it can beat Facebook, it shows that social networks aren’t as valuable as they might seem, because, as Thompson put it, “Snap is declaring that moats no longer exist.”

One final thought: there may still be room for Snap to follow in Apple’s footsteps. In my piece, I wrote:

Thompson has compared Snap favorably to Apple, an extremely successful company known for its hard-to-imitate product expertise. But Apple has benefited from controlling key ecosystems, like iTunes and iOS. And it has benefited from focusing on a particularly lucrative part of the hardware market: high-end, high-margin devices aimed at wealthier customers. It’s not clear what the equivalent is in Snap’s case.

Just because Spiegel isn’t fully articulating Apple’s actual strategy doesn’t necessarily mean Snap can’t get there. It may be that the combination of branding, constant attention to design and product innovation, and segmentation on the advertising side — some way to offer high-end ad experiences in a way that other firms find hard to imitate or otherwise unappealing — could work.

But that last part still requires some theory of why Facebook can’t or won’t copy you. It still requires strategy.

Gig work as a symptom of economic insecurity

Over the last 40 years, America has gradually shifted risk from government and other large institutions like businesses back onto its citizens. (For much of the previous century, it was going in the other direction.) This shift was political as well as economic. 

It’s tempting to see the gig economy as furthering this shift, moving risk from companies to workers, and that’s clearly part of the story.

But the risk shift was well underway before Uber, Lyft, and Taskrabbit arrived on the scene. So it seems more accurate to think of these jobs primarily as a consequence of risk shifting, and only secondarily as a cause.

Consider Uber. The majority of Uber drivers have another job, and more than half of UberX drivers drive less than 15 hours a week. In their 2015 paper on Uber drivers, economists Jonathan Hall and Alan Krueger conclude that “Uber’s driver-partners also often cited the desire to smooth fluctuations in their income as a reason for partnering with Uber.”

That makes sense, since incomes have become significantly more volatile in recent decades. And as Jonathan Morduch and Rachel Schneider wrote at HBR:

Our first big finding was that the households’ incomes were highly unstable, even for those with full-time workers. We counted spikes and dips in earning, defined as months in which a household’s income was either 25% more or 25% less than the average. It turned out that households experienced an average of five months per year with either a spike or dip. In other words, incomes were far from average almost half of the time. Income volatility was more extreme for poorer families, but middle class families felt it too.
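For concreteness, here is a minimal sketch of that spike/dip count as the excerpt describes it; the 25% threshold comes from the quote above, and the monthly income figures are made up for illustration.

    # Count income "spikes" and "dips": months in which a household's income
    # is 25% above or 25% below its own average for the year.
    def count_spikes_and_dips(monthly_income, threshold=0.25):
        average = sum(monthly_income) / len(monthly_income)
        spikes = sum(1 for m in monthly_income if m > average * (1 + threshold))
        dips = sum(1 for m in monthly_income if m < average * (1 - threshold))
        return spikes, dips

    # A hypothetical household: a steady base income with a few volatile months.
    example_year = [3000, 3100, 1900, 3000, 4200, 3000, 2000, 3000, 3100, 4500, 3000, 2900]
    print(count_spikes_and_dips(example_year))  # (2, 2): four far-from-average months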

Uber drivers are responding to this phenomenon, using gig work to “top off” their income as necessary.

How you feel about this likely depends on your point of comparison. If you take the great risk shift as a given, this might be a mild improvement on the status quo, since gig work can help supplement incomes in times of need. But if you compare it to a world where incomes are less volatile and/or risk is shifted back to government or large institutions, this arrangement looks lousy.

The real culprit here is politics. Yes, Uber should treat drivers better, and companies in general can do more to make work less precarious. But the enthusiasm for gig work is made possible by a political system that has failed a large number of people. 

On crony capitalism

A good paragraph, from American Amnesia, page 97:

Government does not always work well — indeed, we wrote this book because it is working less and less well. And when well regulated, markets often work very well indeed. But there is no recipe for prosperity that doesn’t involve extensive reliance on effective political authority. The conservative vision of shrinking government to a size that will make it ‘safe’ from cronyism is the economic equivalent of bloodletting. The cure is far worse than the disease. Prosperous societies need a lot of government. Because they do, the incentives for rent seeking will always be present. Making a mixed economy work requires keeping that cronyism, and the other dangers we have recounted, within tolerable limits — and that requires not less government (or necessarily more government) but effective government.

Why did America get rich?

We published a piece last week at HBR.org by Martin Feldstein, the widely respected Harvard economist and conservative, in which he lays out ten reasons why the U.S. remains richer than its peers. It just so happens that at the same time I was reading American Amnesia, a defense of government and the mixed economy, by political scientists Jacob Hacker and Paul Pierson. And they offer their own ten reasons why economies grow. I thought it’d be instructive to compare them.

Here’s Feldstein:

  1. An entrepreneurial culture.

  2. A financial system that supports entrepreneurship.

  3. World-class research universities.

  4. Labor markets that generally link workers and jobs unimpeded by large trade unions, state-owned enterprises, or excessively restrictive labor regulations.

  5. A growing population, including from immigration.

  6. A culture (and a tax system) that encourages hard work and long hours.

  7. A supply of energy that makes North America energy independent.

  8. A favorable regulatory environment.

  9. A smaller size of government than in other industrial countries.

  10. A decentralized political system in which states compete.

Here are Hacker and Pierson, on the factors they say “would make most analysts’ lists” of why nations get rich:

  1. private property rights and legally secure contracts backed up by an independent legal system;

  2. a well-functioning financial system, including a central bank to provide a common currency, manage the macroeconomy, and serve as lender of last resort;

  3. internal markets linked by high-quality communications and transportation infrastructure;

  4. policies supporting and regulating external trade and financial flows;

  5. substantial public investment in R&D and education;

  6. regulation of markets to protect against externalities, such as pollution, and help consumers make informed decisions;

  7. public provision of goods that won’t be provided at all or sufficiently if left to markets, such as public health;

  8. inclusion of all sectors of society in the economy, so that human capital isn’t wasted;

  9. reasonably independent and representative political institutions, so that elite capture and rent seeking aren’t rife; and

  10. reasonably capable and autonomous public administration — including an effective tax system that citizens view as legitimate — so that items 1 through 9 can be carried out in relatively efficient and unbiased ways.

These are very different lists, and they mirror America’s political divisions in fairly predictable ways. They disagree over the size of government, for example, and about unions.

Yet there are some common themes. Scientific research makes both lists, as do education and the role of human capital in general. Effective government shows up on both lists, though the authors disagree about how to encourage it. A financial system that allocates capital well makes both lists, though again with different points of emphasis. Feldstein would likely agree that private property and secure contracts are critical, and I suspect Hacker and Pierson would agree that entrepreneurship belongs on the list.

A couple of years ago I posted about “How to promote economic growth, in one paragraph,” and, unsurprisingly, there’s considerable overlap between that paragraph — by McAfee and Brynjolfsson — and the one I just wrote. In that paragraph, they emphasized education and investments in technology. Like Hacker and Pierson, they highlighted infrastructure.

It’s sometimes easier to talk about economic growth, I think, if you put size of government to the side to begin with. Economies grow largely because of the spread of new and useful ideas. Those ideas are often produced by government-financed research and disseminated by entrepreneurs. Every part of that process depends on people, which means that investments in education are critical. The role of finance is largely to fund that process.

If we can agree on that much, then we can debate the role of government with reference to it. As the two lists I started with show, there are aspects of economic growth where experts disagree. But it’s easy to overstate the disagreement. In fact, as McAfee and Brynjolfsson wrote in that Foreign Affairs piece a few years back, in terms of promoting growth “there is a near consensus among serious economists about many of the policies that are necessary.”

The status quo gap

Everything in America seems polarized right now, so perhaps it’s no surprise that innovation is as well. In Silicon Valley, a small fraction of the population wants to push every technical boundary, transform every industry, re-imagine every job, and re-engineer the limits of life itself. Meanwhile, much of the rest of the country takes solace in tradition, and seeks to preserve the way things are.

That’s the theme of my column in the latest issue of HBR, about a few new books and a film, all in one way or another about innovation. Here’s the gist:

Wolfe focuses on PayPal’s founder, Peter Thiel, and a cohort of teenagers selected by his foundation to forgo college and start companies, but she also lets us peek into a variety of tech subcultures—from seasteaders to polygamists to those who, like Thiel, chase immortality through investments in life-extending technology. The suggestion isn’t that these pursuits are inherently flawed; it’s that they stem from a single-minded desire to push boundaries—technological and social.

Meanwhile, in the rest of the country, Cowen argues, most “Americans are in fact working much harder than before to postpone change, or to avoid it altogether.”

The column covers The Circle, The Upstarts, Valley of the Gods, and The Complacent Class. But my deadline was in February, and since then I’ve read a couple of books that feel relevant to this theme.

I wrote recently about A Culture of Growth, in which economic historian Joel Mokyr argues that the connection between intellectuals and artisans helped propel the Industrial Revolution. The cultural divide between Silicon Valley and the rest of the world doesn’t quite map onto Mokyr’s divide, but his broader point about the economic importance of connections between different parts of society surely applies here. The innovation sector works best when it’s attached to the rest of society — to scientists, intellectuals, politicians, and the public. When the divide between the innovative sector and the rest of society grows too large, bad things can happen.

A version of this shows up in Kurt Vonnegut’s first novel, Player Piano, which anticipates basically all of our current debate over artificial intelligence stealing jobs. In the book, the managers and the engineers are separated from the public physically as well as ideologically. Neither side ends up looking good, and the ending is not one we should hope for.

Evidence that too much social media is making us unhappy

We published a piece at HBR this past week on Facebook and well-being, by two researchers writing about their recent study:

Overall, our results showed that, while real-world social networks were positively associated with overall well-being, the use of Facebook was negatively associated with overall well-being. These results were particularly strong for mental health; most measures of Facebook use in one year predicted a decrease in mental health in a later year. We found consistently that both liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.

My view on this has changed over the years, based on both research and experience. In 2012, I was skeptical about an Atlantic piece saying Facebook was making people lonely. It seemed more likely that lonely people were more heavily using Facebook. By 2013, I was at least willing to entertain the idea that social media was bad for us. By 2014, I was willing to say that “it depends,” and to acknowledge that the balance of evidence might be shifting.

Of course, it still depends. The impact of social media can be positive or negative, depending on a host of factors. But what if we’re just communicating with each other too much? At best, it seems social media is running into diminishing marginal returns; at worst, we’re on the other side of an upside-down-U. The internet lets us communicate with each other. As it spreads, people communicate more — email, then Facebook, then Twitter, etc. — and at first welfare improves. Then either welfare plateaus (diminishing returns), or it actually starts going down (upside-down-U).
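One stylized way to picture those two possibilities, with made-up functional forms rather than anything from the research:

    import math

    # Two toy models of welfare as a function of how much we communicate (q).

    # Diminishing returns: welfare keeps rising, but each increment adds less.
    def welfare_diminishing(q):
        return math.log(1 + q)

    # Upside-down U: welfare rises, peaks, then falls as communication keeps growing.
    def welfare_inverted_u(q, peak=5.0):
        return q * (2 * peak - q) / peak ** 2  # maximized at q = peak

    for q in [1, 3, 5, 7, 9]:
        print(q, round(welfare_diminishing(q), 2), round(welfare_inverted_u(q), 2))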

Sure enough, the research we covered suggests something similar:

Overall, our results suggest that well-being declines are also a matter of quantity of use rather than only quality of use. If this is the case, our results contrast with previous research arguing that the quantity of social media interaction is irrelevant, and that only the quality of those interactions matters.

There’s some forthcoming research on the benefit to consumers from various online services. I’ll hopefully cover it when it’s out, but at a glance it suggests diminishing returns: the benefits of email are far larger than those from Snapchat, for instance.

That’s starting to look like the best-case scenario. Either social media is fine but each platform isn’t much better than the one that came before, or it’s harmful, at least for the people who use it most.

 

From Francis Bacon to Bob Langer

My colleague Steve Prokesch had an excellent profile of Bob Langer in the March-April issue of HBR. Langer runs Langer Labs at MIT, which Steve describes as “one of the most productive research facilities in the world.” Steve summarizes Langer’s recipe for success in a paragraph:

He has a five-pronged approach to accelerating the pace of discoveries and ensuring that they make it out of academia and into the real world as products. It includes a focus on high-impact ideas, a process for crossing the proverbial “valley of death” between research and commercial development, methods for facilitating multidisciplinary collaboration, ways to make the constant turnover of researchers and the limited duration of project funding a plus, and a leadership style that balances freedom and support.

It’s the DARPA formula, about which more here.

Langer’s success is extreme, and not every academic aspires to do work that is socially beneficial. Still, it’s notable how uncontroversial much of that formula is. Who wouldn’t want academia to tackle pressing real-world problems?

However, that ideal — that the fruits of academic inquiry actually be applied to create a better future — may explain quite a lot about how the world became more prosperous, according to economic historian Joel Mokyr. In A Culture of Growth, he reminds readers that Francis Bacon’s contribution to science was not just to make the case for data and experimentation:

Bacon’s work reinforced the trend in the West to build bridges between the realm of natural philosophy and that of the artisan and farmer. These bridges are critical to technological progress… One of the most remarkable trends in the cultural development of European intellectuals after 1500 was the slowly ripening notion that ‘intellectuals should involve themselves in practical matters traditionally considered beneath them’ and that their priorities ‘should take artisans newly seriously.’

From our current vantage, Langer is most notable for his particular approach and his success in commercializing his science. But while the details of how his work gets diffused are interesting, from the broad sweep of history what’s notable is that he and his colleagues seek to make that happen at all.

3 good lines on when to trust experts

When should we defer to experts, and on what grounds? My general view is: more often than most people do. But it’s a complicated question; as Phil Tetlock has shown, for example, not every credentialed expert is worthy of our deference. And even if you want to defer, summarizing what the experts actually think can be a challenge.

But I’ve come across three separate tidbits, each related in some way to this question, and I thought I’d preserve them here.

Dan Kahneman on deference to scientific consensus

One reason you might not trust certain experts is the ongoing “crisis” in social science. That’s a subject for another post, but here’s the famed psychologist Dan Kahneman responding to a post challenging his belief in the literature on priming:

My position when I wrote “Thinking, Fast and Slow” was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me – it is why I think people should believe in climate change. But the argument only holds when all relevant results are published.

Kahneman writes that belief in scientific conclusions is “not optional” because, I believe, as a student of bias he realizes that the alternative is far worse. At least for most people, not believing what’s been published in peer-reviewed journals doesn’t mean a sort of enlightened skepticism; it means falling back on the hunches you’d always wanted to believe in the first place.

Fact-checking and triangulating the truth

This one comes from Lucas Graves’ excellent new book Deciding What’s True, about modern fact-checking. I’m writing more about the book, but for now I want to highlight one phrase and a few sentences that surround it:

The need to consult experts presents a real problem for fact-checkers, pulling them inevitably into the busy national market of data and analysis retailed in the service of ideological agendas. “You’re going to seek independent, nonbiased sources — of which in today’s world there are none,” a PolitiFact editor joked during training…

PolitiFact items often feature analysis from experts or groups with opposing ideologies, a strategy described internally as “triangulating the truth.” “Seek multiple sources,” an editor told new fact-checkers during a training session. “If you can’t get an independent source on something, go to a conservative and go to a liberal and see where they overlap.” Such “triangulation” is not a matter of artificial balance, the editor argued: the point is to make a decisive ruling by forcing these experts to “focus on the facts.” As noted earlier, fact-checkers cannot claim expertise in the complex areas of public policy their work touches on. But they are confident in their ability to choose the right experts and to distill useful information from political arguments.

Bertrand Russell on expert consensus

This one is a wonderful quote from Bertrand Russell, and it comes via an article in the most recent Foreign Affairs called “How America Lost Faith in Experts,” by Tom Nichols, a professor at the U.S. Naval War College. Here’s the Russell quote:

The skepticism that I advocate amounts only to this: (1) that when the experts are agreed, the opposite opinion cannot be held to be certain; (2) that when they are not agreed, no opinion can be regarded as certain by a non-expert; and (3) that when they all hold that no sufficient grounds for a positive opinion exist, the ordinary man would do well to suspend his judgment… These propositions may seem mild, yet, if accepted, they would absolutely revolutionize human life.

The New Industrialism

At the very end of 2012, I wrote a piece for MIT Technology Review (paywall) about an interesting schism in Democratic economic policy circles. Gene Sperling had replaced Larry Summers as director of the National Economic Council in early 2011, and over the next couple of years the Obama administration seemed to talk more about industrial policy, although they didn’t call it that.

The term was “advanced manufacturing.”

But in early 2012, Christina Romer, former chair of Obama’s Council of Economic Advisers, questioned the administration’s manufacturing agenda publicly, writing in The New York Times:

As an economic historian, I appreciate what manufacturing has contributed to the United States. It was the engine of growth that allowed us to win two world wars and provided millions of families with a ticket to the middle class. But public policy needs to go beyond sentiment and history. It should be based on hard evidence of market failures, and reliable data on the proposals’ impact on jobs and income inequality. So far, a persuasive case for a manufacturing policy remains to be made, while that for many other economic policies is well established.

A survey of economists from around the same time confirmed that the conventional wisdom in the field was firmly against intervention to boost manufacturing.

I didn’t get to interview the players in this debate, but my piece highlighted a group of policy wonks willing to defend a certain sort of industrial policy. I talked to Mark Muro of the Brookings Metropolitan Policy Program, which had released research emphasizing the importance of manufacturing, and Rob Atkinson of ITIF, a think tank promoting an aggressive innovation policy agenda.

Not much came of this debate, at least that I could see. But perhaps a broader warming to some revised form of industrial policy is now perceptible?

Enter economist Noah Smith at Bloomberg View, writing about new ideas in economic growth. Neoliberalism still has its adherents, but what’s the competition?

Looking around, I see the glimmer of a new idea forming. I’m tentatively calling it “New Industrialism.” Its sources are varied — they include liberal think tanks, Silicon Valley thought leaders and various economists. But the central idea is to reform the financial system and government policy to boost business investment.

Business investment — buying equipment, building buildings, training employees, doing research, etc. — is key to growth. It’s also the most volatile component of the economy, meaning that when investment booms, everything is good. The problem is that we have very little idea of how to get businesses to invest more.

My Tech Review story called this group the “institutionalists,” a term one of my sources coined when pressed to distinguish his faction. But “New Industrialism” is far clearer.

So who’s a part of this agenda? Smith mentions The Roosevelt Institute’s excellent reports on short-termism; ITIF and Brookings Metro belong on the list; we’ve published a lot at HBR that I think would count (a few examples here, here, here, here).

And one could argue that Brad DeLong and Stephen Cohen’s forthcoming book on Hamiltonian economic policy (my employer is the publisher) belongs in this conversation, though it argues that such an agenda isn’t new at all. Here’s their recent Fortune piece:

Hamilton’s system was constructed of four drivers that reinforced one another, not just economically but politically: high tariffs; high spending on infrastructure; assumption of the states’ debts by the federal government; and a central bank.

As Smith writes,  “New Industrialism… is not yet mainstream,” and frankly there’s still a lot to be fleshed out before we can even ask whether such an agenda is superior to the alternatives. But he concludes that “it could be just the thing we need to fix the holes in our neoliberal growth policy.” He may just be right.