The Slack Wire

Strange Defeat

Anyone who found something useful or provoking in my Jacobin piece on the state of economics might also be interested in this 2013 article by me and Arjun Jayadev, “Strange Defeat: How Austerity Economics Lost All the Intellectual Battles and Still Won the War.” It covers a good deal of the same ground, a bit more systematically but without the effort to find the usable stuff in mainstream macro that I made in the more recent piece. Perhaps there wasn’t so much of it five years ago!

Here are some excerpts; you can read the full piece here.

* * * 

The extent of the consensus in mainstream macroeconomic theory is often obscured by the intensity of the disagreements over policy…  In fact, however, the contending schools and their often heated debates obscure the more fundamental consensus among mainstream macroeconomists. Despite the label, “New Keynesians” share the core commitment of their New Classical opponents to analyse the economy only in terms of the choices of a representative agent optimising over time. For New Keynesians as much as New Classicals, the only legitimate way to answer the question of why the economy is in the state it is in, is to ask under what circumstances a rational planner, knowing the true probabilities of all possible future events, would have chosen exactly this outcome as the optimal one. Methodologically, Keynes’ vision of psychologically complex agents making irreversible decisions under conditions of fundamental uncertainty has been as completely repudiated by the “New Keynesians” as by their conservative opponents.

For the past 30 years the dominant macroeconomic models that have been in use by central banks and leading macroeconomists have … ranged from what have been termed real business cycle theory approaches on the one end to New Keynesian approaches on the other: perspectives that are considerably closer in flavour and methodological commitments to each other than to the “old Keynesian” approaches embodied in such models as the IS-LM framework of undergraduate economics. In particular, while demand matters in the short run in New Keynesian models, it can have no effect in the long run; no matter what, the economy always eventually returns to its full-employment growth path.

And while conventional economic theory saw the economy as self-equilibrating, economic policy discussion was dominated by faith in the stabilising powers of central banks and in the wisdom of “sound finance”. … Some of the same economists, who today are leading the charge against austerity, were arguing just as forcefully a few years ago that the most important macroeconomic challenge was reducing the size of public debt…. New Keynesians follow Keynes in name only; they have certainly given better policy advice than the austerians in recent years, but such advice does not always flow naturally from their models.

The industrialised world has gone through a prolonged period of stagnation and misery and may have worse ahead of it. Probably no policy can completely tame the booms and busts that capitalist economies are subject to. And even those steps that can be taken will not be taken without the pressure of strong popular movements challenging governments from the outside. The ability of economists to shape the world, for good or for ill is strictly circumscribed. Still, it is undeniable that the case for austerity – so weak on purely intellectual grounds – would never have conquered the commanding heights of policy so easily if the way had not been prepared for it by the past 30 years of consensus macroeconomics. Where the possibility and political will for stimulus did exist, modern economics – the stuff of current scholarship and graduate education – tended to hinder rather than help. While when the turn to austerity came, even shoddy work could have an outsize impact, because it had the whole weight of conventional opinion behind it. For this the mainstream of the economics profession – the liberals as much as the conservatives – must take some share of the blame.

In Jacobin: A Demystifying Decade for Economics

(The new issue of Jacobin has a piece by me on the state of economics ten years after the crisis. The published version is here. I’ve posted a slightly expanded version below. Even though Jacobin was generous with the word count and Seth Ackerman’s edits were as always superb, they still cut some material that, as king of the infinite space of this blog, I would rather include.)

 

For Economics, a Demystifying Decade

Has economics changed since the crisis? As usual, the answer is: It depends. If we look at the macroeconomic theory of PhD programs and top journals, the answer is clearly, no. Macroeconomic theory remains the same self-contained, abstract art form that it has been for the past twenty-five years. But despite its hegemony over the peak institutions of academic economics, this mainstream is not the only mainstream. The economics of the mainstream policy world (central bankers, Treasury staffers, Financial Times editorialists), only intermittently attentive to the journals in the best times, has gone its own way; the pieties of a decade ago have much less of a hold today. And within the elite academic world, there’s plenty of empirical work that responds to the developments of the past ten years, even if it doesn’t — yet — add up to any alternative vision.

For a socialist, it’s probably a mistake to see economists primarily as either carriers of valuable technical expertise or systematic expositors of capitalist ideology. They are participants in public debates just like anyone else. The profession as a whole is more often found trailing after political developments than advancing them.

***

The first thing to understand about macroeconomic theory is that it is weirder than you think. The heart of it is the idea that the economy can be thought of as a single infinite-lived individual trading off leisure and consumption over all future time. For an orthodox macroeconomist – anyone who hoped to be hired at a research university in the past 30 years – this approach isn’t just one tool among others. It is macroeconomics. Every question has to be expressed as finding the utility-maximizing path of consumption and production over all eternity, under a precisely defined set of constraints. Otherwise it doesn’t scan.

This approach is formalized in something called the Euler equation, which is a device for summing up an infinite series of discounted future values. Some version of this equation is the basis of most articles on macroeconomic theory published in a mainstream journal in the past 30 years. It might seem like an odd default, given the obvious fact that real economies contain households, businesses, governments and other distinct entities, none of whom can turn income in the far distant future into spending today. But it has the advantage of fitting macroeconomic problems — which at face value involve uncertainty, conflicting interests, coordination failures and so on — into the scarce-means-and-competing-ends Robinson Crusoe vision that has long been economics’ home ground.
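For concreteness, here is the consumption Euler equation in its standard textbook form (a minimal sketch in generic notation, not any particular paper's version). It says the representative agent equates the marginal utility of consuming a dollar today with the discounted, expected marginal utility of saving it at the going interest rate and consuming it tomorrow:

\[
u'(c_t) \;=\; \beta\,(1 + r_{t+1})\,E_t\!\left[\,u'(c_{t+1})\,\right]
\]

Chain this condition forward over every future period, add a budget constraint, and the entire path of consumption and output is pinned down as the solution to a single intertemporal optimization problem. That is the sense in which the whole economy gets treated as one optimizing individual.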

There’s a funny history to this technique. It was invented by Frank Ramsey, a young philosopher and mathematician in Keynes’ Cambridge circle in the 1920s, to answer the question: If you were organizing an economy from the top down and had to choose between producing for present needs versus investing to allow more production later, how would you decide the ideal mix? The Euler equation offers a convenient tool for expressing the tradeoff between production in the future versus production today.

This makes sense as a way of describing what a planner should do. But through one of those transmogrifications intellectual history is full of, the same formalism was picked up and popularized after World War II by Solow and Samuelson as a description of how growth actually happens in capitalist economies. The problem of macroeconomics has continued to be framed as how an ideal planner should direct consumption and production to produce the best outcomes for everyone, often with the “ideal planner” language intact. Pick up any modern economics textbook and you’ll find that substantive questions can’t be asked except in terms of how a far-sighted agent would choose this path of consumption as the best possible one allowed by the model.

There’s nothing wrong with adopting a simplified formal representation of a fuzzier and more complicated reality. As Marx said, abstraction is the social scientist’s substitute for the microscope or telescope. But these models are not simple by any normal human definition. The models may abstract away from features of the world that non-economists might think are rather fundamental to “the economy” — like the existence of businesses, money, and government — but the part of the world they do represent — the optimal tradeoff between consumption today and consumption tomorrow — is described in the greatest possible detail. This combination of extreme specificity on one dimension and extreme abstraction on the others might seem weird and arbitrary. But in today’s profession, if you don’t at least start from there, you’re not doing economics.

At the same time, many producers of this kind of model do have a quite realistic understanding of the behavior of real economies, often informed by first-hand experience in government. The combination of tight genre constraints and real insight leads to a strange style of theorizing, where the goal is to produce a model that satisfies the conventions of the discipline while arriving at a conclusion that you’ve already reached by other means. Michael Woodford, perhaps the leading theorist of “New Keynesian” macroeconomics, more or less admits that the purpose of his models is to justify the countercyclical interest rate policy already pursued by central banks in a language acceptable to academic economists. Of course the central bankers themselves don’t learn anything from such an exercise — and you will scan the minutes of Fed meetings in vain for discussion of first-order ARIMA technology shocks — but they presumably find it reassuring to hear that what they already thought is consistent with the most modern economic theory. It’s the economic equivalent of the college president in Randall Jarrell’s Pictures from an Institution:

About anything, anything at all, Dwight Robbins believed what Reason and Virtue and Tolerance and a Comprehensive Organic Synthesis of Values would have him believe. And about anything, anything at all, he believed what it was expedient for the president of Benton College to believe. You looked at the two beliefs, and lo! the two were one. Do you remember, as a child without much time, turning to the back of the arithmetic book, getting the answer to a problem, and then writing down the summary hypothetical operations by which the answer had been, so to speak, arrived at? It is the only method of problem-solving that always gives correct answers…

The development of theory since the crisis has followed this mold. One prominent example: After the crash of 2008, Paul Krugman immediately began talking about the liquidity trap and the “perverse” Keynesian claims that become true when interest rates are stuck at zero. Fiscal policy was now effective, there was no danger of inflation from increases in the money supply, a trade deficit could cost jobs, and so on. He explicated these ideas with the help of the “IS-LM” models found in undergraduate textbooks — genuinely simple abstractions that haven’t played a role in academic work in decades.

Some years later, he and Gauti Eggertsson unveiled a model in the approved New Keynesian style, which showed that, indeed, if interest rates were fixed at zero then fiscal policy, normally powerless, now became highly effective. This exercise may have been a display of technical skill (I suppose; I’m not a connoisseur) but what do we learn from it? After all, generating that conclusion was the announced goal from the beginning. The formal model was retrofitted to generate the argument that Krugman and others had been making for years, and lo! the two were one.

It’s a perfect example of Joan Robinson’s line that economic theory is the art of taking a rabbit out of a hat, when you’ve just put it into the hat in full view of the audience. I suppose what someone like Krugman might say in his defense is that he wanted to find out if the rabbit would fit in the hat. But if you do the math right, it always does.

(What’s funnier in this case is that the rabbit actually didn’t fit, but they insisted on pulling it out anyway. As the conservative economist John Cochrane gleefully pointed out, the same model also says that raising taxes on wages should boost employment in a liquidity trap. But no one believed that before writing down the equations, so they didn’t believe it afterward either. As Krugman’s coauthor Eggertsson judiciously put it, “there may be reasons outside the model” to reject the idea that increasing payroll taxes is a good idea in a recession.)

Left critics often imagine economics as an effort to understand reality that’s gotten hopelessly confused, or as a systematic effort to uphold capitalist ideology. But I think both of these claims are, in a way, too kind; they assume that economic theory is “about” the real world in the first place. Better to think of it as a self-constrained art form, whose apparent connections to economic phenomena are results of a confusing overlap in vocabulary. Think about chess and medieval history: The statement that “queens are most effective when supported by strong bishops” might be reasonable in both domains, but its application in the one case will tell you nothing about its application in the other.

Over the past decade, people (such as, famously, Queen Elizabeth) have often asked why economists failed to predict the crisis. As a criticism of economics, this is simultaneously setting the bar too high and too low. Too high, because crises are intrinsically hard to predict. Too low, because modern macroeconomics doesn’t predict anything at all. As Suresh Naidu puts it, the best way to think about what most economic theorists do is as a kind of constrained-maximization poetry. It makes no more sense to ask “is it true?” of such a model than of a haiku.

***

While theory buzzes around in its fly-bottle, empirical macroeconomics, more attuned to concrete developments, has made a number of genuinely interesting departures. Several areas have been particularly fertile: the importance of financial conditions and credit constraints; government budgets as a tool to stabilize demand and employment; the links between macroeconomic outcomes and the distribution of income; and the importance of aggregate demand even in the long run.

Not surprisingly, the financial crisis spawned a new body of work trying to assess the importance of credit, and financial conditions more broadly, for macroeconomic outcomes. (Similar bodies of work were produced in the wake of previous financial disruptions; these however don’t get much cited in the current iteration.) A large number of empirical papers tried to assess how important access to credit was for household spending and business investment, and how much of the swing from boom to bust could be explained by the tighter limits on credit. Perhaps the outstanding figures here are Atif Mian and Amir Sufi, who assembled a large body of evidence that the boom in lending in the 2000s reflected mainly an increased willingness to lend on the part of banks, rather than an increased desire to borrow on the part of families; and that the subsequent debt overhang explained a large part of depressed income and employment in the years after 2008.

While Mian and Sufi occupy solidly mainstream positions (at Princeton and Chicago, respectively), their work has been embraced by a number of radical economists who see vindication for long-standing left-Keynesian ideas about the financial roots of economic instability. Markus Brunnermeier (also at Princeton) and his coauthors have also done interesting work trying to untangle the mechanisms of the 2008 financial crisis and to generalize them, with particular attention to the old Keynesian concept of liquidity. That finance is important to the economy is not, in itself, news to anyone other than economists; but this new empirical work is valuable in translating this general awareness into concrete usable form.

A second area of renewed empirical interest is fiscal policy — the use of the government budget to manage aggregate demand. Even more than with finance, economics here has followed rather than led the policy debate. Policymakers were turning to large-scale fiscal stimulus well before academics began producing studies of its effectiveness. Still, it’s striking how many new and sophisticated efforts there have been to estimate the fiscal multiplier — the increase in GDP generated by an additional dollar of government spending.

In the US, there’s been particular interest in using variation in government spending and unemployment across states to estimate the effect of the former on the latter. The outstanding work here is probably that of Gabriel Chodorow-Reich. Like most entries in this literature, Chodorow-Reich’s suggests fiscal multipliers that are higher than almost any mainstream economist would have accepted a decade ago, with each dollar of government spending adding perhaps two dollars to GDP. Similar work has been published by the IMF, which acknowledged that past studies had “significantly underestimated” the positive effects of fiscal policy. This mea culpa was particularly striking coming from the global enforcer of economic orthodoxy.
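To see the logic of these cross-state studies, here is a deliberately stylized sketch of the kind of regression involved. It is not Chodorow-Reich's actual specification; the data file and variable names are hypothetical, and real studies instrument the spending variable rather than running plain OLS.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-level data: change in the employment rate over the
# stimulus window and federal stimulus spending per capita in each state.
df = pd.read_csv("state_stimulus.csv")  # columns: state, d_emp_rate, spend_pc

# Simplest version of the cross-state estimate: how much more did employment
# rise in states that received more spending per person?
model = smf.ols("d_emp_rate ~ spend_pc", data=df).fit(cov_type="HC1")
print(model.summary())

# The coefficient on spend_pc is a local employment multiplier. Translating it
# into dollars of GDP per dollar of spending takes additional steps; the
# published estimates come out around two dollars of output per dollar spent.
```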

The IMF has also revisited its previously ironclad opposition to capital controls — restrictions on financial flows across national borders. More broadly, it has begun to offer, at least intermittently, a platform for work challenging the “Washington Consensus” it helped establish in the 1980s, though this shift predates the crisis of 2008. The changed tone coming out of the IMF’s research department has so far been only occasionally matched by a change in its lending policies.

Income distribution is another area where there has been a flowering of more diverse empirical work in the past decade. Here of course the outstanding figure is Thomas Piketty. With his collaborators (Gabriel Zucman, Emmanuel Saez and others) he has practically defined a new field. Income distribution has always been a concern of economists, of course, but it has typically been assumed to reflect differences in “skill.” The large differences in pay that appeared to be unexplained by education, experience, and so on, were often attributed to “unmeasured skill.” (As John Eatwell used to joke: Hegemony means you get to name the residual.)

Piketty made distribution — between labor and capital, not just across individuals — into something that evolves independently, and that belongs to the macro level of the economy as a whole rather than the micro level of individuals. When his book Capital in the 21st Century was published, a great deal of attention was focused on the formula “r > g,” supposedly reflecting a deep-seated tendency for capital accumulation to outpace economic growth. But in recent years there’s been an interesting evolution in the empirical work Piketty and his coauthors have published, focusing on countries like Russia/USSR and China, which didn’t feature in the original survey. Political and institutional factors like labor rights and the legal forms taken by businesses have moved to center stage, while the formal reasoning of “r > g” has receded — sometimes literally to a footnote. While no longer embedded in the grand narrative of Capital in the 21st Century, this body of empirical work is extremely valuable, especially since Piketty and company are so generous in making their data publicly available. It has created space for younger scholars to make similar long-run studies of the distribution of income and wealth in countries that the Piketty team hasn’t yet reached, like Rishabh Kumar’s superb work on India. And it has been extended by other empirical economists, like Lukas Karabarbounis and coauthors, who have looked at changes in income distribution through the lens of market power and the distribution of surplus within the corporation — not something a University of Chicago economist would have been likely to study a decade ago.

A final area where mainstream empirical work has wandered well beyond its pre-2008 limits is the question of whether aggregate demand — and money and finance more broadly — can affect long-run economic outcomes. The conventional view, still dominant in textbooks, draws a hard line between the short run and the long run, more or less meaning a period longer than one business cycle. In the short run, demand and money matter. But in the long run, the path of the economy depends strictly on “real” factors — population growth, technology, and so on.

Here again, the challenge to conventional wisdom has been prompted by real-world developments. On the one hand, weak demand — reflected in historically low interest rates — has seemed to be an ongoing rather than a cyclical problem. Lawrence Summers dubbed this phenomenon “secular stagnation,” reviving a phrase used in the 1940s by the early American Keynesian Alvin Hansen.

On the other hand, it has become increasingly clear that the productive capacity of the economy is not something separate from current demand and production levels, but dependent on them in various ways. Unemployed workers stop looking for work; businesses operating below capacity don’t invest in new plant and equipment or develop new technology. This has manifested itself most clearly in the fall in labor force participation over the past decade, which has been considerably greater than can be explained on the basis of the aging population or other demographic factors. The bottom line is that an economy that spends several years producing less than it is capable of will be capable of producing less in the future. This phenomenon, usually called “hysteresis,” has been explored by economists like Laurence Ball, Summers (again) and Brad DeLong, among others. The existence of hysteresis, among other implications, suggests that the costs of high unemployment may be greater than previously believed, and conversely that public spending in a recession can pay for itself by boosting incomes and taxes in future years.
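A minimal way to formalize the hysteresis idea (the notation is mine, not taken from any of these authors) is to let potential output respond to the gap between actual and potential output:

\[
Y^*_{t+1} \;=\; (1+g)\,Y^*_t \;+\; \eta\,(Y_t - Y^*_t), \qquad \eta > 0
\]

With \(\eta > 0\), every year spent below potential (\(Y_t < Y^*_t\)) drags down the whole future path of potential output, while a period of running the economy hot raises it. That is the mechanism behind the claim that stimulus in a deep recession can partly or wholly pay for itself.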

These empirical lines are hard to fit into the box of orthodox theory — not that people don’t try. But so far they don’t add up to more than an eclectic set of provocative results. The creativity in mainstream empirical work has not yet been matched by any effort to find an alternative framework for thinking of the economy as a whole. For people coming from non-mainstream paradigms — Marxist or Keynesian — there is now plenty of useful material in mainstream empirical macroeconomics to draw on – much more than in the previous decade. But these new lines of empirical work have been forced on the mainstream by developments in the outside world that were too pressing to ignore. For the moment, at least, they don’t imply any systematic rethinking of economic theory.

***

Perhaps the central feature of the policy mainstream a decade ago was a smug and, in retrospect, remarkable complacency that the macroeconomic problem had been solved by independent central banks like the Federal Reserve.  For a sense of the pre-crisis consensus, consider this speech by a prominent economist in September 2007, just as the US was heading into its worst recession since the 1930s:

One of the most striking facts about macropolicy is that we have progressed amazingly. … In my opinion, better policy, particularly on the part of the Federal Reserve, is directly responsible for the low inflation and the virtual disappearance of the business cycle in the last 25 years. … The story of stabilization policy of the last quarter century is one of amazing success.

You might expect the speaker to be a right-wing Chicago type like Robert Lucas, whose claim that “the problem of depression prevention has been solved” was widely mocked after the crisis broke out. But in fact it was Christina Romer, soon headed to Washington as the Obama administration’s top economist. In accounts of the internal debates over fiscal policy that dominated the early days of the administration, Romer often comes across as one of the heroes, arguing for a big program of public spending against more conservative figures like Summers. So it’s especially striking that in the 2007 speech she spoke of a “glorious counterrevolution” against Keynesian ideas. Indeed, she saw the persistence of the idea of using deficit spending to fight unemployment as the one dark spot in an otherwise cloudless sky. There’s more than a little irony in the fact that opponents of the massive stimulus Romer ended up favoring drew their intellectual support from exactly the arguments she had been making just a year earlier. But it’s also a vivid illustration of a consistent pattern: ideas have evolved more rapidly in the world of practical policy than among academic economists.

For further evidence, consider a 2016 paper by Jason Furman, Obama’s final chief economist, on “The New View of Fiscal Policy.” As chair of the White House Council of Economic Advisers, Furman embodied the policy-economics consensus ex officio. Though he didn’t mention his predecessor by name, his paper was almost a point-by-point rebuttal of Romer’s “glorious counterrevolution” speech of a decade earlier. It starts with four propositions shared until recently by almost all respectable economists: that central banks can and should stabilize demand all by themselves, with no role for fiscal policy; that public deficits raise interest rates and crowd out private investment; that budget deficits, even if occasionally called for, need to be strictly controlled with an eye on the public debt; and that any use of fiscal policy must be strictly short-term.

None of this is true, suggests Furman. Central banks cannot reliably stabilize modern economies on their own, increased public spending should be a standard response to a downturn, worries about public debt are overblown, and stimulus may have to be maintained indefinitely. While these arguments obviously remain within a conventional framework in which the role of the public sector is simply to maintain the flow of private spending at a level consistent with full employment, they nonetheless envision much more active management of the economy by the state. It’s a remarkable departure from textbook orthodoxy for someone occupying such a central place in the policy world.

Another example of orthodoxy giving ground under the pressure of practical policymaking is Narayana Kocherlakota. When he was appointed as President of the Federal Reserve Bank of Minneapolis, he was on the right of debates within the Fed, confident that if the central bank simply followed its existing rules the economy would quickly return to full employment, and rejecting the idea of active fiscal policy. But after a few years on the Fed’s governing Federal Open Market Committee (FOMC), he had moved to the far left, “dovish” end of opinion, arguing strongly for a more aggressive approach to bringing unemployment down by any means available, including deficit spending and more aggressive unconventional tools at the Fed. This meant rejecting much of his own earlier work, perhaps the clearest example of a high-profile economist repudiating his views after the crisis; in the process, he got rid of many of the conservative “freshwater” economists in the Minneapolis Fed’s research department.

The reassessment of central banks themselves has run on parallel lines but gone even farther.

For twenty or thirty years before 2008, the orthodox view of central banks offered a two-fold defense against the dangerous idea — inherited from the 1930s — that managing the instability of capitalist economies was a political problem. First, any mismatch between the economy’s productive capabilities (aggregate supply) and the desired purchases of households and businesses (aggregate demand) could be fully resolved by the central bank; the technicians at the Fed and its peers around the world could prevent any recurrence of mass unemployment or runaway inflation. Second, they could do this by following a simple, objective rule, without any need to balance competing goals.

During those decades, Alan Greenspan personified the figure of the omniscient central banker. Venerated by presidents of both parties, Greenspan was literally sanctified in the press — a 1990 cover of The International Economy had him in papal regalia, under the headline, “Alan Greenspan and His College of Cardinals.” A decade later, he would appear on the cover of Time as the central figure in “The Committee to Save the World,” flanked by Robert Rubin and the ubiquitous Summers. And the following year he showed up as Bob Woodward’s eponymous Maestro.

In the past decade, this vision of central banks and central bankers has eroded from several sides. The manifest failure to prevent huge falls in output and employment after 2008 is the most obvious problem. The deep recessions in the US, Europe and elsewhere make a mockery of the “virtual disappearance of the business cycle” that people like Romer had held out as the strongest argument for leaving macropolicy to central banks. And while Janet Yellen or Mario Draghi may be widely admired, they command nothing like the authority of a Greenspan.

The pre-2008 consensus is even more profoundly undermined by what central banks did do than by what they failed to do. During the crisis itself, the Fed and other central banks decided which financial institutions to rescue and which to allow to fail, which creditors would get paid in full and which would face losses. Both during the crisis and in the period of stagnation that followed, central banks also intervened in a much wider range of markets, on a much larger scale. In the US, perhaps the most dramatic moment came in late summer 2008, when the commercial paper market — the market for short-term loans used by the largest corporations — froze up, and the Fed stepped in with a promise to lend on its own account to anyone who had previously borrowed there. This watershed moment took the Fed from its usual role of regulating and supporting the private financial system, to simply replacing it.

That intervention lasted only a few months, but in other markets the Fed has largely replaced private creditors for a number of years now. Even today, it is the ultimate lender for about 20 percent of new mortgages in the United States. Policies of quantitative easing, in the US and elsewhere, greatly enlarged central banks’ weight in the economy — in the US, the Fed’s assets jumped from 6 percent of GDP to 25 percent, an expansion that is only now beginning to be unwound.  These policies also committed central banks to targeting longer-term interest rates, and in some cases other asset prices as well, rather than merely the overnight interest rate that had been the sole official tool of policy in the decades before 2008.

While critics (mostly on the Right) have objected that these interventions “distort” financial markets, this makes no sense from the perspective of a practical central banker. As central bankers like the Fed’s Ben Bernanke or the Bank of England’s Adam Posen have often said in response to such criticism, there is no such thing as an “undistorted” financial market. Central banks are always trying to change financial conditions to whatever they think favors full employment and stable prices. But as long as the interventions were limited to a single overnight interest rate, it was possible to paper over the contradiction between active monetary policy and the idea of a self-regulating economy, and pretend that policymakers were just trying to follow the “natural” interest rate, whatever that is. The much broader interventions of the past decade have brought the contradiction out into the open.

The broad array of interventions central banks have had to carry out over the past decade has also provoked some second thoughts about the functioning of financial markets even in normal times. If financial markets can get things wrong so catastrophically during crises, shouldn’t that affect our confidence in their ability to allocate credit the rest of the time? And if we are not confident, that opens the door for a much broader range of interventions — not only to stabilize markets and maintain demand, but to affirmatively direct society’s resources in better ways than private finance would do on its own.

In the past decade, this subversive thought has shown up in some surprisingly prominent places. Wearing his policy rather than his theory hat, Paul Krugman sees

… a broader rationale for policy activism than most macroeconomists—even self-proclaimed Keynesians—have generally offered in recent decades. Most of them… have seen the role for policy as pretty much limited to stabilizing aggregate demand. … Once we admit that there can be big asset mispricing, however, the case for intervention becomes much stronger… There is more potential for and power in [government] intervention than was dreamed of in efficient-market models.

From another direction, the notion that macroeconomic policy does not involve conflicting interests has become harder to sustain as inflation, employment, output and asset prices have followed diverging paths. A central plank of the pre-2008 consensus was the aptly named “divine coincidence,” in which the same level of demand would fortuitously and simultaneously lead to full employment, low and stable inflation, and production at the economy’s potential. Operationally, this was embodied in the “NAIRU” — the level of unemployment below which, supposedly, inflation would begin to rise without limit.

Over the past decade, as estimates of the NAIRU have fluctuated almost as much as the unemployment rate itself, it’s become clear that the NAIRU is too unstable and hard to measure to serve as a guide for policy, if it exists at all. It is striking to see someone as prominent as IMF chief economist Olivier Blanchard write (in 2016) that “the US economy is far from satisfying the ‘divine coincidence’,” meaning that stabilizing inflation and minimizing unemployment are two distinct goals. But if there’s no clear link between unemployment and inflation, it’s not clear why central banks should worry about low unemployment at all, or how they should trade off the risks of prices rising undesirably fast against the risk of too-high unemployment. With surprising frankness, high officials at the Fed and other central banks have acknowledged that they simply don’t know what the link between unemployment and inflation looks like today.
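The relationship being questioned here is usually written as an accelerationist Phillips curve, something like

\[
\pi_t \;=\; \pi_{t-1} \;+\; \alpha\,(u^{*} - u_t) \;+\; \epsilon_t, \qquad \alpha > 0
\]

where \(\pi\) is inflation, \(u\) the unemployment rate and \(u^{*}\) the NAIRU: hold unemployment below \(u^{*}\) and inflation ratchets up year after year. (This is the generic textbook form, not any particular central bank's model.) The practical problem is that estimates of \(u^{*}\) have moved around so much, and the estimated slope \(\alpha\) looks so flat in recent data, that the equation gives policymakers very little to steer by.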

To make matters worse, a number of prominent figures — most vocally at the Bank for International Settlements — have argued that we should not be concerned only with conventional price inflation, but also with the behavior of asset prices, such as stocks or real estate. This “financial stability” mandate, if it is accepted, gives central banks yet another mission. The more outcomes central banks are responsible for, and the less confident we are that they all go together, the harder it is to treat central banks as somehow apolitical, as not subject to the same interplay of interests as the rest of the state.

Given the strategic role occupied by central banks in both modern capitalist economies and economic theory, this rethinking has the potential to lead in some radical directions. How far it will actually do so, of course, remains to be seen. Accounts of the Fed’s most recent conclave in Jackson Hole, Wyoming suggest a sense of “mission accomplished” and a desire to get back to the comfortable pieties of the past. Meanwhile, in Europe, the collapse of the intellectual rationale for central banks has been accompanied by the development of the most powerful central bank-ocracy the world has yet seen. So far the European Central Bank has not let its lack of democratic mandate stop it from making coercive intrusions into the domestic policies of its member states, or from serving as the enforcement arm of Europe’s creditors against recalcitrant debtors like Greece.

One thing we can say for sure: Any future crisis will bring the contradictions of central banks’ role as capitalism’s central planners into even sharper relief.

***

Many critics were disappointed that the crisis of 2008 did not lead to an intellectual revolution on the scale of the 1930s. It’s true that it didn’t. But the image of stasis you’d get from looking at the top journals and textbooks isn’t the whole picture — the most interesting conversations are happening somewhere else. For a generation, leftists in economics have struggled to change the profession, some by launching attacks (often well aimed, but ignored) from the outside, others by trying to make radical ideas parsable in the orthodox language. One lesson of the past decade is that both groups got it backward.

Keynes famously wrote that “Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” It’s a good line. But in recent years the relationship seems to have been more the other way round. If we want to change the economics profession, we need to start changing the world. Economics will follow.

Thanks to Arjun Jayadev, Ethan Kaplan, Mike Konczal and Suresh Naidu for helpful suggestions and comments.

How to Read a Regression

As I mentioned in an earlier post, I am for the first time teaching a class in quantitative methods in John Jay’s new economics MA program. (If you’re curious about this program, please email me at profjwmason@gmail.com.) One thing I’ve found is that the students, even those who have taken econometrics or statistics classes before, really benefit from an explanation of how to read regression results — what exactly all the numbers you find in a regression table actually mean. I’m sure there is a textbook out there that gives a good, clear, comprehensive, accessible explanation of how to read regression results, but I haven’t found it. Besides, I like making my own materials. Among other things, it’s a good way to be sure you understand things yourself, and to clarify how you think people should think about them. So I’ve been writing my own notes on how to read a regression. They are on my teaching materials page, along with lots of macroeconomics notes and a few other things.
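To give a flavor of what the notes are about, here is a minimal example of the kind of output students need to learn to read. It is just an illustrative simulation in Python, not anything taken from the notes themselves:

```python
import numpy as np
import statsmodels.api as sm

# Simulate data where y depends on x with a true slope of 2, plus noise.
rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(scale=3.0, size=200)

# Fit y = b0 + b1*x by ordinary least squares and print the standard table.
X = sm.add_constant(x)
results = sm.OLS(y, X).fit()
print(results.summary())

# The numbers to read off the table: the coefficient on x (the estimated
# slope), its standard error (how precisely it is estimated), the t-statistic
# and p-value (the coefficient relative to its standard error, and how likely
# a value that large would be if the true slope were zero), the confidence
# interval, and R-squared (the share of the variation in y accounted for).
```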

If you teach an introductory econometrics or statistics class, take a look, and feel free to use them if they seem helpful. Or if you are taking one, or just curious. And if anything seems wrong or confusing to you, or if you know of something similar but more polished and complete — or just better — please let me know.

By the way, these notes, like my macroeconomics notes, are written in LaTeX using the tufte-handout package.

Link: A guide to regressions

Now Playing Everywhere

This Businessweek story on the Sears bankruptcy is like the perfect business action-adventure story for our times.

First act: Brash young(ish) hedge fund guy takes over iconic American business, forces through closures and layoffs, makes lots of money for his friends.

From the moment he bought into what was then called Sears, Roebuck & Co., he also maneuvered to protect his financial interests. At times, he even made money. He closed stores, fired employees and … carved out some choice assets for himself.

All seems to be going well. But now the second act: He gets too attached, and instead of passing the drained but still functioning business on to some other sucker, imagines he can run it himself. But managing a giant retailer is harder than it looks. Getting on a videoconference a couple times a week and telling the executives that they’re idiots isn’t enough to turn things around.

But the big mistake was even trying to. Poor Eddie Lampert has forgotten “the investors’ commandment: Get out in time.” That’s always the danger for money and its human embodiments — to get drawn into some business, some concrete human activity, instead of returning to its native immaterial form. Once the wasp larva has sucked the caterpillar dry, it needs to get out and turn back into a wasp, not go shambling around in the husk. This one waited too long.

Not even Lampert’s friends could understand why the hedge-fund manager, once hailed as a young Warren Buffett, clung to his spectacularly bad investment in Sears, a dying department store chain. … After 13 years under Lampert’s stewardship, Sears finally seems to be hurtling toward bankruptcy, if not outright liquidation. And, once again, Wall Street is wondering what Eddie Lampert will salvage for himself and his $1.3 billion fund, ESL Investments Inc., whose future may now be in doubt.

Oh no! Will the fund survive? Don’t worry, there’s a third act. Sears may have crashed and burned,  but it turns out Lampert had a parachute – he set himself up as the senior creditor in the bankruptcy, and presciently spun off the best assets for himself.

Under the filing the company is said to be preparing for as soon as this weekend, he and ESL — together they hold almost 50 percent of the shares — would be at the head of the line when the remnants are dispersed. As secured creditors, Lampert and the fund could get 100 cents on the dollar… And Lampert carved out what looked like — and in some cases might yet be — saves for himself, with spinoffs that gave him chunks of equity in new companies. One was Seritage Growth Properties, the real estate investment trust that counts Sears as its biggest tenant and of which Lampert is the largest shareholder; he created it in 2015 to hold stores that were leased back to Sears — cordoning those off from any bankruptcy proceeding. He and ESL got a majority stake in Land’s End Inc., the apparel and accessories maker he split from Sears in 2014.

The fund is saved. The business crashes but the money escapes. The billionaire is still a billionaire, battered but upright, dramatically backlit by the flames from the wreckage behind him. Credits roll.

 

Acquisitions as Corporate Money Hose

Among the small group of heterodox economics people interested in corporate finance, it is common knowledge that the stock market is a tool for moving money out of the corporate sector, not into it.  Textbooks may talk about stock markets as a tool for raising funds for investment, but this kind of financing is dwarfed by the payments each year from the corporations to shareholders.

The classic statement, as is often the case, is in Doug Henwood’s Wall Street:

Instead of promoting investment, the U.S. financial system seems to do quite the opposite… Take, for example, the stock market, which is probably the centerpiece of the whole enterprise. What does it do? Both civilians and professional apologists would probably answer by saying that it raises capital for investment. In fact it doesn’t. Between 1981 and 1997, U.S. nonfinancial corporations retired $813 billion more in stock than they issued, thanks to takeovers and buybacks. Of course, some individual firms did issue stock to raise money, but surprisingly little of that went to investment either. A Wall Street Journal article on 1996’s dizzying pace of stock issuance (McGeehan 1996) named overseas privatizations (some of which, like Deutsche Telekom, spilled into U.S. markets) “and the continuing restructuring of U.S. corporations” as the driving forces behind the torrent of new paper. In other words, even the new-issues market has more to do with the arrangement and rearrangement of ownership patterns than it does with raising fresh capital.

The pattern of negative net share issues has if anything only gotten stronger in the 20 years since then, with net equity issued by US corporations averaging around negative 2 percent of GDP. That’s the lower line in the figure below:

[Figure: gross and net equity issues by US corporations, as a percent of GDP; the lower line is net issues. Source link in the original post.]

 

Note that in the passage I quote, Doug correctly writes “takeovers and buybacks.” But a lot of other people writing in this area — definitely including me — have focused on just the buyback part. We’ve focused on a story in which corporate managers choose — are compelled or pressured or incentivized — to deliver more of the firm’s surplus funds to shareholders, rather than retaining them for real investment. And these payouts have increasingly taken the form of share repurchases rather than dividends.

In telling this story, we’ve often used the negative net issue of equity as a measure of buybacks. At the level of the individual corporation, this is perfectly reasonable: A firm’s net issue of stocks is simply its new issues less repurchases. So the net issue is a measure of the total funds raised from shareholders — or if it is negative, as it generally is, of the payments made to them.

It’s natural to extend this to the aggregate level, and assume that the net change in equities outstanding similarly reflects the balance between new issues and repurchases. William Lazonick, for instance, states as a simple matter of fact that “buybacks are largely responsible for negative net equity issues.” But are they really?

If we are looking at a given corporation over time, the only way the shares outstanding can decline is via repurchases. But at the aggregate level, lots of other things can be responsible — bankruptcies, other changes in legal organization, acquisitions. Quantitatively the last of these is especially important. Of course when acquisitions are paid in stock, the total volume of shares doesn’t change. But when they are paid in cash, it does. In the aggregate, when publicly traded company A pays $1 billion to acquire publicly traded company B, that is just a payment of $1 billion from the corporate sector to the household sector, just as if the corporation were buying back its own stock. But if we want to situate the payment in any kind of behavioral or institutional or historical story, the two cases may be quite different.

Until recently, there was no way to tell how much of the aggregate share retirements were due to repurchases and how much were due to acquisitions or other causes. The financial accounts reported only a single number, net equity issues. (So even the figure above couldn’t be produced with aggregate data, only the lower line in it.) Under these circumstances the assumption that buybacks were the main factor was reasonable, or at least as reasonable as any other.

Recently, though, the Fed has begun reporting more detailed equity-finance flows, which break out the net issue figure into gross issues, repurchases, and retirements by acquisition. And it turns out that while buybacks are substantial, acquisitions are actually a bigger factor in negative net stock issues. Over the past 20 years, gross equity issues have averaged 1.9 percent of GDP, repurchases have averaged 1.7 percent of GDP, and retirements via acquisitions just over 2 percent of GDP. So if we look only at corporations’ transactions in their own stock, it seems that the stock market still is — barely — a net source of funds. For the corporate sector as a whole, of course, it is still the case that the stock market is, in Jeff Spross’ memorable phrase, a giant money hose to nowhere.
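To make the arithmetic explicit, here is the decomposition using the rough twenty-year averages just quoted (percent of GDP, rounded; illustrative only):

```python
# Rough 20-year averages from the Fed's equity-finance flows, as cited above,
# in percent of GDP.
gross_issues = 1.9
repurchases = 1.7
ma_retirements = 2.0  # retirements via cash acquisitions

# Corporations' transactions in their own stock: a small positive net issue.
own_stock_net = gross_issues - repurchases
print(f"Issues less buybacks: {own_stock_net:+.1f}% of GDP")

# Adding acquisition retirements gives the familiar negative aggregate net
# equity issue, close to the negative 2 percent of GDP mentioned earlier.
aggregate_net = gross_issues - repurchases - ma_retirements
print(f"Aggregate net equity issue: {aggregate_net:+.1f}% of GDP")
```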

The figure below shows dividends, gross equity issues, repurchases and M&A retirements, all as a percent of GDP.

[Figure: dividends, gross equity issues, repurchases, and M&A retirements, as a percent of GDP. Source link in the original post.]

What do we see here? First, the volume of shares retired through acquisitions is consistently, and often substantially, greater than the volume retired through repurchases. If you look just at the aggregate net equity issue you would think that share repurchases were now comparable to dividends as a means of distributing profits to shareholders; but it’s clear here that that’s not the case. Share repurchases plus acquisitions are about equal to dividends, but repurchases by themselves are half the size of dividends — that is, they account for only around a third of shareholder payouts.

One period where the new data changes the picture is the tech boom around 2000. Net equity issues were significantly negative in that period, on the order of 1 percent of GDP. But as we can now see, that was entirely due to an increased volume of acquisitions. Repurchases were flat and, by the standard of more recent periods, relatively low. So the apparent paradox that even during an investment boom businesses were paying out far more to shareholders than they were taking in is not quite such a puzzle. If you were writing a macroeconomic history of the 1990s-2000s, this would be something to know.

It’s important data. I think it clarifies a lot and I hope people will make more use of it in the future.

We do have to be careful here. Some fraction of the M&A retirements are stock transactions, where the acquiring company issues new stock as a kind of currency to pay for the stock of the company it is acquiring. In these cases, it’s misleading to treat the stock issuance and the stock retirement as two separate transactions — as independent sources and uses of funds. It would be better to net those transactions out before reporting the gross figures here. Unfortunately, the Fed doesn’t give a historical series of cash vs. stock acquisition spending. But in recent years, at least, it seems that no more than a quarter or so of acquisitions are paid in stock, so the figure above is at least qualitatively correct. Removing the stock acquisitions — where there is arguably no meaningful issue or retirement of stock, just a swap of one company’s stock for another’s — would move the M&A Retirements and Gross Equity Issues lines down somewhat. But the basic picture would remain the same.

It’s also the case that a large fraction of equity issues are the result of the exercise of employee stock options. I suspect — though again I haven’t seen definite data — that stock options account for a large fraction, maybe a majority, of stock issues in recent decades. But this doesn’t change the picture as far as sectoral flows go — it just means that what is being financed is labor costs rather than investment.

The bottom line here is, I don’t think we heterodox corporate finance people have thought enough about acquisitions. A major part of the payments from corporations to shareholders is not a distribution of profits in the usual sense, but payments by managers for control rights over a production process that some other shareholders have claims on. I don’t think our current models handle this well — we either think implicitly of a single unitary corporate sector, or we follow the mainstream in imagining production as a bouillabaisse where you just throw in a certain amount of labor and a certain amount of capital, so it doesn’t matter who is in charge.

Of course we know that the exit, the liquidity moment, for many tech startups today is not an IPO — let alone reaching profitability under the management of early investors — but acquisition by an established company. But this familiar fact hasn’t really made it into macro analysis.

I think we need to take more seriously the role of Wall Street in rearranging ownership claims. Both because who is in charge of particular production processes is important. And because we can’t understand the money flows between corporations and households without it.

 

New Piece on MMT

Arjun Jayadev and I have a new piece up at the Institute for New Economic Thinking, trying to clarify the relationship between Modern Monetary Theory (MMT) and textbook macroeconomics. (There is also a pdf version here, which I think is a bit more readable.) I will have a blogpost summarizing the argument later today or tomorrow, but in the meantime here is the abstract:

An increasingly visible school of heterodox macroeconomics, Modern Monetary Theory (MMT), makes the case for functional finance—the view that governments should set their fiscal position at whatever level is consistent with price stability and full employment, regardless of current debt or deficits. Functional finance is widely understood, by both supporters and opponents, as a departure from orthodox macroeconomics. We argue that this perception is mistaken: While MMT’s policy proposals are unorthodox, the analysis underlying them is largely orthodox. A central bank able to control domestic interest rates is a sufficient condition to allow a government to freely pursue countercyclical fiscal policy with no danger of a runaway increase in the debt ratio. The difference between MMT and orthodox policy can be thought of as a different assignment of the two instruments of fiscal position and interest rate to the two targets of price stability and debt stability. As such, the debate between them hinges not on any fundamental difference of analysis, but rather on different practical judgements—in particular what kinds of errors are most likely from policymakers.
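The claim about interest rates and debt stability rests on the standard debt-dynamics relation, which is worth writing out. With b the debt-to-GDP ratio, d the primary deficit as a share of GDP, r the interest rate on the debt and g the growth rate, the ratio evolves (to a close approximation) as

\[
\Delta b \;=\; d \;+\; (r - g)\,b
\]

If the central bank can hold r below g, then for any constant primary deficit the debt ratio converges to a finite level instead of exploding. That is the sense in which control over the interest rate frees the fiscal position to be assigned to full employment and price stability, which is the instrument-target swap the abstract describes.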

Read the rest here or here.

Lecture Notes for Research Methods

I’m teaching a new class this semester, a masters-level class on research methods. It could be taught as simply the second semester of an econometrics sequence, but I’m taking a different approach, trying to think about what will help students do effective empirical work in policy/political settings. We’ll see how it works.

For anyone interested, here are the slides I will use on the first day. I’m not sure it’s all right; in fact, I’m sure some of it is wrong. But that is how you figure out what you really think and know and don’t know about something, by teaching it.

After we’ve talked through this, we will discuss this old VoxEU piece as an example of effective use of simple scatterplots to make an economic argument.

I gave a somewhat complementary talk on methodology and heterodox macroeconomics at the Eastern Economics Association meetings last year. I’ve been meaning to transcribe it into a blogpost, but in the meantime you can listen to a recording, if you’re interested.

 

Macroeconomic Lessons from the Past Decade

Below the fold is a draft of a chapter I’m contributing to an edited volume on aggregate demand and employment. My chapter is supposed to cover macroeconomic policy and employment in the US, with other chapters covering other countries and regions. 

The chapter is mostly based on material I’ve published elsewhere, mainly my Roosevelt papers “What Recovery?” and “A New Direction for the Federal Reserve.” My goal was something that summarized the arguments there for an audience of (presumably) heterodox macroeconomists, and that could also be used in the classroom.

There is still time to revise this, so comments/criticisms are very welcome.

*

Continue reading Macroeconomic Lessons from the Past Decade

“Economic Growth, Income Distribution, and Climate Change”

In response to my earlier post on climate change and aggregate demand, Lance Taylor sends along his recent article “Economic Growth, Income Distribution, and Climate Change,” coauthored with Duncan Foley and Armon Rezai.

The article, which was published in Ecological Economics, lays out a structuralist growth model with various additions to represent the effects of climate change and possible responses to it. The bulk of the article works through the formal properties of the model; the last section shows the results of some simulations based on plausible values of the various parameters. I hadn’t seen the article before, but its conclusions are broadly parallel to my arguments in the previous two posts. It tells a story in which public spending on decarbonization not only avoids the costs and dangers of climate change itself, but leads to higher private output, income and employment – crowding in rather than crowding out.

Before you click through, a warning: There’s a lot of math there. We’ve got a short run where output and investment are determined via demand and distribution, a long run where the investment rate from the short-run dynamics is combined with exogenous population growth and endogenous productivity growth to yield a growth path, and an additional climate sector that interacts with the economic variables in various ways. How much the properties of a model like this change your views about the substantive question of climate change and economic growth will depend on how you feel about exercises like this in general. How much should the fact that one can write down a model where climate change mitigation more than pays for itself through higher output, change our beliefs about whether this is really the case?

For some people (like me) the specifics of the model may be less important than the fact that one of the world’s most important heterodox macroeconomists thinks the conclusion is plausible. At the least, we can say that there is a logically coherent story in which climate change mitigation does not crowd out other spending, and that this view represents an important segment of heterodox economics, not just an idiosyncratic personal one.

If you’re interested, the central conclusions of the calibrated model are shown below. The dotted red line shows the business-as-usual scenario with no public spending on climate change, while the other two lines show scenarios with more or less aggressive public programs to reduce and/or offset carbon emissions.

Here’s the paper’s summary of the outcomes along the business-as-usual trajectory:

Rapid growth generates high net emissions which translate into rising global mean temperature… As climate damages increase, the profit rate falls. Investment levels are insufficient to maintain aggregate demand and unemployment results. After this boom-bust cycle, output is back to its current level after 200 years but … employment relative to population falls from 40% to 15%. … Those lucky enough to find employment are paid almost three times the current wage rate, but the others have to rely on subsistence income or public transfers. Only in the very long run, as labor productivity falls in response to rampant unemployment, can employment levels recover. 

In the other scenarios, with a peak of 3-6% of world GDP spent on mitigation, we see continued exponential output growth in line with historical trends. The paper doesn’t make a direct comparison between the mitigation cases and a world where there was no climate change problem to begin with. But the structure of the model at least allows for the possibility that output ends up higher in the former case.

The assumptions behind these results are: that the economy is demand-constrained, so that public spending on climate mitigation boosts output and employment in the short run; that investment depends on demand conditions as well as distributional conflict, allowing the short-run dynamics to influence the long-run growth path; that productivity growth is endogenous, rising with output and with employment; and that climate change affects the growth rate, and not just the level, of output, via lower profits and faster depreciation of existing capital.
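To make that list concrete, here is a toy simulation in Python: my own stripped-down sketch of a demand-led growth model with climate damages, not the Taylor-Rezai-Foley model itself. The functional forms and every parameter value are invented for illustration; only the qualitative logic matters.

```python
# A toy demand-led growth model with climate damages. This is my own
# stripped-down illustration of the assumptions listed above, NOT the
# Taylor/Rezai/Foley model; every functional form and parameter is invented.
T = 100          # years to simulate
g_pop = 0.01     # exogenous labor-force growth rate

def simulate(mitigation):
    """mitigation = public mitigation spending as a share of output."""
    Y, A, N, temp = 100.0, 1.0, 100.0, 1.0   # output, productivity, labor force, warming
    for t in range(T):
        damage = 0.005 * temp ** 2           # climate damage to profits and demand
        # Demand-constrained short run: mitigation spending adds to demand and
        # growth, climate damages subtract from it. (Investment and distribution
        # are collapsed into this single growth equation.)
        g = 0.02 + 0.5 * mitigation - damage
        Y *= 1 + g
        emp_index = (Y / A) / N              # employment relative to labor force, start = 1
        # Endogenous productivity growth: a Kaldor-Verdoorn response to output
        # growth, plus a response to labor-market tightness.
        A *= 1 + 0.005 + 0.4 * g + 0.05 * max(0.0, emp_index - 1.0)
        N *= 1 + g_pop
        # Warming rises with emissions; mitigation spending scales emissions down.
        temp += 0.002 * (Y / 100.0) * max(0.0, 1.0 - 30.0 * mitigation)
    return Y, (Y / A) / N, temp

for m in (0.0, 0.03):
    Y, emp, temp = simulate(m)
    print(f"mitigation {m:.0%}: output {Y:.0f}, employment index {emp:.2f}, warming {temp:.1f}")
```

With these made-up numbers the no-mitigation run stagnates and sheds employment as damages mount, while the mitigation run keeps growing with a tight labor market, which is the qualitative pattern described above. Nothing about the specific values printed should be taken seriously.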

This is all very interesting. But again, we might ask how much we learn from this sort of simulation. Certainly it shouldn’t be taken as a prediction! To me there is at least one clear lesson: a simple cost-benefit framework is inadequate for thinking about the economic problem of climate change. Spending on decarbonization is not simply a cost. If we want to think seriously about its economic effects, we have to think about demand, investment, distribution and induced technological change. Whether or not you find this particular formalization convincing, these are the questions to ask.

Guns and Ice Cream

I’ve gotten some pushback on the line from my decarbonization piece that “wartime mobilization did not crowd out civilian production.” More than one person has told me they agree with the broader argument but don’t find that claim believable. Will Boisvert writes in comments:

Huh? The American war economy was an *austerity* economy. There was no civilian auto production or housing construction for the duration. There were severe housing shortages, and riots over housing shortages. Strikes were virtually banned. Millions of soldiers lived in barracks, tents or foxholes, on rations. So yeah, there were drastic trade-offs between guns and butter (which was rationed for civilians).

It’s true that there were no new cars produced during the war, and very little new housing. But this doesn’t tell us what happened to civilian output in general. For most of the war, wartime planning involved centralized allocation of a handful of key resources — steel, aluminum, rubber — that were the most important constraints on military production. This obviously ruled out making cars, but most civilian production wasn’t directly affected by wartime controls. If we want to know what happened to civilian production overall, we have to look at aggregate measures.

The most comprehensive discussions of this that I’ve seen are in various pieces by Hugh Rockoff. Here’s the BEA data on real (inflation-adjusted) civilian and military production, as he presents it:

Civilian and military production in constant dollars. Source: H. Rockoff, ‘The United States: from ploughshares into swords’ in M. Harrison, ed, The Economics of World War II

As you can see, civilian and military production rose together in 1941, but civilian production fell in 1942, once the US was officially at war. So there does seem to be some crowding out. But looking at the big picture, I think my claim is defensible. From 1939 to its peak in 1944, annual military production increased by an amount equal to 80 percent of prewar GDP. The fall in real civilian production over the same period was less than 4 percent of prewar GDP. So essentially none of the increase in military output came at the expense of civilian output; nearly all of it was additional. And civilian production began rising again before the end of the war; by 1945 it was well above its 1939 level.
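To spell out the arithmetic behind that claim, here is the back-of-envelope version, using only the rounded figures just quoted (both expressed as shares of 1939 GDP):

```python
# Back-of-envelope restatement of the figures in the text, both expressed as
# shares of prewar (1939) GDP. Rounded, illustrative arithmetic only.
military_increase = 0.80   # rise in annual military production, 1939 to the 1944 peak
civilian_decline  = 0.04   # fall in real civilian production over the same period

offset_share = civilian_decline / military_increase
print(f"Share of the military buildup offset by lower civilian output: {offset_share:.0%}")
print(f"Share that was simply additional output: {1 - offset_share:.0%}")
# Prints roughly 5% and 95%: almost the entire buildup was net new production.
```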

Production is not the same as living standards. As it happens, civilian investment fell steeply during the war — in 1943-44 it was only about one third of its prewar level. But if we look at civilian consumption rather than output, we see a steady rise through the war. By the official numbers, real per-capita civilian consumption was 5 percent higher in 1944 — the peak of war production — than it had been in 1940. Rockoff believes that, although the BLS did try to correct for the distortions created by rationing and price controls, the official numbers still understate the inflation facing civilians. But even his preferred estimate shows a modest increase in per-capita civilian consumption over this period.

We can avoid the problems of aggregation if we look at physical quantities of particular goods. For example, shoes were rationed, but civilians nonetheless bought about 5 percent more shoes annually in 1942-1944 than they had in 1941. Civilian meat consumption increased by about 10 percent, from 142 pounds of meat per person in 1940 to 154 pounds per person in 1944. As it happens, butter seems to be one of the few categories of food where consumption declined during the war. Here’s Rockoff’s discussion:

Consumption of edible fats, particularly butter, was down somewhat during the war. Thus in a strict sense the United States did not have guns and butter. The reasons are not clear, but the long-term decline in butter consumption probably played a role. Ice cream consumption, which had been rising for a long time, continued to rise. Thus, the United States did have guns and ice cream. The decline in edible fat consumption was a major concern, and the meat rationing system was designed to provide each family with an adequate fat ration. The concern about fats aside, [civilian] food production held up well.

As this passage suggests, rationing in itself should not be seen as a sign of increased scarcity. It is, rather, an alternative to the price mechanism for the allocation of scarce goods. In the wartime setting, it was introduced where demand would exceed supply at current prices, and where higher prices were considered undesirable. In this sense, rationing is the flipside of price controls. Rationing can also be used to deliver a more equitable distribution than prices would — especially important where we are talking about a necessity like food or shoes.

The fundamental reason rationing was necessary in the wartime US was not that civilian production had fallen, but that civilian incomes were rising so rapidly. Civilian consumption might have been only 5 percent higher in 1944 than in 1940, but aggregate civilian wages and salaries were 170 percent higher. Prices rose somewhat during the war years; without price controls and rationing, inflation would undoubtedly have been much higher. Rockoff’s comment on meat probably applies to a wide range of civilian goods: “Wartime shortages … were the result of large increases in demand combined with price controls, rather than decreases in supply.”
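A crude back-of-envelope, again using only the figures just quoted, shows why something had to give. If all of that additional income had been spent on the only modestly larger supply of civilian goods, prices would have had to rise enormously; this ignores the large wartime rise in household saving (much of it in war bonds), which in reality absorbed a good part of the gap, so treat it as an upper bound.

```python
# Back-of-envelope: nominal civilian incomes rose far faster than the real
# supply of civilian goods, so without controls prices would have had to do
# the adjusting. Uses the rough figures in the text and ignores the wartime
# rise in household saving, so this is an upper bound, not an estimate.
income_growth      = 1.70   # growth of aggregate civilian wages and salaries, 1940-1944
real_supply_growth = 0.05   # growth of real civilian consumption over the same period

implied_price_rise = (1 + income_growth) / (1 + real_supply_growth) - 1
print(f"Implied price increase if all the extra income were spent: {implied_price_rise:.0%}")
# Prints roughly 157%, versus the comparatively modest actual wartime inflation.
```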

Another issue, which Rockoff touches on only in passing, is the great compression of incomes during the war. Per Piketty and co., the income share of the top 10 percent dropped from 45 percent in 1940 to 33 percent in 1945. If civilian consumption rose modestly in the aggregate, it must have risen by more for the non-wealthy majority. So I think it’s pretty clear that in the US, civilian living standards generally rose during the war, despite the vast expansion of military production.

You might argue that even if civilian consumption rose, it’s still wrong to say there was no crowding out, since consumption could have risen even more without the war. Of course one can’t know what would have happened; any answer depends on the counterfactual you pick. But it certainly didn’t look that way at the time. Real per capita income in the US increased by less than 2 percent in total over the decade 1929-1939, so civilian consumption actually grew faster during the war than over the previous decade. There was a reason for the popular perception that “we’ve never had it so good.”

It is true that growth was already picking up in 1940, before the US formally entered the war (though rearmament was already under way by then). But there was no reason to think that faster growth was fated to happen regardless of military production. If you read what was written at the time, it’s clear that most people believed the 1930s represented, at least to some degree, a new normal; and no one believed that the huge increase in production during the war years would have happened on its own.

Will also writes:

War production itself was profoundly irrational. Expensive capital goods were produced, thousands of tanks and warplanes and warships, whose service lives spanned just a few hours. Factories and production lines were built knowing that in a year or two there would be no market at all for their products.

I agree that military production itself is profoundly irrational. Abolishing the military is a program I fully support. But I don’t think the last sentence follows. Much wartime capital investment could be, and was, rapidly turned to civilian purposes afterward. One obvious piece of evidence for this is the huge increase in civilian output in 1946; there is no way production could have increased by a third in a single year except by redirecting plant and equipment built for the military.

And of course much wartime investment was in basic industries for which reconversion wasn’t even necessary. The last chapter of Mark Wilson’s Destructive Creation makes a strong case that the postwar privatization of factories built during the war was very valuable for postwar businesses, and that acquiring them was a top priority for business leaders in the reconversion period. By one estimate, in the late 1940s around a quarter of private manufacturing capital consisted of plant and equipment built by the government during the war and subsequently transferred to private business. In 1947, for example, about half the nation’s aluminum came from plants built by the government during the war for aircraft production. All synthetic rubber — about half of total rubber production — came from plants built for the military. And so on. While not all wartime investment was useful after the war, it’s clear that a great deal of it was.

I think people are attracted to the idea of wartime austerity because we’ve all been steeped in the idea of scarcity — that economic problems consist of the allocation of scarce means among alternative ends, in Lionel Robbins’ famous phrase. Aggregate demand is, in that sense, a profoundly subversive idea: it suggests that what’s really scarce isn’t our means but our wants. Most people are doing far less than they could be, given the basic constraints of the material world, to meet real human needs. And markets are a weak and unreliable tool for redirecting our energies to something better. World War II is the biggest experiment to date on the limits of boosting output through a combination of increased market demand and central planning. And it suggests that, although supply constraints are real — wartime controls on rubber and steel were there for a reason — in general we are much, much farther from those constraints than we normally think.