Strange Defeat

Anyone who found something useful or provoking in my Jacobin piece on the state of economics might also be interested in this 2013 article by me and Arjun Jayadev, “Strange Defeat: How Austerity Economics Lost All the Intellectual Battles and Still Won the War.” It covers a good deal of the same ground, a bit more systematically but without the effort to find the usable stuff in mainstream macro that I made in the more recent piece. Perhaps there wasn’t so much of it five years ago!

Here are some excerpts; you can read the full piece here.

* * * 

The extent of the consensus in mainstream macroeconomic theory is often obscured by the intensity of the disagreements over policy…  In fact, however, the contending schools and their often heated debates obscure the more fundamental consensus among mainstream macroeconomists. Despite the label, “New Keynesians” share the core commitment of their New Classical opponents to analyse the economy only in terms of the choices of a representative agent optimising over time. For New Keynesians as much as New Classicals, the only legitimate way to answer the question of why the economy is in the state it is in, is to ask under what circumstances a rational planner, knowing the true probabilities of all possible future events, would have chosen exactly this outcome as the optimal one. Methodologically, Keynes’ vision of psychologically complex agents making irreversible decisions under conditions of fundamental uncertainty has been as completely repudiated by the “New Keynesians” as by their conservative opponents.

For the past 30 years the dominant macroeconomic models that have been in use by central banks and leading macroeconomists have … ranged from what have been termed real business cycle theory approaches on the one end to New Keynesian approaches on the other: perspectives that are considerably closer in flavour and methodological commitments to each other than to the “old Keynesian” approaches embodied in such models as the IS-LM framework of undergraduate economics. In particular, while demand matters in the short run in New Keynesian models, it can have no effect in the long run; no matter what, the economy always eventually returns to its full-employment growth path.

And while conventional economic theory saw the economy as self-equilibrating, economic policy discussion was dominated by faith in the stabilising powers of central banks and in the wisdom of “sound finance”. … Some of the same economists, who today are leading the charge against austerity, were arguing just as forcefully a few years ago that the most important macroeconomic challenge was reducing the size of public debt…. New Keynesians follow Keynes in name only; they have certainly given better policy advice than the austerians in recent years, but such advice does not always flow naturally from their models.

The industrialised world has gone through a prolonged period of stagnation and misery and may have worse ahead of it. Probably no policy can completely tame the booms and busts that capitalist economies are subject to. And even those steps that can be taken will not be taken without the pressure of strong popular movements challenging governments from the outside. The ability of economists to shape the world, for good or for ill is strictly circumscribed. Still, it is undeniable that the case for austerity – so weak on purely intellectual grounds – would never have conquered the commanding heights of policy so easily if the way had not been prepared for it by the past 30 years of consensus macroeconomics. Where the possibility and political will for stimulus did exist, modern economics – the stuff of current scholarship and graduate education – tended to hinder rather than help. While when the turn to austerity came, even shoddy work could have an outsize impact, because it had the whole weight of conventional opinion behind it. For this the mainstream of the economics profession – the liberals as much as the conservatives – must take some share of the blame.

In Jacobin: A Demystifying Decade for Economics

(The new issue of Jacobin has a piece by me on the state of economics ten years after the crisis. The published version is here. I’ve posted a slightly expanded version below. Even though Jacobin was generous with the word count and Seth Ackerman’s edits were as always superb, they still cut some material that, as king of the infinite space of this blog, I would rather include.)

 

For Economics, a Demystifying Decade

Has economics changed since the crisis? As usual, the answer is: It depends. If we look at the macroeconomic theory of PhD programs and top journals, the answer is clearly no. Macroeconomic theory remains the same self-contained, abstract art form that it has been for the past twenty-five years. But despite its hegemony over the peak institutions of academic economics, this mainstream is not the only mainstream. The economics of the mainstream policy world (central bankers, Treasury staffers, Financial Times editorialists), only intermittently attentive to the journals in the best of times, has gone its own way; the pieties of a decade ago have much less of a hold today. And within the elite academic world, there’s plenty of empirical work that responds to the developments of the past ten years, even if it doesn’t — yet — add up to any alternative vision.

For a socialist, it’s probably a mistake to see economists primarily as either carriers of valuable technical expertise or systematic expositors of capitalist ideology. They are participants in public debates just like anyone else. The profession as a whole is more often found trailing after political developments than advancing them.

***

The first thing to understand about macroeconomic theory is that it is weirder than you think. The heart of it is the idea that the economy can be thought of as a single infinite-lived individual trading off leisure and consumption over all future time. For an orthodox macroeconomist – anyone who hoped to be hired at a research university in the past 30 years – this approach isn’t just one tool among others. It is macroeconomics. Every question has to be expressed as finding the utility-maximizing path of consumption and production over all eternity, under a precisely defined set of constraints. Otherwise it doesn’t scan.

This approach is formalized in something called the Euler equation, which is a device for summing up an infinite series of discounted future values. Some version of this equation is the basis of most articles on macroeconomic theory published in a mainstream journal in the past 30 years. It might seem like an odd default, given the obvious fact that real economies contain households, businesses, governments and other distinct entities, none of whom can turn income in the far distant future into spending today. But it has the advantage of fitting macroeconomic problems — which at face value involve uncertainty, conflicting interests, coordination failures and so on — into the scarce-means-and-competing-ends Robinson Crusoe vision that has long been economics’ home ground.
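For the curious, here is the standard textbook form of the equation (the generic version, not the specification of any particular paper): the marginal utility of consuming a dollar today must equal the discounted, expected marginal utility of saving it at the gross return and consuming it tomorrow,

```latex
u'(c_t) \;=\; \beta\,(1+r_{t+1})\;\mathbb{E}_t\!\left[\,u'(c_{t+1})\,\right]
```

where $u'$ is marginal utility, $\beta$ the discount factor, and $r_{t+1}$ the interest rate. Chaining this condition forward over an infinite horizon is what pins down the entire optimal path of consumption, hence the over-all-eternity character of the models.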

There’s a funny history to this technique. It was invented by Frank Ramsey, a young philosopher and mathematician in Keynes’ Cambridge circle in the 1920s, to answer the question: If you were organizing an economy from the top down and had to choose between producing for present needs versus investing to allow more production later, how would you decide the ideal mix? The Euler equation offers a convenient tool for expressing the tradeoff between production in the future versus production today.

This makes sense as a way of describing what a planner should do. But through one of those transmogrifications intellectual history is full of, the same formalism was picked up and popularized after World War II by Solow and Samuelson as a description of how growth actually happens in capitalist economies. The problem of macroeconomics has continued to be framed as how an ideal planner should direct consumption and production to produce the best outcomes for everyone, often with the “ideal planner” language intact. Pick up any modern economics textbook and you’ll find that substantive questions can’t be asked except in terms of how a far-sighted agent would choose this path of consumption as the best possible one allowed by the model.

There’s nothing wrong with adopting a simplified formal representation of a fuzzier and more complicated reality. As Marx said, abstraction is the social scientist’s substitute for the microscope or telescope. But these models are not simple by any normal human definition. The models may abstract away from features of the world that non-economists might think are rather fundamental to “the economy” — like the existence of businesses, money, and government — but the part of the world they do represent — the optimal tradeoff between consumption today and consumption tomorrow — is described in the greatest possible detail. This combination of extreme specificity on one dimension and extreme abstraction on the others might seem weird and arbitrary. But in today’s profession, if you don’t at least start from there, you’re not doing economics.

At the same time, many producers of this kind of model do have a quite realistic understanding of the behavior of real economies, often informed by first-hand experience in government. The combination of tight genre constraints and real insight leads to a strange style of theorizing, where the goal is to produce a model that satisfies the conventions of the discipline while arriving at a conclusion that you’ve already reached by other means. Michael Woodford, perhaps the leading theorist of “New Keynesian” macroeconomics, more or less admits that the purpose of his models is to justify the countercyclical interest rate policy already pursued by central banks in a language acceptable to academic economists. Of course the central bankers themselves don’t learn anything from such an exercise — and you will scan the minutes of Fed meetings in vain for discussion of first-order ARIMA technology shocks — but they presumably find it reassuring to hear that what they already thought is consistent with the most modern economic theory. It’s the economic equivalent of the college president in Randall Jarrell’s Pictures from an Institution:

About anything, anything at all, Dwight Robbins believed what Reason and Virtue and Tolerance and a Comprehensive Organic Synthesis of Values would have him believe. And about anything, anything at all, he believed what it was expedient for the president of Benton College to believe. You looked at the two beliefs, and lo! the two were one. Do you remember, as a child without much time, turning to the back of the arithmetic book, getting the answer to a problem, and then writing down the summary hypothetical operations by which the answer had been, so to speak, arrived at? It is the only method of problem-solving that always gives correct answers…

The development of theory since the crisis has followed this mold. One prominent example: After the crash of 2008, Paul Krugman immediately began talking about the liquidity trap and the “perverse” Keynesian claims that become true when interest rates are stuck at zero. Fiscal policy was now effective, there was no danger of inflation from increases in the money supply, a trade deficit could cost jobs, and so on. He explicated these ideas with the help of the “IS-LM” models found in undergraduate textbooks — genuinely simple abstractions that haven’t played a role in academic work in decades.

Some years later, he and Gauti Eggertsson unveiled a model in the approved New Keynesian style, which showed that, indeed, if interest rates were fixed at zero then fiscal policy, normally powerless, now became highly effective. This exercise may have been a display of technical skill (I suppose; I’m not a connoisseur), but what do we learn from it? After all, generating that conclusion was the announced goal from the beginning. The formal model was retrofitted to generate the argument that Krugman and others had been making for years, and lo! the two were one.

It’s a perfect example of Joan Robinson’s line that economic theory is the art of taking a rabbit out of a hat, when you’ve just put it into the hat in full view of the audience. I suppose what someone like Krugman might say in his defense is that he wanted to find out if the rabbit would fit in the hat. But if you do the math right, it always does.

(What’s funnier in this case is that the rabbit actually didn’t fit, but they insisted on pulling it out anyway. As the conservative economist John Cochrane gleefully pointed out, the same model also says that raising taxes on wages should boost employment in a liquidity trap. But no one believed that before writing down the equations, so they didn’t believe it afterward either. As Krugman’s coauthor Eggertsson judiciously put it, “there may be reasons outside the model” to reject the idea that increasing payroll taxes is a good idea in a recession.)

Left critics often imagine economics as an effort to understand reality that’s gotten hopelessly confused, or as a systematic effort to uphold capitalist ideology. But I think both of these claims are, in a way, too kind; they assume that economic theory is “about” the real world in the first place. Better to think of it as a self-constrained art form, whose apparent connections to economic phenomena are results of a confusing overlap in vocabulary. Think about chess and medieval history: The statement that “queens are most effective when supported by strong bishops” might be reasonable in both domains, but its application in the one case will tell you nothing about its application in the other.

Over the past decade, people (such as, famously, Queen Elizabeth) have often asked why economists failed to predict the crisis. As a criticism of economics, this is simultaneously setting the bar too high and too low. Too high, because crises are intrinsically hard to predict. Too low, because modern macroeconomics doesn’t predict anything at all. As Suresh Naidu puts it, the best way to think about what most economic theorists do is as a kind of constrained-maximization poetry. It makes no more sense to ask “is it true?” of a model than of a haiku.

***

While theory buzzes around in its fly-bottle, empirical macroeconomics, more attuned to concrete developments, has made a number of genuinely interesting departures. Several areas have been particularly fertile: the importance of financial conditions and credit constraints; government budgets as a tool to stabilize demand and employment; the links between macroeconomic outcomes and the distribution of income; and the importance of aggregate demand even in the long run.

Not surprisingly, the financial crisis spawned a new body of work trying to assess the importance of credit, and financial conditions more broadly, for macroeconomic outcomes. (Similar bodies of work were produced in the wake of previous financial disruptions; these however don’t get much cited in the current iteration.) A large number of empirical papers tried to assess how important access to credit was for household spending and business investment, and how much of the swing from boom to bust could be explained by the tighter limits on credit. Perhaps the outstanding figures here are Atif Mian and Amir Sufi, who assembled a large body of evidence that the boom in lending in the 2000s reflected mainly an increased willingness to lend on the part of banks, rather than an increased desire to borrow on the part of families; and that the subsequent debt overhang explained a large part of depressed income and employment in the years after 2008.

While Mian and Sufi occupy solidly mainstream positions (at Princeton and Chicago, respectively), their work has been embraced by a number of radical economists who see vindication for long-standing left-Keynesian ideas about the financial roots of economic instability. Markus Brunnermeier (also at Princeton) and his coauthors have also done interesting work trying to untangle the mechanisms of the 2008 financial crisis and to generalize them, with particular attention to the old Keynesian concept of liquidity. That finance is important to the economy is not, in itself, news to anyone other than economists; but this new empirical work is valuable in translating this general awareness into concrete usable form.

A second area of renewed empirical interest is fiscal policy — the use of the government budget to manage aggregate demand. Even more than with finance, economics here has followed rather than led the policy debate. Policymakers were turning to large-scale fiscal stimulus well before academics began producing studies of its effectiveness. Still, it’s striking how many new and sophisticated efforts there have been to estimate the fiscal multiplier — the increase in GDP generated by an additional dollar of government spending.

In the US, there’s been particular interest in using variation in government spending and unemployment across states to estimate the effect of the former on the latter. The outstanding work here is probably that of Gabriel Chodorow-Reich. Like most entries in this literature, Chodorow-Reich’s work suggests fiscal multipliers that are higher than almost any mainstream economist would have accepted a decade ago, with each dollar of government spending adding perhaps two dollars to GDP. Similar work has been published by the IMF, which acknowledged that past studies had “significantly underestimated” the positive effects of fiscal policy. This mea culpa was particularly striking coming from the global enforcer of economic orthodoxy.
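For readers who want the textbook mechanics behind numbers like these, here is a minimal sketch of the simple Keynesian spending multiplier, k = 1/(1 - MPC). To be clear, this is the undergraduate formula, not the cross-state estimation strategy the empirical papers actually use:

```python
def spending_multiplier(mpc: float) -> float:
    """Textbook Keynesian multiplier: each dollar of government spending
    raises total income by 1 + mpc + mpc**2 + ... = 1 / (1 - mpc),
    where mpc is the marginal propensity to consume out of extra income."""
    if not 0.0 <= mpc < 1.0:
        raise ValueError("mpc must lie in [0, 1)")
    return 1.0 / (1.0 - mpc)

# A multiplier of 2, in line with the estimates cited above, corresponds
# to households spending half of each additional dollar of income.
print(spending_multiplier(0.5))  # 2.0
```

In this simple accounting, the higher multipliers found after 2008 amount to saying that the respending chain is much longer than the pre-crisis consensus assumed.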

The IMF has also revisited its previously ironclad opposition to capital controls — restrictions on financial flows across national borders. More broadly, it has begun to offer, at least intermittently, a platform for work challenging the “Washington Consensus” it helped establish in the 1980s, though this shift predates the crisis of 2008. The changed tone coming out of the IMF’s research department has so far been only occasionally matched by a change in its lending policies.

Income distribution is another area where there has been a flowering of more diverse empirical work in the past decade. Here of course the outstanding figure is Thomas Piketty. With his collaborators (Gabriel Zucman, Emmanuel Saez and others) he has practically defined a new field. Income distribution has always been a concern of economists, of course, but it has typically been assumed to reflect differences in “skill.” The large differences in pay that appeared to be unexplained by education, experience, and so on, were often attributed to “unmeasured skill.” (As John Eatwell used to joke: Hegemony means you get to name the residual.)

Piketty made distribution — between labor and capital, not just across individuals — into something that evolves independently, and that belongs to the macro level of the economy as a whole rather than the micro level of individuals. When his book Capital in the 21st Century was published, a great deal of attention was focused on the formula “r > g,” supposedly reflecting a deep-seated tendency for capital accumulation to outpace economic growth. But in recent years there’s been an interesting evolution in the empirical work Piketty and his coauthors have published, focusing on countries like Russia/USSR and China, which didn’t feature in the original survey. Political and institutional factors like labor rights and the legal forms taken by businesses have moved to center stage, while the formal reasoning of “r > g” has receded — sometimes literally to a footnote. While no longer embedded in the grand narrative of Capital in the 21st Century, this body of empirical work is extremely valuable, especially since Piketty and company are so generous in making their data publicly available. It has also created space for younger scholars to make similar long-run studies of the distribution of income and wealth in countries that the Piketty team hasn’t yet reached, like Rishabh Kumar’s superb work on India. The approach has been extended as well by other empirical economists, like Loukas Karabarbounis and coauthors, who have looked at changes in income distribution through the lens of market power and the distribution of surplus within the corporation — not something a University of Chicago economist would have been likely to study a decade ago.

A final area where mainstream empirical work has wandered well beyond its pre-2008 limits is the question of whether aggregate demand — and money and finance more broadly — can affect long-run economic outcomes. The conventional view, still dominant in textbooks, draws a hard line between the short run and the long run, more or less meaning a period longer than one business cycle. In the short run, demand and money matter. But in the long run, the path of the economy depends strictly on “real” factors — population growth, technology, and so on.

Here again, the challenge to conventional wisdom has been prompted by real-world developments. On the one hand, weak demand — reflected in historically low interest rates — has seemed to be an ongoing rather than a cyclical problem. Lawrence Summers dubbed this phenomenon “secular stagnation,” reviving a phrase used in the 1940s by the early American Keynesian Alvin Hansen.

On the other hand, it has become increasingly clear that the productive capacity of the economy is not something separate from current demand and production levels, but dependent on them in various ways. Unemployed workers stop looking for work; businesses operating below capacity don’t invest in new plant and equipment or develop new technology. This has manifested itself most clearly in the fall in labor force participation over the past decade, which has been considerably greater than can be explained on the basis of the aging population or other demographic factors. The bottom line is that an economy that spends several years producing less than it is capable of will be capable of producing less in the future. This phenomenon, usually called “hysteresis,” has been explored by economists like Laurence Ball, Summers (again) and Brad DeLong, among others. The existence of hysteresis, among other implications, suggests that the costs of high unemployment may be greater than previously believed, and conversely that public spending in a recession can pay for itself by boosting incomes and taxes in future years.
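A minimal way to write this down (an illustrative sketch, not the specification used by Ball or by DeLong and Summers) is to let potential output $\bar{Y}$ be dragged toward actual output $Y$ whenever the economy runs below capacity:

```latex
\bar{Y}_{t+1} \;=\; (1+g)\,\bar{Y}_t \;-\; \eta\,\bigl(\bar{Y}_t - Y_t\bigr),
\qquad \eta > 0
```

With $\eta = 0$ this is the conventional case: potential grows at the trend rate $g$ regardless of demand. With $\eta > 0$, every year spent below potential permanently lowers the future path, which is why hysteresis raises both the estimated cost of recessions and the payoff to stimulus.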

These empirical lines are hard to fit into the box of orthodox theory — not that people don’t try. But so far they don’t add up to more than an eclectic set of provocative results. The creativity in mainstream empirical work has not yet been matched by any effort to find an alternative framework for thinking of the economy as a whole. For people coming from non-mainstream paradigms — Marxist or Keynesian — there is now plenty of useful material in mainstream empirical macroeconomics to draw on – much more than in the previous decade. But these new lines of empirical work have been forced on the mainstream by developments in the outside world that were too pressing to ignore. For the moment, at least, they don’t imply any systematic rethinking of economic theory.

***

Perhaps the central feature of the policy mainstream a decade ago was a smug and, in retrospect, remarkable complacency: the conviction that the macroeconomic problem had been solved by independent central banks like the Federal Reserve. For a sense of the pre-crisis consensus, consider this speech by a prominent economist in September 2007, just as the US was heading into its worst recession since the 1930s:

One of the most striking facts about macropolicy is that we have progressed amazingly. … In my opinion, better policy, particularly on the part of the Federal Reserve, is directly responsible for the low inflation and the virtual disappearance of the business cycle in the last 25 years. … The story of stabilization policy of the last quarter century is one of amazing success.

You might expect the speaker to be a right-wing Chicago type like Robert Lucas, whose claim that “the problem of depression prevention has been solved” was widely mocked after the crisis broke out. But in fact it was Christina Romer, soon headed to Washington as the Obama administration’s top economist. In accounts of the internal debates over fiscal policy that dominated the early days of the administration, Romer often comes across as one of the heroes, arguing for a big program of public spending against more conservative figures like Summers. So it’s especially striking that in the 2007 speech she spoke of a “glorious counterrevolution” against Keynesian ideas. Indeed, she saw the persistence of the idea of using deficit spending to fight unemployment as the one dark spot in an otherwise cloudless sky. There’s more than a little irony in the fact that opponents of the massive stimulus Romer ended up favoring drew their intellectual support from exactly the arguments she had been making just a year earlier. But it’s also a vivid illustration of a consistent pattern: ideas have evolved more rapidly in the world of practical policy than among academic economists.

For further evidence, consider a 2016 paper by Jason Furman, Obama’s final chief economist, on “The New View of Fiscal Policy.” As chair of the White House Council of Economic Advisers, Furman embodied the policy-economics consensus ex officio. Though he didn’t mention his predecessor by name, his paper was almost a point-by-point rebuttal of Romer’s “glorious counterrevolution” speech of a decade earlier. It starts with four propositions shared until recently by almost all respectable economists: that central banks can and should stabilize demand all by themselves, with no role for fiscal policy; that public deficits raise interest rates and crowd out private investment; that budget deficits, even if occasionally called for, need to be strictly controlled with an eye on the public debt; and that any use of fiscal policy must be strictly short-term.

None of this is true, suggests Furman. Central banks cannot reliably stabilize modern economies on their own, increased public spending should be a standard response to a downturn, worries about public debt are overblown, and stimulus may have to be maintained indefinitely. While these arguments obviously remain within a conventional framework in which the role of the public sector is simply to maintain the flow of private spending at a level consistent with full employment, they nonetheless envision much more active management of the economy by the state. It’s a remarkable departure from textbook orthodoxy for someone occupying such a central place in the policy world.

Another example of orthodoxy giving ground under the pressure of practical policymaking is Narayana Kocherlakota. When he was appointed President of the Federal Reserve Bank of Minneapolis, he was on the right of debates within the Fed, confident that if the central bank simply followed its existing rules the economy would quickly return to full employment, and rejecting the idea of active fiscal policy. But after a few years on the Fed’s governing Federal Open Market Committee (FOMC), he had moved to the “dovish” far end of opinion, arguing strongly for a more aggressive approach to bringing unemployment down by any means available, including deficit spending and more aggressive unconventional tools at the Fed. This meant rejecting much of his own earlier work, perhaps the clearest example of a high-profile economist repudiating his views after the crisis; in the process, he got rid of many of the conservative “freshwater” economists in the Minneapolis Fed’s research department.

The reassessment of central banks themselves has run on parallel lines but gone even farther.

For twenty or thirty years before 2008, the orthodox view of central banks offered a two-fold defense against the dangerous idea — inherited from the 1930s — that managing the instability of capitalist economies was a political problem. First, any mismatch between the economy’s productive capabilities (aggregate supply) and the desired purchases of households and businesses (aggregate demand) could be fully resolved by the central bank; the technicians at the Fed and its peers around the world could prevent any recurrence of mass unemployment or runaway inflation. Second, they could do this by following a simple, objective rule, without any need to balance competing goals.

During those decades, Alan Greenspan personified the figure of the omniscient central banker. Venerated by presidents of both parties, Greenspan was literally sanctified in the press — a 1990 cover of The International Economy had him in papal regalia, under the headline, “Alan Greenspan and His College of Cardinals.” A decade later, he would appear on the cover of Time as the central figure in “The Committee to Save the World,” flanked by Robert Rubin and the ubiquitous Summers. And a decade after that he showed up as the subject of Bob Woodward’s Maestro.

In the past decade, this vision of central banks and central bankers has eroded from several sides. The manifest failure to prevent huge falls in output and employment after 2008 is the most obvious problem. The deep recessions in the US, Europe and elsewhere make a mockery of the “virtual disappearance of the business cycle” that people like Romer had held out as the strongest argument for leaving macropolicy to central banks. And while Janet Yellen or Mario Draghi may be widely admired, they command nothing like the authority of a Greenspan.

The pre-2008 consensus is even more profoundly undermined by what central banks did do than by what they failed to do. During the crisis itself, the Fed and other central banks decided which financial institutions to rescue and which to allow to fail, which creditors would get paid in full and which would face losses. Both during the crisis and in the period of stagnation that followed, central banks also intervened in a much wider range of markets, on a much larger scale. In the US, perhaps the most dramatic moment came in late summer 2008, when the commercial paper market — the market for short-term loans used by the largest corporations — froze up, and the Fed stepped in with a promise to lend on its own account to anyone who had previously borrowed there. This watershed moment took the Fed from its usual role of regulating and supporting the private financial system, to simply replacing it.

That intervention lasted only a few months, but in other markets the Fed has largely replaced private creditors for a number of years now. Even today, it is the ultimate lender for about 20 percent of new mortgages in the United States. Policies of quantitative easing, in the US and elsewhere, greatly enlarged central banks’ weight in the economy — in the US, the Fed’s assets jumped from 6 percent of GDP to 25 percent, an expansion that is only now beginning to be unwound.  These policies also committed central banks to targeting longer-term interest rates, and in some cases other asset prices as well, rather than merely the overnight interest rate that had been the sole official tool of policy in the decades before 2008.

While critics (mostly on the Right) have objected that these interventions “distort” financial markets, this makes no sense from the perspective of a practical central banker. As central bankers like the Fed’s Ben Bernanke or the Bank of England’s Adam Posen have often said in response to such criticism, there is no such thing as an “undistorted” financial market. Central banks are always trying to change financial conditions to whatever they think favors full employment and stable prices. But as long as the interventions were limited to a single overnight interest rate, it was possible to paper over the contradiction between active monetary policy and the idea of a self-regulating economy, and pretend that policymakers were just trying to follow the “natural” interest rate, whatever that is. The much broader interventions of the past decade have brought the contradiction out into the open.

The broad array of interventions central banks have had to carry out over the past decade have also provoked some second thoughts about the functioning of financial markets even in normal times. If financial markets can get things wrong so catastrophically during crises, shouldn’t that affect our confidence in their ability to allocate credit the rest of the time? And if we are not confident, that opens the door for a much broader range of interventions — not only to stabilize markets and maintain demand, but to affirmatively direct society’s resources in better ways than private finance would do on its own.

In the past decade, this subversive thought has shown up in some surprisingly prominent places. Wearing his policy rather than his theory hat, Paul Krugman sees

… a broader rationale for policy activism than most macroeconomists—even self-proclaimed Keynesians—have generally offered in recent decades. Most of them… have seen the role for policy as pretty much limited to stabilizing aggregate demand. … Once we admit that there can be big asset mispricing, however, the case for intervention becomes much stronger… There is more potential for and power in [government] intervention than was dreamed of in efficient-market models.

From another direction, the notion that macroeconomic policy does not involve conflicting interests has become harder to sustain as inflation, employment, output and asset prices have followed diverging paths. A central plank of the pre-2008 consensus was the aptly named “divine coincidence,” in which the same level of demand would fortuitously and simultaneously lead to full employment, low and stable inflation, and production at the economy’s potential. Operationally, this was embodied in the “NAIRU” — the level of unemployment below which, supposedly, inflation would begin to rise without limit.

Over the past decade, as estimates of the NAIRU have fluctuated almost as much as the unemployment rate itself, it’s become clear that the NAIRU is too unstable and hard to measure to serve as a guide for policy, if it exists at all. It is striking to see someone as prominent as IMF chief economist Olivier Blanchard write (in 2016) that “the US economy is far from satisfying the ‘divine coincidence’,” meaning that stabilizing inflation and minimizing unemployment are two distinct goals. But if there’s no clear link between unemployment and inflation, it’s not clear why central banks should worry about low unemployment at all, or how they should trade off the risks of prices rising undesirably fast against the risk of too-high unemployment. With surprising frankness, high officials at the Fed and other central banks have acknowledged that they simply don’t know what the link between unemployment and inflation looks like today.

To make matters worse, a number of prominent figures — most vocally at the Bank for International Settlements — have argued that we should not be concerned only with conventional price inflation, but also with the behavior of asset prices, such as stocks or real estate. This “financial stability” mandate, if it is accepted, gives central banks yet another mission. The more outcomes central banks are responsible for, and the less confident we are that they all go together, the harder it is to treat central banks as somehow apolitical, as not subject to the same interplay of interests as the rest of the state.

Given the strategic role occupied by central banks in both modern capitalist economies and economic theory, this rethinking has the potential to lead in some radical directions. How far it will actually do so, of course, remains to be seen. Accounts of the Fed’s most recent conclave in Jackson Hole, Wyoming suggest a sense of “mission accomplished” and a desire to get back to the comfortable pieties of the past. Meanwhile, in Europe, the collapse of the intellectual rationale for central banks has been accompanied by the development of the most powerful central bank-ocracy the world has yet seen. So far the European Central Bank has not let its lack of democratic mandate stop it from making coercive intrusions into the domestic policies of its member states, or from serving as the enforcement arm of Europe’s creditors against recalcitrant debtors like Greece.

One thing we can say for sure: Any future crisis will bring the contradictions of central banks’ role as capitalism’s central planners into even sharper relief.

***

Many critics were disappointed that the crisis of 2008 did not lead to an intellectual revolution on the scale of the 1930s. It’s true that it didn’t. But the image of stasis you’d get from looking at the top journals and textbooks isn’t the whole picture — the most interesting conversations are happening somewhere else. For a generation, leftists in economics have struggled to change the profession, some by launching attacks (often well aimed, but ignored) from the outside, others by trying to make radical ideas parsable in the orthodox language. One lesson of the past decade is that both groups got it backward.

Keynes famously wrote that “Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” It’s a good line. But in recent years the relationship seems to have been more the other way round. If we want to change the economics profession, we need to start changing the world. Economics will follow.

Thanks to Arjun Jayadev, Ethan Kaplan, Mike Konczal and Suresh Naidu for helpful suggestions and comments.

New Piece on MMT

Arjun Jayadev and I have a new piece up at the Institute for New Economic Thinking, trying to clarify the relationship between Modern Monetary Theory (MMT) and textbook macroeconomics. (There is also a pdf version here, which I think is a bit more readable.) I will have a blogpost summarizing the argument later today or tomorrow, but in the meantime here is the abstract:

An increasingly visible school of heterodox macroeconomics, Modern Monetary Theory (MMT), makes the case for functional finance—the view that governments should set their fiscal position at whatever level is consistent with price stability and full employment, regardless of current debt or deficits. Functional finance is widely understood, by both supporters and opponents, as a departure from orthodox macroeconomics. We argue that this perception is mistaken: While MMT’s policy proposals are unorthodox, the analysis underlying them is largely orthodox. A central bank able to control domestic interest rates is a sufficient condition to allow a government to freely pursue countercyclical fiscal policy with no danger of a runaway increase in the debt ratio. The difference between MMT and orthodox policy can be thought of as a different assignment of the two instruments of fiscal position and interest rate to the two targets of price stability and debt stability. As such, the debate between them hinges not on any fundamental difference of analysis, but rather on different practical judgements—in particular what kinds of errors are most likely from policymakers.

Read the rest here or here.

Macroeconomic Lessons from the Past Decade

Below the fold is a draft of a chapter I’m contributing to an edited volume on aggregate demand and employment. My chapter is supposed to cover macroeconomic policy and employment in the US, with other chapters covering other countries and regions. 

The chapter is mostly based on material I’ve published elsewhere, mainly my Roosevelt papers “What Recovery?” and “A New Direction for the Federal Reserve.” My goal was something that summarized the arguments there for an audience of (presumably) heterodox macroeconomists, and that could also be used in the classroom.

There is still time to revise this, so comments/criticisms are very welcome.

*

Continue reading Macroeconomic Lessons from the Past Decade

“Economic Growth, Income Distribution, and Climate Change”

In response to my earlier post on climate change and aggregate demand, Lance Taylor sends along his recent article “Economic Growth, Income Distribution, and Climate Change,” coauthored with Duncan Foley and Armon Rezai.

The article, which was published in Ecological Economics, lays out a structuralist growth model with various additions to represent the effects of climate change and possible responses to it. The bulk of the article works through the formal properties of the model; the last section shows the results of some simulations based on plausible values of the various parameters.[1] I hadn’t seen the article before, but its conclusions are broadly parallel to my arguments in the previous two posts. It tells a story in which public spending on decarbonization not only avoids the costs and dangers of climate change itself, but leads to higher private output, income and employment – crowding in rather than crowding out.

Before you click through, a warning: There’s a lot of math there. We’ve got a short run where output and investment are determined via demand and distribution, a long run where the investment rate from the short run dynamics is combined with exogenous population growth and endogenous productivity growth to yield a growth path, and an additional climate sector that interacts with the economic variables in various ways. How much the properties of a model like this change your views about the substantive question of climate change and economic growth will depend on how you feel about exercises like this in general. How much should the fact that one can write down a model where climate change mitigation more than pays for itself through higher output, change our beliefs about whether this is really the case?

For some people (like me) the specifics of the model may be less important than the fact that one of the world’s most important heterodox macroeconomists thinks the conclusion is plausible. At the least, we can say that there is a logically coherent story where climate change mitigation does not crowd out other spending, and that this represents an important segment of heterodox economics and not just an idiosyncratic personal view.

If you’re interested, the central conclusions of the calibrated model are shown below. The dotted red line shows the business-as-usual scenario with no public spending on climate change, while the other two lines show scenarios with more or less aggressive public programs to reduce and/or offset carbon emissions.

Here’s the paper’s summary of the outcomes along the business-as-usual trajectory:

Rapid growth generates high net emissions which translate into rising global mean temperature… As climate damages increase, the profit rate falls. Investment levels are insufficient to maintain aggregate demand and unemployment results. After this boom-bust cycle, output is back to its current level after 200 years but … employment relative to population falls from 40% to 15%. … Those lucky enough to find employment are paid almost three times the current wage rate, but the others have to rely on subsistence income or public transfers. Only in the very long run, as labor productivity falls in response to rampant unemployment, can employment levels recover. 

In the other scenarios, with a peak of 3-6% of world GDP spent on mitigation, we see continued exponential output growth in line with historical trends. The paper doesn’t make a direct comparison between the mitigation cases and a world where there was no climate change problem to begin with. But the structure of the model at least allows for the possibility that output ends up higher in the former case.

The assumptions behind these results are: that the economy is demand constrained, so that public spending on climate mitigation boosts output and employment in the short run; that investment depends on demand conditions as well as distributional conflict, allowing the short-run dynamics to influence the long-run growth path; that productivity growth is endogenous, rising with output and with employment; and that climate change affects the growth rate and not just the level of output, via lower profits and faster depreciation of existing capital.[2]

This is all very interesting. But again, we might ask how much we learn from this sort of simulation. Certainly it shouldn’t be taken as a prediction! To me there is one clear lesson at least: A simple cost-benefit framework is inadequate for thinking about the economic problem of climate change. Spending on decarbonization is not simply a cost. If we want to think seriously about its economic effects, we have to think about demand, investment, distribution and induced technological change. Whether or not you find this particular formalization convincing, these are the questions to ask.

Readings: A Couple New Papers on Fiscal Policy

From the NBER working paper series — essential reading if you want to follow what the mainstream of the profession is up to — here are a couple interesting recent papers on fiscal policy. They offer some genuinely valuable insights, while also demonstrating the limits of orthodoxy.

Geographic Cross-Sectional Fiscal Spending Multipliers: What Have We Learned?
Gabriel Chodorow-Reich
NBER Working Paper No. 23577

Gabriel Chodorow-Reich has a useful new entry in the burgeoning literature on the empirics of fiscal multipliers — a review of the now-substantial work on state-level multipliers. Most of these papers are based on spending under the 2009 stimulus (the ARRA) — since many components of its spending were set by formulas not responsive to local economic conditions, cross-state variation can reasonably be considered exogenous. (Another reason the ARRA features so heavily in these papers is, of course, that the revival of mainstream interest in fiscal multipliers is mostly a post-crisis phenomenon.) Other studies estimate local multipliers based on  other public spending with plausibly exogenous regional variation, such as that involved in a military buildup or response to a natural disaster.

How do these local multipliers translate into the national multiplier we are usually more interested in? There are two main differences, pointing in opposite directions. On the one hand, states are more open than the US as a whole (or than other large countries, though perhaps not more than small European countries). This means more spillover of demand across borders, meaning a smaller multiplier. On the other hand, since states don’t conduct their own monetary policy (and since the US banking system is no longer partitioned by state) the usual channels of crowding out don’t operate at the state level. This implies a bigger multiplier. It’s hard to say which of these effects is bigger in general, but when interest rates are constrained, by the zero lower bound for example, crowding out doesn’t happen by that channel at the national level either. So at the zero lower bound, Chodorow-Reich argues, the national multiplier should be unambiguously greater than the average state multiplier.

Based on the various studies he discusses (including a couple of his own), he estimates a state-level multiplier of 1.8.  He subtracts an arbitrary tenth of a point to allow for financial crowding out even at the ZLB, giving a value of 1.7 as a lower bound for the national multiplier. This is toward the high end of existing estimates. For whatever reason, Chodorow-Reich makes no effort to even guess at the impact of the greater openness of state-level economies. But if we suppose that the typical import share at the state level is double the national import share, then a back-of-the-envelope calculation suggests that a state-level multiplier of 1.7 implies a national multiplier somewhere above 2.0. [1]

It’s a helpful paper, offering some more empirical support for the new view of fiscal policy that seems to be gradually displacing the balanced-budget orthodoxy of the past generation. But it must be said that it is one of those papers that presents some very interesting empirical results and is evidently attempting to deal with a concrete, policy-relevant question about economic reality — but that seems to devote a disproportionate amount of energy to making its results intelligible within mainstream theory. We’ll really have made progress when this kind of work can be published without a lot of apologies for the use of “non-Ricardian agents.”

The Dire Effects of the Lack of Monetary and Fiscal Coordination
Francesco Bianchi, Leonardo Melosi
NBER Working Paper No. 23605

The subordination of real-world insight to theoretical toy-train sets is much worse in this paper. But there is a genuine insight in it — that when you have a fiscal authority targeting the debt-GDP ratio and a monetary authority targeting inflation (or equivalently, unemployment or the output gap), then when they are independent their actions can create destabilizing feedback loops. In the simple case, suppose the monetary authority responds to higher inflation by raising interest rates. This raises debt service costs, forcing the fiscal authority to reduce spending or raise taxes to meet its debt target. The contractionary effect of this fiscal shift will have to be offset by the central bank lowering rates. This process may converge toward the unique combination of fiscal balance and interest rate at which both inflation and debt ratio are at their desired levels. But as Arjun Jayadev and I have shown, it can also diverge, with the actions of each authority provoking more and more violent responses from the other.
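The converge-or-diverge logic can be seen in a toy linear iteration. To be clear, the rules and coefficients below are illustrative assumptions of mine, not parameters from either paper; the point is only that the same pair of policy rules can damp out or explode depending on how aggressively each authority responds.

```python
# Toy sketch of the two-rule feedback loop; all coefficients are
# illustrative assumptions. Variables are deviations from target.

def simulate(phi, psi, k, steps=20):
    """phi: central bank's rate response to inflation;
    psi: fiscal tightening per point of extra debt service;
    k: demand effect of the fiscal stance on inflation."""
    pi = 1.0  # start from an inflation shock
    path = []
    for _ in range(steps):
        rate = phi * pi              # monetary rule: raise rates on inflation
        fiscal = -psi * rate         # fiscal rule: cut spending as debt service rises
        pi = 0.5 * pi + k * fiscal   # inflation: persistence plus demand effect
        path.append(pi)
    return path

# Mild responses: each round is smaller than the last, and the system settles.
stable = simulate(phi=1.5, psi=0.8, k=0.6)
# Aggressive responses: each round provokes a larger counter-move.
unstable = simulate(phi=3.0, psi=1.5, k=0.9)
print(abs(stable[-1]) < 1e-6, abs(unstable[-1]) > 1e6)
```

In this linear setup everything reduces to whether the combined feedback coefficient (here 0.5 − k·ψ·φ) lies inside the unit interval in absolute value — which is the flavor of condition the Skott-Ryoo paper works out carefully.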

I’m glad to see some mainstream people recognizing this problem. As the authors note, the basic point was made by Michael Woodford. (Unsurprisingly, they don’t cite this recent paper by Peter Skott and Soon Ryoo, which carefully works through the possible dynamics between the two policy rules. [2]) The implications, as the NBER authors correctly state, are, first, that fiscal policy and monetary policy have to be seen as jointly affecting both the output gap and the public debt; and second, that if preventing a rising debt ratio is an important goal of policy, holding down interest rates and/or allowing a higher inflation rate are useful tools for achieving it. Unfortunately, the paper doesn’t really develop these ideas — the meat of it is a mathematical exercise showing how these results can occur in the world of a representative agent maximizing its utility over infinite time, if you set up the frictions just right.

 

[1] For the simplest case, suppose the multiplier is equal to (1-m)[1/(1-mpc)], where m is the marginal propensity to import and mpc is the marginal propensity to consume. Then if the state level import propensity is 0.4 and the state level multiplier is 1.7, that implies an mpc of 0.65. Combine that with a national import propensity of 0.2 and you get a national multiplier of 2.3.
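The footnote's arithmetic, written out (the 0.4 and 0.2 import propensities are the illustrative assumptions stated above):

```python
# Back-of-the-envelope multiplier arithmetic from footnote [1]:
# multiplier = (1 - m) / (1 - mpc), where m is the marginal propensity
# to import and mpc the marginal propensity to consume.

def multiplier(m, mpc):
    return (1 - m) / (1 - mpc)

# Invert the formula: a state-level multiplier of 1.7 together with an
# assumed state import propensity of 0.4 pins down the implied mpc.
mpc = 1 - (1 - 0.4) / 1.7        # ≈ 0.65

# Apply the same mpc with the national import propensity of 0.2.
national = multiplier(0.2, mpc)  # ≈ 2.3

print(round(mpc, 2), round(national, 1))
```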

[2] The paper was published in Metroeconomica in 2016, but I’m linking to the unpaywalled 2015 working paper version.

 

Reading Notes: Demand and Productivity

Here are two interesting articles on demand and productivity that people have recently brought to my attention.

The economic historian Gavin Wright — author of the classic account of the economic logic of the plantation — just sent me a piece he wrote a few years ago on the productivity boom of the 1990s. As he said in his email, his account of the ‘90s is very consistent with the suggestions I make in my Roosevelt paper about how strong demand might stimulate productivity growth.

In this article, Wright traces the idea that high wage regions will experience faster productivity growth back to H. J. Habakkuk’s 1962 American and British Technology in the Nineteenth Century. Then he assembles a number of lines of evidence that rapid wage growth drove the late-1990s productivity acceleration, rather than vice versa.

He points out that the widely-noted “productivity explosion” of the 1920s — from 1.5 percent a year to over 5 percent — was immediately preceded by a period of exceptionally strong wage growth: “The real price of labor in the 1920s … was between 50 and 70 percent higher than a decade earlier.” [1] The pressure of high wages, he suggests, encouraged the use of electricity and other general-purpose technologies, which had been available for decades but only widely adopted in manufacturing in the 1920s. Conversely, we can see the productivity slowdown of the 1970s as, at least in part, a result of the deceleration of wage growth, which — Wright argues — was the result of institutional changes including the decline of unions, the erosion of the minimum wage and other labor regulations, and more broadly the shift back toward “‘flexible labor markets,’ reversing fifty years of labor market policy.”

Turning to the 1990s, the starting point is the sharp acceleration of productivity in the second half of the decade. This acceleration was very widely shared, including sectors like retail where historically productivity growth had been limited. The timing of this acceleration has been viewed as a puzzle, with no “smoking gun” for simultaneous productivity boosting innovations across this range of industries over a short period. But “if you look at the labor market, you can find a smoking gun in the mid-1990s. … real hourly wages finally began to rise at precisely that time, after more than two decades of decline. … Unemployment rates fell below 4 percent — levels reached only briefly in the 1960s… Should it be surprising that employers turned to labor-saving technologies at this time?” This acceleration in real wages, Wright argues, was not the result of higher productivity or other supply-side factors; rather “it is most plausibly attributed to macroeconomic conditions, when an accommodating Federal Reserve allowed employment to press against labor supply for the first time in a generation.”

The productivity gains of the 1990s did, of course, involve new use of information technology. But the technology itself was not necessarily new. “James Cortada [2004] lists eleven key IT applications in the retail industry circa 1995-2000, including electronic shelf labels, scanning, electronic fund transfer, sales-based ordering and internet sales … with the exception of e-business, the list could have come from the 1970s and 1980s.”

Wright, who is after all a historian, is careful not to argue that there is a general law linking higher wages to higher productivity in all historical settings. As he notes, “such a claim is refuted by the experience of the 1970s, when upward pressures on wages led mainly to higher inflation…” In his story, both sides are needed — the technological possibilities must exist, and there must be sufficient wage pressure to channel them into productivity-boosting applications. I don’t think anyone would say he’s made a decisive case, but if you’re inclined to a view like this the article certainly gives you more material to support it.

*

A rather different approach to these questions is this 2012 paper by Servaas Storm and C. W. M. Naastepad. Wright is focusing on a few concrete episodes in the history of a particular country, which he explores using a variety of material — survey and narrative as well as conventional economic data. Storm and Naastepad are proposing a set of general rules that they support with a few stylized facts and then explore via the properties of a formal model. There are things to be learned from both approaches.

In this case the model is simple: output is demand-determined. Demand is either a positive or a negative function of the wage share (i.e. the economy is either wage-led or profit-led). Labor productivity is a function of both output and the wage, reflecting two kinds of channels by which demand can influence productivity. Finally, an accounting identity says that employment growth is equal to output growth less labor productivity growth. The productivity equation is the distinctive feature here. Storm and Naastepad adopt as “stylized facts” — derived from econometric studies but not discussed in any detail — that both parameters are on the order of 0.4: An additional one percent growth in output, or in wages, will lead to an 0.4 percent growth in labor productivity.
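The two equations that do the work can be written out directly. This is a minimal sketch using the paper's stylized 0.4 elasticities; the function names and the 2-percent output growth figure are mine, for illustration.

```python
# Sketch of the Storm-Naastepad productivity and employment accounting,
# using the paper's stylized elasticities of 0.4 for both channels.

def productivity_growth(output_g, wage_g, eps_y=0.4, eps_w=0.4):
    """Labor productivity growth responds both to output growth
    (a Kaldor-Verdoorn channel) and to real wage growth."""
    return eps_y * output_g + eps_w * wage_g

def employment_growth(output_g, wage_g):
    """Accounting identity: employment growth equals output growth
    minus labor productivity growth."""
    return output_g - productivity_growth(output_g, wage_g)

# Hold output growth at 2%: wage restraint raises employment growth
# by slowing productivity, not by raising output.
print(round(employment_growth(2.0, 2.0), 2))  # 0.4
print(round(employment_growth(2.0, 0.0), 2))  # 1.2
```

This is the arithmetic behind their reading of the Dutch “employment miracle”: lower wage growth raises employment even with output growth unchanged.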

This is a very simple structure but it allows them to draw some interesting conclusions:

– Low wages may boost employment not through increased growth or competitiveness, but through lower labor productivity. (They suggest that this is the right way to think about the Dutch “employment miracle” of the 1990s.)

– Conversely, even where demand is wage-led (i.e. a shift to labor tends to raise total spending) faster wage growth is not an effective strategy for boosting employment, because productivity will rise as well. (Shorter hours or other forms of job-sharing, they suggest, may be more successful.)

– Where demand is strongly wage-led (as in the Scandinavian countries, they suggest), profits will not be affected much by wage growth. The direct effect of higher wages in this case could be mostly or entirely offset by the combination of higher demand and higher productivity. If true, this has obvious implications for the feasibility of the social democratic bargain there.

– Where demand is more weakly wage-led or profit-led (as with most structuralists, they see the US as the main example of the latter), distributional conflicts will be more intense. On the other hand, in this case the demand and productivity effects work together to make wage restraint a more effective strategy for boosting employment.

It’s worth spelling out the implications a bit more. A profit-led economy is one in which investment decisions are very sensitive to profitability. But investment is itself a major influence on profit, as a source of demand and — emphasized here — as a source of productivity gains that are captured by capital. So wage gains are more threatening to profits in a setting in which investment decisions are based largely on profitability. In an environment in which investment decisions are motivated by demand or exogenous animal spirits (“only a little more than an expedition to the South Pole, based on a calculation of benefits to come”), capitalists have less to fear from rising wages. More bluntly: one of the main dangers to capitalists of a rise in wages is its effect on the investment decisions of other capitalists.

What Recovery: Reading Notes

My Roosevelt Institute paper on potential output came out last week. (Summary here.) The paper has gotten some more press since Neil Irwin’s Times piece, including Ryan Cooper in The Week and Felix Salmon in Slate. My favorite headline is from Boing Boing: American Wages Are So Low, the Robots Don’t Want Your Jobs.

In the paper I tried to give a fairly comprehensive overview of the evidence and arguments that the US economy is not in any meaningful sense at potential output or full employment. But of course it was just one small piece of a larger conversation. Here are a few things I’ve found interesting recently on the same set of issues.

Perhaps the most important new academic contribution to this debate is this paper by Olivier Coibion, Yuriy Gorodnichenko, and Mauricio Ulate, on estimates of potential output, which came out too late for me to mention in the Roosevelt report. Their paper rigorously demonstrates that, despite their production-function veneer, the construction of potential output estimates ensures that any persistent change in growth rates will appear as a change in potential. It follows that there is “little value added in estimates of potential GDP relative to simple measures of statistical trends.” (Matthew Klein puts it more bluntly in an Alphaville post discussing the paper: “‘Potential’ output forecasts are actually worthless.”) The paper proposes an alternative measure of potential output, which they suggest can distinguish between transitory demand shocks and permanent shifts in the economy’s productive capacity. This alternative measure gives a very similar estimate for the output gap as simply looking at the pre-2008 forecasts or extrapolating from the pre-2008 trend. “Our estimates imply that U.S. output remains almost 10 percentage points below potential output, leaving ample room for policymakers to close the gap through demand-side policies if they so chose to.” Personally, I’m a little less convinced by their positive conclusions than by their negative ones. But this paper should definitely put to rest the idea (as in last year’s notorious CEA-chair letter) that it is obviously wrong — absurd and unserious — to think that a sufficient stimulus could deliver several years of 4 percent real growth, until GDP returned to its pre-recession trend. It may or may not be true, but it isn’t crazy.
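A quick sanity check on those magnitudes: with output 10 percent below potential, how many years of 4 percent growth would catching up actually take? The 2 percent trend growth rate below is my assumption for illustration, not a number from the paper.

```python
# Toy catch-up calculation: close a ~10% output gap by growing at 4%
# while potential grows along an assumed 2% trend.
level, potential = 1.0, 1.10   # output starts ~10% below potential
years = 0
while level < potential:
    level *= 1.04      # actual output grows at 4%
    potential *= 1.02  # potential grows at the assumed 2% trend
    years += 1
print(years)  # 5
```

So "several years of 4 percent real growth" is exactly what closing a gap of this size implies; there is nothing exotic about the arithmetic.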

Many of the arguments in my paper were also made in this valuable EPI report by Josh Bivens, reviving the old idea of a “high pressure economy”. Like me, Bivens argues that slow productivity growth is largely attributable to low investment, which in turn is due to weak demand and slow wage growth, which blunts the incentive for business to invest in labor-saving technology. One important point that Bivens makes that I didn’t, is that much past variation in productivity growth has been transitory; forecasts of future productivity growth based on the past couple of years have consistently performed worse than forecasts based on longer previous periods. So historical evidence gives us no reason to see the most recent productivity slowdown as permanent. My one quibble is that he only discusses faster productivity growth and higher inflation as possible outcomes of a demand-driven acceleration in wages. This ignores the third possible effect, redistribution from profits to wages — in fact a rise in the labor share is impossible without a period of “overfull” employment.

Minneapolis Fed president Neel Kashkari wrote a long post last fall on “diagnosing and treating the slow recovery.” Perhaps the most interesting thing here is that he poses the question at all. There’s a widespread view that once you correct for demographics, the exceptional performance of the late 1990s, etc., there’s nothing particularly slow about this recovery — no problem to diagnose or treat.

Another more recent post by Kashkari focuses on the dangers of forcing the Fed to mechanically follow a Taylor rule for setting interest rates. By his estimate, this would have led to an additional 2.5 million unemployed people this year. It’s a good illustration of the dangers of taking the headline measures of economic performance too literally. I also like its frank acknowledgement that the Fed — like all real world forecasters — rejects rational expectations in the models it uses for policymaking.
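For reference, the rule in question is Taylor's original 1993 formula, with equal half-weights on the inflation gap and the output gap and a 2 percent neutral real rate; the inputs in the example are illustrative.

```python
# Taylor's (1993) rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*gap,
# with a 2% neutral real rate and a 2% inflation target.

def taylor_rate(inflation, output_gap, r_star=2.0, target=2.0):
    return r_star + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

print(taylor_rate(2.0, 0.0))   # 4.0: inflation at target, gap closed
print(taylor_rate(1.5, -1.0))  # 2.75: below-target inflation, some slack
```

Kashkari's point is about the inputs as much as the formula: if the official output gap overstates how close we are to full employment, mechanically applying the rule prescribes rates that are too high, at a large cost in jobs.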

Kashkari’s predecessor Narayan Kocherlakota — who seems to agree more with the arguments in my paper — has a couple short but useful posts on his personal blog. The first, from a year ago, is probably the best short summary of the economic debate here that I’ve seen. Perhaps the key analytic point is that following a period of depressed investment, the economy may reach full employment given the existing capital stock while it is still well short of potential. So a period of rapid wage growth would not necessarily mean that the limits of expansionary policy have been reached, even if those wage gains were fully passed through to higher prices. His emphasis:

Because fiscal policy has been too tight, we have too little public capital. … At the same time, physical investment has been too low… Conditional on these state variables, we might well be close to full employment.  … But, even though we’re close to full employment, there’s a lot of room for super-normal growth. Both capital and TFP are well below their [long run level].  The full-employment growth rate is going to be well above its long-run level for several years.  We can’t conclude the economy is overheating just because it is growing quickly.

His second post focuses on the straightforward but often overlooked point that policy should take into account not just our best estimates but our uncertainty about them, and the relative risks of erring on each side. And if there is even a modest chance that more expansionary policy could permanently raise productivity, then the risks are much greater on the over-contractionary side. [1] In particular, if we are talking about fiscal stimulus, it’s not clear that there are any costs at all. “Crowding out” is normally understood to involve a rise in interest rates and a shift from private investment to public spending. In the current setting, there’s a strong case that higher interest rates at full employment would be a good thing (at least as long as we still rely on monetary policy as the main tool of countercyclical policy). And it’s not obvious, to say the least, that the marginal dollar of private investment is more socially useful than many plausible forms of public spending. [2] Kashkari has a post making a similar argument in defense of his minority vote not to raise rates at the most recent FOMC meeting. (Incidentally, FOMC members blogging about their decisions is a trend to be encouraged.)
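The asymmetry can be made concrete with a back-of-the-envelope expected-loss comparison. All the probabilities and cost figures below are my own illustrative assumptions, not Kocherlakota’s numbers:

```python
# Back-of-the-envelope risk comparison for erring on each side of potential.
# All numbers are illustrative assumptions, not estimates.
p_hysteresis = 0.2          # modest chance that demand permanently raises productivity
cost_overheating = 1.0      # one-off cost of temporarily above-target inflation
cost_permanent_gap = 20.0   # present value of a permanently lower output path

# If we expand and there is no hysteresis, we pay the overheating cost.
expected_loss_expand = (1 - p_hysteresis) * cost_overheating
# If we contract and hysteresis is real, we pay the permanent-gap cost.
expected_loss_contract = p_hysteresis * cost_permanent_gap

print(expected_loss_expand, expected_loss_contract)
```

Even with a hysteresis probability of only 20 percent, the expected loss from over-contraction dominates, because the permanent cost is so much larger than the temporary one.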

In a post from March which I missed at the time, Ryan Avent tries to square the circle of job-destroying automation and slow productivity growth. One half of the argument seems clearly right to me: Abundant labor and low wages discourage investment in productivity-raising technologies. As Avent notes, early British and, even more, American industrialization owed a lot to scarce labor and high wages. The second half of the argument is that labor is abundant today precisely because so much has been displaced by technology. His claim is that “robots taking the jobs” is consistent with low measured productivity growth if the people whose jobs are taken end up in a part of the economy with much lower output per worker. I’m not sure this works; this seems like the rare case in economics where an eloquent story would benefit from being re-presented with math.

Along somewhat similar lines, Simon Wren-Lewis points out that unemployment may fall because workers “price themselves into jobs” by accepting lower-wage (and presumably lower-productivity) jobs. But this doesn’t mean that the aggregate demand problem has been solved — instead, we’ve simply replaced open unemployment with what Joan Robinson called “disguised unemployment,” as some of people’s capacity for work continues to go to waste even while they are formally employed. “But there is a danger that central bankers would look at unemployment, … and conclude that we no longer have inadequate aggregate demand…. If demand deficiency is still a problem, this would be a huge and very costly mistake.”

Karl Smith at the Niskanen Center links this debate to the older one over the neutrality of money. Central bank interventions — and aggregate demand in general — are understood to be changes in the flow of money spending in the economy. But a long-standing tradition in economic theory says that money should be neutral in the long run. As we look at longer periods, changes in output and employment should depend more and more on real resources and technological capacities, and less and less on spending decisions — in the limit not at all. If you want to know why GDP fell in one quarter but rose in the next (this is something I always tell my undergraduates) you need to ask who chose to reduce their spending in the first period and who chose to increase it in the second. But if you want to know why we are materially richer than our grandparents, it would be silly to say it’s because we choose to spend more money. This is the reason why I’m a bit impatient with people who respond to the fact that, relative to the pre-2008 trend, output today has not recovered from the bottom of the recession, by saying “the trend doesn’t matter, deviations in output are always persistent.” This might be true but it’s a radical claim. It means you must either take the real business cycle view that there’s no such thing as aggregate demand, that even recessions are due to declines in the economy’s productive potential; or you must accept that in some substantial sense we really are richer than our grandparents because we spend more money. You can’t assert that GDP is not trend-stationary to argue against an output gap today unless you’re ready to accept these larger implications.

The invaluable Tom Walker has a fascinating post going back to even older debates, among 19th century anti-union and pro-union pamphleteers, about whether there was a fixed quantity of labor to be performed and whether, in that case, machines were replacing human workers. The back and forth (more forth than back: there seem to be a lot more anti-labor voices in the archives) is fun to read, but what’s the payoff for today’s debates?

The contemporary relevance of this excursion into the archives is that economic policy and economic thought walks on two legs. Conservative economists hypocritically but strategically embrace both the crowding out arguments for austerity and the projected lump-of-labor fallacy claims against pensions and shorter working time. They are for a “fixed amount” assumption when it suits their objectives and against it when it doesn’t. There is ideological method to their methodological madness. That consistency resolves itself into the “self-evidence” that nothing can be done.

That’s exactly right. When we ask why labor’s share has fallen so much over the past generation, we’re told it’s because of supply and demand — an increased supply of labor from China and elsewhere, and a decreased demand thanks to technology. But if someone then says that it might be a good idea to limit the supply of labor (by lowering the retirement age, let’s say) and to discourage capital-intensive production, the response is “are you crazy? that will only make everyone poorer, including workers.” Somehow distribution is endogenous when it’s a question of shifts in favor of capital, but becomes exogenously fixed when it’s a question of reversing them.

A number of heterodox writers have identified the claim that productivity growth depends on demand as Verdoorn’s law (or the Kaldor-Verdoorn Law). For example, the Post Keynesian blogger Ramanan mentions it here and here. I admit I’m a bit dissatisfied with this “law”. It’s regularly asserted by heterodox people, but you’ll scour our literature in vain looking for either a systematic account of how it is supposed to operate or quantitative evidence of how and how much (or whether) it does.

Adam Ozimek argues that the recent rise in employment should be seen as an argument for continued expansionary policy, not a shift away from it. After all, a few years ago many policymakers believed such a rise was impossible, since the decline in employment was supposed to be almost entirely structural.

Finally, Reihan Salam wants to enlist me for the socialist flank of a genuinely populist Trumpism. This is the flipside of criticism I’ve sometimes gotten for making this argument — doesn’t it just provide intellectual ammunition for the Bannon wing of the administration and its calls for vast infrastructure spending, which is also supposed to boost demand and generate much faster growth? Personally I think you need to make the arguments for what you think is true regardless of their political valence. But I might worry about this more if I believed there was even a slight chance that Trump might try to deliver for his working-class supporters.

 

[1] Kocherlakota talks about total factor productivity. I prefer to focus on labor productivity because it is based on directly observable quantities, whereas TFP depends on estimates not only of the capital stock but of various unobservable parameters. The logic of the argument is the same either way.

[2] I made similar arguments here.

 

EDIT: My comments on the heterodox literature on the Kaldor-Verdoorn Law were too harsh. I do feel this set of ideas is underdeveloped, but there is more there than my original post implied. I will try to do a proper post on this work at some point.

The Big Question for Macroeconomic Policy: Is This Really Full Employment?

Cross-posted from the Roosevelt Institute’s Next New Deal blog. This is a summary of my new paper What Recovery? The Case for Continued Expansionary Policy, also discussed in Neil Irwin’s July 26 article in the Times.

 

“Right now,” wrote Senator Chuck Schumer in a New York Times op-ed on Monday, “millions of unemployed or underemployed people, particularly those without a college degree, could be brought back into the labor force” with appropriate government policies. With this seemingly anodyne point, Schumer took sides in a debate that has sharply divided economists and policymakers: Is the US economy today operating at potential, with enough spending to make full use of its productive capacity? Or is there still substantial slack, unused capacity that could be put to work if someone — households, businesses or governments — decided to spend more? Is there an aggregate-demand problem that government should be trying to solve?

It’s difficult to answer this question because the economic signals seem to point in conflicting directions. Despite the recession officially ending in June 2009 and the economy enjoying steady growth for the past eight years, GDP is still far below the pre-2008 trend. If we compare GDP to forecasts made before the recession, the gap that opened up during the recession has not closed at all — in fact, it continues to get wider. Meanwhile, the official unemployment rate — probably the most watched indicator for the state of aggregate demand — is down to 4.4%, well below the level that was considered full employment even a few years ago. But this positive performance only partially reflects an increase in the number of Americans with jobs; mostly it comes from a decline in the size of the labor force — people who have or are seeking jobs. The fraction of the adult population employed is down to 60 percent from 63 percent a decade ago (and nearly 65 percent at the end of the 1990s).

Is this decline in the fraction of people employed the inevitable result of an aging population and similar demographic changes, or is it a sign that, despite the low measured unemployment rate, the economy is still far short of full employment? The Federal Reserve — one of the main sites of macroeconomic policy — has already indicated its belief that full employment has been reached by raising interest rates three times since December 2016. Fed Chair Janet Yellen and her colleagues are evidently convinced that the economy has reached its potential — that, given the real resources available, output and employment are as high as can reasonably be expected.

Other policymakers have been divided on the question, in ways that often cut across partisan lines. Senator Schumer’s statement — that the decline in employment is not an inevitable trend but rather a problem that government can and should solve — is a sign of new clarity coming to this murky debate. Along with his call for $1 trillion in new infrastructure spending, it’s an important acknowledgement that, despite the progress made since 2008, the country remains far from full employment.

In a new paper out this week, we at the Roosevelt Institute offer support for the emerging consensus that the economy needs policies to boost demand. The paper reviews the available data on where the economy is relative to its potential. We find that the balance of evidence suggests there is still a great deal of space for more expansionary policy.

We offer several lines of argument in support of this conclusion.

GDP has not recovered from the recession. GDP remains about 10 percent below both the long-term trend and the level that was predicted by the CBO and other forecasts prior to the 2008–2009 recession. There is no precedent in the postwar period for such a persistent decline in output. During the sixty years between 1947 and 2007, growth lost in recessions was always regained in the subsequent recovery.

The aging population does not explain low labor force participation. It is true that an aging population should contribute to lower employment, since older people are less likely to work than younger people. But this simple demographic story cannot explain the full fall in employment. Starting from the employment peak in 2000, aging trends explain only about half the decrease in employment that has actually occurred. And there are good reasons to think that even this overstates the role of demographics. First, during the same period, education levels have increased. Historically, higher education has been associated with higher employment rates, just as a larger share of elderly people has been associated with lower employment; statistically, these two effects should just about cancel out. Second, the post-recession fall in employment rates is not concentrated in older age groups, but among people in their 20s — something that a demographic story cannot explain.
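The logic of this kind of decomposition is a standard shift-share exercise: hold age-specific employment rates at their base-year values, let only the age shares change, and see how much of the fall in the employment-population ratio that counterfactual reproduces. The age groups, rates, and shares below are hypothetical round numbers for exposition, not the paper’s actual data:

```python
# Illustrative shift-share decomposition: how much of a fall in the
# employment-population ratio is explained by population aging alone?
# All numbers below are hypothetical, for exposition only.

# (age group, employment rate, population share) in the base year
base = [("25-54", 0.80, 0.55), ("55-64", 0.60, 0.20), ("65+", 0.18, 0.25)]
# later-year population shares, shifted toward older groups
later_shares = {"25-54": 0.50, "55-64": 0.21, "65+": 0.29}
# later-year actual employment rates (lower across the board)
later_rates = {"25-54": 0.77, "55-64": 0.59, "65+": 0.17}

base_epop = sum(rate * share for _, rate, share in base)
# counterfactual: hold age-specific rates fixed, change only age shares
counterfactual = sum(rate * later_shares[g] for g, rate, _ in base)
actual = sum(later_rates[g] * later_shares[g] for g, _, _ in base)

demographic_part = base_epop - counterfactual
total_fall = base_epop - actual
print(f"total fall: {total_fall:.3f}, explained by aging: {demographic_part:.3f}")
```

With these made-up numbers, aging alone accounts for somewhat more than half the total fall; the remainder is the within-group decline in employment rates, which is the part demographics cannot explain.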

The weak economy has held back productivity. About half the shortfall in GDP relative to the pre-2008 trend is explained by exceptionally slow productivity growth — that is, slow growth in output per worker. While many people assume that productivity is the result of technological progress outside the reach of macroeconomic policy, there are good reasons to think that the productivity slowdown is at least in part due to weak demand. Among the many possible links: Business investment, which is essential to raising productivity, has been extraordinarily weak over the past decade, and economists have long believed that demand is a central factor driving investment. And slow wage growth — a sign of labor-market weakness — reduces the incentive to adopt productivity-boosting technology.
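The arithmetic behind “about half” is a first-order percent decomposition of GDP into employment and output per worker; the round numbers below are illustrative, not the paper’s estimates:

```python
# GDP = employment x output per worker, so in percent terms a GDP
# shortfall relative to trend splits additively (to first order) between
# an employment shortfall and a productivity shortfall.
# Round illustrative numbers: a 10% GDP gap, half from each source.
gdp_gap = 0.10           # GDP 10% below the pre-2008 trend
employment_gap = 0.05    # employment 5% below trend (illustrative)

productivity_gap = gdp_gap - employment_gap
share_from_productivity = productivity_gap / gdp_gap
print(f"{share_from_productivity:.0%} of the GDP gap from slow productivity")
```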

Only a demand story makes sense. The overall economic picture is hard to understand except in terms of a continued demand shortfall. If employment is falling due to demographics, that should be associated with rising productivity and wages, as firms compete for scarce labor. If productivity growth is slow because there aren’t any more big innovations to make, that should be associated with faster employment growth and low profits, as firms can no longer find new ways to replace labor with capital. But neither of these scenarios matches the actual economy. And both stories predict higher inflation, rather than the persistent low inflation we have actually experienced. So even if supply-side stories explain individual pieces of macroeconomic data, it is almost impossible to make sense of the big picture without a large fall in aggregate demand.

Austerity is riskier than stimulus. Finally, we argue that, if policymakers are uncertain about how much space the economy has for increased demand, they should consider the balance of risks on each side. Too much stimulus would lead to higher inflation — easy to reverse, and perhaps even desirable, given the continued shortfall of inflation relative to the official 2 percent target. An overheated economy would also see real wages rise faster than productivity. While policymakers often see this as something to avoid, the decline in the wage share over the past decade cannot be reversed without a period of such “excess” wage growth. On the other hand, if there is still an output gap, failure to take aggressive steps to close it means foregoing literally trillions of dollars of useful goods and services and condemning millions of people to joblessness.

Fortunately, the solution to a demand shortfall is no mystery. Since Keynes, economists have known that when an economy is operating below its potential, all that is needed is for someone to spend more money. Of course, it’s best if that spending also serves some useful social purpose; exactly what that should look like will surely be the subject of much debate to come. But the first step is to agree on the problem. Today’s economy is still far short of its potential. We can do better.

What Does Crowding Out Even Mean?

Paul Krugman is taking some guff for this column where he argues that the US economy is now at potential, or full employment, so any shift in the federal budget toward deficit will just crowd out private demand.

Whether higher federal spending (or lower taxes) could, in present conditions, lead to higher output is obviously a factual question, on which people may read the evidence in different ways. As it happens, I don’t agree that current output is close to the limits of current productive capacity. But that’s not what I want to write about right now. Instead I want to ask: What concretely would crowding out even mean right now?

Below, I run through six possible meanings of crowding out, and then ask if any of them gives us a reason, even in principle, to worry about over-expansionary policy today. (Another possibility, suggested by Jared Bernstein, is that while we don’t need to worry about supply constraints for the economy as a whole, tax cuts could crowd out useful spending due to some unspecified financial constraint on the federal government. I don’t address that here.) Needless to say, doubts about the economic case for crowding-out are in no way an argument for the specific deficit-boosting policies favored by the new administration.

The most straightforward crowding-out story starts from a fixed supply of private savings. These savings can either be lent to the government, or to business. The more the former takes, the less is left for the latter. But as Keynes pointed out long ago, this simple loanable-funds story assumes what it sets out to prove. The total quantity of saving is fixed only if total income is fixed. If higher government spending can in fact raise total income, it will raise total saving as well. We can only tell a story about government and business competing for a given pool of saving if we have already decided for some other reason that GDP can’t change.

The more sophisticated version, embodied in the textbook ISLM model, postulates a fixed supply of money, rather than saving. [1] In Hicks’ formulation, money is used both for transactions and as the maximally liquid store of wealth. The higher is output, the more money is needed for transactions, and the less is available to be held as wealth. By the familiar logic of supply and demand, this means that wealthholders must be paid more to part with their remaining stock of money. The price wealthholders receive to give up their money is interest; so as GDP rises, so does the interest rate.
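Hicks’ logic can be put in a toy linear model. The functional forms and parameter values below are my own illustrative choices, not from any particular textbook:

```python
# Toy linear IS-LM model, solved for output Y and the interest rate r.
# IS curve: Y = A - b*r   (demand falls with the interest rate)
# LM curve: M = k*Y - h*r (fixed money supply M; transactions demand k*Y,
#                          liquidity demand falling with r)
# All parameters are illustrative.
def solve_islm(A, b, k, h, M):
    # substitute r = (k*Y - M)/h into the IS curve and solve for Y
    Y = (A + b * M / h) / (1 + b * k / h)
    r = (k * Y - M) / h
    return Y, r

Y0, r0 = solve_islm(A=100, b=2, k=0.5, h=4, M=40)
# raise autonomous spending (A up), money supply held fixed:
Y1, r1 = solve_islm(A=110, b=2, k=0.5, h=4, M=40)
# output rises, but so does the interest rate -- the "crowding out" channel
print(f"dY = {Y1 - Y0:.2f}, dr = {r1 - r0:.2f}")
```

The point of the sketch is just the comparative static: with the money stock fixed, higher spending raises the transactions demand for money, so wealthholders must be paid a higher rate to part with the rest, and the interest-sensitive part of demand is partially squeezed out.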

Unlike the loanable funds story with fixed saving, this second story does give a logically coherent account of crowding out. In a world of commodity money, if such ever was, it might even be literally true. But in a world of bank-created credit money, it’s at best a metaphor. Is it a useful metaphor? That would require two things. First, that the interest rate (whichever one we are interested in) is set by the financial system. And second, that the process by which this happens causes rates to systematically rise with demand. The first premise is immediately rejected by the textbooks, which tell us that “the central bank sets the interest rate.” But we needn’t take this at face value. There are many interest rates, not just one, and the spreads between them vary quite a bit; logically it is possible that strong demand could lead to wider spreads, as banks must stretch their liquidity further to make more loans. But in reality, the opposite seems more likely. Government debt is a source of liquidity for private banks, not a use of it; lending more to the government makes it easier, not harder, for them to also lend more to private borrowers. Also, a booming economy is one in which business borrowers are more profitable; marginal borrowers look safer and are likely to get better terms. And rising inflation, obviously, reduces the real value of outstanding debt; however annoying this is to bankers, rationally it makes them more willing to lend to their now less-indebted clients. Wicksell, the semi-acknowledged father of modern central banking theory, built his big book around the premise that in a credit-money system, inflation would give private banks no reason to raise interest rates.

And in fact this is what we see. Interest rate spreads are narrow in booms; they widen in crises and remain wide in downturns.

So crowding out mark two, the ISLM version, requires us to accept both that central banks cannot control the economically relevant interest rates, and that private banks systematically raise interest rates when times are good. Again, in a strict gold standard world there might be something to this — banks have to raise rates when their gold reserves are running low — but if we ever lived in that world it was 150 or 200 years ago or more.

A more natural interpretation of the claim that the economy is at potential, is that any further increase in demand would just lead to inflation. This is the version of crowding out in better textbooks, and also the version used by MMT folks. On a certain level, it’s obviously correct. Suppose the amount of money-spending in an economy increases. Then either the quantity of goods and services increases, or their prices do. There is no third option: The total percent increase in money spending, must equal the sum of the percent increase in “real” output and the percent increase in average prices. But how does the balance between higher output and higher prices play out in real life? One possibility is that potential output is a hard line: each dollar of spending up to there increases real output one for one, and leaves prices unchanged; each dollar of spending above there increases prices one for one and leaves output unchanged. Alternatively, we might imagine a smooth curve where as spending increases, a higher fraction of each marginal dollar translates into higher prices rather than higher output. [2] This is certainly more realistic, but it invites the question of which point exactly on this curve we call “potential”. And it awakens the great bane of postwar macro – an inflation-output tradeoff, where the respective costs and benefits must be assessed politically.
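In symbols, the accounting identity behind this point is just the growth-rate form of nominal spending, with $P$ the price level and $Y$ real output:

```latex
% Percent changes in nominal spending split between real output and prices.
% Exact in log terms:
\Delta \ln(PY) \;=\; \Delta \ln Y \;+\; \Delta \ln P
% or, to first order in percent changes:
\frac{\Delta (PY)}{PY} \;\approx\; \frac{\Delta Y}{Y} \;+\; \frac{\Delta P}{P}
```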

Crowding out mark three, the inflation version, is definitely right in some sense — you can’t produce more concrete use values without limit simply by increasing the quantity of money borrowed by the government (or some other entity). But we have to ask first, positively, when we will see this inflation, and second, normatively, how we value lower inflation vs higher output and income.

In the post-1980s orthodoxy, we as society are never supposed to face these questions. They are settled for us by the central bank. This is the fourth, and probably most politically salient, version of crowding out: higher government spending will cause the central bank to raise interest rates. This is the practical content of the textbook story, and in fact newer textbooks replace the LM curve — where the interest rate is in some sense endogenous — with a straight line at whatever interest rate is chosen by the central bank. In the more sophisticated textbooks, this becomes a central bank reaction function — the central bank’s actions change from being policy choices, to a fundamental law of the economic universe. The master parable for this story is the 1990s, when the Clinton administration came in with big plans for stimulus, only to be slapped down by Alan Greenspan, who warned that any increase in public spending would be offset by a contractionary shift by the Federal Reserve. But once Clinton made the walk to Canossa and embraced deficit reduction, Greenspan’s Fed rewarded him with low rates, substituting private investment in equal measure for the foregone public spending. In the current context, this means: Any increase in federal borrowing will be offset one for one by a fall in private investment — because the Fed will raise rates enough to make it happen.
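The reaction function in these textbooks is usually some variant of the Taylor (1993) rule, which makes the mechanical character of the story easy to see: any fiscal expansion that raises the output gap or inflation automatically raises the policy rate. A minimal sketch, using Taylor’s original coefficients:

```python
# The Taylor (1993) rule, a standard example of a central bank reaction
# function: the policy rate responds mechanically to inflation and the
# output gap. The 2.0 values for the neutral real rate and inflation
# target, and the 0.5 coefficients, follow Taylor's original formulation.
def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# At target inflation with a closed output gap, the rule gives a
# "neutral" nominal rate; a fiscal expansion that opens a positive
# output gap mechanically raises the rate.
print(taylor_rate(inflation=2.0, output_gap=0.0))
print(taylor_rate(inflation=2.0, output_gap=2.0))
```

Written this way, the “crowding out” is visibly a policy choice encoded in the rule’s coefficients, not a law of nature.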

This story is crowding out mark four. It depends, first, on what the central bank reaction function actually is — how confident are we that monetary policy will respond in a direct, predictable way to changes in the federal budget balance or to shifts in demand? (The more attention we pay to how the monetary sausage gets made, the less confident we are likely to be.) And second, on whether the central bank really has the power to reliably offset shifts in fiscal policy. In the textbooks this is taken for granted but there are reasons for doubt. It’s also not clear why the actions of the central bank should be described as crowding out by fiscal policy. The central bank’s policy rule is not a law of nature. Unless there is some other reason to think expansionary policy can’t work, it’s not much of an argument to say the Fed won’t allow it. We end up with something like: “Why can’t we have deficit-financed nice things?” “Because the economy is at potential – any more public spending will just crowd out private spending.” “How will it be crowded out exactly?” “Interest rates will rise.” “Why will they rise?” “Because the Federal Reserve will tighten.” “Why will they tighten?” “Because the economy is at potential.”

Suppose we take the central bank out of the picture. Suppose we allow supply constraints to bind on their own, instead of being anticipated by the central planners at the Fed. What would happen as demand pushed up against the limits of productive capacity? One answer, again, is rising inflation. But we shouldn’t expect prices to all rise in lockstep. Supply constraints don’t mean that production growth halts at once; rather, bottlenecks develop in specific areas. So we should expect inflation to begin with rising prices for inputs in inelastic supply — land, oil, above all labor. Textbook models typically include a Phillips curve, with low unemployment leading to rising wages, which in turn are passed on to higher prices.

But why should they be passed on completely? It’s easy to imagine reasons why prices don’t respond fully or immediately to changes in wages. In which case, as I’ve discussed before, rising wages will result in an increase in the wage share. Some people will object that such effects can only be temporary. I’m not sure this makes sense — why shouldn’t labor, like anything else, be relatively more expensive in a world where it is relatively more scarce? But even if you think that over the long term the wage share is entirely set on the supply side, the transition from one “fundamental” wage share to another still has to involve a period of wages rising faster or slower than productivity growth — which in a Phillips curve world, means a period above or below full employment.
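A minimal simulation of this mechanism, with parameter values that are purely illustrative: nominal wages grow faster than productivity, firms pass only a fraction of unit-labor-cost growth into prices, and the wage share drifts up for as long as the gap persists.

```python
# Partial pass-through of wages to prices and the wage share.
# The wage share per worker is w / (P * q): nominal wage w, price level P,
# output per worker q. Its growth rate is approximately
# (wage growth) - (inflation) - (productivity growth).
# Illustrative parameters only:
wage_growth = 0.04          # nominal wage growth (overfull employment)
productivity_growth = 0.01  # growth of output per worker
pass_through = 0.6          # fraction of unit-labor-cost growth passed to prices

wage_share = 0.58           # initial labor share (illustrative)
for year in range(5):
    unit_labor_cost_growth = wage_growth - productivity_growth
    inflation = pass_through * unit_labor_cost_growth
    # with incomplete pass-through, the share rises each year
    wage_share *= 1 + wage_growth - inflation - productivity_growth
    print(f"year {year + 1}: wage share = {wage_share:.3f}")
```

With full pass-through (the parameter set to 1), inflation absorbs the whole wage-productivity gap and the share is flat; with anything less, sustained tight labor markets shift distribution toward labor.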

We don’t hear as much about the labor share as the fundamental supply constraint, compared with savings, inflation or interest rates. But it comes right out of the logic of standard models. To get to crowding out mark five, though, we have to take one more step. We have to also postulate that demand in the economy is profit-led — that a distributional shift from profits toward wages reduces desired investment by more than it increases desired consumption. Whether (or which) real economies display wage-led or profit-led demand is a subject of vigorous debate in heterodox macro. But there’s no need to adjudicate that now. Right now I’m just interested in what crowding out could possibly mean.

Demand can affect distribution only if wage increases are not fully passed on to prices. One reason this might happen is that in an open economy, businesses lack pricing power; if they try to pass on increased costs, they’ll lose market share to imports. Follow that logic to its endpoint and there are no supply constraints — any increase in spending that can’t be satisfied by domestic production is met by imports instead. For an ideal small, open economy potential output is no more relevant than the grocery store’s inventory is for an individual household when we go shopping. Instead, like the household, the small open economy faces a budget constraint or a financing constraint — how much it can buy depends on how much it can pay for.

Needless to say, we needn’t go to that extreme to imagine a binding external constraint. It’s quite reasonable to suppose that, thanks to dependence on imported inputs and/or demand for imported consumption goods, output can’t rise without higher imports. And a country may well run out of foreign exchange before it runs out of domestic savings, finance or productive capacity. This is the idea behind multiple gap models in development economics, or balance of payments constrained growth. It also seems like the direction orthodoxy is heading in the eurozone, where competitiveness is bidding to replace inflation as the overriding concern of macro policy.

Crowding out mark six says that any increase in demand from the government sector will absorb scarce foreign exchange that will no longer be available to the private sector. How relevant it is depends on how inelastic import demand is, the extent to which the country as a whole faces a binding budget or credit constraint, and what concrete form that constraint takes — what actually happens if international creditors are stiffed, or worry they might be? But the general logic is that higher spending will lead to a higher trade deficit, which at some point can no longer be financed.

So now we have six forms of crowding out:

1. Government competes with business for fixed saving.

2. Government competes with business for scarce liquidity.

3. Increased spending would lead to higher inflation.

4. Increased spending would cause the central bank to raise interest rates.

5. Overfull employment would lead to overfast wage increases.

6. Increased spending would lead to a higher trade deficit.

The next question is: Is there any reason, even in principle, to worry about any of these outcomes in the US today? We can decisively set aside the first, which is logically incoherent, and confidently set aside the second, which doesn’t fit a credit-money economy in which government liabilities are the most liquid asset. But the other four certainly could, in principle, reflect real limits on expansionary policy. The question is: In the US in 2017, are higher inflation, higher interest rates, higher wages or a weaker balance of payments position problems we need to worry about? Are they even problems at all?

First, higher inflation. This is the most natural place to look for the costs of demand pushing up against capacity limits. In some situations you’d want to ask how much inflation, exactly, would come from erring on the side of overexpansion, and how costly that higher inflation would be against the benefits of lower unemployment. But we don’t have to ask that question right now, because inflation is, by conventional measures, too low; so higher inflation isn’t a cost of expansionary policy, but an additional benefit. The problem is even worse for Krugman, who has been calling for years now for a higher inflation target, usually 4 percent. You can’t support higher inflation without supporting the concrete action needed to bring it about, namely, a period of aggregate spending in excess of potential. [2] Now you might say that changing the inflation target is the responsibility of the Fed, not the fiscal authorities. But even leaving aside the question of democratic accountability, it’s hard to take this response seriously when we’ve spent the last eight years watching the Fed miss its existing target; setting a new higher target isn’t going to make a difference unless something else happens to raise demand. I just don’t see how you can write “What do we want? Four percent! When do we want it? Now!” and then turn around and object to expansionary fiscal policy on the grounds that it might be inflationary.

OK, but what if the Fed does raise rates in response to any increase in the federal budget deficit, as many observers expect? Again, if you think that more expansionary policy is otherwise desirable, it would seem that your problem here is with the Fed. But set that aside, and assume our choice is between a baseline 2018-2020 and an alternative with the same GDP but with higher budget deficits and higher interest rates. (This is the worst case for crowding out.) Which do we prefer? In the old days, the low-deficit, low-interest world would have been the only respectable choice: Private investment is obviously preferable to whatever government deficits might finance. (And to be fair, in the actual 2018-2020, they will mostly be financing high-end tax cuts.) But as Brad DeLong points out, the calculation is different today. Higher interest rates are now a blessing, not a curse, because they create more running room for the Fed to respond to a downturn. [3] In the second scenario, there will be some help from conventional monetary policy in the next recession, for whatever it’s worth; in the first scenario there will be no help at all. And one thing we’ve surely learned since 2008 is that the costs of cyclical downturns are much larger than previously believed. So here again, what is traditionally considered a cost of pushing past supply constraints turns out on closer examination to be a benefit.

Third, the danger that more expansionary policy will lead to a rise in the wage share. You don’t hear this one as much. I’ve suggested elsewhere that something like this may often motivate actual central bank decisions to tighten. Presumably it’s not what someone like Krugman is thinking about. But regardless of what’s in people’s heads, there’s a serious problem here for the crowding-out position. Let’s say that we believe, as both common sense and the textbooks tell us, that the rate of wage growth depends on the level of unemployment. Suppose we define full employment in the conventional way as the level of unemployment that leads to nominal wage growth just equal to productivity growth plus the central bank’s inflation target. Then by definition, any increase in the wage share requires a period of overfull employment — of unemployment below the full employment level. This holds even if you think the labor share in the long run is entirely technologically determined. A fortiori it holds if you think that the wage share is in some sense political, the result of the balance of forces between labor and capital.
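The arithmetic here can be made concrete with a minimal sketch. All the numbers below (productivity growth, inflation target, initial wage share) are invented for illustration, not estimates of the actual US economy:

```python
# Hypothetical numbers for illustration only; not data.
productivity_growth = 0.015  # assumed annual labor productivity growth
inflation_target = 0.02      # assumed central bank inflation target

# "Full employment" in the conventional sense: unemployment just low enough
# that nominal wage growth equals productivity growth plus the inflation
# target (compounded multiplicatively so the wage share is exactly flat).
full_employment_wage_growth = (1 + inflation_target) * (1 + productivity_growth) - 1

def wage_share_path(initial_share, nominal_wage_growth, years):
    """Evolve the wage share under constant nominal wage growth, with
    inflation at target and the assumed productivity growth."""
    share = initial_share
    for _ in range(years):
        # Real wage growth in excess of productivity growth is what moves the share.
        share *= (1 + nominal_wage_growth) / (
            (1 + inflation_target) * (1 + productivity_growth)
        )
    return share

# At exactly full employment, the wage share never moves...
flat = wage_share_path(0.60, full_employment_wage_growth, years=5)

# ...so any rise in the share requires a spell of "overfull" employment,
# with nominal wage growth above the full-employment rate.
rising = wage_share_path(0.60, 0.045, years=5)
```

With wage growth only ever at or below the full-employment rate, the share can drift down but never recover, which is the ratchet described below.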

Again, I’m simply baffled how someone can believe at the same time that the rising share of capital in national income is a problem, and that there is no space for expansionary policy once full employment is reached. [4] Especially since the unemployment target is missed so often from the other side. If you have periods of excessively high unemployment but no periods of excessively low unemployment, you get a kind of ratchet effect where the labor share can only go down, never up. I think this sort of cognitive dissonance happens because economics training puts aggregate demand in one box and income distribution in another. But this sort of hermetic separation isn’t really sustainable. The wage share can only be higher in the long run if there is some short-run period in which it rises.

Finally, the external constraint. It is probably true that more expansionary fiscal policy will lead to bigger trade deficits. But this only counts as crowding out if those deficits are in some sense unsustainable. Is this the case for the US? There are a lot of complexities here but the key point is that almost all our foreign liabilities (and all of the government’s) are denominated in dollars, and almost all our imports are invoiced in dollars. Personally, I think the world is still more likely to encounter a scarcity of dollar liquidity than a surfeit, so the problem of an external constraint doesn’t even arise. But let’s say I’m wrong and we get the worst-case scenario where the world is no longer willing to hold more dollar liabilities. What happens? Well, the value of the dollar falls. At a stroke, US foreign liabilities decline relative to foreign assets (which are almost all denominated in their home currencies), improving the US net international investment position; and US exports get cheaper for the rest of the world, improving US competitiveness. The problem solves itself.
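The valuation channel can be sketched with stylized balance-sheet numbers (the figures and the size of the exchange-rate move below are invented for the example):

```python
# Invented balance-sheet numbers; not actual US data.
us_liabilities_usd = 30.0  # foreign claims on the US, denominated in dollars
us_assets_fx = 25.0        # US claims on the rest of the world, in foreign currency

def niip_usd(usd_per_fx):
    """Net international investment position in dollars, at a given
    exchange rate (dollars per unit of foreign currency)."""
    return us_assets_fx * usd_per_fx - us_liabilities_usd

before = niip_usd(1.0)  # a net debtor position: 25 - 30 = -5
# Now suppose the dollar depreciates 20 percent against foreign currencies:
after = niip_usd(1.2)   # foreign-currency assets are worth more in dollars
```

Because the liabilities are fixed in dollars while the assets are not, depreciation mechanically improves the net position; the gain in export competitiveness is a separate, additional channel.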

Imagine a corporation with no liabilities except its stock, which paid all its employees and suppliers in its own stock and sold its goods for its own stock. How could this business go bankrupt? Any bad news would instantly mean its debts were reduced and its goods became cheaper relative to its competitors’. The US is in a similar position internationally. And if you think that over the medium term the US should be improving its trade balance, then, again, this cost of over-expansionary policy looks like a benefit — by driving down the value of the dollar, “irresponsible” policy will set the stage for a more sustainable recovery. The funny thing is that in other contexts Krugman understands this perfectly.

So as far as I can tell, even if we accept that the US economy has reached potential output/full employment, none of the costs of crossing this line are really costs today. Perhaps I’m wrong, perhaps I’m missing something. But it really is incumbent on anyone who argues there’s no space for further expansionary policy to explain what, concretely, would be the results of overshooting.

In short: When we ask how close the economy is to potential output, full employment or supply constraints, this is not just a factual question. We have to think carefully about what these terms mean, and whether they have the significance we’re used to in today’s conditions. This post has been more about Krugman than I intended, or than he deserves. A very large swathe of established opinion shares the view that the economy is close to potential in some sense, and that this is a serious objection to any policy that raises demand. What I’d like to ask anyone who thinks this is: Do you think higher inflation, a higher “natural” interest rate, a higher wage share or a weaker dollar would be bad things right now? And if not, what exactly is the supply constraint you are worried about?

 

[1] The LM in ISLM stands for liquidity-money. It’s supposed to be the combination of interest rates and output levels at which the demand for liquidity is satisfied by a given stock of money.

[2] OK, some people might say the Fed could bring about higher inflation just by announcing a different target. But they’re not who I’m arguing with here.

[3] Krugman himself says he’d “be a lot more comfortable … if interest rates were well clear of the ZLB.” How is that supposed to happen unless something else pushes demand above the full employment level at current rates?

[4] It would of course be defensible to say that the downward redistribution from lower unemployment would be outweighed by the upward redistribution from the package of tax cuts and featherbedding that delivered it. But that’s different from saying that a more expansionary stance is wrong in principle.