A new macroeconomics?

UPDATE: The video of this panel is here.

[On Friday, July 2, I am taking part in a panel organized by Economics for Inclusive Prosperity on “A new macroeconomics?” This is my contribution.]

Jón Steinsson wrote up some thoughts about the current state of macroeconomics. He begins:

There is a narrative within our field that macroeconomics has lost its way. While I have some sympathy with this narrative, I think it is a better description of the field 10 years ago than of the field today. Today, macroeconomics is in the process of regaining its footing. Because of this, in my view, the state of macroeconomics is actually better than it has been for quite some time.

I can’t help but be reminded of Olivier Blanchard’s 2008 article on the state of macroeconomics, which opened with a flat assertion that “the state of macro is good.” I am not convinced today’s positive assessment is going to hold up better than that one. 

Where I do agree with Jón is that empirical work in macro is in better shape than theory. But I think theory is in much worse shape than he thinks. The problem is not any particular assumption. It is the fundamental approach.

We need to be brutally honest: What is taught in today’s graduate programs as macroeconomics is entirely useless for the kinds of questions we are interested in. 

I have in front of me the macro comp from a well-regarded mainstream economics PhD program. The comp starts with the familiar Euler equation with a representative agent maximizing their utility from consumption over an infinite future. Then we introduce various complications — instead of a single good we have a final and an intermediate good, we allow firms to have some market power, we introduce random variation in the production technology or markup. The problem at each stage is to find the optimal path chosen by the representative household under the new set of constraints.
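For concreteness, here is a minimal sketch of the kind of setup such an exam begins with. The notation is generic, not copied from any particular comp:

```latex
% Representative household problem (stochastic growth skeleton):
\max_{\{c_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t)
\quad \text{s.t.} \quad c_t + k_{t+1} = f(k_t) + (1-\delta) k_t .
% The first-order condition is the Euler equation:
u'(c_t) = \beta \, E_t\!\left[\, u'(c_{t+1}) \left( f'(k_{t+1}) + 1 - \delta \right) \right].
```

Each of the complications above amounts to modifying the constraints or the production side and re-deriving this condition.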

This is what macroeconomics education looks like in 2021. I submit that it provides no preparation whatsoever for thinking about the substantive questions we are interested in. It’s not that this or that assumption is unrealistic. It is that there is no point of contact between the world of these models and the real economies that we live in.

I don’t think that anyone in this conversation reasons this way when they are thinking about real economic questions. If you are asked how serious inflation is likely to be over the next year, or how much of a constraint public debt is on public spending, or how income distribution is likely to change based on labor market conditions, you will not base your answer on some vaguely analogous question about a world of rational households optimizing the tradeoff between labor and consumption over an infinite future. You will answer it based on your concrete institutional and historical knowledge of the world we live in today.

To be sure, once you have come up with a plausible answer to a real world question, you can go back and construct a microfounded model that supports it. But so what? Yes, with some ingenuity you can get a plausible Keynesian multiplier out of a microfounded model. But in terms of what we actually know about real economies, we don’t learn anything from the exercise that the simple Keynesian multiplier didn’t already tell us.

The heterogeneous agent models that Jón talks about are to me symptoms of the problem, not signs of progress. You start with a fact about the world that we already knew: that consumption spending is sensitive to current income. Then you backfill a set of microfoundations that lead to that conclusion. The model doesn’t add anything; it just gets you back to your starting point, with a lot of time and effort that could have been used elsewhere. Why not just start from the existence of a marginal propensity to consume well above zero, and go forward from there?
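Starting there takes two lines. A minimal sketch, in generic notation of my own rather than anything from Jón's discussion:

```latex
C = c_0 + c\,Y, \qquad 0 < c < 1 \quad \text{(aggregate consumption with MPC } c\text{)}
Y = C + I + G \;\Longrightarrow\; Y = \frac{c_0 + I + G}{1 - c},
\qquad \frac{\partial Y}{\partial G} = \frac{1}{1 - c}.
```

With an MPC of one-half, each dollar of spending raises output by two dollars. This is the multiplier logic the microfounded version spends so much effort reproducing.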

Then on the other hand, think about what is not included in macroeconomics education at the graduate level. Nothing about national accounting. Nothing about policy. Nothing about history. Nothing about the concrete institutions that structure real labor and product markets.

My personal view is that we need to roll back the clock at least 40 years, and throw out the whole existing macroeconomics curriculum. It’s not going to happen tomorrow, of course. But if we want a macroeconomics that can contribute to public debates, that should be what we’re aiming for.

What should we be doing instead? There is no fully-fledged alternative to the mainstream, no heterodox theory that is ready to step in to replace the existing macro curriculum. Still, we don’t have to start from scratch. There are fragments, or building blocks, of a more scientific macroeconomics scattered around. We can find promising approaches in work from earlier generations, work in the margins of the profession, and work being done by people outside of economics, in the policy world, in finance, in other social sciences.  

This work, it seems to me, shares a number of characteristics.

First, it is in close contact with broader public debates. Macroeconomics exists not to study “the economy” in the abstract — there isn’t any such thing — but to help us address concrete problems with the economies that we live in. The questions of what topics are important, what assumptions are reasonable, what considerations are relevant, can only be answered from a perspective outside of theory itself. A useful macroeconomic theory cannot be an axiomatic system developed from first principles. It needs to start with the conversations among policymakers, business people, journalists, and so on, and then generalize and systematize them. 

A corollary of this is that we are looking not for a general model of the economy, but for a lot of specialized models for particular questions.

Second, it has national accounting at its center. Physical scientists spend an enormous amount of time refining and mastering their data collection tools. For macroeconomics, that means the national accounts, along with other sources of macro data. A major part of graduate education in economics should be gaining a deep understanding of existing accounting and data collection practices. If models are going to be relevant for policy or empirical work, they need to be built around the categories of macro data. One of the great vices of today’s macroeconomics is to treat a variable in a model as equivalent to a similarly-named item in the national accounts, even when they are defined quite differently. (“Investment” in a model, for example, seldom corresponds to investment as the accounts define it.)

Third, this work is fundamentally aggregative. The questions that macroeconomics asks involve aggregate variables like output, inflation, the wage share, the trade balance, etc. No matter how it is derived, the operational content of the theory is a set of causal relationships between these aggregate variables. You can certainly shed light on relationships between aggregates using micro data. But the questions we are asking always need to be posed in terms of observable aggregates. The disdain for “reduced form” models is something we have to rid ourselves of. 

Fourth, it is historical. There are few if any general laws for how “an economy” operates; what there are, are patterns that are more or less consistent over a certain span of time and space. Macroeconomics is also historical in a second sense: It deals with developments that unfold in historical time. (This, among other reasons, is why the intertemporal approach is fundamentally unsuitable.) We need fewer models of “the” business cycle, and more narrative descriptions of individual cycles. This requires a sort of figure-ground reversal in our thinking — instead of seeing concrete developments as case studies or tests of models, we need to see models as embedded in concrete stories. 

Fifth, it is monetary. The economies we live in are organized around money commitments and money flows, and most of the variables we are interested in are defined and measured in terms of money. These facts are not incidental. A model of a hypothetical non-monetary economy is not going to generate reliable intuitions about real economies. Of course it is sometimes useful to adjust money values for inflation, but it’s a bad habit to refer to the resulting quantities as “real” — it suggests that there is some objective quantity lying behind the monetary one, which is in no way the case.

In my ideal world, a macroeconomics education would proceed like this. First, here are the problems the external world is posing to us — the economic questions being asked by historians, policy makers, the business press. Second, here is the observable data relevant to those questions, and here is how the variables are defined and measured. Third, here is how those observables have evolved in some important historical cases. Fourth, here are some general patterns that seem to hold over a certain range — and just as important, here is the range where they don’t. Finally, here are some stories that might explain those patterns, stories that are plausible given what we know about how economic activity is organized.

Well, that’s my vision. Does it have anything to do with a plausible future of macroeconomics?

I certainly don’t expect established macroeconomists to throw out the work they’ve been doing their whole careers. Among younger economists, at least those whose interest in the economy is not strictly professional, I do think there is a fairly widespread recognition that macroeconomic theory is at an intellectual dead end. But the response is usually to do basically atheoretical empirical work, or go into a different field, like labor, where the constraints on theory are not so rigid. Then there is the heterodox community, which I come out of. I think there has been a great deal of interesting and valuable work within heterodox economics, and I’m glad to be associated with it. But as a project to change the views of the rest of the economics profession, it is clearly a failure.

As far as I can see, orthodox macroeconomic theory is basically unchallenged on its home ground. Nonetheless, I am moderately hopeful for the future, for two reasons. 

First, academic macroeconomics has lost much of its hold on public debate. I have a fair amount of contact with policymakers, and in my experience, there is much less deference to mainstream economic theory than there used to be, and much more interest in alternative approaches. Strong deductive claims about the relationships between employment, inflation, wage growth, etc. are no longer taken seriously.

To be sure, there was always a gulf between macroeconomic theory and practical policymaking. But at one time, this could be papered over by a kind of folk wisdom — low unemployment leads to inflation, public deficits lead to higher interest rates, etc. — that both sides could accept. Under the pressure of the extraordinary developments of the past dozen years, the policy conversation has largely abandoned this folk wisdom — which, from my point of view, is real progress. At some point, I think, academic economics will recognize that it has lost contact with the policy conversation, and make a jump to catch up. 

Keynes got a lot of things right, but one thing I think he got wrong was that practical men are “slaves to some defunct economist.” The relationship is more often the other way round. When practical people come to think about the economy in new ways, economic theory eventually follows.

I think this is often true even of people who in their day job do theory in the approved style. They don’t think in terms of their models when they are answering real world questions. And this in turn makes our problem easier. We don’t need to create a new body of macroeconomic theory out of whole cloth. We just need to take the implicit models that we already use in conversations like this one, and bring them into scholarship. 

That brings me to my second reason for optimism. Once people realize you don’t have to have microfoundations, that you don’t need to base your models on optimization by anyone, I think they will find that profoundly liberating. If you are wondering about, say, the effect of corporate taxation on productivity growth, there is absolutely no reason you need to model the labor supply decision of the representative household as some kind of intertemporal optimization. You can just, not do that. Whatever the story you’re telling, a simple aggregate relationship will capture it. 

The microfounded approach is not helping people answer the questions they’re interested in. It’s just a hoop they have to jump through if they want other people in the profession to take their work seriously. As Jón suggests, a lot of what people see as essential in theory is really just a matter of sociological convention within the discipline. These sorts of professional norms can be powerful, but they are also brittle. The strongest prop of the current orthodoxy is that it is the orthodoxy. Once people realize they don’t have to do theory this way, it’s going to open up enormous space for asking substantive questions about the real world.

I think that once that dam breaks, it is going to sweep away most of what is now taught as macroeconomics. I hope that we’ll see something quite different in its place.  

Once we stop chasing the will-o’-the-wisp of general equilibrium, we can focus on developing a toolkit of models addressed to particular questions. I hope in the years ahead we’ll see a more modest but useful body of theory, one that is oriented to the concrete questions that motivate public debates; that embeds its formal models in a historical narrative; that starts from the economy as we observe it, rather than a set of abstract first principles; that dispenses with utility and other unobservables; and that is ready to learn from historians and other social scientists.

Strange Defeat

Anyone who found something useful or provoking in my Jacobin piece on the state of economics might also be interested in this 2013 article by me and Arjun Jayadev, “Strange Defeat: How Austerity Economics Lost All the Intellectual Battles and Still Won the War.” It covers a good deal of the same ground, a bit more systematically but without the effort I made in the more recent piece to find the usable stuff in mainstream macro. Perhaps there wasn’t so much of it five years ago!

Here are some excerpts; you can read the full piece here.

* * * 

The extent of the consensus in mainstream macroeconomic theory is often obscured by the intensity of the disagreements over policy…  In fact, however, the contending schools and their often heated debates obscure the more fundamental consensus among mainstream macroeconomists. Despite the label, “New Keynesians” share the core commitment of their New Classical opponents to analyse the economy only in terms of the choices of a representative agent optimising over time. For New Keynesians as much as New Classicals, the only legitimate way to answer the question of why the economy is in the state it is in, is to ask under what circumstances a rational planner, knowing the true probabilities of all possible future events, would have chosen exactly this outcome as the optimal one. Methodologically, Keynes’ vision of psychologically complex agents making irreversible decisions under conditions of fundamental uncertainty has been as completely repudiated by the “New Keynesians” as by their conservative opponents.

For the past 30 years the dominant macroeconomic models that have been in use by central banks and leading macroeconomists have … ranged from what have been termed real business cycle theory approaches on the one end to New Keynesian approaches on the other: perspectives that are considerably closer in flavour and methodological commitments to each other than to the “old Keynesian” approaches embodied in such models as the IS-LM framework of undergraduate economics. In particular, while demand matters in the short run in New Keynesian models, it can have no effect in the long run; no matter what, the economy always eventually returns to its full-employment growth path.

And while conventional economic theory saw the economy as self-equilibrating, economic policy discussion was dominated by faith in the stabilising powers of central banks and in the wisdom of “sound finance”. … Some of the same economists, who today are leading the charge against austerity, were arguing just as forcefully a few years ago that the most important macroeconomic challenge was reducing the size of public debt…. New Keynesians follow Keynes in name only; they have certainly given better policy advice than the austerians in recent years, but such advice does not always flow naturally from their models.

The industrialised world has gone through a prolonged period of stagnation and misery and may have worse ahead of it. Probably no policy can completely tame the booms and busts that capitalist economies are subject to. And even those steps that can be taken will not be taken without the pressure of strong popular movements challenging governments from the outside. The ability of economists to shape the world, for good or for ill, is strictly circumscribed. Still, it is undeniable that the case for austerity – so weak on purely intellectual grounds – would never have conquered the commanding heights of policy so easily if the way had not been prepared for it by the past 30 years of consensus macroeconomics. Where the possibility and political will for stimulus did exist, modern economics – the stuff of current scholarship and graduate education – tended to hinder rather than help. And when the turn to austerity came, even shoddy work could have an outsize impact, because it had the whole weight of conventional opinion behind it. For this the mainstream of the economics profession – the liberals as much as the conservatives – must take some share of the blame.

In Jacobin: A Demystifying Decade for Economics

(The new issue of Jacobin has a piece by me on the state of economics ten years after the crisis. The published version is here. I’ve posted a slightly expanded version below. Even though Jacobin was generous with the word count and Seth Ackerman’s edits were as always superb, they still cut some material that, as king of the infinite space of this blog, I would rather include.)

 

For Economics, a Demystifying Decade

Has economics changed since the crisis? As usual, the answer is: it depends. If we look at the macroeconomic theory of PhD programs and top journals, the answer is clearly no. Macroeconomic theory remains the same self-contained, abstract art form that it has been for the past twenty-five years. But despite its hegemony over the peak institutions of academic economics, this mainstream is not the only mainstream. The economics of the mainstream policy world (central bankers, Treasury staffers, Financial Times editorialists), only intermittently attentive to the journals even in the best of times, has gone its own way; the pieties of a decade ago have much less of a hold today. And within the elite academic world, there’s plenty of empirical work that responds to the developments of the past ten years, even if it doesn’t — yet — add up to any alternative vision.

For a socialist, it’s probably a mistake to see economists primarily as either carriers of valuable technical expertise or systematic expositors of capitalist ideology. They are participants in public debates just like anyone else. The profession as a whole is more often found trailing after political developments than advancing them.

***

The first thing to understand about macroeconomic theory is that it is weirder than you think. The heart of it is the idea that the economy can be thought of as a single infinite-lived individual trading off leisure and consumption over all future time. For an orthodox macroeconomist – anyone who hoped to be hired at a research university in the past 30 years – this approach isn’t just one tool among others. It is macroeconomics. Every question has to be expressed as finding the utility-maximizing path of consumption and production over all eternity, under a precisely defined set of constraints. Otherwise it doesn’t scan.

This approach is formalized in something called the Euler equation, which is a device for summing up an infinite series of discounted future values. Some version of this equation is the basis of most articles on macroeconomic theory published in a mainstream journal in the past 30 years. It might seem like an odd default, given the obvious fact that real economies contain households, businesses, governments and other distinct entities, none of whom can turn income in the far distant future into spending today. But it has the advantage of fitting macroeconomic problems — which at face value involve uncertainty, conflicting interests, coordination failures and so on — into the scarce-means-and-competing-ends Robinson Crusoe vision that has long been economics’ home ground.

There’s a funny history to this technique. It was invented by Frank Ramsey, a young philosopher and mathematician in Keynes’ Cambridge circle in the 1920s, to answer the question: If you were organizing an economy from the top down and had to choose between producing for present needs versus investing to allow more production later, how would you decide the ideal mix? The Euler equation offers a convenient tool for expressing the tradeoff between production in the future versus production today.

This makes sense as a way of describing what a planner should do. But through one of those transmogrifications intellectual history is full of, the same formalism was picked up and popularized after World War II by Solow and Samuelson as a description of how growth actually happens in capitalist economies. The problem of macroeconomics has continued to be framed as how an ideal planner should direct consumption and production to produce the best outcomes for everyone, often with the “ideal planner” language intact. Pick up any modern economics textbook and you’ll find that substantive questions can’t be asked except in terms of how a far-sighted agent would choose this path of consumption as the best possible one allowed by the model.

There’s nothing wrong with adopting a simplified formal representation of a fuzzier and more complicated reality. As Marx said, abstraction is the social scientist’s substitute for the microscope or telescope. But these models are not simple by any normal human definition. The models may abstract away from features of the world that non-economists might think are rather fundamental to “the economy” — like the existence of businesses, money, and government — but the part of the world they do represent — the optimal tradeoff between consumption today and consumption tomorrow — is described in the greatest possible detail. This combination of extreme specificity on one dimension and extreme abstraction on the others might seem weird and arbitrary. But in today’s profession, if you don’t at least start from there, you’re not doing economics.

At the same time, many producers of this kind of model do have a quite realistic understanding of the behavior of real economies, often informed by first-hand experience in government. The combination of tight genre constraints and real insight leads to a strange style of theorizing, where the goal is to produce a model that satisfies the conventions of the discipline while arriving at a conclusion that you’ve already reached by other means. Michael Woodford, perhaps the leading theorist of “New Keynesian” macroeconomics, more or less admits that the purpose of his models is to justify the countercyclical interest rate policy already pursued by central banks in a language acceptable to academic economists. Of course the central bankers themselves don’t learn anything from such an exercise — and you will scan the minutes of Fed meetings in vain for discussion of first-order ARIMA technology shocks — but they presumably find it reassuring to hear that what they already thought is consistent with the most modern economic theory. It’s the economic equivalent of the college president in Randall Jarrell’s Pictures from an Institution:

About anything, anything at all, Dwight Robbins believed what Reason and Virtue and Tolerance and a Comprehensive Organic Synthesis of Values would have him believe. And about anything, anything at all, he believed what it was expedient for the president of Benton College to believe. You looked at the two beliefs, and lo! the two were one. Do you remember, as a child without much time, turning to the back of the arithmetic book, getting the answer to a problem, and then writing down the summary hypothetical operations by which the answer had been, so to speak, arrived at? It is the only method of problem-solving that always gives correct answers…

The development of theory since the crisis has followed this mold. One prominent example: After the crash of 2008, Paul Krugman immediately began talking about the liquidity trap and the “perverse” Keynesian claims that become true when interest rates are stuck at zero. Fiscal policy was now effective, there was no danger of inflation from increases in the money supply, a trade deficit could cost jobs, and so on. He explicated these ideas with the help of the “IS-LM” models found in undergraduate textbooks — genuinely simple abstractions that haven’t played a role in academic work in decades.

Some years later, he and Gauti Eggertsson unveiled a model in the approved New Keynesian style, which showed that, indeed, if interest rates were fixed at zero then fiscal policy, normally powerless, now became highly effective. This exercise may have been a display of technical skill (I suppose; I’m not a connoisseur), but what do we learn from it? After all, generating that conclusion was the announced goal from the beginning. The formal model was retrofitted to generate the argument that Krugman and others had been making for years, and lo! the two were one.

It’s a perfect example of Joan Robinson’s line that economic theory is the art of taking a rabbit out of a hat, when you’ve just put it into the hat in full view of the audience. I suppose what someone like Krugman might say in his defense is that he wanted to find out if the rabbit would fit in the hat. But if you do the math right, it always does.

(What’s funnier in this case is that the rabbit actually didn’t fit, but they insisted on pulling it out anyway. As the conservative economist John Cochrane gleefully pointed out, the same model also says that raising taxes on wages should boost employment in a liquidity trap. But no one believed that before writing down the equations, so they didn’t believe it afterward either. As Krugman’s coauthor Eggertsson judiciously put it, “there may be reasons outside the model” to reject the idea that increasing payroll taxes is a good idea in a recession.)

Left critics often imagine economics as an effort to understand reality that’s gotten hopelessly confused, or as a systematic effort to uphold capitalist ideology. But I think both of these claims are, in a way, too kind; they assume that economic theory is “about” the real world in the first place. Better to think of it as a self-constrained art form, whose apparent connections to economic phenomena are results of a confusing overlap in vocabulary. Think about chess and medieval history: The statement that “queens are most effective when supported by strong bishops” might be reasonable in both domains, but its application in the one case will tell you nothing about its application in the other.

Over the past decade, people (such as, famously, Queen Elizabeth) have often asked why economists failed to predict the crisis. As a criticism of economics, this is simultaneously setting the bar too high and too low. Too high, because crises are intrinsically hard to predict. Too low, because modern macroeconomics doesn’t predict anything at all. As Suresh Naidu puts it, the best way to think about what most economic theorists do is as a kind of constrained-maximization poetry. It makes no more sense to ask “is it true” of such poetry than of a haiku.

***

While theory buzzes around in its fly-bottle, empirical macroeconomics, more attuned to concrete developments, has made a number of genuinely interesting departures. Several areas have been particularly fertile: the importance of financial conditions and credit constraints; government budgets as a tool to stabilize demand and employment; the links between macroeconomic outcomes and the distribution of income; and the importance of aggregate demand even in the long run.

Not surprisingly, the financial crisis spawned a new body of work trying to assess the importance of credit, and financial conditions more broadly, for macroeconomic outcomes. (Similar bodies of work were produced in the wake of previous financial disruptions; these, however, don’t get much cited in the current iteration.) A large number of empirical papers tried to assess how important access to credit was for household spending and business investment, and how much of the swing from boom to bust could be explained by the tighter limits on credit. Perhaps the outstanding figures here are Atif Mian and Amir Sufi, who assembled a large body of evidence that the boom in lending in the 2000s reflected mainly an increased willingness to lend on the part of banks, rather than an increased desire to borrow on the part of families; and that the subsequent debt overhang explained a large part of depressed income and employment in the years after 2008.

While Mian and Sufi occupy solidly mainstream positions (at Princeton and Chicago, respectively), their work has been embraced by a number of radical economists who see vindication for long-standing left-Keynesian ideas about the financial roots of economic instability. Markus Brunnermeier (also at Princeton) and his coauthors have also done interesting work trying to untangle the mechanisms of the 2008 financial crisis and to generalize them, with particular attention to the old Keynesian concept of liquidity. That finance is important to the economy is not, in itself, news to anyone other than economists; but this new empirical work is valuable in translating this general awareness into concrete usable form.

A second area of renewed empirical interest is fiscal policy — the use of the government budget to manage aggregate demand. Even more than with finance, economics here has followed rather than led the policy debate. Policymakers were turning to large-scale fiscal stimulus well before academics began producing studies of its effectiveness. Still, it’s striking how many new and sophisticated efforts there have been to estimate the fiscal multiplier — the increase in GDP generated by an additional dollar of government spending.

In the US, there’s been particular interest in using variation in government spending and unemployment across states to estimate the effect of the former on the latter. The outstanding work here is probably that of Gabriel Chodorow-Reich. Like most entries in this literature, Chodorow-Reich’s suggests fiscal multipliers that are higher than almost any mainstream economist would have accepted a decade ago, with each dollar of government spending adding perhaps two dollars to GDP. Similar work has been published by the IMF, which acknowledged that past studies had “significantly underestimated” the positive effects of fiscal policy. This mea culpa was particularly striking coming from the global enforcer of economic orthodoxy.
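To make the research design concrete, here is a minimal sketch of the cross-state approach, run on synthetic data. The specification and the built-in multiplier of two are illustrative assumptions, not Chodorow-Reich's actual model or estimates:

```python
# Sketch of a cross-state fiscal multiplier regression on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_states, n_years = 50, 10

# Synthetic panel: per capita stimulus spending and an outcome (say, income growth).
spending = rng.normal(1.0, 0.3, size=(n_states, n_years))
state_fx = rng.normal(0, 0.5, size=(n_states, 1))   # persistent differences across states
year_fx = rng.normal(0, 0.5, size=(1, n_years))     # common national shocks
outcome = 2.0 * spending + state_fx + year_fx + rng.normal(0, 1, size=(n_states, n_years))

# Two-way within transformation: demean by state and by year, so the estimate
# uses only variation across states within a year, as in this literature.
def demean(x):
    return x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + x.mean()

y = demean(outcome).ravel()
X = sm.add_constant(demean(spending).ravel())
print(sm.OLS(y, X).fit().params)  # slope comes out near the built-in multiplier of 2
```

The demeaning is what lets this design sidestep national-level confounders: anything that hits all states in a given year, monetary policy included, is absorbed by the year effects.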

The IMF has also revisited its previously ironclad opposition to capital controls — restrictions on financial flows across national borders. More broadly, it has begun to offer, at least intermittently, a platform for work challenging the “Washington Consensus” it helped establish in the 1980s, though this shift predates the crisis of 2008. The changed tone coming out of the IMF’s research department has so far been only occasionally matched by a change in its lending policies.

Income distribution is another area where there has been a flowering of more diverse empirical work in the past decade. Here of course the outstanding figure is Thomas Piketty. With his collaborators (Gabriel Zucman, Emmanuel Saez and others) he has practically defined a new field. Income distribution has always been a concern of economists, of course, but it has typically been assumed to reflect differences in “skill.” The large differences in pay that appeared to be unexplained by education, experience, and so on, were often attributed to “unmeasured skill.” (As John Eatwell used to joke: Hegemony means you get to name the residual.)

Piketty made distribution — between labor and capital, not just across individuals — into something that evolves independently, and that belongs to the macro level of the economy as a whole rather than the micro level of individuals. When his book Capital in the 21st Century was published, a great deal of attention was focused on the formula “r > g,” supposedly reflecting a deep-seated tendency for capital accumulation to outpace economic growth. But in recent years there’s been an interesting evolution in the empirical work Piketty and his coauthors have published, focusing on countries like Russia/USSR and China, which didn’t feature in the original survey. Political and institutional factors like labor rights and the legal forms taken by businesses have moved to center stage, while the formal reasoning of “r > g” has receded — sometimes literally to a footnote. While no longer embedded in the grand narrative of Capital in the 21st Century, this body of empirical work is extremely valuable, especially since Piketty and company are so generous in making their data publicly available. It has also created space for younger scholars to make similar long-run studies of the distribution of income and wealth in countries that the Piketty team hasn’t yet reached, like Rishabh Kumar’s superb work on India. And it has been extended by other empirical economists, like Loukas Karabarbounis and coauthors, who have looked at changes in income distribution through the lens of market power and the distribution of surplus within the corporation — not something a University of Chicago economist would have been likely to study a decade ago.

A final area where mainstream empirical work has wandered well beyond its pre-2008 limits is the question of whether aggregate demand — and money and finance more broadly — can affect long-run economic outcomes. The conventional view, still dominant in textbooks, draws a hard line between the short run and the long run, more or less meaning a period longer than one business cycle. In the short run, demand and money matter. But in the long run, the path of the economy depends strictly on “real” factors — population growth, technology, and so on.

Here again, the challenge to conventional wisdom has been prompted by real-world developments. On the one hand, weak demand — reflected in historically low interest rates — has seemed to be an ongoing rather than a cyclical problem. Lawrence Summers dubbed this phenomenon “secular stagnation,” reviving a phrase used in the 1940s by the early American Keynesian Alvin Hansen.

On the other hand, it has become increasingly clear that the productive capacity of the economy is not something separate from current demand and production levels, but dependent on them in various ways. Unemployed workers stop looking for work; businesses operating below capacity don’t invest in new plant and equipment or develop new technology. This has manifested itself most clearly in the fall in labor force participation over the past decade, which has been considerably greater than can be explained on the basis of the aging population or other demographic factors. The bottom line is that an economy that spends several years producing less than it is capable of, will be capable of producing less in the future. This phenomenon, usually called “hysteresis,” has been explored by economists like Laurence Ball, Summers (again) and Brad DeLong, among others. The existence of hysteresis, among other implications, suggests that the costs of high unemployment may be greater than previously believed, and conversely that public spending in a recession can pay for itself by boosting incomes and taxes in future years.

These empirical lines are hard to fit into the box of orthodox theory — not that people don’t try. But so far they don’t add up to more than an eclectic set of provocative results. The creativity in mainstream empirical work has not yet been matched by any effort to find an alternative framework for thinking of the economy as a whole. For people coming from non-mainstream paradigms — Marxist or Keynesian — there is now plenty of useful material in mainstream empirical macroeconomics to draw on – much more than in the previous decade. But these new lines of empirical work have been forced on the mainstream by developments in the outside world that were too pressing to ignore. For the moment, at least, they don’t imply any systematic rethinking of economic theory.

***

Perhaps the central feature of the policy mainstream a decade ago was a smug and, in retrospect, remarkable complacency that the macroeconomic problem had been solved by independent central banks like the Federal Reserve.  For a sense of the pre-crisis consensus, consider this speech by a prominent economist in September 2007, just as the US was heading into its worst recession since the 1930s:

One of the most striking facts about macropolicy is that we have progressed amazingly. … In my opinion, better policy, particularly on the part of the Federal Reserve, is directly responsible for the low inflation and the virtual disappearance of the business cycle in the last 25 years. … The story of stabilization policy of the last quarter century is one of amazing success.

You might expect the speaker to be a right-wing Chicago type like Robert Lucas, whose claim that “the problem of depression prevention has been solved” was widely mocked after the crisis broke out. But in fact it was Christina Romer, soon headed to Washington as the Obama administration’s top economist. In accounts of the internal debates over fiscal policy that dominated the early days of the administration, Romer often comes across as one of the heroes, arguing for a big program of public spending against more conservative figures like Summers. So it’s especially striking that in the 2007 speech she spoke of a “glorious counterrevolution” against Keynesian ideas. Indeed, she saw the persistence of the idea of using deficit spending to fight unemployment as the one dark spot in an otherwise cloudless sky. There’s more than a little irony in the fact that opponents of the massive stimulus Romer ended up favoring drew their intellectual support from exactly the arguments she had been making just a year earlier. But it’s also a vivid illustration of a consistent pattern: ideas have evolved more rapidly in the world of practical policy than among academic economists.

For further evidence, consider a 2016 paper by Jason Furman, Obama’s final chief economist, on “The New View of Fiscal Policy.” As chair of the White House Council of Economic Advisers, Furman embodied the policy-economics consensus ex officio. Though he didn’t mention his predecessor by name, his paper was almost a point-by-point rebuttal of Romer’s “glorious counterrevolution” speech of a decade earlier. It starts with four propositions shared until recently by almost all respectable economists: that central banks can and should stabilize demand all by themselves, with no role for fiscal policy; that public deficits raise interest rates and crowd out private investment; that budget deficits, even if occasionally called for, need to be strictly controlled with an eye on the public debt; and that any use of fiscal policy must be strictly short-term.

None of this is true, suggests Furman. Central banks cannot reliably stabilize modern economies on their own, increased public spending should be a standard response to a downturn, worries about public debt are overblown, and stimulus may have to be maintained indefinitely. While these arguments obviously remain within a conventional framework in which the role of the public sector is simply to maintain the flow of private spending at a level consistent with full employment, they nonetheless envision much more active management of the economy by the state. It’s a remarkable departure from textbook orthodoxy for someone occupying such a central place in the policy world.

Another example of orthodoxy giving ground under the pressure of practical policymaking is Narayana Kocherlakota. When he was appointed President of the Federal Reserve Bank of Minneapolis, he was on the right of debates within the Fed, confident that if the central bank simply followed its existing rules the economy would quickly return to full employment, and rejecting the idea of active fiscal policy. But after a few years on the Fed’s governing Federal Open Market Committee (FOMC), he had moved to the far “dovish” left end of opinion, arguing strongly for a more aggressive approach to bringing unemployment down by any means available, including deficit spending and unconventional tools at the Fed. This meant rejecting much of his own earlier work, perhaps the clearest example of a high-profile economist repudiating his views after the crisis; in the process, he got rid of many of the conservative “freshwater” economists in the Minneapolis Fed’s research department.

The reassessment of central banks themselves has run on parallel lines but gone even farther.

For twenty or thirty years before 2008, the orthodox view of central banks offered a two-fold defense against the dangerous idea — inherited from the 1930s — that managing the instability of capitalist economies was a political problem. First, any mismatch between the economy’s productive capabilities (aggregate supply) and the desired purchases of households and businesses (aggregate demand) could be fully resolved by the central bank; the technicians at the Fed and its peers around the world could prevent any recurrence of mass unemployment or runaway inflation. Second, they could do this by following a simple, objective rule, without any need to balance competing goals.

During those decades, Alan Greenspan personified the figure of the omniscient central banker. Venerated by presidents of both parties, Greenspan was literally sanctified in the press — a 1990 cover of The International Economy had him in papal regalia, under the headline, “Alan Greenspan and His College of Cardinals.” A decade later, he would appear on the cover of Time as the central figure in “The Committee to Save the World,” flanked by Robert Rubin and the ubiquitous Summers. And the following year he showed up as Bob Woodward’s eponymous Maestro.

In the past decade, this vision of central banks and central bankers has eroded from several sides. The manifest failure to prevent huge falls in output and employment after 2008 is the most obvious problem. The deep recessions in the US, Europe and elsewhere make a mockery of the “virtual disappearance of the business cycle” that people like Romer had held out as the strongest argument for leaving macropolicy to central banks. And while Janet Yellen or Mario Draghi may be widely admired, they command nothing like the authority of a Greenspan.

The pre-2008 consensus is even more profoundly undermined by what central banks did do than by what they failed to do. During the crisis itself, the Fed and other central banks decided which financial institutions to rescue and which to allow to fail, which creditors would get paid in full and which would face losses. Both during the crisis and in the period of stagnation that followed, central banks also intervened in a much wider range of markets, on a much larger scale. In the US, perhaps the most dramatic moment came in late summer 2008, when the commercial paper market — the market for short-term loans used by the largest corporations — froze up, and the Fed stepped in with a promise to lend on its own account to anyone who had previously borrowed there. This watershed moment took the Fed from its usual role of regulating and supporting the private financial system, to simply replacing it.

That intervention lasted only a few months, but in other markets the Fed has largely replaced private creditors for a number of years now. Even today, it is the ultimate lender for about 20 percent of new mortgages in the United States. Policies of quantitative easing, in the US and elsewhere, greatly enlarged central banks’ weight in the economy — in the US, the Fed’s assets jumped from 6 percent of GDP to 25 percent, an expansion that is only now beginning to be unwound.  These policies also committed central banks to targeting longer-term interest rates, and in some cases other asset prices as well, rather than merely the overnight interest rate that had been the sole official tool of policy in the decades before 2008.

While critics (mostly on the Right) have objected that these interventions “distort” financial markets, this makes no sense from the perspective of a practical central banker. As central bankers like the Fed’s Ben Bernanke or the Bank of England’s Adam Posen have often said in response to such criticism, there is no such thing as an “undistorted” financial market. Central banks are always trying to change financial conditions to whatever they think favors full employment and stable prices. But as long as the interventions were limited to a single overnight interest rate, it was possible to paper over the contradiction between active monetary policy and the idea of a self-regulating economy, and pretend that policymakers were just trying to follow the “natural” interest rate, whatever that is. The much broader interventions of the past decade have brought the contradiction out into the open.

The broad array of interventions central banks have had to carry out over the past decade have also provoked some second thoughts about the functioning of financial markets even in normal times. If financial markets can get things wrong so catastrophically during crises, shouldn’t that affect our confidence in their ability to allocate credit the rest of the time? And if we are not confident, that opens the door for a much broader range of interventions — not only to stabilize markets and maintain demand, but to affirmatively direct society’s resources in better ways than private finance would do on its own.

In the past decade, this subversive thought has shown up in some surprisingly prominent places. Wearing his policy rather than his theory hat, Paul Krugman sees

… a broader rationale for policy activism than most macroeconomists—even self-proclaimed Keynesians—have generally offered in recent decades. Most of them… have seen the role for policy as pretty much limited to stabilizing aggregate demand. … Once we admit that there can be big asset mispricing, however, the case for intervention becomes much stronger… There is more potential for and power in [government] intervention than was dreamed of in efficient-market models.

From another direction, the notion that macroeconomic policy does not involve conflicting interests has become harder to sustain as inflation, employment, output and asset prices have followed diverging paths. A central plank of the pre-2008 consensus was the aptly named “divine coincidence,” in which the same level of demand would fortuitously and simultaneously lead to full employment, low and stable inflation, and production at the economy’s potential. Operationally, this was embodied in the “NAIRU” — the level of unemployment below which, supposedly, inflation would begin to rise without limit.
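Formally, the NAIRU idea rests on an accelerationist Phillips curve. The textbook version, sketched here in generic notation:

```latex
\pi_t = \pi_{t-1} - \alpha \left( u_t - u^* \right), \qquad \alpha > 0 ,
```

where u* is the NAIRU. Hold unemployment below u* and inflation ratchets up year after year without limit; hold it above, and inflation keeps falling.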

Over the past decade, as estimates of the NAIRU have fluctuated almost as much as the unemployment rate itself, it’s become clear that the NAIRU is too unstable and hard to measure to serve as a guide for policy, if it exists at all. It is striking to see someone as prominent as IMF chief economist Olivier Blanchard write (in 2016) that “the US economy is far from satisfying the ‘divine coincidence’,” meaning that stabilizing inflation and minimizing unemployment are two distinct goals. But if there’s no clear link between unemployment and inflation, it’s not clear why central banks should worry about low unemployment at all, or how they should trade off the risks of prices rising undesirably fast against the risk of too-high unemployment. With surprising frankness, high officials at the Fed and other central banks have acknowledged that they simply don’t know what the link between unemployment and inflation looks like today.

To make matters worse, a number of prominent figures — most vocally at the Bank for International Settlements — have argued that we should not be concerned only with conventional price inflation, but also with the behavior of asset prices, such as stocks or real estate. This “financial stability” mandate, if it is accepted, gives central banks yet another mission. The more outcomes central banks are responsible for, and the less confident we are that they all go together, the harder it is to treat central banks as somehow apolitical, as not subject to the same interplay of interests as the rest of the state.

Given the strategic role occupied by central banks in both modern capitalist economies and economic theory, this rethinking has the potential to lead in some radical directions. How far it will actually do so, of course, remains to be seen. Accounts of the Fed’s most recent conclave in Jackson Hole, Wyoming suggest a sense of “mission accomplished” and a desire to get back to the comfortable pieties of the past. Meanwhile, in Europe, the collapse of the intellectual rationale for central banks has been accompanied by the development of the most powerful central bank-ocracy the world has yet seen. So far the European Central Bank has not let its lack of democratic mandate stop it from making coercive intrusions into the domestic policies of its member states, or from serving as the enforcement arm of Europe’s creditors against recalcitrant debtors like Greece.

One thing we can say for sure: Any future crisis will bring the contradictions of central banks’ role as capitalism’s central planners into even sharper relief.

***

Many critics were disappointed that the crisis of 2008 did not lead to an intellectual revolution on the scale of the 1930s. It’s true that it didn’t. But the image of stasis you’d get from looking at the top journals and textbooks isn’t the whole picture — the most interesting conversations are happening somewhere else. For a generation, leftists in economics have struggled to change the profession, some by launching attacks (often well aimed, but ignored) from the outside, others by trying to make radical ideas parsable in the orthodox language. One lesson of the past decade is that both groups got it backward.

Keynes famously wrote that “Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” It’s a good line. But in recent years the relationship seems to have been more the other way round. If we want to change the economics profession, we need to start changing the world. Economics will follow.

Thanks to Arjun Jayadev, Ethan Kaplan, Mike Konczal and Suresh Naidu for helpful suggestions and comments.

Lecture Notes for Research Methods

I’m teaching a new class this semester, a masters-level class on research methods. It could be taught as simply the second semester of an econometrics sequence, but I’m taking a different approach, trying to think about what will help students do effective empirical work in policy/political settings. We’ll see how it works.

For anyone interested, here are the slides I will use on the first day. I’m not sure it’s all right; in fact I’m sure some of it is wrong. But that is how you figure out what you really think and know and don’t know about something: by teaching it.

After we’ve talked through this, we will discuss this old VoxEU piece as an example of effective use of simple scatterplots to make an economic argument.

I gave a somewhat complementary talk on methodology and heterodox macroeconomics at the Eastern Economics Association meetings last year. I’ve been meaning to transcribe it into a blogpost, but in the meantime you can listen to a recording, if you’re interested.

 

The Wit and Wisdom of Trygve Haavelmo

I was talking some time ago with my friend Enno about Merijn Knibbe’s series of articles on the disconnect between the variables used in economic models and the corresponding variables in the national accounts.1 Enno mentioned Trygve Haavelmo’s 1944 article The Probability Approach in Econometrics; he thought Haavelmo’s distinction between “theoretical variables,” “true variables,” and “observable variables” could be a useful way of thinking about the slippages between economic reality, economic data and economic theory.

I finally picked up the Haavelmo article, and it turns out to be a deep and insightful piece — for the reason Enno mentioned, but also more broadly on how to think about empirical economics. It’s especially interesting coming from someone who won the Nobel Prize for his foundational work in econometrics. It is another piece of evidence that orthodox economists in the mid-20th century thought more deeply and critically about the nature of their project than their successors do today.

It’s a long piece, with a lot of mathematical illustrations that someone reading it today can safely skip. The central argument comes down to three overlapping points. First, economic models are tools, developed to solve specific problems. Second, economic theories have content only insofar as they’re associated with specific procedures for measurement. Third, we have positive economic knowledge only insofar as we can make unconditional predictions about the distribution of observable variables.

The first point: We study economics in order to “become master of the happenings of real life.” This is on some level obvious, or vacuous, but it’s important; it functions as a kind of “he who has ears, let him hear.” It marks the line between those who come to economics as a means to some other end — a political commitment, for many of us; but it could just as well come from a role in business or policy — and those for whom economic theory is an end in itself. Economics education must, obviously, be organized on the latter principle. As soon as you walk into an economics classroom, the purpose of your being there is to learn economics. But you can’t, from within the classroom, make any judgement about what is useful or interesting for the world outside. Or as Hayek put it, “One who is only an economist, cannot be a good economist.”2

Here is what Haavelmo says:

Theoretical models are necessary tools in our attempts to understand and explain events in real life. … Whatever be the “explanations” we prefer, it is not to be forgotten that they are all our own artificial inventions in a search for an understanding of real life; they are not hidden truths to be “discovered.”

It’s an interesting question, which we don’t have to answer here, whether or to what extent this applies to the physical sciences as well. Haavelmo thinks this pragmatic view of scientific laws applies across the board:

The phrase “In the natural sciences we have laws” means not much more and not much less than this: The natural sciences have chosen fruitful ways of looking upon physical reality.

We don’t need to decide here whether we want to apply this pragmatic view to the physical sciences. It is certainly the right way to look at economic models, in particular the models we construct in econometrics. The “data generating process” is not an object existing out in the world. It is a construct you have created for one or both of these reasons: It is an efficient description of the structure of a specific matrix of observed data; it allows you to make predictions about some specific yet-to-be-observed outcome. The idea of a data-generating process is obviously very useful in thinking about the logic of different statistical techniques. It may be useful to do econometrics as if there were a certain data generating process. It is dangerously wrong to believe there really is one.
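Here is that “as if” in miniature, with made-up numbers. The point of the sketch is that the data-generating process below exists only because we wrote it down:

```python
# The "DGP" here is our own construction, not a hidden truth waiting to be discovered.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=500)
y = 2.0 + 3.0 * x + rng.normal(0, 1.5, size=500)  # the process we invented

# Acting *as if* the data came from a linear DGP, estimate it by least squares.
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # close to [2, 3], because we built the data that way
```

With real data there is no analogous guarantee; the “true” coefficients the estimator is chasing are a fiction of the setup, useful exactly insofar as the linear description is a fruitful one.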

Speaking of observation brings us to Haavelmo’s second theme: the meaninglessness of economic theory except in the context of a specific procedure for observation. It might naively seem, he says, that

since the facts we want to study present themselves in the form of numerical measurement, we shall have to choose our models from … the field of mathematics. But the concepts of mathematics obtain their quantitative meaning implicitly through the system of logical operations we impose. In pure mathematics there really is no such problem as quantitative definition of a concept per se …

When economists talk about the problem of quantitative definitions of economic variables, they must have something in mind which has to do with real economic phenomena. More precisely, they want to give exact rules how to measure certain phenomena of real life.

Anyone who got a B+ in real analysis will have no problem with the first part of this statement. For the rest, this is the point: economic quantities come into existence only through some concrete human activity that involves someone writing down a number. You can ignore this, most of the time; but you should not ignore it all of the time. Because without that concrete activity there’s no link between economic theory and the social reality it hopes to help us master or make sense of.

Haavelmo has some sharp observations on the kind of economics that ignores the concrete activity that generates its data, which seem just as relevant to economic practice today:

Does a system of questions become less mathematical and more economic in character just by calling x “consumption,” y “price,” etc.? There are certainly many examples of studies to be found that do not go very much further than this, as far as economic significance is concerned.

There certainly are!

An equation, Haavelmo continues,

does not become an economic theory just by using economic terminology to name the variables involved. It becomes an economic theory when associated with the rule of actual measurement of economic variables.

I’ve seen plenty of papers where the thought process seems to have been something like, “I think this phenomenon is cyclical. Here is a set of difference equations that produce a cycle. I’ll label the variables with names of parts of the phenomenon. Now I have a theory of it!” With no discussion of how to measure the variables or in what sense the objects they describe exist in the external world.

What makes a piece of mathematical economics not only mathematics but also economics is this: When we set up a system of theoretical relationships and use economic names for the otherwise purely theoretical variables involved, we have in mind some actual experiment, or some design of an experiment, which we could at least imagine arranging, in order to measure those quantities in real economic life that we think might obey the laws imposed on their theoretical namesakes.

Right. A model has positive content only insofar as we can describe the concrete set of procedures that gets us from the directly accessible evidence of our senses to the quantities in the model. In my experience this comes through very clearly if you talk to someone who actually works in the physical sciences. A large part of their time is spent close to the interface with concrete reality — capturing that lizard, calibrating that laser. The practice of science isn’t simply constructing a formal analog of physical reality, a model trainset. It’s actively pushing against unknown reality and seeing how it pushes back.

Haavelmo:

When considering a theoretical setup … it is common to ask about the actual meaning of this or that variable. But this question has no sense within the theoretical model. And if the question applies to reality it has no precise answer … we will always need some willingness among our fellow research workers to agree “for practical purposes” on questions of definitions and measurement … A design of experiments … is an essential appendix to any quantitative theory.

With respect to macroeconomics, the “design of experiments” means, in the first instance, the design of the national accounts. Needless to say, national accounting concepts cannot be treated as direct observations of the corresponding terms in economic theory, even if they have been reconstructed with that theory in mind. Cynamon and Fazzari’s paper on the measurement of household spending gives some perfect examples of this. There can’t be many contexts in which Medicare payments to hospitals, for example, are what people have in mind when they construct models of household consumption. But nonetheless that’s what they’re measuring, when they use consumption data from the national accounts.

I think there’s an important sense in which the actual question of any empirical macroeconomics work has to be: What concrete social process led the people working at the statistics office to enter these particular values in the accounts?

Or as Haavelmo puts it:

There is hardly an economist who feels really happy about identifying the current series of “national income,” “consumption,” etc. with the variables by those names in his theories. Or, conversely, he would think it too complicated or perhaps uninteresting to try to build models … [whose] variables would correspond to those actually given by current economic statistics. … The practical conclusion … is the advice that economists hardly ever fail to give, but that few actually follow, that one should study very carefully the actual series considered and the conditions under which they were produced, before identifying them with the variables of a particular theoretical model.

Good advice! And, as he says, hardly ever followed.

I want to go back to the question of the “meaning” of a variable, because this point is so easy to miss. Within a model, the variables have no meaning; we simply have a set of mathematical relationships that are either tautologous, arbitrary, or false. The variables only acquire meaning insofar as we can connect them to concrete social phenomena. It may be unclear to you, as a blog reader, why I’m banging on this point so insistently. Go to an economics conference and you’ll see.

The third central point of the piece is that meaningful explanation requires being able to identify a few causal links as decisive, so that all the other possible ones can be ignored.

Think back to that Paul Romer piece on what’s wrong with modern macroeconomics. One of the most interesting parts of it, to me, was its insistent Humean skepticism about the possibility of a purely inductive economics, or for that matter science of any kind. Paraphrasing Romer: suppose we have n variables, any of which may potentially influence the others. Well then, we have n equations, one for each variable, and n² parameters (counting intercepts). In general, we are not going to be able to estimate this system based on data alone. We have to restrict the possible parameter space either on the basis of theory, or by “experiments” (natural or otherwise) that let us set most of the parameters to zero on the grounds that there is no independent variation in those variables between observations. I’m not sure that Romer fully engages with this point, whose implications go well beyond the failings of real business cycle theory. But it’s a central concern for Haavelmo:

A theoretical model may be said to be simply a restriction upon the joint variations of a system of quantities … which otherwise might have any value. … Our hope in economic theory and research is that it may be possible to establish constant and relatively simple relations between dependent variables … and a relatively small number of independent variables. … We hope that for each variable y to be explained, there is a relatively small number of explaining factors the variations of which are practically decisive in determining the variations of y. … If we are trying to explain a certain observable variable, y, by a system of causal factors, there is, in general, no limit to the number of such factors that might have a potential influence upon y. But Nature may limit the number of factors that have a non-negligible factual influence to a relatively small number. Our hope for simple laws in economics rests upon the assumption that we may proceed as if such natural limitations of the number of relevant factors exist.
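Romer’s counting argument is easy to make concrete. Here is a minimal sketch (my own illustration, not from Romer or Haavelmo, with invented numbers): with n variables that may each depend on all the others, we have n equations with n parameters apiece, and with fewer observations than parameters per equation the data cannot pin them down at all.

```python
import numpy as np

# Toy identification problem: n potential explanatory variables, only T < n
# observations. All numbers are invented for illustration.
rng = np.random.default_rng(0)
n, T = 10, 8
X = rng.normal(size=(T, n))   # data on all n potentially relevant factors
y = rng.normal(size=T)        # the variable we want to explain

# The normal-equations matrix X'X has rank at most T < n, so it is singular:
# infinitely many coefficient vectors fit equally well. Estimation is possible
# only after restricting the parameter space a priori.
print(np.linalg.matrix_rank(X.T @ X))   # prints 8, not 10
```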

One way or another, to do empirical economics, we have to ignore most of the logically possible relationships between our variables. Our goal, after all, is to explain variation in the dependent variable. Meaningful explanation is possible only if the number of relevant causal factors is small. If someone asks “why is unemployment high,” a meaningful answer is going to involve at most two or three causes. If you say, “I have no idea, but all else equal wage regulations are making it higher,” then you haven’t given an answer at all. To be masters of the happenings of real life, we need to focus on causes of effects, not effects of causes.

In other words, ceteris paribus knowledge isn’t knowledge at all. Only unconditional claims count — but they don’t have to be predictions of a single variable, they can be claims about the joint distribution of several. But in any case we have positive knowledge only to the extent we can unconditionally say that future observations will fall entirely in a certain part of the state space. This fails if we have a ceteris paribus condition, or if our empirical work “corrects” for factors whose distribution and the nature of whose influence we have not investigated. [3] Applied science is useful because it gives us knowledge of the kind, “If I don’t turn the key, the car will not start; if I do turn the key, it will — or if it doesn’t, there is a short list of possible reasons why not.” It doesn’t give us knowledge like “All else equal, the car is more likely to start when the key is turned than when it isn’t.” [4]

If probability distributions are simply tools for making unconditional claims about specific events, then it doesn’t make sense to think of them as existing out in the world. They are, as Keynes also emphasized, simply ways of describing our own subjective state of belief:

We might interpret “probability” simply as a measure of our a priori confidence in the occurrence of a certain event. Then the theoretical notion of a probability distribution serves us chiefly as a tool for deriving statements that have a very high probability of being true.

Another way of looking at this. Research in economics is generally framed in terms of uncovering universal laws, for which the particular phenomenon being studied merely serves as a case study. [5] But in the real world, it’s more often the other way: We are interested in some specific case, often the outcome of some specific action we are considering. Or as Haavelmo puts it,

As a rule we are not particularly interested in making statements about a large number of observations. Usually, we are interested in a relatively small number of observation points; or perhaps even more frequently, we are interested in a practical statement about just one single new observation.

We want economics to answer questions like, “what will happen if the US imposes tariffs on China?” The question of what effects tariffs have on trade in the abstract is, itself, uninteresting and unanswerable.

What do we take from this? What, according to Haavelmo, should empirical economics look like?

First, the goal of empirical work is to explain concrete phenomena — what happened, or will happen, in some particular case.

Second, the content of a theory is inseparable from the procedures for measuring the variables in it.

Third, empirical work requires restrictions on the logically possible space of parameters, some of which have to be imposed a priori.

Finally, prediction (the goal) means making unconditional claims about the joint distribution of one or more variables. “Everything else equal” means “I don’t know.”

All of this based on the idea that we study economics not as an end in itself, but in response to the problems forced on us by the world.

“The financialization of the nonfinancial corporation”

One common narrative attached to the murky term financialization is that nonfinancial corporations have, in effect, turned themselves into banks or hedge funds — they have replaced investment in means of production with ownership of financial assets. Financial profits, in this story, have increasingly substituted for profits from making and selling stuff. I’m not sure where this idea originates — the epidemiology points toward my own homeland of UMass-Amherst — but it’s become almost accepted wisdom in left economics.

I’ve been skeptical of this story for a while, partly because it conflicts with my own vision of financialization as something done to nonfinancial corporations rather than by them — a point I’ll return to at the end of the post — and partly because I’ve never seen good evidence for it. On the cashflow side, it’s true there is a rise in interest income from the 1960s through the 1980s. But, as discussed in the previous post, this is outweighed by a rise in interest payments; it reflects a general rise in interest rates rather than a reorientation of corporate activity; and has subsequently been reversed. On the balance sheet side, there is indeed a secular rise in “financial” assets, but this is all in what the financial accounts call “unidentified” assets, which I’ve always suspected is mostly goodwill and equity in subsidiaries rather than anything we would normally think of as financial assets.

Now courtesy of Nathan Tankus, here is an excellent paper by Joel Rabinovich that makes this case much more thoroughly than I’d been able to.

The paper starts by distinguishing two broad stories of financialization: shareholder value orientation and acquisition of financial assets. In the first story, financialization means that corporations are increasingly oriented toward the wishes or interests of shareholders and other financial claimants. The second story is the one we are interested in here. Rabinovitch’s paper doesn’t directly engage with the shareholder-value story, but it implicitly strengthens it by criticizing the financial-assets one.

The targets of the paper include some of my smartest friends. So I’ll be interested to see what they say in response to it.

The critical questions are:  Have nonfinancial corporations’ holdings of financial assets really increased, relative to total assets? And, has their financial income risen relative to total income?

The answers in turn depend on two subsidiary issues. On the first question, we need to decide what is represented by the “other unidentified assets” category in the Financial Accounts, which is responsible for essentially all of the apparent rise in financial assets. And on the income side, we need to consistently compare the full set of financial flows to their nonfinancial equivalents. Rabinovich argues, convincingly in my view, that looking at financial income in isolation does not give a meaningful picture.

On the face of it, the asset and income pictures look quite different. In the official accounts, financial assets of nonfinancial corporations have increased from 40% of nonfinancial assets to 120% between 1946 and 2015. Financial income, on the other hand, is only 2.5% of total income and shows no long-term increase. This should already make us skeptical that the increase in “financial” assets represents income-generating assets in the usual sense.

Rabinovich then explores this in detail by combining the financial accounts with the IRS statistics of income (SOI) and the Compustat database. Each of these has strengths and weaknesses — Compustat provides firm-level data, but is limited to large, publicly-traded corporations and consolidates domestic and overseas operations; SOI gives detailed breakdowns of income sources for all forms of legal organization broken down by size, but it doesn’t include any balance-sheet variables, so it can’t be used to answer the asset questions.

In the financial accounts, the majority of the increase in identified financial assets is FDI stock. As Rabinovich notes, “it’s dubious to directly consider FDI as a financial asset if we take into account that it implies lasting interest with the intention to exercise control over the enterprise.” The largest part of the overall increase in financial assets, however, is in the residual “other unidentified assets” line of the financial accounts. The fact that there is no increase in income associated with these assets is already a reason to doubt that they are financial assets in the usual sense. Compustat data, while not strictly comparable, suggests that the majority of this is intangibles. The most important intangible is goodwill, which is simply the accounting term for the excess of an acquisition price over the book value of the acquired company. Importantly, goodwill is not depreciated but only written off through impairment. Another large portion is equity in unconsolidated subsidiaries; this accounts for a disproportionate share of the increase thanks to a change in accounting rules that required corporations to begin accounting for it explicitly. Other important intangibles include patents, copyrights, licenses, etc. These are not financial assets; rather they are assets or pseudo-assets acquired, like real investment, in order to carry out a company’s productive activities on an extended scale.

These are all aggregate numbers; perhaps the financialization story holds up better for the biggest firms? Rabinovich discusses this too. Both Compustat and SOI allow us to separate firms by size. As it turns out, the largest firms do have a greater proportion of financial income than the smaller ones. But even for the largest 0.05% of corporations, financial income is still only 3.5% of total income, and net financial income is still negative. As he reasonably concludes, “even for the biggest nonfinancial corporations, financialization must not be understood as mimicking financial corporations.”

What do we make of all this?

First, the view of financialization as nonfinancial businesses acquiring financial assets for income in place of real investment is widely held on the left. After my Jacobin interview came out, for example, several people promptly informed me that I was missing this important fact. So if the evidence does not in fact support it, that is worth knowing. Or at least, future statements of the hypothesis will be stronger if they respond to the points made here.

Second, the fact that “financial” assets in fact mostly consist of goodwill, interest in unconsolidated subsidiaries, and foreign investment is interesting in its own right, not just as negative criticism of the financialization story. It is a sign of the importance of ownership claims as a means of control over production — both as the substantive content of balance sheet positions and as a core part of corporate activity.

Third, the larger importance of the story is to the question of whether nonfinancial corporations and their managers should be seen mainly as participants in, or victims of, financialization. Conversely, is finance itself a distinct social actor? In a world in which the largest nonfinancial corporations have effectively turned themselves into hedge funds, it would not make much sense to talk about a conflict between productive capital and financial capital, or to imagine them as two distinct sets of people. But in a world like the one described here, or in my previous post, where the main nexus between nonfinancial corporations and finance is payments from the former to the latter, it may indeed make sense to think of them as distinct actors, of conflicts between them, and of intervening politically  on one side or the other.

Finally, to me, this paper is a model of how to do empirical work in economics. Through some historical process I’d like to understand better, economists have become obsessed with regression, to the point that in academic economics it’s become synonymous with empirics. Regression analysis starts from the idea that the data we observe is a random draw from some underlying data generating process in which a variable of interest is a function of one or more other variables. The goal of the regression is to recover the parameters of that function by observing independent or exogenous variation in the variables. But for most macroeconomic questions, we are dealing with historical processes where our goal is to understand what actually happened, and where the hypothesis of some underlying data-generating process from which historical data is drawn randomly is neither realistic nor useful. On the other hand, the economy is not a black box; we always have some idea of the mechanism linking macroeconomic variables. So we don’t need to evaluate our hypotheses by asking how probable it would be to draw the distribution we observe from some hypothetical random process; we can, and generally should, ask instead whether the historical pattern is consistent with the mechanism. Furthermore, regression analysis is generally focused on the qualitative question of whether variation in one variable can be said to cause variation in a second one; but in historical macroeconomics we are generally interested in how much of the variation in some outcome is due to various causes. A regression approach, it seems to me, is basically unsuited to the questions addressed here. This paper is a model of what one should do instead.

Heterodoxy and the Fly-Bottle

(I have a review in the new Review of Keynesian Economics of a collection of essays on pluralist, or non-mainstream, economics teaching. You can read the full review here. Since I doubt most readers of this blog are interested in the book, I’ve posted a shorter version of the review below – just the parts on the broader issues rather than my assessment of these particular essays.)

 

Wittgenstein famously described his aim in philosophy as “showing the fly the way out of the fly bottle.” The goal, he said, was not to resolve the questions posed by philosophers, but to escape them. As long as the fly is inside the bottle, understanding its contours is essential to getting it wherever it wants to go; but once the fly is outside, the shape of the bottle doesn’t matter at all.

Non-mainstream economists have a similar relationship to dominant theory. Because it has been drilled into us for years that the best way to think about the economy is in terms of the exchange of goods by rational agents, criticisms of that framework are a necessary step on the way to thinking in other terms. But the logical and empirical shortcomings of thinking about economic life in terms of a perfectly rational representative agent optimizing utility over infinite future time don’t, in themselves, tell us how we should think instead.

The essentially negative character of economic heterodoxy is a special challenge for undergraduate teaching. You can’t teach criticisms of economic orthodoxy without first teaching the ideas to be criticized. Finding our way out of orthodoxy was, for many of us, central to our intellectual development. Naturally we want to reproduce that experience for our students. This leads to a style of teaching that amounts to putting the flies into the bottle so we can show them the way out. But how useful is it to our students to understand the defects of a logical system it would never have occurred to them to adopt in the first place? Having spent so much time looking for a way out, it sometimes seems we don’t know what to do in the open air.

This dilemma is on full display in The Handbook of Pluralist Economics Education. In order to present a realistic model of the economy, Steve Keen writes in one of his two chapters, “an essential first is to demonstrate to students that the ostensibly well-developed and coherent traditional model is in fact an empty shell”. Many of the volume’s other contributors make similar claims. This is the spirit of Joan Robinson’s famous quip that the only reason to study economics is to avoid being fooled by economists. But if that is all we can offer, better to send our students to the departments of history, anthropology, engineering, or some other field that offers positive knowledge about social reality.

What then are we to do? Pluralism as such is not a useful guide; carried to an extreme it would, as Sheila Dow says here, amount to “anything goes,” which is not a viable basis for teaching a class (or for any other intellectual endeavor). This is a problem with pluralism as a positive value (and not only in economics teaching): Pluralism implies a number of distinct perspectives, but to be distinct they must be internally coherent, that is, unitary. Carried to an extreme, pluralism is self-undermining. To challenge the mainstream, at some point you must argue not just for the value of diversity in the abstract, but in favor of a particular alternative.

In practice, even economists who completely reject mainstream approaches in their own work often give them a large share of time in the classroom, in part because they feel obligated to prepare students for future academic work and in part, as Keen says, simply because of “the pressure to teach something”. Teaching is hard enough work even when you aren’t reconstructing the curriculum from the ground up. It’s much easier to teach a standard course and then add some critical material.

But pluralism in economics teaching doesn’t have to mean simply presenting orthodoxy and adding some criticisms of it. It could also mean approaching the material from a different angle that avoids — rather than attacks — the dominant formalisms in economics and gives students a useful set of tools for engaging with economic reality. For me, this means a focus on the definition and measurement of macroeconomic aggregates, and on the causal relationships between those aggregates. Concretely, it means reliance on flowcharts where the nodes are some observable variable, as opposed to the normal emphasis on diagrams representing functional relationships — ISLM, AS-AD, etc. — that can’t be directly observed.

A more specific problem in heterodox teaching — and heterodox economics in general — is the weight put on the financial crisis as an argument for alternatives to the mainstream. Many of the authors in this collection present the crisis of 2008 and its aftermath as a decisive refutation of economic orthodoxy. Edward Fullbrook declares that “no discipline has ever experienced systemic failure on the scale that economics has today.” David Wheat, less hyperbolically, argues that “the failure to foresee the financial epidemic in 2008” demonstrates a need to shift the focus of economics teaching away from long-run equilibrium. One might push back against this line of argument. It is true that several large financial institutions went bankrupt in 2008, and some financial assets fell steeply in value, to the dismay of their owners; but with the perspective of close to a decade, it’s less clear how much of a base these events offer for critique of either the economics profession or economic institutions. Singleminded focus on “the crisis” risks implying that the problem with our economic system is the rare occasions on which it fails to work well for owners of financial assets, while ignoring the ongoing problems of inequality, hierarchy and privilege; tedious and demeaning work; environmental degradation; and the fundamental disconnect between ever-increasing money wealth and unmet human needs – none of which has much to do with the failure of Lehman Brothers. As people used to say: capitalism is the crisis.

It is true, of course, that the economics profession failed to foresee or explain the 2008 crisis, but that’s nothing special. To make a list of phenomena unexplained by orthodox economics, just open the business pages of a newspaper. In any case, while it might have been reasonable at the time to expect some degree of self-criticism in the economics profession, and some increase in openness to alternatives, seven years later it is clear that there has not been. With a handful of exceptions – Narayana Kocherlakota is probably the most prominent in the US – mainstream economists have not revised their views in the light of the crisis; even those who were initially inclined to soul-searching have mostly decided that they were right all along. The case for heterodoxy must be made on other grounds.

Posts in Three Lines

I haven’t been blogging much lately. I’ve been doing real work, some of which will be appearing soon. But if I were blogging, here are some of the posts I might write.

*

Lessons from the 1990s. I have a new paper coming out from the Roosevelt Institute, arguing that we’re not as close to potential as people at the Fed and elsewhere seem to believe, and as I’ve been talking with people about it, it’s become clear that your priors depend a lot on how you think of the rapid growth of the 1990s. If you think it was a technological one-off, with no value as precedent — a kind of macroeconomic Bush v. Gore — then you’re likely to see today’s low unemployment as reflecting an economy working at full capacity, despite the low employment-population ratio and very weak productivity growth. But if you think the mid-90s is a possible analogue to the situation facing policymakers today, then it seems relevant that the last sustained episode of 4 percent unemployment led not to inflation but to employers actively recruiting new entrants to the labor force among students, poor people, even prisoners.

Inflation nutters. The Fed, of course, doesn’t agree: Undeterred by the complete disappearance of the statistical relationship between unemployment and inflation, they continue to see low unemployment as a threatening sign of incipient inflation (or something) that must be nipped in the bud. Whatever other effects rate increases may have, the historical evidence suggests that one definite consequence will be rising private and public debt ratios. Economists focus disproportionately on the behavioral effects of interest rate changes and ignore their effects on the existing debt stock because “thinking like an economist” means, among other things, thinking in terms of a world in which decisions are made once and for all, in response to “fundamentals” rather than to conditions inherited from the past.

An army with only a signal corps. What are those other effects, though? Arguments for doubting central bankers’ control over macroeconomic outcomes have only gotten stronger than they were in the 2000s, when they were already strong; at the same time, when the ECB says, “let the government of Spain borrow at 2 percent,” it carries only a little less force than the God of Genesis. I think we exaggerate the power of central banks over the real economy, but underestimate their power over financial markets (with the corollary that economists — heterodox as much as mainstream — see finance and real activity as much more tightly linked than they are).

It’s easy to be happy if you’re heterodox. This spring I was at a conference up at the University of Massachusetts, the headwaters of American heterodox economics, where I did my PhD. Seeing all my old friends reminded me what good prospects we in the heterodox world have – literally everyone I know from grad school has a good job. If you are wondering whether your prospects would be better at a nowhere-ranked heterodox economics program like UMass or a top-ranked program in some other social science, let me assure you, it’s the former by a mile — and you’ll probably have better drinking buddies as well.

The euro is not the gold standard. One of the topics I was talking about at the UMass conference was the euro which, I’ve argued, was intended to create something like a new gold standard, a hard financial constraint on governments. But that that was the intention doesn’t mean it’s the reality — in practice the TARGET2 system means that national central banks don’t face any binding constraint; unlike under the gold standard, what sits “outside” the national monetary membrane is itself a central bank. In this sense the euro is structurally more like Keynes’ proposals at Bretton Woods; it’s just not Keynes running it.

Can jobs be guaranteed? In principle I’m very sympathetic to the widespread (at least among my friends on social media) calls for a job guarantee. It makes sense as a direction of travel, implying a commitment to a much lower unemployment rate, expanded public employment, organizing work to fit people’s capabilities rather than vice versa, and increasing the power of workers vis-a-vis employers. But I have a nagging doubt: A job is contingent by its nature – without the threat of unemployment, can there even be employment as we know it?

The wit and wisdom of Haavelmo. I was talking a while back about Merijn Knibbe’s articles on the disconnect between economic theory and the national accounts with my friend Enno, and he mentioned Trygve Haavelmo’s 1944 article on The Probability Approach in Econometrics, which I’ve finally gotten around to reading. One of the big points of this brilliant article is that economic variables, and the models they enter into, are meaningful only via the concrete practices through which the variables are measured. A bigger point is that we study economics in order to “become master of the happenings of real life”: You can contribute to economics in the course of advancing a political project, or making money in financial markets, or administering a government agency (Keynes did all three), but you will not contribute if you pursue economics as an end in itself.

Coney Island. Laura and I took the boy down to Coney Island a couple days ago, a lovely day, his first roller coaster ride, rambling on the beach, a Cyclones game. One of the wonderful things about Coney Island is how little it’s changed from a century ago — I was rereading Delmore Schwartz’s In Dreams Begin Responsibilities the other day, and the title story’s description of a young immigrant couple walking the boardwalk in 1909 could easily be set today — so it’s disconcerting to think that the boy will never take his grandchildren there. It will all be under water.

I Don’t See Any Method At All

I’ve felt for a while that most critiques of economics miss the mark. They start from the premise that economics is a systematic effort to understand the concrete social phenomena we call “the economy,” an effort that has gone wrong in some way.

I don’t think that’s the right way to think about it. I think McCloskey was right to say that economics is just what economists do. Economic theory is essentially a closed formal system; it’s a historical accident that there is some overlap between its technical vocabulary and the language used to describe concrete economic phenomena. Economics, the discipline, is to the economy, the sphere of social reality, as chess theory is to medieval history: The statement, say, that “queens are most effective when supported by strong bishops” might be reasonable in both domains, but studying its application in the one case will not help at all in applying it in the other. A few years ago Richard Posner said that he used to think economics meant the study of “rational” behavior in whatever domain, but after the financial crisis he decided it should mean the study of the behavior of the economy using whatever methodologies. (I can’t find the exact quote.) Descriptively, he was right the first time; but the point is, these are two different activities. Or to steal a line from my friend Suresh, the best way to think about what most economists do is as a kind of constrained-maximization poetry. It makes no more sense to ask “is it true” than of a haiku.

One consequence of this is, as I say, that radical criticisms of the realism or logical consistency of orthodox economics do nothing to get us closer to a positive understanding of the economy. How is a raven unlike a writing desk? An endless number of ways, and enumerating them will leave you no wiser about either corvids or carpentry. Another consequence, the topic of the remainder of this post, is that when we turn to concrete economic questions there isn’t really a “mainstream” at all. Left critics want to take academic orthodoxy, a right-wing political vision, and the economic policy preferred by the established authorities, and roll them into a coherent package. But I don’t think you can. I think there is a mix of common-sense opinions, political prejudices, conventional business practice, and pragmatic rules of thumb, supported in an ad hoc, opportunistic way by bits and pieces of economic theory. It’s not possible to deduce the whole tottering pile from a few foundational texts.

More concretely: An economics education trains you to think in terms of real exchange — in terms of agents who (somehow or other) have come into possession of a bundle of goods, which they trade with each other. You can only use this framework to make statements about real economic phenomena if they are understood in terms of the supply side — if economic outcomes are understood in terms of different endowments of goods, or different real uses for them. Unless you’re in a position to self-consciously take another perspective, fitting your understanding of economic phenomena into a broader framework is going to mean expressing it as this kind of story, about the limited supply of real resources available, and the unlimited demands on them to meet real human needs. But there may be no sensible story of that kind to tell.

More concretely: What are the major macroeconomic developments of the past ten to twenty years, compared, say, with the previous fifty? For the US and most other developed countries, the list might look like:

– low and falling inflation

– low and falling interest rates

– slower growth of output

– slower growth of employment

– low business investment

– slower growth of labor productivity

– a declining share of wages in income

If you pick up an economics textbook and try to apply it to the world around you, these are some of the main phenomena you’d want to explain. What does the orthodox, supply-side theory tell us?

The textbook says that lower inflation is normally the result of a positive supply shock — an increase in real resources or an improvement in technology. OK. But then what do we make of the slowdown in output and productivity?

The textbook says that, over the long run, interest rates must reflect the marginal product of capital — the central bank (and monetary factors in general) can only change interest rates in the short run, not over a decade or more. In the Walrasian world, the interest rate and the return on investment are the same thing. So a sustained decline in interest rates must mean a decline in the marginal product of capital.

OK. So in combination with the slowdown in output growth, that suggests a negative technological shock. But that should mean higher inflation. Didn’t we just say that lower inflation implies a positive technological shock?

Employment growth in this framework is normally determined by demographics, or perhaps by structural changes in labor markets that change the effective labor supply. Slower employment growth means a falling labor supply — but that should, again, be inflationary. And it should be associated with higher wages: If labor is becoming relatively scarce, its price should rise. Yes, the textbook combines a bargaining model of wage determination for the short run with a marginal product story for the long run, without ever explaining how they hook up, but in this case it doesn’t matter, the two stories agree. A fall in the labor supply will result in a rise in the marginal product of labor as it’s withdrawn from the least productive activities — that’s what “marginal” means! So either way the demographic story of falling employment is inconsistent with low inflation, with a falling wage share, and with the slowdown in productivity growth.

Slower growth of labor productivity could be explained by an increase in labor supply — but then why has employment decelerated so sharply? More often it’s taken as technologically determined. Slower productivity growth then implies a slowdown in innovation — which at least is consistent with low interest rates and low investment. But this “negative technology shock” should, again, be inflationary. And it should be associated with a fall in the return to capital, not a rise.

On the other hand, the decline in the labor share is supposed to reflect a change in productive technology that encourages substitution of capital for labor, robots and all that. But how is this reconciled with the fall in interest rates, in investment and in labor productivity? To replace workers with robots, someone has to make the robots, and someone has to buy them. And by definition this raises the productivity of the remaining workers.

Which subset of these mutually incompatible stories does the “mainstream” actually believe? I don’t know that they consistently believe any of them. My impression is that people adopt one or another based on the question at hand, while avoiding any systematic analysis through violent abuse of the ceteris paribus condition.

To paraphrase Leijonhufvud, on Mondays and Wednesdays wages are low because technological progress has slowed down, holding down labor productivity. On Tuesdays and Thursdays wages are low because technological progress has sped up, substituting capital for labor. Students may come away a bit confused but the main takeaway is clear: Low wages are the result of inexorable, exogenous technological change, and not of any kind of political choice. And certainly not of weak aggregate demand.

Larry Summers, in this actually quite good Washington Post piece, at least is no longer talking about robots. But he can’t completely resist the supply-side lure: “The situation is worse in other countries with more structural issues and slower labor-force growth.” Wait, why would they be worse? As he himself says, “our problem today is insufficient inflation,” so what’s needed “is to convince people that prices will rise at target rates in the future,” which will “require … very tight markets.” If that’s true, then restrictions on labor supply are a good thing — they make it easier to generate wage and price increases. But that is still an unthought.

I admit, Summers does go on to say:

In the presence of chronic excess supply, structural reform has the risk of spurring disinflation rather than contributing to a necessary increase in inflation.  There is, in fact, a case for strengthening entitlement benefits so as to promote current demand. The key point is that the traditional OECD-type recommendations cannot be right as both a response to inflationary pressures and deflationary pressures. They were more right historically than they are today.

That’s progress, for sure — “less right” is a step toward “completely wrong”. The next step will be to say what his argument logically requires. If the problem is as he describes it then structural “problems” are part of the solution.

Varieties of the Phillips Curve

In this post, I first talk about a variety of ways that we can formalize the relationship between wages, inflation and productivity. Then I talk briefly about why these links matter, and finally how, in my view, we should think about the existence of a variety of different possible relationships between these variables.

*

My Jacobin piece on the Fed was, on a certain abstract level, about varieties of the Phillips curve. The Phillips curve is any of a family of graphs with either unemployment or “real” GDP on the X axis, and either the level or the change of nominal wages or the level of prices or the level or change of inflation on the Y axis. In any of the various permutations (some of which naturally are more common than others) this purports to show a regular relationship between aggregate demand and prices.

This apparatus is central to the standard textbook account of monetary policy transmission. In this account, a change in the amount of base money supplied by the central bank leads to a change in market interest rates. (Newer textbooks normally skip this part and assume the central bank sets “the” interest rate by some unspecified means.) The change in interest rates leads to a change in business and/or housing investment, which results via a multiplier in a change in aggregate output. [1] The change in output then leads to a change in unemployment, as described by Okun’s law. [2] This in turn leads to a change in wages, which is passed on to prices. The Phillips curve describes the last one or two or three steps in this chain.
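Purely as an illustration, here is that chain written out one link at a time. The chain itself comes from the textbook account above, but every coefficient below is invented:

```python
# Toy version of the textbook transmission chain. All elasticities are invented;
# nothing here is a calibration.
def textbook_transmission(d_rate):
    d_invest = -1.5 * d_rate     # rate hike -> lower investment
    d_output = 1.8 * d_invest    # multiplier: investment -> aggregate output
    d_unemp = -0.5 * d_output    # Okun's law: output -> unemployment
    d_wage = -2.0 * d_unemp      # Phillips curve: unemployment -> nominal wages
    d_price = 1.0 * d_wage       # full pass-through of wages to prices
    return d_price

# With these made-up numbers, a 1-point rate increase lowers inflation by 2.7 points.
print(textbook_transmission(0.01))
```

Each link is a separate behavioral claim; the Phillips curve, the subject of the rest of this post, is only the last one or two.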

Here I want to focus on the wage-price link. What are the kinds of stories we can tell about the relationship between nominal wages and inflation?

*

The starting point is this identity:

(1) w = y + p + s

That is, the percentage change in nominal wages (w) is equal to the sum of the percentage changes in real output per worker (y; also called labor productivity), in the price level (p, or inflation) and in the labor share of output (s). [3] This is the essential context for any Phillips curve story. This should be, but isn’t, one of the basic identities in any intermediate macroeconomics textbook.

Now, let’s call the increase in “real” or inflation-adjusted wages r. [4] That gives us a second, more familiar, identity:

(2) r = w – p

The increase in real wages is equal to the increase in nominal wages less the inflation rate.
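Because everything here is a growth rate, the identities are easy to check on made-up numbers. A minimal sketch (all figures invented, chosen only to make the arithmetic visible):

```python
# Identity (1): w = y + p + s, so the labor-share change is the residual
# s = w - y - p. Identity (2): r = w - p. All numbers are invented.
w = 0.04   # nominal wages grow 4%
y = 0.01   # labor productivity grows 1%
p = 0.02   # prices rise 2%

s = w - y - p   # labor share grows by the 1% residual
r = w - p       # real wages grow 2%
print(f"labor share growth s = {s:.1%}, real wage growth r = {r:.1%}")
```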

As always with these kinds of accounting identities, the question is “what adjusts”? What economic processes ensure that individual choices add up in a way consistent with the identity? [5]

Here we have five variables and two equations, so three more equations are needed for it to be determined. This means there is a large number of possible closures. I can think of five that come up, explicitly or implicitly, in actual debates.

Closure 1:

First is the orthodox closure familiar from any undergraduate macroeconomics textbook.

(3a) w = pE + f(U); f’ < 0

(4a) y = y*

(5a) p = w – y

Equation 3a says that labor-market contracts between workers and employers result in nominal wage increases that reflect expected inflation (pE) plus an additional increase, or decrease, that reflects the relative bargaining power of the two sides. [6] The curve described by f is the Phillips curve, as originally formulated — a relationship between the unemployment rate and the rate of change of nominal wages. Equation 4a says that labor productivity growth is given exogenously, based on technological change. 5a says that since prices are set as a fixed markup over costs (and since there is only labor and capital in this framework) they increase at the same rate as unit labor costs — the difference between the growth of nominal wages and labor productivity.

It follows from the above that

(6a) w – p = y

and

(7a) s = 0

Equation 6a says that the growth rate of real wages is just equal to the growth of average labor productivity. This implies 7a — that the labor share remains constant. Again, these are not additional assumptions, they are logical implications from closing the model with 3a-5a.

This closure has a couple other implications. There is a unique level of unemployment U* such that w = y + p; only at this level of unemployment will actual inflation equal expected inflation. Assuming inflation expectations are based on inflation rates realized in the past, any departure from this level of unemployment will cause inflation to rise or fall without limit. This is the familiar non-accelerating inflation rate of unemployment, or NAIRU. [7] Also, an improvement in workers’ bargaining position, reflected in an upward shift of f(U), will do nothing to raise real wages, but will simply lead to higher inflation. Even more: If an inflation-targeting central bank is able to control the level of output, stronger bargaining power for workers will leave them worse off, since unemployment will simply rise enough to keep nominal wage growth in line with y* and the central bank’s inflation target.

Finally, notice that while we have introduced three new equations, we have also introduced a new variable, pE, so the model is still underdetermined. This is intended. The orthodox view is that the same set of “real“ values is consistent with any constant rate of inflation, whatever that rate happens to be. It follows that a departure of the unemployment rate from U* will cause a permanent change in the inflation rate. It is sometimes suggested, not quite logically, that this is an argument in favor of making price stability the overriding goal of policy. [8]
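The accelerationist implication is easy to see in a toy simulation. Here is a minimal sketch of closure 1 with adaptive expectations, pE equal to last period’s inflation; the linear form f(U) = y* + k(U* – U) and every number in it are my own invented illustration:

```python
# Closure 1 with adaptive expectations: pE_t = p_{t-1}. At U = U*, inflation is
# constant; holding U below U* makes inflation rise without limit.
# All parameter values are invented.
U_star, k, y_star = 0.05, 2.0, 0.01

for U in (0.05, 0.04):
    p = 0.02                                # initial (expected) inflation
    path = []
    for _ in range(5):
        w = p + y_star + k * (U_star - U)   # (3a): w = pE + f(U)
        p = w - y_star                      # (5a): p = w - y
        path.append(round(p, 3))
    print(f"U = {U}: inflation path {path}")

# U = 0.05: inflation stays at 2% forever.
# U = 0.04: inflation rises 2 points every period, without limit: the NAIRU logic.
```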

If you pick up an undergraduate textbook by Carlin and Soskice, Krugman and Wells, or Blanchard, this is the basic structure you find. But there are other possibilities.

Closure 2: Bargaining over the wage share

A second possibility is what Anwar Shaikh calls the “classical” closure. Here we imagine the Phillips curve in terms of the change in the wage share, rather than the change in nominal wages.

(3b) s =  f(U); f’ < 0

(4b) y = y*

(5b) p = p*

Equation 3b says that the wage share rises when unemployment is low, and falls when unemployment is high. In this closure, inflation as well as labor productivity growth are fixed exogenously. So again, we imagine that low unemployment improves the bargaining position of workers relative to employers, and leads to more rapid wage growth. But now there is no assumption that prices will follow suit, so higher nominal wages instead translate into higher real wages and a higher wage share. It follows that:

(6b) w = f(U) + p + y

Or as Shaikh puts it, both productivity growth and inflation act as shift parameters for the nominal-wage Phillips curve. When we look at it this way, it’s no longer clear that there was any breakdown in the relationship during the 1970s.

If we like, we can add an additional equation making the change in unemployment a function of the wage share, writing the change in unemployment as u.

(7b) u = g(s); g’ > 0 or g’ < 0

If unemployment is a positive function of the wage share (because a lower profit share leads to lower investment and thus lower demand), then we have the classic Marxist account of the business cycle, formalized by Goodwin. But of course, we might imagine that demand is “wage-led” rather than “profit-led” and make U a negative function of the wage share — a higher wage share leads to higher consumption, higher demand, higher output and lower unemployment. Since lower unemployment will, according to 3b, lead to a still higher wage share, closing the model with g’ < 0 leads to explosive dynamics — or more reasonably, once we impose some bounds, to two equilibria, one with a high wage share and low unemployment, the other with high unemployment and a low wage share. This is what Marglin and Bhaduri call a “stagnationist” regime.
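Here is a bare-bones simulation of the Goodwin case, g’ > 0 (the linear forms and every parameter are my own invented illustration, not Goodwin’s formulation): the wage share and unemployment chase each other around a cycle.

```python
# Goodwin-style cycle: s rises when U is below 5% (3b), and U rises when s is
# above 60% (7b with g' > 0). Linear forms and all numbers are invented.
# Forward-Euler steps drift slightly outward, but the mechanism is visible.
s, U = 0.62, 0.05           # start with the wage share above its resting value
dt = 0.1
for t in range(1, 127):     # roughly one full cycle
    ds = -0.5 * (U - 0.05) * dt
    dU = 0.5 * (s - 0.60) * dt
    s, U = s + ds, U + dU
    if t % 31 == 0:         # print at roughly quarter-cycle intervals
        print(f"t={t:3d}  wage share={s:.3f}  unemployment={U:.3f}")

# High wage share -> rising unemployment -> falling wage share ->
# falling unemployment -> rising wage share again.
```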

Let’s move on.

Closure 3: Real wage fixed.

I’ll call this the “Classical II” closure, since it seems to me that the assumption of a fixed “subsistence” wage is used by Ricardo and Malthus and, at times at least, by Marx.

(3c) w – p = 0

(4c) y = y*

(5c) p = p*

Equation 3c says that real wages are constant: the change in nominal wages is just equal to the change in the price level. [9] Here again the change in prices and in labor productivity are given from outside. It follows that

(6c) s = -y

Since the real wage is fixed, increases in labor productivity reduce the wage share one for one. Similarly, falls in labor productivity will raise the wage share.

This latter, incidentally, is a feature of the simple Ricardian story about the declining rate of profit. As lower quality land is brought into use, the average productivity of labor falls, but the subsistence wage is unchanged. So the share of output going to labor, as well as to landlords’ rent, rises as the profit share goes to zero.

Closure 4:

(3d) w =  f(U); f’ < 0

(4d) y = y*

(5d) p = p*

This is the same as the second one except that now it is the nominal wage, rather than the wage share, that is set by the bargaining process. We could think of this as the naive model: nominal wages, inflation and productivity are all just whatever they are, without any regular relationships between them. (We could even go one step more naive and just set wages exogenously too.) Real wages then are determined as a residual by nominal wage growth and inflation, and the wage share is determined as a residual by real wage growth and productivity growth. Now, it’s clear that this can’t apply when we are talking about very large changes in prices — real wages can only be eroded by inflation so far. But it’s equally clear that, for sufficiently small short-run changes, the naive closure may be the best we can do. The fact that real wages are not entirely a passive residual does not mean they are entirely fixed; presumably there is some domain over which nominal wages are relatively fixed and their “real” purchasing power depends on what happens to the price level.

Closure 5:

One more.

(3e) w =  f(U) + a pE; f’ < 0; 0 < a < 1

(4e) y = b (w – p); 0 < b < 1

(5e) p =  c (w – y); 0 < c < 1

This is more generic. It allows for an increase in nominal wages to be distributed in some proportion between higher inflation, an increase in the wage share, and faster productivity growth. The last possibility is some version of Verdoorn’s law. The idea that scarce labor, or equivalently rising wages, will lead to faster growth in labor productivity is perfectly admissible in an orthodox framework. But somehow it doesn’t seem to make it into policy discussions.

In other words, lower unemployment (or a stronger bargaining position for workers more generally) will lead to an increase in the nominal wage. This will in turn increase the wage share, to the extent that it does not induce higher inflation and/or faster productivity growth:

(6e) s = (1 – b – c) w

This closure includes the first two as special cases: closure 1 if we set a = 0, b = 0, and c = 1, closure 2 if we set a = 1, b = 0, and c < 1. It’s worth framing the more general case to think clearly about the intermediate possibilities. In Shaikh’s version of the classical view, tighter labor markets are passed through entirely to a higher labor share. In the conventional view, they are passed through entirely to higher inflation. There is no reason in principle why it can’t be some to each, and some to higher productivity as well. But somehow this general case doesn’t seem to get discussed.
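A small sketch of the general case (my own, with invented parameter values). One wrinkle worth flagging: if (4e) and (5e) are read literally as simultaneous equations, y and p each depend on the other, and solving the 2×2 system gives a wage-share pass-through of (1 – b)(1 – c)/(1 – bc), which reduces to the (1 – b – c) of equation (6e) when the product bc is small:

```python
import numpy as np

# For a unit wage push w, solve (4e) y = b*(w - p) and (5e) p = c*(w - y)
# jointly, then get the wage-share change from identity (1): s = w - y - p.
# All parameter values below are invented for illustration.
def wage_share_pass_through(b, c, w=1.0):
    #   p + c*y = c*w
    # b*p +   y = b*w
    A = np.array([[1.0, c], [b, 1.0]])
    p, y = np.linalg.solve(A, np.array([c * w, b * w]))
    return w - y - p

print(wage_share_pass_through(b=0.0, c=1.0))   # closure 1 limit: 0.0, no change in the share
print(wage_share_pass_through(b=0.0, c=0.3))   # 0.7 of the wage push raises the labor share
print(wage_share_pass_through(b=0.2, c=0.3))   # ~0.596, vs 0.5 from the simpler (6e)
```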

Here is a typical example of the excluded middle in the conventional wisdom: “economic theory suggests that increases in labor costs in excess of productivity gains should put upward pressure on prices; hence, many models assume that prices are determined as a markup over unit labor costs.” Notice the leap from the claim that higher wages put some pressure on prices, to the claim that wage increases are fully passed through to higher prices. Or in terms of this last framework: theory suggests that c should be greater than zero, so let’s assume c is equal to one. One important consequence is to implicitly exclude the possibility of a change in the wage share.

*

So what do we get from this?

First, the identity itself. On one level it is obvious. But too many policy discussions — and even scholarship — talk about various forms of the Phillips curve without taking account of the logical relationship between wages, inflation, productivity and factor shares. This is not unique to this case, of course. It seems to me that scrupulous attention to accounting relationships, and to logical consistency in general, is one of the few unambiguous contributions economists make to the larger conversation with historians and other social scientists. [10]

For example: I had some back and forth with Phil Pilkington in comments and on twitter about the Jacobin piece. He made some valid points. But at one point he wrote: “Wages>inflation + productivity = trouble!” Now, wages > inflation + productivity growth just means an increasing labor share. It’s two ways of saying the same thing. But I’m pretty sure that Phil did not intend to write that an increase in the labor share always means trouble. And if he did seriously mean that, I doubt one reader in a hundred would understand it from what he wrote.

More consequentially, austerity and liberalization are often justified by the need to prevent “real unit labor costs” from rising. What’s not obvious is that “real unit labor costs” is simply another word for the labor share, since by definition the change in real unit labor costs is just the change in nominal wages less the sum of inflation and productivity growth. Felipe and Kumar make exactly this point in their critique of the use of unit labor costs as a measure of competitiveness in Europe: “unit labor costs calculated with aggregate data are no more than the economy’s labor share in total output multiplied by the price level.” As they note, one could just as well compute “unit capital costs,” whose movements would be just the opposite. But no one ever does; instead they pretend that a measure of distribution is a measure of technical efficiency.
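A trivial numerical check of the Felipe and Kumar point (the figures are invented): nominal unit labor cost, the wage bill per unit of real output, is identically the labor share times the price level.

```python
# ULC = wage bill / real output; labor share = wage bill / nominal output.
# So ULC = labor share * price level, by construction. All numbers invented.
wage_bill = 600.0    # total nominal wages
P = 1.2              # price level
Y = 1000.0           # real output

ulc = wage_bill / Y                  # 0.6
labor_share = wage_bill / (P * Y)    # 0.5
print(ulc, labor_share * P)          # 0.6 and 0.6: the same thing in two vocabularies
```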

Second, the various closures. To me the question of which behavioral relations we combine the identity with — that is, which closure we use — is not about which one is true, or best in any absolute sense. It’s about the various domains in which each applies. Probably there are periods, places, timeframes or policy contexts in which each of the five closures gives the best description of the relevant behavioral links. Economists, in my experience, spend more time working out the internal properties of formal systems than exploring rigorously where those systems apply. But a model is only useful insofar as you know where it applies, and where it doesn’t. Or as Keynes put it in a quote I’m fond of, the purpose of economics is “to provide ourselves with an organised and orderly method of thinking out particular problems” (my emphasis); it is “a way of thinking … in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Or in the words of Trygve Haavelmo, as quoted by Leijonhufvud:

There is no reason why the form of a realistic model (the form of its equations) should be the same under all values of its variables. We must face the fact that the form of the model may have to be regarded as a function of the values of the variables involved. This will usually be the case if the values of some of the variables affect the basic conditions of choice under which the behavior equations in the model are derived.

I might even go a step further. It’s not just that to use a model we need to think carefully about the domain over which it applies. It may even be that the boundaries of its domain are the most interesting thing about it. As economists, we’re used to thinking of models “from the inside” — taking the formal relationships as given and then asking what the world looks like when those relationships hold. But we should also think about them “from the outside,” because the boundaries within which those relationships hold are also part of the reality we want to understand. [11] You might think about it like laying a flat map over some curved surface. Within a given region, the curvature won’t matter; the flat map will work fine. But at some point, the divergence between trajectories on our hypothetical plane and on the actual surface will get too large to ignore. So we will want to have a variety of maps available, each of which minimizes distortions in the particular area we are traveling through — that’s Keynes’ and Haavelmo’s point. But even more than that, the points at which the map becomes unusable are precisely how we learn about the curvature of the underlying territory.

Some good examples of this way of thinking are found in the work of Lance Taylor, which often situates a variety of model closures in various particular historical contexts. I think this kind of thinking was also very common in an older generation of development economists. A central theme of Arthur Lewis’ work, for example, could be thought of in terms of poor-country labor markets that look like what I’ve called Closure 3 and rich-country labor markets that look like Closure 5. And of course, what’s most interesting is not the behavior of these two systems in isolation, but the way the boundary between them gets established and maintained.

To put it another way: Dialectics, which is to say science, is a process of moving between the concrete and the abstract — from specific cases to general rules, and from general rules to specific cases. As economists, we are used to grounding the concrete in the abstract — to treating things that happen at particular times and places as instances of a universal law. The statement of the law is the goal, the stopping point. But we can equally well ground the abstract in the concrete — treat a general rule as a phenomenon of a particular time and place.


[1] In graduate school you then learn to forget about the existence of businesses and investment, and instead explain the effect of interest rates on current spending by a change in the optimal intertemporal path of consumption by a representative household, as described by an Euler equation. This device keeps academic macroeconomics safely quarantined from contact with discussion of real economies.

[2] In the US, Okun’s law looks something like ΔU = 0.5(2.5 – g), where ΔU is the change in the unemployment rate and g is inflation-adjusted growth in GDP. These parameters vary across countries but seem to be quite stable over time. In my opinion this is one of the more interesting empirical regularities in macroeconomics. I’ve blogged about it a bit in the past and perhaps will write more in the future.
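As a toy illustration of the rule of thumb, with the rough US parameters just cited:

```python
# Back-of-envelope Okun's law: change in the unemployment rate is
# 0.5 * (2.5 - g), where g is real GDP growth in percent. The 0.5 and 2.5
# are the rough US values from this footnote; illustrative only.

def okun_delta_u(g, slope=0.5, trend=2.5):
    """Predicted change in the unemployment rate, in percentage points."""
    return slope * (trend - g)

print(okun_delta_u(2.5))  # growth at trend: unemployment unchanged (0.0)
print(okun_delta_u(4.5))  # growth 2 points above trend: unemployment falls 1 point
print(okun_delta_u(0.0))  # zero growth: unemployment rises 1.25 points
```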

[3] To see why this must be true, write L for total employment, Z for nominal GDP, W for the average wage, P for the price level, and Y for real output per worker. The labor share S is by definition equal to total wages divided by GDP:

S = WL / Z

Real output per worker is given by

Y = (Z/P) / L

Now combine the equations and we get W = P Y S. This is in levels, not changes. But recall that small percentage changes can be approximated by log differences. And if we take the log of both sides, writing the log of each variable in lowercase, we get w = y + p + s. For the kinds of changes we observe in these variables, the approximation will be very close.
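A quick numerical check of how close the approximation is, with arbitrary illustrative growth rates:

```python
# For small growth rates, the growth of W = P * Y * S is close to p + y + s.
# The rates below (2%, 1%, 0.5%) are arbitrary illustrative values.

p, y, s = 0.02, 0.01, 0.005                 # inflation, productivity growth, share change
w_exact = (1 + p) * (1 + y) * (1 + s) - 1   # exact growth of W
w_approx = p + y + s                        # log-difference approximation

print(round(w_exact, 5))   # 0.03535
print(round(w_approx, 5))  # 0.035
```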

[4] I won’t keep putting “real” in quotes. But it’s important not to uncritically accept the dominant view that nominal quantities like wages are simply reflections of underlying non-monetary magnitudes. In fact the use of “real” in this way is deeply ideological.

[5] A discovery that seems to get made over and over again is that, since an identity is true by definition, nothing needs to adjust to maintain its equality. But it certainly does not follow, as people sometimes claim, that you cannot use accounting identities to reason about macroeconomic outcomes. The point is that we are always using the identities along with some other — implicit or explicit — claims about the choices made by economic units.

[6] Note that it’s not necessary to use a labor supply curve here, or to make any assumption about the relationship between wages and marginal product.

[7] Often confused with Milton Friedman’s natural rate of unemployment. But in fact the concepts are completely different. In Friedman’s version, causality runs the other way, from the inflation rate to the unemployment rate. When realized inflation is different from expected inflation, in Friedman’s story, workers are deceived about the real wage they are being offered and so supply the “wrong” amount of labor.

[8] Why a permanently rising price level is inconsequential but a permanently rising inflation rate is catastrophic is never explained. Why are real outcomes invariant to the first derivative of the price level, but not to the second derivative? We’re never told — it’s an article of faith that money is neutral and super-neutral but not super-super-neutral. And even if one accepts this, it’s not clear why we should pick a target of 2%, or any specific number. It would seem more natural to think inflation should follow a random walk, with the central bank holding it at its current level, whatever that is.

[9] We could instead use w – p = r*, with an exogenously given rate of increase in real wages. The logic would be the same. But it seems simpler and more true to the classics to use the form in 3c. And there do seem to be domains over which constant real wages are a reasonable assumption.

[10] I was just starting grad school when I read Robert Brenner’s long article on the global economy, and one of the things that jumped out at me was that he discussed the markup and the wage share as if they were two independent variables, when of course they are just two ways of describing the same thing. Using s still as the wage share, and m as the average markup of prices over wages, s = 1 / (1 + m). This is true by definition (unless there are shares other than wages or profits, but no such shares figure in Brenner’s analysis). The markup may reflect the degree of monopoly power in product markets while the labor share may reflect bargaining power within the firm, but these are two different explanations of the same concrete phenomenon. I like to think that this is a mistake an economist wouldn’t make.
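For concreteness, a two-line illustration of the identity (the markup values are arbitrary):

```python
# With m the average markup of prices over wages and s the wage share,
# s = 1 / (1 + m) by definition. The markup values below are arbitrary.

def wage_share(m):
    """Wage share implied by an economy-wide markup m."""
    return 1 / (1 + m)

print(wage_share(0.25))  # 25% markup -> wage share 0.8
print(wage_share(0.5))   # 50% markup -> wage share ~0.667
```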

[11] The Shaikh piece mentioned above is very good. I should add, though, the last time I spoke to Anwar, he criticized me for “talking so much about the things that have changed, rather than the things that have not” — that is, for focusing so much on capitalism’s concrete history rather than its abstract logic. This is certainly a difference between Shaikh’s brand of Marxism and whatever it is I do. But I’d like to think that both approaches are called for.


EDIT: As several people pointed out, some of the equations were referred to by the wrong numbers. Also, Equation 5a and 5e had inflation-expectation terms in them that didn’t belong. Fixed.

EDIT 2: I referred to an older generation of development economists, but I think this awareness that the territory requires various different maps is still more common in development than in most other fields. I haven’t read Dani Rodrik’s new book, but based on reviews it sounds like it puts forward a pretty similar view of economics methodology.