Taking Money Seriously

(Text of a talk I delivered at the Watson Institute for International and Public Affairs at Brown University on June 17, 2024.)

There is an odd dual quality to the world around us.

Consider a building. It has one, two or many stories; it’s made of wood, brick or steel; heated with oil or gas; with doors, windows and so on. If you could disassemble the building you could make a precise quantitative description of it — so many bricks, so much length of wire and pipe, so many tiles and panes of glass.

A building also has a second set of characteristics that are not visible to the senses. Every building has an owner, who has more or less exclusive rights to the use of it. It has a price, reflected in some past or prospective sale and recorded on a balance sheet. It generates a stream of money payments: to the owner from tenants to whom the owner has delegated some of their rights; from the owner to mortgage lenders and tax authorities, and to the people whose labor keeps it operating — or to the businesses that command that labor. Like the bricks in the building’s walls or the water flowing through its pipes, these can be expressed as numbers. But unlike those physical quantities, all of these can be expressed in the same way, as dollars or other units of currency.

What is the relationship between these two sets of characteristics? Do the prices and payments simply describe or reflect the physical qualities? Or do they have their own independent existence?

My starting point is that this is a problem — that the answer is not obvious.

The relationship between money-world and the concrete social and material world is a long-standing, though not always explicit, question in the history of economic thought. A central strand in that history is the search for an answer that unifies these two worlds into one.

From the beginnings of economics down to today’s textbooks, you can find variations on the argument that money quantities and money payments are just shorthand for the characteristics and use of concrete material objects. They are neutral — mere descriptions, which can’t change the underlying things. 

In 1752, we find David Hume writing that “Money is nothing but the representation of labour and commodities… Where coin is in greater plenty; as a greater quantity of it is required to represent the same quantity of goods; it can have no effect, either good or bad.”

And at the turn of the 21st century, we hear the same thing from FOMC member Lawrence Meyer: “Monetary policy cannot influence real variables–such as output and employment.” Money, he says, only affects “inflation in the long run. This immediately makes price stability … the direct, unequivocal, and singular long-term objective of monetary policy.”

We could add endless examples in between.

This view profoundly shapes most of our thinking about the economy.

We’ve all heard that money is neutral — that changes in the supply or availability of money only affect the price level while leaving relative prices and real activity unchanged. We’ve probably encountered the Coase Theorem, which says that the way goods are allocated to meet real human needs should be independent of who holds the associated property rights. We are used to talking about “real” output and “real” interest rates without worrying too much about what they refer to.

There is, of course, also a long history of arguments on the other side — that money is autonomous, that money and credit are active forces shaping the concrete world of production and exchange, that there is no underlying value to which money-prices refer. But for the most part, these counter-perspectives occupy marginal or subterranean positions in economic theory, though they may have been influential in other domains.

The great exception is, of course, Keynes. Indeed, there is an argument that what was revolutionary about the Keynesian revolution was his break with orthodoxy on precisely this point. In the period leading up to the General Theory, he explained that the difference between the economic orthodoxy and the new theory he was seeking to develop was fundamentally the difference between the dominant vision of the economy in terms of what he called “real exchange,” and an alternative vision he described as “monetary production.”

The orthodox theory (in our day as well as his) started from an economy in which commodities exchanged for other commodities, and then brought money in at a later stage, if at all, without changing the fundamental material tradeoffs on which exchange was based. His theory, by contrast, would describe an economy in which money is not neutral, and in which the organization of production cannot be understood in nonmonetary terms. Or in his words, it is the theory of “an economy in which money plays a part of its own and affects motives and decisions and is … so that the course of events cannot be predicted, either in the long period or in the short, without a knowledge of the behavior of money.”

*

If you are fortunate enough to have been educated in the Keynesian tradition, then it’s easy enough to reject the idea that money is neutral. But figuring out how money-world and concrete social reality do connect — that is not so straightforward.

I’m currently in the final stages of writing a book with Arjun Jayadev, Money and Things, that is about exactly this question — the interface of money world with the social and material world outside of it. 

Starting from Keynes’ monetary-production vision, we explore the question of how money matters in four settings.

First, the determination of the interest rate. There is, we argue, a basic incompatibility between a theory of the interest rate as the price of saving or of time, and the monetary interest rate we observe in the real world. And once we take seriously the idea of interest as the price of liquidity, we see why money cannot be neutral — why financial conditions invariably influence the composition as well as the level of expenditure.

Second, price indexes and “real” quantities. The ubiquitous “real” quantities constructed by economists are, we suggest, at best phantom images of monetary quantities. Human productive activity is not in itself describable in terms of aggregate quantities. Obviously particular physical quantities, like the materials in this building, do exist. But there is no way to make a quantitative comparison between these heterogeneous things except on the basis of money prices — prices are not measuring any preexisting value. Prices within an exchange community are objective, from the point of view of those within the community. But there is no logically consistent procedure for comparing “real” output once you leave the boundaries of a given exchange community, whether across time or between countries.

The third area where we look at the interface of money-world and social reality is corporate finance and governance. We see the corporation as a central site of tension between the distinct social logics of money and production. Corporations are the central institutions of monetary production, but they are not themselves organized on market principles. In effect, the pursuit of profit pushes wealth owners to accept a temporary suspension of the logic of the market – but this can only be carried so far.

The fourth area is debt and capital. These two central aggregates of money-world are generally understood to reflect “real,” nonmonetary facts about the world — a mass of means of production in the case of capital, cumulated spending relative to income in the case of debt. But the actual historical evolution of these aggregates cannot, we show, be understood in this way in either case. The evolution of capital as we observe it, in the form of wealth, is driven by changes in the value of existing claims on production, rather than the accumulation of new capital goods. These valuation changes in turn reflect, first, social factors influencing the division of income between workers and owners and, second, financial factors influencing valuations of future income streams. Debt is indeed related to borrowing, in a way that capital is not related to accumulation. But changes in indebtedness over time owe as much to the interest, income and price-level changes that affect the burden of the existing debt stock as they do to new borrowing. And in any case borrowing mainly finances asset ownership, rather than the dissaving that the real-exchange vision imagines.

Even with the generous time allotted to me, I can’t discuss all four of those areas. So in this talk I will focus on the interest rate.

*

Some of what I am going to say here may seem familiar, or obvious. 

But I think it’s important to start here because it is so central to debates about money and macroeconomics. Axel Leijonhufvud long ago argued that the theory of the interest rate was at the heart of the confusion in modern macroeconomics. “The inconclusive quarrels … that drag on because the contending parties cannot agree what the issue is, largely stem from this source.” I think this is still largely true. 

Orthodoxy thinks of the interest rate as the price of savings, or loanable funds, or alternatively, as the tradeoff between consumption in the future and consumption in the present.

Interest in this sense is a fundamentally non-monetary concept. It is a relative price of two commodities, based on the same balance of scarcity and human needs that is the basis of other prices. The tradeoff between a shirt today and a shirt next year, expressed in the interest rate, is no different from the tradeoff between a cotton shirt and a linen one, or one with short versus long sleeves. The commodities just happen to be distinguished by time, rather than some other quality.

Monetary loans, in this view, are just like a loan of a tangible object. I have some sugar, let’s say. My neighbor knocks on the door, and asks to borrow it. If I lend it to them, I give up the use of it today. Tomorrow, the neighbor will return the same amount of sugar to me, plus something extra – perhaps one of the cookies they baked with it. Whatever income you receive from ownership of an asset — whether we call it interest, profit or cookies — is a reward for deferring your use of the concrete services that the asset provides.

This way of thinking about interest is ubiquitous in economics. In the early 19th century Nassau Senior described interest as the reward for abstinence, which gives it a nice air of Protestant morality. In a current textbook, in this case Gregory Mankiw’s, you can find the same idea expressed in more neutral language: “Saving and investment can be interpreted in terms of supply and demand … of loanable funds — households lend their savings to investors or deposit their savings in a bank that then loans the funds out.”

It’s a little ambiguous exactly how we are supposed to imagine these funds, but clearly they are something that already exists before the bank comes into the picture. Just as with the sugar, if their owner is not currently using them, they can lend them to someone else, and get a reward for doing so.

If you’ve studied macroeconomics at the graduate level, you probably spent much of the semester thinking about variations on this story of tradeoffs between stuff today and stuff in the future, in the form of an Euler equation equating marginal costs and benefits across time. It’s not much of an exaggeration to say that mathematically elaborated versions of this story are the contemporary macro curriculum.
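For concreteness, the canonical condition (stated here in one common notation, not quoted from any particular textbook) says that the representative household equates the marginal utility of consuming a little more today with the discounted marginal utility of the extra future consumption that saving at rate $r$ would allow:

$$ u'(c_t) = \beta\,(1 + r_t)\,\mathbb{E}_t\!\left[u'(c_{t+1})\right] $$

where $\beta$ is the household’s discount factor. Everything of interest in these models is a variation on this tradeoff between stuff today and stuff tomorrow.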

Money and finance don’t come into this story. As Mankiw says, investors can borrow from the public directly or indirectly via banks – the economic logic is the same either way. 

We might challenge this story from a couple of directions.

One criticism — first made by Piero Sraffa, in a famous debate with Friedrich Hayek about 100 years ago — is that in a non-monetary world each commodity will have its own distinct rate of interest. Let’s say a pound (or kilogram) of flour trades for 1.1 pounds (or kilograms) of flour a year from now. What will a pound or kilo of sugar today trade for? If, over the intervening year, the price of sugar rises relative to the price of flour, then a given quantity of sugar today will trade for a smaller amount of sugar a year from now than the same quantity of flour will. Unless the relative price of flour and sugar is fixed, their interest rates will be different. Flour today will trade at one rate for flour in the future, sugar at a different rate; the use of a car or a house, a kilowatt of electricity, and so on will each trade with the same thing in the future at their own rates, reflecting actual and expected conditions in the markets for each of these commodities. There’s no way to say that any one of these myriad own-rates is “the” rate of interest.
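To spell out the arbitrage behind this (in notation of my own, not Sraffa’s or Hayek’s): write $p^f_0, p^s_0$ for today’s prices of flour and sugar in some common unit of account, and $p^f_1, p^s_1$ for the prices at which they trade for delivery a year from now. Then the own-rates of interest on the two commodities must satisfy

$$ 1 + r_{\text{sugar}} \;=\; (1 + r_{\text{flour}})\,\frac{p^s_0 / p^f_0}{p^s_1 / p^f_1} $$

so whenever the relative price of sugar is expected to rise, sugar’s own-rate is below flour’s, and there is one such rate for every commodity with a forward market.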

Careful discussions of the natural rate of interest will acknowledge that it is only defined under the assumption that relative prices never change.

Another problem is that the savings story assumes that the thing to be loaned — whether it is a specific commodity or generic funds — already exists. But in the monetary economy we live in, production is carried out for sale. Things that are not purchased will not be produced. When you decide not to consume something, you don’t make that thing available for someone else. Rather, you reduce the output of it, and the income of the producers of it, by the same amount as you reduce your own consumption.

Saving, remember, is the difference between income and consumption. As an individual, you can take your income as given when deciding how much to consume. So consuming less means saving more. But at the level of the economy as a whole, income is not independent of consumption. A decision to consume less does not raise aggregate saving, it lowers aggregate income. This is the fallacy of composition emphasized by Keynes: individual decisions about consumption and saving have no effect on aggregate saving.

So the question of how the interest rate is determined is linked directly to the idea of demand constraints.

Alternatively, rather than criticizing the loanable-funds story, we can start from the other direction, from the monetary world we actually live in. Then we’ll see that credit transactions don’t involve the sort of tradeoff between present and future that orthodoxy focuses on. 

Let’s say you are buying a home.

On the day that you settle, you visit the bank to finalize your mortgage. The bank manager puts in two ledger entries. One is a credit to your account, and a liability to the bank, which we call the deposit. The other, equal and offsetting entry is a credit to the bank’s own account, and a liability for you. This is what we call the loan. The first is an IOU from the bank to you, payable at any time. The second is an IOU from you to the bank, with specified payments every month, typically, in the US, for the next 30 years. Like ordinary IOUs, these ledger entries are created simply by recording them — in earlier times this was called “fountain pen” money.

The deposit is then immediately transferred to the seller, in return for the title to the house. For the bank, this simply means changing the name on the deposit — in effect, you communicate to the bank that their debt that was payable to you is now payable to the seller. On your balance sheet, one asset has been swapped for another — the $250,000 deposit, in this case, for a house worth $250,000. The seller makes the opposite swap, of the title to a house for an equal-value IOU from the bank.

As we can see, there is no saving or dissaving here. Everyone has just swapped assets of equal value.
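Here is a minimal sketch of those two steps as balance-sheet entries, using the $250,000 from the example; the representation is mine, purely for illustration, not a description of any actual bank’s bookkeeping.

```python
# Each balance sheet is a dict of assets and liabilities, in dollars.
buyer  = {"assets": {}, "liabilities": {}}
seller = {"assets": {"house": 250_000}, "liabilities": {}}
bank   = {"assets": {}, "liabilities": {}}

# Step 1: the mortgage is made. Two offsetting entries appear at once:
# a deposit (the bank's IOU to the buyer) and a loan (the buyer's IOU to the bank).
buyer["assets"]["deposit"]              = 250_000
buyer["liabilities"]["mortgage"]        = 250_000
bank["assets"]["mortgage"]              = 250_000
bank["liabilities"]["deposit to buyer"] = 250_000

# Step 2: the purchase. The title and the deposit change hands; for the bank
# this is only a change in whom its deposit liability is payable to.
buyer["assets"].pop("deposit")
buyer["assets"]["house"] = seller["assets"].pop("house")
seller["assets"]["deposit"] = bank["liabilities"].pop("deposit to buyer")
bank["liabilities"]["deposit to seller"] = 250_000

# Every step is an equal-value swap: no one's net worth has changed.
for name, bs in (("buyer", buyer), ("seller", seller), ("bank", bank)):
    net = sum(bs["assets"].values()) - sum(bs["liabilities"].values())
    print(f"{name}: {bs}  net worth: {net:,}")
```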

This mortgage is not a loan of preexisting funds or of anything else. No one had to first make a deposit at the bank in order to allow them to make this loan. The deposit — the money — was created in the process of making the loan itself. Banking does not channel saving to borrowers, as in the loanable-funds view, but allows a swap of promises.

One thing I always emphasize to my students: You should not talk about putting money in the bank. The bank’s record is the money.

On one level this is common knowledge. I am sure almost everyone in this room could explain how banks create money. But the larger implications are seldom thought through. 

What did this transaction consist of? A set of promises. The bank made a promise to you, the borrower, and you made a promise to the bank. And then the bank’s promise was transferred to the seller, who can transfer it to some third party in turn.

The reason that the bank is needed here is that you cannot make a credible promise directly to the seller.

You are willing to make a promise of future payments whose present value is greater than the value the seller puts on their house. Accepting that deal would make both sides better off. But you can’t close that deal, because your promise of payments over the next 30 years is not credible. They don’t know if you are good for it. They don’t have the ability to enforce it. And even if they trust you, maybe because you’re related or have some other relationship, other people do not. So the seller can’t turn your promise of payment into an immediate claim on other things they might want.

Orthodox theory starts from the assumption that everyone can freely contract over income and commodities at any date in the future. That familiar Euler equation is based on the idea that you can allocate your income from any future period to consumption in the present, or vice versa. That is the framework within which the interest rate looks like a tradeoff between present and future. But you can’t understand interest in a framework that abstracts away from precisely the function that money and credit play in real economies.

The fundamental role of a bank, as Hyman Minsky emphasized, is not intermediation but acceptance. Banks function as third parties who broaden the range of transactions that can take place on the basis of promises. You are willing to commit to a flow of money payments to gain legal rights to the house. But that is not enough to acquire the house. The bank, on the other hand, precisely because its own promises are widely trusted, is in a position to accept a promise from you.

Interest is not paid because consumption today is more desirable than consumption in the future. Interest is paid because credible promises about the future are hard to make. 

*

The cost of the mortgage loan is not that anyone had to postpone their spending. The cost is that the balance sheets of both transactors have become less liquid.

We can think of liquidity in terms of flexibility — an asset or a balance sheet position is liquid insofar as it broadens your range of options. Less liquidity means fewer options.

For you as a homebuyer, the result of the transaction is that you have committed yourself to a set of fixed money payments over the next 30 years, and acquired the legal rights associated with ownership of a home. These rights are presumably worth more to you than the rental housing you could acquire with a similar flow of money payments. But title to the house cannot easily be turned back into money and thereby into claims on other parts of the social product. Home ownership involves — for better or worse — a long-term commitment to live in a particular place. The tradeoff the homebuyer makes by borrowing is not more consumption today in exchange for less consumption tomorrow. It is a higher level of consumption today and tomorrow, in exchange for reduced flexibility in their budget and in where they will live. Both the commitment to make the mortgage payments and the non-fungibility of home ownership leave less leeway to adapt to unexpected future developments.

On the other side, the bank has added a deposit liability, which requires payment at any time, and a mortgage asset, which in itself promises payment only on a fixed schedule in the future. This likewise reduces the bank’s freedom of maneuver. They are exposed not only to the risk that the borrower will not make payments, but also to the risk of capital loss if interest rates rise during the period they hold the mortgage, and to the risk that the mortgage will not be saleable in an emergency, or only at an unexpectedly low price. As real-world examples like the recent one of Silicon Valley Bank show, these latter risks may in practice be much more serious than the default risk. The cost to the bank of making the loan is that its balance sheet becomes more fragile.

Or as Keynes put it in a 1937 article, “The interest rate … can be regarded as being determined by the interplay of the terms on which the public desires to become more or less liquid and those on which the banking system is ready to become more or less unliquid.”

Of course in the real world things are more complicated. The bank does not need to wait for the mortgage payments to be made at the scheduled time. It can transfer the mortgage to a third party, trading off some of the income it expected for a more liquid position. The buyer might be some other financial institution looking for a position farther toward the income end of the liquidity-income tradeoff, perhaps with multiple layers of balance sheets in between. Or the buyer might be the professional liquidity-providers at the central bank.

Incidentally, this is an answer to a question that people don’t ask often enough: How is it that the central bank is able to set the interest rate at all? The central bank plays no part in the market for loanable funds. But central banks are very much in the liquidity business. 

It is monetary policy, after all, not savings policy.  

One thing this points to is that there is no fundamental difference between routine monetary policy and the central bank’s role as a lender of last resort and a regulator. All of these activities are about managing the level of liquidity within the financial system. How easy is it to meet your obligations? Too hard, and the web of obligations breaks. Too easy, and the web of money obligations loses its ability to shape our activity, and no longer serves as an effective coordination device.

As the price of money — the price for flexibility in making payments as opposed to fixed commitments — the interest rate is a central parameter of any monetary economy. The metaphor of “tight” or “loose” conditions for high or low interest rates captures an important truth about the connection between interest and the flexibility or rigidity of the financial system. High interest rates correspond to a situation in which promises of future payment are worth less in terms of command over resources today. When it’s harder to gain control over real resources with promises of future payment, the pattern of today’s payments is more tightly linked to yesterday’s income. Conversely, low interest rates mean that a promise of future payments goes a long way in securing resources today. Claims on real resources therefore depend less on incomes in the past, and more on beliefs about the future. And because interest rate changes always come in an environment of preexisting money commitments, interest also acts as a scaling variable, reweighting the claims of creditors against the income of debtors.

*

In addition to credit transactions, the other setting in which interest appears in the real world is in the price of existing assets.

A promise of money payments in the future becomes an object in its own right, distinct from those payments themselves. I started out by saying that all sorts of tangible objects have a shadowy double in money-world. But a flow of money payments can also acquire a phantom double. A promise of future payment creates a new property right, with its own owner and market price.

When we focus on that fact, we see an important role for convention in the determination of interest. To some important extent, bond prices – and therefore interest rates – are what they are, because that is what market participants expect them to be. 

A corporate bond promises a set of future payments. It’s easy, in a theoretical world of certainty, to talk as if the bond just is those future payments. But it is not.

This is not just because it might default, which is easy to incorporate into the model. It’s not just because any real bond was issued in a certain jurisdiction, and conveys rights and obligations beyond payment of interest — though these other characteristics always exist and can sometimes be important. It’s because the bond can be traded, and has a price which can change independently of the stream of future payments.

If interest rates fall, your bond’s price will rise — and that possibility itself is a factor in the price of the bond.

This helps explain a widely acknowledged anomaly in financial markets. The expectations hypothesis says that the interest rate on a longer bond should be the same as the average of expected shorter rates over the same period, or at least that the two should be related by a stable term premium. This seems like a straightforward arbitrage, but it fails completely, even in its weaker form.

The answer to this puzzle is an important part of Keynes’ argument in The General Theory. Market participants are not just interested in the two payment streams. They are interested in the price of the long bond itself.

Remember, the price of an asset always moves inversely with its yield. When rates on a given type of credit instrument go up, the price of that instrument falls. Now let’s say it’s widely believed that a 10-year bond is unlikely to trade below 2 percent for very long. Then you would be foolish to buy it at a yield much below 2 percent, because you are going to face a capital loss when yields return to their normal level. And if most people believe this, then the yield never will fall below 2 percent, no matter what happens with short rates.
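A toy calculation of the capital loss behind that reasoning, with purely illustrative numbers of my own:

```python
def bond_price(face, coupon_rate, years, ytm):
    """Price of an annual-coupon bond: present value of coupons plus principal."""
    coupon = face * coupon_rate
    return sum(coupon / (1 + ytm) ** t for t in range(1, years + 1)) + face / (1 + ytm) ** years

# Buy a 10-year, 2% coupon bond at a 1% yield, i.e. well above par...
buy = bond_price(100, 0.02, 10, 0.01)
# ...and suppose a year later the yield is back at the 2% level that market
# opinion regards as normal (9 years now remain to maturity).
sell = bond_price(100, 0.02, 9, 0.02)
print(f"bought at {buy:.2f}, sold at {sell:.2f}, capital loss {buy - sell:.2f}")
# The loss (about 9.5) dwarfs the single coupon of 2 received along the way,
# which is why, if this opinion is widely held, the yield never falls that far.
```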

In a real world where the future is uncertain and monetary commitments have their own independent existence, there is an important sense in which interest rates, especially longer ones, are what they are because that’s what people expect them to be.

One important implication of this is that we cannot think of various market interest rates as simply “the” interest rate, plus a risk premium. Different interest rates can move independently for reasons that have nothing to do with credit risk. 

*

On the one hand, we have a body of theory built up on the idea of “the” interest rate as a tradeoff between present and future consumption. On the other, we have actual interest rates, set in the financial system in quite different ways.

People sometimes try to square the circle with the idea of a natural rate. Yes, they say, we know about liquidity and the term premium and the importance of different kinds of financial intermediaries and regulation and so on. But we still want to use the intertemporal model we were taught in graduate school. We reconcile this by treating the model as an analysis of what the interest rate ought to be. Yes, banks set interest rates in all kinds of ways, but there is only one interest rate consistent with stable prices and, more broadly, appropriate use of society’s resources. We call this the natural rate.

This idea was first formulated around the turn of the 20th century by the Swedish economist Knut Wicksell. But the most influential modern statement comes from Milton Friedman. He introduces the natural rate of interest, along with its close cousin the natural rate of unemployment, in his 1968 Presidential Address to the American Economic Association, which has been described as the most influential paper in economics since World War II. The natural rates there correspond to the rates that would be “ground out by the Walrasian system of general equilibrium equations, provided there is imbedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the cost of gathering information … and so on.”

The appeal of the concept is clear: It provides a bridge between the nonmonetary world of intertemporal exchange of economic theory, and the monetary world of credit contracts in which we actually live. In so doing, it turns the intertemporal story from a descriptive one to a prescriptive one — from an account of how interest rates are determined, to a story about how central banks should conduct monetary policy.

Fed Chair Jerome Powell gave a nice example of how central bankers think of the natural rate in a speech a few years ago. He introduces the natural interest rate R* with the statement that “In conventional models of the economy, major economic quantities … fluctuate around values that are considered ‘normal,’ or ‘natural,’ or ‘desired.’” R* reflects “views on the longer-run normal values for … the federal funds rate” which are based on “fundamental structural features of the economy.”

Notice the confusion here between the terms normal, natural and desired, three words with quite different meanings. R* is apparently supposed to be the long-term average interest rate, and the interest rate that we would see in a world governed only by fundamentals, and the interest rate that delivers the best policy outcomes.

This conflation is a ubiquitous and essential feature of discussions of the natural rate. Like the controlled slipping between the two disks of a clutch in a car, it allows systems moving in quite different ways to be joined up without either side fracturing from the stress. The ambiguity between these distinct meanings is itself normal, natural and desired.

The ECB gives perhaps an even nicer statement: “At its most basic level, the interest rate is the ‘price of time’ — the remuneration for postponing spending into the future.” R* corresponds to this. It is a rate of interest determined by purely non-monetary factors, which should be unaffected by developments in the financial system. Unfortunately, the actual interest rate may depart from this. In that case, the natural rate, says the ECB, “while unobservable … provides a useful guidepost for monetary policy.”

I love the idea of an unobservable guidepost. It perfectly distills the contradiction embodied in the idea of R*. 

As a description of what the interest rate is, a loanable-funds model is merely wrong. But when it’s turned into a model of the natural rate, it isn’t even wrong. It has no content at all. There is no way to connect any of the terms in the model with any observable fact in the world. 

Go back to Friedman’s formulation, and you’ll see the problem: We don’t possess a model that embeds all the “actual structural characteristics” of the economy. For an economy whose structures evolve in historical time, it doesn’t make sense to even imagine such a thing. 

In practice, the short-run natural rate is defined as the one that results in inflation being at target — which is to say, whatever interest rate the central bank prefers.

The long-run natural rate is commonly defined as the real interest rate where “all markets are in equilibrium and there is therefore no pressure for any resources to be redistributed or growth rates for any variables to change.” In this hypothetical steady state, the interest rate depends only on the same structural features that are supposed to determine long-term growth — the rate of technical progress, population growth, and households’ willingness to defer consumption.
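In the textbook growth model that stands behind this definition (my gloss, not a quotation from any of the sources above), the steady-state version is the familiar Ramsey rule:

$$ r^{*} = \rho + \theta\, g $$

where $\rho$ is the household rate of time preference, $\theta$ the inverse of the intertemporal elasticity of substitution, and $g$ the long-run growth rate of consumption per head; population growth enters through its effect on economy-wide growth. Nothing about banks, balance sheets or liquidity appears anywhere in the expression.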

But there is no way to get from the short run to the long run. The real world is never in a situation where all markets are in equilibrium. Yes, we can sometimes identify long-run trends. But there is no reason to think that the only variables that matter for those trends are the ones we have chosen to focus on in a particular class of models. All those “actual structural characteristics” continue to exist in the long run.

The most we can say is this: As long as there is some reasonably consistent relationship between the policy interest rate set by the central bank and inflation, or whatever its target is, then there will be some level of the policy rate that gets you to the target. But there’s no way to identify that with “the interest rate” of a theoretical model. The current level of aggregate spending in the economy depends on all sorts of contingent, institutional factors, on sentiment, on choices made in the past, on the whole range of government policies. If you ask, what policy interest rate is most likely to move inflation toward 2 percent, all that stuff matters just as much as the supposed fundamentals.

The best you can do is set the policy rate by whatever rule of thumb or process you prefer, and then after the fact say that there must be some model where that would be the optimal choice. 

Michael Woodford is the author of Interest and Prices, one of the most influential efforts to incorporate monetary policy into a modern macroeconomic model. He pretty explicitly acknowledges that’s what he was doing — trying to backfill a theory to explain the choices that central banks were already making.

*

What are the implications of this?

First, with regard to monetary policy, we should acknowledge that it involves political choices made to achieve a variety of often conflicting social goals, as Ben Braun and others have written about very insightfully.

Second, recognizing that interest is the price of liquidity, set in financial markets, is important for how we think about sovereign debt.

There’s a widespread story about fiscal crises that goes something like this. First, a government’s fiscal balance (surplus or deficit) over time determines its debt-GDP ratio. If a country has a high debt to GDP, that’s the result of overspending relative to tax revenues. Second, the debt ratio determines market confidence; private investors do not want to buy the debt of a country that has already issued too much. Third, the state of market confidence determines the interest rate the government faces, or whether it can borrow at all. Fourth, there is a clear line where high debt and high interest rates make debt unsustainable; austerity is the unavoidable requirement once that line is passed. And finally, when austerity restores debt sustainability, that will contribute to economic growth.

Alberto Alesina was among the most vigorous promoters of this story, but it’s a very common one.

If you accept the premises, the conclusions follow logically. Even better, they offer the satisfying spectacle of public-sector hubris meeting its nemesis. But when we look at debt as a monetary phenomenon, we see that its dynamics don’t run along such well-oiled tracks.

First of all, as a historical matter, differences in growth, inflation and interest rates are at least as important as the fiscal position in determining the evolution of the debt ratio over time. Where debt is already high, moderately slower growth or higher interest rates can easily raise the debt ratio faster than even very large surpluses can reduce it – as many countries subject to austerity have discovered. Conversely, rapid economic growth and low interest rates can lead to very large reductions in the debt ratio without the government ever running surpluses, as in the US and UK after World War II. More recently, Ireland reduced its debt-GDP ratio by 20 points in just five years in the mid-1990s while continuing to run substantial deficits, thanks to very fast growth of the “Celtic tiger” period. 
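The accounting identity behind these examples, written in standard notation rather than anything specific to the book: with nominal interest rate $i$ on the debt, real growth $g$, inflation $\pi$, and a primary balance $pb$ as a share of GDP, the debt ratio evolves as

$$ d_{t} \;=\; \frac{1+i}{(1+g)(1+\pi)}\; d_{t-1} \;-\; pb_{t} $$

When nominal growth exceeds the interest rate, the first term shrinks the ratio year after year even with primary deficits; when it falls short, the ratio can rise despite large surpluses, and the more so the higher $d$ already is.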

At the second step, market demand for government debt is clearly not an “objective” assessment of the fiscal position, but reflects broader liquidity conditions and the self-confirming conventional expectations of speculative markets. The claim that interest rates reflect the soundness or otherwise of public budgets runs up against a glaring problem: the financial markets that recoil from a country’s bonds one day were usually buying them eagerly the day before. The same markets that sent interest rates on Spanish, Portuguese and Greek bonds soaring in 2010 were the ones snapping up their public and private debt at rock-bottom rates in the mid-2000s. And they’re the same markets that have returned to buying those countries’ debt at historically low rates today, even as their debt ratios, in many cases, remain very high.

People like Alesina got hopelessly tangled up on this point. They wanted to insist both that post-crisis interest rates reflected an objective assessment of the state of public finances, and that the low rates before the crisis were the result of a speculative bubble. But you can’t have it both ways.

This is not to say that financial markets are never a constraint on government budgets. For most of the world, which doesn’t enjoy the backstop of a Fed or ECB, they very much are. But we should never imagine that financial conditions are an objective reflection of a country’s fiscal position, or of the balance of savings and investment. 

The third big takeaway, maybe the biggest one, is that money is never neutral.

If the interest rate is a price, what it is a price of is not “saving” or the willingness to wait. It is not “remuneration for deferring spending,” as the ECB has it. Rather, it is the price of the capacity to make and accept promises. And where this capacity really matters is where finance is used not just to rearrange claims on existing assets and resources, but to organize the creation of new ones. The technical advantages of long-lived means of production and specialized organizations can only be realized if people are in a position to make long-term commitments. And in a world where production is organized mainly through money payments, that in turn depends on the degree of liquidity.

There are, at any moment, an endless number of ways some part of society’s resources could be reorganized so as to generate greater incomes and, hopefully, use values. You could open a restaurant, or build a house, or get a degree, or write a computer program, or put on a play. The physical resources for these activities are not scarce; the present value of the income they can generate exceeds their costs at any reasonable discount rate. What is scarce is trust. You, starting on a project, must exercise a claim on society’s resources now; society must accept your promise of benefits later. The hierarchy of money allows participants in various collective projects to substitute trust in a third party for trust in each other. But trust is still the scarce resource.

Within the economy, some activities are more trust-intensive, or liquidity-constrained, than others.

Liquidity is more of a problem when there is a larger separation between outlays and rewards, and when rewards are more uncertain.

Liquidity is more of a problem when the scale of the outlay required is larger.

Liquidity and trust are more important when decisions are irreversible.

Trust is more important when something new is being done.

Trust is more scarce when we are talking about coordination between people without any prior relationship.

These are the problems that money and credit help solve. Abundant money does not just lead people to pay more for the same goods. It shifts their spending toward things that require bigger upfront payments and longer-term commitments, and that are riskier.

I was listening to an interview with an executive from a wind-power company on the Odd Lots podcast the other day. “We like to say that our fuel is free,” he said. “But really, our fuel is the cost of capital.” The interest rate matters more for wind power than for gas or coal, because the costs must be paid almost entirely up front, as opposed to when the power is produced.
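A rough illustration of the executive’s point, with made-up numbers rather than anything from the interview: compare the present value of total costs for an upfront-heavy project and a fuel-heavy project delivering the same power, at a low and a high cost of capital.

```python
def pv_cost(upfront, annual_cost, years, r):
    """Present value of a project's lifetime costs at discount rate r."""
    return upfront + sum(annual_cost / (1 + r) ** t for t in range(1, years + 1))

YEARS = 25
for r in (0.02, 0.08):
    # Hypothetical cost structures for the same energy output: the wind-like
    # project is nearly all upfront capital, the gas-like one mostly ongoing fuel.
    wind_like = pv_cost(upfront=2000, annual_cost=20, years=YEARS, r=r)
    gas_like  = pv_cost(upfront=500, annual_cost=100, years=YEARS, r=r)
    print(f"r = {r:.0%}: wind-like cost {wind_like:,.0f}, gas-like cost {gas_like:,.0f}")
# At 2% the upfront-heavy project is the cheaper way to get the power;
# at 8% the ranking reverses, because its costs cannot be deferred.
```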

When costs and returns are close together, credit is less important.

In settings where ongoing relationships exist, money is less important as a coordinating mechanism. Markets are for arm’s-length transactions between strangers.

Minsky’s version of the story emphasizes that we have to think about money in terms of two sets of prices: those of current production and those of long-lived assets. Long-lived assets must be financed – acquiring one typically requires committing to a series of future payments. So their price is sensitive to the availability of money. An increase in the money supply — contra Hume, contra Meyer — does not raise all prices in unison. It disproportionately raises the price of long-lived assets, encouraging production of them. And it is long-lived assets that are the basis of modern industrial production.

The relative value of capital goods, and the choice between more and less capital-intensive production techniques, depends on the rate of interest. Capital goods – and the corporations and other long-lived entities that make use of them – are by their nature illiquid. The willingness of wealth owners to commit their wealth to these forms depends, therefore, on the availability of liquidity. We cannot analyze conditions of production in non-monetary terms first and then afterward add money and interest to the story. Conditions of production themselves depend fundamentally on the network of money payments and commitments that structure them, and on how flexible that network is.

*

Taking money seriously requires us to reconceptualize the real economy. 

The idea of the interest rate as the price of saving assumes, as I mentioned before, that output already exists to be either consumed or saved. Similarly, the idea of interest as an intertemporal price — the price of time, as the ECB has it — implies that future output is already determined, at least probabilistically. We can’t trade off current consumption against future consumption unless future consumption already exists for us to trade.

Wicksell, who did as much as anyone to create the natural-rate framework of today’s central banks, captured this aspect of it perfectly when he compared economic growth to wine barrels aging in the cellar. The wine is already there. The problem is just deciding when to open the barrels — you would like to have some wine now, but you know the wine will get better if you wait.

In policy contexts, this corresponds to the idea of a level of potential output (or full employment) that is given from the supply side. The productive capacity of the economy is already there; the most that money, or demand, can accomplish is managing aggregate spending so that production stays close to that capacity.

This is the perspective from which someone like Lawrence Meyer, or Paul Krugman for that matter, says that monetary policy can only affect prices in the long run. They assume that potential output is already given.

But one of the big lessons we have learned from the past 15 years of macroeconomic instability is that the economy’s productive potential is much more unstable, and much less certain, than economists used to think. We’ve seen that the labor force grows and shrinks in response to labor market conditions. We’ve seen that investment and productivity growth are highly sensitive to demand. If a lack of spending causes output to fall short of potential today, potential will be lower tomorrow. And if the economy runs hot for a while, potential output will rise.

We can see the same thing at the level of individual industries. One of the most striking, and encouraging, developments of recent years has been the rapid fall in costs for renewable energy generation. It is clear that this fall in costs is the result, as much as the cause, of the rapid growth in spending on these technologies. And that in turn is largely due to successful policies to direct credit to those areas.

A perspective that sees money as epiphenomenal to the “real economy” of production would have ruled out that possibility.

This sort of learning by doing is ubiquitous in the real world. Economists prefer to assume decreasing returns only because that’s an easy way to get a unique market equilibrium. 

This is one area where formal economics and everyday intuition diverge sharply. Ask someone whether they think that buying more of something, or making more of something, will cause the unit price to go up or down. If you reserve a block of hotel rooms, will the rooms be cheaper or more expensive than if you reserve just one? And then think about what this implies about the slope of the supply curve.

There’s a wonderful story by the great German-Mexican writer B. Traven called “Assembly Line.” The story gets its subversive humor from a confrontation between an American businessman, who takes it for granted that costs should decline with output, and a village artisan who insists on actually behaving like the textbook producer in a world of decreasing returns.

In modern economies, if not in the village, the businessman’s intuition is correct. Increasing returns are very much the normal case. This means that multiple equilibria and path dependence are the rule. And — bringing us back to money — that means that what can be produced, and at what cost, is a function of how spending has been directed in the past. 

Taking money seriously, as its own autonomous social domain, means recognizing that social and material reality is not like money. We cannot think of it in terms of a set of existing objects to be allocated, between uses or over time. Production is not a quantity of capital and a quantity of labor being combined in a production function. It is organized human activity, coordinated in a variety of ways, aimed at open-ended transformation of the world whose results are not knowable in advance.

On the negative side, this means we should be skeptical about any economic concept described as “natural” or “real.” These are very often attempts to smuggle in a vision of a non-monetary economy fundamentally different from our own, or to disguise a normative claim as a positive one, or both.

For example, we should be cautious about “real” interest rates. This term is ubiquitous, but it implicitly suggests that the underlying transaction is a swap of goods today for goods tomorrow, which just happens to take monetary form. But in fact it’s a swap of IOUs — one set of money payments for another. There’s no reason that the relative price of money versus commodities would come into it. 
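Concretely, the “real” rate in question is just the familiar construction

$$ r_{\text{real}} \;\approx\; i - \pi $$

a nominal contract rate $i$ with some measure of inflation $\pi$ subtracted after the fact. The subtraction is done by the analyst; nothing in the loan contract itself refers to $\pi$.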

And in fact, when we look historically, before the era of inflation-targeting central banks there was no particular relationship between inflation and interest rates.

We should also be skeptical of the idea of real GDP, or the price level. That’s another big theme of the book, but it’s beyond the scope of today’s talk.

On the positive side, this perspective is, I think, essential preparation for exploring when and in what contexts finance matters for production. Obviously, in reality, most production is coordinated in non-market ways, both within firms — which are planned economies internally — and through various forms of economy-wide planning. But there are also cases where the distribution of monetary claims through the financial system is very important. Understanding which specific activities are credit-constrained, and in what circumstances, seems like an important research area to me, especially in the context of climate change.

*

Let me mention one more direction in which I think this perspective points us.

As I suggested, the idea of the interest rate as the price of time, and the larger real-exchange vision of which it is part, treats money flows and aggregates as stand-ins for an underlying nonmonetary real economy. People who take this view tend not to be especially concerned with exactly how the monetary values are constructed. Which rate, out of the complex of interest rates, is “the” interest rate? Which of the various possible inflation rates, and over what period, do we subtract to get the “real” interest rate? What payments exactly are included in GDP, and what do we do if that changes, or if it’s different in different countries?

If we think of the monetary values as just proxies for some underlying “real” value, the answers to these questions don’t really matter. 

I was reading a paper recently that used the intensity of nighttime illumination across the Earth’s surface as an alternative measure of real output. It’s an interesting exercise. But obviously, if that’s the spirit in which you are approaching GDP, you don’t worry about how the value of financial services is calculated, or on what basis we are imputing the services of owner-occupied housing. The number produced by the BEA is just another proxy for the true value of real output, which you can approximate in all kinds of other ways.

On the other hand, if you think that the money values are what is actually real — if you don’t think they are proxies for any underlying material quantity — then you have to be very concerned with the way they are calculated. If the interest rate really does mean the payments on a loan contract, and not some hypothetical exchange rate between the past and the future, then you have to be clear about which loan contract you have in mind.

Along the same lines, most economists treat the object of inquiry as the underlying causal relationships in the economy, those “fundamental structural characteristics” that are supposed to be stable over time. Recall that the natural rate of interest is explicitly defined with respect to a long run equilibrium where all macroeconomic variables are constant, or growing at a constant rate. If that’s how you think of what you are doing, then specific historical developments are interesting at most as case studies, or as motivations for the real work, which consists of timeless formal models.

But if we take money seriously, then we don’t need to postulate this kind of underlying deep structure. If we don’t think of interest in terms of a tradeoff between the present and the future, then we don’t need to think of future income and output as being in any sense already determined. And if money matters for the activity of production, both as financing for investment and as demand, then there is no reason to think the actual evolution of the economy can be understood in terms of a long-run trend determined by fundamentals. 

The only sensible object of inquiry in this case is particular events that have happened, or might happen. 

Approaching our subject this way means working in terms of the variables we actually observe and measure. If we study GDP, it is GDP as the national accountants actually define it and measure it, not “output” in the abstract. These variables are generally monetary. 

It means focusing on explanations for specific historical developments, rather than modeling the behavior of “the economy” in the abstract.

It means elevating descriptive work over the kinds of causal questions that economists usually ask. Which means broadening our empirical toolkit away from econometrics. 

These methodological suggestions might seem far removed from alternative accounts of the interest rate. But as Arjun and I have worked on this book, we’ve become convinced that the two are closely related. Taking money seriously, and rejecting conventional ideas of the real economy, have far-reaching implications for how we do economics.  

Recognizing that money is its own domain allows us to see productive activity as an open-ended historical process, rather than a static problem of allocation. By focusing on money, we can get a clearer view of the non-monetary world — and, hopefully, be in a better position to change it. 

A new macroeconomics?

UPDATE: The video of this panel is here.

[On Friday, July 2, I am taking part in a panel organized by Economics for Inclusive Prosperity on “A new macroeconomics?” This is my contribution.]

Jón Steinsson wrote up some thoughts about the current state of macroeconomics. He begins:

There is a narrative within our field that macroeconomics has lost its way. While I have some sympathy with this narrative, I think it is a better description of the field 10 years ago than of the field today. Today, macroeconomics is in the process of regaining its footing. Because of this, in my view, the state of macroeconomics is actually better than it has been for quite some time.

I can’t help but be reminded of Olivier Blanchard’s 2008 article on the state of macroeconomics, which opened with a flat assertion that “the state of macro is good.” I am not convinced today’s positive assessment is going to hold up better than that one. 

Where I do agree with Jón is that empirical work in macro is in better shape than theory. But I think theory is in much worse shape than he thinks. The problem is not some particular assumptions. It is the fundamental approach.

We need to be brutally honest: What is taught in today’s graduate programs as macroeconomics is entirely useless for the kinds of questions we are interested in. 

I have in front of me the macro comp from a well-regarded mainstream economics PhD program. The comp starts with the familiar Euler equation with a representative agent maximizing their utility from consumption over an infinite future. Then we introduce various complications — instead of a single good we have a final and intermediate good, we allow firms to have some market power, we introduce random variation in the production technology or markup. The problem at each stage is to find the optimal path chosen by the representative household under the new set of constraints.

This is what macroeconomics education looks like in 2021. I submit that it provides no preparation whatsoever for thinking about the substantive questions we are interested in. It’s not that this or that assumption is unrealistic. It is that there is no point of contact between the world of these models and the real economies that we live in.

I don’t think that anyone in this conversation reasons this way when they are thinking about real economic questions. If you are asked how serious inflation is likely to be over the next year, or how much of a constraint public debt is on public spending, or how income distribution is likely to change based on labor market conditions, you will not base your answer on some kind of vaguely analogous questions about a world of rational households optimizing the tradeoff between labor and consumption over an infinite future. You will answer it based on your concrete institutional and historical knowledge of the world we live in today. 

To be sure, once you have come up with a plausible answer to a real world question, you can go back and construct a microfounded model that supports it. But so what? Yes, with some ingenuity you can get a plausible Keynesian multiplier out of a microfounded model. But in terms of what we actually know about real economies, we don’t learn anything from the exercise that the simple Keynesian multiplier didn’t already tell us.

The heterogeneous agent models that Jón talks about are to me symptoms of the problem, not signs of progress. You start with a fact about the world that we already knew, that consumption spending is sensitive to current income. Then you backfill a set of microfoundations that lead to that conclusion. The model doesn’t add anything, it just gets you back to your starting point, with a lot of time and effort that you could have been using elsewhere. Why not just start from the existence of a marginal propensity to consume well above zero, and go forward from there?

Then on the other hand, think about what is not included in macroeconomics education at the graduate level. Nothing about national accounting. Nothing about policy. Nothing about history. Nothing about the concrete institutions that structure real labor and product markets.

My personal view is that we need to roll back the clock at least 40 years, and throw out the whole existing macroeconomics curriculum. It’s not going to happen tomorrow, of course. But if we want a macroeconomics that can contribute to public debates, that should be what we’re aiming for.

What should we be doing instead? There is no fully-fledged alternative to the mainstream, no heterodox theory that is ready to step in to replace the existing macro curriculum. Still, we don’t have to start from scratch. There are fragments, or building blocks, of a more scientific macroeconomics scattered around. We can find promising approaches in work from earlier generations, work in the margins of the profession, and work being done by people outside of economics, in the policy world, in finance, in other social sciences.  

This work, it seems to me, shares a number of characteristics.

First, it is in close contact with broader public debates. Macroeconomics exists not to study “the economy” in the abstract — there isn’t any such thing — but to help us address concrete problems with the economies that we live in. The questions of what topics are important, what assumptions are reasonable, what considerations are relevant, can only be answered from a perspective outside of theory itself. A useful macroeconomic theory cannot be an axiomatic system developed from first principles. It needs to start with the conversations among policymakers, business people, journalists, and so on, and then generalize and systematize them. 

A corollary of this is that we are looking not for a general model of the economy, but for a lot of specialized models suited to particular questions.

Second, it has national accounting at its center. Physical scientists spend an enormous amount of time refining and mastering their data collection tools. For macroeconomics, that means the national accounts, along with other sources of macro data. A major part of graduate education in economics should be gaining a deep understanding of existing accounting and data collection practices. If models are going to be relevant for policy or empirical work, they need to be built around the categories of macro data. One of the great vices of today’s macroeconomics is to treat a variable in a model as equivalent to a similarly-named item in the national accounts, even when they are defined quite differently.

Third, this work is fundamentally aggregative. The questions that macroeconomics asks involve aggregate variables like output, inflation, the wage share, the trade balance, etc. No matter how it is derived, the operational content of the theory is a set of causal relationships between these aggregate variables. You can certainly shed light on relationships between aggregates using micro data. But the questions we are asking always need to be posed in terms of observable aggregates. The disdain for “reduced form” models is something we have to rid ourselves of. 

Fourth, it is historical. There are few if any general laws for how “an economy” operates; what there are, are patterns that are more or less consistent over a certain span of time and space. Macroeconomics is also historical in a second sense: It deals with developments that unfold in historical time. (This, among other reasons, is why the intertemporal approach is fundamentally unsuitable.) We need fewer models of “the” business cycle, and more narrative descriptions of individual cycles. This requires a sort of figure-ground reversal in our thinking — instead of seeing concrete developments as case studies or tests of models, we need to see models as embedded in concrete stories. 

Fifth, it is monetary. The economies we live in are organized around money commitments and money flows, and most of the variables we are interested in are defined and measured in terms of money. These facts are not incidental. A model of a hypothetical non-monetary economy is not going to generate reliable intuitions about real economies. Of course it is sometimes useful to adjust money values for inflation, but it’s a bad habit to refer to the resulting quantities as “real” — it suggests that there is some objective quantity lying behind the monetary one, which is in no way the case.

In my ideal world, a macroeconomics education would proceed like this. First, here are the problems the external world is posing to us — the economic questions being asked by historians, policy makers, the business press. Second, here is the observable data relevant to those questions, and here is how the variables are defined and measured. Third, here is how those observables have evolved in some important historical cases. Fourth, here are some general patterns that seem to hold over a certain range — and, just as important, here is the range where they don’t. Finally, here are some stories that might explain those patterns, stories that are plausible given what we know about how economic activity is organized.

Well, that’s my vision. Does it have anything to do with a plausible future of macroeconomics?

I certainly don’t expect established macroeconomists to throw out the work they’ve been doing their whole careers. Among younger economists, at least those whose interest in the economy is not strictly professional, I do think there is a fairly widespread recognition that macroeconomic theory is at an intellectual dead end. But the response is usually to do basically atheoretical empirical work, or go into a different field, like labor, where the constraints on theory are not so rigid. Then there is the heterodox community, which I come out of. I think there has been a great deal of interesting and valuable work within heterodox economics, and I’m glad to be associated with it. But as a project to change the views of the rest of the economics profession, it is clearly a failure.

As far as I can see, orthodox macroeconomic theory is basically unchallenged on its home ground. Nonetheless, I am moderately hopeful for the future, for two reasons. 

First, academic macroeconomics has lost much of its hold on public debate. I have a fair amount of contact with policymakers, and in my experience, there is much less deference to mainstream economic theory than there used to be, and much more interest in alternative approaches. Strong deductive claims about the relationships between employment, inflation, wage growth, etc. are no longer taken seriously.

To be sure, there was always a gulf between macroeconomic theory and practical policymaking. But at one time, this could be papered over by a kind of folk wisdom — low unemployment leads to inflation, public deficits lead to higher interest rates, etc. — that both sides could accept. Under the pressure of the extraordinary developments of the past dozen years, the policy conversation has largely abandoned this folk wisdom — which, from my point of view, is real progress. At some point, I think, academic economics will recognize that it has lost contact with the policy conversation, and make a jump to catch up. 

Keynes got a lot of things right, but one thing I think he got wrong was that “practical men are slaves to some defunct economist.” The relationship is more often the other way round. When practical people come to think about the economy in new ways, economic theory eventually follows.

I think this is often true even of people who in their day job do theory in the approved style. They don’t think in terms of their models when they are answering real world questions. And this in turn makes our problem easier. We don’t need to create a new body of macroeconomic theory out of whole cloth. We just need to take the implicit models that we already use in conversations like this one, and bring them into scholarship. 

That brings me to my second reason for optimism. Once people realize you don’t have to have microfoundations, that you don’t need to base your models on optimization by anyone, I think they will find that profoundly liberating. If you are wondering about, say, the effect of corporate taxation on productivity growth, there is absolutely no reason you need to model the labor supply decision of the representative household as some kind of intertemporal optimization. You can just, not do that. Whatever the story you’re telling, a simple aggregate relationship will capture it. 

The microfounded approach is not helping people answer the questions they’re interested in. It’s just a hoop they have to jump through if they want other people in the profession to take their work seriously. As Jón suggests, a lot of what people see as essential in theory is really just sociological convention within the discipline. These sorts of professional norms can be powerful, but they are also brittle. The strongest prop of the current orthodoxy is that it is the orthodoxy. Once people realize they don’t have to do theory this way, it’s going to open up enormous space for asking substantive questions about the real world.

I think that once that dam breaks, it is going to sweep away most of what is now taught as macroeconomics. I hope that we’ll see something quite different in its place.  

Once we stop chasing the will-o’-the-wisp of general equilibrium, we can focus on developing a toolkit of models addressed to particular questions. I hope in the years ahead we’ll see a more modest but useful body of theory, one that is oriented to the concrete questions that motivate public debates; that embeds its formal models in a historical narrative; that starts from the economy as we observe it, rather than a set of abstract first principles; that dispenses with utility and other unobservables; and that is ready to learn from historians and other social scientists.

Strange Defeat

Anyone who found something useful or provoking in my Jacobin piece on the state of economics might also be interested in this 2013 article by me and Arjun Jayadev, “Strange Defeat: How Austerity Economics Lost All the Intellectual Battles and Still Won the War.” It covers a good deal of the same ground, a bit more systematically but without the effort to find the usable stuff in mainstream macro that I made in the more recent piece. Perhaps there wasn’t so much of it five years ago!

Here are some excerpts; you can read the full piece here.

* * * 

The extent of the consensus in mainstream macroeconomic theory is often obscured by the intensity of the disagreements over policy…  In fact, however, the contending schools and their often heated debates obscure the more fundamental consensus among mainstream macroeconomists. Despite the label, “New Keynesians” share the core commitment of their New Classical opponents to analyse the economy only in terms of the choices of a representative agent optimising over time. For New Keynesians as much as New Classicals, the only legitimate way to answer the question of why the economy is in the state it is in, is to ask under what circumstances a rational planner, knowing the true probabilities of all possible future events, would have chosen exactly this outcome as the optimal one. Methodologically, Keynes’ vision of psychologically complex agents making irreversible decisions under conditions of fundamental uncertainty has been as completely repudiated by the “New Keynesians” as by their conservative opponents.

For the past 30 years the dominant macroeconomic models that have been in use by central banks and leading macroeconomists have … ranged from what have been termed real business cycle theory approaches on the one end to New Keynesian approaches on the other: perspectives that are considerably closer in flavour and methodological commitments to each other than to the “old Keynesian” approaches embodied in such models as the IS-LM framework of undergraduate economics. In particular, while demand matters in the short run in New Keynesian models, it can have no effect in the long run; no matter what, the economy always eventually returns to its full-employment growth path.

And while conventional economic theory saw the economy as self-equilibrating, economic policy discussion was dominated by faith in the stabilising powers of central banks and in the wisdom of “sound finance”. … Some of the same economists, who today are leading the charge against austerity, were arguing just as forcefully a few years ago that the most important macroeconomic challenge was reducing the size of public debt…. New Keynesians follow Keynes in name only; they have certainly given better policy advice than the austerians in recent years, but such advice does not always flow naturally from their models.

The industrialised world has gone through a prolonged period of stagnation and misery and may have worse ahead of it. Probably no policy can completely tame the booms and busts that capitalist economies are subject to. And even those steps that can be taken will not be taken without the pressure of strong popular movements challenging governments from the outside. The ability of economists to shape the world, for good or for ill is strictly circumscribed. Still, it is undeniable that the case for austerity – so weak on purely intellectual grounds – would never have conquered the commanding heights of policy so easily if the way had not been prepared for it by the past 30 years of consensus macroeconomics. Where the possibility and political will for stimulus did exist, modern economics – the stuff of current scholarship and graduate education – tended to hinder rather than help. While when the turn to austerity came, even shoddy work could have an outsize impact, because it had the whole weight of conventional opinion behind it. For this the mainstream of the economics profession – the liberals as much as the conservatives – must take some share of the blame.

In Jacobin: A Demystifying Decade for Economics

(The new issue of Jacobin has a piece by me on the state of economics ten years after the crisis. The published version is here. I’ve posted a slightly expanded version below. Even though Jacobin was generous with the word count and Seth Ackerman’s edits were as always superb, they still cut some material that, as king of the infinite space of this blog, I would rather include.)

 

For Economics, a Demystifying Decade

Has economics changed since the crisis? As usual, the answer is: It depends. If we look at the macroeconomic theory of PhD programs and top journals, the answer is clearly, no. Macroeconomic theory remains the same self-contained, abstract art form that it has been for the past twenty-five years. But despite its hegemony over the peak institutions of academic economics, this mainstream is not the only mainstream. The economics of the mainstream policy world (central bankers, Treasury staffers, Financial Times editorialists), only intermittently attentive to the journals in the best times, has gone its own way; the pieties of a decade ago have much less of a hold today. And within the elite academic world, there’s plenty of empirical work that responds to the developments of the past ten years, even if it doesn’t — yet — add up to any alternative vision.

For a socialist, it’s probably a mistake to see economists primarily as either carriers of valuable technical expertise or systematic expositors of capitalist ideology. They are participants in public debates just like anyone else. The profession as a whole is more often found trailing after political developments than advancing them.

***

The first thing to understand about macroeconomic theory is that it is weirder than you think. The heart of it is the idea that the economy can be thought of as a single infinite-lived individual trading off leisure and consumption over all future time. For an orthodox macroeconomist – anyone who hoped to be hired at a research university in the past 30 years – this approach isn’t just one tool among others. It is macroeconomics. Every question has to be expressed as finding the utility-maximizing path of consumption and production over all eternity, under a precisely defined set of constraints. Otherwise it doesn’t scan.

This approach is formalized in something called the Euler equation, which is a device for summing up an infinite series of discounted future values. Some version of this equation is the basis of most articles on macroeconomic theory published in a mainstream journal in the past 30 years. It might seem like an odd default, given the obvious fact that real economies contain households, businesses, governments and other distinct entities, none of whom can turn income in the far distant future into spending today. But it has the advantage of fitting macroeconomic problems — which at face value involve uncertainty, conflicting interests, coordination failures and so on — into the scarce-means-and-competing-ends Robinson Crusoe vision that has long been economics’ home ground.
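For readers who have not run into it, the canonical setup looks something like this (a generic textbook statement, not any particular paper’s version).

```latex
% Representative agent chooses consumption c_t to maximize discounted
% expected utility, subject to an intertemporal budget constraint.
% beta: discount factor, r_{t+1}: real interest rate, u(.): utility,
% E_t: expectation given information at date t.
\max_{\{c_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t} u(c_t)
\qquad \text{with first-order (Euler) condition} \qquad
u'(c_t) \;=\; \beta \,(1 + r_{t+1})\, E_t\!\left[ u'(c_{t+1}) \right].
```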

There’s a funny history to this technique. It was invented by Frank Ramsey, a young philosopher and mathematician in Keynes’ Cambridge circle in the 1920s, to answer the question: If you were organizing an economy from the top down and had to choose between producing for present needs versus investing to allow more production later, how would you decide the ideal mix? The Euler equation offers a convenient tool for expressing the tradeoff between production in the future versus production today.

This makes sense as a way of describing what a planner should do. But through one of those transmogrifications intellectual history is full of, the same formalism was picked up and popularized after World War II by Solow and Samuelson as a description of how growth actually happens in capitalist economies. The problem of macroeconomics has continued to be framed as how an ideal planner should direct consumption and production to produce the best outcomes for everyone, often with the “ideal planner” language intact. Pick up any modern economics textbook and you’ll find that substantive questions can’t be asked except in terms of how a far-sighted agent would choose this path of consumption as the best possible one allowed by the model.

There’s nothing wrong with adopting a simplified formal representation of a fuzzier and more complicated reality. As Marx said, abstraction is the social scientist’s substitute for the microscope or telescope. But these models are not simple by any normal human definition. The models may abstract away from features of the world that non-economists might think are rather fundamental to “the economy” — like the existence of businesses, money, and government — but the part of the world they do represent — the optimal tradeoff between consumption today and consumption tomorrow — is described in the greatest possible detail. This combination of extreme specificity on one dimension and extreme abstraction on the others might seem weird and arbitrary. But in today’s profession, if you don’t at least start from there, you’re not doing economics.

At the same time, many producers of this kind of model do have a quite realistic understanding of the behavior of real economies, often informed by first-hand experience in government. The combination of tight genre constraints and real insight leads to a strange style of theorizing, where the goal is to produce a model that satisfies the conventions of the discipline while arriving at a conclusion that you’ve already reached by other means. Michael Woodford, perhaps the leading theorist of “New Keynesian” macroeconomics, more or less admits that the purpose of his models is to justify the countercyclical interest rate policy already pursued by central banks in a language acceptable to academic economists. Of course the central bankers themselves don’t learn anything from such an exercise — and you will scan the minutes of Fed meetings in vain for discussion of first-order ARIMA technology shocks — but they presumably find it reassuring to hear that what they already thought is consistent with the most modern economic theory. It’s the economic equivalent of the college president in Randall Jarrell’s Pictures from an Institution:

About anything, anything at all, Dwight Robbins believed what Reason and Virtue and Tolerance and a Comprehensive Organic Synthesis of Values would have him believe. And about anything, anything at all, he believed what it was expedient for the president of Benton College to believe. You looked at the two beliefs, and lo! the two were one. Do you remember, as a child without much time, turning to the back of the arithmetic book, getting the answer to a problem, and then writing down the summary hypothetical operations by which the answer had been, so to speak, arrived at? It is the only method of problem-solving that always gives correct answers…

The development of theory since the crisis has followed this mold. One prominent example: After the crash of 2008, Paul Krugman immediately began talking about the liquidity trap and the “perverse” Keynesian claims that become true when interest rates are stuck at zero. Fiscal policy was now effective, there was no danger of inflation from increases in the money supply, a trade deficit could cost jobs, and so on. He explicated these ideas with the help of the “IS-LM” models found in undergraduate textbooks — genuinely simple abstractions that haven’t played a role in academic work in decades.

Some years later, he and Gauti Eggertsson unveiled a model in the approved New Keynesian style, which showed that, indeed, if interest rates were fixed at zero then fiscal policy, normally powerless, now became highly effective. This exercise may have been a display of technical skill (I suppose; I’m not a connoisseur) but what do we learn from it? After all, generating that conclusion was the announced goal from the beginning. The formal model was retrofitted to generate the argument that Krugman and others had been making for years, and lo! the two were one.

It’s a perfect example of Joan Robinson’s line that economic theory is the art of taking a rabbit out of a hat, when you’ve just put it into the hat in full view of the audience. I suppose what someone like Krugman might say in his defense is that he wanted to find out if the rabbit would fit in the hat. But if you do the math right, it always does.

(What’s funnier in this case is that the rabbit actually didn’t fit, but they insisted on pulling it out anyway. As the conservative economist John Cochrane gleefully pointed out, the same model also says that raising taxes on wages should boost employment in a liquidity trap. But no one believed that before writing down the equations, so they didn’t believe it afterward either. As Krugman’s coauthor Eggertsson judiciously put it, “there may be reasons outside the model” to reject the idea that increasing payroll taxes is a good idea in a recession.)

Left critics often imagine economics as an effort to understand reality that’s gotten hopelessly confused, or as a systematic effort to uphold capitalist ideology. But I think both of these claims are, in a way, too kind; they assume that economic theory is “about” the real world in the first place. Better to think of it as a self-constrained art form, whose apparent connections to economic phenomena are results of a confusing overlap in vocabulary. Think about chess and medieval history: The statement that “queens are most effective when supported by strong bishops” might be reasonable in both domains, but its application in the one case will tell you nothing about its application in the other.

Over the past decade, people (such as, famously, Queen Elizabeth) have often asked why economists failed to predict the crisis. As a criticism of economics, this is simultaneously setting the bar too high and too low. Too high, because crises are intrinsically hard to predict. Too low, because modern macroeconomics doesn’t predict anything at all.  As Suresh Naidu puts it, the best way to think about what most economic theorists do is as a kind of constrained-maximization poetry. It makes no more sense to ask “is it true” than of a haiku.

***

While theory buzzes around in its fly-bottle, empirical macroeconomics, more attuned to concrete developments, has made a number of genuinely interesting departures. Several areas have been particularly fertile: the importance of financial conditions and credit constraints; government budgets as a tool to stabilize demand and employment; the links between macroeconomic outcomes and the distribution of income; and the importance of aggregate demand even in the long run.

Not surprisingly, the financial crisis spawned a new body of work trying to assess the importance of credit, and financial conditions more broadly, for macroeconomic outcomes. (Similar bodies of work were produced in the wake of previous financial disruptions; these however don’t get much cited in the current iteration.) A large number of empirical papers tried to assess how important access to credit was for household spending and business investment, and how much of the swing from boom to bust could be explained by the tighter limits on credit. Perhaps the outstanding figures here are Atif Mian and Amir Sufi, who assembled a large body of evidence that the boom in lending in the 2000s reflected mainly an increased willingness to lend on the part of banks, rather than an increased desire to borrow on the part of families; and that the subsequent debt overhang explained a large part of depressed income and employment in the years after 2008.

While Mian and Sufi occupy solidly mainstream positions (at Princeton and Chicago, respectively), their work has been embraced by a number of radical economists who see vindication for long-standing left-Keynesian ideas about the financial roots of economic instability. Markus Brunnermeier (also at Princeton) and his coauthors have also done interesting work trying to untangle the mechanisms of the 2008 financial crisis and to generalize them, with particular attention to the old Keynesian concept of liquidity. That finance is important to the economy is not, in itself, news to anyone other than economists; but this new empirical work is valuable in translating this general awareness into concrete usable form.

A second area of renewed empirical interest is fiscal policy — the use of the government budget to manage aggregate demand. Even more than with finance, economics here has followed rather than led the policy debate. Policymakers were turning to large-scale fiscal stimulus well before academics began producing studies of its effectiveness. Still, it’s striking how many new and sophisticated efforts there have been to estimate the fiscal multiplier — the increase in GDP generated by an additional dollar of government spending.

In the US, there’s been particular interest in using variation in government spending and unemployment across states to estimate the effect of the former on the latter. The outstanding work here is probably that of Gabriel Chodorow-Reich. Like most entries in this literature, Chodorow-Reich’s suggests fiscal multipliers that are higher than almost any mainstream economist would have accepted a decade ago, with each dollar of government spending adding perhaps two dollars to GDP. Similar work has been published by the IMF, which acknowledged that past studies had “significantly underestimated” the positive effects of fiscal policy. This mea culpa was particularly striking coming from the global enforcer of economic orthodoxy.
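The basic idea of these cross-state studies can be written very simply. What follows is my own stylized gloss, not Chodorow-Reich’s actual specification, which is considerably more careful about isolating changes in spending that are unrelated to local conditions.

```latex
% Stylized cross-state regression for the "local" fiscal multiplier m.
% Delta Y_s: change in output (or employment) in state s;
% Delta G_s: change in federally financed spending received by state s.
\Delta Y_s = \alpha + m\,\Delta G_s + \varepsilon_s , \qquad s = 1, \dots, S .
```

The estimates of m coming out of this literature are in the neighborhood of the two-dollars-of-GDP-per-dollar-of-spending figure mentioned above.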

The IMF has also revisited its previously ironclad opposition to capital controls — restrictions on financial flows across national borders. More broadly, it has begun to offer, at least intermittently, a platform for work challenging the “Washington Consensus” it helped establish in the 1980s, though this shift predates the crisis of 2008. The changed tone coming out of the IMF’s research department has so far been only occasionally matched by a change in its lending policies.

Income distribution is another area where there has been a flowering of more diverse empirical work in the past decade. Here of course the outstanding figure is Thomas Piketty. With his collaborators (Gabriel Zucman, Emmanuel Saez and others) he has practically defined a new field. Income distribution has always been a concern of economists, of course, but it has typically been assumed to reflect differences in “skill.” The large differences in pay that appeared to be unexplained by education, experience, and so on, were often attributed to “unmeasured skill.” (As John Eatwell used to joke: Hegemony means you get to name the residual.)

Piketty made distribution — between labor and capital, not just across individuals — into something that evolves independently, and that belongs to the macro level of the economy as a whole rather than the micro level of individuals. When his book Capital in the 21st Century was published, a great deal of attention was focused on the formula “r > g,” supposedly reflecting a deep-seated tendency for capital accumulation to outpace economic growth. But in recent years there’s been an interesting evolution in the empirical work Piketty and his coauthors have published, focusing on countries like Russia/USSR and China, which didn’t feature in the original survey. Political and institutional factors like labor rights and the legal forms taken by businesses have moved to center stage, while the formal reasoning of “r > g” has receded — sometimes literally to a footnote. While no longer embedded in the grand narrative of Capital in the 21st Century, this body of empirical work is extremely valuable, especially since Piketty and company are so generous in making their data publicly available. It has also created space for younger scholars to make similar long-run studies of the distribution of income and wealth in countries that the Piketty team hasn’t yet reached, like Rishabh Kumar’s superb work on India. And it has been extended by other empirical economists, like Lukas Karabarbounis and coauthors, who have looked at changes in income distribution through the lens of market power and the distribution of surplus within the corporation — not something a University of Chicago economist would have been likely to study a decade ago.

A final area where mainstream empirical work has wandered well beyond its pre-2008 limits is the question of whether aggregate demand — and money and finance more broadly — can affect long-run economic outcomes. The conventional view, still dominant in textbooks, draws a hard line between the short run and the long run, more or less meaning a period longer than one business cycle. In the short run, demand and money matter. But in the long run, the path of the economy depends strictly on “real” factors — population growth, technology, and so on.

Here again, the challenge to conventional wisdom has been prompted by real-world developments. On the one hand, weak demand — reflected in historically low interest rates — has seemed to be an ongoing rather than a cyclical problem. Lawrence Summers dubbed this phenomenon “secular stagnation,” reviving a phrase used in the 1940s by the early American Keynesian Alvin Hansen.

On the other hand, it has become increasingly clear that the productive capacity of the economy is not something separate from current demand and production levels, but dependent on them in various ways. Unemployed workers stop looking for work; businesses operating below capacity don’t invest in new plant and equipment or develop new technology. This has manifested itself most clearly in the fall in labor force participation over the past decade, which has been considerably greater than can be explained on the basis of the aging population or other demographic factors. The bottom line is that an economy that spends several years producing less than it is capable of, will be capable of producing less in the future. This phenomenon, usually called “hysteresis,” has been explored by economists like Laurence Ball, Summers (again) and Brad DeLong, among others. The existence of hysteresis, among other implications, suggests that the costs of high unemployment may be greater than previously believed, and conversely that public spending in a recession can pay for itself by boosting incomes and taxes in future years.
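One stylized way to write the idea down, purely as an illustration and not any particular author’s specification, is to let potential output itself depend on past shortfalls.

```latex
% Stylized hysteresis: potential output Y*_t grows at trend rate g, but is
% dragged down by a fraction eta of last period's shortfall of actual
% output Y below potential.
Y^{*}_{t} = (1+g)\,Y^{*}_{t-1} + \eta\,\left( Y_{t-1} - Y^{*}_{t-1} \right),
\qquad 0 < \eta \le 1 .
```

With η above zero, several years of production below capacity permanently lower the path of capacity itself, which is the point.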

These empirical lines are hard to fit into the box of orthodox theory — not that people don’t try. But so far they don’t add up to more than an eclectic set of provocative results. The creativity in mainstream empirical work has not yet been matched by any effort to find an alternative framework for thinking of the economy as a whole. For people coming from non-mainstream paradigms — Marxist or Keynesian — there is now plenty of useful material in mainstream empirical macroeconomics to draw on – much more than in the previous decade. But these new lines of empirical work have been forced on the mainstream by developments in the outside world that were too pressing to ignore. For the moment, at least, they don’t imply any systematic rethinking of economic theory.

***

Perhaps the central feature of the policy mainstream a decade ago was a smug and, in retrospect, remarkable complacency that the macroeconomic problem had been solved by independent central banks like the Federal Reserve.  For a sense of the pre-crisis consensus, consider this speech by a prominent economist in September 2007, just as the US was heading into its worst recession since the 1930s:

One of the most striking facts about macropolicy is that we have progressed amazingly. … In my opinion, better policy, particularly on the part of the Federal Reserve, is directly responsible for the low inflation and the virtual disappearance of the business cycle in the last 25 years. … The story of stabilization policy of the last quarter century is one of amazing success.

You might expect the speaker to be a right-wing Chicago type like Robert Lucas, whose claim that “the problem of depression prevention has been solved” was widely mocked after the crisis broke out. But in fact it was Christina Romer, soon headed to Washington as the Obama administration’s top economist. In accounts of the internal debates over fiscal policy that dominated the early days of the administration, Romer often comes across as one of the heroes, arguing for a big program of public spending against more conservative figures like Summers. So it’s especially striking that in the 2007 speech she spoke of a “glorious counterrevolution” against Keynesian ideas. Indeed, she saw the persistence of the idea of using deficit spending to fight unemployment as the one dark spot in an otherwise cloudless sky. There’s more than a little irony in the fact that opponents of the massive stimulus Romer ended up favoring drew their intellectual support from exactly the arguments she had been making just a year earlier. But it’s also a vivid illustration of a consistent pattern: ideas have evolved more rapidly in the world of practical policy than among academic economists.

For further evidence, consider a 2016 paper by Jason Furman, Obama’s final chief economist, on “The New View of Fiscal Policy.” As chair of the White House Council of Economic Advisers, Furman embodied the policy-economics consensus ex officio. Though he didn’t mention his predecessor by name, his paper was almost a point-by-point rebuttal of Romer’s “glorious counterrevolution” speech of a decade earlier. It starts with four propositions shared until recently by almost all respectable economists: that central banks can and should stabilize demand all by themselves, with no role for fiscal policy; that public deficits raise interest rates and crowd out private investment; that budget deficits, even if occasionally called for, need to be strictly controlled with an eye on the public debt; and that any use of fiscal policy must be strictly short-term.

None of this is true, suggests Furman. Central banks cannot reliably stabilize modern economies on their own, increased public spending should be a standard response to a downturn, worries about public debt are overblown, and stimulus may have to be maintained indefinitely. While these arguments obviously remain within a conventional framework in which the role of the public sector is simply to maintain the flow of private spending at a level consistent with full employment, they nonetheless envision much more active management of the economy by the state. It’s a remarkable departure from textbook orthodoxy for someone occupying such a central place in the policy world.

Another example of orthodoxy giving ground under the pressure of practical policymaking is Narayana Kocherlakota. When he was appointed as President of the Federal Reserve Bank of Minneapolis, he was on the right of debates within the Fed, confident that if the central bank simply followed its existing rules the economy would quickly return to full employment, and rejecting the idea of active fiscal policy. But after a few years on the Fed’s governing Federal Open Market Committee (FOMC), he had moved to the far left, “dovish” end of opinion, arguing strongly for a more aggressive approach to bringing unemployment down by any means available, including deficit spending and more aggressive unconventional tools at the Fed. This meant rejecting much of his own earlier work, perhaps the clearest example of a high-profile economist repudiating his views after the crisis; in the process, he got rid of many of the conservative “freshwater” economists in the Minneapolis Fed’s research department.

The reassessment of central banks themselves has run on parallel lines but gone even farther.

For twenty or thirty years before 2008, the orthodox view of central banks offered a two-fold defense against the dangerous idea — inherited from the 1930s — that managing the instability of capitalist economies was a political problem. First, any mismatch between the economy’s productive capabilities (aggregate supply) and the desired purchases of households and businesses (aggregate demand) could be fully resolved by the central bank; the technicians at the Fed and its peers around the world could prevent any recurrence of mass unemployment or runaway inflation. Second, they could do this by following a simple, objective rule, without any need to balance competing goals.

During those decades, Alan Greenspan personified the figure of the omniscient central banker. Venerated by presidents of both parties, Greenspan was literally sanctified in the press — a 1990 cover of The International Economy had him in papal regalia, under the headline, “Alan Greenspan and His College of Cardinals.” A decade later, he would appear on the cover of Time as the central figure in “The Committee to Save the World,” flanked by Robert Rubin and the ubiquitous Summers. And a decade after that he showed up as Bob Woodward’s eponymous Maestro.

In the past decade, this vision of central banks and central bankers has eroded from several sides. The manifest failure to prevent huge falls in output and employment after 2008 is the most obvious problem. The deep recessions in the US, Europe and elsewhere make a mockery of the “virtual disappearance of the business cycle” that people like Romer had held out as the strongest argument for leaving macropolicy to central banks. And while Janet Yellen or Mario Draghi may be widely admired, they command nothing like the authority of a Greenspan.

The pre-2008 consensus is even more profoundly undermined by what central banks did do than what they failed to do. During the crisis itself, the Fed and other central banks decided which financial institutions to rescue and which to allow to fail, which creditors would get paid in full and which would face losses. Both during the crisis and in the period of stagnation that followed, central banks also intervened in a much wider range of markets, on a much larger scale. In the US, perhaps the most dramatic moment came in late summer 2008, when the commercial paper market — the market for short-term loans used by the largest corporations — froze up, and the Fed stepped in with a promise to lend on its own account to anyone who had previously borrowed there. This watershed moment took the Fed from its usual role of regulating and supporting the private financial system, to simply replacing it.

That intervention lasted only a few months, but in other markets the Fed has largely replaced private creditors for a number of years now. Even today, it is the ultimate lender for about 20 percent of new mortgages in the United States. Policies of quantitative easing, in the US and elsewhere, greatly enlarged central banks’ weight in the economy — in the US, the Fed’s assets jumped from 6 percent of GDP to 25 percent, an expansion that is only now beginning to be unwound.  These policies also committed central banks to targeting longer-term interest rates, and in some cases other asset prices as well, rather than merely the overnight interest rate that had been the sole official tool of policy in the decades before 2008.

While critics (mostly on the Right) have objected that these interventions “distort” financial markets, this makes no sense from the perspective of a practical central banker. As central bankers like the Fed’s Ben Bernanke or the Bank of England’s Adam Posen have often said in response to such criticism, there is no such thing as an “undistorted” financial market. Central banks are always trying to change financial conditions to whatever they think favors full employment and stable prices. But as long as the interventions were limited to a single overnight interest rate, it was possible to paper over the contradiction between active monetary policy and the idea of a self-regulating economy, and pretend that policymakers were just trying to follow the “natural” interest rate, whatever that is. The much broader interventions of the past decade have brought the contradiction out into the open.

The broad array of interventions central banks have had to carry out over the past decade has also provoked some second thoughts about the functioning of financial markets even in normal times. If financial markets can get things wrong so catastrophically during crises, shouldn’t that affect our confidence in their ability to allocate credit the rest of the time? And if we are not confident, that opens the door for a much broader range of interventions — not only to stabilize markets and maintain demand, but to affirmatively direct society’s resources in better ways than private finance would do on its own.

In the past decade, this subversive thought has shown up in some surprisingly prominent places. Wearing his policy rather than his theory hat, Paul Krugman sees

… a broader rationale for policy activism than most macroeconomists—even self-proclaimed Keynesians—have generally offered in recent decades. Most of them… have seen the role for policy as pretty much limited to stabilizing aggregate demand. … Once we admit that there can be big asset mispricing, however, the case for intervention becomes much stronger… There is more potential for and power in [government] intervention than was dreamed of in efficient-market models.

From another direction, the notion that macroeconomic policy does not involve conflicting interests has become harder to sustain as inflation, employment, output and asset prices have followed diverging paths. A central plank of the pre-2008 consensus was the aptly named “divine coincidence,” in which the same level of demand would fortuitously and simultaneously lead to full employment, low and stable inflation, and production at the economy’s potential. Operationally, this was embodied in the “NAIRU” — the level of unemployment below which, supposedly, inflation would begin to rise without limit.
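In its usual operational form, this is the accelerationist Phillips curve of the textbooks (a generic statement, not any particular central bank’s model).

```latex
% Accelerationist Phillips curve: pi_t is inflation, u_t unemployment,
% u* the NAIRU, a > 0. Inflation keeps ratcheting up as long as u_t < u*.
\pi_t = \pi_{t-1} - a\,\left( u_t - u^{*} \right) + \varepsilon_t .
```

Hold unemployment below u* and inflation rises period after period; that is the “without limit.”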

Over the past decade, as estimates of the NAIRU have fluctuated almost as much as the unemployment rate itself, it’s become clear that the NAIRU is too unstable and hard to measure to serve as a guide for policy, if it exists at all. It is striking to see someone as prominent as IMF chief economist Olivier Blanchard write (in 2016) that “the US economy is far from satisfying the ‘divine coincidence’,” meaning that stabilizing inflation and minimizing unemployment are two distinct goals. But if there’s no clear link between unemployment and inflation, it’s not clear why central banks should worry about low unemployment at all, or how they should trade off the risks of prices rising undesirably fast against the risk of too-high unemployment. With surprising frankness, high officials at the Fed and other central banks have acknowledged that they simply don’t know what the link between unemployment and inflation looks like today.

To make matters worse, a number of prominent figures — most vocally at the Bank for International Settlements — have argued that we should not be concerned only with conventional price inflation, but also with the behavior of asset prices, such as stocks or real estate. This “financial stability” mandate, if it is accepted, gives central banks yet another mission. The more outcomes central banks are responsible for, and the less confident we are that they all go together, the harder it is to treat central banks as somehow apolitical, as not subject to the same interplay of interests as the rest of the state.

Given the strategic role occupied by central banks in both modern capitalist economies and economic theory, this rethinking has the potential to lead in some radical directions. How far it will actually do so, of course, remains to be seen. Accounts of the Fed’s most recent conclave in Jackson Hole, Wyoming suggest a sense of “mission accomplished” and a desire to get back to the comfortable pieties of the past. Meanwhile, in Europe, the collapse of the intellectual rationale for central banks has been accompanied by the development of the most powerful central bank-ocracy the world has yet seen. So far the European Central Bank has not let its lack of democratic mandate stop it from making coercive intrusions into the domestic policies of its member states, or from serving as the enforcement arm of Europe’s creditors against recalcitrant debtors like Greece.

One thing we can say for sure: Any future crisis will bring the contradictions of central banks’ role as capitalism’s central planners into even sharper relief.

***

Many critics were disappointed that the crisis of 2008 did not lead to an intellectual revolution on the scale of the 1930s. It’s true that it didn’t. But the image of stasis you’d get from looking at the top journals and textbooks isn’t the whole picture — the most interesting conversations are happening somewhere else. For a generation, leftists in economics have struggled to change the profession, some by launching attacks (often well aimed, but ignored) from the outside, others by trying to make radical ideas parsable in the orthodox language. One lesson of the past decade is that both groups got it backward.

Keynes famously wrote that “Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” It’s a good line. But in recent years the relationship seems to have been more the other way round. If we want to change the economics profession, we need to start changing the world. Economics will follow.

Thanks to Arjun Jayadev, Ethan Kaplan, Mike Konczal and Suresh Naidu for helpful suggestions and comments.

Lecture Notes for Research Methods

I’m teaching a new class this semester, a masters-level class on research methods. It could be taught as simply the second semester of an econometrics sequence, but I’m taking a different approach, trying to think about what will help students do effective empirical work in policy/political settings. We’ll see how it works.

For anyone interested, here are the slides I will use on the first day. I’m not sure it’s all right; in fact, I’m sure some of it is wrong. But that is how you figure out what you really think and know and don’t know about something, by teaching it.

After we’ve talked through this, we will discuss this old VoxEU piece as an example of effective use of simple scatterplots to make an economic argument.
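For a flavor of the kind of exercise I mean, here is a minimal sketch, with invented numbers rather than the data from the VoxEU piece, of how little machinery a two-variable scatter argument actually requires.

```python
# Minimal sketch: scatter of fiscal consolidation vs. subsequent GDP growth
# across a handful of countries, with a fitted line.
# All numbers below are invented for illustration only.
import numpy as np
import matplotlib.pyplot as plt

countries = ["A", "B", "C", "D", "E", "F"]
consolidation = np.array([0.5, 1.0, 2.0, 3.0, 4.5, 6.0])   # % of GDP, hypothetical
gdp_growth = np.array([2.1, 1.8, 0.9, 0.2, -1.0, -2.5])    # %, hypothetical

# Simple linear fit for the eyeball relationship
slope, intercept = np.polyfit(consolidation, gdp_growth, 1)

fig, ax = plt.subplots()
ax.scatter(consolidation, gdp_growth)
for name, x, y in zip(countries, consolidation, gdp_growth):
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))
xs = np.linspace(consolidation.min(), consolidation.max(), 100)
ax.plot(xs, intercept + slope * xs)
ax.set_xlabel("Fiscal consolidation (% of GDP)")
ax.set_ylabel("Subsequent real GDP growth (%)")
ax.set_title("Austerity and growth: a two-variable story (illustrative data)")
plt.show()
```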

I gave a somewhat complementary talk on methodology and heterodox macroeconomics at the Eastern Economics Association meetings last year. I’ve been meaning to transcribe it into a blogpost, but in the meantime you can listen to a recording, if you’re interested.

 

The Wit and Wisdom of Trygve Haavelmo

I was talking some time ago with my friend Enno about Merijn Knibbe’s series of articles on the disconnect between the variables used in economic models and the corresponding variables in the national accounts.1 Enno mentioned Trygve Haavelmo’s 1944 article The Probability Approach in Econometrics; he thought Haavelmo’s distinction between “theoretical variables,” “true variables,” and “observable variables” could be a useful way of thinking about the slippages between economic reality, economic data and economic theory.

I finally picked up the Haavelmo article, and it turns out to be a deep and insightful piece — for the reason Enno mentioned, but also more broadly on how to think about empirical economics. It’s especially interesting coming from someone who won the Nobel Prize for his foundational work in econometrics. Another piece of evidence that orthodox economists in the mid-20th century thought more deeply and critically about the nature of their project than their successors do today.

It’s a long piece, with a lot of mathematical illustrations that someone reading it today can safely skip. The central argument comes down to three overlapping points. First, economic models are tools, developed to solve specific problems. Second, economic theories have content only insofar as they’re associated with specific procedures for measurement. Third, we have positive economic knowledge only insofar as we can make unconditional predictions about the distribution of observable variables.

The first point: We study economics in order to “become master of the happenings of real life.” This is on some level obvious, or vacuous, but it’s important; it functions as a kind of “he who has ears, let him hear.” It marks the line between those who come to economics as a means to some other end — a political commitment, for many of us, but it could just as well come from a role in business or policy — and those for whom economic theory is an end in itself. Economics education must, obviously, be organized on the latter principle. As soon as you walk into an economics classroom, the purpose of your being there is to learn economics. But you can’t, from within the classroom, make any judgement about what is useful or interesting for the world outside. Or as Hayek put it, “One who is only an economist, cannot be a good economist.”2

Here is what Haavelmo says:

Theoretical models are necessary tools in our attempts to understand and explain events in real life. … Whatever be the “explanations” we prefer, it is not to be forgotten that they are all our own artificial inventions in a search for an understanding of real life; they are not hidden truths to be “discovered.”

It’s an interesting question, which we don’t have to answer here, whether or to what extent this applies to the physical sciences as well. Haavelmo thinks this pragmatic view of scientific laws applies across the board:

The phrase “In the natural sciences we have laws” means not much more and not much less than this: The natural sciences have chosen fruitful ways of looking upon physical reality.

We don’t need to decide here whether we want to apply this pragmatic view to the physical sciences. It is certainly the right way to look at economic models, in particular the models we construct in econometrics. The “data generating process” is not an object existing out in the world. It is a construct you have created for one or both of these reasons: It is an efficient description of the structure of a specific matrix of observed data; it allows you to make predictions about some specific yet-to-be-observed outcome. The idea of a data-generating process is obviously very useful in thinking about the logic of different statistical techniques. It may be useful to do econometrics as if there were a certain data generating process. It is dangerously wrong to believe there really is one.
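If it helps to see the “as if” in action, here is a minimal sketch: posit a data generating process, simulate from it, and ask what an estimator recovers. The usefulness of the exercise does not require believing that any such process exists out in the world.

```python
# Minimal sketch of the "as if" use of a data-generating process:
# we posit a DGP, simulate from it, and check what OLS recovers.
# The posited process is a convenient fiction, not a fact about the world.
import numpy as np

rng = np.random.default_rng(0)

# Posited DGP: y = 1.0 + 2.0 * x + noise
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

# OLS estimates of intercept and slope
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated intercept and slope:", beta_hat)
```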

Speaking of observation brings us to Haavelmo’s second theme: the meaninglessness of economic theory except in the context of a specific procedure for observation. It might naively seem, he says, that

since the facts we want to study present themselves in the form of numerical measurement, we shall have to choose our models from … the field of mathematics. But the concepts of mathematics obtain their quantitative meaning implicitly through the system of logical operations we impose. In pure mathematics there really is no such problem as quantitative definition of a concept per se …

When economists talk about the problem of quantitative definitions of economic variables, they must have something in mind which has to do with real economic phenomena. More precisely, they want to give exact rules how to measure certain phenomena of real life.

Anyone who got a B+ in real analysis will have no problem with the first part of this statement. For the rest, this is the point: economic quantities come into existence only through some concrete human activity that involves someone writing down a number. You can ignore this, most of the time; but you should not ignore it all of the time. Because without that concrete activity there’s no link between economic theory and the social reality it hopes to help us master or make sense of.

Haavelmo has some sharp observations on the kind of economics that ignores the concrete activity that generates its data, which seem just as relevant to economic practice today:

Does a system of questions become less mathematical and more economic in character just by calling x “consumption,” y “price,” etc.? There are certainly many examples of studies to be found that do not go very much further than this, as far as economic significance is concerned.

There certainly are!

An equation, Haavelmo continues,

does not become an economic theory just by using economic terminology to name the variables involved. It becomes an economic theory when associated with the rule of actual measurement of economic variables.

I’ve seen plenty of papers where the thought process seems to have been something like, “I think this phenomenon is cyclical. Here is a set of difference equations that produce a cycle. I’ll label the variables with the names of parts of the phenomenon. Now I have a theory of it!” With no discussion of how to measure the variables, or of in what sense the objects they describe exist in the external world.

What makes a piece of mathematical economics not only mathematics but also economics is this: When we set up a system of theoretical relationships and use economic names for the otherwise purely theoretical variables involved, we have in mind some actual experiment, or some design of an experiment, which we could at least imagine arranging, in order to measure those quantities in real economic life that we think might obey the laws imposed on their theoretical namesakes.

Right. A model has positive content only insofar as we can describe the concrete set of procedures that gets us from the directly accessible evidence of our senses to the quantities in the theory. In my experience this comes through very clearly if you talk to someone who actually works in the physical sciences. A large part of their time is spent close to the interface with concrete reality — capturing that lizard, calibrating that laser. The practice of science isn’t simply constructing a formal analog of physical reality, a model trainset. It’s actively pushing against unknown reality and seeing how it pushes back.

Haavelmo:

When considering a theoretical setup … it is common to ask about the actual meaning of this or that variable. But this question has no sense within the theoretical model. And if the question applies to reality it has no precise answer … we will always need some willingness among our fellow research workers to agree “for practical purposes” on questions of definitions and measurement …A design of experiments … is an essential appendix to any quantitative theory.

With respect to macroeconomics, the “design of experiments” means, in the first instance, the design of the national accounts. Needless to say, national accounting concepts cannot be treated as direct observations of the corresponding terms in economic theory, even if they have been reconstructed with that theory in mind. Cynamon and Fazzari’s paper on the measurement of household spending gives some perfect examples of this. There can’t be many contexts in which Medicare payments to hospitals, for example, are what people have in mind when they construct models of household consumption. But nonetheless that’s what they’re measuring, when they use consumption data from the national accounts.

I think there’s an important sense in which the actual question of any empirical macroeconomics work has to be: What concrete social process led the people working at the statistics office to enter these particular values in the accounts?

Or as Haavelmo puts it:

There is hardly an economist who feels really happy about identifying the current series of “national income,” “consumptions,” etc. with the variables by those names in his theories. Or, conversely, he would think it too complicated or perhaps uninteresting to try to build models … [whose] variables would correspond to those actually given by current economic statistics. … The practical conclusion… is the advice that economists hardly ever fail to give, but that few actually follow, that one should study very carefully the actual series considered and the conditions under which they were produced, before identifying them with the variables of a particular theoretical model.

Good advice! And, as he says, hardly ever followed.

I want to go back to the question of the “meaning” of a variable, because this point is so easy to miss. Within a model, the variables have no meaning, we simply have a set of mathematical relationships that are either tautologous, arbitrary, or false. The variables only acquire meaning insofar as we can connect them to concrete social phenomena. It may be unclear to you, as a blog reader, why I’m banging on this point so insistently. Go to an economics conference and you’ll see.

The third central point of the piece is that meaningful explanation requires being able to identify a few causal links as decisive, so that all the other possible ones can be ignored.

Think back to that Paul Romer piece on what’s wrong with modern macroeconomics. One of the most interesting parts of it, to me, was its insistent Humean skepticism about the possibility of a purely inductive economics, or for that matter science of any kind. Paraphrasing Romer: suppose we have n variables, any of which may potentially influence the others. Well then, we have n equations, one for each variable, and n² parameters (counting intercepts). In general, we are not going to be able to estimate this system based on data alone. We have to restrict the possible parameter space either on the basis of theory, or by “experiments” (natural or otherwise) that let us set most of the parameters to zero on the grounds that there is no independent variation in those variables between observations. I’m not sure that Romer fully engages with this point, whose implications go well beyond the failings of real business cycle theory. But it’s a central concern for Haavelmo:

A theoretical model may be said to be simply a restriction upon the joint variations of a system of quantities … which otherwise might have any value. … Our hope in economic theory and research is that it may be possible to establish constant and relatively simple relations between dependent variables … and a relatively small number of independent variables. … We hope that for each variable y to be explained, there is a relatively small number of explaining factors the variations of which are practically decisive in determining the variations of y. … If we are trying to explain a certain observable variable, y, by a system of causal factors, there is, in general, no limit to the number of such factors that might have a potential influence upon y. But Nature may limit the number of factors that have a nonnegligible factual influence to a relatively small number. Our hope for simple laws in economics rests upon the assumption that we may proceed as if such natural limitations of the number of relevant factors exist.
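To put Romer’s counting argument in symbols (the notation here is my own shorthand, a minimal sketch rather than anything in Romer or Haavelmo), write the unrestricted linear system as

\[ y_i = a_i + \sum_{j \neq i} b_{ij}\, y_j + \varepsilon_i, \qquad i = 1, \dots, n. \]

Each equation has an intercept plus n − 1 slopes, so there are n² free parameters; and because every y on the right-hand side is itself determined within the system, the data by themselves can at best pin down the reduced-form correlations among the y’s, which is not enough to recover the structural b’s. Identification has to come from restrictions imposed from outside the data, most simply by setting most of the b’s to zero a priori. That is Haavelmo’s “natural limitation of the number of relevant factors,” adopted as a working assumption.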

One way or another, to do empirical economics, we have to ignore most of the logically possible relationships between our variables. Our goal, after all, is to explain variation in the dependent variable. Meaningful explanation is possible only if the number of relevant causal factors is small. If someone asks “why is unemployment high?”, a meaningful answer is going to involve at most two or three causes. If you say, “I have no idea, but all else equal wage regulations are making it higher,” then you haven’t given an answer at all. To be masters of the happenings of real life, we need to focus on causes of effects, not effects of causes.

In other words, ceteris paribus knowledge isn’t knowledge at all. Only unconditional claims count — but they don’t have to be predictions of a single variable; they can be claims about the joint distribution of several. But in any case we have positive knowledge only to the extent we can unconditionally say that future observations will fall entirely in a certain part of the state space. This fails if we have a ceteris paribus condition, or if our empirical work “corrects” for factors whose distribution and the nature of whose influence we have not investigated.3 Applied science is useful because it gives us knowledge of the kind, “If I don’t turn the key, the car will not start; if I do turn the key, it will — or if it doesn’t, there is a short list of possible reasons why not.” It doesn’t give us knowledge like “All else equal, the car is more likely to start when the key is turned than when it isn’t.”4

If probability distributions are simply tools for making unconditional claims about specific events, then it doesn’t make sense to think of them as existing out in the world. They are, as Keynes also emphasized, simply ways of describing our own subjective state of belief:

We might interpret “probability” simply as a measure of our a priori confidence in the occurrence of a certain event. Then the theoretical notion of a probability distribution serves us chiefly as a tool for deriving statements that have a very high probability of being true.

Another way of looking at this: Research in economics is generally framed in terms of uncovering universal laws, for which the particular phenomenon being studied merely serves as a case study.5 But in the real world, it’s more often the other way: We are interested in some specific case, often the outcome of some specific action we are considering. Or as Haavelmo puts it,

As a rule we are not particularly interested in making statements about a large number of observations. Usually, we are interested in a relatively small number of observation points; or perhaps even more frequently, we are interested in a practical statement about just one single new observation.

We want economics to answer questions like, “What will happen if the US imposes tariffs on China?” The question of what effects tariffs have on trade in the abstract is, in itself, uninteresting and unanswerable.

What do we take from this? What, according to Haavelmo, should empirical economics look like?

First, the goal of empirical work is to explain concrete phenomena — what happened, or will happen, in some particular case.

Second, the content of a theory is inseparable from the procedures for measuring the variables in it.

Third, empirical work requires restrictions on the logically possible space of parameters, some of which have to be imposed a priori.

Finally, prediction (the goal) means making unconditional claims about the joint distribution of one or more variables. “Everything else equal” means “I don’t know.”

All of this is based on the idea that we study economics not as an end in itself, but in response to the problems forced on us by the world.

“The financialization of the nonfinancial corporation”

One common narrative attached to the murky term financialization is that nonfinancial corporations have, in effect, turned themselves into banks or hedge funds — they have replaced investment in means of production with ownership of financial assets. Financial profits, in this story, have increasingly substituted for profits from making and selling stuff. I’m not sure where this idea originates — the epidemiology points toward my own homeland of UMass-Amherst — but it’s become almost accepted wisdom in left economics.

I’ve been skeptical of this story for a while, partly because it conflicts with my own vision of financialization as something done to nonfinancial corporations rather than by them — a point I’ll return to at the end of the post — and partly because I’ve never seen good evidence for it. On the cashflow side, it’s true there is a rise in interest income from the 1960s through the 1980s. But, as discussed in the previous post, this is outweighed by a rise in interest payments; it reflects a general rise in interest rates rather than a reorientation of corporate activity; and has subsequently been reversed. On the balance sheet side, there is indeed a secular rise in “financial” assets, but this is all in what the financial accounts call “unidentified” assets, which I’ve always suspected is mostly goodwill and equity in subsidiaries rather than anything we would normally think of as financial assets.

Now, courtesy of Nathan Tankus, here is an excellent paper by Joel Rabinovich that makes this case much more thoroughly than I’d been able to.

The paper starts by distinguishing two broad stories of financialization: shareholder value orientation and acquisition of financial assets. In the first story, financialization means that corporations are increasingly oriented toward the wishes or interests of shareholders and other financial claimants. The second story is the one we are interested in here. Rabinovich’s paper doesn’t directly engage with the shareholder-value story, but it implicitly strengthens it by criticizing the financial-assets one.

The targets of the paper include some of my smartest friends. So I’ll be interested to see what they say in response to it.

The critical questions are: Have nonfinancial corporations’ holdings of financial assets really increased, relative to total assets? And has their financial income risen relative to total income?

The answers in turn depend on two subsidiary issues. On the first question, we need to decide what is represented by the “other unidentified assets” category in the Financial Accounts, which is responsible for essentially all of the apparent rise in financial assets. And on the income side, we need to consistently compare the full set of financial flows to their nonfinancial equivalents. Rabinovich argues, convincingly in my view, that looking at financial income in isolation does not give a meaningful picture.

On the face of it, the asset and income pictures look quite different. In the official accounts, financial assets of nonfinancial corporations have increased from 40% of nonfinancial assets to 120% between 1946 and 2015. Financial income, on the other hand, is only 2.5% of total income and shows no long-term increase. This should already make us skeptical that the increase in “financial” assets represents income-generating assets in the usual sense.

Rabinovich then explores this in detail by combining the financial accounts with the IRS Statistics of Income (SOI) and the Compustat database. Each of these has strengths and weaknesses — Compustat provides firm-level data, but is limited to large, publicly traded corporations and consolidates domestic and overseas operations; SOI gives detailed breakdowns of income sources for all forms of legal organization broken down by size, but it doesn’t include any balance-sheet variables, so it can’t be used to answer the asset questions.

In the financial accounts, the majority of the increase in identified financial assets is FDI stock. As Rabinovich notes, “it’s dubious to directly consider FDI as a financial asset if we take into account that it implies lasting interest with the intention to exercise control over the enterprise.” The largest part of the overall increase in financial assets, however, is in the residual “other unidentified assets” line of the financial accounts. The fact that there is no increase in income associated with these assets is already a reason to doubt that they are financial assets in the usual sense. Compustat data, while not strictly comparable, suggests that the majority of this is intangibles. The most important intangible is goodwill, which is simply the accounting term for the excess of an acquisition price over the book value of the acquired company. Importantly, goodwill is not depreciated but only written off through impairment. Another large portion is equity in unconsolidated subsidiaries; this accounts for a disproportionate share of the increase thanks to a change in accounting rules that required corporations to begin accounting for it explicitly. Other important intangibles include patents, copyrights, licenses, etc. These are not financial assets; rather they are assets or pseudo-assets acquired, like real investment, in order to carry out a company’s productive activities on an extended scale.

These are all aggregate numbers; perhaps the financialization story holds up better for the biggest firms? Rabinovich discusses this too. Both Compustat and SOI allow us to separate firms by size. As it turns out, the largest firms do have a greater proportion of financial income than the smaller ones. But even for the largest 0.05% of corporations, financial income is still only 3.5% of total income, and net financial income is still negative. As he reasonably concludes, “even for the biggest nonfinancial corporations, financialization must not be understood as mimicking financial corporations.”

What do we make of all this?

First, the view of financialization as nonfinancial businesses acquiring financial assets for income in place of real investment is widely held on the left. After my Jacobin interview came out, for example, several people promptly informed me that I was missing this important fact. So if the evidence does not in fact support it, that is worth knowing. Or at least, future statements of the hypothesis will be stronger if they respond to the points made here.

Second, the fact that “financial” assets in fact mostly consist of goodwill, interest in unconsolidated subsidiaries, and foreign investment is interesting in its own right, not just as negative criticism of the financialization story. It is a sign of the importance of ownership claims as a means of control over production — both as the substantive content of balance sheet positions and as a core part of corporate activity.

Third, the larger importance of the story is to the question of whether nonfinancial corporations and their managers should be seen mainly as participants in, or victims of, financialization. Conversely, is finance itself a distinct social actor? In a world in which the largest nonfinancial corporations have effectively turned themselves into hedge funds, it would not make much sense to talk about a conflict between productive capital and financial capital, or to imagine them as two distinct sets of people. But in a world like the one described here, or in my previous post, where the main nexus between nonfinancial corporations and finance is payments from the former to the latter, it may indeed make sense to think of them as distinct actors, of conflicts between them, and of intervening politically on one side or the other.

Finally, to me, this paper is a model of how to do empirical work in economics. Through some historical process I’d like to understand better, economists have become obsessed with regression, to the point that in academic economics it’s become synonymous with empirics. Regression analysis starts from the idea that the data we observe is a random draw from some underlying data generating process in which a variable of interest is a function of one or more other variables. The goal of the regression is to recover the parameters of that function by observing independent or exogenous variation in the variables. But for most macroeconomic questions, we are dealing with historical processes where our goal is to understand what actually happened, and where the hypothesis of some underlying data-generating process from which historical data is drawn randomly is neither realistic nor useful. On the other hand, the economy is not a black box; we always have some idea of the mechanism linking macroeconomic variables. So we don’t need to evaluate our hypotheses by asking how probable it would be to draw the distribution we observe from some hypothetical random process; we can, and generally should, ask instead whether the historical pattern is consistent with the mechanism. Furthermore, regression analysis is generally focused on the qualitative question of whether variation in one variable can be said to cause variation in a second one; but in historical macroeconomics we are generally interested in how much of the variation in some outcome is due to various causes. So a regression approach, it seems to me, is basically unsuited to the questions addressed here. This paper is a model of what one should do instead.
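As a purely illustrative sketch of what the alternative looks like in practice (the category names and numbers below are invented for the example, not taken from Rabinovich’s paper), the decomposition approach just splits an observed change into the contributions of its components and asks whether the pattern is consistent with the proposed mechanism:

```python
# Minimal sketch of an accounting decomposition, as an alternative to regression.
# All figures are hypothetical; they only loosely echo the 40%-to-120% rise
# discussed above, and the split across categories is invented.

def contributions(start, end):
    """Share of the total change accounted for by each component."""
    total_change = sum(end.values()) - sum(start.values())
    return {k: (end[k] - start.get(k, 0.0)) / total_change for k in end}

# Hypothetical "financial" assets of nonfinancial corporations,
# expressed as ratios to nonfinancial assets at two dates.
start = {"deposits and bonds": 0.15, "FDI equity": 0.05, "unidentified": 0.20}
end   = {"deposits and bonds": 0.17, "FDI equity": 0.25, "unidentified": 0.78}

for category, share in contributions(start, end).items():
    print(f"{category}: {share:.0%} of the overall rise")
```

Nothing here requires positing a data-generating process from which history is a random draw; the question is simply whether components whose meaning we understand can account for the movement we actually observe.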

Heterodoxy and the Fly-Bottle

(I have a review in the new Review of Keynesian Economics of a collection of essays on pluralist, or non-mainstream, economics teaching. You can read the full review here. Since I doubt most readers of this blog are interested in the book, I’ve posted a shorter version of the review below – just the parts on the broader issues rather than my assessment of these particular essays.)


Wittgenstein famously described his aim in philosophy as “showing the fly the way out of the fly bottle.” The goal, he said, was not to resolve the questions posed by philosophers, but to escape them. As long as the fly is inside the bottle, understanding its contours is essential to getting it wherever it wants to go; but once the fly is outside, the shape of the bottle doesn’t matter at all.

Non-mainstream economists have a similar relationship to dominant theory. Because we’ve been taught for years that the best way to think about the economy is in terms of the exchange of goods by rational agents, criticisms of that framework are a necessary step on the way to thinking in other terms. But the logical and empirical shortcomings of thinking about economic life in terms of a perfectly rational representative agent optimizing utility over infinite future time don’t, in themselves, tell us how we should think instead.

The essentially negative character of economic heterodoxy is a special challenge for undergraduate teaching. You can’t teach criticisms of economic orthodoxy without first teaching the ideas to be criticized. Finding our way out of orthodoxy was, for many of us, central to our intellectual development. Naturally we want to reproduce that experience for our students. This leads to a style of teaching that amounts to putting the flies into the bottle so we can show them the way out. But how useful is it to our students to understand the defects of a logical system it would never have occurred to them to adopt in the first place? Having spent so much time looking for a way out, it sometimes seems we don’t know what to do in the open air.

This dilemma is on full display in The Handbook of Pluralist Economics Education. In order to present a realistic model of the economy, Steve Keen writes in one of his two chapters, “an essential first [step] is to demonstrate to students that the ostensibly well-developed and coherent traditional model is in fact an empty shell”. Many of the volume’s other contributors make similar claims. This is the spirit of Joan Robinson’s famous quip that the only reason to study economics is to avoid being fooled by economists. But if that is all we can offer, better to send our students to the departments of history, anthropology, engineering, or some other field that offers positive knowledge about social reality.

What then are we to do? Pluralism as such is not a useful guide; carried to an extreme it would, as Sheila Dow says here, amount to “anything goes,” which is not a viable basis for teaching a class (or for any other intellectual endeavor). This is a problem with pluralism as a positive value (and not only in economics teaching): Pluralism implies a number of distinct perspectives, but to be distinct they must be internally coherent, that is, unitary. Carried to an extreme, pluralism is self-undermining. To challenge the mainstream, at some point you must argue not just for the value of diversity in the abstract, but in favor of a particular alternative.

In practice, even economists who completely reject mainstream approaches in their own work often give them a large share of time in the classroom, in part because they feel obligated to prepare students for future academic work and in part, as Keen says, simply because of “the pressure to teach something”. Teaching is hard enough work even when you aren’t reconstructing the curriculum from the ground up. It’s much easier to teach a standard course and then add some critical material.

But pluralism in economics teaching doesn’t have to mean simply presenting orthodoxy and adding some criticisms of it. It could also mean approaching the material from a different angle that avoids — rather than attacks — the dominant formalisms in economics and gives students a useful set of tools for engaging with economic reality. For me, this means a focus on the definition and measurement of macroeconomic aggregates, and on the causal relationships between those aggregates. Concretely, it means reliance on flowcharts whose nodes are observable variables, as opposed to the normal emphasis on diagrams representing functional relationships — ISLM, AS-AD, etc. — that can’t be directly observed.

A more specific problem in heterodox teaching — and heterodox economics in general — is the weight put on the financial crisis as an argument for alternatives to the mainstream. Many of the authors in this collection present the crisis of 2008 and its aftermath as a decisive refutation of economic orthodoxy. Edward Fullbrook declares that “no discipline has ever experienced systemic failure on the scale that economics has today.” David Wheat, less hyperbolically, argues that “the failure to foresee the financial epidemic in 2008” demonstrates a need to shift the focus of economics teaching away from long-run equilibrium. One might push back against this line of argument. It is true that several large financial institutions went bankrupt in 2008, and some financial assets fell steeply in value, to the dismay of their owners; but with the perspective of close to a decade, it’s less clear how much of a base these events offer for critique of either the economics profession or economic institutions. Single-minded focus on “the crisis” risks implying that the problem with our economic system is the rare occasions on which it fails to work well for owners of financial assets, while ignoring the ongoing problems of inequality, hierarchy and privilege; tedious and demeaning work; environmental degradation; and the fundamental disconnect between ever-increasing money wealth and unmet human needs – none of which has much to do with the failure of Lehman Brothers. As people used to say: capitalism is the crisis.

It is true, of course, that the economics profession failed to foresee or explain the 2008 crisis, but that’s nothing special. To make a list of phenomena unexplained by orthodox economics, just open the business pages of a newspaper. In any case, while it might have been reasonable at the time to expect some degree of self-criticism in the economics profession, and some increase in openness to alternatives, seven years later it is clear that there has been neither. With a handful of exceptions – Narayana Kocherlakota is probably the most prominent in the US – mainstream economists have not revised their views in the light of the crisis; even those who were initially inclined to soul-searching have mostly decided that they were right all along. The case for heterodoxy must be made on other grounds.

Posts in Three Lines

I haven’t been blogging much lately. I’ve been doing real work, some of which will be appearing soon. But if I were blogging, here are some of the posts I might write.

*

Lessons from the 1990s. I have a new paper coming out from the Roosevelt Institute, arguing that we’re not as close to potential as people at the Fed and elsewhere seem to believe, and as I’ve been talking with people about it, it’s become clear that your priors depend a lot on how you think of the rapid growth of the 1990s. If you think it was a technological one-off, with no value as precedent — a kind of macroeconomic Bush v. Gore — then you’re likely to see today’s low unemployment as reflecting an economy working at full capacity, despite the low employment-population ratio and very weak productivity growth. But if you think the mid-90s is a possible analogue to the situation facing policymakers today, then it seems relevant that the last sustained episode of 4 percent unemployment led not to inflation but to employers actively recruiting new entrants to the labor force among students, poor people, even prisoners.

Inflation nutters. The Fed, of course, doesn’t agree: Undeterred by the complete disappearance of the statistical relationship between unemployment and inflation, they continue to see low unemployment as a threatening sign of incipient inflation (or something) that must be nipped in the bud. Whatever other effects rate increases may have, the historical evidence suggests that one definite consequence will be rising private and public debt ratios. Economists focus disproportionately on the behavioral effects of interest rate changes and ignore their effects on the existing debt stock because “thinking like an economist” means, among other things, thinking in terms of a world in which decisions are made once and for all, in response to “fundamentals” rather than to conditions inherited from the past.
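The mechanical point can be written as the standard debt-dynamics identity (the notation is mine): for any sector with debt ratio b, average nominal interest rate i on the existing stock, real income growth g, inflation π, and new net borrowing d relative to income,

\[ \Delta b = \frac{i - g - \pi}{1 + g + \pi}\, b + d. \]

Hold borrowing behavior and growth fixed, and a higher i raises the debt ratio in proportion to the inherited stock, whatever its effect on anyone’s decisions.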

An army with only a signal corps. What are those other effects, though? Arguments for doubting central bankers’ control over macroeconomic outcomes have only gotten stronger than they were in the 2000s, when they were already strong; at the same time, when the ECB says, “let the government of Spain borrow at 2 percent,” it carries only a little less force than the God of Genesis. I think we exaggerate the power of central banks over the real economy, but underestimate their power over financial markets (with the corollary that economists — heterodox as much as mainstream — see finance and real activity as much more tightly linked than they are).

It’s easy to be happy if you’re heterodox. This spring I was at a conference up at the University of Massachusetts, the headwaters of American heterodox economics, where I did my PhD. Seeing all my old friends reminded me what good prospects we in the heterodox world have – literally everyone I know from grad school has a good job. If you are wondering whether your prospects would be better at a nowhere-ranked heterodox economics program like UMass or a top-ranked program in some other social science, let me assure you, it’s the former by a mile — and you’ll probably have better drinking buddies as well.

The euro is not the gold standard. One of the topics I was talking about at the UMass conference was the euro, which, I’ve argued, was intended to create something like a new gold standard, a hard financial constraint on governments. But that that was the intention doesn’t mean it’s the reality — in practice the TARGET2 system means that national central banks don’t face any binding constraint, because unlike under the gold standard, what sits “outside” the national monetary membrane is itself a central bank. In this sense the euro is structurally more like Keynes’ proposals at Bretton Woods; it’s just not Keynes running it.

Can jobs be guaranteed? In principle I’m very sympathetic to the widespread (at least among my friends on social media) calls for a job guarantee. It makes sense as a direction of travel, implying a commitment to a much lower unemployment rate, expanded public employment, organizing work to fit people’s capabilities rather than vice versa, and increasing the power of workers vis-a-vis employers. But I have a nagging doubt: A job is contingent by its nature – without the threat of unemployment, can there even be employment as we know it?

The wit and wisdom of Haavelmo. I was talking a while back about Merijn Knibbe’s articles on the disconnect between economic theory and the national accounts with my friend Enno, and he mentioned Trygve Haavelmo’s 1944 article on The Probability Approach in Econometrics, which I’ve finally gotten around to reading. One of the big points of this brilliant article is that economic variables, and the models they enter into, are meaningful only via the concrete practices through which the variables are measured. A bigger point is that we study economics in order to “become master of the happenings of real life”: You can contribute to economics in the course of advancing a political project, or making money in financial markets, or administering a government agency (Keynes did all three), but you will not contribute if you pursue economics as an end in itself.

Coney Island. Laura and I took the boy down to Coney Island a couple days ago, a lovely day, his first roller coaster ride, rambling on the beach, a Cyclones game. One of the wonderful things about Coney Island is how little it’s changed from a century ago — I was rereading Delmore Schwartz’s In Dreams Begin Responsibilities the other day, and the title story’s description of a young immigrant couple walking the boardwalk in 1909 could easily be set today — so it’s disconcerting to think that the boy will never take his grandchildren there. It will all be under water.

I Don’t See Any Method At All

I’ve felt for a while that most critiques of economics miss the mark. They start from the premise that economics is a systematic effort to understand the concrete social phenomena we call “the economy,” an effort that has gone wrong in some way.

I don’t think that’s the right way to think about it. I think McCloskey was right to say that economics is just what economists do. Economic theory is essentially a closed formal system; it’s a historical accident that there is some overlap between its technical vocabulary and the language used to describe concrete economic phenomena. Economics, the discipline, is to the economy, the sphere of social reality, as chess theory is to medieval history: The statement, say, that “queens are most effective when supported by strong bishops” might be reasonable in both domains, but studying its application in the one case will not help at all in applying it in the other. A few years ago Richard Posner said that he used to think economics meant the study of “rational” behavior in whatever domain, but after the financial crisis he decided it should mean the study of the behavior of the economy using whatever methodologies. (I can’t find the exact quote.) Descriptively, he was right the first time; but the point is, these are two different activities. Or to steal a line from my friend Suresh, the best way to think about what most economists do is as a kind of constrained-maximization poetry. It makes no more sense to ask “is it true?” than it would of a haiku.

One consequence of this is, as I say, that radical criticisms of the realism or logical consistency of orthodox economics do nothing to get us closer to a positive understanding of the economy. How is a raven unlike a writing desk? An endless number of ways, and enumerating them will leave you no wiser about either corvids or carpentry. Another consequence, the topic of the remainder of this post, is that when we turn to concrete economic questions there isn’t really a “mainstream” at all. Left critics want to take academic orthodoxy, a right-wing political vision, and the economic policy preferred by the established authorities, and roll them into a coherent package. But I don’t think you can. I think there is a mix of common-sense opinions, political prejudices, conventional business practice, and pragmatic rules of thumb, supported in an ad hoc, opportunistic way by bits and pieces of economic theory. It’s not possible to deduce the whole tottering pile from a few foundational texts.

More concretely: An economics education trains you to think in terms of real exchange — in terms of agents who (somehow or other) have come into possession of a bundle of goods, which they trade with each other. You can only use this framework to make statements about real economic phenomena if they are understood in terms of the supply side — if economic outcomes are understood in terms of different endowments of goods, or different real uses for them. Unless you’re in a position to self-consciously take another perspective, fitting your understanding of economic phenomena into a broader framework is going to mean expressing it as this kind of story, about the limited supply of real resources available, and the unlimited demands on them to meet real human needs. But there may be no sensible story of that kind to tell.

More concretely: What are the major macroeconomic developments of the past ten to twenty years, compared, say, with the previous fifty? For the US and most other developed countries, the list might look like:

– low and falling inflation

– low and falling interest rates

– slower growth of output

– slower growth of employment

– low business investment

– slower growth of labor productivity

– a declining share of wages in income

If you pick up an economics textbook and try to apply it to the world around you, these are some of the main phenomena you’d want to explain. What does the orthodox, supply-side theory tell us?

The textbook says that lower inflation is normally the result of a positive supply shock — an increase in real resources or an improvement in technology. OK. But then what do we make of the slowdown in output and productivity?

The textbook says that, over the long run, interest rates must reflect the marginal product of capital — the central bank (and monetary factors in general) can only change interest rates in the short run, not over a decade or more. In the Walrasian world, the interest rate and the return on investment are the same thing. So a sustained decline in interest rates must mean a decline in the marginal product of capital.
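A minimal sketch of that textbook logic, using the standard Cobb-Douglas benchmark (my choice of illustration, not anything specific to a particular textbook):

\[ Y = A K^{\alpha} L^{1-\alpha}, \qquad r = \frac{\partial Y}{\partial K} = \alpha A \left( \frac{K}{L} \right)^{\alpha - 1} = \alpha\, \frac{Y}{K}. \]

On this account a sustained fall in r has to come from a lower A or a higher capital-output ratio; and in this benchmark the wage share is pinned at 1 − α no matter what, which is worth keeping in mind for what follows.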

OK. So in combination with the slowdown in output growth, that suggests a negative technological shock. But that should mean higher inflation. Didn’t we just say that lower inflation implies a positive technological shock?

Employment growth in this framework is normally determined by demographics, or perhaps by structural changes in labor markets that change the effective labor supply. Slower employment growth means a falling labor supply — but that should, again, be inflationary. And it should be associated with higher wages: If labor is becoming relatively scarce, its price should rise. Yes, the textbook combines a bargaining model of wage determination for the short run with a marginal product story for the long run, without ever explaining how they hook up, but in this case it doesn’t matter: the two stories agree. A fall in the labor supply will result in a rise in the marginal product of labor as it’s withdrawn from the least productive activities — that’s what “marginal” means! So either way, the demographic story of falling employment is inconsistent with low inflation, with a falling wage share, and with the slowdown in productivity growth.

Slower growth of labor productivity could be explained by an increase in labor supply — but then why has employment decelerated so sharply? More often it’s taken as technologically determined. Slower productivity growth then implies a slowdown in innovation — which at least is consistent with low interest rates and low investment. But this “negative technology shock” should, again, be inflationary. And it should be associated with a fall in the return to capital, not a rise.

On the other hand, the decline in the labor share is supposed to reflect a change in productive technology that encourages substitution of capital for labor, robots and all that. But how is this reconciled with the fall in interest rates, in investment and in labor productivity? To replace workers with robots, someone has to make the robots, and someone has to buy them. And by definition this raises the productivity of the remaining workers.

Which subset of these mutually incompatible stories does the “mainstream” actually believe? I don’t know that they consistently believe any of them. My impression is that people adopt one or another based on the question at hand, while avoiding any systematic analysis through violent abuse of the ceteris paribus condition.

To paraphrase Leijonhufvud, on Mondays and Wednesdays wages are low because technological progress has slowed down, holding down labor productivity. On Tuesdays and Thursdays wages are low because technological progress has sped up, substituting capital for labor. Students may come away a bit confused but the main takeaway is clear: Low wages are the result of inexorable, exogenous technological change, and not of any kind of political choice. And certainly not of weak aggregate demand.

Larry Summers, in this actually quite good Washington Post piece, at least is no longer talking about robots. But he can’t completely resist the supply-side lure: “The situation is worse in other countries with more structural issues and slower labor-force growth.” Wait, why would they be worse? As he himself says, “our problem today is insufficient inflation,” so what’s needed “is to convince people that prices will rise at target rates in the future,” which will “require … very tight markets.” If that’s true, then restrictions on labor supply are a good thing — they make it easier to generate wage and price increases. But that thought is still unthought.

I admit, Summers does go on to say:

In the presence of chronic excess supply, structural reform has the risk of spurring disinflation rather than contributing to a necessary increase in inflation.  There is, in fact, a case for strengthening entitlement benefits so as to promote current demand. The key point is that the traditional OECD-type recommendations cannot be right as both a response to inflationary pressures and deflationary pressures. They were more right historically than they are today.

That’s progress, for sure — “less right” is a step toward “completely wrong”. The next step will be to say what his argument logically requires. If the problem is as he describes it, then structural “problems” are part of the solution.