The Nonexistent Rise in Household Consumption

Did you know that about 10 percent of private consumption in the US consists of Medicare and Medicaid? Despite the fact that these are payments by the government to health care providers, they are counted by the BEA both as income and consumption spending for households.

I bet you didn’t know that. I bet plenty of people who work with the national income accounts for a living don’t know that. I know I didn’t know it, until I read this new working paper by Barry Cynamon and Steve Fazzari.

I’ve often thought that the best macroeconomics is just accounting plus history. This paper is an accounting tour de force. What they’ve done is go through the national accounts and separate out the components of household income and expenditure that represent cashflows received and made by households, from everything else.

Most people don’t realize how much of what goes into the headline measures of household income and household consumption does not actually correspond to any flow of money to or from households. In 2011 (the last year covered by the paper), personal consumption expenditure was given as just over $10 trillion. But of that, only about $7.5 trillion was money spent by households on goods and services. Of the rest, as of 2011:

– $1.2 trillion was imputed rents on owner-occupied housing. The national income and product accounts treat housing on the principle that the real output of housing should be the same whether or not the person living in the house happens to be the same person who owns it. So for owner-occupied housing, they impute an “owner equivalent rent” that the resident is implicitly paying to themselves for use of the house. This sounds reasonable, but it conflicts with another principle of the national accounts, which is that only market transactions are recorded. It also creates measurement problems, since most owned residences are single-family homes, for which there isn’t a big rental market, so the BEA has to resort to various procedures to estimate what the rent should be. One result of the procedures they use is that a rise in home prices, as in the 2000s, shows up as a rise in consumption spending on imputed rents even if no additional dollars change hands.

– $970 billion was Medicare and Medicaid payments; another $600 billion was employer purchases of group health insurance. The official measures of household consumption are constructed as if all spending on health benefits took the form of cash payments to households, which they then choose to spend on health care. This isn’t entirely crazy as applied to employer health benefits, since presumably workers do have some say in how much of their compensation takes the form of cash vs. health benefits; though one wouldn’t want to push that assumption too far. But it’s harder to justify for public health benefits. And, justifiable or not, it means the common habit of referring to personal consumption expenditure as “private” consumption needs a large asterisk.

– $250 billion was imputed bank services. The BEA assumes that people accept below-market interest on bank deposits only as a way of purchasing some equivalent service in return. So the difference between the interest households actually receive on their deposits and what they would receive at some benchmark rate is counted as consumption of banking services. (A stylized version of this imputation is sketched just after this list.)

– $400 billion in consumption by nonprofits. Nonprofits are grouped with the household sector in the national accounts. This is not necessarily unreasonable, but it creates confusion when people assume the household sector refers only to what we normally think of as households, or when people try to match up the aggregate data with surveys or other individual-level data.
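
To make the bank-services imputation concrete, here is a stylized sketch. The figures are made-up round numbers chosen only to land near the reported total; they are not the BEA’s actual reference rate, deposit stock, or method.

```python
# Stylized sketch of the imputation: "consumption" of bank services is the interest
# households forgo by holding deposits at below-market rates. All figures are
# hypothetical round numbers, not BEA data.
deposits = 8_000          # household deposits, in billions of dollars (made up)
benchmark_rate = 0.04     # rate depositors could in principle earn elsewhere (made up)
deposit_rate = 0.01       # rate actually paid on deposits (made up)

imputed_bank_services = deposits * (benchmark_rate - deposit_rate)
print(f"imputed consumption of bank services: ${imputed_bank_services:.0f} billion")
# -> $240 billion of "consumption" with no corresponding cash outlay by households
```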

Take these items, plus a bunch of smaller ones, and you have over one-quarter of reported household consumption that does not correspond to what we normally think of as consumption: market purchases of goods and services to be used by the buyer.

The adjustments are even more interesting when you look at trends over time. Medicare and Medicaid don’t just represent close to 10 percent of reported “private” consumption; they represent over three quarters of the increase in consumption over the past 50 years. More broadly, if we limit “consumption” to purchases by households, the long term rise in household consumption — taken for granted by nearly everyone, heterodox or mainstream — disappears.

By the official measure, personal consumption has risen from around 60 percent of GDP in the 1950s, 60s and 70s, to close to 70 percent today. While there are great differences in stories about why this increase has taken place, almost everyone takes for granted that it has. But if you look at Cynamon and Fazzari’s measure, which reflects only market purchases by households themselves, there is no such trend. Consumption declines steadily from 55 percent of GDP in 1950 to around 47 percent today. In the earlier part of this period, imputed rents for owner-occupied housing are by far the biggest part of the difference; but in more recent years third-party medical expenditures have become more important. Just removing public health care spending from household consumption, as shown by the pale red line in the figure, is enough to change a 9 point rise in the consumption share of GDP into a 2 point rise. In other words, around 80 percent of the long-term rise in household consumption actually consists of public spending on health care.
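
For concreteness, here is the arithmetic behind that last sentence, using the rounded figures quoted above rather than the paper’s underlying series:

```python
# Share of the rise in the consumption-to-GDP ratio accounted for by public health spending,
# using the rounded numbers in the text: a 9 point rise with it, a 2 point rise without it.
official_rise = 9.0                # rise in the PCE share of GDP, percentage points
rise_without_public_health = 2.0   # rise after removing public health care spending

public_health_share = (official_rise - rise_without_public_health) / official_rise
print(f"{public_health_share:.0%}")   # -> 78%, i.e. the "around 80 percent" in the text
```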

In our “Fisher dynamics” paper, Arjun Jayadev and I showed that the rise in debt-income ratios for the household sector is not due to any increase in household borrowing, but can be entirely explained by higher interest rates relative to income growth and inflation. For that paper, we wanted to adjust reported income in the way that Fazzari and Cynamon do here, but we didn’t make a serious effort at it. Now with their data, we can see that not only does the rise in household debt have nothing to do with any household decisions; neither does the rise in consumption. What’s actually happened over recent decades is that household consumption as a share of income has remained roughly constant. Meanwhile, on the one hand, disinflation and high interest rates have increased debt-income ratios; on the other hand, increased public health care spending and, in the 2000s, high home prices have increased reported household consumption. But these two trends have nothing to do with each other, or with any choices made by households.

There’s a common trope in left and heterodox circles that macroeconomic developments in recent decades have been shaped by “financialization.” In particular, it’s often argued that the development of new financial markets and instruments for consumer credit has allowed households to choose higher levels of consumption relative to income than they otherwise would. This is not true. Rising debt over the past 30 years is entirely a matter of disinflation and higher interest rates; there has been no long run increase in borrowing. Meanwhile, rising consumption really consists of increased non-market activity — direct provision of housing services through owner-occupied housing, and public provision of health services. This is if anything a kind of anti-financialization.

The Fazzari and Cynamon paper has radical implications, despite its moderate tone. It’s the best kind of macroeconomics. No models. No econometrics. Just read the damn tables, and think about what the numbers mean.

Liquidity Preference and Solidity Preference in the 19th Century

So I’ve been reading Homer and Sylla’s History of Interest Rates. One of the many fascinating things I’ve learned is that in the market for federal debt, what we today call an inverted yield curve was at one time the norm.

From the book:

Three small loans floated in 1820–1821, principally to permit the continued redemption of high rate war loans, provide an interesting clue to investor preference… These were: 

$4.7 million “5s of 1820,” redeemable in 1832; sold at 100 = 5%.
“6s of 1820,” redeemable at pleasure of United States; sold at 102 = 5.88%.
“5s of 1821,” redeemable in 1835; sold at 105 1/8 = 4.50%, and at 108 = 4.25%.

The yield was highest for the issue with early redemption risk and much lower for those with later redemption risks.

Nineteenth century government bonds were a bit different from modern bonds, in that the principal was repaid at the option of the borrower; repayment was usually not permitted until a certain date. [1] They were also sold with a fixed interest rate in terms of face value — that’s what the “5” and “6” refer to — but the actual yield depended on the discount or premium they were sold at. The important thing for our purposes is that the further away the earliest possible date of repayment was, the lower the interest rate — the opposite of the modern term premium. That’s what the passage above is saying.
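
To make the price-yield arithmetic concrete, here is a minimal sketch of the current-yield calculation implied by the quotations above. For bonds sold near par this reproduces the quoted figures; for larger premiums, Homer and Sylla’s yields presumably also account for eventual redemption at face value, which this simple calculation ignores.

```python
# Current yield: the fixed coupon, set as a percentage of face value (the "5s" and "6s"),
# divided by the price actually paid.
def current_yield(coupon_rate, price, face=100.0):
    return coupon_rate * face / price

print(f"6s sold at 102: {current_yield(0.06, 102):.2%}")   # ~5.88%, as in the quote above
print(f"5s sold at 100: {current_yield(0.05, 100):.2%}")   # 5.00%, as in the quote above
```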

The pattern isn’t limited to the 1820-21 bonds, either; it seems to exist through most of the 19th century, at least for the US. It’s the same with the massive borrowing during the Civil War:

In 1864, although the war was approaching its end, it had only been half financed. The Treasury was able to sell a large volume of bonds, but not at such favorable terms as the market price of its seasoned issues might suggest. Early in the year another $100 million of the 5–20s [bonds with a minimum maturity of 5 years and a maximum of 20] were sold and then a new longer issue was sold as follows: 

1864—$75 million “6s”  redeemable in 1881, tax-exempt; sold at 104.45 = 5.60%. 

The Treasury soon made an attempt to sell 5s, which met with a lukewarm reception. In order to attract investors to the lower rate the Treasury extended the term to redemption from five to ten years and the maturity from twenty to forty years

1864—$73 million “5%, 10–40s of 1864,” redeemable 1874, due in 1904, tax-exempt; sold at 100 = 5%.

Isn’t that striking? The Treasury couldn’t get investors to buy its shorter bonds at an acceptable rate, so they had to issue longer bonds instead. You wouldn’t see that story today.

The same pattern continues through the 1870s, with the new loans issued to refinance the Civil War debt. The first issue of bonds, redeemable in five to ten years, sold at an interest rate of 5%; the next issue, redeemable in 13-15 years, sold at 4.5%; and the last issue, redeemable in 27-29 years, sold at 4%. And it doesn’t seem like this is about expectations of a change in rates, as with a modern inverted yield curve. Investors simply were more worried about being stuck with uninvestable cash than about being stuck with unsaleable securities. This is a case where “solidity preference” dominates liquidity preference.

One possible way of explaining this is in terms of Axel Leijonhufvud’s account of the yield curve.

The conventional story for why long loans normally have higher interest rates than short ones is that longer loans impose greater risks on lenders. They may not be able to convert the loan to cash if they need to make some payment before it matures, and they may suffer a capital loss if interest rates change during the life of the loan. But this can’t be the whole story, because short loans create the symmetric risk of not knowing what alternative asset will be available when the loan matures. In the one case, the lender risks a capital loss, but in the other case they risk getting a lower income. Why is “capital uncertainty” a greater concern than “income uncertainty”?
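
A stylized example (my own round numbers, not from the text) of the two risks being compared: a lender holding a perpetuity with a fixed coupon faces capital uncertainty, while a lender rolling over short loans faces income uncertainty.

```python
# Capital uncertainty vs. income uncertainty, for a hypothetical perpetuity paying $5 a year.
coupon = 5.0
initial_rate = 0.05
initial_price = coupon / initial_rate               # 100: price of the perpetuity at 5%

for new_rate in (0.04, 0.05, 0.06):
    long_holder_wealth = coupon / new_rate           # long lender: income fixed, price moves
    short_lender_income = initial_price * new_rate   # short lender: wealth fixed, income moves
    print(f"rates at {new_rate:.0%}: perpetuity worth {long_holder_wealth:.0f}, "
          f"short lender earns {short_lender_income:.1f} a year")
# A rise to 6% costs the long holder about 17% of his capital; a fall to 4% costs the
# short lender a fifth of his income. The question is why markets weight the first risk more.
```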

The answer, Leijonhufvud suggests, lies in

Keynes’ … “Vision” of a world in which currently active households must, directly or indirectly, hold their net worth in the form of titles to streams that run beyond their consumption horizon. The duration of the relevant consumption plan is limited by the sad fact that “in the Long Run, we are all dead.” But the great bulk of the “Fixed Capital of the modern world” is very long-term in nature and is thus destined to survive the generation which now owns it. This is the basis for the wealth effect of changes in asset values.

The interesting point about this interpretation of the wealth effect is that it also provides a price-theoretical basis for Keynes’ Liquidity Preference theory. … Keynes’ (as well as Hicks’) statement of this hypothesis has been repeatedly criticized for not providing any rationale for the presumption that the system as a whole wants to shed “capital uncertainty” rather than “income uncertainty.” But Keynes’ mortal consumers cannot hold land, buildings, corporate equities, British consols, or other permanent income sources “to maturity.” When the representative, risk-averting transactor is nonetheless induced by the productivity of roundabout processes to invest his savings in such income sources, he must be resigned to suffer capital uncertainty. Forward markets will therefore generally show what Hicks called a “constitutional weakness” on the demand side.

I would prefer not to express this in terms of households’ consumption plans. And I would emphasize that the problem with wealth in the form of long-lived production processes is not just that it produces income far into the future, but that wealth in this form is always in danger of losing its character as money. Once capital is embodied in a particular production process and the organization that carries it out, it tends to evolve into the means of carrying out that organization’s intrinsic purposes, instead of the capital’s own self-expansion. But for this purpose, the difference doesn’t matter; either way, the problem only arises once you have, as Leijonhufvud puts it, “a system ‘tempted’ by the profitability of long processes to carry an asset stock which turns over more slowly than [wealth owners] would otherwise want.”

The temptation of long-lived production processes is inescapable in modern economies, and explains the constant search for liquidity. But in the pre-industrial United States? I don’t think so. Long-lived means of production were much less important, and to the extent they did exist, they weren’t an outlet for money-capital. Capital’s role in production was to finance stocks of raw materials, goods in process and inventories. There was no such thing, I don’t think, as investment by capitalists in long-lived capital goods. And even land — the long-lived asset in most settings — was not really an option, since it was abundant. The early United States is something like Samuelson’s consumption-loan world, where there is no good way to convert command over current goods into future production. [2] So there is excess demand rather than excess supply for long-lasting sources of income.

The switch over to positive term premiums comes early in the 20th century. By the 1920s, short-term loans in the New York market consistently have lower rates than corporate bonds, and 3-month Treasury bills have rates below longer bonds. Of course the organization of financial markets changed quite a lot in this period too, so one wouldn’t want to read too much into this timing. But it is at least consistent with the Leijonhufvud story. Liquidity preference becomes dominant in financial markets only once there has been a decisive shift toward industrial production by long-lived firms using capital-intensive techniques, and once claims on those firms have become a viable outlet for money-capital.

* * *

A few other interesting points about 19th century US interest rates. First, they were remarkably stable, at least before the 1870s. (This fits with the historical material on interest rates that Merijn Knibbe has been presenting in his excellent posts at Real World Economics Review.)

Second, there’s no sign of a Fisher equation. Nominal interest rates do not respond to changes in the price level, at all. Homer and Sylla mention that in earlier editions of the book, which dealt less with the 20th century, the concept of a “real” interest rate was not even mentioned.

As you can see from this graph, none of the major inflations or deflations between 1850 and 1960 had any effect on nominal interest rates. The idea that there is a fundamentals-determined “real” interest rate while the nominal rate adjusts in response to changes in the price level, clearly has no relevance outside the past 50 years. (Whether it describes the experience of the past 50 years either is a question for another time.)

Finally, there is no sign of “crowding out” of private by public borrowing. It is true that the federal government did have to pay somewhat higher rates during the periods of heavy borrowing (and of course also political uncertainty) in the War of 1812 and the Civil War. But rates for other borrowers didn’t budge. And on the other hand, the surpluses that resulted in the redemption of the entire debt in the 1830s didn’t deliver lower rates for other borrowers. Homer and Sylla:

Boston yields were about the same in 1835, when the federal debt was wiped out, as they were in 1830; this reinforces the view that there was little change in going rates of long-term interest during this five-year period of debt redemption.

If government borrowing really raises rates for private borrowers, you ought to see it here, given the absence of a central bank for most of this period and the enormous scale of federal borrowing during the Civil War. But you don’t.

[1] It seems that most, though not all, bonds were repaid at the earliest possible redemption date, so it is reasonably similar to the maturity of a modern bond.

[2] Slaves are the big exception. So the obvious test for the argument I am making here would be to find the modern pattern of term premiums in the South. Unfortunately, Homer and Sylla aren’t any help on this — it seems the only local bond markets in this period were in New England.

Borrowing ≠ Debt

There’s a common shorthand that makes “debt” and “borrowing” interchangeable. The question of why an economic unit had rising debt over some period, is treated as equivalent to the question of why it was borrowing more over that period, or why its expenditure was higher relative to its income. This is a natural way of talking, but it isn’t really correct.

The point of Arjun’s and my paper on debt dynamics was to show that for household debt, borrowing and changes in debt don’t line up well at all. While some periods of rising household leverage — like the housing bubble of the 2000s — were also periods of high household borrowing, only a small part of longer-term changes in household debt can be explained this way. This is because interest, income growth and inflation rates also affect debt-income ratios, and movements in these other variables often swamp any change in household borrowing.
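
The mechanics here are just the standard law of motion for a debt-income ratio. In its usual linear approximation (a sketch of the accounting, not the exact specification in the paper),

$$\Delta b_t \;\approx\; d_t + (i - g - \pi)\, b_{t-1},$$

where b is the debt-income ratio, d is net new borrowing relative to income, i is the average interest rate on the existing debt, g is real income growth and π is inflation. So even with borrowing unchanged, the ratio drifts upward whenever i exceeds g + π.
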
As far as I know, we were the first people to make this argument in a systematic way for household debt. For government debt, it’s a bit better known — but only a bit. People like Willem Buiter or Jamie Galbraith do point out that the fall in US debt after World War II had much more to do with growth and inflation than with large primary surpluses. You can find the argument more fully developed for the US in papers by Hall and Sargent  or Aizenman and Marion, and for a large sample of countries by Abbas et al., which I’ve discussed here before. But while many of the people making it are hardly marginal, the point that government borrowing and government debt are not equivalent, or even always closely linked, hasn’t really made it into the larger conversation. It’s still common to find even very smart people saying things like this:

We didn’t have anything you could call a deficit problem until 1980. We then saw rising debt under Reagan-Bush; falling debt under Clinton; rising under Bush II; and a sharp rise in the aftermath of the financial crisis. This is not a bipartisan problem of runaway deficits! 

Note how the terms “deficits” and “rising debt” are used interchangeably; and though the text mostly says deficits, the chart next to this passage shows the ratio of debt to GDP.

What we have here is a kind of morality tale where responsible policy — keeping government spending in line with revenues — is rewarded with falling debt; while irresponsible policy — deficits! — gets its just deserts in the form of rising debt ratios. It’s a seductive story, in part because it does have an element of truth. But it’s mostly false, and misleading. More precisely, it’s about one quarter true and three quarters false.

Here’s the same graph of federal debt since World War II, showing the annual change in debt ratio (red bars) and the primary deficit (black bars), both measured as a fraction of GDP. (The primary deficit is the difference between spending other than interest payments and revenue; it’s the standard measure of the difference between current expenditure and current revenue.) So what do we see?

It is true that the federal government mostly ran primary surpluses from the end of the war until 1980, and more generally, that periods of surpluses were mostly periods of falling debt, and conversely. So it might seem that using “deficits” and “rising debt” interchangeably, while not strictly correct, doesn’t distort the picture in any major way. But it does! Look more carefully at the 1970s and 1980s — the black bars look very similar, don’t they? In fact, deficits under Reagan were hardly larger than under Ford and Carter — a cumulative 6.2 percent of GDP over 1982-1986, compared with 5.6 percent of GDP over 1975-1978. Yet the debt-GDP ratio rose by 8 points (from 32 to 40) in the first episode, but by just a single point (from 24 to 25) in the second. Why did debt increase in the 1980s but not in the 1970s? Because in the 1980s the interest rate on federal debt was well above the economy’s growth rate, while in the 1970s it was well below it. In that precise sense, if debt is a problem it very much is a bipartisan one; Volcker was the appointee of both Carter and Reagan.

Here’s the same data by decades, and for the pre- and post-1980 periods and some politically salient subperiods. The third column shows the part of debt changes not explained by the primary balance. This corresponds to what Arjun and I call “Fisher dynamics” — the contribution of growth, inflation and interest rates to changes in leverage. [1] The units are percent of GDP.

Totals by Decade

            Primary Deficit   Change in Debt   Residual Debt Change
1950s            -8.6            -29.6              -20.9
1960s            -7.3            -17.7              -10.4
1970s             2.8             -1.7               -4.6
1980s             3.3             16.0               12.7
1990s           -15.9             -7.3                8.6
2000s            23.7             27.9                4.2

Annual averages

            Primary Deficit   Change in Debt   Residual Debt Change
1947-1980        -0.7             -2.0               -1.2
1981-2011         0.1              1.3                1.2
  1981-1992       0.3              1.8                1.5
  1993-2000      -2.7             -1.6                1.1
  2001-2008      -0.1              0.8                0.9
  2009-2011       7.3              8.9                1.6
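
For anyone who wants to check the arithmetic, the third column is just the difference of the first two. A minimal sketch using the decade totals above (small discrepancies are rounding):

```python
# Residual debt change = change in debt ratio minus cumulative primary deficit,
# i.e. the part of the change in debt not explained by the primary balance.
decade_totals = {            # decade: (primary deficit, change in debt), percent of GDP
    "1950s": (-8.6, -29.6),
    "1960s": (-7.3, -17.7),
    "1970s": (2.8, -1.7),
    "1980s": (3.3, 16.0),
    "1990s": (-15.9, -7.3),
    "2000s": (23.7, 27.9),
}
for decade, (primary_deficit, debt_change) in decade_totals.items():
    residual = debt_change - primary_deficit
    print(f"{decade}: {residual:+.1f}")
# -> -21.0, -10.4, -4.5, +12.7, +8.6, +4.2, matching the table up to rounding
```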

Here again, we see that while the growth of debt looks very different between the 1970s and 1980s, the behavior of deficits does not. Despite Reagan’s tax cuts and military buildup, the overall relationship between government revenues and expenditures was essentially the same in the two decades. Practically all of the acceleration in debt growth in the 1980s compared with the 1970s is due to higher interest rates and lower inflation.

Over the longer run, it is true that there is a shift from primary surpluses before 1980 to primary deficits afterward. (This is different from our finding for households, where borrowing actually fell after 1980.) But the change in fiscal balances is less than 25 percent of the change in debt growth. In other words, the shift toward deficit spending, while real, only accounts for a quarter of the change in the trajectory of the federal debt. This is why I said above that the morality-tale version of the rising debt story is a quarter right and three quarters wrong.

By the way, this is strikingly consistent with the results of the big IMF study on the evolution of government debt ratios around the world. Looking at 60 episodes of large increases in debt-GDP ratios over the 20th century, they find that only about a third of the average increase is accounted for by primary deficits. [2] For episodes of falling debt, the role of primary surpluses is somewhat larger, especially in Europe, but if we focus on the postwar decades specifically then, again, primary surpluses accounted for only about a third of the average fall. So while the link between government debt and deficits has been a bit weaker in the US than elsewhere, it’s quite weak in general.

So. Why should we care?

Most obviously, you should care if you’re worried about government debt. Now maybe you shouldn’t worry. But if you do think debt is a problem, then you are looking in the wrong place if you think holding down government borrowing is the solution. What matters is holding down i – (g + π) — that is, keeping interest rates low relative to growth and inflation. And while higher growth may not be within reach of policy, higher inflation and lower interest rates certainly are.
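
Here is a minimal sketch of that point, with made-up round numbers rather than estimates for any actual period: hold the primary balance at zero and let only i, g and π differ between the two cases.

```python
# Debt-ratio dynamics with a balanced primary budget: the ratio still rises or falls
# depending on whether i is above or below g + pi. All rates here are hypothetical.
def debt_ratio_path(b0, i, g, pi, years, primary_deficit=0.0):
    b = b0
    for _ in range(years):
        b = b * (1 + i) / ((1 + g) * (1 + pi)) + primary_deficit
    return b

# Start from a 30 percent debt-GDP ratio in both cases and run for a decade.
print(f"i below g + pi: {debt_ratio_path(0.30, i=0.04, g=0.03, pi=0.06, years=10):.2f}")  # ~0.18
print(f"i above g + pi: {debt_ratio_path(0.30, i=0.08, g=0.03, pi=0.03, years=10):.2f}")  # ~0.36
```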

Even if you insist on worrying not just about government debt but about government borrowing, it’s important to note that the cumulative deficits of 2009-2011, at 22 percent of GDP, were exactly equal to the cumulative surpluses over the Clinton years, and only slightly smaller than the cumulative primary surpluses over the whole period 1947-1979. So if for whatever reason you want to keep borrowing down, policies to avoid deep recessions are more important than policies to control spending and raise revenue.

More broadly, I keep harping on this because I think the assumption that the path of government debt is the result of government borrowing choices is symptomatic of a larger failure to think clearly about this stuff. Most practically, the idea that the long-run “sustainability” of the debt requires efforts to control government borrowing — an idea which goes unquestioned even at the far liberal-Keynesian end of the policy spectrum — is a serious fetter on proposals for more stimulus in the short run, and is a convenient justification for all sorts of appalling ideas. And in general, I just reject the whole idea of responsibility. It’s ideology in the strict sense — treating the conditions of existence of the dominant class as if they were natural law. Keynes was right to see this tendency to view all of life through a financial lens — to see saving and accumulating as the highest goals in life, to think we should forego real goods to improve our financial position — as “one of those semicriminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease.”

On a methodological level, I see reframing the question of the evolution of debt in terms of the independent contributions of primary deficits, growth, inflation and interest rates as part of a larger effort to think about the economy in historical, dynamic terms, rather than in terms of equilibrium. But we’ll save that thought for another time.

The important point is that, historically, changes in government borrowing have not been the main factor in the evolution of debt-GDP ratios. Acknowledging that fact should be the price of admission to any serious discussion of fiscal policy.

[1] Strictly speaking, debt ratios can change for reasons other than either the primary balance or Fisher dynamics, such as defaults or the effects of exchange rate movements on foreign-currency-denominated debt. But none of these apply to the postwar US.

[2] The picture is a bit different from the US, since adverse exchange-rate movements are quite important in many of these episodes. But it remains true that high deficits are the main factor in only a minority of large increases in debt-GDP ratios.

Graeber Cycles and the Wicksellian Judgment Day

So it’s halfway through the semester, and I’m looking over the midterms. Good news: Learning has taken place.

One of the things you hope students learn in a course like this is that money consists of three things: demand deposits (checking accounts and the like), currency and bank reserves. The first is a liability of private banks, the latter two are liabilities of the central bank. That money is always someone’s liability — a debt — is often a hard thing for students to get their heads around, so one can end up teaching it a bit catechistically. Balance sheets, with their absolute (except for the exceptions) and seemingly arbitrary rules, can feel a bit like religious formulas. On this test, the question about the definition of money was one of the few that didn’t require students to think.

But when you do think about it, it’s a very strange thing. What we teach as just a fact about the world, is really the product of — or rather, a moment in — a very specific historical evolution. We are lumping together two very different kinds of “money.” Currency looks like classical money, like gold; but demand deposits do not. The most obvious difference, at least in the context of macroeconomics, is that one is exogenous (or set by policy) and the other endogenous. We paper this over by talking about reserve requirements, which allow the central bank to set “the” money supply to determine “the” interest rate. But everyone knows that reserve requirements are a dead letter and have been for decades, probably. While monetarists like Nick Rowe insist that there’s something special about currency — they have to, given the logic of their theories — in the real world the link between the “money” issued by central banks and the “money” that matters for the economy has attenuated to imperceptible gossamer, if it hasn’t been severed entirely. The best explanation for how conventional monetary policy works today is pure convention: With the supply of money entirely in the hands of private banks, policy is effective only because market participants expect it to be effective.

In other words, central banks today are like the Chinese emperor Wang Wei-Shao in the mid-1960s film Genghis Khan:

One of the film’s early scenes shows the exquisitely attired emperor, calligraphy brush in hand, elegantly composing a poem. With an ethereal self-assurance born of unquestioning confidence in the divinely ordained course of worldly affairs, he explains that the poem’s purpose is to express his displeasure at the Mongol barbarians who have lately been creating a disturbance on the empire’s western frontier, and, by so doing, cause them to desist.

Today expressions of intentions by leaders of the world’s major central banks typically have immediate repercussions in financial markets… Central bankers’ public utterances … regularly move prices and yields in the financial markets, and these financial variables in turn affect non-financial economic activity… Indeed, a widely shared opinion today is that central bank need not actually do anything. … 

In truth the ability of central banks to affect the evolution of prices and output … [is] something of a mystery. … Each [explanation of their influence] … turns out to depend on one or another of a series of by now familiar fictions: households and firms need currency to purchase goods and services; banks can issue only reserve-bearing liabilities; no non-bank financial institutions create credit; and so on. 

… at a practical level, there is today [1999] little doubt that a country’s monetary policy not only can but does largely determine the evolution of its price level…, and almost as little doubt that monetary policy exerts significant influence over … employment and output… Circumstances change over time, however, and when they do the fictions that once described matters adequately may no longer do so. … There may well have been a time when the might of the Chinese empire was such that the mere suggestion of willingness to use it was sufficient to make potential invaders withdraw.

What looked potential a dozen years ago is now actual, if it wasn’t already then. It’s impossible to tell any sensible macroeconomic story that hinges on the quantity of outside money. The shift in our language from money, which can be measured — that one could formulate a “quantity theory” of — to discussions of liquidity, still a noun but now not a tangible thing but a property that inheres in different assets to different degrees, is a key diagnostic. And liquidity is a result of the operations of the financial system, not a feature of the natural world or a dial that can be set by the central bank. In 1820 or 1960 or arguably even in 1990 you could tell a kind of monetarist story that had some purchase on reality. Not today. But, and this is my point! it’s not a simple before-and-after story. Because, not in 1890 either.

David Graeber, in his magisterial Debt: The First 5,000 Years [1], describes a very long alternation between world economies based on commodity money and world economies based on credit money. (Graeber’s idiolect is money and debt; let’s use here the standard terms.) The former is anonymous, universal and disembedded, corresponds to centralized states and extensive warfare, and develops alongside those other great institutions for separating people from their social contexts, slavery and bureaucracy. [2] Credit, by contrast, is personal, particular, and unavoidably connected with specific relationships and obligations; it corresponds to decentralized, heterogeneous forms of authority. The alternation between commodity-money systems, with their transcendental, monotheistic religious-philosophical superstructures, and credit systems, with their eclectic, immanent, pantheistic superstructures, is, in my opinion, the heart of Debt. (The contrast between medieval Christianity, with its endless mediations by saints and relics and the letters of Christ’s name, and modern Christianity, with just you and the unknowable Divine, is paradigmatic.) Alternations not cycles, since there is no theory of the transition; probably just as well.

For Graeber, the whole half-millennium from the 16th through the 20th centuries is a period of the dominion of money, a dominion only now — maybe — coming to an end. But closer to ground level, there are shorter cycles. This comes through clearly in Axel Leijonhufvud’s brilliant short essay on Wicksell’s monetary theory, which is really the reason this post exists. (h/t David Glasner, I think Ashwin at Macroeconomic Resilience.) Among a whole series of sharp observations, Leijonhufvud makes the point that the past two centuries have seen several swings between commodity (or quasi-commodity) money and credit money. In the early modern period, the age of Adam Smith, there really was a (commodity) money economy, you could talk about a quantity of money. But even by the time of Ricardo, who first properly formalized the corresponding theory, this was ceasing to be true (as Wicksell also recognized), and by the later 19th century it wasn’t true at all. The high gold standard era (1870-1914, roughly) really used gold only for settling international balances between central banks; for private transactions, it was an age not of gold but of bank-issued paper money. [3]

If I somehow found myself teaching this course in the 18th century, I’d explain that money means gold, or gold and silver. But by the mid 19th century, if you asked people about the money in their pocket, they would have pulled out paper bills, not so unlike bills of today — except they very likely would have been bills issued by private banks.

The new world of bank-created money worried classical economists like Wicksell, who, like later monetarists, were strongly committed to the idea that the overall price level depends on the amount of money in circulation. The problem is that in a world of pure credit money, it’s impossible to base a theory of the price level on the relationship between the quantity of money and the level of output, since the former is determined by the latter. Today we’ve resolved this problem by just giving up on a theory of the price level, and focusing on inflation instead. But this didn’t look like an acceptable solution before World War II. For economists then — for any reasonable person — a trajectory of the price level toward infinity was an obvious absurdity that would inevitably come to a halt, disastrously if followed too far. Whereas today, that trajectory is the precise definition of price stability, that is, stable inflation. [4] Wicksell was part of an economics profession that saw explaining the price level as a, maybe the, key task; but he had no doubt that the trend was toward an ever-diminishing role for gold, at least domestically, leaving the money supply in the hands of the banks and the price level frighteningly unmoored.

Wicksell was right. Or at least, he was right when he wrote, a bit before 1900. But a funny thing happened on the way to the world of pure credit money. Thanks to new government controls on the banking system, the trend stopped and even reversed. Leijonhufvud:

Wicksell’s “Day of Judgment” when the real demand for the reserve medium would shrink to epsilon was greatly postponed by regime changes already introduced before or shortly after his death [in 1926]. In particular, governments moved to monopolize the note issue and to impose reserve requirements on banks. The control over the banking system’s total liabilities that the monetary authorities gained in this way greatly reduced the potential for the kind of instability that preoccupied Wicksell. It also gave the Quantity Theory a new lease of life, particularly in the United States.

But although Judgment Day was postponed it was not cancelled. … The monetary anchors on which 20th century central bank operating doctrines have relied are giving way. Technical developments are driving the process on two fronts. First, “smart cards” are circumventing the governmental note monopoly; the private sector is reentering the business of supplying currency. Second, banks are under increasing competitive pressure from nonbank financial institutions providing innovative payment or liquidity services; reserve requirements have become a discriminatory tax on banks that handicap them in this competition. The pressure to eliminate reserve requirements is consequently mounting. “Reserve requirements already are becoming a dead issue.”

The second bolded sentence makes a nice point. Milton Friedman and his followers are regarded as opponents of regulation, supporters of laissez-faire, etc. But to the extent that the theory behind monetarism ever had any validity (or still has any validity in its present guises) it is precisely because of strict government control over credit creation. It’s an irony that textbooks gloss over when they treat binding reserve requirements and the money multiplier as if they were facts of nature.

(That’s more traditional textbooks. Newer textbooks replace the obsolete story that the central bank controls interest rates by setting the money supply with a new story that the central bank sets the interest rate by … look, it just does, ok? Formally this is represented by replacing the old upward sloping LM curve with a horizontal MP (for monetary policy) line at the interest rate chosen by the central bank. The old story was artificial and, with respect to recent decades, basically wrong, but it did have the virtue of recognizing that the interest rate is determined in financial markets, and that monetary policy has to operate by changing the supply of liquidity. In the up-to-date modern version, policy might just as well operate by calligraphy.)

So, in the two centuries since Heinrich von Storch lectured the young Grand Dukes of Russia on the economic importance of “precious metals and fine jewels,” capitalism has gone through two full Graeber cycles, from commodity money to credit money, back to (pseudo-)commodity money and now to credit money again. It’s a process that proceeds unevenly; both the reality and the theory of money are uncomfortable hybrids of the two. But reality has advanced further toward the pure credit pole than theory has.

This time, will it make it all the way? Is Leijonhufvud right to suggest that Wicksell’s Day of Judgment was deferred but not canceled, and now is at hand?

Certainly the impotence of conventional monetary policy even before the crisis is a serious omen. And it’s hard to imagine a breakdown of the credit system that would force a return to commodity money, as in, say, medieval China. But on the other hand, it is not hard to imagine a reassertion of the public monopoly on means of payment. Indeed, when you think about it, it’s hard to understand why this monopoly was ever abandoned. The practical advantages of smart cards over paper tokens are undeniable, but there’s no reason that the cards shouldn’t have been public goods just like the tokens were. (For Graeber’s spiritual forefather Karl Polanyi, money, along with land and labor, was one of the core social institutions that could not be treated as commodities without destroying the social fabric.) The evolution of electronic money from credit cards looks contingent, not foreordained. Credit cards are only one of several widely-used electronic means of payment, and there’s no obvious reason why they and not one of the ones issued by public entities should have been adopted universally. This is, after all, an area with extremely strong network externalities, where lock-in is likely. Indeed, in the Benjamin Friedman article quoted above, he explicitly suggests that subway cards issued by the MTA could just as easily have developed into the universal means of payment. After all, the “pay community” of subway riders in New York is even more extensive than the pay community of taxpayers, and there was probably a period in the 1990s when more people had subway cards in their wallets than had credit or debit cards. What’s more, the MTA actually experimented with distributing subway card-reading machines to retailers to allow the cards to be used like, well, money. The experiment was eventually abandoned, but there doesn’t seem to be any reason why it couldn’t have succeeded; even today, with debit/credit cards much more widespread than two decades ago, many campuses find it advantageous to use college-issued smart cards as a kind of local currency.

These issues were touched on in the debate around interchange fees that rocked the econosphere a while back. (Why do checks settle at par — what I pay is exactly what you get — but debit and credit card transactions do not? Should we care?) But that discussion, while useful, could hardly resolve the deeper question: Why have we allowed means of payment to move from being a public good to a private oligopoly? In the not too distant past, if I wanted to give you some money and you wanted to give me a good or service, we didn’t have to pay any third party for permission to make the trade. Now, most of the time, we do. And the payments are not small; monetarists used to (still do?) go on about the “shoe leather costs” of holding more cash as a serious reason to worry about inflation, but no sane person could imagine those costs could come close to five percent of retail spending. And that’s not counting the inefficiencies. This is a private sales tax that we allow to be levied on almost every transaction,  just as distortionary and just as regressive as other sales taxes but without the benefit of, you know, funding public services. The more one thinks about it, the stranger it seems. Why, of all the expansions of public goods and collective provision won over the past 100 or 200 years, is this the one big one that has been rolled back? Why has this act of enclosure apparently not even been noticed, let alone debated? Why has the modern equivalent of minting coinage — the prerogative of sovereigns for as long as there’ve been any — been allowed to pass into the hands of Visa and MasterCard, with neoliberal regimes not just allowing but actively encouraging it?

The view of the mainstream — which in this case stretches well to the left of Krugman and DeLong, and on the right to everyone this side of Ron Paul — is that, whatever the causes of the crisis and however the authorities should or do respond, eventually we will return to the status quo ante. Conventional monetary policy may not be effective now, but there’s no reason to doubt that it will one day get back to so being. I’m not so sure. I think people underestimate the extent to which modern central banking depended on a public monopoly on means of payment, a monopoly that arose — was established — historically, and has now been allowed to lapse. Christina Romer’s Berkeley speech on the glorious counterrevolution in macroeconomic policy may have been anti-perfectly timed not just because it was given months before the beginning of the worst recession in 70 years, but because it marked the end of the period in which the body of theory and policy that she was extolling applied.

[1] Information wants to be free. If there’s a free downloadable version of a book out there, that’s what I’m going to link to. But assuming some bank has demand deposits payable to you on the liability side of its balance sheet (i.e. you’ve got the money), this is a book you ought to buy.

[2] In pre-modern societies a slave is simply someone all of whose kinship ties have been extinguished, and is therefore attached only to the household of his/her master. They were not necessarily low in status or living standards, and they weren’t distinguished by being personally subordinated to somebody, since everyone was. And slavery certainly cannot be defined as a person being property, since, as Graeber shows, private property as we know it is simply a generalization of the law of slavery.

[3] A point also emphasized by Robert Triffin in his essential paper Myths and Realities of the So-Called Gold Standard.

[4] Which is a cautionary tale for anyone who thinks that an economic process that involves some ratio diverging to infinity is by definition unsustainable. Physiocrats thought a trajectory of the farming share of the population toward zero was an absolute absurdity, and that in practice it could certainly not fall below half. They were wrong; and more generally, capitalism is not an equilibrium process. There may be seven unsustainable processes out there, or even more, but you cannot show it simply by noting that the trend of some ratio will take it outside its historic range.

UPDATE: Nick Rowe has a kind of response which, while I don’t agree with it, lays out the case against regarding money as a liability very clearly. I have a long comment there, of which the tl;dr is that we should be thinking — both logically and chronologically — of central bank money evolving from private debt contracts, not from gold currency. I don’t know if Nick read the Leijonhufvud piece I quote here, but the point that it makes is that writing 100-odd years ago, Wicksell started from exactly the position Nick takes now, and then observed how it breaks down with modern (even 1900-era modern) financial systems.

Also, the comments below are exceptionally good; anyone who read this post should definitely read the comments as well.

Trade: The New Normal Was the Old Normal Too

Matthew Yglesias is puzzled by

the fundamental weirdness of having so much savings flowing uphill from poor, fast-growing countries into the rich, mature economy of the United States. It ought to be the case that people in fast-growing countries are eager to consume more than they produce, knowing that they’ll be much richer in the near future. And it ought to be the case that people in rich countries are eager to invest in poor ones seeking higher returns. But it’s not what was happening pre-crisis and it’s not what’s been happening post-crisis.

He should have added: And it’s not what’s ever happened.

I’m not sure what “ought” is doing in this passage. If it expresses pious hope, fine. But if it’s supposed to be a claim about what’s normal or usual, as the contrast with “weirdness” would suggest, then it just ain’t so. Sure, in some very artificial textbook models savings flow from rich countries to poor ones. But it has never been the case, since the world economy came into being in the 19th century, that unregulated capital flows have behaved the way they “ought” to.

Albert Fishlow’s paper “Lessons from the Past: Capital Markets During the 19th Century and the Interwar Period” includes a series for the net resources transferred from creditor to debtor countries from the mid-19th century up to the second world war. (That is, new investment minus interest and dividends on existing investment.) This series turns negative sometime between 1870 and 1885, and remains so through the end of the 1930s. For 50 years — the Gold Standard age of stable exchange rates, flexible prices, free trade and unregulated capital flows — the poor countries were consistently transferring resources to the rich ones. In other words, what Yglesias sees as the “fundamental weirdness” of the current period is the normal historical pattern. Or as Fishlow puts it:

Despite the rapid prewar growth in the stock of foreign capital, at an annual average rate of 4.6 percent between 1870 and 1913, foreign investment did not fully keep up with the reflow of income from interest and dividends. Return income flowed at a rate close to 5 percent a year on outstanding balances, meaning that on average creditors transferred no resources to debtor nations over the period. … Such an aggregate result casts doubt on the conventional description of the regular debt cycle that capital recipients were supposed to experience. … most [developing] countries experienced only brief periods of import surplus [i.e. current account deficit]. For most of the time they were compelled to export more than they imported in order to meet their debt payments.
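
The arithmetic in that passage is worth spelling out. A stylized sketch, applying the two rates Fishlow quotes to a hypothetical stock of foreign capital:

```python
# If the stock of foreign investment grows about 4.6% a year while interest and dividends
# run about 5% of the outstanding stock, new lending falls short of the income reflow,
# so the net transfer of resources runs from debtor countries to creditor countries.
stock = 100.0                      # outstanding foreign capital, arbitrary units
new_investment = 0.046 * stock     # growth of the stock: new lending net of repayments
income_reflow = 0.05 * stock       # interest and dividends paid back to creditors

net_transfer_to_debtors = new_investment - income_reflow
print(f"net transfer to debtor countries: {net_transfer_to_debtors:+.1f} per 100 of capital")
# -> -0.4: on net, resources flow from the periphery to the center
```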

A similar situation existed for much of the post World War II period, especially after the secular increase in world interest rates around 1980.

There is a difference between the old pattern (which still applies to much of the global south) and the new one. Then, net-debtor poor countries  ran current account surpluses to make payments on their high-yielding liabilities to rich countries. Now, net-creditor (relatively-) poor countries run current account surpluses to accumulate low-yielding assets in rich countries. I would argue there are reasons to prefer the new pattern to the old one. But the flow of real resources is unchanged: from the periphery to the center. Meanwhile, those countries that have successfully industrialized, as scholars like Ha-Joon Chang have shown, have done so not by accessing foreign savings by connecting with the world financial system, but by keeping their own savings at home by disconnecting from it.

It seems that unregulated international finance doesn’t benevolently put the world’s collective savings to the best use for everyone, but instead channels wealth from the poor to the rich. That may not be the way things ought to be, but historically it’s pretty clearly the way things are.