Unemployment and Productivity Growth

I write here frequently about “the money view” — the idea that we need to see economic relationships as a system of money flows and money commitments, that is not reducible to the “real” production and exchange of goods and services. Seeing the money-game as a self-contained system is the first step; the next step is to ask how this system interacts with the concrete activities of production.

One way to look at this interface is through the concept of potential output, and its relationship to current expenditure, or demand. In the textbook view, there is no connection between the long-run evolution of potential output and demand. This is a natural view if you think that economic quantities have an independent material existence. First we have scarce resources, then the choice about which end to devote them to. Knut Wicksell suggests somewhere an evocative metaphor for this view of economic growth: It’s as if we had a cellar full of wine in barrels, which will improve with age. The problem of economic growth is then equivalent to choosing the optimal tradeoff between having better wine, and drinking it sooner rather than later. But whatever choice we make, all the wine is already there. Ramsey and Solow growth models, with their “golden rule” growth rate, are descriptions of this kind of problem. Aggregate demand doesn’t come into it.

From our point of view, on the other hand, production is a creative, social activity. Economic growth is not a matter of allowing an existing material process to continue operating through time, but of learning how to work together in new ways. The fundamental problem is coordination, not allocation. From this point of view, the technical conditions of production are endogenous to the organization of production, and the money payments that structure it. So it’s natural to think that aggregate expenditure could be an important factor determining the pace at which productive activity can be reorganized.

Now, whether demand actually does matter in the longer run is a hotly debated point in heterodox economics. You can find very smart Post Keynesians like Steve Fazzari arguing that it does, and equally smart Marxists like Dumenil and Levy arguing that it does not. (Amitava Dutt has a good summary; Mark Setterfield has a good recent discussion of the formal issues of incorporating demand into Kaldorian growth models.) But within our framework, at least, it is possible to ask the question.

Which brings me to this recent article in the Real World Economic Review. I don’t recommend the piece — it is not written in a way to inspire confidence. But it does make an interesting claim, that over the long run there is an inverse relationship between unemployment and labor productivity growth in the US, with average labor productivity growth equal to 8 minus the unemployment rate. This is consistent with the idea that demand conditions influence productivity growth, most obviously because pressures to economize on labor will be greater when labor is scarce.

A strong empirical regularity like this would be interesting, if it was real. But is it?

Here is one obvious test (a bit more sensible to me than the approach in the RWER article). The figure below shows the average US unemployment rate and real growth rate of hourly labor productivity for rolling ten-year windows.

It’s not exactly “the rule of 8” — the slope of the regression line is just a bit greater than -0.5, rather than -1. But it is still a striking relationship. Ten-year periods with high growth of productivity invariably also have low unemployment rates; periods of high average unemployment are invariably also periods of slow productivity growth.
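For concreteness, here is a minimal sketch of the kind of rolling-window comparison described above. It assumes you already have annual series for the unemployment rate and for productivity growth loaded as pandas Series; the function and variable names are mine, not anything from the underlying analysis.

```python
# Sketch of the rolling-window comparison described in the text (illustrative only).
# Assumes two annual pandas Series: `unemp` (unemployment rate, percent) and
# `prod_growth` (growth of real output per hour, percent per year).
import numpy as np
import pandas as pd

def rolling_averages(unemp: pd.Series, prod_growth: pd.Series, window: int = 10) -> pd.DataFrame:
    """Average unemployment and productivity growth over rolling windows of `window` years."""
    df = pd.DataFrame({"unemp": unemp, "prod_growth": prod_growth}).dropna()
    return df.rolling(window).mean().dropna()

def regression_slope(rolled: pd.DataFrame) -> float:
    """OLS slope of average productivity growth on average unemployment."""
    slope, _intercept = np.polyfit(rolled["unemp"], rolled["prod_growth"], 1)
    return slope

# Usage, once the two series are loaded:
# rolled = rolling_averages(unemp, prod_growth)
# rolled.plot.scatter(x="unemp", y="prod_growth")
# print(regression_slope(rolled))   # the post reports a slope a bit above -0.5
```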

Of course these are overlapping periods, so this tells us much less than it would if they were independent observations. But the association of above-average productivity growth with below-average unemployment is indeed a historical fact, at least for the postwar US. (As it turns out, this relationship is not present in most other advanced countries — see below.) So what could it mean?

1. It might mean nothing. We really only have four periods here — two high-productivity-growth, low-unemployment periods, one in the 1950s-1960s and one in the 1990s; and two low-productivity-growth, high-unemployment periods, one in the 1970s-1980s and one in the past decade or so. It’s quite possible these two phenomena have separate causes that just happened to shake out this way. It’s also possible that a common factor is responsible for both — a new technology-induced investment boom is the obvious candidate.

2. It might be that high productivity growth leads to lower unemployment. The story here I guess would be the Fed responding to a positive supply shock. I don’t find this very plausible.

3. It might be that low unemployment, or strong demand in general, fosters faster productivity growth. This is the most interesting for our purposes. I can think of several versions of this story. First is the increasing-returns story that originally motivated Verdoorn’s law. High demand allows firms to produce further out on declining cost curves. Second, low unemployment could encourage firms to adopt more labor-saving production techniques. Third, low unemployment might be associated with more rapid movement of labor from lower-productivity to higher-productivity activities. (In other words, the relationship might be due to lower visible unemployment being associated with lower disguised unemployment.) Or fourth, low unemployment might be associated with a relaxing of the constraints that normally limit productivity-boosting investment — demand itself, and also financing. In any of these stories, the figure above shows a causal relationship running from the x-axis to the y-axis.

One scatterplot of course hardly proves anything. I’m really just posing the question. Still, this one figure is enough to establish one thing: A positive relationship between unemployment and labor productivity has not been the dominant influence on either variable in the postwar US. In particular, this is strong evidence against the idea of technological unemployment, beloved by everyone from Jeremy Rifkin to Lawrence Summers. (At least as far as this period is concerned — the future could be different.) To tell a story in which paid labor is progressively displaced by machines, you need a positive relationship between labor productivity and unemployment. But historically, high unemployment has been associated with slower growth in labor productivity, not faster. So we can say with confidence that whatever has driven changes in unemployment over the past 75 years, it has not been changes in the pace at which human labor is replaced by technology.

The negative relationship between unemployment and productivity growth, whatever it means, turns out to be almost unique to the US. Of the dozen or so other countries I looked at, the only one with a similar pattern is Japan, and even there the relationship is weaker. I honestly don’t know what to make of this. But if you’re interested, the other scatterplots are below the fold.

Note: Labor productivity is based on real GDP per hour, from the BLS International Labor Comparisons project; unemployment is the harmonized unemployment rate for all persons from the OECD Main Economic Indicators database. I used these because they are (supposed to be) defined consistently across countries and were available on FRED. Because the international data covers shorter periods than the US data does, I used 8-year windows instead of 10-year windows.

German Unification as Proto-Europe?

Here is the opening passage of a pamphlet published by the German central bank in 1900, on the 25th anniversary of its founding:

The newly established German Empire found in the organization of the coinage, paper money, and bank-note systems, an urgent and difficult task. Probably in no department of the entire national economic system were the disadvantages of the political disunion of Germany so clear…; in no economic department were greater advantages to be expected from a political union. 

Although the customs union (Zollverein) had happily united the greater part of Germany in a commercial union, similar attempts in monetary affairs had met with but modest success, and were absolutely fruitless in banking.  

The inconvenience most complained of was the multiplicity and variety of the different coinage systems (seven in all) in the different states, also the want of an adequate, regulated circulation of gold coins.

This is quoted in Goodhart’s Evolution of Central Banks. An additional motivation for establishing a German central bank, Goodhart notes, was to organize the national payment system. Before then, there had been no Germany-wide clearinghouse for interbank settlement. When the Reichsbank (as it then was) opened branches throughout Germany, the purpose was not only to manage the money supply but to offer a new facility for long-distance payments.

(Goodhart’s larger themes are first, that central bank-like institutions develop organically within banking systems, whether or not they are established by law. And second, that the fusion of payment and intermediation functions that defines banks is a historical accident; banks as we know them needn’t, and probably shouldn’t, be features of future financial systems. I am convinced on the first point, not so much on the second.)

What this passage makes me wonder is: Has anyone ever written about European integration in the light of German unification in the late 19th century? The claim in the Reichsbank pamphlet that customs union was the easy first step, and that monetary union followed only later and with difficulty, certainly suggests some parallels. So does the suggestion that monetary union was the biggest economic benefit of political union. It would be interesting to ask, what were the concrete problems that monetary union was understood to be solving? And how did it fit into the larger political agenda of German unification?

Of course there are fundamental differences — most importantly that German unification took place under the aegis of a sovereign political authority, whereas the central political-economic fact about Europe is that the monetary authority stands above the various national governments. But it still seems like the comparison could be illuminating.  

The Nonexistent Rise in Household Consumption

Did you know that about 10 percent of private consumption in the US consists of Medicare and Medicaid? Despite the fact that these are payments by the government to health care providers, they are counted by the BEA both as income and consumption spending for households.

I bet you didn’t know that. I bet plenty of people who work with the national income accounts for a living don’t know that. I know I didn’t know it, until I read this new working paper by Barry Cynamon and Steve Fazzari.

I’ve often thought that the best macroeconomics is just accounting plus history. This paper is an accounting tour de force. What they’ve done is go through the national accounts and separate out the components of household income and expenditure that represent cashflows received and made by households, from everything else.

Most people don’t realize how much of what goes into the headline measures of household income and household consumption does not actually correspond to any flow of money to or from households. In 2011 (the last year covered by the paper), personal consumption expenditure was given as just over $10 trillion. But of that, only about $7.5 trillion was money spent by households on goods and services. Of the rest, as of 2011:

– $1.2 trillion was imputed rents on owner-occupied housing. The national income and product accounts treat housing on the principle that the real output of housing should be the same whether or not the person living in the house happens to be the same person who owns it. So for owner-occupied housing, they impute an “owner equivalent rent” that the resident is implicitly paying to themselves for use of the house. This sounds reasonable, but it conflicts with another principle of the national accounts, which is that only market transactions are recorded. It also creates measurement problems since most owned residences are single-family homes, for which there isn’t a big rental market, so the BEA has to resort to various procedures to estimate what the rent should be. One result of the procedures they use is that a rise in home prices, as in the 2000s, shows up as a rise in consumption spending on imputed rents even if no additional dollars change hands.

– $970 billion was Medicare and Medicaid payments; another $600 billion was employer purchases of group health insurance. The official measures of household consumption are constructed as if all spending on health benefits took the form of cash payments to households, which they then chose to spend on health care. This isn’t entirely crazy as applied to employer health benefits, since presumably workers do have some say in how much of their compensation takes the form of cash vs. health benefits; though one wouldn’t want to push that assumption too far. But it’s harder to justify for public health benefits. And, justifiable or not, it means the common habit of referring to personal consumption expenditure as “private” consumption needs a large asterisk.

– $250 billion was imputed bank services. The BEA assumes that people accept below-market interest on bank deposits only as a way of purchasing some equivalent service in return. So the difference between the interest households actually receive on their deposits and what they would receive at some benchmark rate is counted as consumption of banking services.

– $400 billion in consumption by nonprofits. Nonprofits are grouped with the household sector in the national accounts. This is not necessarily unreasonable, but it creates confusion when people assume the household sector refers only to what we normally think of as households, or when people try to match up the aggregate data with surveys or other individual-level data.

Take these items, plus a bunch of smaller ones, and you have over one-quarter of reported household consumption that does not correspond to what we normally think of as consumption: market purchases of goods and services to be used by the buyer.
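To make the accounting concrete, here is the arithmetic on the rounded 2011 figures quoted above. The numbers are the ones in the text (in trillions of dollars); everything else is just addition.

```python
# Rounded 2011 figures from the text, $ trillions. Illustrative arithmetic only;
# the actual Cynamon-Fazzari adjustments are more detailed than this.
pce_total = 10.0   # reported personal consumption expenditure
non_cash_items = {
    "imputed rent on owner-occupied housing": 1.2,
    "Medicare and Medicaid":                  0.97,
    "employer group health insurance":        0.60,
    "imputed bank services":                  0.25,
    "consumption by nonprofits":              0.40,
}

non_cash_total = sum(non_cash_items.values())   # about 3.4 trillion
share_of_pce = non_cash_total / pce_total       # about a third: "over one-quarter" in the text

print(f"listed non-cash items: {non_cash_total:.2f} trillion")
print(f"share of reported consumption: {share_of_pce:.0%}")
```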

The adjustments are even more interesting when you look at trends over time. Medicare and Medicaid don’t just represent close to 10 percent of reported “private” consumption; they represent over three quarters of the increase in consumption over the past 50 years. More broadly, if we limit “consumption” to purchases by households, the long term rise in household consumption — taken for granted by nearly everyone, heterodox or mainstream — disappears.

By the official measure, personal consumption has risen from around 60 percent of GDP in the 1950s, 60s and 70s, to close to 70 percent today. While there are great differences in stories about why this increase has taken place, almost everyone takes for granted that it has. But if you look at Cynamon and Fazzari’s measure, which reflects only market purchases by households themselves, there is no such trend. Consumption declines steadily from 55 percent of GDP in 1950 to around 47 percent today. In the earlier part of this period, imputed rents for owner-occupied housing are by far the biggest part of the difference; but in more recent years third-party medical expenditures have become more important. Just removing public health care spending from household consumption, as shown in the pale red line in the figure, is enough to change a 9 point rise in the consumption share of GDP into a 2 point rise. In other words, around 80 percent of the long-term rise in household consumption actually consists of public spending on health care.

In our “Fisher dynamics” paper, Arjun Jayadev and I showed that the rise in debt-income ratios for the household sector is not due to any increase in household borrowing, but can be entirely explained by higher interest rates relative to income growth and inflation. For that paper, we wanted to adjust reported income in the way that Fazzari and Cynamon do here, but we didn’t make a serious effort at it. Now with their data, we can see that not only does the rise in household debt have nothing to do with any household decisions, neither does the rise in consumption. What’s actually happened over recent decades is that household consumption as a share of income has remained roughly constant. Meanwhile, on the one hand disinflation and high interest rates have increased debt-income ratios, and on the other hand increased public health care spending and, in the 2000s, high home prices have increased reported household consumption. But these two trends have nothing to do with each other, or with any choices made by households.

There’s a common trope in left and heterodox circles that macroeconomic developments in recent decades have been shaped by “financialization.” In particular, it’s often argued that the development of new financial markets and instruments for consumer credit has allowed households to choose higher levels of consumption relative to income than they otherwise would. This is not true. Rising debt over the past 30 years is entirely a matter of disinflation and higher interest rates; there has been no long run increase in borrowing. Meanwhile, rising consumption really consists of increased non-market activity — direct provision of housing services through owner-occupied housing, and public provision of health services. This is if anything a kind of anti-financialization.

The Fazzari and Cynamon paper has radical implications, despite its moderate tone. It’s the best kind of macroeconomics. No models. No econometrics. Just read the damn tables, and think about what the numbers mean.

Liquidity Preference and Solidity Preference in the 19th Century

So I’ve been reading Homer and Sylla’s History of Interest Rates. One of the many fascinating things I’ve learned is that in the market for federal debt, what we today call an inverted yield curve was at one time the norm.

From the book:

Three small loans floated in 1820–1821, principally to permit the continued redemption of high rate war loans, provide an interesting clue to investor preference… These were: 

$4.7 million “5s of 1820,” redeemable in 1832; sold at 100 = 5%.
“6s of 1820,” redeemable at pleasure of United States; sold at 102 = 5.88%.
“5s of 1821,” redeemable in 1835; sold at 105 1/8 = 4.50%, and at 108 = 4.25%.

The yield was highest for the issue with early redemption risk and much lower for those with later redemption risks.

Nineteenth century government bonds were a bit different from modern bonds, in that the principal was repaid at the option of the borrower; repayment was usually not permitted until a certain date. [1] They were also sold with a fixed yield in terms of face value — that’s what the “5” and “6” refer to — but the actual yield depended on the discount or premium they were sold at. The important thing for our purposes is that the further away the earliest possible date of repayment was, the lower the interest rate — the opposite of the modern term premium. That’s what the passage above is saying.
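As a rough check on how those quoted yields relate to the prices, here is the standard approximation for yield to earliest redemption (coupon plus amortized premium or discount, divided by the average of price and face value). This is not necessarily the method Homer and Sylla used, but it reproduces the quoted figures fairly closely.

```python
from typing import Optional

def approx_yield(coupon: float, price: float, years_to_redemption: Optional[float]) -> float:
    """Approximate yield in percent, per $100 of face value.
    If there is no fixed redemption date (redeemable at pleasure), fall back to current yield."""
    if years_to_redemption is None:
        return 100 * coupon / price
    amortization = (100 - price) / years_to_redemption
    return 100 * (coupon + amortization) / ((100 + price) / 2)

print(approx_yield(5, 100.0, 12))      # "5s of 1820", redeemable 1832: about 5.0
print(approx_yield(6, 102.0, None))    # "6s of 1820", at pleasure: about 5.9
print(approx_yield(5, 105.125, 14))    # "5s of 1821", redeemable 1835: about 4.5
print(approx_yield(6, 104.45, 17))     # 1864 "6s", redeemable 1881: about 5.6
```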

The pattern isn’t limited to the 1820-21 bonds, either; it seems to exist through most of the 19th century, at least for the US. It’s the same with the massive borrowing during the Civil War:

In 1864, although the war was approaching its end, it had only been half financed. The Treasury was able to sell a large volume of bonds, but not at such favorable terms as the market price of its seasoned issues might suggest. Early in the year another $100 million of the 5–20s [bonds with a minimum maturity of 5 years and a maximum of 20] were sold and then a new longer issue was sold as follows: 

1864—$75 million “6s”  redeemable in 1881, tax-exempt; sold at 104.45 = 5.60%. 

The Treasury soon made an attempt to sell 5s, which met with a lukewarm reception. In order to attract investors to the lower rate the Treasury extended the term to redemption from five to ten years and the maturity from twenty to forty years

1864—$73 million “5%, 10–40s of 1864,” redeemable 1874, due in 1904, tax-exempt; sold at 100 = 5%.

Isn’t that striking? The Treasury couldn’t get investors to buy its shorter bonds at an acceptable rate, so they had to issue longer bonds instead. You wouldn’t see that story today.

The same pattern continues through the 1870s, with the new loans issued to refinance the Civil War debt. The first issue of bonds, redeemable in five to ten years, sold at an interest rate of 5%; the next issue, redeemable in 13-15 years, sold at 4.5%; and the last issue, redeemable in 27-29 years, sold at 4%. And it doesn’t seem like this is about expectations of a change in rates, as with a modern inverted yield curve. Investors simply were more worried about being stuck with uninvestable cash than about being stuck with unsaleable securities. This is a case where “solidity preference” dominates liquidity preference.

One possible way of explaining this is in terms of Axel Leijonhufvud’s account of the yield curve.

The conventional story for why long loans normally have higher interest rates than short ones is that longer loans impose greater risks on lenders. They may not be able to convert the loan to cash if they need to make some payment before it matures, and they may suffer a capital loss if interest rates change during the life of the loan. But this can’t be the whole story, because short loans create the symmetric risk of not knowing what alternative asset will be available when the loan matures. In the one case, the lender risks a capital loss, but in the other case they risk getting a lower income. Why is “capital uncertainty” a greater concern than “income uncertainty”?

The answer, Leijonhufvud suggests, lies in

Keynes’ … “Vision” of a world in which currently active households must, directly or indirectly, hold their net worth in the form of titles to streams that run beyond their consumption horizon. The duration of the relevant consumption plan is limited by the sad fact that “in the Long Run, we are all dead.” But the great bulk of the “Fixed Capital of the modern world” is very long-term in nature and is thus destined to survive the generation which now owns it. This is the basis for the wealth effect of changes in asset values.

The interesting point about this interpretation of the wealth effect is that it also provides a price-theoretical basis for Keynes’ Liquidity Preference theory. … Keynes’ (as well as Hicks’) statement of this hypothesis has been repeatedly criticized for not providing any rationale for the presumption that the system as a whole wants to shed “capital uncertainty” rather than “income uncertainty.” But Keynes’ mortal consumers cannot hold land, buildings, corporate equities, British consols, or other permanent income sources “to maturity.” When the representative, risk-averting transactor is nonetheless induced by the productivity of roundabout processes to invest his savings in such income sources, he must be resigned to suffer capital uncertainty. Forward markets will therefore generally show what Hicks called a “constitutional weakness” on the demand side.

I would prefer not to express this in terms of households’ consumption plans. And I would emphasize that the problem with wealth in the form of long-lived production processes is not just that it produces income far into the future, but that wealth in this form is always in danger of losing its character as money. Once capital is embodied in a particular production process and the organization that carries it out, it tends to evolve into the means of carrying out that organization’s intrinsic purposes, instead of the capital’s own self-expansion. But for this purpose, the difference doesn’t matter; either way, the problem only arises once you have, as Leijonhufvud puts it, “a system ‘tempted’ by the profitability of long processes to carry an asset stock which turns over more slowly than [wealth owners] would otherwise want.”

The temptation of long-lived production processes is inescapable in modern economies, and explains the constant search for liquidity. But in the pre-industrial United States? I don’t think so. Long-lived means of production were much less important, and to the extent they did exist, they weren’t an outlet for money-capital. Capital’s role in production was to finance stocks of raw materials, goods in process and inventories. There was no such thing, I don’t think, as investment by capitalists in long-lived capital goods. And even land — the long-lived asset in most settings — was not really an option, since it was abundant. The early United States is something like Samuelson’s consumption-loan world, where there is no good way to convert command over current goods into future production. [2] So there is excess demand rather than excess supply for long-lasting sources of income.

The switch over to positive term premiums comes early in the 20th century. By the 1920s, short-term loans in the New York market consistently have lower rates than corporate bonds, and 3-month Treasury bills have rates below longer bonds. Of course the organization of financial markets changed quite a lot in this period too, so one wouldn’t want to read too much into this timing. But it is at least consistent with the Leijonhufvud story. Liquidity preference becomes dominant in financial markets only once there has been a decisive shift toward industrial production by long-lived firms using capital-intensive techniques, and once claims on those firms have become a viable outlet for money-capital.

* * *

A few other interesting points about 19th century US interest rates. First, they were remarkably stable, at least before the 1870s. (This fits with the historical material on interest rates that Merijn Knibbe has been presenting in his excellent posts at Real World Economics Review.)

Second, there’s no sign of a Fisher equation. Nominal interest rates do not respond to changes in the price level, at all. Homer and Sylla mention that in earlier editions of the book, which dealt less with the 20th century, the concept of a “real” interest rate was not even mentioned.
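For reference, the relation in question, in standard textbook notation rather than anything from Homer and Sylla, is

$$ i \;\approx\; r + \pi^{e}, $$

with $i$ the nominal interest rate, $r$ the underlying “real” rate, and $\pi^{e}$ expected inflation. What the 19th century data lack is any sign of $i$ moving with inflation.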

As you can see from this graph, none of the major inflations or deflations between 1850 and 1960 had any effect on nominal interest rates. The idea that there is a fundamentals-determined “real” interest rate while the nominal rate adjusts in response to changes in the price level, clearly has no relevance outside the past 50 years. (Whether it describes the experience of the past 50 years either is a question for another time.)

Finally, there is no sign of “crowding out” of private by public borrowing. It is true that the federal government did have to pay somewhat higher rates during the periods of heavy borrowing (and of course also political uncertainty) in the War of 1812 and the Civil War. But rates for other borrowers didn’t budge. And on the other hand, the surpluses that resulted in the redemption of the entire debt in the 1830s didn’t deliver lower rates for other borrowers. Homer and Sylla:

Boston yields were about the same in 1835, when the federal debt was wiped out, as they were in 1830; this reinforces the view that there was little change in going rates of long-term interest during this five-year period of debt redemption.

If government borrowing really raises rates for private borrowers, you ought to see it here, given the absence of a central bank for most of this period and the enormous scale of federal borrowing during the Civil War. But you don’t.

[1] It seems that most, though not all, bonds were repaid at the earliest possible redemption date, so it is reasonably similar to the maturity of a modern bond.

[2] Slaves are the big exception. So the obvious test for the argument I am making here would be to find the modern pattern of term premiums in the South. Unfortunately, Homer and Sylla aren’t any help on this — it seems the only local bond markets in this period were in New England.

Borrowing ≠ Debt

There’s a common shorthand that makes “debt” and “borrowing” interchangeable. The question of why an economic unit had rising debt over some period, is treated as equivalent to the question of why it was borrowing more over that period, or why its expenditure was higher relative to its income. This is a natural way of talking, but it isn’t really correct.

The point of Arjun’s and my paper on debt dynamics was to show that for household debt, borrowing and changes in debt don’t line up well at all. While some periods of rising household leverage — like the housing bubble of the 2000s — were also periods of high household borrowing, only a small part of longer-term changes in household debt can be explained this way. This is because interest, income growth and inflation rates also affect debt-income ratios, and movements in these other variables often swamp any change in household borrowing.

As far as I know, we were the first people to make this argument in a systematic way for household debt. For government debt, it’s a bit better known — but only a bit. People like Willem Buiter or Jamie Galbraith do point out that the fall in US debt after World War II had much more to do with growth and inflation than with large primary surpluses. You can find the argument more fully developed for the US in papers by Hall and Sargent or Aizenman and Marion, and for a large sample of countries by Abbas et al., which I’ve discussed here before. But while many of the people making it are hardly marginal, the point that government borrowing and government debt are not equivalent, or even always closely linked, hasn’t really made it into the larger conversation. It’s still common to find even very smart people saying things like this:

We didn’t have anything you could call a deficit problem until 1980. We then saw rising debt under Reagan-Bush; falling debt under Clinton; rising under Bush II; and a sharp rise in the aftermath of the financial crisis. This is not a bipartisan problem of runaway deficits! 

Note how the terms “deficits” and “rising debt” are used interchangeably; and though the text mostly says deficits, the chart next to this passage shows the ratio of debt to GDP.

What we have here is a kind of morality tale where responsible policy — keeping government spending in line with revenues — is rewarded with falling debt; while irresponsible policy — deficits! — gets its just deserts in the form of rising debt ratios. It’s a seductive story, in part because it does have an element of truth. But it’s mostly false, and misleading. More precisely, it’s about one quarter true and three quarters false.

Here’s the same graph of federal debt since World War II, showing the annual change in the debt ratio (red bars) and the primary deficit (black bars), both measured as a fraction of GDP. (The primary deficit is the difference between spending other than interest payments and revenue; it’s the standard measure of the difference between current expenditure and current revenue.) So what do we see?

It is true that the federal government mostly ran primary surpluses from the end of the war until 1980, and more generally, that periods of surpluses were mostly periods of falling debt, and conversely. So it might seem that using “deficits” and “rising debt” interchangeably, while not strictly correct, doesn’t distort the picture in any major way. But it does! Look more carefully at the 1970s and 1980s — the black bars look very similar, don’t they? In fact, deficits under Reagan were hardly larger than under Ford and Carter — a cumulative 6.2 percent of GDP over 1982-1986, compared with 5.6 percent of GDP over 1975-1978. Yet the debt-GDP ratio rose by just a single point (from 24 to 25) in the 1970s episode, but by 8 points (from 32 to 40) under Reagan. Why did debt increase in the 1980s but not in the 1970s? Because in the 1980s the interest rate on federal debt was well above the economy’s growth rate, while in the 1970s it was well below it. In that precise sense, if debt is a problem it very much is a bipartisan one; Volcker was the appointee of both Carter and Reagan.

Here’s the same data by decades, and for the pre- and post-1980 periods and some politically salient subperiods. The third column shows the part of debt changes not explained by the primary balance. This corresponds to what Arjun and I call “Fisher dynamics” — the contribution of growth, inflation and interest rates to changes in leverage. [1] The units are percent of GDP.
Totals by Decade

           Primary Deficit   Change in Debt   Residual Debt Change
1950s           -8.6             -29.6              -20.9
1960s           -7.3             -17.7              -10.4
1970s            2.8              -1.7               -4.6
1980s            3.3              16.0               12.7
1990s          -15.9              -7.3                8.6
2000s           23.7              27.9                4.2

Annual averages

                Primary Deficit   Change in Debt   Residual Debt Change
1947-1980           -0.7              -2.0               -1.2
1981-2011            0.1               1.3                1.2
   1981-1992         0.3               1.8                1.5
   1993-2000        -2.7              -1.6                1.1
   2001-2008        -0.1               0.8                0.9
   2009-2011         7.3               8.9                1.6

Here again, we see that while the growth of debt looks very different between the 1970s and 1980s, the behavior of deficits does not. Despite Reagan’s tax cuts and military buildup, the overall relationship between government revenues and expenditures was essentially the same in the two decades. Practically all of the acceleration in debt growth in the 1980s compared with the 1970s is due to higher interest rates and lower inflation.
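To see the mechanics, here is a minimal sketch of the law of motion for the debt-GDP ratio. The interest, growth and inflation rates below are placeholders chosen for illustration, not estimates from the data; the point is only that the same primary deficits produce very different debt paths.

```python
# Law of motion for the debt-GDP ratio. All rates and starting ratios below are
# illustrative placeholders, not estimates.

def debt_path(d0, primary_deficits, i, g, pi):
    """Evolve the debt-GDP ratio from d0 (fraction of GDP), given a list of annual
    primary deficits (fractions of GDP), nominal interest rate i, real growth g,
    and inflation pi."""
    d, path = d0, [d0]
    for p in primary_deficits:
        d = d * (1 + i) / ((1 + g) * (1 + pi)) + p
        path.append(d)
    return path

deficits = [0.012] * 5   # the same modest primary deficit every year

# 1970s-style: nominal interest rate well below growth plus inflation
print(debt_path(0.25, deficits, i=0.06, g=0.03, pi=0.07)[-1])   # ends around 0.26: roughly flat
# 1980s-style: nominal interest rate well above growth plus inflation
print(debt_path(0.32, deficits, i=0.10, g=0.03, pi=0.04)[-1])   # ends around 0.43: up sharply
```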

Over the longer run, it is true that there is a shift from primary surpluses before 1980 to primary deficits afterward. (This is different from our finding for households, where borrowing actually fell after 1980.) But the change in fiscal balances is less than 25 percent of the change in debt growth. In other words, the shift toward deficit spending, while real, only accounts for a quarter of the change in the trajectory of the federal debt. This is why I said above that the morality-tale version of the rising debt story is a quarter right and three quarters wrong.
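Using the annual averages from the table above, the arithmetic behind that “one quarter” is simple:

```python
# Annual averages from the table above, percent of GDP per year
primary_deficit_pre1980, primary_deficit_post1980 = -0.7, 0.1
debt_change_pre1980, debt_change_post1980 = -2.0, 1.3

shift_in_fiscal_balance = primary_deficit_post1980 - primary_deficit_pre1980   # 0.8
shift_in_debt_growth = debt_change_post1980 - debt_change_pre1980              # 3.3

print(shift_in_fiscal_balance / shift_in_debt_growth)   # about 0.24, i.e. roughly a quarter
```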

By the way, this is strikingly consistent with the results of the big IMF study on the evolution of government debt ratios around the world. Looking at 60 episodes of large increases in debt-GDP ratios over the 20th century, they find that only about a third of the average increase is accounted for by primary deficits. [2] For episodes of falling debt, the role of primary surpluses is somewhat larger, especially in Europe, but if we focus on the postwar decades specifically then, again, primary surpluses accounted for only about a third of the average fall. So while the link between government debt and deficits has been a bit weaker in the US than elsewhere, it’s quite weak in general.

So. Why should we care?

Most obviously, you should care if you’re worried about government debt. Now maybe you shouldn’t worry. But if you do think debt is a problem, then you are looking in the wrong place if you think holding down government borrowing is the solution. What matters is holding down i – (g + π) — that is, keeping interest rates low relative to growth and inflation. And while higher growth may not be within reach of policy, higher inflation and lower interest rates certainly are.
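To put the same point in symbols (standard debt-dynamics notation, not anything from the original post): write $d$ for the debt-GDP ratio, $p$ for the primary deficit as a share of GDP, $i$ for the nominal interest rate, $g$ for real growth and $\pi$ for inflation. Then, approximately,

$$ \Delta d \;\approx\; p + d\,(i - g - \pi), $$

so the debt ratio can fall even with persistent primary deficits, provided $i - (g + \pi)$ is negative enough.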

Even if you insist on worrying not just about government debt but about government borrowing, it’s important to note that the cumulative deficits of 2009-2011, at 22 percent of GDP, were exactly equal to the cumulative surpluses over the Clinton years, and only slightly smaller than the cumulative primary surpluses over the whole period 1947-1979. So if for whatever reason you want to keep borrowing down, policies to avoid deep recessions are more important than policies to control spending and raise revenue.

More broadly, I keep harping on this because I think the assumption that the path of government debt is the result of government borrowing choices is symptomatic of a larger failure to think clearly about this stuff. Most practically, the idea that the long-run “sustainability” of the debt requires efforts to control government borrowing — an idea which goes unquestioned even at the far liberal-Keynesian end of the policy spectrum — is a serious fetter on proposals for more stimulus in the short run, and is a convenient justification for all sorts of appalling ideas. And in general, I just reject the whole idea of responsibility. It’s ideology in the strict sense — treating the conditions of existence of the dominant class as if they were natural law. Keynes was right to see this tendency to view all of life through a financial lens — to see saving and accumulating as the highest goals in life, to think we should forego real goods to improve our financial position — as “one of those semicriminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease.”

On a methodological level, I see reframing the question of the evolution of debt in terms of the independent contributions of primary deficits, growth, inflation and interest rates as part of a larger effort to think about the economy in historical, dynamic terms, rather than in terms of equilibrium. But we’ll save that thought for another time.

The important point is that, historically, changes in government borrowing have not been the main factor in the evolution of debt-GDP ratios. Acknowledging that fact should be the price of admission to any serious discussion of fiscal policy.

[1] Strictly speaking, debt ratios can change for reasons other than either the primary balance or Fisher dynamics, such as defaults or the effects of exchange rate movements on foreign-currency-denominated debt. But none of these apply to the postwar US.

[2] The picture is a bit different from the US, since adverse exchange-rate movements are quite important in many of these episodes. But it remains true that high deficits are the main factor in only a minority of large increases in debt-GDP ratios.

Graeber Cycles and the Wicksellian Judgment Day

So it’s halfway through the semester, and I’m looking over the midterms. Good news: Learning has taken place.

One of the things you hope students learn in a course like this is that money consists of three things: demand deposits (checking accounts and the like), currency and bank reserves. The first is a liability of private banks, the latter two are liabilities of the central bank. That money is always someone’s liability — a debt — is often a hard thing for students to get their heads around, so one can end up teaching it a bit catechistically. Balance sheets, with their absolute (except for the exceptions) and seemingly arbitrary rules, can feel a bit like religious formula. On this test, the question about the definition of money was one of the few that didn’t require students to think.
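As a toy illustration of the balance-sheet point (the entries are purely illustrative, not taken from any dataset):

```python
# Toy balance sheets: every item we count as "money" is some institution's liability.
balance_sheets = {
    "central bank": {
        "assets": ["government bonds", "loans to banks"],
        "liabilities": ["currency", "bank reserves"],
    },
    "private bank": {
        "assets": ["reserves", "loans to households and firms"],
        "liabilities": ["demand deposits"],
    },
}

money_items = {"currency", "bank reserves", "demand deposits"}
issuers = {item: name
           for name, sheet in balance_sheets.items()
           for item in sheet["liabilities"] if item in money_items}
print(issuers)   # {'currency': 'central bank', 'bank reserves': 'central bank', 'demand deposits': 'private bank'}
```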

But when you do think about it, it’s a very strange thing. What we teach as just a fact about the world, is really the product of — or rather, a moment in — a very specific historical evolution. We are lumping together two very different kinds of “money.” Currency looks like classical money, like gold; but demand deposits do not. The most obvious difference, at least in the context of macroeconomics, is that one is exogenous (or set by policy) and the other endogenous. We paper this over by talking about reserve requirements, which allow the central bank to set “the” money supply to determine “the” interest rate. But everyone knows that reserve requirements are a dead letter and have been for decades, probably. While monetarists like Nick Rowe insist that there’s something special about currency — they have to, given the logic of their theories — in the real world the link between the “money” issued by central banks and the “money” that matters for the economy has attenuated to imperceptible gossamer, if it hasn’t been severed entirely. The best explanation for how conventional monetary policy works today is pure convention: With the supply of money entirely in the hands of private banks, policy is effective only because market participants expect it to be effective.

In other words, central banks today are like the Chinese emperor Wang Wei-Shao in the mid-1960s film Genghis Khan:

One of the film’s early scenes shows the exquisitely attired emperor, calligraphy brush in hand, elegantly composing a poem. With an ethereal self-assurance born of unquestioning confidence in the divinely ordained course of worldly affairs, he explains that the poem’s purpose is to express his displeasure at the Mongol barbarians who have lately been creating a disturbance on the empire’s western frontier, and, by so doing, cause them to desist.

Today expressions of intentions by leaders of the world’s major central banks typically have immediate repercussions in financial markets… Central bankers’ public utterances … regularly move prices and yields in the financial markets, and these financial variables in turn affect non-financial economic activity… Indeed, a widely shared opinion today is that central bank need not actually do anything. … 

In truth the ability of central banks to affect the evolution of prices and output … [is] something of a mystery. … Each [explanation of their influence] … turns out to depend on one or another of a series of by now familiar fictions: households and firms need currency to purchase goods and services; banks can issue only reserve-bearing liabilities; no non-bank financial institutions create credit; and so on. 

… at a practical level, there is today [1999] little doubt that a country’s monetary policy not only can but does largely determine the evolution of its price level…, and almost as little doubt that monetary policy exerts significant influence over … employment and output… Circumstances change over time, however, and when they do the fictions that once described matters adequately may no longer do so. … There may well have been a time when the might of the Chinese empire was such that the mere suggestion of willingness to use it was sufficient to make potential invaders withdraw.

What looked potential a dozen years ago is now actual, if it wasn’t already then. It’s impossible to tell any sensible macroeconomic story that hinges on the quantity of outside money. The shift in our language from money, which can be measured — something one could formulate a “quantity theory” of — to discussions of liquidity, still a noun but now not a tangible thing but a property that adheres in different assets to different degrees, is a key diagnostic. And liquidity is a result of the operations of the financial system, not a feature of the natural world or a dial that can be set by the central bank. In 1820 or 1960 or arguably even in 1990 you could tell a kind of monetarist story that had some purchase on reality. Not today. But, and this is my point: it’s not a simple before-and-after story. Because it wasn’t true in 1890 either.

David Graeber, in his magisterial Debt: The First 5,000 Years [1], describes a very long alternation between world economies based on commodity money and world economies based on credit money. (Graeber’s idiolect is money and debt; let’s use the standard terms here.) The former is anonymous, universal and disembedded, corresponds to centralized states and extensive warfare, and develops alongside those other great institutions for separating people from their social contexts, slavery and bureaucracy. [2] Credit, by contrast, is personal, particular, and unavoidably connected with specific relationships and obligations; it corresponds to decentralized, heterogeneous forms of authority. This alternation between commodity-money systems, with their transcendental, monotheistic religious-philosophical superstructures, and credit systems, with their eclectic, immanent, pantheistic superstructures, is, in my opinion, the heart of Debt. (The contrast between medieval Christianity, with its endless mediations by saints and relics and the letters of Christ’s name, and modern Christianity, with just you and the unknowable Divine, is paradigmatic.) Alternations, not cycles, since there is no theory of the transition; probably just as well.

For Graeber, the whole half-millennium from the 16th through the 20th centuries is a period of the dominion of money, a dominion only now — maybe — coming to an end. But closer to ground level, there are shorter cycles. This comes through clearly in Axel Leijonhufvud’s brilliant short essay on Wicksell’s monetary theory, which is really the reason this post exists. (h/t David Glasner, I think Ashwin at Macroeconomic Resilience.) Among a whole series of sharp observations, Leijonhufvud makes the point that the past two centuries have seen several swings between commodity (or quasi-commodity) money and credit money. In the early modern period, the age of Adam Smith, there really was a (commodity) money economy; you could talk about a quantity of money. But even by the time of Ricardo, who first properly formalized the corresponding theory, this was ceasing to be true (as Wicksell also recognized), and by the later 19th century it wasn’t true at all. The high gold standard era (1870-1914, roughly) really used gold only for settling international balances between central banks; for private transactions, it was an age not of gold but of bank-issued paper money. [3]

If I somehow found myself teaching this course in the 18th century, I’d explain that money means gold, or gold and silver. But by the mid 19th century, if you asked people about the money in their pocket, they would have pulled out paper bills, not so unlike bills of today — except they very likely would have been bills issued by private banks.

The new world of bank-created money worried classical economists like Wicksell, who, like later monetarists, were strongly committed to the idea that the overall price level depends on the amount of money in circulation. The problem is that in a world of pure credit money, it’s impossible to base a theory of the price level on the relationship between the quantity of money and the level of output, since the former is determined by the latter. Today we’ve resolved this problem by just giving up on a theory of the price level, and focusing on inflation instead. But this didn’t look like an acceptable solution before World War II. For economists then — for any reasonable person — a trajectory of the price level toward infinity was an obvious absurdity that would inevitably come to a halt, disastrously if followed too far. Whereas today, that trajectory is the precise definition of price stability, that is, stable inflation. [4] Wicksell was part of an economics profession that saw explaining the price level as a, maybe the, key task; but he had no doubt that the trend was toward an ever-diminishing role for gold, at least domestically, leaving the money supply in the hands of the banks and the price level frighteningly unmoored.

Wicksell was right. Or at least, he was right when he wrote, a bit before 1900. But a funny thing happened on the way to the world of pure credit money. Thanks to new government controls on the banking system, the trend stopped and even reversed. Leijonhufvud:

Wicksell’s “Day of Judgment” when the real demand for the reserve medium would shrink to epsilon was greatly postponed by regime changes already introduced before or shortly after his death [in 1926]. In particular, governments moved to monopolize the note issue and to impose reserve requirements on banks. The control over the banking system’s total liabilities that the monetary authorities gained in this way greatly reduced the potential for the kind of instability that preoccupied Wicksell. It also gave the Quantity Theory a new lease of life, particularly in the United States.

But although Judgment Day was postponed it was not cancelled. … The monetary anchors on which 20th century central bank operating doctrines have relied are giving way. Technical developments are driving the process on two fronts. First, “smart cards” are circumventing the governmental note monopoly; the private sector is reentering the business of supplying currency. Second, banks are under increasing competitive pressure from nonbank financial institutions providing innovative payment or liquidity services; reserve requirements have become a discriminatory tax on banks that handicap them in this competition. The pressure to eliminate reserve requirements is consequently mounting. “Reserve requirements already are becoming a dead issue.”

The second bolded sentence makes a nice point. Milton Friedman and his followers are regarded as opponents of regulation, supporters of laissez-faire, etc. But to the extent that the theory behind monetarism ever had any validity (or still has any validity in its present guises) it is precisely because of strict government control over credit creation. It’s an irony that textbooks gloss over when they treat binding reserve requirements and the money multiplier as if they were facts of nature.

(That’s more traditional textbooks. Newer textbooks replace the obsolete story that the central bank controls interest rates by setting the money supply with a new story that the central bank sets the interest rate by … look, it just does, ok? Formally this is represented by replacing the old upward sloping LM curve with a horizontal MP (for monetary policy) line at the interest rate chosen by the central bank. The old story was artificial and, with respect to recent decades, basically wrong, but it did have the virtue of recognizing that the interest rate is determined in financial markets, and that monetary policy has to operate by changing the supply of liquidity. In the up-to-date modern version, policy might just as well operate by calligraphy.)
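In symbols (my notation, just to make the contrast concrete), the two closures are

$$ \text{LM:}\quad \frac{M}{P} = L(Y,\, i),\ M \text{ exogenous} \qquad\qquad \text{MP:}\quad i = \bar{i}. $$

In the first, the interest rate has to adjust until money demand equals the given money supply; in the second, it is simply whatever the central bank announces.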

So, in the two centuries since Heinrich van Storch lectured the young Grand Dukes of Russia on the economic importance of “precious metals and fine jewels,” capitalism has gone through two full Graeber cycles, from commodity money to credit money, back to (pseudo-)commodity money and now to credit money again. It’s a process that proceeds unevenly; both the reality and the theory of money are uncomfortable hybrids of the two. But reality has advanced further toward the pure credit pole than theory has.

This time, will it make it all the way? Is Leijonhufvud right to suggest that Wicksell’s Day of Judgment was deferred but not canceled, and now is at hand?

Certainly the impotence of conventional monetary policy even before the crisis is a serious omen. And it’s hard to imagine a breakdown of the credit system that would force a return to commodity money, as in, say, medieval China. But on the other hand, it is not hard to imagine a reassertion of the public monopoly on means of payment. Indeed, when you think about it, it’s hard to understand why this monopoly was ever abandoned. The practical advantages of smart cards over paper tokens are undeniable, but there’s no reason that the cards shouldn’t have been public goods just like the tokens were. (For Graeber’s spiritual forefather Karl Polanyi, money, along with land and labor, was one of the core social institutions that could not be treated as commodities without destroying the social fabric.) The evolution of electronic money from credit cards looks contingent, not foreordained. Credit cards are only one of several widely-used electronic means of payment, and there’s no obvious reason why they and not one of the ones issued by public entities should have been adopted universally. This is, after all, an area with extremely strong network externalities, where lock-in is likely. Indeed, in the Benjamin Friedman article quoted above, he explicitly suggests that subway cards issued by the MTA could just as easily have developed into the universal means of payment. After all, the “pay community” of subway riders in New York is even more extensive than the pay community of taxpayers, and there was probably a period in the 1990s when more people had subway cards in their wallets than had credit or debit cards. What’s more, the MTA actually experimented with distributing subway card-reading machines to retailers to allow the cards to be used like, well, money. The experiment was eventually abandoned, but there doesn’t seem to be any reason why it couldn’t have succeeded; even today, with debit/credit cards much more widespread than two decades ago, many campuses find it advantageous to use college-issued smart cards as a kind of local currency.

These issues were touched on in the debate around interchange fees that rocked the econosphere a while back. (Why do checks settle at par — what I pay is exactly what you get — but debit and credit card transactions do not? Should we care?) But that discussion, while useful, could hardly resolve the deeper question: Why have we allowed means of payment to move from being a public good to a private oligopoly? In the not too distant past, if I wanted to give you some money and you wanted to give me a good or service, we didn’t have to pay any third party for permission to make the trade. Now, most of the time, we do. And the payments are not small; monetarists used to (still do?) go on about the “shoe leather costs” of holding more cash as a serious reason to worry about inflation, but no sane person could imagine those costs could come close to five percent of retail spending. And that’s not counting the inefficiencies. This is a private sales tax that we allow to be levied on almost every transaction,  just as distortionary and just as regressive as other sales taxes but without the benefit of, you know, funding public services. The more one thinks about it, the stranger it seems. Why, of all the expansions of public goods and collective provision won over the past 100 or 200 years, is this the one big one that has been rolled back? Why has this act of enclosure apparently not even been noticed, let alone debated? Why has the modern equivalent of minting coinage — the prerogative of sovereigns for as long as there’ve been any — been allowed to pass into the hands of Visa and MasterCard, with neoliberal regimes not just allowing but actively encouraging it?

The view of the mainstream — which in this case stretches well to the left of Krugman and DeLong, and on the right to everyone this side of Ron Paul — is that, whatever the causes of the crisis and however the authorities should or do respond, eventually we will return to the status quo ante. Conventional monetary policy may not be effective now, but there’s no reason to doubt that it will one day be effective again. I’m not so sure. I think people underestimate the extent to which modern central banking depended on a public monopoly on means of payment, a monopoly that arose — was established — historically, and has now been allowed to lapse. Christina Romer’s Berkeley speech on the glorious counterrevolution in macroeconomic policy may have been anti-perfectly timed not just because it was given months before the beginning of the worst recession in 70 years, but because it marked the end of the period in which the body of theory and policy that she was extolling applied.

[1] Information wants to be free. If there’s a free downloadable version of a book out there, that’s what I’m going to link to. But assuming some bank has demand deposits payable to you on the liability side of its balance sheet (i.e. you’ve got the money), this is a book you ought to buy.

[2] In pre-modern societies a slave is simply someone all of whose kinship ties have been extinguished, and is therefore attached only to the household of his/her master. They were not necessarily low in status or living standards, and they weren’t distinguished by being personally subordinated to somebody, since everyone was. And slavery certainly cannot be defined as a person being property, since, as Graeber shows, private property as we know it is simply a generalization of the law of slavery.

[3] A point also emphasized by Robert Triffin in his essential paper Myths and Realities of the So-Called Gold Standard.

[4] Which is a cautionary tale for anyone who thinks that an economic process involving some ratio diverging to infinity is by definition unsustainable. Physiocrats thought a trajectory of the farming share of the population toward zero was an absolute absurdity, and that in practice it could certainly not fall below half. They were wrong; and more generally, capitalism is not an equilibrium process. There may be seven unsustainable processes out there, or even more, but you cannot show it simply by noting that the trend of some ratio will take it outside its historic range.

UPDATE: Nick Rowe has a kind of response which, while I don’t agree with it, lays out the case against regarding money as a liability very clearly. I have a long comment there, of which the tl;dr is that we should be thinking — both logically and chronologically — of central bank money evolving from private debt contracts, not from gold currency. I don’t know if Nick read the Leijonhufvud piece I quote here, but the point that it makes is that writing 100-odd years ago, Wicksell started from exactly the position Nick takes now, and then observed how it breaks down with modern (even 1900-era modern) financial systems.

Also, the comments below are exceptionally good; anyone who read this post should definitely read the comments as well.

Trade: The New Normal Was the Old Normal Too

Matthew Yglesias is puzzled by

the fundamental weirdness of having so much savings flowing uphill from poor, fast-growing countries into the rich, mature economy of the United States. It ought to be the case that people in fast-growing countries are eager to consume more than they produce, knowing that they’ll be much richer in the near future. And it ought to be the case that people in rich countries are eager to invest in poor ones seeking higher returns. But it’s not what was happening pre-crisis and it’s not what’s been happening post-crisis.

He should have added: And it’s not what’s ever happened.

I’m not sure what “ought” is doing in this passage. If it expresses pious hope, fine. But if it’s supposed to be a claim about what’s normal or usual, as the contrast with “weirdness” would suggest, then it just ain’t so. Sure, in some very artificial textbook models savings flow from rich countries to poor ones. But it has never been the case, since the world economy came into being in the 19th century, that unregulated capital flows have behaved the way they “ought” to.

Albert Fishlow’s paper “Lessons from the Past: Capital Markets During the 19th Century and the Interwar Period” includes a series for the net resources transferred from creditor to debtor countries from the mid-19th century up to the second world war. (That is, new investment minus interest and dividends on existing investment.) This series turns negative sometime between 1870 and 1885, and remains so through the end of the 1930s. For 50 years — the Gold Standard age of stable exchange rates, flexible prices, free trade and unregulated capital flows — the poor countries were consistently transferring resources to the rich ones. In other words, what Yglesias sees as the “fundamental weirdness” of the current period is the normal historical pattern. Or as Fishlow puts it:

Despite the rapid prewar growth in the stock of foreign capital, at an annual average rate of 4.6 percent between 1870 and 1913, foreign investment did not fully keep up with the reflow of income from interest and dividends. Return income flowed at a rate close to 5 percent a year on outstanding balances, meaning that on average creditors transferred no resources to debtor nations over the period. … Such an aggregate result casts doubt on the conventional description of the regular debt cycle that capital recipients were supposed to experience. … most [developing] countries experienced only brief periods of import surplus [i.e. current account deficit]. For most of the time they were compelled to export more than they imported in order to meet their debt payments.
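A back-of-the-envelope version of Fishlow’s point, using only the two rates quoted in the passage (the stock level is an arbitrary index):

```python
# Net resource transfer to debtor countries = new lending minus income reflow.
# Uses the two rates quoted above; the stock level is an arbitrary index number.
stock_of_foreign_capital = 100.0    # outstanding balances, index
new_lending_rate = 0.046            # annual growth of the stock, 1870-1913
return_income_rate = 0.05           # interest and dividends on outstanding balances

net_transfer_to_debtors = (new_lending_rate - return_income_rate) * stock_of_foreign_capital
print(net_transfer_to_debtors)      # slightly negative: on net, resources flowed to the creditors
```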

A similar situation existed for much of the post World War II period, especially after the secular increase in world interest rates around 1980.

There is a difference between the old pattern (which still applies to much of the global south) and the new one. Then, net-debtor poor countries ran current account surpluses to make payments on their high-yielding liabilities to rich countries. Now, net-creditor (relatively) poor countries run current account surpluses to accumulate low-yielding assets in rich countries. I would argue there are reasons to prefer the new pattern to the old one. But the flow of real resources is unchanged: from the periphery to the center. Meanwhile, those countries that have successfully industrialized, as scholars like Ha-Joon Chang have shown, have done so not by accessing foreign savings by connecting with the world financial system, but by keeping their own savings at home by disconnecting from it.

It seems that unregulated international finance doesn’t benevolently put the world’s collective savings to the best use for everyone, but instead channels wealth from the poor to the rich. That may not be the way things ought to be, but historically it’s pretty clearly the way things are.