Here are two interesting articles on demand and productivity that people have recently brought to my attention.
The economic historian Gavin Wright — author of the classic account of the economic logic of the plantation — just sent me a piece he wrote a few years ago on the productivity boom of the 1990s. As he said in his email, his account of the ‘90s is very consistent with the suggestions I make in my Roosevelt paper about how strong demand might stimulate productivity growth.
In this article, Wright traces the idea that high wage regions will experience faster productivity growth back to H. J. Habakkuk’s 1962 American and British Technology in the Nineteenth Century. Then he assembles a number of lines of evidence that rapid wage growth drove the late-1990s productivity acceleration, rather than vice versa.
He points out that the widely-noted “productivity explosion” of the 1920s — from 1.5 percent a year to over 5 percent — was immediately preceded by a period of exceptionally strong wage growth: “The real price of labor in the 1920s … was between 50 and 70 percent higher than a decade earlier.”  The pressure of high wages, he suggests, encouraged the use of electricity and other general-purpose technologies, which had been available for decades but were only widely adopted in manufacturing in the 1920s. Conversely, we can see the productivity slowdown of the 1970s as, at least in part, a result of the deceleration of wage growth, which — Wright argues — was the result of institutional changes including the decline of unions, the erosion of the minimum wage and other labor regulations, and more broadly the shift back toward “‘flexible labor markets,’ reversing fifty years of labor market policy.”
Turning to the 1990s, the starting point is the sharp acceleration of productivity in the second half of the decade. This acceleration was very widely shared, including sectors like retail where historically productivity growth had been limited. The timing of this acceleration has been viewed as a puzzle, with no “smoking gun” for simultaneous productivity boosting innovations across this range of industries over a short period. But “if you look at the labor market, you can find a smoking gun in the mid-1990s. … real hourly wages finally began to rise at precisely that time, after more than two decades of decline. … Unemployment rates fell below 4 percent — levels reached only briefly in the 1960s… Should it be surprising that employers turned to labor-saving technologies at this time?” This acceleration in real wages, Wright argues, was not the result of higher productivity or other supply-side factors; rather “it is most plausibly attributed to macroeconomic conditions, when an accommodating Federal Reserve allowed employment to press against labor supply for the first time in a generation.”
The productivity gains of the 1990s did, of course, involve new use of information technology. But the technology itself was not necessarily new. “James Cortada lists eleven key IT applications in the retail industry circa 1995-2000, including electronic shelf labels, scanning, electronic funds transfer, sales-based ordering and internet sales … with the exception of e-business, the list could have come from the 1970s and 1980s.”
Wright, who is after all a historian, is careful not to argue that there is a general law linking higher wages to higher productivity in all historical settings. As he notes, “such a claim is refuted by the experience of the 1970s, when upward pressures on wages led mainly to higher inflation…” In his story, both sides are needed — the technological possibilities must exist, and there must be sufficient wage pressure to channel them into productivity-boosting applications. I don’t think anyone would say he’s made a decisive case, but if you’re inclined to a view like this the article certainly gives you more material to support it.
A rather different approach to these questions is this 2012 paper by Servaas Storm and C. W. M. Naastepad. Wright focuses on a few concrete episodes in the history of a particular country, which he explores using a variety of material — survey and narrative as well as conventional economic data. Storm and Naastepad propose a set of general rules that they support with a few stylized facts and then explore via the properties of a formal model. There are things to be learned from both approaches.
In this case the model is simple: output is demand-determined. Demand is either a positive or a negative function of the wage share (i.e. the economy is either wage-led or profit-led). And labor productivity is a function of both output and the wage, reflecting two kinds of channels through which demand can influence productivity. An accounting identity then says that employment growth is equal to output growth less labor productivity growth. The productivity equation is the distinctive feature here. Storm and Naastepad adopt as “stylized facts” — derived from econometric studies but not discussed in any detail — that both parameters are on the order of 0.4: an additional one percent growth in output, or in wages, will lead to 0.4 percent growth in labor productivity.
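The model can be sketched in a few lines. This is my own illustrative rendering, not Storm and Naastepad’s exact specification: the linear growth-rate form and the function names are mine, while the 0.4 elasticities are the stylized facts just mentioned.

```python
# Illustrative sketch of the Storm-Naastepad productivity regime.
# The 0.4 elasticities are the paper's "stylized facts"; the simple
# linear growth-rate form is my own simplification for illustration.

def productivity_growth(output_growth, wage_growth,
                        output_elasticity=0.4, wage_elasticity=0.4):
    """Labor productivity growth as a function of output and real wage growth."""
    return output_elasticity * output_growth + wage_elasticity * wage_growth

def employment_growth(output_growth, wage_growth):
    """Accounting identity: employment growth = output growth - productivity growth."""
    return output_growth - productivity_growth(output_growth, wage_growth)

# Example: with output growth of 3 percent, cutting wage growth from 2 to 1
# percent lowers productivity growth by 0.4 points, so employment growth
# rises by 0.4 points (from about 1.0 to about 1.4) with output unchanged.
```

This is the mechanism behind their claim that wage restraint can “create” jobs mainly by slowing productivity growth rather than by raising output.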
This is a very simple structure but it allows them to draw some interesting conclusions:
– Low wages may boost employment not through increased growth or competitiveness, but through lower labor productivity. (They suggest that this is the right way to think about the Dutch “employment miracle” of the 1990s.)
– Conversely, even where demand is wage-led (i.e. a shift in income toward labor tends to raise total spending), faster wage growth is not an effective strategy for boosting employment, because productivity will rise as well. (Shorter hours or other forms of job-sharing, they suggest, may be more successful.)
– Where demand is strongly wage-led (as in the Scandinavian countries, they suggest), profits will not be affected much by wage growth. The direct effect of higher wages in this case could be mostly or entirely offset by the combination of higher demand and higher productivity. If true, this has obvious implications for the feasibility of the social democratic bargain there.
– Where demand is more weakly wage-led or profit-led (as with most structuralists, they see the US as the main example of the latter), distributional conflicts will be more intense. On the other hand, in this case the demand and productivity effects work together to make wage restraint a more effective strategy for boosting employment.
It’s worth spelling out the implications a bit more. A profit-led economy is one in which investment decisions are very sensitive to profitability. But investment is itself a major influence on profit, as a source of demand and — emphasized here — as a source of productivity gains that are captured by capital. So wage gains are more threatening to profits in a setting in which investment decisions are based largely on profitability. In an environment in which investment decisions are motivated by demand or exogenous animal spirits (“only a little more than an expedition to the South Pole, based on a calculation of benefits to come”), capitalists have less to fear from rising wages. More bluntly: one of the main dangers to capitalists of a rise in wages, is their effects on the investment decisions of other capitalists.
Matthew Klein has a characteristically thoughtful post disagreeing with my new paper on income distribution and debt. I think his post has some valid arguments, but also, from my point of view, some misunderstandings. In any case, this is the conversation we should be having.
I want to respond on the specific points Klein raises. But first, in this post, I want to clarify some background conceptual issues. In particular, I want to explain why I think it’s unhelpful to think about the issues of debt and demand in terms of saving.
Klein talks a great deal about saving in his post. Like most people writing on these issues, he treats the concepts of rising debt-income ratios, higher borrowing and lower saving as if they were interchangeable. In common parlance, the question “why have households borrowed more?” is equivalent to “why have households saved less?” And either way, the spending that raises debt and reduces saving, is also understood to contribute to aggregate demand.
This conception is laid out in Figure 1 below. These are accounting rather than causal relationships. A minus sign in the link means the relationship is negative.
We start with households’ decision to consume more or less out of their income. Implicitly, all household outlays are for consumption, or at least, this is the only flow of household spending that varies significantly. An additional dollar of household consumption spending means an additional dollar of demand for goods and services; it also means a dollar less of savings. A dollar less of savings equals a dollar more of borrowing. More borrowing obviously means higher debt, or — equivalently in this view — a higher debt-GDP ratio.
There’s nothing particularly orthodox or heterodox about this way of looking at things. You can hear the claim that a rise in the household debt-income ratio contributes more or less one for one to aggregate demand as easily from Paul Krugman as from Steve Keen. Similarly, the idea that a decline in savings rates is equivalent to an increase in borrowing is used by Marxists as well as by mainstream economists, not to mention eclectic business journalists like Klein. Of course no one actually says “we assume that household assets are fixed or nonexistent.” But implicitly that’s what you’re doing when you treat the question of what has happened to household borrowing as if it were the equivalent of what has happened to household saving.
There is nothing wrong, in principle, with thinking in terms of the logic of Figure 1, or constructing models on that basis. Social science is impossible without abstraction. It’s often useful, even necessary, to think through the implications of a small subset of the relationships between economic variables, while ignoring the rest. But when we turn to the concrete historical changes in macroeconomic quantities like household debt and aggregate demand in the US, the ceteris paribus condition is no longer available. We can’t reason in terms of the hypothetical case where all else was equal. We have to take into account all the factors that actually did contribute to those changes.
This is one of the main points of the debt-inequality paper, and of my work with Arjun Jayadev on household debt. In reality, much of the historical variation in debt-income ratios and related variables cannot be explained in terms of the factors in Figure 1. You need something more like Figure 2.
Figure 2 shows a broader set of factors that we need to include in a historical account of household sector balances. I should emphasize, again, that this is not about cause and effect. The links shown in the diagram are accounting relationships. You cannot explain the outcomes at the bottom without the factors shown here.  I realize it looks like a lot of detail. But this is not complexity for complexity’s sake. All the links shown in Figure 2 are quantitatively important.
The dark black links are the same as in the previous diagram. It is still true that higher household consumption spending raises aggregate demand and contributes to lower saving and higher borrowing, which in turn contribute to lower net wealth and a higher debt ratio. Note, though, that I’ve separated saving from balance sheet improvement. The economic saving used in the national accounts is quite different from the financial saving that results in changes in the household balance sheet.
In addition to the factors the debt-demand story of Figure 1 focuses on, we also have to consider: various actual and imputed payment flows that the national accounts attribute to the household sector, but which do not involve any money payments to or from households (blue); the asset side of household balance sheets (gray); factors other than current spending that contribute to changes in debt-income ratios (red); and changes in the value of existing assets (cyan).
The blue factors are discussed in Section 5 of the debt-distribution paper. There is a much fuller discussion in a superb paper by Barry Cynamon and Steve Fazzari, which should be read by anyone who uses macroeconomic data on household income and consumption. Saving, remember, is defined as the difference between income and consumption. But as Cynamon and Fazzari point out, on the order of a quarter of both household income and consumption spending in the national accounts is accounted for by items that involve no actual money income or payments for households, and thus cannot affect household balance sheets.
These transactions include, first, payments by third parties for services used by households, mainly employer-paid premiums for health insurance and payments to healthcare providers by Medicaid and Medicare. These payments are counted as both income and consumption spending for households, exactly as if Medicare were a cash transfer program that recipients then chose to use to purchase healthcare. If we are interested in changes in household balance sheets, we must exclude these payments, since they do not involve any actual outlays by households; but they still do contribute to aggregate demand. Second, there are imputed purchases where no money really changes hands at all. The most important of these are owners’ equivalent rent that homeowners are imputed to pay to themselves, and the imputed financial services that households are supposed to purchase (paid for with imputed interest income) when they hold bank deposits and similar assets paying less than the market interest rate. Like the third party payments, these imputed interest payments are counted as both income and expenditure for households. Owners’ equivalent rent is also added to household income, but net of mortgage interest, property taxes and maintenance costs. Finally, the national accounts treat the assets of pension and similar trust funds as if they were directly owned by households. This means that employer contributions and asset income for these funds are counted as household income (and therefore add to measured saving) while benefit payments are not.
These items make up a substantial part of household payments as recorded in the national accounts – Medicare, Medicaid and employer-paid health premiums together account for 14 percent of official household consumption; owners’ equivalent rent accounts for another 10 percent; and imputed financial services for 4 percent; while consolidating pension funds with households adds about 2 percent to household income (down from 5 percent in the 1980s). More importantly, the relative size of these components has changed substantially in the past generation, enough to substantially change the picture of household consumption and income.
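As a back-of-the-envelope check on those magnitudes, we can add up the consumption-side shares just quoted (the percentages are from the text above; the arithmetic is mine):

```python
# Shares of official household consumption accounted for by items involving
# no money outlay by households, as quoted in the text (recent-period values).
third_party_health = 0.14   # Medicare, Medicaid, employer-paid premiums
owners_equiv_rent  = 0.10   # imputed rent homeowners "pay" themselves
imputed_fin_svcs   = 0.04   # imputed bank services on low-interest deposits

noncash_share = third_party_health + owners_equiv_rent + imputed_fin_svcs
# Roughly 28 percent of official consumption, consistent with the
# "on the order of a quarter" figure from Cynamon and Fazzari.
```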
Incidentally, Klein says I exclude all healthcare spending in my adjusted consumption series. This is a misunderstanding on his part. I exclude only third-party health care spending — healthcare spending by employers and the federal government. I’m not surprised he missed this point, given how counterintuitive it is that Medicare is counted as household consumption spending in the first place.
This is all shown in Figure 3 below (an improved version of the paper’s Figure 1):
The two dotted lines remove public and employer payments for healthcare, respectively, from household consumption. As you can see, the bulk of the reported increase in household consumption as a share of GDP is accounted for by healthcare spending by units other than households. The gray line then removes owners’ equivalent rent. The final, heavy black line removes imputed financial services, pension income net of benefits payments, and a few other, much smaller imputed items. What we are left with is monetary expenditure for consumption by households. The trend here is essentially flat since 1980; it is simply not the case that household consumption spending has increased as a share of GDP.
So Figure 3 is showing the contributions of the blue factors in Figure 2. Note that while these do not involve any monetary outlay by households and thus cannot affect household balance sheets or debt, they do all contribute to measured household saving.
The gray factors involve household assets. No one denies, in principle, that balance sheets have both an asset side and a liability side; but it’s striking how much this is ignored in practice, with net and gross measures used interchangeably. In the first place, we have to take into account residential investment. Purchase of new housing is considered investment, and does not reduce measured saving; but it does of course involve monetary outlay and affects household balance sheets just as consumption spending does.  We also have to take into account net acquisition of financial assets. An increase in spending relative to income moves household balance sheets toward deficit; this may be accommodated by increased borrowing, but it can just as well be accommodated by lower net purchases of financial assets. In some cases, higher desired accumulation of financial assets can also be an autonomous factor requiring balance sheet adjustment. (This is probably more important for other sectors, especially state and local governments, than for households.) The fact that adjustment can take place on the asset as well as the liability side is another reason there is no necessary connection between saving and debt growth.
Net accumulation of financial assets affects household borrowing, but not saving or aggregate demand. Residential investment also does not reduce measured saving, but it does increase aggregate demand as well as borrowing. The red line in Figure 3 adds residential investment by households to adjusted consumption spending. Now we can see that household spending on goods and services did indeed increase during the housing bubble period – conventional wisdom is right on that point. But this was a spike of limited duration, not the secular increase that the standard consumption figures suggest.
Again, this is not just an issue in principle; historical variation in net acquisition of assets by the household sector is comparable to variation in borrowing. The decline in observed savings rates in the 1980s, in particular, was much more reflected in slower acquisition of assets than faster growth of debt. And the sharp fall in saving immediately prior to the Great Recession in part reflects the decline in residential investment, which peaked in 2005 and fell rapidly thereafter.
The cyan item is capital gains, the other factor, along with net accumulation, in growth of assets and net wealth. For the debt-demand story this is not important. But in other contexts it is. As I pointed out in my Crooked Timber post on Piketty, the growth in capital relative to GDP in the US is entirely explained by capital gains on existing assets, not by the accumulation dynamics described by his formula “r > g”.
Finally, the red items in Figure 2 are factors other than current spending and income that affect the debt-income ratio. Arjun Jayadev and I call this set of factors “Fisher dynamics,” after Irving Fisher’s discussion of them in his famous paper on the Great Depression. Interest payments reduce measured saving and shift balance sheets toward deficit, just like consumption; but they don’t contribute to aggregate demand. Defaults or charge-offs reduce the outstanding stock of debt, without affecting demand or measured savings. Like capital gains, they are a change in a stock without any corresponding flow.  Finally, the debt-income ratio has a denominator as well as a numerator; it can be raised just as well by slower nominal income growth as by higher borrowing.
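A minimal numerical sketch of these Fisher dynamics may help. The numbers here are hypothetical, chosen only to show the mechanism, not estimates from the papers:

```python
# Fisher dynamics: the debt-income ratio has a denominator as well as a
# numerator. Hypothetical round numbers, for illustration only.

def next_debt_ratio(debt_ratio, net_borrowing, nominal_growth, chargeoffs=0.0):
    """One-period update of the debt-income ratio (all rates as decimals).
    New debt = old debt + borrowing - chargeoffs, while nominal income
    grows at nominal_growth."""
    return (debt_ratio + net_borrowing - chargeoffs) / (1 + nominal_growth)

# Identical borrowing, different nominal income growth:
fast_growth = next_debt_ratio(1.00, 0.05, 0.06)  # ratio falls below 1
slow_growth = next_debt_ratio(1.00, 0.05, 0.02)  # ratio rises above 1
```

The point of the comparison: with the same borrowing flow, the debt ratio can rise or fall depending entirely on the denominator.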
These factors are the subject of two papers you can find here and here. The bottom line is that a large part of historical changes in debt ratios — including the entire long-term increase since 1980 — are the result of the items shown in red here.
So what’s the point of all this?
First, borrowing is not the opposite of saving. Not even roughly. Matthew Klein, like most people, immediately translates rising debt into declining saving. The first half of his post is all about that. But saving and debt are very different things. True, increased consumption spending does reduce saving and increase debt, all else equal. But saving also depends on third-party spending and on imputed spending and income that have no effect on household balance sheets. Debt growth, meanwhile, depends not only on saving but also on residential investment, net acquisition of financial assets, and the rate of chargeoffs; and if we are talking about the debt-income ratio, as we usually are, it also depends on nominal income growth. And these differences matter, historically. If you are interested in debt and household expenditure, you have to look at debt and expenditure. Not saving.
Second, when we do look at expenditure by households, there is no long-term increase in consumption. Consumption spending has been flat since 1980. Housing investment – which does involve outlays by households and may require debt financing – does increase in the late 1990s and early 2000s, before falling back. Yes, this investment was associated with a big rise in borrowing, and yes, this borrowing did occur significantly lower in the income distribution than borrowing in most periods. (Though still almost all in the upper half.) There was a debt-financed housing bubble. But we need to be careful to distinguish this episode from the longer-term rise in household debt, which has different roots.
 Think of it this way: If I ask why the return on an investment was 20 percent, there is no end to the causal factors you can bring in, from favorable macroeconomic conditions to a sound business plan to your investing savvy or inside knowledge. But in accounting terms, the return is always explained by the income and the capital gains over the period. If you know both those components, you know the return; if you don’t, you don’t. The relationships in the figure are the second kind of explanation.
 Improvement of existing housing is also counted as investment, as are brokers’ commissions and other ownership transfer costs. This kind of spending will absorb some part of the flow of mortgage financing to the household sector — including the cash-out refinancing of the bubble period — but I haven’t seen an estimate of how much.
 There’s a strand of heterodox macro called “stock-flow consistent modeling.” Insofar as this simply means macroeconomics that takes aggregate accounting relationships seriously, I’m very much in favor of it. Social accounting matrices (SAMs) are an important and underused tool. But it’s important not to take the name too literally — economic reality is not stock-flow consistent!
Here is a figure from the paper I’m presenting at the Eastern Economics Association meetings next weekend, on state and local government balance sheets:
This figure is just for aggregate state governments. It shows total borrowing (red), net acquisition of financial assets (blue), and the overall fiscal balance (black, with surplus as positive). It also shows the year over year change in the ratio of state debt to GDP (the gray dotted line). A number of interesting points come out here:
Despite statutory balanced-budget requirements, state budgets do show significant cyclical movement, from aggregate deficits of around 0.5 percent of GDP in recent recessions to surpluses as high as 0.5 percent of GDP in the expansions of the 1980s and 1990s (not shown here). Individual state governments show larger movements.
Shifts in state government fiscal balances are accommodated almost entirely on the asset side of the balance sheet. When state government revenue exceeds current expenditure, they buy financial assets; when revenue falls or expenditure rises, they sell financial assets (or buy less). State governments borrow in order to finance specific capital projects; unlike the federal government, they do not use credit-market borrowing to close gaps between current expenditure and revenue. (As I show in the paper, this is still true when we look at state governments cross-sectionally rather than in the aggregate data.) Between 2005 and 2009, state budgets moved from an aggregate surplus of around 0.3 percent of GDP to an aggregate deficit of around 0.5 percent. But borrowing over this period was completely flat – the entire shortfall was made up by reduced acquisition of financial assets.
The ratio of state government debt to GDP rose over the Great Recession period, by a total of about 2 points. While this is small compared with the increase in federal debt over the same period, it is certainly not trivial. Among other things, rising state debt ratios have been used as arguments for austerity and attacks on public-sector unions in a number of states. But as we see here, the entire rise in state debt-GDP ratios over this period is explained by slower growth. The ratio rose because of a smaller denominator, not a bigger numerator.
State debt ratios rose around the same time that state budgets moved into deficit. But there is no direct relationship between these two developments. Deficits were financed entirely through a reduction in assets. Simultaneously, the drastic slowdown in growth meant that even though state governments significantly reduced their borrowing, in dollar terms, during the recession, the ratio of debt to income rose. It is true, of course, that both the deficits and the growth slowdown were the result of the recession. But the increase in state debt ratios would have been exactly the same if state budgets had not moved to deficit at all.
Since 2010 there has been a simultaneous fall in state government borrowing and acquisition of assets. When these two variables vary together (as they also do across governments in some periods) it suggests that there is some autonomous balance sheet adjustment going on that can’t be reduced to the net financial position changing to accommodate real flows. (The fact that offsetting financial positions cannot in general be netted out is one of the main planks of Bezemer’s accounting view of economics.)
The pattern is similar in the previous recession. Although there was some increase in borrowing as state governments moved into deficit in 2002-2003, the large majority of the financing was on the asset side.
The larger significance of all this, and the data underlying it, is discussed more in the paper. I will post that here next week. In the meantime, the two big takeaways are, first, that a lot of the historical variation in debt ratios is driven by the effect of different nominal growth rates on the existing debt stock rather than by new borrowing; and, second, that state governments finance budget imbalances not on the liability side of their balance sheets, but on the asset side.
(Earlier posts based on the same work here and here.)
In a previous post, I pointed out that state and local governments in the US have large asset positions — 33 percent of GDP in total, down from nearly 40 percent before the recession. This is close to double state and local debt, which totals 17 percent of GDP. Among other things, this means that a discussion of public balance sheets that looks only at debt is missing at least half the picture.
On the other hand, a bit over half of those assets are in pension funds. Some people would argue that it’s misleading to attribute those holdings to the sponsoring governments, or that if you do you should also include the present value of future pension benefits as a liability. I’m not sure; I think there are interesting questions here.
But there are also interesting questions that don’t depend on how you treat the existing stocks of pension assets and liabilities. Here are a couple. First, how do changes in state credit-market debt break down between the current fiscal balance and other factors, including pension fund contributions? And second, how much of state and local fiscal imbalances is financed by borrowing, and how much by changes in the asset position?
Most economists faced with questions like these would answer them by running a regression.  But as I mentioned in the previous post, I don’t think a regression is the right tool for this job. (If you don’t care about the methods and just want to hear the results, you can skip the next several paragraphs, all the way down to “So what do we find?”)
Think about it: what is a regression doing? Basically, we have a variable a that we think is influenced by some others: b, c, d … Our observations of whatever social process we’re interested in consist of sets of values for a, b, c, d… , all of them different each time. A regression, fundamentally, is an imaginary experiment where we adjust the value of just one of b, c, d… and observe how a changes as a result. That’s the meaning of the coefficients that are the main outputs of a regression, along with some measure of our confidence in them.
But in the case of state budgets we already know the coefficients! If you increase state spending by one dollar, holding all other variables constant, well then, you increase state debt by one dollar. If you increase revenue by one dollar, again holding everything else constant, you reduce debt by one dollar. Budgets are governed by accounting identities, which means we know all the coefficients — they are one or negative one as the case may be. What we are interested in is not the coefficients in a hypothetical “data generating process” that produces changes in state debt (or whatever). What we’re interested in is how much of the observed historical variation in the variable of interest is explained by the variation in each of the other variables. I’m always puzzled when I see people regressing the change in debt on expenditure and reporting a coefficient — what did they think they were going to find?
For the question we’re interested in, I think the right tool is a covariance matrix. (Covariance is the basic measure of the variation that is shared between two variables.) Here we are taking advantage of the fact that covariance is linear: cov(x, y + z) = cov(x, y) + cov(x, z). Variance, meanwhile, is just a variable’s covariance with itself. So if we know that a = b + c + d, then we know that the variance of a is equal to the sum of its covariances with each of the others. In other words, if y = Σ xn then:
(1) var(y) = Σ cov(y, xn)
So for example: If the budget balance is defined as revenue – spending, then the variance of some observed budget balances must be equal to the covariance of the balance with revenue, minus the covariance of the balance with spending.
This makes a covariance matrix an obvious tool to use when we want to allocate the observed variation in a variable among various known causes. But for whatever reason, economists turn to variance decompositions only in a few specific contexts. It’s common, for instance, to see a variance decomposition of this kind used to distinguish between-group from within-group inequality in a discussion of income distribution. But the same approach can be used any time we have a set of variables linked by accounting identities (or other known relationships) and we want to assess their relative importance in explaining some concrete variation.
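Here is a quick numerical demonstration of identity (1), with simulated data (the setup and variable names are mine, purely illustrative):

```python
import numpy as np

# Verify: if y = x1 + x2 + x3, then var(y) = cov(y,x1) + cov(y,x2) + cov(y,x3).
# This holds exactly (up to floating point) by the bilinearity of covariance,
# whatever the correlations among the components.
rng = np.random.default_rng(0)
x1, x2, x3 = rng.normal(size=(3, 200))
y = x1 + x2 + x3

covariances = [np.cov(y, x)[0, 1] for x in (x1, x2, x3)]
assert np.isclose(np.var(y, ddof=1), sum(covariances))
```

Note the `ddof=1`: `np.cov` uses the sample (n-1) normalization by default, so the variance must be computed the same way for the identity to hold exactly.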
In the case of state and local budgets, we can start with the identity that sources of funds = uses of funds. (Of course this is true of any economic unit.) Breaking things up a bit more, we can write:
revenues + borrowing = expenditure + net acquisition of financial assets (NAFA).
Since we are interested in borrowing, we rearrange this to:
(2) borrowing = expenditure – revenue + NAFA
But we are not simply interested in borrowing; we are interested in the change in the debt-GDP ratio (or the debt-GSP ratio, in the case of individual states). And this ratio has a denominator as well as a numerator. So we write:
(3) change in debt ratio = net borrowing – nominal growth rate * current debt ratio
This is also an accounting identity, but not an exact one; it’s a linear approximation of the true relationship, which is nonlinear. But with annual debt and income growth rates in the single digits, the approximation is very close.
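A quick numerical check of how close the approximation is, using hypothetical single-digit rates (the debt ratio, borrowing, and growth figures here are made-up illustrative numbers):

```python
# Hypothetical values: debt ratio of 17% of GDP, net borrowing of 1% of
# GDP, nominal growth of 5%.
d0 = 0.17   # initial debt-GDP ratio
b = 0.01    # net borrowing as a share of GDP
g = 0.05    # nominal growth rate

# Exact change in the ratio: new debt over grown GDP, minus the old ratio
exact = (d0 + b) / (1 + g) - d0

# Linear approximation from (3): borrowing minus growth times the ratio
approx = b - g * d0

print(exact, approx)  # the two differ by less than 0.0001
```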
So we have:
(4) change in debt ratio = expenditure – revenue + NAFA – nominal growth rate * current debt ratio
It follows from equation (1) that the variance of change in the debt ratio is equal to the sum of the covariances of the change with each of the right-side variables. In other words, if we are interested in understanding why debt-GDP ratios have risen in some years and fallen in others, it’s straightforward to decompose this variation into the contributions of variation in each of the other variables. There’s no reason to do a regression here. 
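As a sketch of the mechanics, here is the decomposition applied to synthetic stand-ins for the series in equation (4). These are not the Census data used in the post; the point is only that the covariance shares of the right-side variables sum to one by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60  # annual observations, roughly like 1955-2013

# Synthetic components of equation (4), loosely scaled as shares of GDP
expenditure = rng.normal(12, 0.4, n)
revenue = rng.normal(12, 0.3, n)
nafa = rng.normal(0.5, 0.5, n)
growth_contrib = rng.normal(1.0, 0.3, n)  # nominal growth rate * debt ratio

d_debt_ratio = expenditure - revenue + nafa - growth_contrib

# Share of the variance of debt-ratio changes attributable to each
# component, with signs flipped for terms that enter negatively.
components = {
    "fiscal balance (-)": expenditure - revenue,
    "NAFA": nafa,
    "growth contrib. (-)": -growth_contrib,
}
total_var = np.var(d_debt_ratio)
shares = {name: np.cov(d_debt_ratio, x, bias=True)[0, 1] / total_var
          for name, x in components.items()}

print(shares)
print(sum(shares.values()))  # exactly 1, up to floating point
```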
So what do we find?
Here’s the covariance matrix for combined state and local debt for 1955 to 2013. “Growth contrib.” refers to the last term in Equation (4). To make reading the table easier, I’ve reversed the sign of the growth contribution, fiscal balance and revenue; that means that positive values in the table all refer to factors that increase the variance of debt-ratio growth and negative values are factors that reduce it. 
[Covariance matrix for 1955–2013. Rows and columns: Debt Ratio Growth, Growth Contrib. (-), Fiscal Balance (-), NAFA & Trusts; the key entries are discussed in the text below.]
How do we read this? First of all, note the bolded terms along the main diagonal — those are the covariance of each variable with itself, that is, its variance. It is a measure of how much individual observations of this variable differ from each other. The off-diagonal terms, then, show how much of this variation is shared between two variables. Again, we know that if one variable is the sum of several others, then its variance will be the sum of its covariances with each of the others.
So for example, total variance of debt ratio growth is 0.18. (That means that the debt ratio growth in a given year is, on average, about 0.4 percentage points above or below the average growth rate for the full period.) The covariance of debt-ratio growth and (negative) growth contribution is 0.10. So a bit over half the debt-ratio variance is attributable to nominal GDP growth. In other words, if we are looking at why the debt-GDP ratio rises more in some years than in others, more of the variation is going to be explained by the denominator of the ratio than the numerator. Next, we see that the covariance of debt growth with the (negative) fiscal balance is 0.03. In other words, about one-sixth of the variation in annual debt ratio growth is explained by fiscal deficits or surpluses.
This is important, because most discussions of state and local debt implicitly assume that all change in the debt ratio is explained this way. But in fact, while the fiscal balance does play some role in changes in the debt ratio — the covariance is greater than zero — it’s a distinctly secondary role. Finally, the last variable, “NAFA & Trusts,” explains about a third of variation in debt ratio growth. In other words, years when state and local government debt is rising more rapidly relative to GDP are also years in which those governments are adding more rapidly to their holdings of financial assets. And this source of variation explains about twice as much of the historical pattern of debt ratio changes as the fiscal balance does.
Since this is probably still a bit confusing, the next table presents the same information in a hopefully clearer way. Here we see only the covariances with debt ratio growth — the first column of the previous table — and they are normalized by the variance of debt ratio growth. Again, I’ve flipped the sign of variables that reduce debt-ratio growth. So each value in the table shows the share of variation in the growth of state-local debt ratios that is explained by that component. There is also a second column, showing the same thing for state governments only.
[Table: shares of the variance of debt ratio growth explained by Nominal Growth (-), Fiscal Balance (-) (of which: Interest), and Trust Contrib. and NAFA (of which: Pensions), for the combined State + Local sector and for state governments only.]
I’ve added a couple of variables here — interest payments under expenditure and pension contributions under NAFA and Trusts. Note in particular the small value of the latter. Pension contributions are quite stable from year to year. (The standard deviation of state/local pension contributions as a percent of GDP is just 0.07, versus around 0.5 for nontrust NAFA.) This says that even though most state and local assets are in pension funds, pension contributions contribute only a little to the variation in asset acquisition. Most of the year to year variability is in governments’ acquisition of assets on their own behalf. This is helpful: It means that if we are interested in understanding variation in the growth of debt over time, or the role of assets vs. liabilities in accommodating fiscal imbalances, we don’t need to worry too much about how to think about pension funds. (If we want to focus on the total increase in state debt, as opposed to the variation over time, then pensions are still very important.)
If we compare the overall state-local sector with state governments only, the picture is broadly similar, but there are some interesting differences. First of all, nominal growth rates are somewhat less important, and the fiscal balance more important, for state government debt ratio. This isn’t surprising. State governments have more flexibility than local ones to independently adjust their spending and revenue; and state debt ratios are lower, so the effect on the ratio from a given change in growth rates is proportionately smaller. For the same reason, the effect of interest rate changes on the debt ratio, while small in both cases, is even smaller for the lower-debt state governments. 
So now we have shown more rigorously what we suggested in the previous post: While the fiscal balance plays some role in explaining why state and local debt ratios rise at some times and fall at others, it is not the main factor. Nominal growth rates and asset acquisition both play larger roles.
Let’s turn to the next question: How do state and local government balance sheets adjust to fiscal imbalances? Again, this is just a re-presentation of the data in the first table, this time focusing on the third column/row. Again, we’re also doing the decomposition for states in isolation, and adding a couple more items — in this case, the taxes and intergovernmental assistance components of revenue, and the pension contribution component of NAFA. The values are normalized here by the variance of the fiscal balance. The first four lines sum to 1, as do the last three. In effect, the first four rows of the table tell us where fiscal imbalances come from; the final three tell us where they go.
[Table: covariances with the fiscal balance, normalized by its variance, for the State + Local sector and for state governments only. Rows: Revenue (of which: taxes and federal aid), Expenditure, Borrowing, and Trust Contrib. and NAFA (of which: pensions).]
So what do we see? Looking at the first set of lines, we see that state-local fiscal imbalances are entirely expenditure-driven. Surprisingly, revenues are no lower in deficit years than in surplus ones. Note that this is true of total revenues, but not of taxes. Deficit years are indeed associated with lower tax revenue and surplus years with higher taxes, as we would expect. (That’s what the positive values in the “taxes” row mean.) But this is fully offset by the opposite variation in payments from the federal government, which are lower in surplus years and higher in deficit years. During the most recent recession, for example, aggregate state and local taxes declined by about 0.4 percent of GDP. But federal assistance to state and local governments increased by 0.9 percent of GDP. This was unexpected to me: I had expected most of the variation in state budget balances to come from the revenue side. But evidently it doesn’t. The covariance matrix is confirming, and quantifying, what you can see in the figure below: Deficit years for the state-local sector are associated with peaks in spending, not troughs in revenue.
Turning to the question of how imbalances are accommodated, we find a similarly one-sided story. None of the changes in state-local budget balances result in changes in borrowing; all of them go to changes in fund contributions and direct asset purchases.  For the sector as a whole, in fact, asset purchases absorb more than all the variation in fiscal imbalances; borrowing is lower in deficit years than in surplus years. (For state governments, borrowing does absorb about ten percent of variation in the fiscal balance.) Note that very little of this is accounted for by pensions — less than none in the case of state governments, which see lower overall asset accumulation but higher pension fund contributions in deficit years. Again, even though pension funds account for most state-local assets, they account for very little of the year to year variation in asset purchases.
So the data tells a very clear story: Variation in state-local budget balances is driven entirely by the expenditure side; cyclical changes in their own revenue are entirely offset by changes in federal aid. And state budget imbalances are accommodated entirely by changes in the rate at which governments buy or sell assets. Over the postwar period, the state-local government sector has not used borrowing to smooth over imbalances between revenue and spending.
 The interesting historical meta-question, to which I have no idea of the answer, is when and why regression analysis came to so completely dominate empirical work in economics. I suspect there are some deep reasons why economists are more attracted to methodologies that treat observed data as a sample or “draw” from a universal set of rules, rather than methodologies that focus on the observed data as the object of inquiry in itself.
 I confess I only realized recently that variance decompositions can be used this way. In retrospect, we should have done this in our papers on household debt.
 Revenue and expenditure here include everything except trust fund income and payments. In other words, unlike in the previous post, I am following the standard practice of treating state and local budgets as separate from pension funds and other trust funds. The last line, “NAFA and Trusts”, includes both contributions to trust funds and acquisition of financial assets by the government itself. But income generated by trust fund assets, and employee contributions to pension funds, are not included in revenue, and benefits paid are not included in expenditure. So the “fiscal balance” term here is basically the same as that reported by the NIPAs and other standard sources.
 This is different from households and the federal government, where higher debt and, in the case of households, more variable interest rates, mean that interest rates are of first-order importance in explaining the evolution of debt ratios over time.
 It might seem contradictory to say that a third of the variation in changes in the debt ratio is due to the fiscal balance, even though none of the variation in the fiscal balance is passed through to changes in borrowing. The reason this is possible is that periods when there are both deficits and higher borrowing are also periods of slower nominal income growth. This implies additional variance in debt growth, which is attributed to both growth and the fiscal balance. There’s some helpful discussion here.
(This post is based on a paper in process. I probably will not post any more material from this project for the next month or so, since I need to return to the question of potential output.)
Let’s talk about state and local government balance sheets.
Like most sectors of the US economy, state and local governments have seen a long-term increase in credit-market debt, from about 8 percent of GDP in 1950 to 19 percent of GDP in 2010, before falling back a bit to 17 percent in 2013.  While this is modest compared with federal-government and household debt, it is not trivial. Municipal bonds are important assets in financial markets. On the liability side, state and local debt operates as a political constraint at the state level and often plays a prominent role in public discussions of state budgets. Cuts to state services and public employee wages and pensions are often justified by the problem of public debt, municipal bond offerings are a focal point for local politics, and you don’t have to look far to find scare stories about an approaching state or local debt crisis.
My interest in state and local debt is an extension of my work (with Arjun Jayadev) on household debt and on sovereign debt. The question is: To what extent do historical changes in debt ratios reflect the balance between revenue and expenditure, and to what extent do they represent monetary-financial factors like inflation and interest rates? The exact balance of course depends on the sector and period; what we want to steer people away from is the habit of assuming that balance sheet changes are a straightforward record of real income and spending flows.
The first thing to note about state and local debt is that, as the first figure shows, only about 40 percent of it is owed by state governments, with the majority owed by the thousands of local governments of various types. Of the 10 percent of GDP or so owed by local governments, about half is owed by general-purpose governments (cities, counties and towns, in that order), and half by special purpose districts, with school districts accounting for about half of this (or a bit over 2 percent of GDP). This is interesting because, as the figure below shows, the majority of state and local spending is at the state level.
This imbalance goes back to at least the 1950s and 1960s, when local governments accounted for just over half of combined state and local spending, but more than three-quarters of combined state and local government debt. The explanation for the different distributions of spending and debt over different levels of government is simple: While state governments account for a larger share of total state and local spending, local governments account for about two-thirds of state and local capital spending. In the US, most infrastructure spending is the responsibility of local governments; direct service provision, which requires buildings and other fixed assets, is also disproportionately local. State government budgets, on the other hand, include a large proportion of transfer spending, which is negligible at the local level. Since debt is mainly used to finance capital spending, it’s no surprise that the distribution of debt looks more like the distribution of capital spending than like the distribution of spending in general.
This is an interesting fact in itself, but it also is a good illustration of an important larger point that should be obvious but is often ignored: The main use of debt is to finance assets. This simple point is for some reason almost always ignored by economists — both mainstream and heterodox economists regard the paradigmatic loan as a consumption loan.  Among other things, this leads to the mistaken idea that credit-market debt reflects — or at least is somehow related to — dissaving. When in fact there’s no connection.
For households and businesses, just as for state and local governments, the majority of debt finances investment.  This means that additions to the liability side of the balance sheet are normally simultaneous with additions to the asset side, with no effect on saving. If anything, since most assets are not financed entirely with debt, most transactions that increase debt require saving to increase also. (Homebuyers normally get a mortgage and make a downpayment.) Sovereign governments are the only economic units whose borrowing mainly finances gaps between current revenue and current expenditure. Again, this point is missed as much by heterodoxy as by the mainstream. Just flipping over to the next tab in my browser, I find a Marxist writing that “Debt has become so high that the personal savings rate in the United States actually became negative.” Which is a non sequitur.
The fact that most state-local debt is at the local level, while most spending is at the state level, is a reflection of the fact that debt is used to finance capital spending and not spending in general. But in and of itself this fact doesn’t tell us anything about how much changes in the state-local debt ratio reflect fiscal deficits or dissaving. It still could be true that state and local debt mainly reflects accumulated fiscal deficits.
As it turns out, though, it isn’t true at all. As the next figure shows, historically there is no relationship between changes in the state-local debt ratio and the state-local fiscal balance.
Here, the vertical axis shows the change in the ratio of aggregate state and local debt to GDP over the year. The horizontal axis shows the aggregate fiscal balance, with surpluses positive and deficits negative. So for instance, in 2009 the debt ratio increased by about one point, while state and local governments ran an aggregate budget deficit of close to 6 percent of GDP. If changes in the debt ratio mainly reflected fiscal deficits, we would expect most of the points to fall along a line sloping down from upper left to lower right. They really don’t. Yes, 2009 has both very large deficits and a large rise in the debt ratio; but 2007 has the largest aggregate surpluses, and the debt ratio rose by almost as much. Eyeballing the figure you might see a weak negative relationship; but in this case your eyeballs are fooling you. In fact, the correlation is positive. A regression of the change in debt on the fiscal balance yields a coefficient of positive 0.11, significant at the 5 percent level. As I’ll discuss later, I’m not sure a regression is a good tool for this job. But it is good enough to answer the question, “Is state and local debt mainly the result of past deficits?” with a definite No.
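For what it’s worth, the regression itself is a one-liner. The series below are simulated, with the positive slope reported above baked in, just to show the mechanics; they are not the actual Census aggregates:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 59  # one observation per year, as in 1955-2013

# Simulated data: fiscal balance and debt-ratio change, both in percent
# of GDP, with a positive slope of 0.11 assumed (the value in the text).
fiscal_balance = rng.normal(0, 1, n)
d_debt_ratio = 0.11 * fiscal_balance + rng.normal(0, 0.2, n)

slope, intercept = np.polyfit(fiscal_balance, d_debt_ratio, 1)
print(slope)  # positive, not the negative sign a deficit-driven story implies
```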
How can state and local fiscal balances vary without changing the sector’s debt? The key thing to recognize about state and local government balance sheets is that they also have large financial asset positions. In the aggregate, the sector’s net financial wealth is positive; unlike the federal government, state and local governments are net creditors, not net borrowers, in financial markets. As of 2013, the sector as a whole had total debt of 18 percent of GDP, and financial assets of 34 percent of GDP. As the following figures show, the long-term rise in state and local assets is much bigger than the rise in debt. Now it is true that most of these assets are held in pension funds, rather than directly. But a lot of them are not. In fact, for state governments — though not for the state-local sector as a whole — even nontrust assets exceed total debt. And whether or not you want to attribute pension assets to the sponsoring government, contributions to pension funds are an important margin on which state budgets adjust.
As the final figure shows, since the mid 1990s the aggregate financial assets of state-local government have exceeded aggregate debt in every single state. (Alaska, with government net financial wealth in excess of 100 percent of gross state product, is off the top of the chart, as is Wyoming.) This is a change from the 1950s and 1960s, when positive and negative net positions were about equally common. Nationally, the net credit position of state and local governments was equal to 16 percent of GDP in 2013, down from over 20 percent in 2007.
These large asset positions have a number of important implications:
1. To the extent that state and local governments run deficits in recessions, they can be financed by reducing net acquisition of assets rather than by issuing more debt. And historically it seems that this is how they mostly are financed, especially in recent cycles. So if we are interested in whether state and local budgets behave procyclically or anticyclically, the degree of flexibility these governments have on the asset side is going to be a key factor.
2. Some large part of the long-term increase in state and local debt can be attributed to increased net acquisition of assets. This is especially notable in the 1980s, when there were simultaneous rises in both state debt and state financial assets. And changes in assets are strongly correlated across states. I.e. the states that increase their debt the most in a given year, tend to also be the ones that increased their assets the most — in some periods, higher debt is actually associated with a shift toward a net creditor position.
3. Low interest rates are not so clear an argument for increased infrastructure spending as people often assume, given that little of this spending currently happens at the federal level. Yes, an individual project may still look more cost-effective, but set against that is the pressure to increase trust fund contributions.
4. If state and local governments face financial constraints on current spending, these are at least as likely to reflect the terms on which they must prefund future expenses as the terms on which they can borrow.
The second point is the key one for my larger argument. Debt is part of a financial system that evolves independently of the system comprising “real” income and expenditure. They connect with each other, but they don’t correspond to each other. The case of state and local governments is somewhat different from households and the federal government — for the latter two, changes in interest rates play a major role in the evolution of debt ratios (along with changing default rates for households), while net acquisition of financial assets is not important for the federal government. But in all cases, purely financial factors play a major role in the evolution of debt ratios, along with changes in nominal income growth rates, which explain about a third of the variation in state-local debt ratios over time. And in all cases the divergence between the real and financial variables is especially visible in the 1980s.
With respect to state and local governments specifically, point 4 may be the most interesting one. Why do state and local governments hold so much bigger asset positions than they used to? What is the argument for prefunding pension benefits and similar future expenses, rather than meeting them on a pay-as-you-go basis? And how do those arguments change if we think the current regime of low interest rates is likely to persist indefinitely? It’s not obvious to me that either public employees or public employers are better off with funded pensions. Unlike in the private sector, public employees don’t need insurance against outliving their employer. It’s not obvious why governments should hold reserves against future pension payments but not against other equally large, equally predictable future payments. Nor is it obvious how much protection funded pensions offer against benefit cuts. And if interest rates remain lower than growth rates, prefunding pensions is actually more expensive than treating them as a current expense. I see lots of discussion about how state and local government funds should be managed, but does anyone ask whether they should hold these big funds at all?
In any case, given the very large asset positions of state and local governments, and the large cyclical and secular variation in net acquisition of assets, it’s clear that we shouldn’t imagine there’s any connection between state and local debt and state and local fiscal positions. And we shouldn’t assume that the main financial problem faced by state and local governments is the terms they can borrow on. Most of the action is on the asset side.
 All data in this post comes from the Census of Governments.
 This is true of economic theory obviously, but it’s also true of a lot of empirical work. When Gabriel Chodorow-Reich was hired at Harvard a few years ago, for instance, his job market paper was an empirical study of credit constraints on business borrowers that ignored investment and treated credit as an input into current production.
 For households, nearly 70 percent of debt is accounted for by mortgages, with auto loans and student debt accounting for another 10 percent each. (Admittedly, spending in the latter two categories is counted as consumption in the national accounts; but functionally, cars and diplomas are assets.) Less than 10 percent of household debt looks like consumption loans.
 This is different from the number you will find in the national accounts. The main reasons for the difference are, first, that the Census works on a strict cashflow basis, and, second, that it consolidates pension and other trust funds with the sponsoring government. (See here.) This means that if a pension fund’s benefit payments exceed its income in a given year, that contributes to the deficit of the sponsoring government in the Census data, but not in the national accounts. This is what’s responsible for the very large deficits reported for 2009. If we are interested in credit-market debt the Census approach seems preferable, but there are some tricky questions for sure. All this will be discussed in more detail in the paper I’m writing on state and local balance sheets.
Empirically-oriented macroeconomists have recognized since the early 20th century that output, employment and productivity move together over the business cycle. The fact that productivity falls during recessions means that employment varies less over the cycle than output does. This behavior is quite stable over time, giving rise to Okun’s law. In the US, Okun’s law says that the unemployment rate will rise by one point for each 2.5 point shortfall of GDP growth relative to trend — a ratio that doesn’t seem to have changed much since Arthur Okun first described it in the mid-1960s.
It’s not obvious that productivity should show this procyclical behavior. As I noted in the previous post, a naive prediction from a production function framework would be that a negative demand shock should reduce employment more than output, since business can lay off workers immediately but can’t reduce their capital stock right away. In other words, productivity should rise in recessions, since the labor of each still-employed worker is being combined with more capital.
There are various explanations for why labor productivity behaves procyclically instead. The most common ones focus on the transition costs of changing employment. Since hiring and firing is costly for businesses, they don’t adjust their laborforce to every change in demand. So when sales fall in recessions, they will keep extra workers on payroll — paying them now is cheaper than hiring them back later. Similarly, when sales rise businesses will initially try to get more work out of their existing employees. This shows up as rising labor productivity, and as the repeated phenomenon of “jobless recoveries.”
Understood in these terms, the positive relationship between output, employment and productivity should be a strictly short-term phenomenon. If a change in demand (or in other constraints on output) is sustained, we’d expect labor to fully adjust to the new level of production sooner or later. So over horizons of more than a year or two, we’d expect output and employment to change in proportion. If there are other limits on production (such as non-produced inputs like land) we’d expect output and labor productivity to move inversely, with faster productivity growth associated with slower employment growth or vice versa. (This is the logic of “robots taking the jobs.”) A short-term positive, medium term negative, long-term flat or negative relationship between employment growth and productivity growth is one of the main predictions that comes out of a production function. But it doesn’t require one. You can get there lots of other ways too.
And in fact, it is what we see.
The figure shows the simple correlation of employment growth and productivity growth over various periods, from one quarter out to 50 quarters. (This is based on postwar US data.) As you can see, over periods of a year or less, the correlation is (weakly) positive. Six-month periods in which employment growth was unusually weak are somewhat more likely to have seen weak productivity growth as well. This is the cyclical effect presumably due to transition costs — employers don’t always hire or fire in response to short-run changes in demand, allowing productivity to vary instead. But if sales remain high or low for an extended period, employers will eventually bring their laborforce into line, eliminating this relationship. And over longer periods, autonomous variation in productivity and in labor supply is more important. Both of these tend to produce a negative relationship between employment and productivity. And that’s exactly what we see — a ten-year period in which productivity grew unusually quickly is likely to be one in which employment grew slowly. (Admittedly postwar US data doesn’t give you that many ten-year periods to look at.)
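A sketch of how a figure like this can be built, using synthetic quarterly log-level series in place of the actual postwar data:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 280  # about 70 years of quarters

# Synthetic log levels of employment and labor productivity (random
# walks with drift; stand-ins for the actual postwar series)
emp = np.cumsum(rng.normal(0.003, 0.005, T))
prod = np.cumsum(rng.normal(0.005, 0.005, T))

def horizon_correlation(x, y, h):
    """Correlation of h-quarter changes in x with h-quarter changes in y,
    using overlapping windows."""
    dx = x[h:] - x[:-h]
    dy = y[h:] - y[:-h]
    return np.corrcoef(dx, dy)[0, 1]

for h in (1, 4, 8, 40):
    print(h, horizon_correlation(emp, prod, h))
```

Note that the windows overlap, so the usual significance bands can’t be taken at face value.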
Another way of doing this is to plot an “Okun coefficient” for each horizon. Here we are looking at the relationship between changes in employment and output. Okun’s law is usually expressed in terms of the relationship between unemployment and output, but here we will look at it in terms of employment instead. We write
(1) %ΔE = a (g – c)
where %ΔE is the percentage change in employment, g is the percentage growth in GDP, c is a constant (the long-run average rate of productivity growth) and a is the Okun coefficient. The value of a says how much additional growth in employment we’d expect from a one percentage-point increase in GDP growth over the given period. When the equation is estimated in terms of unemployment and the period is one year, a is generally on the order of 0.4 or so, meaning that to reduce unemployment by one point over a year normally requires GDP growth around 2.5 points above trend. We’d expect the coefficient for employment to be greater, but over short periods at least it should still be less than one.
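Estimating equation (1) at a given horizon is then straightforward. Here is a sketch on synthetic log-level series; the actual figure uses postwar US data for output and employment:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 280  # quarters

# Synthetic log levels of GDP and employment (random walks with drift)
gdp = np.cumsum(rng.normal(0.008, 0.01, T))
emp = np.cumsum(rng.normal(0.004, 0.008, T))

def okun_coefficient(log_gdp, log_emp, h):
    """OLS estimate of a in (1) over h-quarter windows: the slope of
    percent employment growth on percent GDP growth. The fitted
    intercept corresponds to -a*c in equation (1)."""
    g = (log_gdp[h:] - log_gdp[:-h]) * 100
    de = (log_emp[h:] - log_emp[:-h]) * 100
    a, intercept = np.polyfit(g, de, 1)
    return a

for h in (1, 8, 40):
    print(h, okun_coefficient(gdp, emp, h))
```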
Here is what we see if we estimate the equation for changes in output and employment for various periods, again ranging from one quarter up to ten years. (Again, postwar US data. The circles are the point estimates of the coefficients; the dotted lines are two standard errors above and below, corresponding to a standard 95% confidence interval.)
What does this show? If we estimate Equation (1) looking at changes over one quarter, we find that one percentage point of additional GDP growth is associated with just half a point of additional employment growth. But if we estimate the same equation looking at changes over two years, we find that one point of additional GDP growth is associated with 0.75 points of additional employment growth.
The fact that the coefficient is smallest for the shorter periods is, again, consistent with the conventional understanding of Okun’s law. Because hiring and firing is costly, employers don’t fully adjust staffing unless a change in sales is sustained for a while. If you were thinking in terms of a production function, the peak around 2 years represents a “medium-term” position where labor has adjusted to a change in demand but the capital stock has not.
While it’s not really relevant for current purposes, it’s interesting that at every horizon the coefficient is significantly below one. What this tells us is that there is no actual time interval corresponding to the “long run” of the model — a period long enough for labor and the capital stock to be fully adjusted but short enough that technology is fixed. Over this hypothetical long run, the coefficient would be one. One way to think about the fact that the estimated coefficients are always smaller is that any period long enough for labor to adjust is already long enough to see noticeable autonomous changes in productivity.
But what we’re interested in right now is not this normal pattern. We’re interested in how dramatically the post-2008 period has departed from it. The past eight years have seen close to the slowest employment growth of the postwar period and close to the slowest productivity growth. It is normal for employment and productivity to move together for a couple quarters or a year, but very unusual for this joint movement to be sustained over nearly a decade. In the postwar US, at least, periods of slow employment growth are much more often periods of rapid productivity growth, and conversely. Here’s a regression similar to the Okun one, but this time relating productivity growth to employment growth, and using only data through 2008.
While the significance lines can’t be taken literally given that these are overlapping periods, the figure makes clear that between 1947 and 2008, there were very few sustained periods in which both employment and productivity growth made large departures from trend in the same direction.
Put it another way: The past decade has seen exceptionally slow growth in employment — about 5 percent over the full period. If you looked at the US postwar data, you would predict with a fair degree of confidence that a period of such slow employment growth would see above-average productivity growth. But in fact, the past decade has also seen very low productivity growth. The relation between the two variables has been much closer to what we would predict by extrapolating their relationships over periods of a year. In that sense, the current slowdown resembles an extended recession more than it does previous periods of slower growth.
As I suggested in an earlier post, I think this is a bigger analytic problem than it might seem at first glance.
In the conventional story, productivity is supposed to be driven by technology, so a slowdown in productivity growth reflects a decline in innovation and so on. Employment is driven by demographics, so slower employment growth reflects aging and small families. Both of these developments are negative shifts in aggregate supply. So they should be inflationary — if the economy’s productive potential declines then the same growth in demand will instead lead to higher prices. To maintain stable prices in the face of these two negative supply shocks, a central bank would have to raise interest rates in order to reduce aggregate spending to the new, lower level of potential output. Is this what we have seen? No, of course not. We have seen declining inflation even as interest rates are at historically low levels. So even if you explain slower productivity growth by technology and explain slower employment growth by demographics, you still need to postulate some large additional negative shift in demand. This is DeLong and Summers’ “elementary signal identification point.”
Given that we are postulating a large, sustained fall in demand in any case, it would be more parsimonious if the demand shortfall also explained the slowdown in employment and productivity growth. I think there are good reasons to believe this is the case. Those will be the subject of the remaining posts in this series.
In the meantime, let’s pull together the historical evidence on output, employment and productivity growth in one last figure. Here, the horizontal axis is the ten-year percentage change in employment, while the vertical axis is the ten-year percentage change in productivity. Each point is labeled with the final year of its comparison. (In order to include the most recent data, we are comparing first quarters to first quarters.) The color of the text shows average inflation over the ten-year period, with yellow highest and blue lowest. The diagonal line corresponds to the average real growth rate of GDP over the full period.
What we’re looking at here is the percentage change in productivity, employment and prices over every ten-year period from 1947-1957 through 2006-2016. So for instance, growth between 1990 and 2000 is represented by the point labeled “2000.” During this decade, total employment rose by about 20 percent while productivity rose by a total of 15 percent, implying an annual real growth of 3.3 percent, very close to the long-run average.
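As a quick arithmetic check on that example (using the round numbers from the text, not the exact data), output growth over a decade is employment growth plus productivity growth, compounded:

```python
# Back-of-the-envelope check of the 1990-2000 point in the figure,
# using the approximate figures cited in the text
emp_growth = 0.20    # total employment growth over the decade
prod_growth = 0.15   # total productivity growth over the decade
output_ratio = (1 + emp_growth) * (1 + prod_growth)   # ten-year output ratio
annual = output_ratio ** (1 / 10) - 1                 # annualized growth rate
print(round(100 * annual, 1))
```

which comes out to about 3.3 percent a year, matching the figure in the text.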
One natural way to think about this is that yellow points below and to the right of the line suggest negative supply shocks: If the productive capacity of the economy declines for some reason, output growth will slow, and prices will rise as private actors — abetted by a slow-to-react central bank — attempt to increase spending at the usual rate. Similarly, blue points above the line suggest positive supply shocks. Yellow points above the line suggest positive demand shocks — an increase in spending can increase output growth above trend, at least for a while, but will pull up prices as well. And blue points below the line suggest negative demand shocks. This, again, is DeLong and Summers’ “elementary signal identification point.”
We immediately see what an outlier the recent period is. Both employment and productivity growth over the past ten years have been drastically slower than over the preceding decade — about 5 percent each, down from about 20 percent. 2000-2010 and 2001-2011 were the only ten-year periods in postwar US history when total employment actually declined. The abruptness of the deceleration on both dimensions is a challenge for views that slower growth is the result of deep structural forces. And the combination of the slowdown in output growth with falling prices — especially given ultra-low interest rates — strongly suggests that we’ve seen a negative shift in desired spending (demand) rather than in the economy’s productive capacities (supply).
Another way of looking at this is as three different regimes. In the middle is what we might call “the main sequence” — here there is steady growth in demand, met by varying mixes of employment and productivity growth. On the upper right is what gets called a “high-pressure economy,” in which low unemployment and strong demand draw more people into employment, facilitate the reallocation of labor and other resources toward more productive activity, and put upward pressure on prices. On the lower left is stagnation, where weak demand discourages participation in the labor force and reduces productivity growth by holding back investment and new business formation and by leaving a larger number of those with jobs underemployed, while persistent slack leads to downward pressure on prices (though so far not outright deflation). In other words, macroeconomically speaking the past decade has been a sort of anti-1960s.
 There are actually two versions of Okun’s law, one relating the change in the unemployment rate to GDP growth and one relating the level of unemployment to the deviation of GDP from potential. The two forms will be equivalent if potential grows at a constant rate.
 The assumption that variables can be partitioned into “fast” and “slow” ones, so that we can calculate equilibrium values of the former with the latter treated as exogenous, is a very widespread feature of economic modeling, heterodox as much as mainstream. I think it needs to be looked at more critically. One alternative is dynamic models where we focus on the system’s evolution over time rather than equilibrium conditions. This is, I suppose, the kind of “theory” implied by VAR-type forecasting models, but it’s rare to see it developed explicitly. There are people who talk about a system dynamics approach, which seems promising, but I don’t know much about them.
The Depression didn’t just see a fall in employment, it saw a fall in the output of those still employed, reversing much of the productivity gains of the 1920s. (This surprised Keynes, among others, who still believed in the declining marginal product of labor, which predicted the opposite.) Recovery in the late 1930s, conversely, didn’t just mean higher employment, it involved a sharp acceleration in labor productivity. There’s a widespread idea that output per worker necessarily reflects supply-side factors — technology, skills, etc. But if demand had such direct effects on labor productivity in the Great Depression, why not in the Lesser Depression too? But for some reason, people who scoff at the idea of the “Great Forgetting” of the 1930s have no trouble believing that the drastic slowdown in productivity growth of recent years has nothing to do with the economic crisis it immediately followed.
EDIT: I should add: While the decline in production during the Depression was, of course, primarily a matter of reduced employment, the decline in productivity was not trivial. If output per employee had continued to rise in the first half of the 1930s at the same rate as in the 1920s, the total fall in output would have been on the order of 25 percent rather than 33 percent.
Note also that the only other comparable (in fact larger) fall in GDP per worker came in the immediate postwar demobilization period 1945-1947. I’ve never understood the current convention that says we should ignore the depression and wartime experience when thinking about macroeconomic relationships. Previous generations thought just the opposite — that we can learn the most about how the system operates from these kinds of extreme events, that “the prime test of Keynesian theory must be the Great Depression.” Isn’t it logical, if you want to understand how shifts in aggregate demand affect economic outcomes, that you would look first at the biggest such shifts, where the effects should be clearest? The impact of these two big demand shifts on output per worker seems like good reason to expect such effects in general.
And it’s not hard to explain why. In real economies, there are great disparities in the value of the labor performed by similar people, and immense excess capacity in the form of low-productivity jobs accepted for lack of anything better. Increased demand mobilizes that capacity. When the munitions factories are running full tilt, no one works shining shoes.
DeLong rises to defend Ben Bernanke, against claims that unconventional monetary policy in recent years has discouraged businesses from investing. Business investment is doing just fine, he says:
As I see it, the Fed’s open-market operations have produced more spending — hence higher capacity utilization — and lower interest rates — hence more advantageous costs of finance — and we are supposed to believe that its policies “have hurt business investment”?!?! … As I have said before and say again, weakness in overall investment is 100% due to weakness in housing investment. Is there an argument here that QE has reduced housing investment? No. Is nonresidential fixed investment below where one would expect it to be given that the overall recovery has been disappointing and capacity utilization is not high?
As evidence, DeLong points to the fact that nonresidential investment as a share of GDP is back where it was at the last two business cycle peaks.
As it happens, I agree with DeLong that it’s hard to make a convincing case that unconventional monetary policy is holding back business investment. Arguments about the awfulness of low interest rates seem more political or ideological, based on the real or imagined interests of interest-recipients, than grounded in any identifiable economic analysis. But there’s a danger of overselling the opposite case.
It is certainly true that, as a share of potential GDP, nonresidential investment is not low by historical standards. But is this the right measure to be looking at? I think not, for a couple of reasons, one relatively minor and one major. The minor reason is that the recent redefinition of investment by the BEA to include various IP spending makes historical comparisons problematic. If we define investment as the BEA did until 2013, and as businesses still do under GAAP accounting standards, the investment share of GDP remains quite low compared to previous expansions. The major reason is that it’s misleading to evaluate investment relative to (actual or potential) GDP, since weak investment will itself lead to slower GDP growth.
On the first point: In 2013, the BEA redefined investment to include a variety of IP-related spending, including the commercial development of movies, books, music, etc. as well as research and development. We can debate whether, conceptually, Sony making Steve Jobs is the same kind of thing as Steve Jobs and his crew making the iPhone. But it’s important to realize that the apparent strength of investment spending in recent expansions is more about the former kind of activity than the latter. More relevant for present purposes, since this kind of spending was not counted as investment — or even broken out separately, in many cases — prior to 2013, the historical figures are imputations made after the fact, not contemporaneous measurements. We should be skeptical of comparing today’s investment-cum-IP-and-R&D to the levels of 10 or 20 years ago, since 10 or 20 years ago it wasn’t even being measured. This means that historical comparisons are considerably more treacherous than usual. And if you count just traditional (GAAP) investment, or even traditional investment plus R&D, then investment has not, in fact, returned to its 2007 share of GDP, and remains well below long-run average levels.
More importantly, using potential GDP as the yardstick is misleading because potential GDP is calculated simply as a trend of actual GDP, with a heavier weight on more recent observations. By construction, it is impossible for actual GDP to remain below potential for an extended period. So the fact that the current recovery is weak by historical standards automatically pulls down potential GDP, and makes the relative performance of investment look good.
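A toy example may make this concrete. Suppose “potential” is just a one-sided exponentially weighted trend of actual (log) GDP (a deliberate simplification, not the CBO’s actual procedure), and suppose trend growth permanently halves at some date. The shortfall relative to the old trend line grows without bound, but the measured gap relative to the re-estimated potential actually shrinks, because potential gets dragged down along with actual output:

```python
import numpy as np

years = np.arange(40)
growth = np.where(years < 20, 3.0, 1.5)   # trend growth halves at year 20
log_gdp = np.concatenate([[0.0], np.cumsum(growth[1:])])

# "potential" re-estimated each year as a one-sided exponentially
# weighted trend of actual GDP, heavier weight on recent observations
potential = np.empty_like(log_gdp)
potential[0] = log_gdp[0]
for t in range(1, 40):
    potential[t] = 0.8 * potential[t - 1] + 0.2 * log_gdp[t]

# gap against the adaptive trend shrinks after the slowdown...
gap_adaptive = log_gdp - potential
# ...while the shortfall against the old, pre-slowdown trend keeps growing
gap_old_trend = log_gdp - (log_gdp[19] + 3.0 * (years - 19))
print(round(gap_adaptive[39], 1), round(gap_old_trend[39], 1))
```

By construction, the adaptive measure can never register a large, persistent shortfall, no matter how weak the recovery.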
We usually think that investment spending is the single most important factor in business-cycle fluctuations. If weak investment growth results in a lower overall level of economic activity, investment as a share of GDP will look higher. Conversely, an investment boom that leads to rapid growth of the economy may not show up as an especially high investment share of GDP. So to get a clear sense of the performance of business investment, it’s better to look at the real growth of investment spending over a full business cycle, measured in inflation-adjusted dollars, not in percent of GDP. And when we do this, we see that the investment performance of the most recent cycle is the weakest on record — even using the BEA’s newer, more generous definition of investment.
The figure above shows the cumulative change in real investment spending since the previous business-cycle peak, using the current (broad) BEA definition. The next figure shows the same thing, but for the older, narrower GAAP definition. Data for both figures is taken from the aggregates published by the BEA, so it includes closely held corporations as well as publicly-traded ones. As the figures show, the most recent cycle is a clear outlier, both for the depth and duration of the fall in investment during the downturn itself, and even more for the slowness of the subsequent recovery.
Even using the BEA’s more generous definition, it took over 5 years for inflation-adjusted investment spending to recover its previous peak. (By the narrower GAAP definition, it took six years.) Five years after the average postwar business cycle peak, BEA investment spending had already risen 20 percent in real terms. As of the second quarter of 2015 — seven-and-a-half years after the most recent peak, and six years into the recovery — broad investment spending was up only 10 percent from its previous peak. (GAAP investment spending was up just 8.5 percent.) In the four previous postwar recoveries that lasted this long, real investment spending was up 63, 24, 56, and 21 percent respectively. So the current cycle has had less than half the investment growth of the weakest previous cycle. And it’s worth noting that the next two weakest investment performances of the ten postwar cycles came in the 1980s and the 2000s. In recent years, only the tech-boom period of the 1990s has matched the consistent investment growth of the 1950s, 1960s and 1970s.
So I don’t think it’s time to hang the “Mission Accomplished” banner up on Maiden Lane quite yet.
As DeLong says, it’s not surprising that business investment is weak given how far output is below trend. But the whole point of monetary policy is to stabilize output. For monetary policy to work, it needs to be able to reliably offset lower than normal spending in other areas with stronger than normal investment spending. If after six years of extraordinarily stimulative monetary policy (and extraordinarily high corporate profits), business investment is just “where one would expect given that the overall recovery has been disappointing,” that’s a sign of failure, not success.
 Another minor issue, which I can’t discuss now, is DeLong’s choice to compare “real” (inflation-adjusted) spending to “real” GDP, rather than the more usual ratio of nominal values. Since the price index for investment goods consistently rises more slowly than the index for GDP as a whole, this makes current investment spending look higher relative to past investment spending.
 This IP spending is not generally counted as investment in the GAAP accounting rules followed by private businesses. As I’ve mentioned before, it’s problematic that national accounts diverge from private accounts this way. It seems to be part of a troubling trend of national accounts being colonized by economic theory.
 R&D spending is at least reported in financial statements, though I’m not sure how consistently. But with the other new types of IP investment — which account for the majority of it — the BEA has invented a category that doesn’t exist in business accounts at all. So the historical numbers must involve a more than usual degree of guesswork.
So the Fed decided not to raise rates this week. And as you’ve probably seen, this provoked an angry response from representatives of financial institutions. The owners and managers of money have been demanding higher interest rates for years now, and were clearly hoping that this week they’d finally start getting them.
I’ve tried to understand demands that rates go up despite the absence of inflation pressure in terms of broad class interests. And the trouble is that it’s not at all clear where these interests lie. The wealthy get a lot of interest income, which means that they are hurt by low rates; but they also own a lot of assets, whose prices go up when monetary policy is easy. You can try to figure out the net effect, but what matters for the politics is perception, and that’s surely murky.
But Krugman has a theory:
What we should be doing … is focusing not on broad classes but on very specific business interests. … Commercial bankers really dislike a very low interest rate environment, because it’s hard for them to make profits: there’s a lower bound on the interest rates they can offer, and if lending rates are low that compresses their spread. So bankers keep demanding higher rates, and inventing stories about why that would make sense despite low inflation.
I certainly agree with Krugman that in thinking about the politics of monetary policy, we should pay attention to the narrow sectoral interests of the banks as well as the broader interests of the owning class. But I’m not sure this particular story makes sense. What he’s suggesting is that the interest rate on bank lending is more strongly affected by monetary policy than is the interest rate on bank liabilities, so that bank spreads are systematically wider at high rates than at low ones.
This story might have made sense in the 1950s and 1960s, when bank liabilities consisted mostly of transactions deposits that paid no interest. But today, non-interest bearing deposits compose less than a quarter of commercial bank liabilities. Meanwhile, bank liabilities are much shorter-term than their assets (that’s sort of what it means to be a bank) so the interest rates on their remaining liabilities tend to move more closely with the policy rate than the interest rates on their assets. So it’s not at all obvious that bank spreads should be narrower when rates are low; if anything, we might expect them to be wider.
Luckily, this is a question we can address with data. Historically, have higher interest rates been associated with a wider spread for commercial banks, or a narrower one? Or have interest rate changes left bank spreads unchanged? To answer this, I looked at total interest income and total interest payments for commercial banks, both normalized by total assets. These are reported in a convenient form, along with lots of other data on commercial banks, in the FDIC’s Historical Statistics on Banking.
The first figure here shows annual interest income and interest costs for commercial banks on the vertical axis, and the Federal funds rate on the horizontal axis. It’s annual data, 1955 through 2014. The gap between the blue and red points is a measure of the profitability of bank loans that year. The blue and red lines are OLS regression lines.
If Krugman’s theory were correct, the gap between the blue and red lines should be wider on the right, when interest rates are high, and narrower on the left, when they’re low. But in fact, the lines are almost exactly parallel. The gap between banks’ interest earnings and their funding costs is always close to 3 percent of assets, whether the overall level of rates is high or low. The theory that bank lending is systematically less profitable in a low-interest environment does not seem consistent with the historical evidence. So it’s not obvious why commercial banks should care about the overall level of interest rates one way or the other.
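For anyone who wants to replicate this, the FDIC’s Historical Statistics on Banking reports the needed series directly. Here is a sketch of the levels comparison, with made-up placeholder numbers standing in for the actual annual data:

```python
# Sketch of the levels regression behind the figure: interest income
# and interest expense (both as percent of assets) regressed on the
# Fed funds rate. These are illustrative placeholders, not FDIC data.
import numpy as np

fedfunds = np.array([1.0, 2.5, 4.0, 5.5, 7.0, 9.0, 12.0])
income   = np.array([3.8, 4.9, 6.0, 7.1, 8.2, 9.7, 11.9])  # interest received / assets
expense  = np.array([0.9, 2.0, 3.1, 4.1, 5.3, 6.8, 9.0])   # interest paid / assets

slope_income, _ = np.polyfit(fedfunds, income, 1)
slope_expense, _ = np.polyfit(fedfunds, expense, 1)

# Krugman's story needs the income slope to exceed the expense slope
# (spreads widening as rates rise); near-equal slopes mean the two
# regression lines are parallel and the spread is flat in rates.
print(round(slope_income, 2), round(slope_expense, 2))
```

In the actual data, as the figure shows, the two slopes come out nearly identical.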
Here’s another way of looking at the same thing. Now we have interest received by commercial banks on the vertical axis, and interest paid on the horizontal axis. Again, both are scaled by total bank assets. To keep it legible, I’ve limited it to the years 1985-2014; anyway the earlier years are probably less relevant for today’s banking system. The diagonal line shows the average spread between the lending rate and the funding rate for this period. So points above the line are years when bank loans are unusually profitable, and points below are years when loans are less profitable than usual.
Here again, we see that there is no systematic relationship between the level of interest rates and the profitability of bank loans. Over the whole range of interest rates, spreads are clustered close to the diagonal. What we do see, though, is that the recent period of low interest rates has seen a steady narrowing of bank spreads. Since 2010, the average interest rate received by commercial banks has fallen by one full percentage point, while their average funding cost has fallen by a bit under half a point.
On the face of it, this might seem to support Krugman’s theory. But I don’t think it’s actually telling us anything about the effects of low interest rates as such. Rather, it reflects the fact that bank borrowing is much shorter term than bank lending. So a sustained fall in interest rates will always first widen bank spreads, and then narrow them again as lending rates catch up with funding costs. And in fact, the recent decline in bank spreads has simply brought them back to where they were in 2007. (Or in 1967, for that matter.) No doubt there are still a few long-term loans from the high-rate period that have not been refinanced and are still sitting profitably on banks’ books; but after seven years of ZIRP there can’t be very many. There’s no reason to think that continued low rates will continue to narrow bank spreads, or that higher rates will improve them. On the contrary, an increase in rates would almost certainly reduce lending profits initially, since banks’ funding rates will rise more quickly than their lending rates.
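The repricing dynamics described here are easy to see in a toy model of the maturity mismatch (the parameters are illustrative, not estimates): funding reprices at the policy rate immediately, while only a fraction of the loan book rolls over to the new rate each quarter. A one-time rate cut first widens the spread, which then narrows back toward its steady-state level as old loans refinance:

```python
# Toy repricing model of a bank with short-term funding and a
# long-term loan book. Parameters are illustrative assumptions.
n = 40                                   # quarters
policy = [5.0] * 8 + [1.0] * (n - 8)     # one-time rate cut in quarter 8
margin = 3.0                             # markup of new loans over the policy rate

loan_rate = policy[0] + margin           # average rate on the existing book
spreads = []
for t in range(n):
    funding = policy[t]                  # short-term funding reprices at once
    # 10 percent of the loan book rolls over to the new rate each quarter
    loan_rate += 0.1 * (policy[t] + margin - loan_rate)
    spreads.append(loan_rate - funding)

# spread before the cut, at its post-cut peak, and after convergence
print(round(spreads[7], 1), round(max(spreads), 1), round(spreads[-1], 1))
```

Run the same experiment with a rate hike and the sign flips: the spread is initially squeezed, then recovers, which is the pattern in the tightening episodes discussed below.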
Now, on both substantive and statistical grounds, we might prefer to look at changes rather than levels. So the next two figures are the same as the previous ones, but using the year over year change rather than absolute level of interest rates. In the first graph, years with the blue above the red are years of widening spreads, while red above blue indicates narrowing spreads. In the second graph, the diagonal line indicates an equal change in bank lending and funding rates; points above the line are years of widening spreads, and points below the line are years of narrowing spreads. Again, I’ve limited it to 1985-2014.
Both figures show that rising rates are associated with narrower commercial bank spreads — that is, less profitable loans, not more profitable. (Note the steeper slope of the red line than the blue one in Figure 3.) Again, this is not surprising — since banks borrow short and lend long, their average funding costs change more quickly than their average lending rates do. The most recent three tightening episodes were all associated with narrower spreads, not wider ones. Over 2004-2006, banks’ funding costs rose by 1.5 points while the average rate on their loans rose by only 1.3 points. In 1999-2000, funding costs rose by 0.55 points while loan rates rose by 0.45 points. And in 1994-1996, bank funding costs rose by 0.6 points while loan rates rose by 0.4 points. Conversely, during the period of falling rates in 2007-2008, bank funding costs fell by 1.7 points while average loan rates fell by only 1.4 points. Admittedly, these are all rather small changes — what is most striking about banking spreads is their stability. But the important thing is that past tightening episodes have consistently reduced the lending profits of commercial banks. Not increased them.
Thinking about the political economy of support for higher rates, as Krugman is doing, is asking the right question. And the idea that the narrow interests of commercial banks could be important here is reasonable on its face. But the idea that higher rates are associated with higher lending spreads just doesn’t seem to be supported by the data. Unfortunately, I don’t have a simple alternative story. As the late Bob Fitch used to say, 90 percent of what happens in the world can be explained by vulgar Marxism. But banks’ support for hard money may fall in the other 10 percent.
UPDATE: For what it’s worth, here are the results of regressions of average interest received by commercial banks, and of their average funding costs, on the Federal Funds rate. Both interest flows are normalized by total assets.
Full Period (1955-2014)
Again, we don’t see any support for the hypothesis that spreads systematically rise with interest rates. Depending on the period and on whether you look at levels or changes, you can see a slightly stronger relationship of the Federal Funds rate with either bank lending rates or funding costs; but none of these differences would pass a standard significance test.
Two positive conclusions come out of this. First, all the coefficients are substantially, and significantly, below 1. In other words, the policy rate is passed through far from completely to market rates, even in the interbank market, which should be most closely linked to it. Second, looking at the bottom half of the table, we see that changes in the policy rate have a stronger effect on both the funding and lending rates (at least over a horizon of a year) today than they did in the postwar decades. This is not surprising, given that non-interest-bearing deposits provided most bank funding in the earlier period, and that monetary policy then worked more through limits on the quantity of credit than through interest rates per se. But it’s interesting to see it so clearly in the data.
UPDATE 2: Krugman seems to be doubling down on the bank spreads theory. I hope he looks a bit at the historical data before committing too hard to this story.
VERY LATE UPDATE: In the table above, the first set of rows is levels; the second is year-over-year changes.
 This measure is not quite the same as the spread — for that, we would want to divide bank interest costs by their liabilities, or their interest-bearing liabilities, rather than their assets. But this measure, rather than the spread in the strict sense, is what’s relevant for the question we’re interested in: the effect of rate changes on bank lending profits. Insofar as bank loans are funded with equity, lending will become more profitable as rates rise, even if the spread is unchanged. For this reason, I refer to banks’ average funding costs, rather than average borrowing costs.
In a previous post, I pointed out that if capital means real investment, then the place where capital is going these days is fossil fuels, not the industries we usually think of as high tech. I want to build on that now by looking at some other financial flows across these same sectors.
As I discussed in the previous post, any analysis of investment and profits has to deal with the problem of R&D, and IP-related spending in general. If we want to be consistent with the national accounts and, arguably, economic theory, we should add R&D to investment, and therefore also to cashflow from operations. (It’s obvious why you have to do this, right? If R&D is no longer subtracted as a current expense, measured cashflow rises by exactly the same amount as measured investment.) But if we want to be consistent with the accounting principles followed by individual businesses, we must treat R&D as a current expense. For many purposes, it doesn’t end up making a big difference, but sometimes it does.
Below, I show the four major sources and uses of funds for three subsets of corporations. The flows are: cashflow from operations — that is, profits plus depreciation, plus R&D if that is counted in investment; profits; investment, possibly including R&D; and net borrowing. The universes are publicly traded corporations: first, all of them; second, the high tech sector, defined as in the previous post; and third, fossil fuels, also as defined previously. Here I am using the broad measure of investment, including R&D, and the corresponding measure of cashflow from operations. At the end of the post, I show the same figures using the narrow measure of investment, and with profits as well as cashflow.
For the corporate sector as a whole, we have the familiar story. Over the past twenty-five years annual shareholder payouts (dividends plus share repurchases) have approximately doubled, rising from around 3 percent of sales in the 1950s, 60s and 70s to around 6 percent today. Payouts have also become more variable, with periods of high and low payouts corresponding with high and low borrowing. (This correlation between payouts and borrowing is also clearly visible across firms since the 1990s, but not previously, as discussed here.) There’s also a strong upward trend in cashflow from operations, especially in the last two expansions, rising from about 10 percent of sales in the 1970s to 15 percent today. Investment spending, however, shows no trend; since 1960, it’s stayed around 10 percent of sales. The result is an unprecedented gap between corporate earnings and investment.
Here’s one way of looking at this. Recall that, if these were the only cashflows into and out of the corporate sector, then cash from operations plus net borrowing (the two sources) would have to equal investment plus payouts (the two uses). In the real world, of course, there are other important flows, including mergers and acquisitions, net acquisition of financial assets, and foreign investment flows. But there’s still a sense in which the upper gap in the figure is the mirror image or complement of the lower gap. The excess of cash from operations over investment shows that corporate sector’s real activities are a net source of cash, while the excess of payouts over borrowing suggests that its financial activities are a net use of cash.
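To make the identity explicit, with illustrative round numbers echoing the magnitudes in the figure (not actual data):

```python
# Sources-and-uses identity for the corporate sector, ignoring the
# other flows (M&A, financial assets, foreign investment) noted above.
# All figures are hypothetical, expressed as percent of sales.
cash_from_ops = 15.0   # profits + depreciation (+ R&D under the broad measure)
net_borrowing = 3.0
investment = 10.0

# if these were the only flows, payouts would be the residual use:
# cash_from_ops + net_borrowing = investment + payouts
payouts = cash_from_ops + net_borrowing - investment
print(payouts)
```

With cashflow well above investment, real activities generate a surplus; with payouts above borrowing, that surplus flows out through the financial system.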
Focusing on the relationship between cashflow and investment suggests a story with three periods rather than two. Between roughly 1950 and 1970, the corporate sector generated significantly more cash than it required for expansion, leaving a surplus to be paid out through the financial system in one form or another. (While payouts were low compared with today, borrowing was also quite low, leaving a substantial net flow to owners of financial assets.) Between 1970 and 1985 or so, the combination of higher investment and weaker cashflow meant that, in the aggregate, all the funds generated within the corporate sector were being used there, with no net surplus available for financial claimants. This is the situation that provoked the “revolt of the rentiers.” Finally, from the 1990s and especially after 2000, we see the successful outcome of the revolt.
This is obviously a simplified and speculative story. It’s important to look at what’s going on across firms and not just at aggregates. It’s also important to look at various flows I’ve ignored here; cashflow ideally should be gross, rather than net, of interest and taxes, and those two flows along with net foreign investment, net acquisition of financial assets, and cash M&A spending, should be explicitly included. But this is a start.
Now, let’s see how things look in the tech sector. Compared with publicly-traded corporations as a whole, these are high-profit and high-investment industries. (At least when R&D is included in investment — without it, things look different.) It’s not surprising that high levels of these two flows would go together — firms with higher fixed costs will only be viable if they generate larger cashflows to cover them.
But what stands out in this picture is how the trends in the corporate sector as a whole are even more visible in the tech industries. The gap between cashflow and investment is always positive here, and it grows dramatically larger after 1990. In 2014, cashflow from operations averaged 30 percent of sales in these industries, and reported profits averaged 12 percent of sales — more than double the figures for publicly traded corporations as a whole. So to an even greater extent than corporations in general, the tech industries have increasingly been net sources of funds to the financial system, not net users of funds from it. Payouts in the tech industries have also increased even faster than for publicly traded corporations in general. Before 1985, shareholder payouts in the tech industries averaged 3.5 percent of sales, very close to the average for all corporations. But over the past decade, tech payouts have averaged a full 10 percent of annual sales, compared with just a bit over 5 percent for publicly-traded corporations as a whole.
In 2014, there were 15 corporations listed on US stock markets with total shareholder payouts of $10 billion or more, as shown in the table below. Ten of the 15 were tech companies, by the definition used here. Computer hardware and software are often held out as industries in which US capitalism, with its garish inequality and fierce protections of property rights, is especially successful at fostering innovation. So it’s striking that the leading firms in these industries are not recipients of funds from financial markets, but instead pay the biggest tributes to the lords of finance.
[Table: corporations with total shareholder payouts of $10 billion or more in 2014. Values in millions of dollars; tech firms in bold. Firms shown include Exxon Mobil Corp, Royal Dutch Shell PLC, Johnson & Johnson, Cisco Systems Inc, Merck & Co, and General Electric Co.]
It’s hard to argue that Apple and Merck represent mature industries without significant growth prospects. And note that, apart from GE (which is not listed in the high-tech sector as defined here, but perhaps should be), all the other non-tech members of the $10 billion club are in the oil and pharmaceutical industries. It’s hard to shake the feeling that what distinguishes high-payout corporations is not the absence of investment opportunities, but rather the presence of large monopoly rents.
Finally, let’s quickly look at the fossil-fuel industries. Up through the 1980s, the picture here is not too different from publicly-traded corporations in general, though with more variability — the collapse in fossil-fuel earnings and dividends in the 1970s is especially striking. But it’s interesting that, despite very high payouts by several big oil companies, there has been no increase in payouts for the sector in general. And in the most recent oil and gas boom, new investment has been running ahead of internal cashflow, making the sector a net recipient of funds from financial markets. (This trend seems to have intensified recently, as falling profits in the sector have not (yet) been accompanied by falling investment.) So the capital-reallocation story has some prima facie plausibility as applied to the oil and gas boom.
In the next, and final, post in this series, I’ll try to explain why I don’t think it makes sense to think of shareholder payouts as a form of capital reallocation. My argument has two parts. First, I think these claims often rest on an implicit loanable-funds framework that is logically flawed. There is not a fixed stock of savings available for investment; rather, changes in investment result in changes in income that necessarily produce the required (dis)saving. So if payouts in one company boost investment in another, it cannot be by releasing real resources, but only by relieving liquidity constraints. And that’s the second part of my argument: While it is possible for higher payouts to result in greater liquidity, it is hard to see any plausible liquidity channel by which more than a small fraction of today’s payouts could be translated into higher investment elsewhere.
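The point that saving adjusts to investment through income, rather than a fixed pool of saving constraining investment, can be illustrated with a textbook income-determination sketch. This is a minimal example assuming a simple closed economy with consumption C = c·Y and exogenous investment I — my own illustration of the standard argument, not a model from the post:

```python
# Keynesian cross: income adjusts until planned spending equals output,
# i.e. Y = c*Y + I, so Y = I / (1 - c). Saving is whatever income is
# not consumed, and it ends up equal to investment by construction.

def equilibrium_income(investment, mpc):
    """Income level at which planned spending equals output."""
    return investment / (1 - mpc)

mpc = 0.8  # marginal propensity to consume (assumed)
for I in (100.0, 120.0):
    Y = equilibrium_income(I, mpc)
    S = Y - mpc * Y          # saving = income not consumed
    print(I, Y, S)           # saving equals investment in each case
```

Raising investment from 100 to 120 raises income from 500 to 600, and saving rises from 100 to 120 to match: there is no prior stock of savings that the extra investment had to draw down, which is the logical flaw in the loanable-funds framing.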
Finally, here are the same graphs as above, but with investment counted as it is in businesses’ own financial statements, with R&D spending treated as a current cost. The most notable difference is the strong downward trend in tech-sector investment when R&D is excluded.