Employment, Productivity and the Business Cycle

Fourth post in a series. Posts one, two and three.

Empirically-oriented macroeconomists have recognized since the early 20th century that output, employment and productivity move together over the business cycle. The fact that productivity falls during recessions means that employment varies less over the cycle than output does. This behavior is quite stable over time, giving rise to Okun’s law. In the US, Okun’s law says that the unemployment rate will rise by one point for each 2.5-point shortfall of GDP growth relative to trend — a ratio that doesn’t seem to have changed much since Arthur Okun first described it in the mid-1960s. [1]

It’s not obvious that productivity should show this procyclical behavior. As I noted in the previous post, a naive prediction from a production function framework would be that a negative demand shock should reduce employment more than output, since businesses can lay off workers immediately but can’t reduce their capital stock right away. In other words, productivity should rise in recessions, since the labor of each still-employed worker is being combined with more capital.

There are various explanations for why labor productivity behaves procyclically instead. The most common ones focus on the transition costs of changing employment. Since hiring and firing is costly for businesses, they don’t adjust their labor force to every change in demand. So when sales fall in recessions, they will keep extra workers on payroll — paying them now is cheaper than hiring them back later. Similarly, when sales rise businesses will initially try to get more work out of their existing employees. This shows up as rising labor productivity, and as the repeated phenomenon of “jobless recoveries.”

Understood in these terms, the positive relationship between output, employment and productivity should be a strictly short-term phenomenon. If a change in demand (or in other constraints on output) is sustained, we’d expect labor to fully adjust to the new level of production sooner or later. So over horizons of more than a year or two, we’d expect output and employment to change in proportion. If there are other limits on production (such as non-produced inputs like land) we’d expect output and labor productivity to move inversely, with faster productivity growth associated with slower employment growth or vice versa. (This is the logic of “robots taking the jobs.”) A short-term positive, medium-term negative, long-term flat or negative relationship between employment growth and productivity growth is one of the main predictions that comes out of a production function. But it doesn’t require one. You can get there lots of other ways too.

And in fact, it is what we see.

[Figure: Correlation of employment growth and productivity growth, by horizon in quarters]

The figure shows the simple correlation of employment growth and productivity growth over various periods, from one quarter out to 50 quarters. (This is based on postwar US data.) As you can see, over periods of a year or less, the correlation is (weakly) positive. Six-month periods in which employment growth was unusually weak are somewhat more likely to have seen weak productivity growth as well. This is the cyclical effect presumably due to transition costs — employers don’t always hire or fire in response to short-run changes in demand, allowing productivity to vary instead. But if sales remain high or low for an extended period, employers will eventually bring their labor force into line, eliminating this relationship. And over longer periods, autonomous variation in productivity and labor supply is more important. Both of these tend to produce a negative relationship between employment and productivity. And that’s exactly what we see — a ten-year period in which productivity grew unusually quickly is likely to be one in which employment grew slowly. (Admittedly postwar US data doesn’t give you that many ten-year periods to look at.)
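For concreteness, here is a minimal sketch of how a figure like this can be built, assuming quarterly employment and output-per-hour series are already loaded as pandas Series. The variable names are placeholders, not necessarily the series behind the figure above.

```python
# Sketch: correlation of h-quarter employment growth with h-quarter productivity growth.
# `emp` and `prod` are assumed to be quarterly pandas Series indexed by date
# (e.g. total employment and output per hour); the names are illustrative only.
import pandas as pd

def growth_correlations(emp: pd.Series, prod: pd.Series, max_horizon: int = 50) -> pd.Series:
    """Correlation of employment growth and productivity growth over horizons
    of 1 to max_horizon quarters. Windows overlap, as in the figure."""
    corrs = {}
    for h in range(1, max_horizon + 1):
        emp_g = emp.pct_change(periods=h)
        prod_g = prod.pct_change(periods=h)
        corrs[h] = emp_g.corr(prod_g)
    return pd.Series(corrs, name="corr(employment growth, productivity growth)")
```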

Another way of doing this is to plot an “Okun coefficient” for each horizon. Here we are looking at the relationship between changes in employment and output. Okun’s law is usually expressed in terms of the relationship between unemployment and output, but here we will look at it in terms of employment instead. We write

(1)    %ΔE = a (g – c)

where %ΔE is the percentage change in employment, g is the percentage growth in GDP, c is a constant (the long-run average rate of productivity growth) and a is the Okun coefficient. The value of a says how much additional growth in employment we’d expect from a one percentage-point increase in GDP growth over the given period. When the equation is estimated in terms of unemployment and the period is one year, a is generally on the order of 0.4, meaning that to reduce unemployment by one point over a year normally requires GDP growth around 2.5 points above trend. We’d expect the coefficient for employment to be greater, but over short periods at least it should still be less than one.
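As a rough illustration of how a coefficient like this can be estimated at a given horizon, here is a minimal sketch using a plain least-squares fit. The series names are hypothetical, and this is not necessarily the specification behind the figures below.

```python
# Sketch: estimate equation (1), %ΔE = a*(g - c), over a horizon of h quarters.
# emp and gdp are assumed to be quarterly pandas Series of levels; names are illustrative.
import numpy as np
import pandas as pd

def okun_coefficient(emp: pd.Series, gdp: pd.Series, h: int):
    """Regress h-quarter employment growth on h-quarter GDP growth; returns (a, c)."""
    dE = emp.pct_change(periods=h) * 100   # %ΔE, in percent
    g = gdp.pct_change(periods=h) * 100    # GDP growth, in percent
    data = pd.concat([dE, g], axis=1, keys=["dE", "g"]).dropna()
    # %ΔE = a*g - a*c, so fit a line dE = slope*g + intercept
    slope, intercept = np.polyfit(data["g"], data["dE"], 1)
    a, c = slope, -intercept / slope
    return a, c

# e.g. okun_coefficient(emp, gdp, h=8) for the two-year horizon
```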

Here is what we see if we estimate the equation for changes in output and employment over various periods, again ranging from one quarter up to ten years. (Again, postwar US data. The circles are the point estimates of the coefficients; the dotted lines are two standard errors above and below, corresponding to a standard 95% confidence interval.)

[Figure: Estimated Okun coefficients for employment at each horizon, with two-standard-error bands]

What does this show? If we estimate Equation (1) looking at changes over one quarter, we find that one percentage point of additional GDP growth is associated with just half a point of additional employment growth. But if we estimate the same equation looking at changes over two years, we find that one point of additional GDP growth is associated with 0.75 points of additional employment growth.

The fact that the coefficient is smallest for the shortest periods is, again, consistent with the conventional understanding of Okun’s law. Because hiring and firing is costly, employers don’t fully adjust staffing unless a change in sales is sustained for a while. If you were thinking in terms of a production function, the peak around 2 years represents a “medium-term” position where labor has adjusted to a change in demand but the capital stock has not.

While it’s not really relevant for current purposes, it’s interesting that at every horizon the coefficient is significantly below one. What this tells us is that there is no actual time interval corresponding to the “long run” of the model: a period long enough for labor and the capital stock to be fully adjusted but short enough that technology is fixed. Over this hypothetical long run, the coefficient would be one, since with technology fixed and all inputs adjusted, output and employment would change in proportion. One way to think about the fact that the estimated coefficients are always smaller is that any period long enough for labor to adjust is already long enough to see noticeable autonomous changes in productivity. [2]

But what we’re interested in right now is not this normal pattern. We’re interested in how dramatically the post-2008 period has departed from it. The past eight years have seen close to the slowest employment growth of the postwar period and close to the slowest productivity growth. It is normal for employment and productivity to move together for a couple quarters or a year, but very unusual for this joint movement to be sustained over nearly a decade. In the postwar US, at least, periods of slow employment growth are much more often periods of rapid productivity growth, and conversely. Here’s a regression similar to the Okun one, but this time relating productivity growth to employment growth, and using only data through 2008.

[Figure: Productivity growth regressed on employment growth at each horizon, 1947-2008]

While the significance lines can’t be taken literally given that these are overlapping periods, the figure makes clear that between 1947 and 2008, there were very few sustained periods in which both employment and productivity growth made large departures from trend in the same direction.

To put it another way: The past decade has seen exceptionally slow growth in employment — about 5 percent over the full period. If you looked at the US postwar data, you would predict with a fair degree of confidence that a period of such slow employment growth would see above-average productivity growth. But in fact, the past decade has also seen very low productivity growth. The relationship between the two variables has been much closer to what we would predict by extrapolating their relationship over periods of a year or less. In that sense, the current slowdown resembles an extended recession more than it does previous periods of slower growth.

As I suggested in an earlier post, I think this is a bigger analytic problem than it might seem at first glance.

In the conventional story, productivity is supposed to be driven by technology, so a slowdown in productivity growth reflects a decline in innovation and so on. Employment is driven by demographics, so slower employment growth reflects aging and small families. Both of these developments are negative shifts in aggregate supply. So they should be inflationary — if the economy’s productive potential declines then the same growth in demand will instead lead to higher prices. To maintain stable prices in the face of these two negative supply shocks, a central bank would have to raise interest rates in order to reduce aggregate spending to the new, lower level of potential output. Is this what we have seen? No, of course not. We have seen declining inflation even as interest rates are at historically low levels. So even if you explain slower productivity growth by technology and explain slower employment growth by demographics, you still need to postulate some large additional negative shift in demand. This is DeLong and Summers’ “elementary signal identification point.”

Given that we are postulating a large, sustained fall in demand in any case, it would be more parsimonious if the demand shortfall also explained the slowdown in employment and productivity growth. I think there are good reasons to believe this is the case. Those will be the subject of the remaining posts in this series.

In the meantime, let’s pull together the historical evidence on output, employment and productivity growth in one last figure. Here, the horizontal axis is the ten-year percentage change in employment, while the vertical axis is the ten-year percentage change in productivity. The years shown are the final year of each comparison. (In order to include the most recent data, we are comparing first quarters to first quarters.) The color of the text shows average inflation over the ten-year period, with yellow highest and blue lowest. The diagonal line corresponds to the average real growth rate of GDP over the full period.

[Figure: Ten-year employment growth vs. ten-year productivity growth, with points colored by average inflation]

What we’re looking at here is the percentage change in productivity, employment and prices over every ten-year period from 1947-1957 through 2006-2016. So for instance, growth between 1990 and 2000 is represented by the point labeled “2000.” During this decade, total employment rose by about 20 percent while productivity rose by a total of 15 percent, implying annual real GDP growth of about 3.3 percent, very close to the long-run average.
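As a back-of-the-envelope check on that reading (using the approximate 20 and 15 percent figures just quoted, not exact data):

```python
# Employment up ~20% and productivity up ~15% over ten years implies roughly:
emp_growth, prod_growth, years = 0.20, 0.15, 10
annual_gdp_growth = ((1 + emp_growth) * (1 + prod_growth)) ** (1 / years) - 1
print(f"{annual_gdp_growth:.1%}")  # ~3.3% per year
```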

One natural way to think about this is that yellow points below and to the right of the line suggest negative supply shocks: If the productive capacity of the economy declines for some reason, output growth will slow, and prices will rise as private actors — abetted by a slow-to-react central bank — attempt to increase spending at the usual rate. Similarly, blue points above the line suggest positive supply shocks. Yellow points above the line suggest positive demand shocks — an increase in spending can increase output growth above trend, at least for a while, but will pull up prices as well. And blue points below the line suggest negative demand shocks. This, again, is DeLong and Summers’ “elementary signal identification point.”

We immediately see what an outlier the recent period is. Both employment and productivity growth over the past ten years have been drastically slower than over the preceding decade — about 5 percent each, down from about 20 percent. 2000-2010 and 2001-2011 were the only ten-year periods in postwar US history when total employment actually declined. The abruptness of the deceleration on both dimensions is a challenge for views that slower growth is the result of deep structural forces. And the combination of the slowdown in output growth with declining inflation — especially given ultra-low interest rates — strongly suggests that we’ve seen a negative shift in desired spending (demand) rather than in the economy’s productive capacities (supply).

Another way of looking at this is as three different regimes. In the middle is what we might call “the main sequence” — here there is steady growth in demand, met by varying mixes of employment and productivity growth. On the upper right is what gets called a “high-pressure economy,” in which low unemployment and strong demand draw more people into employment and facilitate the reallocation of labor and other resources toward more productive activity, but put upward pressure on prices. On the lower left is stagnation, where weak demand discourages participation in the labor force, reduces productivity growth by holding back investment and new business formation and by leaving a larger number of those with jobs underemployed, and where persistent slack leads to downward pressure on prices (though so far not outright deflation). In other words, macroeconomically speaking the past decade has been a sort of anti-1960s.

 

[1] There are actually two versions of Okun’s law, one relating the change in the unemployment rate to GDP growth and one relating the level of unemployment to the deviation of GDP from potential. The two forms will be equivalent if potential grows at a constant rate.
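To spell the two forms out in the same plain-text notation as equation (1), with u the unemployment rate, u* its long-run level, and y and y* the levels of GDP and potential in percentage terms (symbols introduced here for exposition only):

Δu = –b (g – c)
u – u* = –b (y – y*)

Differencing the second form recovers the first when u* is constant and potential grows at the constant rate c.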

[2] The assumption that variables can be partitioned into “fast” and “slow” ones, so that we can calculate equilibrium values of the former with the latter treated as exogenous, is a very widespread feature of economic modeling, heterodox as much as mainstream. I think it needs to be looked at more critically. One alternative is dynamic models where we focus on the system’s evolution over time rather than equilibrium conditions. This is, I suppose, the kind of “theory” implied by VAR-type forecasting models, but it’s rare to see it developed explicitly. There are people who talk about a system dynamics approach, which seems promising, but I don’t know much about them.

Links for June 15, 2016

“Huge foreign demand for Treasuries”. Via Across the Curve, here’s the Wall Street Journal:

The global hunger for U.S. government debt is intensifying as investors seek better returns from the negative yields and record-low rates found in Japan and Europe. On Thursday, an auction of 30-year Treasury debt attracted some of the highest demand ever from overseas buyers…

Thirty-year bonds typically attract a specialized audience, largely pension funds and other investors trying to buy assets to match long-term liabilities. There are few viable alternatives for such buyers around the world. The frenzy of buying has sparked warnings about the potential of large losses if interest rates rise. …

One auction is hardly dispositive, but that huge foreign demand is worth keeping in mind when we think about the US fiscal and external deficits. A point that Ryan Cooper also makes well.

 

You get the debt, I get the cash. When I pointed out Minsky’s take on Trump on this blog a few months ago, there were some doubts. How, exactly, did Trump extract equity from his businesses? Why didn’t the other investors sue? In its fascinating long piece on Trump’s adventures in Atlantic City, the Times answers these questions in great detail. Read the whole thing. But if you don’t, Trump himself has the tl;dr:

“Early on, I took a lot of money out of the casinos with the financings and the things we do,” he said in a recent interview. “Atlantic City was a very good cash cow for me for a long time.”

 

 

Unto the tenth generation. If you are going to make an argument for Britain leaving the EU, it seems to me that Steve Keen has the right one. The fundamental issue is not the direct economic effects of membership in the EU (which are probably exaggerated in any case). The issue is the political economy. First, the EU lacks democratic legitimacy: it effectively shifts power away from elected national governments, while Europe-wide democracy remains an empty shell. And second, European institutions are committed to an agenda of liberalization and austerity.

From the other side, we get this:

[Screenshot]

I wish I knew where this sort of thing came from. Maybe Brexit would reduce employment in the UK, maybe it wouldn’t — this is not the sort of thing that can be asserted as an unambiguous matter of fact. Whether it’s EU membership or taxes or trade or the minimum wage, everyone claims their preferred policy will lead to higher living standards than the alternatives. Chris Bertram seems like a reasonable person; I doubt that, in general, he wishes suffering on the children of people who see policy questions differently than he does. But on this one, disagreement is impermissible. Why?

 

I like hanging out in coffeehouses. Over at Bloomberg, Noah Smith offers a typology of macroeconomics:

The first is what I call “coffee-house macro,” and it’s what you hear in a lot of casual discussions. It often revolves around the ideas of dead sages — Friedrich Hayek, Hyman Minsky and John Maynard Keynes.

The second is finance macro. This consists of private-sector economists and consultants who try to read the tea leaves on interest rates, unemployment, inflation and other indicators in order to predict the future of asset prices (usually bond prices). It mostly uses simple math, … always includes a hefty dose of personal guesswork.

The third is academic macro. This traditionally involves professors making toy models of the economy — since the early ’80s, these have almost exclusively been DSGE models … most people outside the discipline who take one look at these models immediately think they’re kind of a joke.

The fourth type I call Fed macro…

It’s fair to say a lot of us here in the coffeehouses were pleased by this piece. Obviously, you can quibble with the details of his list. But it gets a couple of big things right. First, the fundamental problem with academic macroeconomics is not that it’s a study of the concrete phenomena of “the economy” that has somehow gone wrong, but that it’s a self-contained activity. The purpose of models isn’t to explain anything; it’s just to satisfy the aesthetic standards of the profession. As my friend Suresh put it once, the best way to think about economics is as a kind of haiku with Euler equations. I think a lot of people on the left miss this — they think that you can criticize economics by pointing out some discrepancy with the real world. But that’s like saying that chess is flawed because in medieval Europe there were many more knights than bishops.

The other thing I think Noah’s piece gets right is that there is no such thing as “economic orthodoxy.” There are various different orthodoxies — the orthodoxy of the academy, the orthodoxy of business and finance, the orthodoxy of policymakers — and they don’t agree with each other. On some important dimensions they hardly even make contact.

Merijn Knibbe also likes the Bloomberg piece. (He’s doing very good work exploring these same cleavages.) Noah follows up on his blog. Justin Wolfers rejects basically everything taught as macroeconomics in grad school. DeLong makes a distinction between good and bad academic economics which, to me, frankly, looks like wishful thinking. Brian Romanchuk has some interesting thoughts on this conversation.

 

Are there really excess reserves? I’ve just been reading Zoltan Pozsar’s Global Money Notes for Credit Suisse. Man they are good. If you’re interested in money, finance, central banks, monetary policy, any of that, you should be reading this guy. Anyway, in this one he makes a provocative argument that I think is right:

Contrary to conventional wisdom, there are no excess reserves – not one penny. Labelling the trillions of reserves created as a byproduct of QE as “excess” was appropriate only until the Liquidity Coverage Ratio (LCR) went live, but not after. …

Before the LCR, banks were required to hold reserves only against demand deposits issued in the U.S. … As banks went about their usual business of making loans and creating deposits, they routinely fell short of reserve requirements. To top up their reserve balances, banks with a shortfall of reserves … borrowed fed funds from banks with a surplus of reserves… These transactions comprised the fed funds market.

Under the LCR, banks are required to hold reserves (and more broadly, high-quality liquid assets) not only against overnight deposits, but all short-term liabilities that mature in less than 30 days, regardless of whether those liabilities were issued by a bank subsidiary, a broker-dealer subsidiary or a holding company onshore or offshore …

The idea is this: Under the new Basel III rules, which the US has adopted, banks are required to hold liquid assets equal to their total liabilities due in 30 days or less. This calculation is supposed to include liabilities of all kinds, across all the bank’s affiliates and subsidiaries, inside and outside the US. Most of this requirement must be satisfied with a short list of “Tier I” assets, which includes central bank reserves. So reserves that are excess with respect to the old (effectively moot) reserve requirements are not excess to the extent that they are held to satisfy these new rules.

Pozsar’s discussion of these issues is extremely informative. But his “not one penny” language may be a bit exaggerated. While the exact rules are still being finalized, it looks like current reserve holdings could still be excessive under the LCR. While banks have to hold unencumbered liquid assets equal to their short-term liabilities, the fraction that has to be reserves specifically is still being determined — Pozsar suggests the most likely fraction is 15 percent. He gives data for six big banks, four of which hold more reserves than required by that standard — though on the other hand, five of the six hold less in total Tier I liquid assets than they will need. (That’s the “max” line in the figure — the “min” line is 15 percent.) But even if current reserve holdings turn out to be more than the LCR requires, it’s clear that there are far fewer excess reserves than one would think using the old requirements.

Liquid assets as a share of short-term liabilities, selected banks. Source: Credit Suisse
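To make the arithmetic concrete, here is a stylized sketch. The 15 percent reserve fraction is the figure Pozsar suggests; the function and the balance-sheet numbers below are hypothetical, for illustration only.

```python
# Stylized LCR arithmetic: how much of a bank's short-term (sub-30-day) liabilities
# must be covered by reserves, and how much by liquid assets overall.
# The 15% reserve fraction is Pozsar's suggested figure; everything else is made up.
def lcr_position(short_term_liabilities, reserves, other_liquid_assets,
                 reserve_fraction=0.15):
    """Return (reserve surplus, total liquid-asset surplus); negative means a shortfall."""
    required_reserves = reserve_fraction * short_term_liabilities
    required_liquid = short_term_liabilities   # total liquid assets must cover all sub-30-day liabilities
    reserve_surplus = reserves - required_reserves
    liquid_surplus = (reserves + other_liquid_assets) - required_liquid
    return reserve_surplus, liquid_surplus

# Hypothetical bank: $500bn sub-30-day liabilities, $100bn reserves, $300bn other liquid assets
print(lcr_position(500, 100, 300))  # above the 15% reserve floor, but short on total liquid assets
```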

Pozsar draws several interesting conclusions from these facts. First, the Federal Funds rate is dead for good as a tool of policy. (I wonder how long it will take textbook writers to catch up.) Second, central bank balance sheets are not going to shrink back to “normal” any time in the foreseeable future. Third, this is a step along the way to the Fed becoming the world’s central bank, de facto and even, in a sense, de jure. (Especially in conjunction with the permanent swap lines with other central banks, another institutional evolution that has not gotten the attention it deserves.) A conclusion that he does not draw, but perhaps should have, is that this is another reason not to worry about demand for Treasury debt. There are good prudential reasons for requiring banks to hold government liabilities, but in effect it is also a form of financial repression.

This is also a good illustration of Noah’s point about the disconnect between academic economics and policy/finance economics. Academic economists are obsessed with “the” interest rate, which they map to the entirely unrelated intertemporal price called “interest rate” in the Walrasian system. Fundamentally what central banks do is determine the pace of credit expansion, which historically has involved a great variety of policy tools. Yes, for a while the tool of choice was an overnight interbank rate. But not anymore. And whatever the mix of immediate targets and instruments will be going forward, it’s a safe bet it won’t return to what it was in the past.