Five, Ten or Even Thirty Years

Neel Kashkari is clearly a very smart guy. He’s been an invaluable voice for sanity at the Fed these past few years. Doesn’t he see that something has gone very wrong here?

Kashkari:

When people borrow money to buy a house, or businesses take out a loan to build a new factory, they don’t really care about overnight interest rates. They care about what interest rates will be for the term of their loan: 5, 10 or even 30 years. [1] Similarly, when banks make loans to households and businesses, they also try to assess where interest rates will be over the length of the loan when they set the terms. Hence, expectations about future interest rates are enormously important to the economy. When the Fed wants to stimulate more economic activity, we do that by trying to lower the expected future path of interest rates. When we want to tap the brakes, we try to raise the expected future path of interest rates.

Here’s the problem: recessions don’t last 5, 10 or even 30 years. Per the NBER, they last a year to 18 months.

Mainstream theory says we have a long run dictated by the supply side — technology, demographics, etc. On average the output gap is zero, or at least it’s at a stable level. On top of this are demand disturbances or shocks — changes in desired spending — which produce the business cycle, alternating periods of high unemployment and normal growth or rising inflation. The job of monetary policy is to smooth out these short-term fluctuations in demand; it absolves itself of responsibility for the longer-run growth path.

If policy is responding to demand shortfalls that last a year or two, how is that supposed to work if policy shifts have to be maintained for 5, 10 or even 30 years to be effective?

If the Fed is faced with rising inflation in 2005, is it supposed to respond by committing itself to keeping rates high even in 2010, when the economy is sliding into depression? Does anyone think that would have been a good idea? (I seriously doubt Kashkari thinks so.) And if it had made such a commitment in 2005, would anyone have worried about breaking it in 2010? But if the Fed can’t or shouldn’t make such a commitment, how is this vision of monetary policy supposed to work?

If monetary policy is only effective when sustained for 5, 10 or even 30 years, then monetary policy is not a suitable tool for managing the business cycle. Milton Friedman pointed this out long ago: it is impossible for countercyclical monetary policy to work unless the lags with which it takes effect are decidedly shorter than the frequency of the shocks it is supposed to respond to. The best you can do, in his view, is maintain stable money supply growth that will ensure stable inflation over the long run.

Meanwhile, conventional monetary policy rules like the Taylor rule are defined based on current macroeconomic conditions — today’s inflation rate, today’s output gap, today’s unemployment rate. There’s no term in there for commitments the Fed made at some point in the past. The Taylor rule seems to describe the past two or three decades of monetary policy pretty well. Now, Kashkari says the Fed can influence real activity only insofar as it is setting policy over the next 5, 10 or even 30 years based on today’s conditions. But what the Fed actually will do, if the Taylor rule continues to be a reasonable guide, is set monetary policy based on conditions as they are over the next 5, 10 or even 30 years. So if Kashkari takes his argument seriously, he must believe that monetary policy as it is currently practiced is not effective at all.
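
Just to make that concrete, here is a stripped-down Taylor-type rule in a few lines of Python. The coefficients are Taylor’s original 1993 ones; the function and the numbers plugged into it are only an illustration.

```python
# A minimal Taylor-type rule: the prescribed rate depends only on
# conditions today. Coefficients follow Taylor's 1993 formulation;
# the numbers plugged in below are arbitrary.

def taylor_rule(inflation, output_gap, r_star=2.0, inflation_target=2.0):
    """Policy rate implied by current inflation and the current output gap."""
    return (r_star + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

print(taylor_rule(inflation=3.0, output_gap=-1.0))  # 5.0
# Note what is *not* an argument here: any commitment the Fed made
# 5, 10 or even 30 years ago.
```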

In the abstract, we can imagine some kind of rule that sets policy today as a weighted average of commitments made over the past 5, 10 or even 30 years. Of course neither economic theory nor official statements describe policy this way. Still, in the abstract, we can imagine it. But in practice? FOMC members come and go; Kashkari is there now, he’ll be gone next year. The chair and most members are appointed by presidents, who also come and go, sometimes in unpredictable ways. Whatever Kashkari thinks is the appropriate policy for 2022, 2027 or 2047, it’s highly unlikely he’ll be there to carry it out. Suppose that in 2019 Fed chair Kevin Warsh looks at the state of the economy and says, “I think the most appropriate policy rate today is 4 percent”. Is it remotely plausible that that sentence continues “… but my predecessor made a commitment to keep rates low so I will vote for 2 percent instead”? If that’s the reed the Fed’s power over real activity rests on, it’s an exceedingly thin one. Even leaving aside changes of personnel, the Fed has no institutional capacity to make commitments about future policy. Future FOMC members will make their choices based on their own preferred models of the economy plus the data on the state of the economy at the time. If monetary policy only works through expectations of policy 5, 10 or even 30 years from now, then monetary policy just doesn’t work.

There are a few ways you can respond to this.

One is to accept Kashkari’s premise — monetary policy is only effective if sustained over many years — and follow it to its logical conclusion: monetary policy is not useful for stabilizing demand over the business cycle. Two possible next steps: Friedman’s, which concludes that stabilizing demand at business cycle frequencies is not a realistic goal for policy, and the central bank should focus on the long-term price level; and Abba Lerner’s, which concludes that business cycles should be dealt with by fiscal policy instead.

The second response is to start from the fact — actual or assumed — that monetary policy is effective at smoothing out the business cycle, in which case Kashkari’s premise must be wrong. Evidently the effect of monetary policy on activity today does not depend on beliefs about what policy will be 5, 10 or even 30 years from now. This is not a hard case to make. We just have to remember that there is not “an” interest rate, but lots of different credit markets, with rationing as well as prices, with different institutions making different loans to different borrowers. Policy is effective because it targets some particular financial bottleneck. Perhaps stocks or inventories are typically financed short-term, and changes in their financing conditions are disproportionately likely to affect real activity; perhaps mortgage rates, for institutional reasons, are more closely linked to the policy rate than you would expect from “rational” lenders; perhaps banks become more careful in their lending standards as the policy rate rises. One way or another, these stories depend on widespread liquidity constraints and the lack of arbitrage between key markets. Generations of central bankers have told stories like these to explain the effectiveness of monetary policy. Remember Ben Bernanke? His article Inside the Black Box is a classic of this genre, and its starting point is precisely the inadequacy of Kashkari’s interest rate story to explain how monetary policy actually works. Somehow or other policy has to affect the volume of lending on a shorter timeframe than it can be expected to move long rates. Going back a bit further, the Fed’s leading economist of the 1950s, Robert Roosa, was very clear that neither the direct effect of Fed policy shifts on longer rates, nor of interest rates on real activity, could be relied on. What mattered rather was the Fed’s ability to change the willingness of banks to make loans. This was the “availability doctrine” that guided monetary policy in the postwar years. [2] If you think monetary policy is generally an effective tool to moderate business cycles, you have to believe something like this.

Response three is to accept Kashkari’s premise, yet also to believe that monetary policy works. This means you need to adjust your view of what policy is supposed to be doing. Policy that has to be sustained for 5, 10 or even 30 years to be effective is no good for responding to demand shortfalls that last only a year or two at most. It looks better if you think that demand may be lacking for longer periods, or indefinitely. If shifts in demand are permanent, it’s not such a problem that to be effective policy shifts must also be permanent, or close to it. And the inability to make commitments is less of a problem in this case; if demand is weak today, there’s a good chance it will be weak in 5, 10 or even 30 years too, so policy will be persistent even if it’s only based on current conditions. Obviously this is inconsistent with an idea that aggregate demand inevitably gravitates toward aggregate supply, but that’s ok. It might indeed be the case that demand deficiencies can persist indefinitely, requiring an indefinite maintenance of lower rates. There’s a good case that something like this response was Keynes’ view. [3] But while this idea isn’t crazy, it’s certainly not how central banks normally describe what they’re doing. And Kashkari’s post doesn’t present itself as a radical reformulation of monetary policy’s goals, or mention secular stagnation or anything like that.

I don’t know which if any of these responses Kashkari would agree with. I suppose it’s possible he sincerely believes that policy is only effective when sustained for 5, 10 or even 30 years, and simply hasn’t noticed that this is inconsistent with a mission of stabilizing demand over business cycles that turn much more quickly. Given what I’ve read of his, this seems unlikely. It also seems unlikely that he really thinks you can understand monetary policy while abstracting from banks, finance, credit and, well, money — that you can think of it purely in terms of an intertemporal “interest rate,” goods today vs goods tomorrow, which the central bank can somehow set despite controlling neither preferences nor production possibilities. My guess: When he goes to make concrete policy, it’s on the basis of some version of my response two, an awareness that policy operates through the concrete financial structures that theory abstracts from. And my guess is he wrote this post the way he did because he thought the audience he’s writing for would be more comfortable with a discussion of the expectations of abstract agents than with a discussion of the concrete financial structures through which monetary policy is transmitted. It doesn’t hurt that the former is much simpler.

Who knows, I’m not a mind reader. But it doesn’t really matter. Whether the most progressive member of the FOMC has forgotten everything his predecessors knew about the transmission of monetary policy, or whether he merely assumes his audience has, the implications are about the same. “The Fed sets the interest rate” is not the right starting point for thinking about how monetary policy manages aggregate demand.

 

[1] This is a weird statement, and seems clearly wrong. My wife and I just bought a house, and I can assure you we were not thinking at all about what interest rates would be many years from now. Why would we? — our monthly payments are fixed in the contract, regardless of what happens to rates down the road. Allowing the buyer to not care about future interest rates is pretty much the whole point of the 30-year fixed rate mortgage. Now it is true that we did care, a little, about interest rates next year (not in 5 years). But this was in the opposite way from what Kashkari suggests — today’s low rates are more of an inducement to buy precisely if they will not be sustained, i.e. if they are not informative about future rates.

I think what may be going on here is a slippage between long rates — which the borrower does care about — and expected short rates over the length of the loan. In any case we can let it go because Kashkari’s argument does work in principle for lenders.

[2] Thanks to Nathan Tankus for pointing this article out to me.

[3] Leijonhufvud as usual puts it best:

Keynes looked forward to an indefinite period of, at best, unrelenting deflationary pressure and painted it in colors not many shades brighter than the gloomy hues of the stagnationist picture. But these stagnationist fears were based on propositions that must be stated in terms of time-derivatives. Modern economies, he believed, were such that, at a full employment rate of investment, the marginal efficiency of capital would always tend to fall more rapidly than the long rate of interest. … When he states that the long rate “may fluctuate for decades about a level which is chronically too high” one should … see this in the historical context of the “obstinate maintenance of misguided monetary policies” of which he steadily complained.

Plausibility, Continued

[Figure: Real output per worker, 1921-1939]

The Depression didn’t just see a fall in employment, it saw a fall in the output of those still employed, reversing much of the productivity gains of the 1920s. (This surprised Keynes, among others, who still believed in the declining marginal product of labor, which predicted the opposite.) Recovery in the late 1930s, conversely, didn’t just mean higher employment, it involved a sharp acceleration in labor productivity. There’s a widespread idea that output per worker necessarily reflects supply-side factors — technology, skills, etc. But if demand had such direct effects on labor productivity in the Great Depression, why not in the Lesser Depression too? Yet for some reason, people who scoff at the idea of the “Great Forgetting” of the 1930s have no trouble believing that the drastic slowdown in productivity growth of recent years has nothing to do with the economic crisis it immediately followed.

 

EDIT: I should add: While the decline in production during the Depression was, of course, primarily a matter of reduced employment, the decline in productivity was not trivial. If output per employee had continued to rise in the first half of the 1930s at the same rate as in the 1920s, the total fall in output would have been on the order of 25 percent rather than 33 percent.
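
To make the arithmetic explicit, here is the counterfactual in a few lines of Python. The index numbers are made up, chosen only to match the rough magnitudes above.

```python
# Rounded, purely illustrative indexes (1929 = 100); only the logic matters.
employment = 68            # suppose employment (or total hours) fell by about a third
productivity_actual = 98   # output per worker back down around its late-1920s level
productivity_trend = 110   # where output per worker would have been on the 1920s trend

actual_output = employment * productivity_actual / 100          # ~67
counterfactual_output = employment * productivity_trend / 100   # ~75

print(round(100 - actual_output), round(100 - counterfactual_output))  # 33 25
```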

Note also that the only other comparable (in fact larger) fall in GDP per worker came in the immediate postwar demobilization period 1945-1947. I’ve never understood the current convention that says we should ignore the depression and wartime experience when thinking about macroeconomic relationships. Previous generations thought just the opposite — that we can learn the most about how the system operates from these kinds of extreme events, that “the prime test of Keynesian theory must be the Great Depression.” Isn’t it logical, if you want to understand how shifts in aggregate demand affect economic outcomes, that you would look first at the biggest such shifts, where the effects should be clearest? The impact of these two big demand shifts on output per worker seems like good reason to expect such effects in general.

And it’s not hard to explain why. In real economies, there are great disparities in the value of the labor performed by similar people, and immense excess capacity in the form of low-productivity jobs accepted for lack of anything better. Increased demand mobilizes that capacity. When the munitions factories are running full tilt, no one works shining shoes.

Mark Blyth on the Creditor’s Paradise

There’s a lot to like in this talk by Mark Blyth, reposted in Jacobin. I will certainly be quoting him in the future on the euro system as a “creditor’s paradise.” But I can’t help noting that the piece repeats exactly the two bits of conventional wisdom that I’ve been criticizing in my recent posts here on Europe. [1]

First, the uncritical adoption of the orthodox view that if Greece defaults on its debts to the euro system, it will have to leave the single currency.  Admittedly it’s just a line in passing. But I really wish that Blyth would not write “default or ‘Grexit’,” as if they were synonyms. Given that the assumption that they have to go together is one of the strongest weapons on the side of orthodoxy, opponents of austerity should at least pause a moment and ask if they necessarily do.

Second, this:

Austerity as economic policy simply doesn’t work. … European reforms … simply ask everyone to become “more competitive” — and who could be against that? Until one remembers that being competitive against each other’s main trading partners in the same currency union generates a “moving average” problem of continental proportions. 

It is statistically absurd to all become more competitive. It’s like everyone trying to be above average. It sounds like a good idea until we think about the intelligence of the children in a classroom. By definition, someone has to be the “not bright” one, even in a class of geniuses.

In comments to my last post, a couple of people doubted whether critics of austerity really say it’s impossible for all the countries in the euro to become more competitive. If you were one of the doubters, here you go: Mark Blyth says exactly that. Notice the slippage in the referent of “everyone,” from all countries in the euro system, to all countries in the world. Contra Blyth, since the eurozone is not a closed trading system, it is not inherently absurd to suggest that everyone in it can become more competitive. If competitiveness is measured by the trade balance, it’s not only not absurd, it’s an accomplished fact.

Obviously — but I guess it isn’t obvious — I don’t personally think that the shift toward trade surpluses throughout the eurozone represents any kind of improvement in the human condition. But it does directly falsify the claim Blyth is making here. And this is a problem if the stance we are trying to criticize austerity from is a neutral technocratic one, in which disagreements are about means rather than ends.

Austerity is part of the program of reinforcing and extending the logic of the market in political and social life. Personally I find that program repugnant. But on its own terms, austerity can work just fine.

[1] One of my posts was also cross-posted at Jacobin. Everybody should read Jacobin.

The Call Is Coming from Inside the House

Paul Krugman wonders why no one listens to academic economists. Almost all the economists in the IGM Survey agree that the 2009 stimulus bill successfully reduced unemployment and that its benefits outweighed its costs. So why are these questions still controversial?

One answer is that economists don’t listen to themselves. More precisely, liberal economists like Krugman who want the state to take a more active role in managing the economy continue to teach an economic theory that has no place for activist policy.

Let me give a concrete example.

One of Krugman’s bugaboos is the persistence of claims that expansionary monetary policy must lead to higher inflation. Even after 5-plus years of ultra-loose policy with no rising inflation in sight, we keep hearing that “since so much money has been created…, there should already be considerable inflation.” (That’s from exhibit A in DeLong’s roundup of inflationphobia.) As an empirical matter, of course, Krugman is right. But where could someone have gotten this idea that an increase in the money supply must always lead to higher inflation? Perhaps from an undergraduate economics class? Very possibly — if that class used Krugman’s textbook.

Here’s what Krugman’s International Economics says about money and inflation:

A permanent increase in the money supply causes a proportional increase in the price level’s long-run value. … we should expect the data to show a clear-cut positive association between money supplies and price levels. If real-world data did not provide strong evidence that money supplies and price levels move together in the long run, the usefulness of the theory of money demand we have developed would be in severe doubt. 

… 

Sharp swings in inflation rates [are] accompanied by swings in growth rates of money supplies… On average, years with higher money growth also tend to be years with higher inflation. In addition, the data points cluster around the 45-degree line, along which money supplies and price levels increase in proportion. … the data confirm the strong long-run link between national money supplies and national price levels predicted by economic theory. 

… 

Although the price levels appear to display short-run stickiness in many countries, a change in the money supply creates immediate demand and cost pressures that eventually lead to future increases in the price level. 

… 

A permanent increase in the level of a country’s money supply ultimately results in a proportional rise in its price level but has no effect on the long-run values of the interest rate or real output. 

This last sentence is simply the claim that money is neutral in the long run, which Krugman continues to affirm on his blog. [1] The “long run” is not precisely defined here, but it is clearly not very long, since we are told that “Even year by year, there is a strong positive relation between average Latin American money supply growth and inflation.”

From the neutrality of money, a natural inference about policy is drawn:

Suppose the Fed wishes to stimulate the economy and therefore carries out an increase in the level of the U.S. money supply. … the U.S. price level is the sole variable changing in the long run along with the nominal exchange rate E$/€. … The only long-run effect of the U.S. money supply increase is to raise all dollar prices.

What is “the money supply”? In the US context, Krugman explicitly identifies it as M1, currency and checkable deposits, which (he says) is determined by the central bank. Since 2008, M1 has more than doubled in the US — an annual rate of increase of 11 percent, compared with an average of 2.5 percent over the preceding decade. Krugman’s textbook states, in unambiguous terms, that such an acceleration of money growth will lead to a proportionate acceleration of inflation. He can hardly blame the inflation hawks for believing what he himself has taught a generation of economics students.
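
Just to spell out what the textbook’s proportionality claim implies, here is the arithmetic in a few lines of Python, using the growth rates quoted above and treating the period since 2008 as roughly six years. Needless to say, nothing like this has happened.

```python
# Illustrative arithmetic: what strict money-price proportionality would imply.
years = 6            # roughly 2008 to the time of writing
m1_growth = 0.11     # post-2008 average annual M1 growth, per the text
old_growth = 0.025   # pre-2008 average, per the text

implied_price_rise = (1 + m1_growth) ** years - 1   # ~0.87
old_trend_rise = (1 + old_growth) ** years - 1      # ~0.16
print(round(implied_price_rise, 2), round(old_trend_rise, 2))
# On the textbook's logic the price level should by now be very roughly
# 87 percent higher, instead of rising on something like its old trend.
```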

You might think these claims about money and inflation are unfortunate oversights, or asides from the main argument. They are not. The assumption that prices must eventually change in proportion to the central bank-determined money supply is central to the book’s four chapters on macroeconomic policy in an open economy. The entire discussion in these chapters is in terms of a version of the Dornbusch “overshooting” model. In this model, we assume that

1. Real exchange rates are fixed in the long run by purchasing power parity (PPP).
2. Interest rate differentials between countries are possible only if they are offset by expected changes in the nominal exchange rate.

Expansionary monetary policy means reducing interest rates here relative to the rest of the world. In a world of freely mobile capital, investors will hold our lower-return bonds only if they expect our nominal exchange rate to appreciate in the future. With the long-run real exchange rate pinned down by PPP, the expected future nominal exchange rate depends on expected inflation. So to determine what exchange rate today will make investors willing to hold our lower-interest bonds, we have to know how policy has changed their expectations of the future price level. Unless investors believe that changes in the money supply will translate reliably into changes in the price level, there is no way for monetary policy to operate in this model.
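
For what it’s worth, here is the skeleton of that chain in a few lines of Python — a toy version only, with made-up numbers, but it shows where the expected price level enters.

```python
# Toy sketch of the overshooting logic. E is dollars per euro; all numbers
# are made up for illustration.

def todays_exchange_rate(e_expected, r_domestic, r_foreign):
    """Uncovered interest parity, R_$ = R_eur + (E_expected - E)/E,
    solved for today's exchange rate E."""
    return e_expected / (1 + r_domestic - r_foreign)

money_increase = 0.10      # a permanent 10 percent rise in the money supply
e_long_run_old = 1.00
# The long-run price level -- and with it, via PPP, the long-run exchange
# rate -- is assumed to rise in proportion to the money supply.
e_long_run_new = e_long_run_old * (1 + money_increase)

e_today = todays_exchange_rate(e_expected=e_long_run_new,
                               r_domestic=0.02,   # policy pushes the US rate below
                               r_foreign=0.05)    # the foreign rate
print(round(e_today, 3))   # ~1.134: the dollar jumps past its new long-run
                           # level, i.e. it overshoots
```

The thing to notice is that the whole calculation runs through e_long_run_new, which moves only because investors are assumed to believe the price level will rise in proportion to the money supply.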

So these are not throwaway lines. The more thoroughly a student understands the discussion in Krugman’s textbook, the stronger should be their belief that sustained expansionary monetary policy must be inflationary. Because if it is not, Krugman gives you no tools whatsoever to think about policy.

Let me anticipate a couple of objections:

Undergraduate textbooks don’t reflect the current state of economic theory. Sure, this is often true, for better or worse. (IS-LM has existed for decades only in the Hades of undergraduate instruction.) But it’s not much of a defense, is it? If Paul Krugman has been teaching his undergraduates economic theory that produces disastrous results when used as a guide for policy, you would think that would provoke some soul-searching on his part. As far as I can tell, it hasn’t. In any case, I think the textbook here does a good job of summarizing the relevant scholarship. It closely follows the model in Dornbusch’s Expectations and Exchange Rate Dynamics, which similarly depends on the assumption that the price level changes proportionately with the money supply. The Dornbusch article is among the most cited in open-economy macroeconomics and international finance, and continues to appear on international finance syllabuses in most top PhD programs.

Everything changes at the zero lower bound. Defending the textbook on the ground that it’s pre-ZLB effectively concedes that what economists were teaching before 2008 has become useless since then. (No wonder people don’t listen.) If orthodox theory as of 2007 has proved to be all wrong in the post-Lehman world, shouldn’t that at least raise some doubts about whether it was all right pre-Lehman? But again, that’s irrelevant here, since I am looking at the 9th Edition, published in 2011. And it does talk about the liquidity trap — not, to be sure, in the main chapters on macroeconomic policy, but in a two-page section at the end. The conclusion of that section is that while temporary increases in the money supply will be ineffective at the zero lower bound, a permanent increase will have the same effects as always: “Suppose the central bank can credibly promise to raise the money supply permanently … output will therefore expand, and the currency will depreciate.” (The accompanying diagram shows how the economy returns to full employment.) The only way such a policy might fail is if there is reason to believe that the increase in the money supply will subsequently be reversed. Just to underline the point, the further reading suggested on policy at the zero lower bound is an article by Lars Svensson that calls a permanent expansion in the money supply “the foolproof way” to escape a liquidity trap. There’s no suggestion here that the relationship between monetary policy and inflation is any less reliable at the ZLB; the only difference is that the higher inflation that must inevitably result from monetary expansion is now desirable rather than costly. This might help if Krugman were a market monetarist, and wanted to blame the whole Great Recession and slow recovery on bad policy by the Fed; but (to his credit) he isn’t and doesn’t.

Liberal Keynesian economists made a deal with the devil decades ago, when they conceded the theoretical high ground. Paul Krugman the textbook author says authoritatively that money is neutral in the long run and that a permanent increase in the money supply can only lead to inflation. Why shouldn’t people listen to him, and ignore Paul Krugman the blogger?

[1] That Krugman post also contains the following rather revealing explanation of his approach to textbook writing:

Why do AS-AD? First, you do want a quick introduction to the notion that supply shocks and demand shocks are different … and AS-AD gets you to that notion in a quick and dirty, back of the envelope way. 

Second — and this plays a surprisingly big role in my own pedagogical thinking — we do want, somewhere along the way, to get across the notion of the self-correcting economy, the notion that in the long run, we may all be dead, but that we also have a tendency to return to full employment via price flexibility. Or to put it differently, you do want somehow to make clear the notion (which even fairly Keynesian guys like me share) that money is neutral in the long run. That’s a relatively easy case to make in AS-AD; it raises all kinds of expositional problems if you replace the AD curve with a Taylor rule, which is, as I said, essentially a model of Bernanke’s mind.

This is striking for several reasons. First, Krugman wants students to believe in the “self-correcting economy,” even if this requires teaching them models that do not reflect the way professional economists think. Second, they should think that this self-correction happens through “price flexibility.” In other words, he wants his students to look at, say, falling wages in Greece, and think that the problem must be that they have not fallen enough. That’s what “a return to full employment via price flexibility” means. Third, and most relevant for this post, this vision of self-correction-by-prices is directly linked to the idea that money is neutral in the long run — in other words, that a sustained increase in the money supply must eventually result in a proportionate increase in prices. What Krugman is saying here, then, is that a “surprisingly big” part of his thinking on pedagogy is how to inculcate the exact errors that drive him crazy in policy settings. But that’s what happens once you accept that your job as an educator is to produce ideological fables.

How Not to Think about Negative Rates

Last week’s big monetary-policy news was the ECB’s decision to target a negative interest rate, in the form of a 0.25 percent tax on bank reserves. This is the first time a major central bank has announced a negative policy rate, though some smaller ones (like the Bank of Sweden) have done so in the past few years.

Whether a tax on reserves is really equivalent to a negative interest rate, and whether this change should be expected to pass through to interest rates or credit availability for private borrowers, are not easy questions. I’m not going to try to answer them now. I just want to call attention to this rather extraordinary Neil Irwin column, as an example of how unsuited mainstream discussion is to addressing these questions.

Here’s Irwin’s explanation of what a negative interest rate means:

When a bank pays a 1 percent interest rate, it’s clear what happens: If you deposit your money at the bank, it will pay you a penny each year for every dollar you deposited. When the interest rate is negative, the money goes the other direction. … Put bluntly: Normally the banks pay you to keep your money there. Under negative rates, you pay them for the privilege.

Not mentioned here, or anywhere else in the article, is that people pay interest to banks, as well as receiving interest from them. In Irwin’s world, “you” are always a creditor, never a borrower.

Irwin continues:

The theory is that when it becomes more costly for European banks to keep money in the E.C.B., they will have incentive to do something else with it: Lend it out to consumers or businesses, for example.

Here’s the loanable funds theory in all its stupid glory. People put their “money” into a bank, which then either holds it or lends it out. Evidently it is not a requirement to be a finance columnist for the New York Times to know anything about how bank loans actually work.

Irwin:

Banks will most likely pass these negative interest rates on to consumers, or at least try to. They may try to do so not by explicitly charging a negative interest rate, but by paying no interest and charging a fee for account maintenance.

Note that “consumers” here means depositors. The fact that banks also make loans has escaped Irwin’s attention entirely.

Of course, most of us are already in this situation: we don’t receive any interest on our transaction balances, and are willing to pay various charges and fees for the liquidity benefits of holding them.

The danger of negative rates, per Irwin, is that

It is possible that, assuming banks pass along the negative rates through either fees or explicitly charging negative interest, people will withdraw their money as cash rather than keeping it on deposit at banks. … That is one big reason that the E.C.B. and other central banks are going to be reluctant to make rates highly negative; it could result in people pulling cash out of the banking system.

Again the quantity theory in its most naive and stupid form: there is a fixed quantity of “money” out there, which is either being kept in banks — which function, in Irwin’s world, as glorified safe deposit boxes — or under mattresses. Evidently he’s never thought about why the majority of us who already face negative rates on our checking accounts continue to hold them. More fundamentally, there’s no explanation of what makes negative rates special. Bank deposits don’t, in general, finance holdings of reserves; they finance bank loans. Any kind of expansionary policy must reduce the yield on bank loans and also — if margins are constant — on deposits and other bank liabilities. Making returns to creditors the acid test of policy, as Irwin does, would seem to be an argument against expansionary monetary policy in general — which of course it is.

What’s amazing to me in this piece is that here we have an article about monetary policy that literally makes no mention of loans or borrowers. In Irwin’s world, “you” are, by definition, an owner of financial assets; no other entities exist. It’s the 180-proof distillation of the bondholder’s view of the world.

Heterodox criticism of the loanable-funds theory of interest, and the insistence that loans create deposits, can sometimes come across as theological, almost ritual. Articles like this are a reminder of why we can’t let these issues slide, if we want to make any sense of the financial universe in which we live.
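
Since the point keeps coming up, here is the “loans create deposits” accounting as a toy balance sheet in Python — not a model of any actual bank, just the bookkeeping.

```python
# A toy bank balance sheet, purely to illustrate the accounting.
bank = {"assets": {"reserves": 100, "loans": 0},
        "liabilities": {"deposits": 100}}

def make_loan(bank, amount):
    """The bank credits the borrower's deposit account: both sides of the
    balance sheet expand. No pre-existing deposit is 'lent out'."""
    bank["assets"]["loans"] += amount
    bank["liabilities"]["deposits"] += amount

make_loan(bank, 50)
print(bank)
# Reserves are unchanged; the new loan is matched by a new deposit.
```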

“Recession Is a Time of Harvest”

Noah and Seth say pretty much everything that needs to be said about this latest #Slatepitch provocation from Matt Yglesias.  [1] So, traa dy lioaur, I am going to say something that does not need to be said, but is possibly interesting.

Yglesias claims that “the left” is wrong to focus on efforts to increase workers’ money incomes, because higher wages just mean higher prices. Real improvements in workers’ living standards — he says — come from the same source as improvements for rich people, namely technological innovation. What matters is rising productivity, and a rise in productivity necessarily means a fall in (someone’s) nominal income. So we need to forget about raising the incomes of particular people and trust the technological tide to lift all boats.

As Noah and Seth say, the logic here is broken in several places. Rising productivity in a particular sector can raise incomes in that sector as easily as reduce them. Changes in wages aren’t always passed through to prices, they can also reflect changes in the distribution between wages and other income.

I agree, it’s definitely wrong as a matter of principle to say that there’s no link, or a negative link, between changes in nominal wages and changes in the real standard of living. But what kind of link is there, actually? What did our forebears think?

Keynes notoriously took the Yglesias line in the General Theory, arguing that real and nominal wages normally moved in opposite directions. He later retracted this view, the only major error he conceded in the GT (which makes it a bit unfortunate that it’s also the book’s first substantive claim). Schumpeter made a similar argument in Business Cycles, suggesting that the most rapid “progress in the standard of life of the working classes” came in periods of deflation, like 1873-1897. Marx on the other hand generally assumed that the wage was set in real terms, so as a first approximation we should expect higher productivity in wage-good industries to lead to lower money wages, and leave workers’ real standard of living unchanged. Productivity in this framework (and in post-GT Keynes) does set a ceiling on wages, but actual wages are almost always well below this, with their level set by social norms and the relative power of workers and employers.

But back to Schumpeter and the earlier Keynes. It’s worth taking a moment to think through why they thought there would be a negative relationship between nominal and real wages, to get a better understanding of when we might expect such a relationship.

For Keynes, the logic is simple. Wages are equal to marginal product. Output is produced in conditions of declining marginal returns. (Both of these assumptions are wrong, as he conceded in the 1939 article.) So when employment is high, the real wage must be low. Nominal wages and prices generally move proportionately, however, rising in booms and falling in slumps. (This part is right.) So we should expect a move toward higher employment to be associated with rising nominal wages, even though real wages must fall. You still hear this exact argument from people like David Glasner.
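
For anyone who wants the mechanics spelled out, here is that logic in a few lines of Python, with a made-up Cobb-Douglas production function standing in for “declining marginal returns.”

```python
# Illustrative only: wage = marginal product with diminishing returns.
A, alpha = 1.0, 0.7        # assumed technology level and labor exponent

def real_wage(L):
    """Marginal product of labor for output Y = A * L**alpha."""
    return alpha * A * L ** (alpha - 1)

for L in (80, 90, 100):
    print(L, round(real_wage(L), 3))
# Higher employment means a lower marginal product, so a lower real wage;
# with nominal wages and prices rising together in booms, nominal and
# real wages move in opposite directions.
```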

Schumpeter’s argument is more interesting. His starting point is that new investment is not generally financed out of savings, but by purchasing power newly created by banks. Innovations are almost never carried out by incumbent producers simply adopting the new process in place of the existing one, but rather by some new entrant — the famous entrepreneur — operating with borrowed funds. This means that the entrepreneur must bid away labor and other inputs from their current uses (importantly, Schumpeter assumes full employment), pushing up costs and prices. Furthermore, there will be some extended period of demand from the new entrants for labor and intermediate goods while the incumbents have not yet reduced theirs — the initial period of investment in the new process (and various ancillary processes — Schumpeter is thinking especially of major innovations like railroads, which will increase demand in a whole range of related industries), and later periods where the new entrants are producing but don’t yet have a decisive cost advantage, and a further period where the incumbents are operating at a loss before they finally exit. So major innovations tend to involve extended periods of rising prices. It’s only once the new producers have thoroughly displaced the old ones that demand and prices fall back to their old level. But it’s also only then that the gains from the innovation are fully realized. As he puts it (page 148):

Times of innovation are times of effort and sacrifice, of work for the future, while the harvest comes after… ; and that the harvest is gathered under recessive symptoms and with more anxiety than rejoicing … does not alter the principle. Recession [is] a time of harvesting the results of preceding innovation…

I don’t think Schumpeter was wrong when he wrote. There is probably some truth to the idea that falling prices and rising real wages went together in the 19th century. (Maybe by 1939, he was wrong.)

I’m interested in Schumpeter’s story, though, as more than just intellectual history, fascinating tho that is. Today’s consensus says that technology determines the long-term path of the economy, aggregate demand determines cyclical deviations from that path, and never the twain shall meet. But that’s not the only possibility. We talked the other day about demand dynamics not as — as in conventional theory — deviations from the growth path in response to exogenous shocks, but as an endogenous process that may, or may not, occasionally converge to a long-term growth trajectory, which it also affects.

In those Harrod-type models, investment is simply required for higher output — there are no innovation-driven or autonomous investment booms. That’s where Schumpeter comes in. What I like about his vision is that it makes clear that periods of major innovation, major shifts from one production process to another, are associated with higher demand — the major new plant and equipment they require, the reorganization of the spatial and social organization of production they entail (“new plant, new firms, new men,” as he says) make large additional claims on society’s resources. This is the opposite of the “great recalculation” claim we were hearing a couple of years ago, about how high unemployment was a necessary accompaniment to major geographic or sectoral shifts in output; and also of the more sophisticated version of the recalculation argument that Joe Stiglitz has been developing. [2] Schumpeter is right, I think, when he explicitly says that if we really were dealing with “recalculation” by a socialist planner, then yes, we might see labor and resources withdrawn from the old industries first, and only then deployed to the new ones. But under capitalism things don’t work like that (pages 110-111):

Since the central authority of the socialist state controls all existing means of production, all it has to do in case it decides to set up new production functions is simply to issue orders to those in charge of the productive functions to withdraw part of them from the employments in which they are engaged, and to apply the quantities so withdrawn to the new purposes envisaged. We may think of a kind of Gosplan as an illustration. In capitalist society the means of production required must also be … [redirected] but, being privately owned, they must be bought in their respective markets. The issue to the entrepreneurs of new means of payments created ad hoc [by banks] is … what corresponds in capitalist society to the order issued by the central bureau in the socialist state. 

In both cases, the carrying into effect of an innovation involves, not primarily an increase in existing factors of production, but the shifting of existing factors from old to new uses. There is, however, this difference between the two methods of shifting the factors: in the case of the socialist community the new order to those in charge of the factors cancels the old one. If innovation were financed by savings, the capitalist method would be analogous… But if innovation is financed by credit creation, the shifting of the factors is effected not by the withdrawal of funds—“canceling the old order”—from the old firms, but by … newly created funds put at the disposal of entrepreneurs: the new “order to the factors” comes, as it were, on top of the old one, which is not thereby canceled.

This vision of banks as capitalist Gosplan, but with the limitation that they can only give orders for new production on top of existing production, seems right to me. It might have been written precisely as a rebuttal to the “recalculation” arguments, which explicitly imagined capitalist investment as being guided by a central planner. It’s also a corrective to the story implied in the Slate piece, where one day there are people driving taxis and the next day there’s a fleet of automated cars. [3] Before that can happen, there’s a long period of research, investment, development — engineers are getting paid, the technology is getting designed and tested and marketed, plants are being built and equipment installed — before the first taxi driver loses a dollar of income. And even once the driverless cars come on line, many of the new companies will fail, and many of the old drivers will hold on for as long as their credit lasts. Both sets of loss-making enterprises have high expenditure relative to their income, which by definition boosts aggregate demand. In short, a period of major innovations must be a period of rising nominal incomes — as we most recently saw, on a moderate scale, in the late 1990s.

Now for Schumpeter, this was symmetrical: High demand and rising prices in the boom were balanced by falling prices in the recession — the “harvest” of the fruits of innovation. And it was in the recession that real wages rose. This was related to his assumption that the excess demand from the entrepreneurs mainly bid up the price of the fixed stock of factors of production, rather than activating un- or underused factors. Today, of course, deflation is extremely rare (and catastrophic), and output and employment vary more over the cycle than wages and prices do. And there is a basic asymmetry between the boom and the bust. New capital can be added very rapidly in growing enterprises, in principle; but gross investment in the declining incumbents cannot fall below zero. So aggregate investment will always be highest when there are large shifts taking place between sectors or processes. Add to this that new industries will take time to develop the corespective market structure that protects firms in capital-intensive industries from cutthroat competition, so they are more likely to see “excess” investment. And in a Keynesian world, the incomes from innovation-driven investment will also boost consumption, and investment in other sectors. So major innovations are likely to be associated with booms, with rising prices and real wages.

So, but: Why do we care what Schumpeter thought 75 years ago, especially if we think half of it no longer holds? Well, it’s always interesting to see how much today’s debates rehash the classics. More importantly, Schumpeter is one of the few economists to have focused on the relation of innovation, finance and aggregate demand (even if, like a good Wicksellian, he thought the latter was important only for the price level); so working through his arguments is a useful exercise if we want to think more systematically about this stuff ourselves. I realize that as a response to Matt Y.’s silly piece, this post is both too much and poorly aimed. But as I say, Seth and Noah have done what’s needed there. I’m more interested in what relationship we think does hold, between innovation and growth, the price level, and wages.

As an economist, my objection to the Yglesias column (and to stuff like the Stiglitz paper, which it’s a kind of bowdlerization of) is that the intuition that connects rising real incomes to falling nominal incomes is just wrong, for the reasons sketched out above. But shouldn’t we also say something on behalf of “the left” about the substantive issue? OK, then: It’s about distribution. You might say that the functional distribution is more or less stable in practice. But if that’s true at all, it’s only over the very long run; it certainly isn’t in the short or medium run — as Seth points out, the share of wages in the US is distinctly lower than it was 25 years ago. And even to the extent it is true, it’s only because workers (and, yes, the left) insist that nominal wages rise. Yglesias here sounds a bit like the anti-environmentalists who argue that the fact that rivers are cleaner now than when the Clean Water Act was passed shows it was never needed. More fundamentally, as a leftist, I don’t agree with Yglesias that the only important thing about income is the basket of stuff it procures. There’s overwhelming reason — both first-principle and empirical — to believe that in advanced countries, relative income, and the power, status and security it conveys, is vastly more important than absolute income. “Don’t worry about conflicting interests or who wins or loses, just let the experts make things better for everyone”: It’s an uncharitable reading of the spirit behind this post, but is it an entirely wrong one?

UPDATE: On the other hand. In his essay on machine-breaking, Eric Hobsbawm observes that in 18th century England,

miners’ riots were still directed against high food prices, and the profiteers believed to be responsible for them.

And of course more generally, there have been plenty of working-class protests and left political programs aimed at reducing the cost of living, as well as raising wages. Food riots are a major form of popular protest historically, subsidies for food and other necessities are a staple policy of newly independent states in the third world (and, I suspect, also disapproved by the gentlemen of Slate), and food prices are a preoccupation of plenty of smart people on the left. (Not to mention people like this guy.) So Yglesias’s notion that “the left” ignores this stuff is stupid. But if we get past the polemics, there is an interesting question here, which is why mass politics based around people’s common interests as workers is so much more widespread and effective than this kind of politics around the cost of living. Or, maybe better, why one kind of conflict is salient in some times and places and the other kind of conflict in others; and of course in some, both.

[1] Seth’s piece in particular is a really masterful bit of polemic. He apologizes for responding to trollery, which, yeah, the Yglesias post arguably is. But if you must feed trolls, this is how it’s done. I’m not sure if the metaphor requires filet mignon and black caviar, or dogshit garnished with cigarette butts, or fresh human babies, but whatever it is you should ideally feed a troll, Seth serves it up.

[2] It appears that Stiglitz’s coauthor Bruce Greenwald came up with this first, and it was adopted by right-wing libertarians like Arnold Kling afterwards.

[3] I admit I’m rather skeptical about the prospects for driverless cars. Partly it’s that the whole point is that they can operate with much smaller error tolerances than existing cars — “bumper to bumper at highway speeds” is the line you always hear — but no matter how inherently reliable the technology, these things are going to be owned and maintained by millions of individual nonprofessionals. Imagine a train where the passengers in each car were responsible for making sure it was securely coupled to the next one. Yeah, no. But I think there’s an even more profound reason, connected to the kinds of risk we will and will not tolerate. I was talking to my friend E. about this a while back, and she said something interesting: “People will never accept it, because no one is responsible for an accident. Right now, if there’s a bad accident you can deal with it by figuring out who’s at fault. But if there were no one you could blame, no one you could punish, if it were just something that happened — no one would put up with that.” I think that’s right. I think that’s one reason we’re much more tolerant of car accidents than plane accidents: there’s a sense that in a car accident at least one of the people involved must be morally responsible. Totally unrelated to this post, but it’s a topic I’d like to return to at some point — that what moral agency really means is a social convention that we treat causal chains as being broken at certain points — that in some contexts we treat people’s actions as absolutely indeterminate. That there are some kinds of in-principle predictable actions by other people that we act — that we are morally obliged to act — as if we cannot predict.

In Which I Dare to Correct Felix Salmon

Felix Salmon is my favorite business blogger — super smart, cosmopolitan and impressively unimpressed by the Masters of the Universe he spends his days observing. In general, I’d expect him to be much more on top of current financial data than I am. But in today’s post on the commercial paper market, he makes an uncharacteristic mistake — or rather, uncharacteristic for him but highly characteristic of the larger conversation around finance.

According to Felix:

The commercial paper market has to a first approximation become an entirely financial market, a place for banks and shadow banks to do their short-term borrowing while the interbank market remains closed.

According to David S. Scharfstein of Harvard Business School, who also testified last week, of the 50 largest issuers of debt to money market funds today, only two are nonfinancial firms; the rest are banks and other financial companies, many of them foreign.

Once upon a time, before the financial crisis, money-market funds were a mechanism whereby individual investors could make safe, short-term loans to big corporates, disintermediating the banks. But all that has changed now. For one thing, says Davidoff, “about two-thirds of money market users are sophisticated finance investors”. For another, the corporates have evaporated away, to be replaced by financials. In the corporate world, it seems, the price mechanism isn’t working any more: either you’re a big and safe corporate and don’t want to run the refinancing risk of money-market funds suddenly drying up, or else you’re small enough and risky enough that the money market funds don’t want to lend to you at any price.

I’m sorry, but I don’t think that’s right.

Reading Felix, you get the clear impression that before the crisis, or anyway not too long ago, most borrowers in the commercial paper market were nonfinancial corporations. It has only “become an entirely financial market” relatively recently, he suggests, as nonfinancial borrowers have dropped out. But, to me at least, the real picture looks rather different.

[Graph: outstanding financial and nonfinancial commercial paper. Source: Flow of Funds]

The graph shows outstanding financial and nonfinancial commercial paper on the left scale, and the financial share of the total on the right scale. As you can see, the story is almost the opposite of the one Felix tells. Financial borrowers have always dominated the commercial paper market, and their share has fallen, not risen, in the wake of the financial crisis and recession. Relative to the economy, nonfinancial commercial paper outstanding is close to where it was at the peak of the past cycle. But financial paper is down by almost two-thirds. As a result, the nonfinancial share of the commercial paper market has doubled, from 7 to 15 percent — the highest it’s been since the 1990s.
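
The arithmetic behind that doubling is simple enough; here it is in a few lines of Python, with invented levels, since only the ratios from the graph matter.

```python
# Illustrative arithmetic only: levels are invented, the ratios follow the text.
nonfin_before, fin_before = 7.0, 93.0   # nonfinancial share ~7 percent pre-crisis
nonfin_after = nonfin_before            # nonfinancial paper roughly holds up
fin_after = fin_before * (1 - 0.60)     # financial paper falls by well over half

share_before = nonfin_before / (nonfin_before + fin_before)
share_after = nonfin_after / (nonfin_after + fin_after)
print(round(share_before, 2), round(share_after, 2))  # 0.07 0.16
# A collapse confined to financial paper is enough, on its own, to roughly
# double the nonfinancial share.
```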

Why does this matter? Well, of course, it’s important to get these things right. But I think Felix’s mistake here is revealing of a larger problem.

One of the most dramatic features of the financial crisis of fall 2008, bringing the Fed as close as it got to socializing the means of intermediation, was the collapse of the commercial paper market. But as I’ve written here before, it was almost never acknowledged that the collapse was largely limited to financial commercial paper. Nonfinancial borrowers did not lose access to credit in the way that banks and shadow banks did. The gap between the financial and nonfinancial commercial paper markets wasn’t discussed, I believe, because of the way the crisis was seen entirely through the eyes of finance.

I suspect the same thing is happening with the evolution of the commercial paper market in the past few years. The Flow of Funds shows clearly that commercial paper borrowing by nonfinancial borrowers has held up reasonably well; the fall in commercial paper lending is limited to financial borrowers. But that banks’ problems are everyone’s problems is taken for granted, or at most justified with a pious handwave about the importance of credit to the real economy.

And that’s the second assumption, again usually unstated, at issue here: that providing credit to households and businesses is normally the main activity of finance, with departures from that role an anomalous recent development. But what if the main action in the financial system has never been intermediating between ultimate lenders and borrowers? What if banks have always mostly been, not to put too fine a point on it, parasites?

During the crisis of 2008 one big question was whether it was possible to let the big banks fail, or whether the consequences for the real economy would be prohibitively awful. On the left, Dean Baker took the first position while Doug Henwood took the second, arguing that the alternative to bailouts could be a second Great Depression. I was ambivalent at the time, but I’ve been moving toward the let-them-fail view. (Especially if the counterfactual is governments and central banks putting comparable resources into sheltering the real economy from the collapsing banks as they have put into propping them up.) The evolution of the commercial paper market looks to me like one more datapoint supporting that view. The collapse of interbank lending doesn’t seem to have affected nonbank borrowers much.

(Which brings us to a larger point, of whether the continued depressed state of the real economy is due to a lack of access to credit. Obviously I think not, but that’s beyond the scope of this post.)

An insidious feature of the world we live in is an unconscious tendency to adopt finance’s point of view. This is as true of intellectuals as of everyone else. An anthropologist of my acquaintance, for instance, did his fieldwork on the New York financial industry. Nothing wrong with that — he’s got some very smart things to say about it — but you really can’t imagine someone doing a similar project on any other industry, apart from high-tech internet stuff. In our culture, finance is just interesting in a way that other businesses are not. I’m not exempting myself from this, by the way. The financial crisis and its aftermath was the most exciting time in memory to be thinking about economics; I’m not going to deny it, it was fun. And there are plenty of people on the left who would say that a tendency, which I confess to, to let the conflict between Wall Street and the real economy displace the conflict between labor and capital in our political language, is a symptom — a kind of reaction-formation — of the same intellectual capture.

But that is perhaps over-broadening the point, which is just this: That someone as smart as Felix Salmon could so badly misread the commercial paper market is a sign of how hard we have to work to distinguish the state of the banks from the state of the economy.

Adventures in Cognitive Dissonance

Brad DeLong, May 25:

WHAT ARE THE CORE COMPETENCES OF HIGH FINANCE? 

The core competences of high finance are supposed to be (a) assessing risk, and (b) matching people with risks to be carried with people with the risk-bearing capacity to carry them. Robert Waldmann has a different view:

I think their core competencies are (a) finding fools for counterparties and (b) evading regulations/disguising gambling as hedging.

Regulatory arbitrage, and persuading those who do not understand risks that they should bear them–those are not socially-valuable activities.

Brad DeLong, yesterday:

NEXT YEAR’S EXPECTED EQUITY RETURN PREMIUM IS 9% 

If you have any risk-bearing capacity at all, now is the time to use it.

So I guess last week’s doubts have been assuaged. Or did he really mean to write “If you have any capacity for being fooled into being a swindler’s counterparty, now is the time to use it”?

EDIT: Oh and then, the post just after that one argued — well, really, assumed — that the current value of Facebook shares gives an unbiased estimate of future Facebook earnings, and therefore of the net wealth that Facebook has created. (I guess not a single dollar of FB revenue comes at the expense of other firms, which must be a first in the history of capitalism.) Is there some way of consistently believing both that current stock values give an unbiased estimate of the present value of future earnings, and that stock values a year from now will be much higher than they are today? I can’t see one. But then I’ve never had the brain for theodicy.

UPDATE: Anyone reading this should immediately go and read rsj’s much better take on the same DeLong post over at Windyanabasis. He explains exactly why DeLong is confused here.

What We Talk About When We Don’t Talk About Demand

There sure are a lot of ways to not say aggregate demand.

Here’s the estimable Joseph Stiglitz, not saying aggregate demand in Vanity Fair:

The parallels between the story of the origin of the Great Depression and that of our Long Slump are strong. Back then we were moving from agriculture to manufacturing. Today we are moving from manufacturing to a service economy. The decline in manufacturing jobs has been dramatic—from about a third of the workforce 60 years ago to less than a tenth of it today. … There are two reasons for the decline. One is greater productivity—the same dynamic that revolutionized agriculture and forced a majority of American farmers to look for work elsewhere. The other is globalization… (As Greenwald has pointed out, most of the job loss in the 1990s was related to productivity increases, not to globalization.) Whatever the specific cause, the inevitable result is precisely the same as it was 80 years ago: a decline in income and jobs. The millions of jobless former factory workers once employed in cities such as Youngstown and Birmingham and Gary and Detroit are the modern-day equivalent of the Depression’s doomed farmers.

This sounds reasonable, but is it? Nick Rowe doesn’t think so. Let’s leave aside globalization for another post — as Stiglitz says, it’s less important anyway. It’s certainly true that manufacturing employment has fallen steeply, even while the US — despite what you sometimes hear — continues to produce plenty of manufactured goods. But does it make sense to say that the rise in manufacturing productivity is responsible for mass unemployment in the country as a whole?

There’s certainly an argument in principle for the existence of technological unemployment, caused by rapid productivity growth. Lance Taylor has a good discussion in chapter 5 of his superb new book Maynard’s Revenge (and a more technical version in Reconstructing Macroeconomics.) The idea is that with the real wage fixed, an increase in labor productivity will have two effects. First, it reduces the amount of labor required to produce a given level of output, and second, it redistributes income from labor to capital. Insofar as the marginal propensity to consume out of profit income is lower than the marginal propensity to consume out of wage income, this redistribution tends to reduce consumption demand. But insofar as investment demand is driven by profitability, it tends to increase investment demand. There’s no a priori reason to think that one of these effects is stronger than the other. If the former is stronger — if demand is wage-led — then yes, productivity increases will tend to lower demand. But if the latter is stronger — if demand is profit-led — then productivity increases will tend to raise demand, though perhaps not by enough to offset the reduced labor input required for a given level of output. For what it’s worth, Taylor thinks the US economy has profit-led demand, but not necessarily enough so to avoid a Luddite outcome.
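
To fix ideas, here is a minimal numerical sketch of that mechanism in Python. The parameter values and functional forms are my own illustrative assumptions, not Taylor’s calibration; the only point is that the sign of the demand response to a productivity increase turns on whether investment’s response to the profit share is bigger or smaller than the gap between the two consumption propensities.

```python
# Illustrative sketch of the wage-led vs. profit-led logic described above.
# All parameter values are made up for illustration, not taken from Taylor.

def demand_per_unit_output(productivity, real_wage,
                           mpc_wages=0.9, mpc_profits=0.4, inv_response=0.3):
    """Consumption plus investment demand generated per unit of output.

    With the real wage fixed, higher productivity lowers the wage share,
    which cuts consumption (since mpc_wages > mpc_profits) but raises
    investment (which here responds to the profit share).
    """
    wage_share = real_wage / productivity   # unit labor cost = wage share
    profit_share = 1.0 - wage_share
    consumption = mpc_wages * wage_share + mpc_profits * profit_share
    investment = inv_response * profit_share
    return consumption + investment

before = demand_per_unit_output(productivity=1.0, real_wage=0.7)
after = demand_per_unit_output(productivity=1.1, real_wage=0.7)  # 10% productivity gain

print(f"demand per unit of output: {before:.3f} -> {after:.3f}")
# With inv_response = 0.3 the gain in investment demand is smaller than the
# loss of consumption demand, so demand falls: the wage-led (Luddite) case.
# Raise inv_response above mpc_wages - mpc_profits (= 0.5 here) and the sign
# flips: the profit-led case, where demand rises with productivity.
```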

Taylor is a structuralist. (The label I think I’m going to start wearing myself.) You would be unlikely to find this story in the mainstream, because technological unemployment is impossible if wages equal the marginal product of labor, and because it requires that output be normally, and not just exceptionally, demand-constrained.

It’s a good story but I have trouble seeing it having much to do with the current situation. Because, where’s the productivity acceleration? Underlying hourly labor productivity growth just keeps bumping along at 2 percent and change a year. Over the whole postwar period, it averages 2.3 percent. Over the past twenty years, 2.2 percent. Over the past decade, 2.3 percent. Where’s the technological revolution?

Just do the math. If underlying productivity rises at 2 percent a year, and demand constraints cause output to stay flat for four years [1], then we would expect employment to fall by 8 percent. In other words, lack of demand explains the whole fall in employment. [2] There’s no need to bring in structural shifts or anything else happening on the supply side. A fall in demand, plus a stable rate of productivity increase, gets you exactly what we’ve seen.
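
Spelled out, using the round numbers in the text, the arithmetic looks like this:

```python
# Back-of-the-envelope version of the calculation above: if trend labor
# productivity grows 2% a year and demand holds output flat for four years,
# how much does the employment needed to produce that output fall?

productivity_growth = 0.02
years = 4

labor_needed = 1.0 / (1 + productivity_growth) ** years
print(f"employment implied by flat output: {100 * labor_needed:.1f}% of the initial level")
# -> roughly 92%, i.e. a fall of about 8%. The actual fall is about 5%, and
#    footnote [2] explains the gap: productivity growth itself decelerates
#    in recessions.
```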

It’s important to understand why demand fell, but from a policy standpoint, no actually it isn’t. As the saying goes, you don’t refill a flat tire through the hole. The important point is that we don’t need to know anything about the composition of output to understand why unemployment is so high, because the relationship between the level of output and employment is no different than it’s always been.

But isn’t it true that since the end of the recession we’ve seen a recovery in output but no recovery in employment? Yes, it is. So doesn’t that suggest there’s something different happening in the labor market this time? No, it doesn’t. Here’s why.

There’s a well-established empirical relationship in macroeconomics called Okun’s law, which says that, roughly, a one percentage point change in output relative to potential changes employment by a third to a half a percentage point. There are two straightforward reasons for this: first, a significant fraction of employment is overhead labor, which firms need in equal amounts whether their current production levels are high or low. And second, if hiring and training employees is costly, firms will be reluctant to lay off workers in the face of declines in output that are believed to be temporary. For both these reasons (and directly contrary to the predictions of a "sticky wages" theory of recessions) employment invariably falls by less than output in recessions. Let’s look at some pictures.

These graphs show the quarter by quarter annualized change in output (vertical axis) and employment (horizontal axis) over recent US business cycles. The diagonal line is the regression line for the postwar period as a whole; as you would expect, it passes through zero employment growth around two percent output growth, corresponding to the long-run rate of labor productivity growth.
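
For anyone who wants to reproduce something like these figures, here is a rough sketch using quarterly FRED data. The series mnemonics (GDPC1 for real GDP, PAYEMS for payroll employment) and the sample choices are my assumptions; the graphs in this post may have been built slightly differently.

```python
# Sketch of the output-vs.-employment scatter described above, from FRED data.
import numpy as np
import pandas as pd
import pandas_datareader.data as web
import matplotlib.pyplot as plt

start = "1948-01-01"
gdp = web.DataReader("GDPC1", "fred", start)["GDPC1"]
emp = web.DataReader("PAYEMS", "fred", start)["PAYEMS"].resample("QS").mean()

# Quarter-over-quarter growth at annualized rates, in percent.
g_gdp = 100 * ((gdp / gdp.shift(1)) ** 4 - 1)
g_emp = 100 * ((emp / emp.shift(1)) ** 4 - 1)
df = pd.concat([g_emp, g_gdp], axis=1, keys=["emp", "gdp"]).dropna()

# Postwar regression line: output growth on employment growth. The intercept
# should come out near 2, i.e. zero employment growth at roughly 2% output
# growth, matching trend productivity growth.
slope, intercept = np.polyfit(df["emp"], df["gdp"], 1)
print(f"output growth = {intercept:.2f} + {slope:.2f} x employment growth")

# One cycle at a time, e.g. the 2007 recession and its aftermath.
cycle = df.loc["2007-01-01":"2012-12-31"]
plt.scatter(cycle["emp"], cycle["gdp"])
xs = np.linspace(df["emp"].min(), df["emp"].max(), 100)
plt.plot(xs, intercept + slope * xs)
plt.xlabel("employment growth (annualized %)")
plt.ylabel("output growth (annualized %)")
plt.show()
```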

[Graphs: output growth vs. employment growth for the 1960, 1969, 1980 and 1981, 1990, 2001, and 2007 recessions.]

What you see is that in every case, there’s the same clockwise motion. The initial phase of the recession (1960:2 to 1961:1, 1969:1 to 1970:4, etc.) is below the line, meaning growth has fallen more than employment. This is the period when firms are reducing output but not reducing employment proportionately. Then there’s a vertical upward movement at the left, when growth is accelerating and employment is not; this is the period when, because of their excess staffing at the bottom of the recession, firms are able to increase output without much new hiring. Finally there’s a movement toward the right as labor hoards are exhausted and overhead employment starts to increase, which brings the economy back to the long-term relationship between employment and output. [3] As the figures show, this cycle is found in every recession; it’s the inevitable outcome when an economy experiences negative demand shocks and employment is costly to adjust. (It’s a bit harder to see in the 1980-1981 graph because of the double-dip recession of 1980-1981; the first cycle is only halfway finished in 1981:2 when the second cycle begins.)

There’s nothing exceptional, in these pictures, about the most recent recession. Indeed, the accumulated deviations to the right of the long-term trend (i.e., higher employment than one would expect based on output) are somewhat greater than the accumulated deviations to the left of it. Nothing exceptional, that is, except how big it is, and how far it lies to the lower-left. In terms of the labor market, in other words, the Great Recession was qualitatively no different from other postwar recessions; it was just much deeper.

I understand the intellectual temptation to look for a more interesting story. And of course there are structural explanations for why demand fell so far in 2007, and why conventional remedies have been relatively ineffective in boosting it. (Tho I suspect those explanations have more to do with the absence of major technological change than an excess of it.) But if you want to know the proximate reason why unemployment is so high today, "there's a recession on" still looks like a sufficiently good working hypothesis.

[1] Real GDP is currently less than 0.1 percent above its level at the end of 2007.

[2] Actually, employment is down by only about 5 percent, suggesting that if anything we need a structural story for why it hasn’t fallen more. But there’s no real mystery here: productivity growth is not really independent of demand conditions, and it always decelerates in recessions.

[3] Changes in hours worked per employee are also part of the story, in both downturn and recovery.

Do Prices Matter?

Do exchange rates drive trade flows? Yes, says David Rosnick. Prices matter:

What happens to the economy as the dollar falls? …Over time, Americans notice that British goods have become more expensive in comparison to domestically produced goods. In other words, the price of U.S.-made sweaters becomes cheaper relative to the price of sweaters imported from Britain. This will lead us to buy fewer sweaters from Britain and more domestically manufactured sweaters.

At the same time, the British notice that American goods have become relatively inexpensive in comparison to goods made at home. This means it takes fewer pounds to buy a sweater made in the United States, so the British will buy more sweaters made in the United States and fewer of their domestically manufactured sweaters.

While American producers notice the increased demand for their exports, allowing them to raise their prices somewhat and still sell more than they had before the dollar fell. Similarly, for British exporters to continue selling they must lower prices.
Thus, as everyone eventually adjusts to the fall in the dollar, the trade deficit shrinks. This is not new economics by any stretch…

Indeed it’s not. That changes in prices induce changes in transaction volumes which smoothly restore equilibrium is the first article of the economist’s catechism. But how much a given volume responds to a given price change, and whether the response is reliable and strong enough to make the resulting equilibrium relevant to real economies, are empirical questions. You could tell a similar parable about an increase in the minimum wage leading to a lower demand for low-wage labor, but as I’m sure the folks at CEPR would agree, that doesn’t mean it’s what we actually see. You have to look at the evidence.

So what’s the evidence on this point? Rosnick offers a graph showing two big falls in the value of the dollar after peaks in the mid-80s and mid-2000s, and falls in the trade deficit a few years later, in the early 90s and late 2000s. Early 90s and late 2000s … hm, what else was happening in those years? Oh, right, deep recessions. (The early 2000s recession was very mild.) Funny that the same guy who’s constantly chastising economists for ignoring the growth and collapse of a huge housing bubble, when he turns to trade … ignores a huge housing bubble.

Still, isn’t the picture basically consistent with the story that when the dollar declines, US imports get more expensive and fall, and US exports get cheaper and rise? Not necessarily: Rosnick’s graph doesn’t show imports and exports separately. And when we separate them out, we see something funny.

[Graph: US imports and exports, shown separately.]

In a world where trade flows were mainly governed by exchange rates, a country’s imports and exports would show a negative correlation. After all, the same exchange-rate change that makes exports more expensive on world markets makes imports cheaper here, and vice versa. But that isn’t what we see at all. Except in the 1980s (when the exchange rate clearly did matter, but not in the way Rosnick supposes; see Robert Blecker) exports and imports very clearly move together. And it’s not just a matter of a long-term rise in both imports and exports. Every period that saw a significant fall in imports — 1980-1984; 2000-2002; 2007-2009 — saw a large fall in exports as well. This is simply not what happens in a world where prices (are the main things that) matter.
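
If you want to check the co-movement claim yourself, a rough sketch along these lines will do it. I’m assuming the FRED series EXPGS, IMPGS and GDP; the transformation is not necessarily the one behind the graph above.

```python
# Rough check of the claim that US imports and exports move together rather
# than inversely. Assumes the FRED series EXPGS (exports), IMPGS (imports)
# and GDP (nominal GDP), all quarterly.
import pandas_datareader.data as web

start = "1970-01-01"
data = web.DataReader(["EXPGS", "IMPGS", "GDP"], "fred", start).dropna()

exports = 100 * data["EXPGS"] / data["GDP"]   # shares of GDP, in percent
imports = 100 * data["IMPGS"] / data["GDP"]

# Year-over-year changes in the two shares, and their correlation.
d_ex = exports.diff(4).dropna()
d_im = imports.diff(4).dropna()
print("correlation of yoy changes:", round(d_ex.corr(d_im), 2))
# In a world where exchange rates drove trade flows, this correlation should
# be negative. The claim in the text is that it comes out positive: imports
# and exports fall together in recessions and rise together in booms.
```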

(This discrepancy between real-world trade patterns and the textbook vision applies to almost all industrialized countries, and always has. It was noticed long ago by Robert Triffin, who brought it up to argue that movements in relative price levels did not govern trade patterns under the gold standard, as Ricardo and his successors had claimed. But it is just as strong a counterargument to today’s conventional wisdom that exchange rates govern trade.)

So if changes in exchange rates don’t drive trade, what does? Lots of things, many of them no doubt hard to measure, or to influence through policy. But one obvious candidate is changes in incomes. One of the big advances of the first generation of Keynesian economists — people like Triffin, and especially Joan Robinson — was to show how, just as prices (the wage and interest rates) fail to equilibrate the domestic economy, leaving aggregate income to adjust, relative prices internationally don’t equilibrate the global economy, leaving output or growth rates to adjust. In the short run, business cycle-type fluctuations reliably involve changes in investment and consumer-durable purchases disproportionate to the change in output as a whole; given the mix of traded and non-traded goods for most countries, this creates an allometry in which short-run changes in output are accompanied by even larger short-run changes in imports and exports. For a country that runs a trade deficit in “normal” times, this means that recessions are reliably associated with smaller deficits and booms with larger ones. In the long run, there is also a reliable tendency for increments to income to involve demand for a changing mix of goods, with a greater share of demand falling on “leading sectors,” historically manufactured goods. (The flipside of this is Engel’s law, which states that the share of income spent on food falls as income rises.) Given that both a country’s mix of industries and its trade partners are relatively fixed, this creates a stable relationship between relative growth rates and trade balance movements. Neither of these channels is perfectly reliable, by any means — and the whole point of industrial policy is to circumvent the second one — but they are still much stronger influences on trade flows than exchange rates (or other relative prices) are.

So with that in mind, let’s look at another version of the graph. This one shows the year-over-year change in the trade balance as a share of GDP, the same change as predicted by an OLS regression on total GDP growth over the past three years, and as predicted by the change in the value of the dollar over the past three years. [1]

[Graph: actual change in the trade balance, with the GDP-based and dollar-based predictions.]

Not surprisingly, neither prediction gives a terribly close fit. But qualitatively, at least, the predictions based on GDP do a reasonable job: They capture every major worsening and improvement in the trade balance. True, they under-predict the improvement in the trade balance in the later 80s — the Plaza Accords mattered — but it’s clear that if you knew the rate of GDP growth over the next three years, you could make a reasonably reliable prediction about the behavior of the trade balance. Knowing the change in the value of the dollar, on the other hand, wouldn’t help you much at all. (And just to be clear, this isn’t about the particular choice of three years. Two years and four years look roughly similar, and at a one-year horizon exchange rates aren’t predictive at all.)

Here’s another presentation of the same data, a scatterplot comparing the three-year change in the trade balance to the three-year growth of GDP (blue, left axis) and three-year change in the exchange rate (red, right axis). Again, while the correlation is fairly loose for both, it’s clearly tighter for GDP. All the periods of strongest improvement in the trade balance are associated with weak GDP growth, and vice versa; similarly all the periods of strong GDP growth are associated with worsening of the trade balance, and vice versa. There’s no such consistent association for the trade balance and changes in the exchange rate.

[Scatterplot: three-year change in the trade balance against three-year GDP growth and the three-year change in the dollar.]

The fit could be improved by using some measure of disposable income — ideally adjusted for wealth effects — in place of GDP, and by using some better measure of relative prices in place of the exchange-rate index — altho there’s some controversy about what that better measure would be. And theoretically, instead of just the three-year change, you should use the individual lags.

Still, the takeaway, if you’re a policymaker, is clear. If you want to improve the trade balance, slower growth is the way to go. And if you want to boost growth, you probably are going to have to ignore the trade balance. Personally, I want door number two. I assume Rosnick doesn’t want door one. But what I’m not at all sure about is what concrete evidence makes him think that exchange-rate policy opens up a third door.

[1] Technicalities. I separately regressed the three-year changes in imports and exports (as a percent of GDP) on the percentage change over the same period in GDP and in the Fed’s trade-weighted major-currencies dollar index, respectively. It’s all quarterly data, downloaded from FRED. The graphs show the predicted values from the two regressions. This is admittedly crude, but I would argue that the more sophisticated approaches are in some respects less appropriate for the specific question being addressed here. For example, suppose hypothetically that a currency devaluation really did tend to improve the trade balance, but that it also tended to raise GDP growth, raising imports and offsetting the initial improvement. From an analytic standpoint, it might be appropriate to correct for the induced GDP change to get a better estimate of the pure exchange-rate effect. But from a policy perspective, the offsetting growth-imports effect has to be taken into account in evaluating the effects of a devaluation, just as much as the initial trade-balance improvement. Maybe we should say: Academics are interested in partial derivatives, policymakers in total derivatives?
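
For what it’s worth, the exercise described in this footnote could be set up along roughly the following lines. The series mnemonics are my assumptions (GDP for nominal GDP, EXPGS and IMPGS for trade flows, TWEXMMTH for the Fed’s major-currencies dollar index, a series FRED has since discontinued), and the horizon and sample are guesses, so treat it as a sketch rather than a replication.

```python
# Sketch of the regressions described in footnote [1]. Series mnemonics and
# sample choices are assumptions; the original may differ in the details.
import pandas as pd
import pandas_datareader.data as web
import statsmodels.api as sm

start = "1973-01-01"
q = web.DataReader(["GDP", "EXPGS", "IMPGS"], "fred", start)
dollar = web.DataReader("TWEXMMTH", "fred", start)["TWEXMMTH"].resample("QS").mean()

ex_share = 100 * q["EXPGS"] / q["GDP"]   # export share of GDP, percent
im_share = 100 * q["IMPGS"] / q["GDP"]   # import share of GDP, percent

horizon = 12  # quarters, i.e. three years
d_ex = ex_share.diff(horizon)                            # change in export share
d_im = im_share.diff(horizon)                            # change in import share
g_gdp = 100 * (q["GDP"] / q["GDP"].shift(horizon) - 1)   # 3-year GDP growth, %
g_usd = 100 * (dollar / dollar.shift(horizon) - 1)       # 3-year dollar change, %

def predict(dep, regressor):
    """Fitted values from an OLS regression of dep on a constant and regressor."""
    df = pd.concat([dep, regressor], axis=1, keys=["y", "x"]).dropna()
    fit = sm.OLS(df["y"], sm.add_constant(df["x"])).fit()
    return fit.fittedvalues

# Predicted change in the trade balance (exports minus imports), first from
# GDP growth alone, then from the dollar alone, as in the graphs above.
tb_from_gdp = predict(d_ex, g_gdp) - predict(d_im, g_gdp)
tb_from_usd = predict(d_ex, g_usd) - predict(d_im, g_usd)
actual_tb = (d_ex - d_im).dropna()
print(pd.concat([actual_tb, tb_from_gdp, tb_from_usd], axis=1,
                keys=["actual", "gdp_based", "dollar_based"]).tail())
```
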
EDIT: Oops! How did I not notice that this article was by somebody named David Rosnick, not Dean Baker? Makes me feel a bit better — I know Dean does share this article’s basic view of trade and the dollar, but I would hope his take would be a little less dependent on textbook syllogisms, and a little more attentive to actual patterns of trade. Clothing from the UK is hardly representative of US imports; to the extent we do import any, it’s likely to be high-end branded stuff that is particularly price-inelastic.