There has been a lot of debate about whether the high inflation of 2021-2022 has been due mainly to supply or demand factors. Joe Stiglitz and Ira Regmi have a new paper from Roosevelt making the case for supply disruptions as the decisive factor. It’s the most thorough version of that case that I’ve seen, and I agree with almost all of it. I highly recommend reading it.
What I want to do in this post is something different. I want to clarify what it would mean, if inflation were in fact driven by demand. Because there are two quite distinct stories here that I think tend to get mixed up.
In the textbook story, production takes place with constant returns to scale and labor as the only input. (We could introduce other inputs like land or imports without affecting the logic.) Firms have market power, so prices are set as a positive markup over unit costs. The markup depends on various things (regulations, market structure, etc.) but not on the current level of output. With constant output per worker, this means that the real wage and wage share are also constant.
The nominal wage, however, depends on the state of the labor market. The lower the unemployment rate, and the more bargaining power workers have, the higher the wage they will be in a position to demand. (We can think of this as an expected real wage, or as a rate of change from current wages.) When unemployment falls, workers command higher wages; but given markup pricing, these higher wages are simply passed on to higher prices. If we think of wages as a decreasing function of unemployment, there will be a unique level of unemployment where wage growth is equal to productivity growth plus the target inflation rate.
You can change this in various ways without losing the fundamental logic. If there are non-labor costs, then rising nominal wages can be passed on less than one-for-one, and tight labor markets may result in faster real wage growth along with higher inflation. But there will still be a unique level of wage growth, and underlying labor-market conditions, that is consistent with the central bank’s target. This is the so-called NAIRU or natural rate of unemployment. You don’t hear that term as much as you used to, but the logic is very present in modern textbooks and the Fed’s communications.
There’s a different way of thinking about demand and inflation, though, that you hear a lot in popular discussions — variations on “too much money chasing too few goods.” In this story, rather than production being perfectly elastic at a given cost, production is perfectly inelastic — the amount of output is treated as fixed. (That’s what it means to talk about “too few goods”.) In this case, there is no relationship between costs of production and prices. Instead, the price ends up at the level where demand is just equal to the fixed quantity of goods.
In this story, there is no relationship between wages and prices — or at least, the former has no influence on the latter. Profit maximizing businesses will set their price as high as they can and still sell their available stocks, regardless of what it cost to produce them.
In the first story, the fundamental scarcity is inputs, meaning basically labor. In the second, what is scarce is final goods. Both of these are stories about how an increase in the flow of spending can cause prices to rise. But the mechanism is different. In the first case, transmission happens through the labor market. In the second, labor market conditions are at best an indicator of broader scarcities. In the first story, the inflation barrier is mediated by all sorts of institutional factors that can change the market power of businesses and the bargaining power of workers. In the second story it comes straightforwardly from the quantity of stuff available for purchase.
One concrete difference between the stories is that only in the first one is there a tight quantitative relationship between wages and prices. When you say “wage growth consistent with price stability,” as Powell has in almost all of his recent press conferences, you are evidently thinking of wages as a cost. If we are thinking of wages as a source of demand, or an indicator of broader supply constraints, we might expect a positive relationship between wages and inflation but not the sort of exact quantitative relationship that this kind of language implies.
In any case, what we don’t want to do at this point is to say that one of these stories is right and the other is wrong. Our goal is simply to clarify what people are saying. Substantively, both could be wrong.
Or, both could be right, but in different contexts.
If we imagine cost curves as highly convex, it’s very natural to think of these two cases as describing two different situations or regimes or time scales in the same economy.1 Imagine something like the figure below. At a point like c, marginal costs are basically constant, and shifts in demand simply result in changes in output. At a point like b, on the other hand, output is very inelastic, and shifts in demand result almost entirely in changes in price.
Note that we can still have price equal to marginal cost, or a fixed markup to it, in both cases. It’s just that in the steeply upward-sloping section, price determines cost rather than vice versa.
Another point here is that once we are facing quantity constraints, the markup over average cost (which is all that we can normally observe) is going to rise. But this doesn’t necessarily reflect an increase in the markup over (unobservable) marginal cost, or any change in producers’ market power or pricing decisions.
We might think of this at the level of a firm, an industry or the economy as a whole. Normally, production is at a point like a — capitalists will invest to the point where capacity is a bit greater than normal levels of output. As long as production is taking place within the normal level of utilization, marginal costs are constant. But once normal capacity is exceeded by more than some reasonable margin, costs rise rapidly.
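To make the two regimes concrete, here is a minimal numerical sketch of a convex cost curve like this; the functional form and every parameter are invented for illustration, not taken from the figure.

```python
# A minimal sketch of a convex cost curve (invented parameters): marginal cost is
# flat up to normal capacity and rises steeply beyond it, with price set as a
# fixed markup over marginal cost.

def marginal_cost(q, mc_flat=1.0, capacity=100.0, steepness=0.5):
    """Constant marginal cost below normal capacity, steeply rising above it."""
    if q <= capacity:
        return mc_flat
    return mc_flat + steepness * (q - capacity) ** 2

def price(q, markup=1.2, **kwargs):
    """Price as a fixed markup over marginal cost at output level q."""
    return markup * marginal_cost(q, **kwargs)

# Below normal capacity (a point like c): more demand changes output, not price.
print(price(80), price(95))    # 1.2 and 1.2 -- the flat region
# Beyond capacity (a point like b): output barely moves, price jumps.
print(price(101), price(103))  # 1.8 and 6.6 -- the steep region
```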
This framework does a couple of things. First, it clarifies that demand can lead to higher prices in two different ways. First, it shifts the demand curve (not shown here, but you can imagine a downward-sloping diagonal line) up and to the right. Second, insofar as it raises wages, it shifts the cost curve upward. The first effect does not matter for prices as long as production is within normal capacity limits. The second effect does not matter once production has exceeded those limits.
Second, it helps explain why shifts in the composition of output led to a rise in the overall price level. Imagine a situation where most industries were at a position like a, operating at normal capacity levels. A big change in the mix of demand would shift some to a point like c and others to a point like b. The first would see lower output at their old prices, while the latter would see little increase in output but a big rise in prices. This has nothing to do with price stickiness or anything like that. It simply reflects the fact that it’s easy to produce at less than full capacity and very hard to produce much above it.
ETA: One of the striking features of the current disinflation is that it is happening without any noticeable weakening of the labor market. We could see that as just one more piece of evidence for the Stiglitz-Regmi position that it was transitory supply problems all along. But if you really want to credit the Fed, you could use the framework here to do it. Something like this:
In a sustained situation of strong demand, businesses will expect to be able to sell more in the future, and will invest enough to raise capacity in line with output. So the cost curve will shift outward as demand rises, and production will remain in the normal-capacity, constant-marginal-cost range. In this situation, the way that demand is raising prices is via wages. (Unlike business capacity, the labor force does not, in this story, respond to demand.) Rising wages raise costs even at normal utilization levels, so the only way that policy can slow price growth is via weaker labor markets that reduce wage growth. But when demand rises rapidly and unexpectedly, capacity will not be able to keep up in the short run, and we’ll end up on the right-hand, steeply upward-sloping part of the cost curve. At this point, price increases are not coming from wages or the cost side in general. Businesses cannot meaningfully increase output in the short run, so prices are determined from the demand side rather than as a markup. In this context, price stability calls for policy to reduce desired purchases to what business can currently produce (presumably by reducing aggregate income). In principle this can happen without higher unemployment or slower wage growth.
I personally am not inclined to credit the Fed with a soft landing, even if all the inflation news is good from here on out. But if you do want to tell that story, convex supply curves are something you might like to have in your toolkit.
On December 2-3, 2022, the Political Economy Research Institute at the University of Massachusetts-Amherst (where I did my economics PhD) will be hosting a conference on “Global Inflation Today: What Is To Be Done?”1
I will be speaking on “Rethinking Supply Constraints,” a new project I am working on with Arjun Jayadev. Our argument is that we should think of supply constraints as limits on the speed at which production can be reorganized and labor and other resources can be reallocated via markets, as opposed to limits on the level of production determined by “real” resources. The idea is that this makes better sense of recent macroeconomic developments; fits better with a broader conception of the economy in terms of human productive activity rather than the exchange of pre-existing goods; and points toward more promising responses to the current inflation.
I was hoping to have a draft of the paper done for the conference, but that is not to be. But I do have a set of slides, which give at least a partial sketch of the argument. Feedback is most welcome!
Here’s the very short version of this very long post:
Hysteresis means that a change in GDP today has effects on GDP many years in the future. In principle, this could be because it affects either future aggregate demand or potential output. These two cases aren’t distinguished clearly in the literature, but they have very different implications. The fact that the Great Recession was followed by a period of low inflation, slow wage growth and low interest rates, rather than the opposite, suggests that the persistent-demand form of hysteresis is more important than potential-output hysteresis. The experience of the Great Recession is consistent with perhaps 20 percent of a shock to demand in this period carrying over to demand in future periods. This value in turn lets us estimate how much additional spending would be needed to permanently return GDP to the pre-2007 trend: 50-60 percent of GDP, or $10-12 trillion, spread out over a number of years.
Supply Hysteresis and Demand Hysteresis
The last few years have seen renewed interest in hysteresis – the idea that shifts in demand can have persistent effects on GDP, well beyond the period of the “shock” itself. But it seems to me that the discussion of hysteresis doesn’t distinguish clearly between two quite different forms it could take.
On the one hand, demand could have persistent effects on output because demand influences supply – this seems to be what people usually have in mind. But on the other hand, demand itself might be persistent. In time-series terms, in this second story aggregate spending behaves like a random walk with drift. If we just look at the behavior of GDP, the two stories are equivalent. But in other ways they are quite different.
Let’s say we have a period in which total spending in the economy is sharply reduced for whatever reason. Following this, output is lower than we think it otherwise would have been. Is this because (a) the economy’s productive potential was permanently reduced by the period of reduced spending? Or is it (b) because the level of spending in the economy was permanently reduced? I will call the first case supply hysteresis and the second demand hysteresis.
It might seem like a semantic distinction, but it’s not. The critical thing to remember is that what matters for much of macroeconomic policy is not the absolute level of output but the output gap — the difference between actual and potential output. If current output is above potential, then we expect to see rising inflation. (Depending on how “potential” is understood, this is more or less definitional.) We also expect to see rising wages and asset prices, shrinking inventories, longer delivery times, and other signs of an economy pushing against supply constraints. If current output is below potential, we expect the opposite — lower inflation or deflation, slower wage growth, markets in general that favor buyers over sellers. So while lower aggregate supply and lower aggregate demand may both translate into lower GDP, in other respects their effects are quite different. As you can see in my scribbles above, the two forms of hysteresis imply opposite output gaps in the period following a deep recession.
Imagine a hypothetical case where there is a large fall in public spending for a few years, after which spending returns to its old level. For purposes of this thought experiment, assume there is no change in monetary policy – we’re at the ZLB the whole time, if you like. In the period after the depressed spending ends, will we have (1) lower unemployment and higher inflation than before, because the period of depressed spending permanently reduced labor force participation and productivity, while demand returns to its old level? Or will we have (2) higher unemployment and lower inflation than if the fall in spending had not occurred, because the lost income during the period of depressed spending leads to permanently lower demand?
Supply hysteresis implies (1), that a temporary negative demand shock will lead to persistently higher inflation and lower unemployment (because the labor force will be smaller). Demand hysteresis implies (2), that a temporary negative demand shock will lead to permanently lower inflation and higher unemployment. Since the two forms of hysteresis make diametrically opposite predictions in this case, it seems important to be clear which one we are imagining. Of course in the real world, we could see a combination of both, but they are still logically distinct.
Most people reading this have probably seen a version of the picture below. On the eve of the pandemic, real per-capita GDP was about 15 percent below where you’d expect it to be based on the pre-2007 trend. (Or based on pre-2007 forecasts, which largely followed the trend.) Let’s say we agree that the deviation is in some large part due to the financial crisis: Are we imagining that output has persistently fallen short of potential, or that potential has fallen below trend? Or again, it might be a combination of both.
In the first case, we would expect monetary policy to be generally looser in the period after a negative demand shock, in the second case tighter. In the first case we’d expect lower inflation in the period after the shock, in the second case higher.
It seems to me that most of the literature on hysteresis does not really distinguish these cases. This recent IMF paper by Antonio Fatas and coauthors, for example, defines hysteresis as a persistent effect of demand shocks on GDP. This could be either of the two cases. In the text of the paper, they generally assume hysteresis means an effect of demand on supply, and not a persistence of demand itself, but they don’t explicitly spell this out or make an argument for why the latter is not important.
It is clear that the original use of the term hysteresis was understood strictly as what I am calling supply hysteresis. (So perhaps it would be better to reserve the word for that, and make up a new name for the other thing.) If you read the early literature on hysteresis, like these widely-cited Laurence Ball papers, the focus was on the European experience of the 1980s and 1990s; hysteresis is described as a change in the NAIRU, not as an effect on employment itself. The mechanism is supposed to be a specific labor-market phenomenon: the long-term unemployed are no longer really available for work, even if they are counted in the statistics. In other words, sustained unemployment effectively shrinks the labor force, which means that in the absence of policy actions to reduce demand, the period following a deep recession will see faster wage growth and higher inflation than we would have expected.
(This specific form of supply hysteresis implies a persistent rise in unemployment following a downturn, just as demand hysteresis does. The other distinctions above still apply, and other forms of supply hysteresis would not have this implication.)
Set aside for now whether supply hysteresis was a reasonable description of Europe in the 1980s and 1990s. Certainly it was a welcome alternative to the then-dominant view that Europe needed high unemployment because of over-protective labor market institutions. But whether or not thinking of hysteresis in terms of the NAIRU made sense in that context, it does not make sense for either Europe or the US (or Japan) in the past decade. Everything we’ve seen has been consistent with a negative output gap — with actual output below potential — with a depressed level of demand, not of supply. Wage growth has been unexpectedly weak, not strong; inflation has been below target; and central banks have been making extra efforts to boost spending rather than to rein it in.
Assuming we think that all this is at least partly the result of the 2007-2009 financial crisis — and thinking that is pretty much the price of entry to this conversation — that suggests we should be thinking primarily about demand hysteresis rather than supply hysteresis. We should be asking not, or not only, how much and how durably the Great Recession reduced the country’s productive potential, but how much and how durably it reduced the flow of money through the economy.
It’s weird, once you think about it, how unexplored this possibility is in the literature. It seems to be taken for granted that if demand shocks have a lasting effect on GDP, that must be because they affect aggregate supply. I suspect one reason for this is the assumption — which profoundly shapes modern macroeconomics — that the level of spending in the economy is directly under the control of the central bank. As Peter Dorman observes, it’s a very odd feature of modern macroeconomic modeling that the central bank is inside the model — the reaction of the monetary authorities to, say, rising inflation is treated as a basic fact about the economy, like the degree to which investment responds to changes in the interest rate, rather than as a policy choice. In an intermediate macroeconomics textbook like Carlin and Soskice (a good one as far as they go), students are taught to think about the path of unemployment and inflation as coming out of a “central bank preference function,” which is taken as a fundamental parameter of the economy. Obviously there is no place for demand hysteresis in this framework. To the extent that we think of the actual path of spending in the economy as being chosen by the central bank as part of some kind of optimizing process, past spending in itself will have no effect on current spending.
Be that as it may, it seems hard to deny that in real economies, the level of spending today is strongly influenced by the level of spending in the recent past. This is the whole reason we see booms and depressions as discrete events rather than just random fluctuations, and why they’re described with metaphors of positive-feedback processes like “stall speed” or “pump-priming.”1
How Persistent Is Demand?
Let’s say demand is at least somewhat persistent. That brings us to the next question: How persistent? If we were to get extra spending of 1 percent of GDP in one year, how much higher would we now expect demand to be several years later?
We can formalize this question if we write a simple model like:
Z_t = Z*_t + X_t
Z*_t = (1 + g) Z*_{t-1} + a (Z_{t-1} − Z*_{t-1})
Here Z is total spending or demand; Z* is the trend, what we might think of as normal or expected demand; g is the normal growth rate; a measures how much of a shock to demand in one period carries over into demand in future periods; and X captures transitory influences outside of normal growth.
With a = 0, then, we have the familiar story where demand is a trend plus random fluctuations. If we see periods of above- and below-trend demand, that’s because the X influences are themselves extended over time. If a boom year is followed by another boom year, in this story, that’s because whatever forces generated it in the first year are still operating, not because the initial boom itself was persistent.
Alternatively, with a = 1, demand shocks are permanent. Anything that increases spending this year should be expected to lead to just as much additional spending next year, the year after that, and so on.
Or, of course, a can have any intermediate value.
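To see what different values of a mean in practice, here is a minimal simulation of the two equations above, with made-up numbers: a one-time negative demand shock hits in period 5, and we track the deviation of demand from the no-shock trend afterward.

```python
# Minimal simulation of the demand-persistence model above, with made-up numbers:
# Z_t = Z*_t + X_t  and  Z*_t = (1+g) Z*_{t-1} + a (Z_{t-1} - Z*_{t-1})

def simulate(a, g=0.02, periods=15, shock_period=5, shock_size=-5.0, z0=100.0):
    """Percent deviation of demand Z from the no-shock trend, period by period."""
    z_star, z, trend = z0, z0, z0
    deviations = []
    for t in range(1, periods + 1):
        x = shock_size if t == shock_period else 0.0   # one-time shock in period 5
        z_star = (1 + g) * z_star + a * (z - z_star)   # normal demand: trend growth plus carryover
        z = z_star + x                                 # actual demand
        trend *= (1 + g)                               # counterfactual no-shock path
        deviations.append(100 * (z - trend) / trend)
    return deviations

for a in (0.0, 0.2, 1.0):
    dev = simulate(a)
    print(f"a = {a}: deviation 1, 3, 5 periods after the shock: "
          f"{dev[5]:+.1f}%, {dev[7]:+.1f}%, {dev[9]:+.1f}%")
```

With a = 0 the shock vanishes immediately; with a = 0.2 about a fifth of it persists; with a = 1 the whole thing does.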
Think back to 2015 and the debate over the first Sanders campaign’s spending plans, which was an important starting point for current discussions of hysteresis. The basic mistake Jerry Friedman was accused of making was assuming that changes in demand were persistent — that is, if the multiplier was, say, 1.5, that an increase in spending of $500 billion would raise output by $750 billion not only in that year but in all subsequent years. As his critics correctly pointed out, that is not how conventional multipliers work. In terms of my equations above, he was setting a = 1, while the conventional models have a = 0.
He didn’t spell this out, and I didn’t think of it that way at the time. I don’t think anyone did. But once you do, it seems to me that while Friedman was wrong in terms of the standard multiplier, he was not wrong about the economy — or at least, no more wrong than the critics. It seems to me that both sides were using unrealistically extreme values. Demand shocks aren’t entirely permanent, but they also aren’t entirely transitory. A realistic model should have 0 < a < 1.
Demand Persistence and Fiscal Policy
There’s no point in refighting those old battles now. But the same question is very relevant for the future. Most obviously, if demand shocks are persistent to some significant degree, it becomes much more plausible that the economy has been well below potential for the past decade-plus. Which means there is correspondingly greater space for faster growth before we encounter supply constraints in the form of rising inflation.
Both forms of hysteresis should make us less worried about inflation. If we are mainly dealing with supply hysteresis, then rapid growth might well lead to inflation, but it would be a transitory phenomenon as supply catches up to the new higher level of demand. On the other hand, to the extent we are dealing with demand hysteresis, it will take much more growth before we even have to worry about inflation.
Of course, both forms of hysteresis may exist. In which case, both reasons for worrying less about inflation would be valid. But we still need to be clear which we are talking about at any given moment.
A slightly trickier point is that the degree of demand persistence is critical for assessing how much spending it will take to get back to the pre-2007 trend.
If the failure to return to the pre-2007 trend is the lasting effect of the negative demand shock of the Great Recession, it follows that sufficient spending should be able to reverse the damage and return GDP to its earlier trend. The obvious next question is, how much? The answer really depends on your preferred value for a. In the extreme (but traditional) case of a=0, we need enough spending to fill the entire gap, every year, forever. Given a gap of around 12 percent, if we assume a multiplier of 1.5 or so, that implies additional public spending of $1.6 trillion a year. In the opposite extreme case, where a=1, we just need enough total spending to fill the gap, spread out over however many years. In general, if we want to close a permanent (as opposed to transitory) output gap of W, we need W/(a μ) total spending, where μ is the conventional multiplier.2
If you project forward the pre-2007 trend in real per-capita GDP to the end of 2019, you are going to get a number that is about 15% higher than the actual figure, implying an output gap on the order of $3.5 trillion. In the absence of demand persistence, that’s the gap that would need to be filled each year. But with persistent demand, a period of elevated public spending would gradually pull private spending up to the old trend, after which it would remain there without further stimulus.
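A quick back-of-the-envelope based on the W/(a μ) formula, using the rough 15 percent gap (the numbers are only meant to illustrate how the answer scales with a):

```python
# Closing a permanent output gap of W (as a share of GDP) takes roughly W / (a * mu)
# in cumulative extra spending, where mu is the conventional multiplier.

def required_total_spending(gap, a, mu):
    """Cumulative extra spending (share of GDP) to permanently close the gap (a > 0)."""
    return gap / (a * mu)

gap, mu = 0.15, 1.5
print("a = 0:   fill gap/mu =", f"{gap / mu:.0%}", "of GDP every year, forever")
for a in (0.2, 1.0):
    print(f"a = {a}: total spending needed =",
          f"{required_total_spending(gap, a, mu):.0%} of GDP")
```

With a = 0.2 this comes to 50 percent of GDP in total, the figure I return to below.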
What Does the Great Recession Tell Us about Demand Persistence?
At this point, it might seem that we need to turn to time-series econometrics and try to estimate a value for a, using whatever methods we prefer for such things. And I think that would be a great exercise!
But it seems to me we can actually put some fairly tight limits on a without any econometrics, simply by looking back to the Great Recession. Keep in mind, once we pick an output gap for a starting year, then given the actual path of GDP, each possible value of a implies a corresponding sequence of shocks X_t. (“Shock” here just means anything that causes a deviation of demand from its trend that is not influenced by demand in the previous period.) In other words, whatever belief we may hold about the persistence of demand, that implies a corresponding belief about the size and duration of the initial fall in demand during the recession. And since we know a fair amount about the causes of the recession, some of these sequences are going to be more plausible than others.
The following figures are an attempt to do this. I start by assuming that the output gap was zero in the fourth quarter of 2004. We can debate this, of course, but there’s nothing heterodox about this assumption — the CBO says the same thing. Then I assume that in the absence of exogenous disturbances, real GDP per capita would have subsequently grown at 1.4 percent per year. This is the growth rate during the expansion between the Great Recession and the pandemic; it’s a bit slower than the pre-recession trend.3 I then take the gap between this trend and actual GDP in each subsequent quarter and divide it into the part predictable from the previous quarter’s gap, given an assumed value for a, and the part that represents a new disturbance in that period. So each possible value of a implies a corresponding series of disturbances. Those are what are shown in the figures.
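For anyone who wants the mechanics spelled out, here is a sketch of that decomposition in code, using the model from the previous section. The GDP path in the example is made up for illustration; it is not the data behind the figures.

```python
# Given actual GDP, a starting level on trend, and a value of a, back out the
# implied shocks X_t, using Z_t = Z*_t + X_t and Z*_t = (1+g) Z*_{t-1} + a X_{t-1}.

def implied_shocks(actual, a, g=0.0035, start=100.0):
    """Shock implied in each quarter (g = 1.4%/year trend growth, ~0.35%/quarter)."""
    z_star, x_prev, shocks = start, 0.0, []     # start from a zero output gap
    for z in actual:
        z_star = (1 + g) * z_star + a * x_prev  # "normal" demand: trend growth plus carryover
        x = z - z_star                          # the new disturbance this quarter
        shocks.append(x)
        x_prev = x
    return shocks

# Illustrative path only: demand drops about 6 below trend and stays depressed.
fake_gdp = [100.4, 100.7, 95.0, 95.3, 95.7, 96.0, 96.3]
for a in (0.0, 0.5):
    print(f"a = {a}:", [round(x, 1) for x in implied_shocks(fake_gdp, a)])
```

With a = 0 the implied shock recurs at the full size of the gap every quarter; with a = 0.5 it shrinks quickly toward zero as the carryover accumulates.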
If you’re not used to this kind of reasoning, this is probably a bit confusing. So let me put it a different way. The points in the graphs above show where real GDP would have been relative to the long-term trend if there had been no Great Recession. For example, if you think a = 0, then GDP in 2015 would have been just the same in the absence of the recession, so the values there are just the actual deviation from trend. So you can think of the different figures here as showing the exogenous shocks that would be required, under different assumptions about persistence, to explain the actual deviation from trend. They are answering this question: Given your beliefs about how persistent demand is, what must you think GDP would have been in subsequent years in a world where the Great Recession did not take place? (Or maybe better, where the fall in demand from the housing bubble was fully offset by stimulus.)
The first graph, with persistence = 0, is easiest to understand. If there is no carryover of demand shocks from one period to the next, then there must be some factor reducing demand in each later period by the full extent of the gap from trend. If we move on to, say, the persistence=0.1 figure, that is saying that if you think 10 percent of a demand shock is normally carried over into future periods, then there must have been something happening in 2012 that would have depressed demand by 2 percent relative to the earlier trend, even if there had been no Great Recession.
Because people are used to overcomplicated economic models, I want to stress this again. What I am showing you here is what you definitionally believe, if you think that in the absence of the Great Recession, growth in the 2010s would have been at about the same rate it actually was, just from a higher base, and you think that some particular fraction of a change in spending in one year is carried over to the next year. There are no additional assumptions. I’m just showing what the logical corollary of those beliefs would be for the pattern of demand shocks.
Another important feature of these figures is how large the initial fall in demand is. Logically, if you think demand is very persistent, you must also think the initial shock was smaller. If most of the fall in spending in the first half of 2008, say, was carried over to the second half of 2008, then it takes little additional fall in spending in that period to match the observed path of GDP. Conversely, if you think that very little of a change in demand in one period carries over to the next one, then the autonomous fall in demand in 2009 must have been larger.
The question now is, given what we know about the forces impacting demand a decade ago, which of these figures is most plausible? If there had been sufficient stimulus to completely eliminate the fall in demand in 2007-2009, how strong would the headwinds have been a few years later?
Based on what we know about the Great Recession, I think demand persistence in the 0.15 – 0.25 range is most plausible. This suggests that a reasonable baseline guess for the total spending required to return to the pre-2007 trend would be around 50 percent of GDP, spread out over a number of years. With an output gap of 15 percent of GDP, a multiplier of 1.5, and demand persistence of 0.2, we have 15 / (1.5 * 0.2) = 50 percent of GDP. This is, obviously, a very rough guess, but if you put me on the spot and asked how much spending over ten years it would take to get GDP permanently back to the pre-2007 trend, $10-12 trillion would be my best guess.
How do we arrive at persistence in the 0.15 – 0.25 range?
On the lower end, we can ask: What are the factors that would have pushed down demand in the mid-2010s, even in the absence of the Great Recession? Remember, if we use demand persistence of 0.1, that implies there were factors operating in 2014 that would have reduced demand by 2 percent of GDP, even if the recession had not taken place. What would those be?
I don’t think it makes sense to say housing — housing prices had basically recovered by then. State and local spending is a better candidate — it remained quite depressed and I think it’s hard to see this as a direct effect of the recession. Relative to trend, state and local investment was down about 1 percent of GDP in 2014, while the federal stimulus was basically over. On the other hand, unless we think that monetary policy is totally ineffective, we have to include the stimulative effect of a zero policy rate and QE in our demand shocks. This makes me think that by 2014, the gap between actual GDP and the earlier trend was probably almost all overhang from the recession. And this implies a persistence of at least 0.15. (If you look back at the figures, you’ll see that with persistence=0.15, the implied shock reaches zero in 2014.)
Meanwhile, on the high end, a persistence of 0.5 would mean that the demand shock maxed out at a bit over 3 percent of GDP, and was essentially over by the second half of 2009. This seems implausibly small and implausibly brief. Residential investment fell from 6.5 percent of GDP in 2004 to less than 2.5 percent by 2010. And that is leaving aside housing wealth-driven consumption. Meanwhile, the ARRA stimulus didn’t really come online until the second half of 2009. I don’t believe monetary policy is totally ineffective, but I do think it operates slowly, especially on the loosening side. So I find it hard to believe that the autonomous fall in demand in early 2009 was much less than 5 percent of GDP. That implies a demand persistence of no more than 0.25.
Within the 0.15 to 0.25 range, probably the most important variable is your judgement of the effectiveness of monetary policy and the ARRA stimulus. If you think that one or both was very effective, you might think that by mid-2010, they were fully offsetting the fall in demand from the housing bust. This would be consistent with persistence around 0.25. Conversely, if you’re doubtful about the effectiveness of monetary policy and the ARRA (too little direct spending), you should prefer a value of 0.2 or 0.15.
In any case, it seems to me that the implied shocks with persistence in the 0.15 – 0.25 range look much more plausible than for values outside that range. I don’t believe that the underlying forces that reduced demand in the Great Recession had ceased to operate by the second half of 2009. I also don’t think that they were autonomously reducing demand by as much as 2 points still in early 2014.
You will have your own priors, of course. My fundamental point is that your priors on this stuff have wider implications. I have not seen anyone spell out the question of the persistence of demand in the way I have done here. But the idea is implicit in the way we talk about business cycles. Logically, a demand shortfall in any given period can be described as a mix of forces pulling down spending in that period, and the ongoing effect of weak demand in earlier periods. And whatever opinion you have about the proportions of each, this can be quantified. What I am doing in this post, in other words, is not proposing a new theory, but trying to make explicit a theory that’s already present in these debates, but not normally spelled out.
Why Is Demand Persistent?
The history of real economies should be enough to convince us that demand can be persistent. Deep downturns — not only in the US after 2007, but in much of Europe, in Japan after 1990, and of course the Great Depression — show clearly that if the level of spending in an economy falls sharply for whatever reason, it is likely to remain low years later, even after the precipitating factor is removed. But why should economies behave this way?
I can think of a couple of reasons.
First, there’s the pure coordination story. Businesses pay wages to workers in order to carry out production. Production is carried out for sale. Sales are generated by spending. And spending depends on incomes, most of which are generated from production. This is the familiar reasoning of the multiplier, where it is used to show how an autonomous change in spending can lead to a larger (or smaller) change in output. The way the multiplier is taught, there is one unique level of output for each level of autonomous demand. But if we formalized the same intuition differently, we could imagine a system with multiple equilibria. Each would have a different level of income, expenditure and production, but in each one people would be making the “right” expenditure choices given their income.
We can make this more concrete in two ways. First, balance sheets. One reason that there is a link from current income to current expenditure is that most economic units are financially constrained to some degree. Even if you knew your lifetime income with great precision, you wouldn’t be able to make your spending decisions on that basis because, in general, you can’t spend the money you will receive in the distant future today.
Now obviously there is some capacity to shift spending around in time, both through credit and through spending down liquid assets. The degree to which this is possible depends on the state of the balance sheet. To the extent a period of depressed demand leaves households and businesses with weaker balance sheets and tighter financial constraints, it will result in lower spending for an extended period. A version of this idea was put forward by Richard Koo as a “balance sheet recession,” in a rather boldly titled book.
The second is expectations. There is not, after all, a true lifetime income out there for you to know. All you can do is extrapolate from the past, and from the experiences of other people like you. Businesses similarly must make decisions about how much investment to carry out based on extrapolation from the past – on what other basis could they do it?
A short period of unusually high or low demand may not move expectations much, but a sustained one almost certainly will. A business that has seen demand fall short of what it was counting on is going to make more conservative forecasts for the future. Again, how could it not? With the balance sheet channel, one could plausibly argue that demand shocks will be persistent but not permanent. But with expectations, once they have been adjusted, the resulting behavior will in general make them self-confirming, so there is no reason spending should ever return to its old path.
This, to me, is the critical point. Mainstream economists and policy makers worry a great deal about inflation expectations, and whether they are becoming “unanchored.” But expectations of inflation are not the only ones that can slip their moorings. Households and businesses make decisions based on expectations of future income and sales, and if those expectations turn out to be wrong, they will be adjusted accordingly. And, as with inflation, the outcomes of which people form expectations themselves largely depend on expectations.
This was a point emphasized by Keynes:
It is an essential characteristic of the boom that investments which will in fact yield, say, 2 per cent in conditions of full employment are made in the expectation of a yield of, say, 6 per cent, and are valued accordingly.
When the disillusion comes, this expectation is replaced by a contrary ‘error of pessimism’, with the result that the investments, which would in fact yield 2 per cent in conditions of full employment, are expected to yield less than nothing; and the resulting collapse of new investment then leads to a state of unemployment in which the investments, which would have yielded 2 per cent in conditions of full employment, in fact yield less than nothing. We reach a condition where there is a shortage of houses, but where nevertheless no one can afford to live in the houses that there are.
He continues the thought in terms that are very relevant today:
Thus the remedy for the boom is not a higher rate of interest but a lower rate of interest! For that may enable the so-called boom to last. The right remedy for the trade cycle is not to be found in abolishing booms and thus keeping us permanently in a semi-slump; but in abolishing slumps and thus keeping us permanently in a quasi-boom.
Arjun Jayadev and I have a new piece up at the Institute for New Economic Thinking, trying to clarify the relationship between Modern Monetary Theory (MMT) and textbook macroeconomics. (There is also a pdf version here, which I think is a bit more readable.) I will have a blogpost summarizing the argument later today or tomorrow, but in the meantime here is the abstract:
An increasingly visible school of heterodox macroeconomics, Modern Monetary Theory (MMT), makes the case for functional finance—the view that governments should set their fiscal position at whatever level is consistent with price stability and full employment, regardless of current debt or deficits. Functional finance is widely understood, by both supporters and opponents, as a departure from orthodox macroeconomics. We argue that this perception is mistaken: While MMT’s policy proposals are unorthodox, the analysis underlying them is largely orthodox. A central bank able to control domestic interest rates is a sufficient condition to allow a government to freely pursue countercyclical fiscal policy with no danger of a runaway increase in the debt ratio. The difference between MMT and orthodox policy can be thought of as a different assignment of the two instruments of fiscal position and interest rate to the two targets of price stability and debt stability. As such, the debate between them hinges not on any fundamental difference of analysis, but rather on different practical judgements—in particular what kinds of errors are most likely from policymakers.
Last week, the Washington Post ran an article by Jim Tankersley on what would happen if Trump got his way and the US imposed steep tariffs on goods from Mexico and China. I ended up as the objectively pro-Trump voice in the piece. The core of it was an estimate from Mark Zandi at Moody’s that a 45% tariff on goods from China and a 35% tariff on goods from Mexico (I don’t know where these exact numbers came from) would have an effect on the US comparable to the Great Recession, with output and employment both falling by about 5 percent relative to the baseline. About half this 5 percent fall in GDP would be due to retaliatory tariffs from China and Mexico, and about half would come from the US tariffs themselves. As I told the Post, I think this is nuts.
Let me explain why I think that, and what a more realistic estimate would look like. But first, I should say that Tankersley did exactly what one ought to do with this story — asked the right question and went to a respected expert to help him answer it. The problem is with what that expert said, not the reporting. I should also say that my criticisms were presented clearly and accurately in the piece. But of course, there’s only so much you can say in even a generous quote in a newspaper article. Hence this post.
I haven’t seen the Moody’s analysis (it’s proprietary). All I know is what’s in the article, and the general explanation that Tankersley gave me in the interview. But from what I can tell, Zandi and his team want to tell a story like this. When the US imposes a tariff, it boosts the price of imported goods but leads to no substitution away from them. Instead, higher import prices just mean lower real incomes in the US. Then, when China and Mexico retaliate, that does lead to substitution away from US goods, and the lost exports reduce US real incomes further. But only under the most extreme assumptions can you get Zandi’s numbers out of this story.
While this kind of forecasting might seem mysterious, it mostly comes down to picking values for a few parameters — that is, making numerical guesses about relationships between the variables of interest. In this case, we have to answer three questions. The first question is, how much of the tariff is paid by the purchasers of imported goods, as opposed to the producers? The second question is, how do purchasers respond to higher prices — by substituting to domestic goods, by substituting to imports from other countries, or by simply paying the higher prices? Substitution to domestic goods is expansionary (boosts demand here), substitution to imports from elsewhere is neutral, and paying the higher prices is contractionary, since it reduces the income available for domestic spending. And the third question is, how much does a given shift in demand ultimately move GDP? The answer to the first question gives us the passthrough parameter. The answer to the second question gives us two price elasticities — a bilateral elasticity for imports from that one country, and an overall elasticity for total imports. The answer to the third question gives us the multiplier. Combine these and you have the change in GDP resulting from the tariff. Of course if you think the initial tariffs will provoke retaliatory tariffs from the other countries, you have to do the same exercise for those, with perhaps different parameters.
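To make that bookkeeping explicit, here is a minimal sketch of these ingredients in code. It is not Moody’s model, just one way of organizing the parameters named above; in particular, the income-loss line uses one simple convention (the higher price applied to the imports still being purchased), and how the pieces get netted, along with the treatment of tariff revenue and retaliation, is exactly where a modeler’s judgment comes in.

```python
# A sketch of the bookkeeping described above, not Moody's actual model.
# Given a tariff rate, passthrough, the two price elasticities, and imports from
# the targeted country as a share of GDP, it returns the first-round channels.

def tariff_channels(tariff, passthrough, bilateral_elast, overall_elast, import_share):
    """First-round effects of a tariff, each expressed as a share of GDP."""
    price_rise = tariff * passthrough                 # proportional rise in the import price
    bilateral_fall = bilateral_elast * price_rise     # proportional fall in imports from that country
    switch_to_domestic = overall_elast * price_rise   # the part showing up as lower total imports
    switch_to_third = bilateral_fall - switch_to_domestic  # the part rerouted to other suppliers

    return {
        # substitution toward domestic goods raises demand here (expansionary)
        "demand_boost": switch_to_domestic * import_share,
        # one simple convention: the higher price applied to imports still purchased
        "income_loss": price_rise * (1 - bilateral_fall) * import_share,
        # substitution to third countries is roughly neutral for domestic demand
        "diverted_to_third_countries": switch_to_third * import_share,
    }

# The net GDP effect is roughly (demand_boost - income_loss) times the multiplier,
# before accounting for tariff revenue and any retaliation. Purely hypothetical numbers:
print(tariff_channels(tariff=0.10, passthrough=0.5, bilateral_elast=1.0,
                      overall_elast=0.5, import_share=0.02))
```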
Let’s walk through this. Suppose the US — or any country — increases taxes on imports: What can happen? The first question is, how is the price of the imported good set — by costs in the producing country, or by market conditions in the destination? If conditions in the destination country affect price — if the producer is unable or unwilling to raise prices by the full amount of the tariff — then they will have to accept lower revenue per unit sold. This is referred to as pricing to market or incomplete passthrough, and empirical studies suggest it is quite important in import prices, especially in the US. Incomplete passthrough may result in changing profit margins for producers, or they may be able to adjust their own costs — wages especially — in turn. Where trade is a large fraction of GDP, some of the tax may eventually be translated into a lower price level in the exporting country.
Under floating exchange rates, the tariff may also lead to a depreciation of the exporting country’s currency relative to the currency of the country imposing the tariff. This is especially likely where trade between the two countries is a large share of total trade for one or both of them. In this case, a tariff is more likely to cause a depreciation of the Mexican peso than of the Chinese renminbi, since the US accounts for a higher fraction of Mexico’s exports than of China’s, and the renminbi is actively managed by China’s central bank.
Taking all these effects together, passthrough for US imports is probably less than 0.5. In other words, the majority of a tariff’s impact will probably be on dollar revenue for producers, rather than dollar costs for consumers. So a 10 percent tariff increases costs of imported goods by something less than 5 percent and reduces the revenue per unit of producers by something more than 5 percent.
The fraction of the tax that is not absorbed by lower exporter profit margins, lower wages in the export industry or a lower price level in the exporting country, or by exchange rate changes, will be reflected in higher prices in the importing country. The majority of traded goods for the US (as for most countries) are intermediate and capital goods, and even imported consumption goods are almost never purchased directly by the final consumer. So on the importing side, too, there will be firms making a choice between accepting lower profit margins, reducing wages and other domestic costs, or raising prices. Depending on exactly where we are measuring import prices, this might further reduce passthrough.
Let’s ignore this last complication and assume that a tax that is not absorbed on the exporting-country side is fully passed on to the final price of imported goods. Purchasers of imported goods now respond to the higher price either by substituting to domestic goods, or substituting to imported goods from some third country not subject to the tax, or continuing to purchase the imports at the higher price. To the extent they substitute to domestic goods, that will boost demand here; to the extent they substitute to third-country goods, the tax will have no effect here.
These rates of substitution are described by the price elasticity of imports, computed as the ratio of the percentage change in the quantity imported to the percentage change in the price that caused it. So for instance if we thought that a 1 percent increase in the price of imported goods leads to a 2 percent fall in the quantity purchased, we would say the price elasticity is 2. There are two elasticities we have to think about — the bilateral elasticity and the overall elasticity. For example, we might think that the bilateral elasticity for US imports from China was 3 while the overall price elasticity was 1. In that case, a 1 percent increase in the price of Chinese imports would lead to a 3 percent fall in US imports from China, but only one-third of that would be through lower total US imports; the rest would be through higher imports from third countries.
To the extent the higher priced imported goods are purchased, this may result in a higher price of domestic goods for which the imports are an input or a substitute; to the extent this happens, the tax will raise domestic inflation but leave real income unchanged. For the US, import prices have a relatively small effect on overall inflation, so we’ll ignore this effect here. If we included it, we would end up with a smaller effect.
To the extent that the increase in import prices neither leads to any substitution away from the imported goods, nor to any price increase in domestic goods, it will reduce real incomes in the importing country, and leave incomes in the exporting country unchanged. Conversely, to the extent that the tariff is absorbed by lower wages or profit margins in the exporting country, or leads to substitution away from that country’s goods, it reduces incomes in the exporting country, but not in the importing country. And of course, to the extent that there is no substitution away from the taxed goods, government revenue will increase. Zandi does not appear to have explicitly modeled this last effect, but it is important in thinking about the results — a point I’ll return to.
Whether the increase in import prices increases domestic incomes (by leading to substitution to domestic goods) or reduces them, the initial effect will be compounded as the change in income leads to changes in other spending flows. If, let’s say, an increase in the price of Chinese consumer goods forces Americans to cut back purchases of American-made goods, then the workers and business owners in the affected industries will find themselves with less income, which will cause them to reduce their spending in turn. This is the familiar multiplier. The direct effect may be compounded or mitigated by financial effects — the multiplier will be larger if you think (as Zandi apparently does) that a fall in income will be accompanied by a fall in asset prices with a further negative effect on credit and consumption, and smaller if you think that a trade-induced change in income will be offset by a change in monetary (or fiscal) policy. In the case where the central bank’s interest rate policy is always able to hold output at potential, the multiplier will be zero — shocks to demand will have no effect on output. This extreme case looked more reasonable a decade ago than it does today. In conditions where the Fed can’t or won’t offset demand impacts, estimates of the US multiplier range as high as 2.5; a respectable middle-of-the-road estimate would be 1.5.
Let’s try this with actual numbers.
Start with passthrough. The overwhelming consensus in the empirical literature is that less than half of even persistent changes in exchange rates are passed through to US import prices. This recent survey from the New York Fed, for instance, reports a passthrough of about 0.3:
following a 10 percent depreciation of the dollar, U.S. import prices increase about 1 percentage point in the contemporaneous quarter and an additional 2 percentage points over the next year, with little if any subsequent increases.
The factors that lead to incomplete passthrough of exchange rate movements — such as the size of the US market, and the importance to exporters of maintaining market share — generally apply to a tariff as well, so it’s reasonable to think passthrough would be similar. So a 45% tariff on Chinese goods would probably raise prices to American purchasers by only about 15%, with the remainder absorbed by profits and/or wages at Chinese exporters.
Next we need to ask about the effect of that higher price on American purchases. There is a large literature estimating trade price elasticities; a sample is shown in the table below. As you can see, almost all the import price elasticities are between 0.2 and 1.0. (Price elasticities seem to be greater for US exports than for imports; they also seem to be higher for most other countries than for the US.) The median estimate is around 0.5 for overall US imports. Country-specific estimates are harder to find, but I’ve seen values around 1.0 for US imports from both China and Mexico. Using those estimates, we would expect a 15% increase in the price of Chinese imports to lead to a 15% fall in imports from China, with about half of the substitution going to US goods and half going to imports from other countries. Similarly, a 10% increase in the price of goods from Mexico (a 35% tariff times passthrough of 0.3) would lead to a 10% fall in imports from Mexico, with half of that being a switch to US goods and half to imports from elsewhere.
Finally, we ask how the combination of substitution away from imports from Mexico and China, and the rise in price of the remaining imports, would affect US output. US imports from China are about 2.7 percent of US GDP, and imports from Mexico are about 1.7 percent of GDP. So with the parameters above, substitution to US goods raises GDP by 7.5% x 2.7% (China) plus 5% x 1.7% (Mexico), or 0.29% of GDP. Meanwhile the higher prices of the remaining imports from China and Mexico reduce US incomes by 0.22 percent, for a net impact of a trivial one twentieth of one percent of GDP. Apply a standard multiplier of 1.5, and the tariffs boost GDP by 0.08 percent.
You could certainly get a larger number than this, for instance if you thought that passthrough of a tariff would be substantially greater than passthrough of exchange rate changes. And making US import demand just a bit less price-elastic is enough to turn the small positive impact into a small negative one. But it would be very hard to get an impact of even one percent of GDP in either direction. And it would be almost impossible to get a negative impact of the kind that Zandi describes. If you assume both that the tariffs are fully passed through to final purchasers, and that US import demand is completely insensitive to price, then with a multiplier of 1.5, you get a 2.7 percent reduction in US GDP. Since this is close to Zandi’s number, this may be what he did. But again, these are extreme assumptions, with no basis in the empirical literature. That doesn’t mean you can’t use them, but you need to justify them; just saying the magic word “proprietary” is not enough. (Imagine all the trouble Jerry Friedman could have saved himself with that trick!)
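The arithmetic of that extreme case, at least, is easy to check:

```python
# The extreme case described above: full passthrough, zero price elasticity.
# With no substitution at all, the entire tariff shows up as lost US real income.
china_share, mexico_share = 0.027, 0.017        # imports as a share of US GDP
china_tariff, mexico_tariff = 0.45, 0.35
multiplier = 1.5

income_loss = china_tariff * china_share + mexico_tariff * mexico_share
print(f"first-round income loss: {income_loss:.1%} of GDP")               # about 1.8%
print(f"with a multiplier of 1.5: {multiplier * income_loss:.1%} of GDP")  # about 2.7%
```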
And the very low price elasticity you need for this result has some funny implications. For instance, it implies that when China intervenes to weaken their currency, they are just impoverishing themselves, since — if demand is really price-inelastic — they are now sending us the same amount of goods and getting fewer dollars for each one. I doubt Zandi would endorse this view, but it’s a logical corollary of the ultra-low elasticity he needs to get a big cost to the US from the initial tariff. Note also that the low-elasticity assumption means that the tariff creates no costs for China or Mexico: their exporters pass the increased tariff on completely to US consumers, and lose no sales as a result. It’s not clear why they would “retaliate” for this.
Let’s assume, though, that China and Mexico do impose tariffs on US goods. US exports to China and Mexico equal 0.7 and 1.3 percent of US GDP respectively. Passthrough is probably higher for US exports — let’s say 0.6 rather than 0.3. Price elasticity is also probably higher — we’ll say 1.5 for both bilateral elasticities and for overall export elasticity. (In the absence of exchange-rate changes, there’s no reason to think that a fall in exports to China and Mexico will lead to a rise in exports to third countries.) And again, we’ll use a multiplier of 1.5. This yields a fall in US GDP from the countertariffs of just a hair under 1 percent. Combine that with the small demand boost from the tariff, and we get an overall impact of -0.9 percent of GDP.
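Here is that arithmetic in code, on the assumption (mine; the article does not give the retaliation rates) that China and Mexico mirror the 45 and 35 percent tariffs. It comes out at roughly 1 percent of GDP, close to the figure above.

```python
# Rough reconstruction of the countertariff arithmetic. The mirrored 45% / 35%
# retaliation rates are an assumption, not something stated in the post.
exports = {"China": 0.007, "Mexico": 0.013}     # US exports as a share of US GDP
retaliation = {"China": 0.45, "Mexico": 0.35}
passthrough, elasticity, multiplier = 0.6, 1.5, 1.5

lost_exports = sum(share * retaliation[c] * passthrough * elasticity
                   for c, share in exports.items())
print(f"fall in US exports: {lost_exports:.2%} of GDP")                 # about 0.7%
print(f"GDP impact with the multiplier: {multiplier * lost_exports:.1%}")  # roughly 1%
```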
I admit, this is a somewhat larger hit than I expected before I worked through this exercise. But it’s still much smaller than Zandi’s number.
My preferred back-of-the-envelope for the combined impact of the tariffs and countertariffs would be a reduction in US GDP of 0.9 percent, but I’m not wedded to this exact number. I think reasonable parameters could get you an impact on US GDP anywhere from positive 1 percent to, at the worst, negative 2 percent or so. But it’s very hard to get Zandi’s negative 5 percent. You need an extremely high passthrough for both import and export prices, plus extremely price-inelastic US import demand and extremely price-elastic demand for US exports — all three parameters well outside the range in the empirical literature. At one point a few years ago, I collected about 20 empirical estimates of US trade elasticities, and none of them had a price elasticity for US exports greater than 1.5. But even with 100% passthrough, and a generous multiplier of 2.0, you need an export price elasticity of 4 or so to get US GDP to fall by 5 points.
Still, while Zandi’s 5 percent hit to GDP seems beyond the realm of the plausible, one could perhaps defend a still-substantial 2 percent. Let’s think for a moment, though, about what this would mean.
First of all, it’s worth noting — as I unfortunately failed to mention to the Post reporter — that tariff increases are, after all, tax increases. Whatever its effect on trade flows, a big increase in taxes will be contractionary. This is Keynes 101. Pick any activity accounting for 5 percent of GDP and slap a 40 percent tax on it, and it’s a safe bet that aggregate income will be lower as a result. The logic of the exercise would have been clearer if the tariff revenue were offset by a cut in some other tax, or an increase in government spending. (Maybe this is what Trump means when he says Mexico will pay for the wall?) Then it would be clearer how much of the predicted impact comes from the tariff specifically, as opposed to the shift toward austerity that such a big tax increase implies. The point is, even if you decide that a 2 percent fall in US GDP is the best estimate of the tariff’s impact, it wouldn’t follow that tariffs as such are a bad idea. It could be that a big tax increase is.
Second, let’s step back for a moment. While Mexico and China are two of our largest trade partners, they still account for less than a quarter of total US trade. Given passthrough of 0.3, the 45/35 percent tariff on Chinese/Mexican goods would raise overall US import prices by about 3 percent. Even with 100 percent passthrough, the tariffs would raise overall import prices by just 10 percent. The retaliatory tariffs would raise US export prices by about half that — 5 percent with full passthrough. (The difference is because these two countries account for a smaller share of US exports than of US imports.) Now, let’s look at the movements of the dollar in recent years.
Since 2014, the dollar has risen 15 percent. That’s a 15 percent increase in the price of US goods in all our export markets — three times the impact of the hypothetical Mexican and Chinese tariffs. But before that, from 2002 to 2008, the dollar fell by over 20 percent. That raised the price of US imports by twice as much as the hypothetical Trump tariff. And so on back to the 1970s. If you believe Zandi’s numbers, then the rise in the dollar over the past two years should already have triggered a severe recession. Of course it has not. It would be foolish to deny that movements of the dollar have had some effect on US output and employment. But no one, I think, would claim impacts on anything like this scale. Still, one thing is for sure: If you believe anything like Zandi’s numbers on the macro impacts of trade price changes, then it’s insane to allow exchange rates to be set by private speculators.
So if Zandi is wrong about the macro impact of tariffs, does that mean Trump is right? No. First of all, while I don’t think there’s any way to defend Zandi’s claim of a very large negative impact on GDP of a tariff (or of a more respectable, but economically equivalent, depreciation of the dollar), it’s almost as hard to defend a large positive impact. Despite all the shouting, the relative price of Chinese goods is just not a very big factor for aggregate demand in the US. If the goal is stronger demand and higher wages here, there are various things we can do. A more favorable trade balance with China (or Mexico, or anywhere else) is nowhere near the top of that list. Second, the costs of the tariff would be substantial for the rest of the world. It’s important not to lose sight of the fact that China, over the past generation, has seen perhaps the largest rise in living standards in human history. We can debate how critical exports to the US were in this process, but certainly the benefits to China of exports to the US were vastly greater than whatever costs they created here.
But the fact that an idea is wrong doesn’t mean we can ignore evidence and logic in refuting it. Trumpism is bad enough on the merits. There’s no need to exaggerate its costs.
UPDATE: My spreadsheet is here, if you want to play with alternative parameter values.
In this post, I first talk about a variety of ways that we can formalize the relationship between wages, inflation and productivity. Then I talk briefly about why these links matter, and finally about how, in my view, we should think about the coexistence of several different possible relationships between these variables.
*
My Jacobin piece on the Fed was, on a certain abstract level, about varieties of the Phillips curve. The Phillips curve is any of a family of graphs with either unemployment or “real” GDP on the X axis, and some measure of wages or prices — the level or change of nominal wages, the price level, or the level or change of inflation — on the Y axis. In any of the various permutations (some of which naturally are more common than others), this purports to show a regular relationship between aggregate demand and prices.
This apparatus is central to the standard textbook account of monetary policy transmission. In this account, a change in the amount of base money supplied by the central bank leads to a change in market interest rates. (Newer textbooks normally skip this part and assume the central bank sets “the” interest rate by some unspecified means.) The change in interest rates leads to a change in business and/or housing investment, which results via a multiplier in a change in aggregate output. [1] The change in output then leads to a change in unemployment, as described by Okun’s law. [2] This in turn leads to a change in wages, which is passed on to prices. The Phillips curve describes the last one or two or three steps in this chain.
Here I want to focus on the wage-price link. What are the kinds of stories we can tell about the relationship between nominal wages and inflation?
*
The starting point is this identity:
(1) w = y + p + s
That is, the percentage change in nominal wages (w) is equal to the sum of the percentage changes in real output per worker (y; also called labor productivity), in the price level (p, or inflation) and in the labor share of output (s). [3] This is the essential context for any Phillips curve story. This should be, but isn’t, one of the basic identities in any intermediate macroeconomics textbook.
Now, let’s call the increase in “real” or inflation-adjusted wages r. [4] That gives us a second, more familiar, identity:
(2) r = w – p
The increase in real wages is equal to the increase in nominal wages less the inflation rate.
As always with these kinds of accounting identities, the question is “what adjusts”? What economic processes ensure that individual choices add up in a way consistent with the identity? [5]
Here we have five variables and two equations, so three more equations are needed for the system to be determined. This means there is a large number of possible closures. I can think of five that come up, explicitly or implicitly, in actual debates.
Closure 1:
First is the orthodox closure familiar from any undergraduate macroeconomics textbook.
(3a) w = pE + f(U); f’ < 0
(4a) y = y*
(5a) p = w – y
Equation 3a says that labor-market contracts between workers and employers result in nominal wage increases that reflect expected inflation (pE) plus an additional increase, or decrease, that reflects the relative bargaining power of the two sides. [6] The curve described by f is the Phillips curve, as originally formulated — a relationship between the unemployment rate and the rate of change of nominal wages. Equation 4a says that labor productivity growth is given exogenously, based on technological change. 5a says that since prices are set as a fixed markup over costs (and since there is only labor and capital in this framework) they increase at the same rate as unit labor costs — the difference between the growth of nominal wages and labor productivity.
It follows from the above that
(6a) w – p = y
and
(7a) s = 0
Equation 6a says that the growth rate of real wages is just equal to the growth of average labor productivity. This implies 7a — that the labor share remains constant. Again, these are not additional assumptions, they are logical implications from closing the model with 3a-5a.
This closure has a couple other implications. There is a unique level of unemployment U* such that w = y + p; only at this level of unemployment will actual inflation equal expected inflation. Assuming inflation expectations are based on inflation rates realized in the past, any departure from this level of unemployment will cause inflation to rise or fall without limit. This is the familiar non-accelerating inflation rate of unemployment, or NAIRU. [7] Also, an improvement in workers’ bargaining position, reflected in an upward shift of f(U), will do nothing to raise real wages, but will simply lead to higher inflation. Even more: If an inflation-targeting central bank is able to control the level of output, stronger bargaining power for workers will leave them worse off, since unemployment will simply rise enough to keep nominal wage growth in line with y* and the central bank’s inflation target.
Finally, notice that while we have introduced three new equations, we have also introduced a new variable, pE, so the model is still underdetermined. This is intentional. The orthodox view is that the same set of “real” values is consistent with any constant rate of inflation, whatever that rate happens to be. It follows that a departure of the unemployment rate from U* will cause a permanent change in the inflation rate. It is sometimes suggested, not quite logically, that this is an argument in favor of making price stability the overriding goal of policy. [8]
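To see the accelerationist logic concretely, here is a small Python sketch of Closure 1 with adaptive expectations. The linear form of f(U) and all the parameter values are arbitrary choices for illustration, not estimates.

```python
# Closure 1 with adaptive expectations: hold unemployment below U* and inflation
# ratchets up without limit; hold it at U* and inflation stays where it is.

ystar = 0.015          # exogenous productivity growth (assumed)
Ustar = 0.05           # unemployment rate at which wage growth = y* + expected inflation
phillips_slope = 0.5   # sensitivity of wage growth to unemployment (assumed)

def simulate(U, p_expected=0.02, periods=10):
    path = []
    for _ in range(periods):
        # eq. 3a, with f(U) = y* + slope * (U* - U) so that U* is where inflation is steady
        w = p_expected + ystar + phillips_slope * (Ustar - U)
        p = w - ystar          # eq. 5a: markup pricing passes unit labor costs into prices
        path.append(p)
        p_expected = p         # adaptive expectations: last period's inflation
    return path

print([round(p, 4) for p in simulate(U=0.04)])  # U below U*: inflation rises every period
print([round(p, 4) for p in simulate(U=0.05)])  # U at U*: inflation stays constant
```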
If you pick up an undergraduate textbook by Carlin and Soskice, Krugman and Wells, or Blanchard, this is the basic structure you find. But there are other possibilities.
Closure 2: Bargaining over the wage share
A second possibility is what Anwar Shaikh calls the “classical” closure. Here we imagine the Phillips curve in terms of the change in the wage share, rather than the change in nominal wages.
(3b) s = f(U); f’ < 0
(4b) y = y*
(5b) p = p*
Equation 3b says that the wage share rises when unemployment is low, and falls when unemployment is high. In this closure, inflation as well as labor productivity growth are fixed exogenously. So again, we imagine that low unemployment improves the bargaining position of workers relative to employers, and leads to more rapid wage growth. But now there is no assumption that prices will follow suit, so higher nominal wages instead translate into higher real wages and a higher wage share. It follows that:
(6b) w = f(U) + p + y
Or as Shaikh puts it, both productivity growth and inflation act as shift parameters for the nominal-wage Phillips curve. When we look at it this way, it’s no longer clear that there was any breakdown in the relationship during the 1970s.
If we like, we can add an additional equation making the change in unemployment a function of the wage share, writing the change in unemployment as u.
(7b) u = g(s); g’ > 0 or g’ < 0
If the change in unemployment is a positive function of the wage share (because a lower profit share leads to lower investment and thus lower demand), then we have the classic Marxist account of the business cycle, formalized by Goodwin. But of course, we might imagine that demand is “wage-led” rather than “profit-led” and make the change in unemployment a negative function of the wage share — a higher wage share leads to higher consumption, higher demand, higher output and lower unemployment. Since lower unemployment will, according to 3b, lead to a still higher wage share, closing the model this way leads to explosive dynamics — or, more reasonably, once we impose some additional constraints, to two equilibria, one with a high wage share and low unemployment, the other with high unemployment and a low wage share. This is what Marglin and Bhaduri call a “stagnationist” regime.
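If it helps to see the cycle, here is a linearized Python sketch of the profit-led case — a toy rendering of the Goodwin story, not his actual model, with functional forms and parameters invented purely for illustration.

```python
# A linearized sketch of the profit-led (Goodwin) case: the wage share rises when
# unemployment is low (3b), and unemployment rises when the wage share is high
# (7b with g' > 0). Forms and parameter values are made up for illustration.

f = lambda U: 0.5 * (0.05 - U)    # change in the wage share as a function of U
g = lambda s: 0.3 * (s - 0.65)    # change in unemployment as a function of the wage share

s, U, dt = 0.67, 0.05, 0.05       # start with the wage share a bit above its resting value
for step in range(2001):
    if step % 400 == 0:
        print(f"t = {step * dt:5.1f}: wage share = {s:.3f}, unemployment = {U:.3f}")
    s, U = s + dt * f(U), U + dt * g(s)   # simple Euler step: the two variables chase each other in a cycle
```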
Let’s move on.
Closure 3: Real wage fixed.
I’ll call this the “Classical II” closure, since it seems to me that the assumption of a fixed “subsistence” wage is used by Ricardo and Malthus and, at times at least, by Marx.
(3c) w – p = 0
(4c) y = y*
(5c) p = p*
Equation 3c says that real wages are constant: the change in nominal wages is just equal to the change in the price level. [9] Here again the changes in prices and in labor productivity are given from outside. It follows that
(6c) s = -y
Since the real wage is fixed, increases in labor productivity reduce the wage share one for one. Similarly, falls in labor productivity will raise the wage share.
This latter, incidentally, is a feature of the simple Ricardian story about the declining rate of profit. As lower-quality land is brought into use, the average productivity of labor falls, but the subsistence wage is unchanged. So the share of output going to labor, as well as to landlords’ rent, rises as the profit share goes to zero.
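In levels this is almost too simple to need an example, but for concreteness (numbers made up):

```python
# Closure 3 in levels: with the real wage fixed, the labor share is just the real
# wage divided by output per worker, so it moves inversely with productivity.
real_wage = 20.0                            # assumed, in output units per worker
for productivity in (50.0, 49.0, 48.0):     # falling output per worker, Ricardo-style
    print(f"productivity {productivity:.0f} -> labor share {real_wage / productivity:.3f}")
```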
Closure 4:
(3d) w = f(U); f’ < 0
(4d) y = y*
(5d) p = p*
This is the same as the second closure except that now it is the nominal wage, rather than the wage share, that is set by the bargaining process. We could think of this as the naive model: nominal wages, inflation and productivity are all just whatever they are, without any regular relationships between them. (We could even go one step more naive and just set wages exogenously too.) Real wages are then determined as a residual by nominal wage growth and inflation, and the wage share is determined as a residual by real wage growth and productivity growth. Now, it’s clear that this can’t apply when we are talking about very large changes in prices — real wages can only be eroded by inflation so far. But it’s equally clear that, for sufficiently small short-run changes, the naive closure may be the best we can do. The fact that real wages are not entirely a passive residual does not mean they are entirely fixed; presumably there is some domain over which nominal wages are relatively fixed and their “real” purchasing power depends on what happens to the price level.
Closure 5:
One more.
(3e) w = f(U) + a pE; f’ < 0; 0 < a < 1
(4e) y = b (w – p); 0 < b < 1
(5e) p = c (w – y); 0 < c < 1
This is more generic. It allows for an increase in nominal wages to be distributed in some proportion between higher inflation, an increase in the wage share, and faster productivity growth. The last possibility is some version of Verdoorn’s law. The idea that scarce labor, or equivalently rising wages, will lead to faster growth in labor productivity is perfectly admissible in an orthodox framework. But somehow it doesn’t seem to make it into policy discussions.
In other words, lower unemployment (or a stronger bargaining position for workers more generally) will lead to an increase in the nominal wage. This will in turn increase the wage share, to the extent that it does not induce higher inflation and/or faster productivity growth:
(6e) s = (1 – b – c) w
This closure includes the first two as special cases: closure 1 if we set a = 0, b = 0, and c = 1, closure 2 if we set a = 1, b = 0, and c < 1. It’s worth framing the more general case to think clearly about the intermediate possibilities. In Shaikh’s version of the classical view, tighter labor markets are passed through entirely to a higher labor share. In the conventional view, they are passed through entirely to higher inflation. There is no reason in principle why it can’t be some to each, and some to higher productivity as well. But somehow this general case doesn’t seem to get discussed.
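Here is a small Python sketch of that split, taking the nominal wage push as given and solving 4e and 5e jointly (so when both b and c are positive there is a small interaction term relative to the simple expression in 6e). The first parameter set reproduces Closure 1, where the whole increase goes to prices; the others are arbitrary intermediate cases.

```python
# Closure 5: take nominal wage growth w as given by the bargaining process, solve
# 4e and 5e jointly for productivity growth y and inflation p, and back out the
# change in the wage share from identity (1). Parameter values are illustrative.

def split(w, b, c):
    y = b * (1 - c) / (1 - b * c) * w    # joint solution of y = b(w - p) and p = c(w - y)
    p = c * (1 - b) / (1 - b * c) * w
    s = w - p - y                        # identity (1): s = w - p - y
    return y, p, s

for b, c in [(0.0, 1.0), (0.0, 0.5), (0.2, 0.5)]:
    y, p, s = split(w=0.05, b=b, c=c)
    print(f"b={b}, c={c}: inflation {p:.3f}, productivity {y:.3f}, wage share change {s:.3f}")
```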
Here is a typical example of the excluded middle in the conventional wisdom: “economic theory suggests that increases in labor costs in excess of productivity gains should put upward pressure on prices; hence, many models assume that prices are determined as a markup over unit labor costs.” Notice the leap from the claim that higher wages put some pressure on prices, to the claim that wage increases are fully passed through to higher prices. Or in terms of this last framework: theory suggests that c should be greater than zero, so let’s assume c is equal to one. One important consequence is to implicitly exclude the possibility of a change in the wage share.
*
So what do we get from this?
First, the identity itself. On one level it is obvious. But too many policy discussions — and even scholarship — talk about various forms of the Phillips curve without taking account of the logical relationship between wages, inflation, productivity and factor shares. This is not unique to this case, of course. It seems to me that scrupulous attention to accounting relationships, and to logical consistency in general, is one of the few unambiguous contributions economists make to the larger conversation with historians and other social scientists. [10]
For example: I had some back and forth with Phil Pilkington in comments and on twitter about the Jacobin piece. He made some valid points. But at one point he wrote: “Wages>inflation + productivity = trouble!” Now, wages > inflation + productivity growth just means, an increasing labor share. It’s two ways of saying the same thing. But I’m pretty sure that Phil did not intend to write that an increase in the labor share always means trouble. And if he did seriously mean that, I doubt one reader in a hundred would understand it from what he wrote.
More consequentially, austerity and liberalization are often justified by the need to prevent “real unit labor costs” from rising. What’s not obvious is that “real unit labor costs” is simply another word for the labor share: by definition, the change in real unit labor costs is just the change in nominal wages less the sum of inflation and productivity growth. Felipe and Kumar make exactly this point in their critique of the use of unit labor costs as a measure of competitiveness in Europe: “unit labor costs calculated with aggregate data are no more than the economy’s labor share in total output multiplied by the price level.” As they note, one could just as well compute “unit capital costs,” whose movements would be just the opposite. But no one ever does; instead they pretend that a measure of distribution is a measure of technical efficiency.
Second, the various closures. To me the question of which behavioral relations we combine the identity with — that is, which closure we use — is not about which one is true, or best in any absolute sense. It’s about the various domains in which each applies. Probably there are periods, places, timeframes or policy contexts in which each of the five closures gives the best description of the relevant behavioral links. Economists, in my experience, spend more time working out the internal properties of formal systems than exploring rigorously where those systems apply. But a model is only useful insofar as you know where it applies, and where it doesn’t. Or as Keynes put it in a quote I’m fond of, the purpose of economics is “to provide ourselves with an organised and orderly method of thinking out particular problems” (my emphasis); it is “a way of thinking … in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Or in the words of Trygve Haavelmo, as quoted by Leijonhufvud:
There is no reason why the form of a realistic model (the form of its equations) should be the same under all values of its variables. We must face the fact that the form of the model may have to be regarded as a function of the values of the variables involved. This will usually be the case if the values of some of the variables affect the basic conditions of choice under which the behavior equations in the model are derived.
I might even go a step further. It’s not just that to use a model we need to think carefully about the domain over which it applies. It may even be that the boundaries of its domain are the most interesting thing about it. As economists, we’re used to thinking of models “from the inside” — taking the formal relationships as given and then asking what the world looks like when those relationships hold. But we should also think about them “from the outside,” because the boundaries within which those relationships hold are also part of the reality we want to understand. [11] You might think about it like laying a flat map over some curved surface. Within a given region, the curvature won’t matter, the flat map will work fine. But at some point, the divergence between trajectories in our hypothetical plane and on the actual surface will get too large to ignore. So we will want to have a variety of maps available, each of which minimizes distortions in the particular area we are traveling through — that’s Keynes’ and Haavelmo’s point. But even more than that, the points at which the map becomes unusable, are precisely how we learn about the curvature of the underlying territory.
Some good examples of this way of thinking are found in the work of Lance Taylor, which often situates a variety of model closures in various particular historical contexts. I think this kind of thinking was also very common in an older generation of development economists. A central theme of Arthur Lewis’ work, for example, could be thought of in terms of poor-country labor markets that look like what I’ve called Closure 3 and rich-country labor markets that look like Closure 5. And of course, what’s most interesting is not the behavior of these two systems in isolation, but the way the boundary between them gets established and maintained.
To put it another way: Dialectics, which is to say science, is a process of moving between the concrete and the abstract — from specific cases to general rules, and from general rules to specific cases. As economists, we are used to grounding concrete in the abstract — to treating things that happen at particular times and places as instances of a universal law. The statement of the law is the goal, the stopping point. But we can equally well ground the abstract in the concrete — treat a general rule as a phenomenon of a particular time and place.
[1] In graduate school you then learn to forget about the existence of businesses and investment, and instead explain the effect of interest rates on current spending by a change in the optimal intertemporal path of consumption by a representative household, as described by an Euler equation. This device keeps academic macroeconomics safely quarantined from contact with discussion of real economies.
[2] In the US, Okun’s law looks something like Delta-U = 0.5(2.5 – g), where Delta-U is the change in the unemployment rate and g is inflation-adjusted growth in GDP. These parameters vary across countries but seem to be quite stable over time. In my opinion this is one of the more interesting empirical regularities in macroeconomics. I’ve blogged about it a bit in the past and perhaps will write more in the future.
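For concreteness, plugging a few growth rates into this rule of thumb (a sketch of the stated formula, nothing more):

```python
# Okun's law as stated above: Delta-U = 0.5 * (2.5 - g), with g in percent.
for g in (4.0, 2.5, 0.0, -2.0):
    print(f"real GDP growth {g:+.1f}% -> change in unemployment {0.5 * (2.5 - g):+.2f} points")
```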
[3] To see why this must be true, write L for total employment, Z for the level of nominal GDP, Y for per-capita GDP, W for the average wage, and P for the price level. The labor share S is by definition equal to total wages divided by GDP:
S = WL / Z
Real output per worker is given by
Y = (Z/P) / L
Now combine the equations and we get W = P Y S. This is in levels, not changes. But recall that small percentage changes can be approximated by log differences. Taking log differences of both sides, and writing the growth rate of each variable in lowercase, we get w = y + p + s. For the kinds of changes we observe in these variables, the approximation will be very close.
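A quick numerical check, with made-up growth rates, of just how close the approximation is:

```python
# Check of the identity: start from levels, where W = P * Y * S, apply growth
# rates to each component, and compare wage growth with p + y + s.
import math

P, Y, S = 1.0, 100.0, 0.6        # arbitrary initial levels
p, y, s = 0.03, 0.015, -0.005    # assumed one-year growth rates

W0 = P * Y * S
W1 = (P * (1 + p)) * (Y * (1 + y)) * (S * (1 + s))

print("wage growth, exact:     ", round(W1 / W0 - 1, 5))                    # about 0.04022
print("p + y + s:              ", round(p + y + s, 5))                      # 0.04
print("log difference of W:    ", round(math.log(W1 / W0), 5))              # exactly equal to...
print("sum of log differences: ", round(math.log(1 + p) + math.log(1 + y) + math.log(1 + s), 5))
```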
[4] I won’t keep putting “real” in quotes. But it’s important not to uncritically accept the dominant view that nominal quantities like wages are simply reflections of underlying non-monetary magnitudes. In fact the use of “real” in this way is deeply ideological.
[5] A discovery that seems to get made over and over again, is that since an identity is true by definition, nothing needs to adjust to maintain its equality. But it certainly does not follow, as people sometimes claim, that this means you cannot use accounting identities to reason about macroeconomic outcomes. The point is that we are always using the identities along with some other — implicit or explicit — claims about the choices made by economic units.
[6] Note that it’s not necessary to use a labor supply curve here, or to make any assumption about the relationship between wages and marginal product.
[7] Often confused with Milton Friedman’s natural rate of unemployment. But in fact the concepts are completely different. In Friedman’s version, causality runs the other way, from the inflation rate to the unemployment rate. When realized inflation is different from expected inflation, in Friedman’s story, workers are deceived about the real wage they are being offered and so supply the “wrong” amount of labor.
[8] Why a permanently rising price level is inconsequential but a permanently rising inflation rate is catastrophic, is never explained. Why are real outcomes invariant to the first derivative of the price level, but not to the second derivative? We’re never told — it’s an article of faith that money is neutral and super-neutral but not super-super-neutral. And even if one accepts this, it’s not clear why we should pick a target of 2%, or any specific number. It would seem more natural to think inflation should follow a random walk, with the central bank holding it at its current level, whatever that is.
[9] We could instead use w – p = r*, with an exogenously given rate of increase in real wages. The logic would be the same. But it seems simpler and more true to the classics to use the form in 3c. And there do seem to be domains over which constant real wages are a reasonable assumption.
[10] I was just starting grad school when I read Robert Brenner’s long article on the global economy, and one of the things that jumped out at me was that he discussed the markup and the wage share as if they were two independent variables, when of course they are just two ways of describing the same thing. Using s still as the wage share, and m as the average markup of prices over wages, s = 1 / (1 + m). This is true by definition (unless there are shares other than wages or profits, but no others figure in Brenner’s analysis). The markup may reflect the degree of monopoly power in product markets while the labor share may reflect bargaining power within the firm, but these are two different explanations of the same concrete phenomenon. I like to think that this is a mistake an economist wouldn’t make.
[11] The Shaikh piece mentioned above is very good. I should add, though, the last time I spoke to Anwar, he criticized me for “talking so much about the things that have changed, rather than the things that have not” — that is, for focusing so much on capitalism’s concrete history rather than its abstract logic. This is certainly a difference between Shaikh’s brand of Marxism and whatever it is I do. But I’d like to think that both approaches are called for.
EDIT: As several people pointed out, some of the equations were referred to by the wrong numbers. Also, Equation 5a and 5e had inflation-expectation terms in them that didn’t belong. Fixed.
EDIT 2: I referred to an older generation of development economics, but I think this awareness that the territory requires various different maps, is still more common in development than in most other fields. I haven’t read Dani Rodrik’s new book, but based on reviews it sounds like it puts forward a pretty similar view of economics methodology.
[Apologies to any non-econ readers, this is even more obscure than usual.]
Brad DeLong observed last week that one of the most surprising things about the Great Recession is how far long-term interest rates have followed short rates toward zero.
I have gotten three significant pieces of the past four years wrong. Three things surprised and still surprise me: (1.) The failure of central banks to adopt a rule like nominal GDP targeting, or its equivalent. (2.) The failure of wage inflation in the North Atlantic to fall even farther than it has–toward, even if not to, zero. (3.) The failure of the yield curve to sharply steepen: federal funds rates at zero I expected, but 30-Year U.S. Treasury bond nominal rates at 2.7% I did not.
… The third… may be most interesting.
Back in March 2009, the University of Chicago’s Robert Lucas confidently predicted that within three years the U.S. economy would be back to normal. A normal U.S. economy has a short-term nominal interest rate of 4%. Since the 10-Year U.S. Treasury bond rate tends to be one percentage point more than the average of expected future short-term interest rates over the next decade, even five expected years of a deeply depressed economy with essentially zero short-term interest rates should not push the 10-Year Treasury rate below 3%. (And, indeed, the Treasury rate fluctuated around 3 to 3.5% for the most part from late 2008 through mid 2011.) But in July of 2011 the 10-Year U.S. Treasury bond rate crashed to 2%, and at the start of June it was below 1.5%.
The possible conclusions are stark: either those investing in financial markets expect … [the] current global depressed economy to endure in more-or-less its current state for perhaps a decade, perhaps more; or … the ability of financial markets to do their job and sensibly price relative risks and returns at a rational level has been broken at a deep and severe level… Neither alternative is something I would have or did predict, or even imagine.
I also am surprised by this, and for similar reasons to DeLong. But I think the fact that it’s surprising has some important implications, which he does not draw out.
Here’s a picture:
The dotted black line is the Federal Funds rate, set, of course, by the central bank. The red line is the 10-year Treasury; it’s the dip at the far right in that one that surprises DeLong (and me). The green line is the 30-year Treasury, which behaves similarly but has fallen by less. Finally, the blue line is the BAA bond rate, a reasonable proxy for the interest rate faced by large business borrowers; the 2008 financial crisis is clearly visible. (All rates are nominal.) While the Treasury rates are most relevant for the expectations story, it’s the interest rates faced by private borrowers that matter for policy.
The recent fall in 10-year Treasuries is striking. But it’s at least as striking how slowly and incompletely they, and corporate bonds, respond to changes in Fed policy, especially recently. It’s hard to look at this picture and not feel a twinge of doubt about the extent to which the Fed “sets” “the” interest rate in any economically meaningful sense. As I’ve mentioned here before, when Keynes referred to the “liquidity trap,” he didn’t mean the technical zero lower bound to policy rates, but the delinking of policy rates from the economically important long rates. Clearly, it makes no difference whether or not you can set a policy rate below zero if there’s reason to think that longer rates wouldn’t follow it down in any case. And I think there is reason to think that.
The snapping of the link between monetary policy and other rates was written about years ago by Benjamin Friedman, as a potential; it figured in my comrade Hasan Comert’s dissertation more recently, as an actuality. Both of them attribute the disconnect to institutional and regulatory changes in the financial system. And I agree, that’s very important. But after reading Leijonhufvud’s On Keynesian Economics and the Economics of Keynes [1], I think there may be a deeper structural explanation.
As DeLong says, in general we think that long interest rates should be equal to the average expected short rates over their term, perhaps plus a premium. [2] So what can we say about interest rate expectations? One obvious question is, are they elastic or inelastic? Elastic expectations change easily; in particular, unit-elastic expectations mean that whatever the current short rate is, it’s expected to continue indefinitely. Inelastic expectations change less easily; in the extreme case of perfectly inelastic interest rate expectations, your prediction for short-term interest rates several years from now is completely independent of what they are now.
Inelastic interest-rate expectations are central to Keynes’ vision of the economy. (Far more so than, for instance, sticky wages.) They are what limit the effectiveness of monetary policy in a depression or recession, with the liquidity trap simply the extreme case of the general phenomenon. [3] His own exposition is a little hard to follow, but the simplest way to look at it is to recall that when interest rates fall, bond prices rise, and vice versa. (In fact they are just two ways of describing the same thing.) So if you expect a rise in interest rates in the future that means you’ll expect a capital loss if you hold long-duration bonds, and if you expect a fall in interest rates you’ll expect a capital gain. So the more likely it seems that short-term interest rates will revert to some normal level in the future, the less long rates should follow short ones.
This effect gets stronger as we consider longer maturities. In the limiting case of a perpetuity — a bond that makes a fixed dollar payment every period forever — the value of the bond is just p/i, where p is the payment in each period and i is the interest rate. So when you consider buying a bond, you have to consider not just the current yield, but the possibility that interest rates will change in the future. Because if they do, the value of the bonds you own will rise or fall, and you will experience a capital gain or loss. Of course future interest rates are never really known. But Keynes argued that there is almost always a strong convention about the normal or “safe” level of interest.
Note that the logic above means that the relationship between short and long rates will be different when rates are relatively high vs. when they are relatively low. The lower are rates, the greater the capital loss from an increase in rates. As long rates approach zero, the potential capital loss from an increase approaches infinity.
Let’s make this concrete. If we write i_s for the short interest rate and i_l for the long interest rate, B for the current price of long bonds, and BE for the expected price of long bonds a year from now, then for all assets to be willingly held it must be the case that i_l = i_s – (BE/B – 1); that is, interest on the long bond will need to be just enough higher (or lower) than the short rate to cancel out the capital loss (or gain) expected from holding the long bond. If bondholders expect the long-run value of bond prices to be the same as the current value, then long and short rates should be the same. Now for simplicity let’s assume we are talking about perpetuities (the behavior of long but finite bonds will be qualitatively similar), so B is just 1/i_l. [4] Then we can ask: how much do short rates have to fall to produce a one-point fall in long rates?
Obviously, the answer will depend on expectations. The standard economist’s approach to expectations is to say they are true predictions of the future state of the world, an approach with some obvious disadvantages for those of us without functioning time machines. A simpler, and more empirically relevant, way of framing the question is to ask how expectations change based on changes in the current state of the world — which, unlike the future, we can observe. Perfectly inelastic expectations mean that your best guess about interest rates at some future date is not affected at all by the current level of interest rates; unit-elastic expectations mean that your best guess changes one for one with the current level. And of course there are all the possibilities in between. Let’s quantify this as the subjective annual probability that a departure of interest rates from their current or “normal” level will subsequently be reversed. Now we can calculate the exact answer to the question posed above, as shown in the next figure.
For instance, suppose short rates are initially at 6 percent, and suppose this is considered the “normal” level, in the sense that the marginal participant in the bond market regards an increase or decrease as equally likely. Then the long rate will also be 6 percent. Now we want to get the long rate down to 5 percent. Suppose interest rate expectations are a bit less than unit elastic — i.e., when market rates change, people adjust their views of normal rates by almost but not quite as much. Concretely, say that the balance of expectations is that there is a net 5 percent annual chance that rates will return to their old normal level. If the long rate does rise back to 6 percent, people who bought bonds at 5 percent will suffer a capital loss of 20 percent. A 5 percent chance of a 20 percent loss equals an expected annual loss of 1 percent, so long rates will need to be one point higher than short rates for people to hold them. [5] So from a starting point of equality, for long rates to fall by one point, short rates must fall by two points. You can see that on the blue line on the graph. You can also see that if expectations are more than a little inelastic, the change in short rates required for a one-point change in long rates is impossibly large unless rates are initially very high.
It’s easy enough to do these calculations; the point is that unless expectations are perfectly elastic, we should always expect long rates to change less than one for one with short rates; the longer the rates considered, the more inelastic expectations, and the lower initial rates, the less responsive long rates will be. At the longest end of the term structure — the limiting case of a perpetuity — it is literally impossible for interest rates to reach zero, since that would imply an infinite price.
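Here’s a minimal Python sketch of that calculation, using exact perpetuity prices and the expected-return condition stated a few paragraphs up; q is the annual reversion probability, i.e. the degree of inelasticity of expectations. (The worked example above rounds the capital loss to 20 percent, so its figures are a touch different from these, but the shape of the answer is the same.)

```python
# How far must short rates fall to hold the long (perpetuity) rate at a target
# below its "normal" level, if the market assigns an annual probability q to
# rates snapping back to normal? Uses the condition i_l = i_s - (BE/B - 1),
# with B = 1/i_l for a perpetuity.

def required_short_rate(i_long, i_normal, q):
    expected_capital_gain = q * (i_long / i_normal - 1.0)  # negative when i_long < i_normal
    return i_long + expected_capital_gain

for i_normal in (0.06, 0.04, 0.02):
    i_long = i_normal - 0.01                   # target: one point below normal
    i_short = required_short_rate(i_long, i_normal, q=0.05)
    print(f"normal rate {i_normal:.0%}: long rate {i_long:.0%} requires a short rate of {i_short:.2%}")
# At low initial rates the required short rate goes negative -- the move is
# impossible, which is the point about long rates refusing to follow short
# rates down toward zero.
```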
This dynamic is what Keynes was talking about when he wrote:
If . . . the rate of interest is already as low as 2 percent, the running yield will only offset a rise in it of as little as 0.04 percent per annum. This, indeed, is perhaps the chief obstacle to a fall in the rate of interest to a very low level . . . [A] long-term rate of interest of (say) 2 percent leaves more to fear than to hope, and offers, at the same time, a running yield which is only sufficient to offset a very small measure of fear.
Respectable economists like DeLong believe that there is a true future path of interest rates out there, which current rates should reflect; either the best current-information prediction is of government policy so bad that the optimal interest rate will continue to be zero for many years to come, or else financial markets have completely broken down. I’m glad the second possibility is acknowledged, but there is a third option: There is no true future course of “natural” rates out there, so markets adopt a convention for normal interest rates based on past experience. Given the need to take forward-looking actions without true knowledge of the future, this is perfectly rational in the plain-English sense, if not in the economist’s.
A final point: For Keynes — a point made more clearly in the Treatise than in the General Theory — the effectiveness of monetary policy depends critically on the fact that there are normally market participants with differing expectations about future interest rates. What this means is that when interest rates rise, people who think the normal or long-run rate of interest is relatively low (“bulls”) can sell bonds to people who think the normal rate is high (“bears”), and similarly when interest rates fall the bears can sell to the bulls. Thus the marginal bond will be held by someone who thinks the current rate of interest is the normal one, and so does not require a premium for expected capital gains or losses. This is the same as saying that the market as a whole behaves as if expectations are unit-elastic, even though this is not the case for individual participants. [6] But when interest rates move too far, there will no longer be enough people who think the new rate is normal to willingly hold the stock of bonds without an interest-rate risk premium. In other words, you run out of bulls or bears. Keynes was particularly concerned that an excess of bear speculators relative to bulls could keep long interest rates permanently above the level compatible with full employment. The long rate, he warned,
may fluctuate for decades about a level which is chronically too high for full employment; – particularly if it is the prevailing opinion that the rate of interest is self-adjusting, so that the level established by convention is thought to be rooted in objective grounds much stronger than convention, the failure of employment to attain an optimum level being in no way associated, in the minds either of the public or of authority, with the prevalence of an inappropriate range of rates of interest.
If the belief that interest rates cannot fall below a certain level is sufficiently widespread, it becomes self-fulfilling. If people believe that long-term interest rates can never persistently fall below, say, 3 percent, then anyone who buys long bonds much below that is likely to lose money. And, as Keynes says, this kind of self-stabilizing convention is more likely to the extent that people believe that it’s not just a convention, but that there is some “natural rate of interest” fixed by non-monetary fundamentals.
So what does all this mean concretely?
1. It’s easy to see inelastic interest-rate expectations in the data. Long rates consistently lag behind short rates. During the 1960s and 1970s, when rates were secularly rising, long rates were often well below the Federal Funds rate, especially during tightening episodes; during the period of secularly falling rates since 1980, this has almost never happened, but very large term spreads have become more common, especially during loosening episodes.
2. For the central bank to move long rates, it must persuade markets that changes in policy are permanent, or at least very persistent; this is especially true when rates are low. (This is the main point of this post.) The central bank can change rates on 30-year bonds, say, only by persuading markets that average rates over the next 30 years will be different than previously believed. Over small ranges, the existence of varying beliefs in the bond market makes this not too difficult (since the central bank doesn’t actually have to change any individual’s expectations if bond sales mean the marginal bondholder is now a bull rather than a bear, or vice versa) but for larger changes it is more difficult. And it becomes extremely difficult to the extent that economic theory has taught people that there is a long run “natural” rate of interest that depends only on technology and time preferences, which monetary policy cannot affect.
Now, the obvious question is, how sure are we that long rates are what matters? I’ve been treating a perpetual bond as an approximation of the ultimate target of monetary policy, but is that reasonable? Well, one point on which Keynes and today’s mainstream agree is that the effect of interest rates on the economy comes through demand for long-lived assets — capital goods and housing. [7] According to the BEA, the average current-cost age of private fixed assets in the US is a bit over 21 years, which implies that the expected lifetime of a new fixed asset must be quite a bit more than that. For Keynes (Leijonhufvud stresses this point; it’s not so obvious in the original texts) the main effect of interest rates is not on the financing conditions for new fixed assets, as most mainstream and heterodox writers assume, but on the discount rate applied to the assets’ future returns. In that case the maturity of the assets is what matters. On the more common view, it’s the maturity of the debt used to finance them, which may be a bit less; but the maturity of debt is usually matched to the maturity of assets, so the conclusion is roughly the same. The relevant time horizon for fixed assets is long enough that perpetuities are a reasonable first approximation. [8]
3. So if long rates are finally falling now, it’s only because an environment of low rates is being established as the new normal. There’s a great deal of resistance to this, since if interest rates do return to their old normal levels, the capital losses to bondholders will be enormous. So to get long rates down, the Fed has to overcome intense resistance from bear speculators. Only after a great deal of money has been lost betting on a return of interest rates to old levels will market participants begin to accept that ultra-low rates are the new normal. The recent experience of Bill Gross of PIMCO (the country’s largest bond fund) is a perfect example of this story. In late 2010, he declared that interest rates could absolutely fall no further; it was the end of the 30-year bull market in bonds. A year later, he put his money where his mouth was and sold all his holdings of Treasuries. As it turned out, this was just before bond prices rose by 30 percent (the flipside of the fall in rates), a misjudgment that cost his investors billions. But Gross and the other “bears” had to suffer those kinds of losses for the recent fall in long rates to be possible. (It is also significant that they have not only resisted in the market, but politically as well.) The point is, outside a narrow range, changes in monetary policy are only effective when they are perceived not just as countercyclical, but as carrying information about “the new normal.” Zero only matters if it’s permanent zero.
4. An implication of this is that in a world where the lifespan of assets is much longer than the scale of business-cycle fluctuations, we cannot expect interest rates to be stationary if monetary policy is the main stabilization tool. Unless expectations are very elastic, effective monetary policy requires secular drift in interest rates, since each short-term stabilization episode will result in a permanent change in interest rates. [9] You can see this historically: the falls in long rates in the 1990 and 2000 loosening episodes both look about equal to the permanent components of those changes. This is a problem for two reasons: First, because it means that monetary policy must be persistent enough to convince speculators that it does represent a permanent change, which means that it will act slower, and require larger changes in short rates (with the distortions those entail), than in the unit-elastic expectations case. And second, because if there is some reason to prefer one long-run level of interest rates to another (either because you believe in a “natural” rate, or because of the effects on income distribution, asset price stability, etc.) it would seem that maintaining that rate is incompatible with the use of monetary policy for short-run stabilization. And of course the problem is worse, the lower interest rates are.
5. One way of reading this is that monetary policy works better when interest rates are relatively high, implying that if we want to stabilize the economy with the policy tools we have, we should avoid persistently low interest rates. Perhaps surprisingly, given what I’ve written elsewhere, I think there is some truth to this. If “we” are social-welfare-maximizing managers of a capitalist economy, and we are reliant on monetary policy for short-run stabilization, then we should want full employment to occur in the vicinity of nominal rates around 10 percent, versus five percent. (One intuitive way of seeing this: Higher interest rates are equivalent to attaching a low value to events in the future, while low interest rates are equivalent to a high value on those events. Given the fundamental uncertainty about the far future, choices in the present will be more stable if they don’t depend much on far-off outcomes.) In particular — I think it is a special case of the logic I’ve been outlining here, though one would have to think it through — very low interest rates are likely to be associated with asset bubbles. But the conclusion, then, is not to accept a depressed real economy as the price of stable interest rates and asset prices, but rather to “tune” aggregate demand to a higher level of nominal interest rates. One way to do this, of course, is higher inflation; the other is a higher level of autonomous demand, either for business investment (the actual difference between the pre-1980 period and today, I think), or government spending.
[1] The most invigorating economics book I’ve read in years. It’ll be the subject of many posts here in the future, probably.
[2] Why there should be a pure term premium is seldom discussed but actually not straightforward. It’s usually explained in terms of liquidity preference of lenders, but this invites the questions of (1) why liquidity preference outweighs “solidity preference”; and (2) why lenders’ preferences should outweigh borrowers’. Leijonhufvud’s answer, closely related to the argument of this post, is that the “excessively long” lifespan of physical capital creates chronic excess supply at the long end of the asset market. In any case, for the purpose of this post, we will ignore the pure premium and assume that long rates are simply the average of expected short rates.
[3] Keynes did not, as is sometimes suggested by MMTers and other left Keynesians, reject the effectiveness of monetary policy in general. But he did believe that it was much more effective at stabilizing full employment than at restoring full employment from a depressed state.
[4] I will do up these equations properly once the post is done.
[5] I anticipate an objection to reasoning on the basis of an equilibrium condition in asset markets. I could just say, Keynes does it. But I do think it’s legitimate, despite my rejection of the equilibrium methodology more generally. I don’t think there’s any sense in which human behavior can be described as maximizing some quantity called “utility,” not even as a rough approximation; but I do think that capitalist enterprises can be usefully described as maximizing profit. I don’t think that expectations in financial markets are “rational” in the usual economists’ sense, but I do think that one should be able to describe asset prices in terms of some set of expectations.
[6] We were talking a little while ago with Roger Farmer, Rajiv Sethi, and others about the desirability of limiting economic analysis to equilibria, i.e. states where all expectations are fulfilled. This implies, among other things, that all expectations must be identical. Keynes’ argument for why long rates are more responsive to short rates within some “normal” range of variation is — whether you think it’s right or not — an example of something you just can’t say within Farmer’s preferred framework.
[7] Despite this consensus, this may not be entirely the case; and in fact to the extent that monetary policy is effective in the real world, other channels, like income distribution, may be important. But let’s assume for now that demand for long-lived assets is what matters.
[8] Hicks had an interesting take on this, according to Leijonhufvud. Since the production process is an integrated whole, “capital” does not consist of particular goods but of a claim on the output of the process as a whole. Since this process can be expected to continue indefinitely, capital should be generally assumed to be infinitely-lived. When you consider how much of business investment is motivated by maintaining the firm’s competitive position — market share, up to date technology, etc. — it does seem reasonable to see investment as buying not a particular capital good but more of the firm as a whole.
[9] There’s an obvious parallel with the permanent inflation-temporary employment tradeoff of mainstream theory. Except, I think mine is correct!
More teaching: We’re starting on the open economy now. Exchange rates, trade, international finance, the balance of payments. So one of the first things you have to explain, is the definition of real and nominal exchange rates:
e_R = e_N P*/P
where P and P* are the home and foreign price levels respectively, and the exchange rate e is defined as the price of foreign exchange (so an appreciation means that e falls and a depreciation means that it rises).
This is a useful definition to know — though of course it’s not as straightforward as it seems, since as we’ve discussed before there are various possible Ps, and once we are dealing with more than two countries we have to decide how to weight them, with different statistical agencies using different weightings. But set all that aside. What I want to talk about now is what a nice little example this equation offers of a structuralist perspective on the economy.
As given above, the equation is an accounting identity. It’s always exactly true, simply because that’s how we’ve defined the real exchange rate. As an accounting identity, it doesn’t in itself say anything about causation. But that doesn’t mean it’s vacuous. After all, we picked this particular definition because we think it is associated with some causal story. [1] The question is, what story? And that’s where things get interesting.
Since we have one equation, we should have one endogenous (or dependent) variable. But which one, depends on the context.
If we are telling a story about exchange rate determination, we might think that the endogenous variable is e_N. If price levels are determined by the evolution of aggregate supply and demand (or the growth of the money stock, if you prefer) in each country, and if arbitrage in the goods market enforces something like Purchasing Power Parity (PPP), then the nominal exchange rate will have to adjust to keep the real price of a comparable basket of goods from diverging across countries.
On the other hand, we might not think PPP holds, at least in the short run, and we might think that the nominal exchange rate cannot adjust freely. (A fixed exchange rate is the obvious reason, but it’s also possible that the forex markets could push the nominal exchange rate to some arbitrary level.) In that case, it’s the real exchange rate that is endogenous, so we can see changes in the price of comparable goods in one country relative to another. This is implicitly the causal structure that people have in mind when they argue that China is pursuing a mercantilist strategy by pegging its nominal exchange rate, that devaluation would improve current account balances in the European periphery, or that the US could benefit from a lower (nominal) dollar. Here the causal story runs from e_N to e_R.
Alternatively, maybe the price level is endogenous. This is less intuitive, but there’s at least one important story where it’s the case. Anti-inflation programs in a number of countries, especially in Latin America, have made use of a fixed exchange rate as a “nominal anchor.” The idea here is that in a small open economy, especially where high inflation has led to widespread use of a foreign currency as the unit of account, the real exchange rate is effectively fixed. So if the nominal exchange rate can also be effectively fixed, then, like it or not, the domestic price level P will have to be fixed as well. Here’s Jeffrey Sachs on the Bolivian stabilization:
The sudden end of a 60,000 percent inflation seems almost miraculous… Thomas Sargent (1986) argued that such a dramatic change in price inflation results from a sudden and drastic change in the public’s expectations of future government policies… I suggest, in distinction to Sargent, that the Bolivian experience highlights a different and far simpler explanation of the very rapid end of hyperinflations. By August 1985,… prices were set either explicitly or implicitly in dollars, with transactions continuing to take place in peso notes, at prices determined by the dollar prices converted at the spot exchange rate. Therefore, by stabilizing the exchange rate, domestic inflation could be made to revert immediately to the US dollar inflation rate.
So here the causal story runs from e_N to P.
In the three cases so far, we implicitly assume that P* is fixed, or at least exogenous. This makes sense; since a single country is much smaller than the world as a whole, we don’t expect anything it does to affect the world price level much. So the last logical possibility, P* as the endogenous variable, might seem to lack a corresponding real-world story. But an individual country is not always so much smaller than the world as a whole, at least not if the individual country is the United States. It’s legitimate to ask whether a change in our price level or exchange rate might not show up as inflation or deflation elsewhere. This is particularly likely if we are focusing on a bilateral relationship. For instance, it might well be that a devaluation of the dollar relative to the renminbi would simply (or mostly) produce corresponding deflation [2] in China, leaving the real exchange rate unchanged.
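As a trivial illustration of “what adjusts,” here is a Python sketch that takes the identity and solves it for whichever variable the chosen story treats as endogenous. The numbers are arbitrary.

```python
# The identity e_R = e_N * Pstar / P, solved for whichever variable a given
# causal story treats as endogenous. Values are arbitrary illustrations.

def solve_identity(e_R=None, e_N=None, P=None, Pstar=None):
    """Exactly one argument should be None; it is returned as the endogenous variable."""
    if e_R is None:   return e_N * Pstar / P      # sticky prices, e_N given: the real rate adjusts
    if e_N is None:   return e_R * P / Pstar      # PPP-style story: the nominal rate adjusts
    if P is None:     return e_N * Pstar / e_R    # nominal-anchor story: home prices adjust
    if Pstar is None: return e_R * P / e_N        # large-country story: foreign prices adjust

# A 10 percent nominal depreciation with sticky prices is a 10 percent real depreciation:
print(solve_identity(e_R=None, e_N=1.1, P=1.0, Pstar=1.0))
# ...while under the nominal-anchor story, fixing e_N and e_R pins down the home price level:
print(solve_identity(e_R=1.0, e_N=2.0, P=None, Pstar=1.0))
```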
Here, of course, we have only one equation. But if we interpret it causally, that is already a model, and the question of “what adjusts?” can be rephrased as the choice between alternative model closures. With multiple-equation models, that choice gets trickier — and it can be tricky enough with one equation.
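To make the four closures concrete, here is a minimal sketch of the same bookkeeping in code. The notation and numbers are mine, and I write the identity as e_R = e_N · P / P*, with the caveat that conventions differ over which price level goes in the numerator; each branch corresponds to one of the causal stories above.

```python
# One identity, four closures: solve e_R = e_N * P / P_star for whichever
# variable the story treats as endogenous. Illustrative only.

def solve_identity(endogenous, e_R=None, e_N=None, P=None, P_star=None):
    """Return the value of the endogenous variable implied by the other three."""
    if endogenous == "e_N":       # PPP closure: the nominal rate adjusts
        return e_R * P_star / P
    if endogenous == "e_R":       # fixed/sticky nominal rate: the real rate adjusts
        return e_N * P / P_star
    if endogenous == "P":         # nominal anchor: the domestic price level adjusts
        return e_R * P_star / e_N
    if endogenous == "P_star":    # big-country case: the foreign price level adjusts
        return e_N * P / e_R
    raise ValueError("unknown variable: " + endogenous)

print(solve_identity("e_N", e_R=1.0, P=110, P_star=100))      # about 0.91
print(solve_identity("e_R", e_N=1.0, P=110, P_star=100))      # 1.1
print(solve_identity("P", e_R=1.0, e_N=1.0, P_star=100))      # 100.0
print(solve_identity("P_star", e_R=1.0, e_N=0.9, P=110))      # 99.0
```

The function is the same in every case; what changes is which variable we let it determine, which is all that “choosing a closure” means here.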
In my opinion, sensitivity to alternative model closures is at the heart of structuralist economics, and is the great methodological innovation of Keynes. The specific application that defines the General Theory is the model closure that endogenizes aggregate income — the interest rate, which was supposed to equilibrate savings and investment, is pinned down by the supply and demand of liquidity, so total income is what adjusts — but there’s a more general methodological principle. “Thinking like an economist,” that awful phrase, should mean being able to choose among different stories — different model closures — based on the historical context and your own interests. It should mean being able to look at a complex social reality and judge which logical relationships represent the aspects of it you’re currently interested in, and which accounting identities are most relevant to the story you want to tell. Or as Keynes put it, economics should be thought of as
a branch of logic, a way of thinking … in terms of models, joined to the art of choosing models which are relevant to the contemporary world. … [The goal is] not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems.
Much of mainstream macroeconomics assumes there is a “true” model of the world. Connected to this, there’s an insistence — shared even by a lot of heterodox macro — on regarding some variables as being strictly exogenous and others as strictly endogenous, so that in every story causality runs the same way. In the canonical story, tastes, technology and endowments (one can’t help hearing: by the Creator) are perfectly fixed, and everything else is perfectly adjustable. [3]
Better to follow Keynes, and think about models as more or less useful for clarifying the logic of particular stories.
EDIT: Of course not everyone who recognizes the methodological distinction I’m making here agrees that the eclecticism of structuralism is an advantage. Here is my teacher Peter Skott (with Ben Zipperer):
The `heterodox’ tradition in macroeconomics contains a wide range of models. Kaleckian models treat the utilization rate as an accommodating variable, both in the short and the long run. Goodwin’s celebrated formalization of Marx, by contrast, takes the utilization rate as fixed and looks at the interaction between employment and distribution. Distribution is also central to Kaldorian and Robinsonian theories which, like Goodwin, endogenize the profit share and take the utilization rate as structurally determined in the long run but, like the Kaleckians, view short-run variations in utilization as an intrinsic part of the cycle. The differences in these and other areas are important, and this diversity of views on core issues is no cause for celebration.
There is no reason why the form of a realistic model (the form of its equations) should be the same under all values of its variables. We must face the fact that the form of the model may have to be regarded as a function of the values of the variables involved. This will usually be the case if the values of some of the variables affect the basic conditions of choice under which the behavior equations in the model are derived.
That’s what I’m talking about. There is no “true” model of the economy. The behavioral relationships change depending on where we are in economic space.
Also, Bruce Wilder has a long and characteristically thoughtful comment below. I don’t agree with everything he says — it seems a little too hopeless about the possibility of useful formal analysis even in principle — but it’s well worth reading.
[1] “Accounting identities don’t tell causal stories” is a bit like “correlation doesn’t imply causation.” Both statements are true in principle, but the cases we’re interested in are precisely the cases where we have some reason to believe the relationship is in fact causal. And for both statements, the converse does not hold: a causal story that violates accounting identities, or for which there is no corresponding correlation, has a problem.
[2] Or lower real wages, the same thing in this context.
[3] Or you sometimes get a hierarchy of “fast” and “slow” variables, where the fast ones are supposed to fully adjust before the slow ones change at all.
Does anybody else remember that Kurt Vonnegut story “Harrison Bergeron”? (It’s an early one; he reused the conceit, I think, in one of his novels — The Sirens of Titan maybe?) The idea is that in a future egalitarian dystopia, perfect fairness is achieved by subjecting everyone to penalties corresponding to their talents — the physically fit have to wear burdensome weights, smart people like you and me and Kurt have earphones subjecting us to distracting noises, and so on.
As a story, it’s not much — sort of a Simple English version of The Fountainhead. But I thought of it when I read this post from Nick Rowe last month. Microeconomics isn’t normally my bag, but this was fun.
Suppose we have a group of similar people. One of them has to do some unpleasant or dangerous job, defending the border against the Blefuscudians, say. Has to be one person, they can’t rotate. So what is the welfare-maximizing way to allocate this bad job? Have a draft where someone is picked by lot and compelled to do it, or offer enough extra pay for it that someone volunteers? You’d think that standard micro would say the market solution is best. But — well, here’s Nick:
The volunteer army is fair ex post. The one who volunteers gets the same level of utility as the other nine. … The lottery is unfair ex post, because they all get the same consumption but one has a nastier job. That’s obvious. What is not obvious, until you think about it, is that … the lottery gives higher expected utility. That’s the result of Theodore Bergstrom’s minor classic “Soldiers of Fortune” paper.
The intuition is straightforward. Think about the problem from the Utilitarian perspective, of maximising the sum of the ten utilities. This requires equalising the marginal utility of consumption for all ten men. … The volunteer army gives the soldier higher consumption, and so lower marginal utility of consumption, so does not maximise total utility. ….
If we assume, as may be reasonable, that taking the job reduces the marginal utility of consumption, that strengthens the advantages of the lottery over the volunteer army. It also means they would actually prefer a lottery where the soldier has lower consumption than those who stay home. The loser pays the winners, as well as risking his life, in the most efficient lottery.
It’s a clever argument. You need to pay someone extra to do a crap job. (Never mind that those sorts of compensating differentials are a lot more common in theory than in the real world, where the crappiest jobs are also usually the worst paid. We’re thinking like economists here.) But each dollar of consumption contributes less to our happiness than the last one. So implementing the fair outcome leaves everyone with lower expected utility than just telling the draftee to suck it up.
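Since the whole argument runs on the concavity of utility, a toy calculation makes it concrete. This is my own illustration, not Bergstrom’s model: ten people, a fixed total of 10 units of consumption to divide, a job that costs 1 util of disutility, and log utility of consumption.

```python
# Toy comparison of the lottery and the volunteer army, assuming log utility.
# All numbers are illustrative.
import math

C, d, n = 10.0, 1.0, 10   # total consumption, disutility of the job, people

# Lottery: everyone consumes C/n; one unlucky draftee also bears d.
lottery_total = n * math.log(C / n) - d

# Volunteer army: the soldier is paid enough that ex post utilities are equal,
# i.e. ln(c_soldier) - d = ln(c_other), subject to c_soldier + (n-1)*c_other = C.
c_other = C / ((n - 1) + math.exp(d))
c_soldier = c_other * math.exp(d)
volunteer_total = (n - 1) * math.log(c_other) + math.log(c_soldier) - d

print(round(lottery_total, 3))    # -1.0
print(round(volunteer_total, 3))  # about -1.59
```

The volunteer outcome is fair ex post, but total (and hence expected) utility is lower, because the soldier’s extra consumption is bought at a higher marginal-utility cost to everyone else.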
Of course, this point has broader applications. I’d be shocked if some version of it hasn’t been deployed as part of an anti-Rawlsian case against social insurance. Nick uses it to talk about CEO pay. That’s the direction I want to go in, too.
We all know why Bill Gates and Warren Buffett and Carlos Slim Helu are so rich, right? It’s because they sit on top of a vast machine for transforming human lives into commodities. Sorry: it’s because market income is equal to marginal product, and Buffett and Gates and Slim and everybody named Walton are just so damn productive. We have to pay them what they’re worth or they won’t produce all this valuable stuff that no one else can. Right?
The problem is, even if the monstrously rich really were just as monstrously productive, that wouldn’t make them utility monsters. Even if you think that the distribution of income is determined by the distribution of ability, there’s no reason to think that people’s ability to produce and their ability to derive enjoyment from consumption coincide. Indeed, to the extent that being super productive means having less leisure, and means developing your capacity for engineering or order-giving rather than for plucking the hour and the day virtuously and well, they might well be distributed inversely. But even if Paul Allen really does get an ecstasy from taking one of his jets to his helicopter to his boat off the coast of Southern France that we plebes, with our puny so-called vacations [1], can’t even imagine, the declining marginal utility of consumption is still going to catch up with him eventually. Two private jets may be better than one, but surely they’re not twice as good.
And that, if you believe the marginal product story, is a problem. The most successful wealth-creators will eventually reach a point where they may be as productive as ever, but it’s no longer worth their while to keep working. Look at Bill Gates. Can you blame him for retiring? He couldn’t spend the money he’s got in ten lifetimes; he can’t even give it away. But if you believe his salary up till now has reflected his contribution to the social product, his retirement is a catastrophe for the rest of us. Atlas may not shrug, but he yawns.
Wealth blunts the effects of incentives. So we want the very productive to have lots of income, but very little wealth. They should want to work 12 hour days to earn more, but they shouldn’t be tempted to cut their hours back to spend what they already earned. It seems like an insoluble problem, closely related to Suresh’s superstar doctor problem, which liberalism has no good answer to. [2]
But that’s where we come back to Harrison Bergeron. It’s perfectly possible for superstar doctors to have both a very high income and very low wealth. All that’s required is that they start in a very deep hole.
If we really believed that the justification for income disparities is to maintain incentives for the productive, we’d adopt a version of the Bergeron plan. We’d have tests early in life to assess people’s innate abilities, and the better they scored, the bigger the bill we’d stick them with. If it’s important that “he who does not work, neither shall he eat,” [3] it’s most important for those who have the greatest capacity to work. Keep Bill Gates hungry, and he might have spent another 20 years extracting rents from network externalities (sorry: creating value for Microsoft’s shareholders and customers).
There’s no shortage of people to tell you that it might seem unfair that Paul Allen has two private jets in a world where kids in Kinshasa eat only every two days, but that in the long run the tough love of proper incentives will make more pie for everyone. Many of those people would go on to say that the reason Paul Allen needs to be encouraged so strenuously is because of his innate cognitive abilities. But very few of those people, I think, would feel anything but moral outrage at the idea that if people with Allen’s cognitive capacities could be identified at an early age, they should be stuck with a very big bill and promised a visit from very big bailiffs if they ever missed a payment. And yet the logic is exactly the same.
Of course I’m not endorsing this idea; I don’t think the rich, by and large, have any special cognitive capacities, so I’m happy just to expropriate them; we don’t have to work them until they drop. (People who do believe that income inequality is driven by marginal productivity don’t have such an easy out.)
But it’s funny, isn’t it: As a society we seem to be adopting something a bit like the Bergeron Solution. People who are very productive, at least as measured by their expected salaries, do begin their lives, or at least their careers, with a very big bill. Which ensures that they’ll be reliable creators of value for society, where value is measured, as always, in dollars. God forbid that someone who could be doctor or lawyer should decide to write novels or raise children or spend their days surfing. Of course one doesn’t want to buy into some naive functionalism, not to say conspiracy theory. I’m not saying that the increase in student debt happened in order that people who might otherwise have been tempted into projects of self-valorization would continue to devote their lives to the valorization of capital instead. But, well, I’m not not saying that.
[1] What, you think that “family” you’re always going on about could provide a hundredth the utility Paul Allen gets from his yacht?
[2] That post from Suresh is where I learned about utility monsters.
[3] I couldn’t be bothered to google it, but wasn’t it Newt’s line back in the day, before Michele Bachmann picked it up?
Today Paul Krugman takes up the question of the post below: are recessions all about (excess demand for) money? Krugman’s post responds to an interesting criticism by Henry Kaspar of what Kaspar calls “quasi-monetarists,” a useful term. Let me rephrase Kaspar’s summary of the quasi-monetarist position [1]:
1. Logically, insufficient demand for goods implies excess demand for money, and vice versa.
2. Causally, excess demand for money (i.e. an increase in liquidity preference or a fall in the money supply) is what leads to insufficient demand for goods.
3. The solution is for the monetary authority to increase the supply of money.
Quasi-monetarists say that 2 is true and 3 follows from it. Kaspar says that 2 doesn’t imply 3, and anyway both are false. And Krugman says that 3 is false because of the zero lower bound, and it doesn’t matter if 2 is true, since asking for “the” cause of the crisis is a fool’s errand. But everyone agrees on 1.
Me, though, I have doubts.
Krugman:
An overall shortfall of demand, in which people just don’t want to buy enough goods to maintain full employment, can only happen in a monetary economy; it’s correct to say that what’s happening in such a situation is that people are trying to hoard money instead (which is the moral of the story of the baby-sitting coop). And this problem can ordinarily be solved by simply providing more money.
For those who don’t know it, Krugman’s baby-sitting co-op story is about a group that let members “sell” baby-sitting services to each other in return for tokens, which they could redeem later when they needed baby-sitting themselves. The problem was that too many people wanted to save up tokens, so nobody would use them to buy baby-sitting and the system was falling apart. Then someone realized that the answer was to increase the number of tokens, and the whole system ran smoothly again. It’s a great story, one of the rare cases where Keynesian conclusions can be drawn by analogizing the macroeconomy to everyday experience. But I’m not convinced that the fact that demand constraints can arise from money-hoarding means that they necessarily do.
Let’s think of the baby-sitting co-op again, but now as a barter economy. Every baby-sitting contract involves two households [2] committing to baby-sit for each other (on different nights, obviously). Unlike in Krugman’s case, there’s no scrip; the only way to consume baby-sitting services is to simultaneously agree to produce them at a given date. Can there be a problem of aggregate demand in this barter economy? Krugman says no; there are plenty of passages where Keynes seems to say no too. But I say, sure, why not?
Let’s assume that participants in the co-op decide each period whether or not to submit an offer, consisting of the nights they’d like to go out and the nights they’re available to baby-sit. Whether or not a transaction takes place depends, of course, on whether some other participant has submitted an offer with corresponding nights to baby-sit and go out. Let’s call the expected probability of an offer succeeding p. However, there’s a cost to submitting an offer: because it takes time, because it’s inconvenient, or just because, as Janet Malcolm says, it isn’t pleasant for a grown man or woman to ask for something when there’s a possibility of being refused. Call the cost c. And, the net benefit from fulfilling a contract — that is, the enjoyment of going out baby-free less the annoyance of a night babysitting — we’ll call U.
So someone will make an offer only when U > c/p. (If, say, there is a fifty-fifty chance that an offer will result in a deal, then the benefit from a contract must be at least twice the cost of an offer, since on average you will make two offers for every contract.) But the problem is, p depends on the behavior of other participants. The more people who are making offers, the greater the chance that any given offer will encounter a matching one and a deal will take place.
It’s easy to show that this system can have multiple, demand-determined equilibria, even though it is a pure barter economy. Let’s call p* the true probability of an offer succeeding; p* isn’t known to the participants, who instead form p by some kind of backward-looking expectations, based on the proportion of their own offers that have succeeded or failed recently. Let’s assume for simplicity that p* is simply equal to the proportion of participants who make offers in any given week. Let’s set c = 2. And let’s say that every week, participants are interested in a sitter one night. In half those weeks, they really want it (U = 6) and in the other half, they’d kind of like it (U = 3). If everybody makes offers only when they really need a sitter, then p = 0.5, meaning half the offers succeed, and a contract on a must-go night is worth a net 6 − 2/0.5 = 2 (the value of the night out, less the expected cost of the offers needed to land it). Since the corresponding figure for a night you only kind of want a sitter is 3 − 4 = −1, nobody tries to make offers for those nights, and the equilibrium is stable. On the other hand, if people make offers on both the must-go-out and could-go-out nights, then p = 1, so all the offers have positive expected utility (6 − 2 = 4 and 3 − 2 = 1). That equilibrium is stable too. In the first equilibrium, expected output works out to half a util per participant per week (one contract worth a net 2, roughly every four weeks); in the second it’s 2.5.
Now suppose you are stuck in the low equilibrium. How can you get to the high one? Not by increasing the supply of money — there’s no money in the system. And not by changing prices — the price of a night of baby-sitting, in units of nights of baby-sitting, can’t be anything but one. But suppose half the population decided they really wanted to go out every week. Now p* rises to 3/4, and over time, as people observe more of their offers succeeding, p rises toward 3/4 as well. And once p crosses 2/3, offers on the kind-of-want-to-go-out nights have positive expected utility, so people start making offers for those nights as well, so p* rises further, toward one. At that point, even if the underlying demand functions go back to their original form, with a must-go-out night only every other week, the new high-output equilibrium will be stable.
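For what it’s worth, here is a minimal sketch of those dynamics. The implementation choices are mine (a representative participant, simple partial-adjustment expectations for p, and the shortcut of treating the true success probability as equal to the share of participants offering), but the numbers are the ones above: c = 2, U of 6 or 3, and the 2/3 threshold.

```python
# Two demand-determined equilibria in the barter co-op, and the transition
# between them. Representative-participant shortcut: the "true" success
# probability equals the share of participants making offers that week.

C = 2.0                      # cost of submitting an offer
U_MUST, U_KINDOF = 6.0, 3.0  # net value of a night out, by type of week

def share_offering(p, must_go_share=0.5):
    """Share of participants for whom offering is worthwhile, given believed p."""
    offer_must = U_MUST > C / p      # offer on must-go nights? (needs p > 1/3)
    offer_kindof = U_KINDOF > C / p  # offer on kind-of-want nights? (needs p > 2/3)
    return must_go_share * offer_must + (1 - must_go_share) * offer_kindof

def run(p0, must_go_share=0.5, weeks=100, adjust=0.2):
    """Let the believed p drift toward the realized share of offers each week."""
    p = p0
    for _ in range(weeks):
        p += adjust * (share_offering(p, must_go_share) - p)
    return p

print(round(run(0.5), 3))   # 0.5: the low equilibrium confirms itself
print(round(run(1.0), 3))   # 1.0: so does the high equilibrium
# A temporary boom in which everyone has a must-go night pushes believed p
# past 2/3; the high equilibrium then survives the return to normal demand.
p_after_boom = run(0.5, must_go_share=1.0, weeks=30)
print(round(run(p_after_boom), 3))   # 1.0
```

There is no money anywhere in this, and no price that could change; the coordination failure lives entirely in expectations about p.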
As with any model, of course, the formal properties are less interesting in themselves than for what they illuminate in the real world. Is the Krugman token-shortage model or my pure coordination failure model a better heuristic for understanding recessions in the real world? That’s a hard question!
Hopefully I’ll offer some arguments on that question soon. But I do want to make one logical point first, the same as in the last post but perhaps clearer now. The statement “if there is insufficient demand for currently produced goods, there must be excess demand for money” may look quite similar to the statement “if current output is limited by demand, there must be excess demand for money.” But they’re really quite different; and while the first must be true in some sense, the second, as my hypothetical babysitting co-op shows, is not true at all. As Bruce Wilder suggests in comments, the first version is relevant to acute crises, while the second may be more relevant to prolonged periods of depressed output. But I don’t think Krugman, Kaspar, or the quasi-monetarists make the distinction clearly.
EDIT: Thanks to anonymous commenter for a couple typo corrections, one of them important. Crowd-sourced editing is the best.
Also, you could think of my babysitting example as similar to a Keynesian Cross, which we normally think of as the accounting identity that expenditure equals output, Z = Y, plus the behavioral equation for expenditure, Z = A + cY, except here with A = 0 and c = 1. In that case any level of output is an equilibrium. This is quasi-monetarist Nick Rowe’s idea, but he seems to be OK with my interpretation of it.
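Spelled out (this is just the standard algebra, nothing from Rowe’s post):

$$Z = Y, \qquad Z = A + cY \;\Longrightarrow\; Y = \frac{A}{1-c}.$$

With A = 0 and c = 1 the usual solution is 0/0: the two equations collapse into the single statement Y = Y, so any level of output is consistent with them, exactly as in the babysitting example.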
FURTHER EDIT: Nick Rowe has a very thoughtful response here. And my new favorite econ blogger, the mysterious rsj, has a very good discussion of these same questions here. Hopefully there’ll be some responses here to both, soonish.
[1] Something about typing this sentence reminds me unavoidably of Lucky Jim. This what neglected topic? This strangely what topic? Summary of the quasi-what?
[2] Can’t help being bugged a little by the way Krugman always refers to the participants as “couples,” even if they mostly were. There are all kinds of families!