Posts in Three Lines

I haven’t been blogging much lately. I’ve been doing real work, some of which will be appearing soon. But if I were blogging, here are some of the posts I might write.

*

Lessons from the 1990s. I have a new paper coming out from the Roosevelt Institute, arguing that we’re not as close to potential as people at the Fed and elsewhere seem to believe, and as I’ve been talking with people about it, it’s become clear that your priors depend a lot on how you think of the rapid growth of the 1990s. If you think it was a technological one-off, with no value as precedent — a kind of macroeconomic Bush v. Gore — then you’re likely to see today’s low unemployment as reflecting an economy working at full capacity, despite the low employment-population ratio and very weak productivity growth. But if you think the mid-90s is a possible analogue to the situation facing policymakers today, then it seems relevant that the last sustained episode of 4 percent unemployment led not to inflation but to employers actively recruiting new entrants to the labor force among students, poor people, even prisoners.

Inflation nutters. The Fed, of course, doesn’t agree: Undeterred by the complete disappearance of the statistical relationship between unemployment and inflation, they continue to see low unemployment as a threatening sign of incipient inflation (or something) that must be nipped in the bud. Whatever other effects rate increases may have, the historical evidence suggests that one definite consequence will be rising private and public debt ratios. Economists focus disproportionately on the behavioral effects of interest rate changes and ignore their effects on the existing debt stock because “thinking like an economist” means, among other things, thinking in terms of a world in which decisions are made once and for all, in response to “fundamentals” rather than to conditions inherited from the past.

An army with only a signal corps. What are those other effects, though? Arguments for doubting central bankers’ control over macroeconomic outcomes have only gotten stronger than they were in the 2000s, when they were already strong; at the same time, when the ECB says, “let the government of Spain borrow at 2 percent,” it carries only a little less force than the God of Genesis. I think we exaggerate the power of central banks over the real economy, but underestimate their power over financial markets (with the corollary that economists — heterodox as much as mainstream — see finance and real activity as much more tightly linked than they are).

It’s easy to be happy if you’re heterodox. This spring I was at a conference up at the University of Massachusetts, the headwaters of American heterodox economics, where I did my PhD. Seeing all my old friends reminded me what good prospects we in the heterodox world have – literally everyone I know from grad school has a good job. If you are wondering whether your prospects would be better at a nowhere-ranked heterodox economics program like UMass or a top-ranked program in some other social science, let me assure you, it’s the former by a mile — and you’ll probably have better drinking buddies as well.

The euro is not the gold standard. One of the topics I was talking about at the UMass conference was the euro which, I’ve argued, was intended to create something like a new gold standard, a hard financial constraint on governments. But the fact that that was the intention doesn’t mean it’s the reality — in practice the TARGET2 system means that national central banks don’t face any binding constraint, unlike under the gold standard, where the central bank is “outside” the national monetary membrane. In this sense the euro is structurally more like Keynes’ proposals at Bretton Woods; it’s just not Keynes running it.

Can jobs be guaranteed? In principle I’m very sympathetic to the widespread (at least among my friends on social media) calls for a job guarantee. It makes sense as a direction of travel, implying a commitment to a much lower unemployment rate, expanded public employment, organizing work to fit people’s capabilities rather than vice versa, and increasing the power of workers vis-a-vis employers. But I have a nagging doubt: A job is contingent by its nature – without the threat of unemployment, can there even be employment as we know it?

The wit and wisdom of Haavelmo. I was talking a while back about Merijn Knibbe’s articles on the disconnect between economic theory and the national accounts with my friend Enno, and he mentioned Trygve Haavelmo’s 1944 article on The Probability Approach in Econometrics, which I’ve finally gotten around to reading. One of the big points of this brilliant article is that economic variables, and the models they enter into, are meaningful only via the concrete practices through which the variables are measured. A bigger point is that we study economics in order to “become master of the happenings of real life”: You can contribute to economics in the course of advancing a political project, or making money in financial markets, or administering a government agency (Keynes did all three), but you will not contribute if you pursue economics as an end in itself.

Coney Island. Laura and I took the boy down to Coney Island a couple days ago, a lovely day, his first roller coaster ride, rambling on the beach, a Cyclones game. One of the wonderful things about Coney Island is how little it’s changed from a century ago — I was rereading Delmore Schwartz’s In Dreams Begin Responsibilities the other day, and the title story’s description of a young immigrant couple walking the boardwalk in 1909 could easily be set today — so it’s disconcerting to think that the boy will never take his grandchildren there. It will all be under water.

I Don’t See Any Method At All

I’ve felt for a while that most critiques of economics miss the mark. They start from the premise that economics is a systematic effort to understand the concrete social phenomena we call “the economy,” an effort that has gone wrong in some way.

I don’t think that’s the right way to think about it. I think McCloskey was right to say that economics is just what economists do. Economic theory is essentially a closed formal system; it’s a historical accident that there is some overlap between its technical vocabulary and the language used to describe concrete economic phenomena. Economics the discipline is to the economy the sphere of social reality as chess theory is to medieval history: The statement, say, that “queens are most effective when supported by strong bishops” might be reasonable in both domains, but studying its application in the one case will not help at all in applying it in the other. A few years ago Richard Posner said that he used to think economics meant the study of “rational” behavior in whatever domain, but after the financial crisis he decided it should mean the study of the behavior of the economy using whatever methodologies. (I can’t find the exact quote.) Descriptively, he was right the first time; but the point is, these are two different activities. Or to steal a line from my friend Suresh, the best way to think about what most economists do is as a kind of constrained-maximization poetry. It makes no more sense to ask “is it true” than it would of a haiku.

One consequence of this is, as I say, that radical criticism of the realism or logical consistency of orthodox economics does nothing to get us closer to a positive understanding of the economy. How is a raven unlike a writing desk? An endless number of ways, and enumerating them will leave you no wiser about either corvids or carpentry. Another consequence, the topic of the remainder of this post, is that when we turn to concrete economic questions there isn’t really a “mainstream” at all. Left critics want to take academic orthodoxy, a right-wing political vision, and the economic policy preferred by the established authorities, and roll them into a coherent package. But I don’t think you can. I think there is a mix of common-sense opinions, political prejudices, conventional business practice, and pragmatic rules of thumb, supported in an ad hoc, opportunistic way by bits and pieces of economic theory. It’s not possible to deduce the whole tottering pile from a few foundational texts.

More concretely: An economics education trains you to think in terms of real exchange — in terms of agents who (somehow or other) have come into possession of a bundle of goods, which they trade with each other. You can only use this framework to make statements about real economic phenomena if they are understood in terms of the supply side — if economic outcomes are understood in terms of different endowments of goods, or different real uses for them. Unless you’re in a position to self-consciously take another perspective, fitting your understanding of economic phenomena into a broader framework is going to mean expressing it as this kind of story, about the limited supply of real resources available, and the unlimited demands on them to meet real human needs. But there may be no sensible story of that kind to tell.

More concretely: What are the major macroeconomic developments of the past ten to twenty years, compared, say, with the previous fifty? For the US and most other developed countries, the list might look like:

– low and falling inflation

– low and falling interest rates

– slower growth of output

– slower growth of employment

– low business investment

– slower growth of labor productivity

– a declining share of wages in income

If you pick up an economics textbook and try to apply it to the world around you, these are some of the main phenomena you’d want to explain. What does the orthodox, supply-side theory tell us?

The textbook says that lower inflation is normally the result of a positive supply shock — an increase in real resources or an improvement in technology. OK. But then what do we make of the slowdown in output and productivity?

The textbook says that, over the long run, interest rates must reflect the marginal product of capital — the central bank (and monetary factors in general) can only change interest rates in the short run, not over a decade or more. In the Walrasian world, the interest rate and the return on investment are the same thing. So a sustained decline in interest rates must mean a decline in the marginal product of capital.

OK. So in combination with the slowdown in output growth, that suggests a negative technological shock. But that should mean higher inflation. Didn’t we just say that lower inflation implies a positive technological shock?

Employment growth in this framework is normally determined by demographics, or perhaps by structural changes in labor markets that change the effective labor supply. Slower employment growth means a falling labor supply — but that should, again, be inflationary. And it should be associated with higher wages: If labor is becoming relatively scarce, its price should rise. Yes, the textbook combines a bargaining model of wage determination for the short run with a marginal product story for the long run, without ever explaining how they hook up, but in this case it doesn’t matter, the two stories agree. A fall in the labor supply will result in a rise in the marginal product of labor as it’s withdrawn from the least productive activities — that’s what “marginal” means! So either way the demographic story of falling employment is inconsistent with low inflation, with a falling wage share, and with the slowdown in productivity growth.

Slower growth of labor productivity could be explained by an increase in labor supply — but then why has employment decelerated so sharply? More often it’s taken as technologically determined. Slower productivity growth then implies a slowdown in innovation — which at least is consistent with low interest rates and low investment. But this “negative technology shock” should, again, be inflationary. And it should be associated with a fall in the return to capital, not a rise.

On the other hand, the decline in the labor share is supposed to reflect a change in productive technology that encourages substitution of capital for labor, robots and all that. But how is this reconciled with the fall in interest rates, in investment and in labor productivity? To replace workers with robots, someone has to make the robots, and someone has to buy them. And by definition this raises the productivity of the remaining workers.

Which subset of these mutually incompatible stories does the “mainstream” actually believe? I don’t know that they consistently believe any of them. My impression is that people adopt one or another based on the question at hand, while avoiding any systematic analysis through violent abuse of the ceteris paribus condition.

To paraphrase Leijonhufvud, on Mondays and Wednesdays wages are low because technological progress has slowed down, holding down labor productivity. On Tuesdays and Thursdays wages are low because technological progress has sped up, substituting capital for labor. Students may come away a bit confused but the main takeaway is clear: Low wages are the result of inexorable, exogenous technological change, and not of any kind of political choice. And certainly not of weak aggregate demand.

Larry Summers, in this actually quite good Washington Post piece, at least is no longer talking about robots. But he can’t completely resist the supply-side lure: “The situation is worse in other countries with more structural issues and slower labor-force growth.” Wait, why would they be worse? As he himself says, “our problem today is insufficient inflation,” so what’s needed “is to convince people that prices will rise at target rates in the future,” which will “require … very tight markets.” If that’s true, then restrictions on labor supply are a good thing — they make it easier to generate wage and price increases. But that is still an unthought.

I admit, Summers does go on to say:

In the presence of chronic excess supply, structural reform has the risk of spurring disinflation rather than contributing to a necessary increase in inflation.  There is, in fact, a case for strengthening entitlement benefits so as to promote current demand. The key point is that the traditional OECD-type recommendations cannot be right as both a response to inflationary pressures and deflationary pressures. They were more right historically than they are today.

That’s progress, for sure — “less right” is a step toward “completely wrong”. The next step will be to say what his argument logically requires. If the problem is as he describes it then structural “problems” are part of the solution.

Varieties of the Phillips Curve

In this post, I first talk about a variety of ways that we can formalize the relationship between wages, inflation and productivity. Then I talk briefly about why these links matter, and finally how, in my view, we should think about the existence of a variety of different possible relationships between these variables.

*

My Jacobin piece on the Fed was, on a certain abstract level, about varieties of the Phillips curve. The Phillips curve is any of a family of graphs with either unemployment or “real” GDP on the X axis, and the level or change of nominal wages, prices, or inflation on the Y axis. In any of the various permutations (some of which naturally are more common than others) this purports to show a regular relationship between aggregate demand and prices.

This apparatus is central to the standard textbook account of monetary policy transmission. In this account, a change in the amount of base money supplied by the central bank leads to a change in market interest rates. (Newer textbooks normally skip this part and assume the central bank sets “the” interest rate by some unspecified means.) The change in interest rates leads to a change in business and/or housing investment, which results via a multiplier in a change in aggregate output. [1] The change in output then leads to a change in unemployment, as described by Okun’s law. [2] This in turn leads to a change in wages, which is passed on to prices. The Phillips curve describes the last one or two or three steps in this chain.

Here I want to focus on the wage-price link. What are the kinds of stories we can tell about the relationship between nominal wages and inflation?

*

The starting point is this identity:

(1) w = y + p + s

That is, the percentage change in nominal wages (w) is equal to the sum of the percentage changes in real output per worker (y; also called labor productivity), in the price level (p, or inflation) and in the labor share of output (s). [3] This is the essential context for any Phillips curve story. This should be, but isn’t, one of the basic identities in any intermediate macroeconomics textbook.
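As a sanity check, identity (1) can be verified numerically from the underlying levels. The figures below are made up purely for illustration; the point is that the identity holds exactly when changes are measured as log differences:

```python
# Numeric check of identity (1), w = y + p + s, using illustrative
# (hypothetical) levels for two successive years.
import math

# Year 0 and year 1 levels: nominal GDP Z, employment L, price level P,
# average nominal wage W. All values are invented for the example.
Z0, L0, P0, W0 = 20_000.0, 150.0, 1.00, 75.0
Z1, L1, P1, W1 = 21_000.0, 152.0, 1.02, 79.0

def log_change(x1, x0):
    return math.log(x1 / x0)

w = log_change(W1, W0)                           # nominal wage growth
y = log_change((Z1 / P1) / L1, (Z0 / P0) / L0)   # labor productivity growth
p = log_change(P1, P0)                           # inflation
s = log_change(W1 * L1 / Z1, W0 * L0 / Z0)       # change in labor share

# The identity holds exactly, to rounding error, in log changes.
assert abs(w - (y + p + s)) < 1e-12
```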

Now, let’s call the increase in “real” or inflation-adjusted wages r. [4] That gives us a second, more familiar, identity:

(2) r = w – p

The increase in real wages is equal to the increase in nominal wages less the inflation rate.

As always with these kinds of accounting identities, the question is “what adjusts”? What economic processes ensure that individual choices add up in a way consistent with the identity? [5]

Here we have five variables and two equations, so three more equations are needed for the system to be determined. This means there is a large number of possible closures. I can think of five that come up, explicitly or implicitly, in actual debates.

Closure 1:

First is the orthodox closure familiar from any undergraduate macroeconomics textbook.

(3a) w = pE + f(U); f’ < 0

(4a) y = y*

(5a) p = w – y

Equation 3a says that labor-market contracts between workers and employers result in nominal wage increases that reflect expected inflation (pE) plus an additional increase, or decrease, that reflects the relative bargaining power of the two sides. [6] The curve described by f is the Phillips curve, as originally formulated — a relationship between the unemployment rate and the rate of change of nominal wages. Equation 4a says that labor productivity growth is given exogenously, based on technological change. 5a says that since prices are set as a fixed markup over costs (and since there is only labor and capital in this framework) they increase at the same rate as unit labor costs — the difference between the growth of nominal wages and labor productivity.

It follows from the above that

(6a) w – p = y

and

(7a) s = 0

Equation 6a says that the growth rate of real wages is just equal to the growth of average labor productivity. This implies 7a — that the labor share remains constant. Again, these are not additional assumptions, they are logical implications from closing the model with 3a-5a.

This closure has a couple other implications. There is a unique level of unemployment U* such that w = y + p; only at this level of unemployment will actual inflation equal expected inflation. Assuming inflation expectations are based on inflation rates realized in the past, any departure from this level of unemployment will cause inflation to rise or fall without limit. This is the familiar non-accelerating inflation rate of unemployment, or NAIRU. [7] Also, an improvement in workers’ bargaining position, reflected in an upward shift of f(U), will do nothing to raise real wages, but will simply lead to higher inflation. Even more: If an inflation-targeting central bank is able to control the level of output, stronger bargaining power for workers will leave them worse off, since unemployment will simply rise enough to keep nominal wage growth in line with y* and the central bank’s inflation target.

Finally, notice that while we have introduced three new equations, we have also introduced a new variable, pE, so the model is still underdetermined. This is intentional. The orthodox view is that the same set of “real” values is consistent with any constant rate of inflation, whatever that rate happens to be. It follows that a departure of the unemployment rate from U* will cause a permanent change in the inflation rate. It is sometimes suggested, not quite logically, that this is an argument in favor of making price stability the overriding goal of policy. [8]
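The accelerationist dynamics can be seen in a minimal simulation. Everything here is an illustrative assumption, not part of the model as stated: the linear form of f, the parameter values, and adaptive expectations that simply carry forward last period’s inflation:

```python
# Sketch of the accelerationist implication of closure 1, with adaptive
# expectations (pE = last period's inflation). The linear Phillips curve
# f and all parameter values are illustrative assumptions.
U_STAR = 0.05       # NAIRU
Y_STAR = 0.015      # exogenous productivity growth, y*
K = 0.5             # slope of the wage Phillips curve

def f(U):
    # wage pressure over expected inflation; equals Y_STAR exactly at U*
    return Y_STAR + K * (U_STAR - U)

def simulate(U, periods=10, p_init=0.02):
    pE = p_init
    path = []
    for _ in range(periods):
        w = pE + f(U)       # (3a) nominal wage growth
        p = w - Y_STAR      # (5a) markup pricing: p = w - y
        path.append(p)
        pE = p              # adaptive expectations
    return path

at_nairu = simulate(0.05)
below_nairu = simulate(0.04)

# At U*, inflation is constant; below U*, it ratchets up every period.
assert all(abs(p - 0.02) < 1e-12 for p in at_nairu)
assert all(b > a for a, b in zip(below_nairu, below_nairu[1:]))
```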

If you pick up an undergraduate textbook by Carlin and Soskice, Krugman and Wells, or Blanchard, this is the basic structure you find. But there are other possibilities.

Closure 2: Bargaining over the wage share

A second possibility is what Anwar Shaikh calls the “classical” closure. Here we imagine the Phillips curve in terms of the change in the wage share, rather than the change in nominal wages.

(3b) s = f(U); f’ < 0

(4b) y = y*

(5b) p = p*

Equation 3b says that the wage share rises when unemployment is low, and falls when unemployment is high. In this closure, both inflation and labor productivity growth are fixed exogenously. So again, we imagine that low unemployment improves the bargaining position of workers relative to employers, and leads to more rapid wage growth. But now there is no assumption that prices will follow suit, so higher nominal wages instead translate into higher real wages and a higher wage share. It follows that:

(6b) w = f(U) + p + y

Or as Shaikh puts it, both productivity growth and inflation act as shift parameters for the nominal-wage Phillips curve. When we look at it this way, it’s no longer clear that there was any breakdown in the relationship during the 1970s.

If we like, we can add an additional equation making the change in unemployment a function of the wage share, writing the change in unemployment as u.

(7b) u = g(s); g’ > 0 or g’ < 0

If unemployment is a positive function of the wage share (because a lower profit share leads to lower investment and thus lower demand), then we have the classic Marxist account of the business cycle, formalized by Goodwin. But of course, we might imagine that demand is “wage-led” rather than “profit-led” and make U a negative function of the wage share — a higher wage share leads to higher consumption, higher demand, higher output and lower unemployment. Since lower unemployment will, according to 3b, lead to a still higher wage share, closing the model this way leads to explosive dynamics — or more reasonably, if we assume that g’ < 0 (or impose other constraints), to two equilibria, one with a high wage share and low unemployment, the other with high unemployment and a low wage share. This is what Marglin and Bhaduri call a “stagnationist” regime.
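The predator-prey structure of the Goodwin version can be sketched in a few lines. The functional forms, parameter values, and the crude Euler integration are all illustrative choices, not a calibration:

```python
# Rough sketch of the Goodwin cycle implied by 3b plus 7b with g' > 0:
# the wage share rises when employment is high, and employment falls
# when the wage share (hence a low profit share) is high.
# Parameters and functional forms are illustrative, not calibrated.
def goodwin(s0=0.55, e0=0.9, dt=0.01, steps=5000):
    """Euler integration; s = wage share, e = employment rate."""
    s, e = s0, e0
    path = []
    for _ in range(steps):
        ds = s * (2.0 * e - 1.8) * dt    # s rises when e > 0.9
        de = e * (0.75 - 1.25 * s) * dt  # e rises when s < 0.6
        s, e = s + ds, e + de
        path.append((s, e))
    return path

path = goodwin()
shares = [s for s, _ in path]

# Starting off-equilibrium, the wage share cycles rather than converging:
# count how many times it crosses its equilibrium value of 0.6.
crossings = sum(1 for a, b in zip(shares, shares[1:])
                if (a - 0.6) * (b - 0.6) < 0)
assert crossings >= 2
```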

Let’s move on.

Closure 3: Real wage fixed.

I’ll call this the “Classical II” closure, since it seems to me that the assumption of a fixed “subsistence” wage is used by Ricardo and Malthus and, at times at least, by Marx.

(3c) w – p = 0

(4c) y = y*

(5c) p = p*

Equation 3c says that real wages are constant: the change in nominal wages is just equal to the change in the price level. [9] Here again the change in prices and in labor productivity are given from outside. It follows that

(6c) s = -y

Since the real wage is fixed, increases in labor productivity reduce the wage share one for one. Similarly, falls in labor productivity will raise the wage share.

This latter, incidentally, is a feature of the simple Ricardian story about the declining rate of profit. As lower quality land is brought into use, the average productivity of labor falls, but the subsistence wage is unchanged. So the share of output going to labor, as well as to landlords’ rent, rises as the profit share goes to zero.
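A toy numeric version of this Ricardian story, with made-up numbers, shows the mechanical link between falling productivity and a rising wage share:

```python
# Toy version of the Ricardian story in closure 3: the real wage per
# worker is fixed at "subsistence," so as average labor productivity
# falls (worse land comes into use), the wage share rises mechanically.
# All numbers are illustrative.
SUBSISTENCE_WAGE = 10.0  # fixed real wage per worker

# average real output per worker as cultivation extends to worse land
productivity = [20.0, 18.0, 16.0, 14.0, 12.0]
wage_shares = [SUBSISTENCE_WAGE / y for y in productivity]

# With the real wage fixed, the wage share rises as productivity falls,
# squeezing the share left over for profit and rent.
assert wage_shares == sorted(wage_shares)
assert abs(wage_shares[0] - 0.5) < 1e-12
```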

Closure 4:

(3d) w = f(U); f’ < 0

(4d) y = y*

(5d) p = p*

This is the same as the second one except that now it is the nominal wage, rather than the wage share, that is set by the bargaining process. We could think of this as the naive model: nominal wages, inflation and productivity are all just whatever they are, without any regular relationships between them. (We could even go one step more naive and just set wages exogenously too.) Real wages then are determined as a residual by nominal wage growth and inflation, and the wage share is determined as a residual by real wage growth and productivity growth. Now, it’s clear that this can’t apply when we are talking about very large changes in prices — real wages can only be eroded by inflation so far. But it’s equally clear that, for sufficiently small short-run changes, the naive closure may be the best we can do. The fact that real wages are not entirely a passive residual does not mean they are entirely fixed; presumably there is some domain over which nominal wages are relatively fixed and their “real” purchasing power depends on what happens to the price level.
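The arithmetic of the naive closure is just the identities run backwards. With illustrative numbers:

```python
# In the "naive" closure 4, real wages and the wage share are pure
# residuals: given exogenous nominal wage growth, inflation, and
# productivity growth, the distributional variables just fall out.
# Values are illustrative.
w, p, y = 0.03, 0.025, 0.01   # nominal wages, inflation, productivity

r = w - p        # identity (2): real wage growth as a residual
s = r - y        # identity (1) rearranged: change in the labor share

assert abs(r - 0.005) < 1e-12
assert abs(s + 0.005) < 1e-12  # real wages lag productivity: share falls
```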

Closure 5:

One more.

(3e) w = f(U) + a pE; f’ < 0; 0 < a < 1

(4e) y = b (w – p); 0 < b < 1

(5e) p = c (w – y); 0 < c < 1

This is more generic. It allows for an increase in nominal wages to be distributed in some proportion between higher inflation, an increase in the wage share, and faster productivity growth. The last possibility is some version of Verdoorn’s law. The idea that scarce labor, or equivalently rising wages, will lead to faster growth in labor productivity is perfectly admissible in an orthodox framework. But somehow it doesn’t seem to make it into policy discussions.

In other words, lower unemployment (or a stronger bargaining position for workers more generally) will lead to an increase in the nominal wage. This will in turn increase the wage share, to the extent that it does not induce higher inflation and/or faster productivity growth:

(6e) s = (1 – b – c) w

This closure includes the first two as special cases: closure 1 if we set a = 1, b = 0, and c = 1, closure 2 if we set a = 0, b = 0, and c = 0. It’s worth framing the more general case to think clearly about the intermediate possibilities. In Shaikh’s version of the classical view, tighter labor markets are passed through entirely to a higher labor share. In the conventional view, they are passed through entirely to higher inflation. There is no reason in principle why it can’t be some to each, and some to higher productivity as well. But somehow this general case doesn’t seem to get discussed.
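One way to see the intermediate cases is to solve equations 4e and 5e simultaneously for a given rate of nominal wage growth. The parameter values below are illustrative:

```python
# Sketch of closure 5: given nominal wage growth w, equations 4e and 5e
# jointly determine how it splits between inflation, productivity growth,
# and the wage share. Parameter values are illustrative.
def split_wage_growth(w, b, c):
    """Solve y = b(w - p) and p = c(w - y) simultaneously; return (p, y, s)."""
    p = c * (1 - b) / (1 - b * c) * w
    y = b * (1 - c) / (1 - b * c) * w
    s = w - p - y  # identity (1)
    return p, y, s

w = 0.04                      # 4 percent nominal wage growth
p, y, s = split_wage_growth(w, b=0.1, c=0.2)

# Part goes to prices, part to productivity, the rest to the wage share;
# for small b*c the share term is close to (1 - b - c)*w, as in (6e).
assert abs((p + y + s) - w) < 1e-12
assert p > 0 and y > 0 and s > 0
```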

Here is a typical example of the excluded middle in the conventional wisdom: “economic theory suggests that increases in labor costs in excess of productivity gains should put upward pressure on prices; hence, many models assume that prices are determined as a markup over unit labor costs.” Notice the leap from the claim that higher wages put some pressure on prices, to the claim that wage increases are fully passed through to higher prices. Or in terms of this last framework: theory suggests that b should be greater than zero, so let’s assume b is equal to one. One important consequence is to implicitly exclude the possibility of a change in the wage share.

*

So what do we get from this?

First, the identity itself. On one level it is obvious. But too many policy discussions — and even scholarship — talk about various forms of the Phillips curve without taking account of the logical relationship between wages, inflation, productivity and factor shares. This is not unique to this case, of course. It seems to me that scrupulous attention to accounting relationships, and to logical consistency in general, is one of the few unambiguous contributions economists make to the larger conversation with historians and other social scientists. [10]

For example: I had some back and forth with Phil Pilkington in comments and on Twitter about the Jacobin piece. He made some valid points. But at one point he wrote: “Wages>inflation + productivity = trouble!” Now, wages > inflation + productivity growth just means an increasing labor share. It’s two ways of saying the same thing. But I’m pretty sure that Phil did not intend to write that an increase in the labor share always means trouble. And if he did seriously mean that, I doubt one reader in a hundred would understand it from what he wrote.

More consequentially, austerity and liberalization are often justified by the need to prevent “real unit labor costs” from rising. What’s not obvious is that “real unit labor costs” is simply another word for the labor share: by definition, the change in real unit labor costs is just the change in nominal wages less the sum of inflation and productivity growth. Felipe and Kumar make exactly this point in their critique of the use of unit labor costs as a measure of competitiveness in Europe: “unit labor costs calculated with aggregate data are no more than the economy’s labor share in total output multiplied by the price level.” As they note, one could just as well compute “unit capital costs,” whose movements would be just the opposite. But no one ever does; instead they pretend that a measure of distribution is a measure of technical efficiency.

Second, the various closures. To me the question of which behavioral relations we combine the identity with — that is, which closure we use — is not about which one is true, or best in any absolute sense. It’s about the various domains in which each applies. Probably there are periods, places, timeframes or policy contexts in which each of the five closures gives the best description of the relevant behavioral links. Economists, in my experience, spend more time working out the internal properties of formal systems than exploring rigorously where those systems apply. But a model is only useful insofar as you know where it applies, and where it doesn’t. Or as Keynes put it in a quote I’m fond of, the purpose of economics is “to provide ourselves with an organised and orderly method of thinking out particular problems” (my emphasis); it is “a way of thinking … in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Or in the words of Trygve Haavelmo, as quoted by Leijonhufvud:

There is no reason why the form of a realistic model (the form of its equations) should be the same under all values of its variables. We must face the fact that the form of the model may have to be regarded as a function of the values of the variables involved. This will usually be the case if the values of some of the variables affect the basic conditions of choice under which the behavior equations in the model are derived.

I might even go a step further. It’s not just that to use a model we need to think carefully about the domain over which it applies. It may even be that the boundaries of its domain are the most interesting thing about it. As economists, we’re used to thinking of models “from the inside” — taking the formal relationships as given and then asking what the world looks like when those relationships hold. But we should also think about them “from the outside,” because the boundaries within which those relationships hold are also part of the reality we want to understand. [11] You might think about it like laying a flat map over some curved surface. Within a given region, the curvature won’t matter; the flat map will work fine. But at some point, the divergence between trajectories in our hypothetical plane and on the actual surface will get too large to ignore. So we will want to have a variety of maps available, each of which minimizes distortions in the particular area we are traveling through — that’s Keynes’ and Haavelmo’s point. But even more than that, the points at which the map becomes unusable are precisely how we learn about the curvature of the underlying territory.

Some good examples of this way of thinking are found in the work of Lance Taylor, which often situates a variety of model closures in various particular historical contexts. I think this kind of thinking was also very common in an older generation of development economists. A central theme of Arthur Lewis’ work, for example, could be thought of in terms of poor-country labor markets that look like what I’ve called Closure 3 and rich-country labor markets that look like Closure 5. And of course, what’s most interesting is not the behavior of these two systems in isolation, but the way the boundary between them gets established and maintained.

To put it another way: Dialectics, which is to say science, is a process of moving between the concrete and the abstract — from specific cases to general rules, and from general rules to specific cases. As economists, we are used to grounding the concrete in the abstract — to treating things that happen at particular times and places as instances of a universal law. The statement of the law is the goal, the stopping point. But we can equally well ground the abstract in the concrete — treat a general rule as a phenomenon of a particular time and place.


[1] In graduate school you then learn to forget about the existence of businesses and investment, and instead explain the effect of interest rates on current spending by a change in the optimal intertemporal path of consumption by a representative household, as described by an Euler equation. This device keeps academic macroeconomics safely quarantined from contact with discussion of real economies.

[2] In the US, Okun’s law looks something like ΔU = 0.5(2.5 − g), where ΔU is the change in the unemployment rate and g is inflation-adjusted growth in GDP. These parameters vary across countries but seem to be quite stable over time. In my opinion this is one of the more interesting empirical regularities in macroeconomics. I’ve blogged about it a bit in the past and perhaps will write more in the future.
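As a quick numerical sketch of the rule of thumb (using the illustrative US parameters from the footnote, not estimated values):

```python
# Okun's law as stated above: the change in the unemployment rate
# depends on how far real GDP growth falls short of trend growth.
# The 0.5 coefficient and 2.5 percent trend are the illustrative
# US values given in the text.

def okun_delta_u(g, coefficient=0.5, trend_growth=2.5):
    """Predicted change in the unemployment rate (percentage points),
    given real GDP growth g in percent."""
    return coefficient * (trend_growth - g)

print(okun_delta_u(2.5))   # growth at trend: unemployment unchanged, 0.0
print(okun_delta_u(4.5))   # a boom year: unemployment falls, -1.0
print(okun_delta_u(0.0))   # a stagnant year: unemployment rises, 1.25
```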

[3] To see why this must be true, write L for total employment, Z for the level of nominal GDP, Y for per-capita GDP, W for the average wage, and P for the price level. The labor share S is by definition equal to total wages divided by GDP:

S = WL / Z

Real output per worker is given by

Y = (Z/P) / L

Now substitute Z = PYL from the second equation into the first, and we get W = P Y S. This is in levels, not changes. But recall that small percentage changes can be approximated by log differences. And if we take the log of both sides, writing the log of each variable in lowercase, we get w = y + p + s. For the kinds of changes we observe in these variables, the approximation will be very close.
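A quick numerical check of the identity and the log approximation (all the figures here are invented for illustration):

```python
import math

# Check W = P * Y * S in levels, and the log-difference approximation
# w = y + p + s for growth rates. All numbers are made up.

P, Y, S = 1.0, 50.0, 0.60     # price level, output per worker, labor share
W = P * Y * S                 # implied average wage

# Suppose prices rise 2%, productivity rises 1%, labor share falls 0.5%:
P1, Y1, S1 = 1.02 * P, 1.01 * Y, 0.995 * S
W1 = P1 * Y1 * S1

exact_growth = W1 / W - 1     # exact percentage change in the wage
log_approx = math.log(P1 / P) + math.log(Y1 / Y) + math.log(S1 / S)
print(exact_growth, log_approx)   # close, as the footnote says
```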

[4] I won’t keep putting “real” in quotes. But it’s important not to uncritically accept the dominant view that nominal quantities like wages are simply reflections of underlying non-monetary magnitudes. In fact the use of “real” in this way is deeply ideological.

[5] A discovery that seems to get made over and over again is that, since an identity is true by definition, nothing needs to adjust to maintain its equality. But it certainly does not follow, as people sometimes claim, that this means you cannot use accounting identities to reason about macroeconomic outcomes. The point is that we are always using the identities along with some other — implicit or explicit — claims about the choices made by economic units.

[6] Note that it’s not necessary to use a labor supply curve here, or to make any assumption about the relationship between wages and marginal product.

[7] Often confused with Milton Friedman’s natural rate of unemployment. But in fact the concepts are completely different. In Friedman’s version, causality runs the other way, from the inflation rate to the unemployment rate. When realized inflation is different from expected inflation, in Friedman’s story, workers are deceived about the real wage they are being offered and so supply the “wrong” amount of labor.

[8] Why a permanently rising price level is inconsequential but a permanently rising inflation rate is catastrophic is never explained. Why are real outcomes invariant to the first derivative of the price level, but not to the second derivative? We’re never told — it’s an article of faith that money is neutral and super-neutral but not super-super-neutral. And even if one accepts this, it’s not clear why we should pick a target of 2%, or any specific number. It would seem more natural to think inflation should follow a random walk, with the central bank holding it at its current level, whatever that is.

[9] We could instead use w – p = r*, with an exogenously given rate of increase in real wages. The logic would be the same. But it seems simpler and more true to the classics to use the form in 3c. And there do seem to be domains over which constant real wages are a reasonable assumption.

[10] I was just starting grad school when I read Robert Brenner’s long article on the global economy, and one of the things that jumped out at me was that he discussed the markup and the wage share as if they were two independent variables, when of course they are just two ways of describing the same thing. Using s still as the wage share, and m as the average markup of prices over wages, s = 1 / (1 + m). This is true by definition (unless there are shares other than wages or profits, but no such shares figure in Brenner’s analysis). The markup may reflect the degree of monopoly power in product markets while the labor share may reflect bargaining power within the firm, but these are two different explanations of the same concrete phenomenon. I like to think that this is a mistake an economist wouldn’t make.
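The identity is easy to verify numerically (the markup value here is arbitrary):

```python
# The wage share s and the average markup m of prices over wages are
# two descriptions of the same thing: s = 1 / (1 + m). Fixing one pins
# down the other exactly (assuming no shares besides wages and profits).

def wage_share_from_markup(m):
    return 1.0 / (1.0 + m)

def markup_from_wage_share(s):
    return 1.0 / s - 1.0

m = 0.25                              # a 25 percent average markup...
s = wage_share_from_markup(m)         # ...is a wage share of 0.8
print(s)
print(markup_from_wage_share(s))      # and we recover the markup
```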

[11] The Shaikh piece mentioned above is very good. I should add, though, the last time I spoke to Anwar, he criticized me for “talking so much about the things that have changed, rather than the things that have not” — that is, for focusing so much on capitalism’s concrete history rather than its abstract logic. This is certainly a difference between Shaikh’s brand of Marxism and whatever it is I do. But I’d like to think that both approaches are called for.

 

EDIT: As several people pointed out, some of the equations were referred to by the wrong numbers. Also, Equation 5a and 5e had inflation-expectation terms in them that didn’t belong. Fixed.

EDIT 2: I referred to an older generation of development economics, but I think this awareness that the territory requires various different maps is still more common in development than in most other fields. I haven’t read Dani Rodrik’s new book, but based on reviews it sounds like it puts forward a pretty similar view of economics methodology.

A Quick Point on Models

According to Keynes the purpose of economics is “to provide ourselves with an organised and orderly method of thinking out particular problems”; it is “a way of thinking … in terms of models joined to the art of choosing models which are relevant to the contemporary world.” (Quoted here.)

I want to amplify on that just a bit. The test of a good model is not whether it corresponds to the true underlying structure of the world, but whether it usefully captures some of the regularities in the concrete phenomena we observe. There are lots of different regularities, more or less bounded in time, space and other dimensions, so we are going to need lots of different models, depending on the questions we are asking and the setting we are asking them in. Thus the need for the “art of choosing”.

I don’t think this point is controversial in the abstract. But people often lose sight of it. Obvious case: Piketty and “capital”. A lot of the debate between Piketty and his critics on the left has focused on whether there really is, in some sense, a physical quantity of capital, or not. I don’t think we need to have this argument.

We observe “capital” as a set of money claims, whose aggregate value varies in relation to other observable monetary aggregates (like income) over time and across space. There is a component of that variation that corresponds to the behavior of a physical stock — increasing based on identifiable inflows (investment) and decreasing based on identifiable outflows (depreciation). Insofar as we are interested in that component of the observed variation, we can describe it using models of capital as a physical stock. The remaining components (the “residual” from the point of view of a model of physical K) will require a different set of models or stories. So the question is not, is there such a thing as a physical capital stock? It’s not even, is it in general useful to think about capital as a physical stock? The question is, how much of the particular variation we are interested in is accounted for by the component corresponding to the evolution of a physical stock? And the answer will depend on which variation we are interested in.
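To make the decomposition concrete, here is a minimal sketch in Python. The perpetual-inventory equation is the standard one; all the numbers are invented for illustration, not taken from Piketty’s data.

```python
# Split observed "capital" (the market value of money claims) into the
# component that behaves like a physical stock -- accumulating through
# investment, shrinking through depreciation -- and a residual
# (valuation changes and everything else the physical model misses).

def perpetual_inventory(k0, investment, depreciation_rate):
    """Physical stock: K_{t+1} = K_t + I_t - delta * K_t."""
    k = [k0]
    for i_t in investment:
        k.append(k[-1] + i_t - depreciation_rate * k[-1])
    return k

observed_wealth = [100.0, 112.0, 118.0, 131.0]   # observed capital claims
investment = [10.0, 9.0, 11.0]

physical_k = perpetual_inventory(100.0, investment, depreciation_rate=0.05)
residual = [w - k for w, k in zip(observed_wealth, physical_k)]

print(physical_k)   # the physical-stock component
print(residual)     # the variation a physical-stock model leaves unexplained
```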

For example, Piketty could say “It’s true that my model, which treats K as a physical stock, does not explain much of the historical variation in capital-output ratios at decadal frequencies, like the fall and rise over the course of the 20th century. But I believe it does explain very long-frequency variation, and in particular captures important long-run possibilities for the future.” (I think he has in fact said something like this, though I can’t find the quote at the moment.) You don’t have to agree with him — you could dispute that his model is a good fit for even the longest-frequency historical variation, or you could argue that the shorter frequency variation is more interesting (and is what his book often seems to be about). But it would be pointless to criticize him on the grounds that there isn’t “really” such a thing as a physical capital stock, or that there is no consistent way in principle to measure it. That, to me, would show a basic misunderstanding of what models are.

An example of good scientific practice along these lines is biologists’ habit of giving genes names for what happens when gross mutations are induced in them experimentally. Names like eyeless or shaggy or buttonhead: the fly lacks eyes, grows extra hair, or has a head without segments if the gene is removed. It might seem weird to describe genes in terms of what goes wrong when they are removed, as opposed to what they do normally, but I think this practice shows good judgement about what we do and don’t know. In particular, it avoids any claim about what the gene is “for.” There are many, many relationships between a given locus in the genome and the phenotype, and no sense in which any of them is more or less important in an absolute sense. Calling it the “eye gene” would obscure that, make it sound like this is the relationship that exists out in the world, when for all we know the variation in eye development in wild populations is driven by variation at entirely different loci. Calling it eyeless makes it clear that it’s referring to what you observe in a particular experimental context.

EDIT: I hate discussions of methodology. I should not have written this post. (I only did because I liked the gene-naming analogy.) That said, if you, unlike me, enjoy this sort of thing, Tom Hickey wrote a long and thoughtful response to it. He mentions, among others, Tony Lawson, who I would certainly want to read more of if I were going to write about this stuff.

New Keynesians Don’t Believe Their Models

Here’s the thing about saltwater, New Keynesian economists: They don’t believe their own theory.

Via John Cochrane, here is a great example. In the NBER Macroeconomics Annual a couple years ago, Gauti Eggertsson laid out the canonical New Keynesian case for the effectiveness of fiscal policy when interest rates are at the Zero Lower Bound. In the model Eggertsson describes there — the model that is supposed to provide the intellectual underpinnings for fiscal stimulus — the multiplier on government spending at the ZLB is indeed much larger than in normal conditions, 2.3 rather than 0.48. But the same model says that at the ZLB, cuts in taxes on labor are contractionary, with a multiplier of -1. Every dollar of “stimulus” from the Making Work Pay tax credit, in other words, actually reduced GDP by a dollar. Or as Eggertsson puts it, “Cutting taxes on labor … is contractionary under the circumstances the United States is experiencing today.”

Now, obviously there are reasons why one might believe this. For instance, maybe lower payroll taxes just allow employers to reduce wages by the same amount, and then in response to their lower costs they reduce prices, which is deflationary. There’s nothing wrong with that story in principle. No, the point isn’t that the New Keynesian claim that payroll tax cuts reduce demand is wrong — though I think that it is. The point is that nobody actually believes it.

In the debates over the stimulus bill back at the beginning of 2009, everyone agreed that payroll tax cuts were stimulus just as much as spending increases. The CBO certainly did. There were plenty of “New Keynesian” economists involved in that debate, and while they may have said that tax cuts would boost demand less than direct government spending, I’m pretty sure that not one of them said that payroll tax cuts would actually reduce demand. And when the payroll tax cuts were allowed to expire at the end of 2012, did anyone make the case that this was actually expansionary? Of course not. The conventional wisdom was that the payroll tax cuts had a large, positive effect on demand, with a multiplier around positive 1. Regardless of good New Keynesian theory.

As a matter of fact, even Eggertsson doesn’t seem to believe that raising taxes on labor will boost demand, whether or not it’s what the math says. The “natural result” of his model, he admits, is that any increase in government spending should be financed by higher taxes. But:

There may, however, be important reasons outside the model that suggest that an increase in labor and capital taxes may be unwise and/or impractical. For these reasons I am not ready to suggest, based on this analysis alone, that raising capital and labor taxes is a good idea at zero interest rates. Indeed, my conjecture is that a reasonable case can be made for a temporary budget deficit to finance a stimulus plan… 

Well, yeah. I think most of us can agree that raising payroll taxes in a recession is probably not the best idea. But at this point, what are we even doing here? If you’re going to defer to arguments “outside the model” whenever the model says something inconvenient or surprising, why bother with the model at all?

EDIT: I put this post up a few days ago, then took it down because it seemed a little thin and I thought I would add another example or two of the same kind of thing. But I’m feeling now that more criticism of mainstream economics is not a good use of my time. If that’s what you want, you should check out this great post by Noah Smith. Noah is so effective here for the same reason that he’s sometimes so irritatingly wrong — he’s writing from inside the mainstream. The truth is, to properly criticize these models, you have to have a deep knowledge of them, which he has and I do not.

Arjun and I have a piece in an upcoming Economic and Political Weekly on how liberal, “saltwater” economists share the blame for creating an intellectual environment favorable to austerian arguments, however much they oppose them in particular cases. I feel pretty good about it — will link here when it comes out — and I think for me, that’s enough criticism of modern macro. In general, the problem with radical economists is that they spend too much time on negative criticism of the economics profession, and not enough making a positive case for an alternative. This criticism applies to me too. My comparative advantage in econblogging is presenting interesting Keynesian and Marxist work.

One thing one learns working at a place like Working Families: the hard thing is not convincing people that shit is fucked up and bullshit; the hard thing is convincing them there’s anything they can do about it. Same deal here: the real challenge isn’t showing the other guys are wrong, it’s showing that we have something better.

What Drives Trade Flows? Mostly Demand, Not Prices

I just participated (for the last time, thank god) in the UMass-New School economics graduate student conference, which left me feeling pretty good about the next generation of heterodox economists. [1] A bunch of good stuff was presented, but for my money, the best and most important work was Enno Schröder’s: “Aggregate Demand (Not Competitiveness) Caused the German Trade Surplus and the U.S. Deficit.” Unfortunately, the paper is not yet online — I’ll link to it the moment it is — but here are his slides.

The starting point of his analysis is that, as a matter of accounting, we can write the ratio of a country’s exports to imports as:

X/M = (m*/m) (D*/D)

where X and M are export and import volumes, m* is the fraction of foreign expenditure spent on the home country’s goods, m is the fraction of the home expenditure spent on foreign goods, and D* and D are total foreign and home expenditure.

This is true by definition. But the advantage of thinking of trade flows this way is that it allows us to separate the changes in trade attributable to expenditure switching (including, of course, the effect of relative price changes) and the changes attributable to different growth rates of expenditure. In other words, it lets us distinguish the changes in trade flows that are due to changes in how each dollar is spent in a given country, from changes in trade flows that are due to changes in the distribution of dollars across countries.

(These look similar to price and income elasticities, but they are not the same. Elasticities are estimated, while this is an accounting decomposition. And changes in m and m*, in this framework, capture all factors that lead to a shift in the import share of expenditure, not just relative prices.)

The heart of the paper is an exercise in historical accounting, decomposing changes in trade ratios into m*/m and D*/D. We can think of these as counterfactual exercises: How would trade look if growth rates were all equal, and each county’s distribution of spending across countries evolved as it did historically; and how would trade look if each country had had a constant distribution of spending across countries, and growth rates were what they were historically? The second question is roughly equivalent to: How much of the change in trade flows could we predict if we knew expenditure growth rates for each country and nothing else?
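The decomposition is simple enough to sketch directly. This is my own toy illustration of the accounting, not Enno’s actual data; all the figures are invented:

```python
# X/M = (m*/m) * (D*/D): the trade ratio decomposed into an
# expenditure-switching term and a relative-demand term.

def trade_ratio(m_star, m, d_star, d):
    """Export/import ratio from import shares and expenditure levels."""
    return (m_star / m) * (d_star / d)

# Invented base-year and end-year values (home country vs. rest of world):
m_star0, m0, d_star0, d0 = 0.020, 0.30, 1000.0, 100.0
m_star1, m1, d_star1, d1 = 0.018, 0.31, 1400.0, 110.0

actual = trade_ratio(m_star1, m1, d_star1, d1)
# Counterfactual 1: freeze expenditure shares, keep historical growth
demand_only = trade_ratio(m_star0, m0, d_star1, d1)
# Counterfactual 2: keep historical shares, impose equal growth
switching_only = trade_ratio(m_star1, m1, d_star0, d0)

print(actual, demand_only, switching_only)
# In this example relative demand alone predicts an even larger rise in
# the trade ratio than actually occurred, while expenditure switching
# alone predicts a fall -- the Germany pattern described in the text.
```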

The key results are in the figure below. Look particularly at Germany, in the middle right of the first panel:

The dotted line is the actual ratio of exports to imports. Since Germany has recently had a trade surplus, the line lies above one — over the past decade, German exports have exceeded German imports by about 10 percent. The dark black line is the counterfactual ratio if the division of each country’s expenditures among various countries’ goods had remained fixed at their average level over the whole period. When the dark black line is falling, that indicates a country growing more rapidly than the countries it exports to; with the share of expenditure on imports fixed, higher income means more imports and a trade balance moving toward deficit. Similarly, when the black line is rising, that indicates a country’s total expenditure growing more slowly than expenditure in its export markets, as was the case for Germany from the early 1990s until 2008. The light gray line is the other counterfactual — the path trade would have followed if all countries had grown at an equal rate, so that trade depended only on changes in competitiveness. When the dotted line and the heavy black line move more or less together, we can say that shifts in trade are mostly a matter of aggregate demand; when the dotted line and the gray line move together, mostly a matter of competitiveness (which, again, includes all factors that cause people to shift expenditure between different countries’ goods, including but not limited to exchange rates).

The point here is that if you only knew the growth of income in Germany and its trade partners, and nothing at all about German wages or productivity, you could fully explain the German trade surplus of the past decade. In fact, based on income growth alone you would predict an even larger surplus; the fraction of the world’s dollars falling on German goods actually fell. Or as Enno puts it: During the period of the German export boom, Germany became less, not more, competitive. [2] The cases of Spain, Portugal and Greece (tho not Italy) are symmetrical: Despite the supposed loss of price competitiveness they experienced under the euro, the share of expenditure falling on these countries’ goods and services actually rose during the periods when their trade balances worsened; their growing deficits were entirely a product of income growth more rapid than their trade partners’.

These are tremendously important results. In my opinion, they are fatal to the claim (advanced by Krugman among others) that the root of the European crisis is the inability to adjust exchange rates, and that a devaluation in the periphery would be sufficient to restore balanced trade. (It is important to remember, in this context, that southern Europe was running trade deficits for many years before the establishment of the euro.) They also imply a strong criticism of free trade. If trade flows depend mostly or entirely on relative income, and if large trade imbalances are unsustainable for most countries, then relative growth rates are going to be constrained by import shares, which means that most countries are going to grow below their potential. (This is similar to the old balance-of-payments-constrained growth argument.) But the key point, as Enno stresses, is that both the “left” argument about low German wage growth and the “right” argument about high German productivity growth are irrelevant to the historical development of German export surpluses. Slower income growth in Germany than in its trade partners explains the whole story.

I really like the substantive argument of this paper. But I love the methodology. There is an econometrics section, which is interesting (among other things, he finds that the Marshall-Lerner condition is not satisfied for Germany, another blow to the relative-prices story of the euro crisis). But the main conclusions of the paper don’t depend in any way on it. In fact, the thing can be seen as an example of an alternative methodology to econometrics for empirical economics: historical accounting, or decomposition analysis. This is the same basic approach that Arjun Jayadev and I take in our paper on household debt, and which has long been used to analyze the historical evolution of public debt. Another interesting application of this kind of historical accounting is the decomposition of changes in the profit rate into the effects of the profit share, the utilization rate, and the technologically-determined capital-output ratio, an approach pioneered by Thomas Weisskopf and developed by others, including Ed Wolff, Erdogan Bakir, and my teacher David Kotz.

People often say that these accounting exercises can’t be used to establish claims about causality. And strictly speaking this is true, though they certainly can be used to reject certain causal stories. But that’s true of econometrics too. It’s worth taking a step back and remembering that no matter how fancy our econometrics, all we are ever doing with those techniques is describing the characteristics of a matrix. We have the observations we have, and all we can do is try to summarize the relationships between them in some useful way. When we make causal claims using econometrics, it’s by treating the matrix as if it were drawn from some stable underlying probability distribution function (pdf). One of the great things about these decomposition exercises — or about other empirical techniques, like principal component analysis — is that they limit themselves to describing the actual data. In many cases — lots of labor economics, for instance — the fiction of a stable underlying pdf is perfectly reasonable. But in other cases — including, I think, almost all interesting questions in macroeconomics — the conventional econometrics approach is a bit like asking, If a whale were the top of an island, what would the underlying geology look like? It’s certainly possible to come up with an answer to that question. But it is probably not the simplest way of describing the shape of the whale.
[1] A perennial question at these things is whether we should continue identifying ourselves as “heterodox,” or just say we’re doing economics. Personally, I’ll be happy to give up the distinct heterodox identity just as soon as economists are willing to give up their distinct identity and dissolve into the larger population of social scientists, or of guys with opinions.
[2] The results for the US are symmetrical with those for Germany: the growing US trade deficit since 1990 is fully explained by more rapid US income growth relative to its trade partners. But it’s worth noting that China is not: Knowing only China’s relative income growth, which has been of course very high, you would predict that China would be moving toward trade deficits, when in fact it has been moving toward surplus. This is consistent with a story that explains China’s trade surpluses by an undervalued currency, tho it is consistent with other stories as well.

Only Ever Equilibrium?

Roger Farmer has a somewhat puzzling guest post up at Noah Smith’s place, arguing that economics is right to limit discussion to equilibrium:

An economic equilibrium, in the sense of Nash, is a situation where a group of decision makers takes a sequence of actions that is best, (in a well defined sense), on the assumption that every other decision maker in the group is acting in a similar fashion. In the context of a competitive economy with a large number of players, Nash equilibrium collapses to the notion of perfect competition.  The genius of the rational expectations revolution, largely engineered by Bob Lucas, was to apply that concept to macroeconomics by successfully persuading the profession to base our economic models on Chapter 7 of Debreu’s Theory of Value… In Debreu’s vision, a commodity is indexed by geographical location, by date and by the state of nature.  Once one applies Debreu’s vision of general equilibrium theory to macroeconomics, disequilibrium becomes a misleading and irrelevant distraction. 

The use of equilibrium theory in economics has received a bad name for two reasons. 

First, many equilibrium environments are ones where the two welfare theorems of competitive equilibrium theory are true, or at least approximately true. That makes it difficult to think of them as realistic models of a depression, or of a financial collapse… Second, those macroeconomic models that have been studied most intensively, classical and new-Keynesian models, are ones where there is a unique equilibrium. Equilibrium, in this sense, is a mapping from a narrowly defined set of fundamentals to an outcome, where an outcome is an observed temporal sequence of unemployment rates, prices, interest rates etc. Models with a unique equilibrium do not leave room for non-fundamental variables to influence outcomes…

Multiple equilibrium models do not share these shortcomings… [But] a model with multiple equilibria is an incomplete model. It must be closed by adding an equation that explains the behavior of an agent when placed in an indeterminate environment. In my own work I have argued that this equation is a new fundamental that I call a belief function.

(Personally, I might just call it a convention.)

Some recent authors have argued that rational expectations must be rejected and replaced by a rule that describes how agents use the past to forecast the future. That approach has similarities to the use of a belief function to determine outcomes, and when added to a multiple equilibrium model of the kind I favor, it will play the same role as the belief function. The important difference of multiple equilibrium models, from the conventional approach to equilibrium theory, is that the belief function can coexist with the assumption of rational expectations. Agents using a rule of this kind, will not find that their predictions are refuted by observation. …

So his point here is that in a model with multiple equilibria, there is no fundamental reason why the economy should occupy one rather than another. You need to specify agents’ expectations independently, and once you do, whatever outcome they expect, they’ll be correct. This allows for an economy to experience involuntary unemployment, for example, as expectations of high or low income lead to increased or curtailed expenditure, which results in expected income, whatever it was, being realized. This is the logic of the Samuelson Cross we teach in introductory macro. But it’s not, says Farmer, a disequilibrium in any meaningful way:
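The self-fulfilling logic can be illustrated with a toy version of the Keynesian cross. This is my own sketch, not Farmer’s model, and the parameters are arbitrary:

```python
# Realized income equals total expenditure, and expenditure depends on
# expected income. If the marginal propensity to spend out of expected
# income is 1 (with no autonomous spending), every expectation is
# self-fulfilling: a continuum of equilibria. With a propensity below
# 1, adaptively updated expectations converge to a unique fixed point.

def realized_income(expected_income, mpc, autonomous):
    """Income realized when agents spend based on expectations."""
    return autonomous + mpc * expected_income

# mpc = 1: whatever income agents expect, that is what they get
for y_expected in (50.0, 100.0, 200.0):
    assert realized_income(y_expected, mpc=1.0, autonomous=0.0) == y_expected

# mpc < 1: iterate expectations; income converges to 40 / (1 - 0.6) = 100
y = 50.0
for _ in range(200):
    y = realized_income(y, mpc=0.6, autonomous=40.0)
print(round(y, 6))   # 100.0
```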

If by disequilibrium, I am permitted to mean that the economy may deviate for a long time, perhaps permanently, from a social optimum; then I have no trouble with championing the cause. But that would be an abuse of the term ‘disequilibrium’. If one takes the more normal use of disequilibrium to mean agents trading at non-Walrasian prices, … I do not think we should revisit that agenda. Just as in classical and new-Keynesian models where there is a unique equilibrium, the concept of disequilibrium in multiple equilibrium models is an irrelevant distraction.

I quote this at such length because it’s interesting. But also because, to me at least, it’s rather strange. There’s nothing wrong with the multiple equilibrium approach he’s describing here, which seems like a useful way of thinking about a number of important questions. But to rule out a priori any story in which people’s expectations are not fulfilled rules out a lot of other useful ways of thinking about important questions.

At INET in Berlin, the great Axel Leijonhufvud gave a talk where he described the defining feature of a crisis as the existence of inconsistent contractual commitments, so that some of them would have to be voided or violated.

What is the nature of our predicament? The web of contracts has developed serious inconsistencies. All the promises cannot possibly be fulfilled. Insisting that they should be fulfilled will cause a collapse of very large portions of the web.

But Farmer is telling us that economists not only don’t need to, but positively should not, attempt to understand crises in this sense. It’s an “irrelevant distraction” to consider the case where people entered into contracts with inconsistent expectations, which will not all be capable of being fulfilled. Farmer can hardly be unfamiliar with these ideas; after all he edited Leijonhufvud’s festschrift volume. So why is he being so dogmatic here?

I had an interesting conversation with Rajiv Sethi after Leijonhufvud’s talk; he said he thought that the inability to consider cases where plans were not realized was a fundamental theoretical shortcoming of mainstream macro models. I don’t disagree.

The thing about the equilibrium approach, as Farmer presents it, isn’t just that it rules out the possibility of people being systematically wrong; it rules out the possibility that they disagree. This strikes me as a strong proposition, and an empirically false one in important cases. (Keynes suggested that the effectiveness of monetary policy depends on the existence of both optimists and pessimists in financial markets.) In Farmer’s multiple equilibrium models, whatever outcome is set by convention, that’s the outcome expected by everyone. This is certainly reasonable in some cases, like the multiple equilibria of driving on the left or the right side of the road. Indeed, I suspect that the fact that people are irrationally confident in these kinds of conventions, and expect them to hold even more consistently than they do, is one of the main things that stabilizes these kinds of equilibria. But not everything in economics looks like that.

Here’s Figure 1 from my Fisher dynamics paper with Arjun Jayadev:

See those upward slopes way over on the left? Between 1929 and 1933, household debt relative to GDP rose by about 40 percent, and nonfinancial business debt relative to GDP nearly doubled. This is not, of course, because families and businesses were borrowing more in the Depression; on the contrary, they were paying down debt as fast as they could. But in the classic debt-deflation story, falling prices and output meant that incomes were falling even faster than debt, so leverage actually increased.
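The arithmetic of that story is easy to reproduce. A stylized sketch (these numbers are invented for illustration, not the actual 1929–33 figures):

```python
# Fisher dynamics: leverage can rise even while debtors repay, if
# nominal income falls faster than the debt stock does.

debt, income = 0.7, 1.0       # initial debt equal to 70 percent of income
for year in range(1, 5):
    debt *= 0.95              # net repayment of 5 percent of debt per year
    income *= 0.80            # nominal income falls 20 percent per year
    print(year, round(debt / income, 3))
# The debt stock shrinks every year, yet the debt-income ratio roughly
# doubles over the four years.
```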

Roger Farmer, if I’m understanding him correctly, is saying that we must see this increase in debt-income ratios as an equilibrium phenomenon. He is saying that households and businesses taking out loans in 1928 must have known that their incomes were going to fall by half over the next five years, while their debt payments would stay unchanged, and chose to borrow anyway. He is saying not just that he believes that, but that as economists we should not consider any other view; we can rule out on methodological grounds the possibility that the economic collapse of the early 1930s caught people by surprise. To Irving Fisher, to Keynes, to almost anyone, to me, the rise in debt ratios in the early 1930s looks like a pure disequilibrium phenomenon; people were trading at false prices, signing nominal contracts whose real terms would end up being quite different from what they expected. It’s one of the most important stories in macroeconomics, but Farmer is saying that we should forbid ourselves from telling it. I don’t get it.

What am I missing here?

What Adjusts?

More teaching: We’re starting on the open economy now. Exchange rates, trade, international finance, the balance of payments. So one of the first things you have to explain is the definition of real and nominal exchange rates:

e_R = e_N P*/P 

where P and P* are the home and foreign price levels respectively, and the exchange rate e is defined as the price of foreign exchange (so an appreciation means that e falls and a depreciation means that it rises).
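A minimal numerical sketch (with made-up numbers) keeps the sign conventions straight:

```python
# Real exchange rate identity: e_R = e_N * P_star / P, where e_N is the
# price of foreign exchange (home currency per unit of foreign currency).

def real_exchange_rate(e_N, P_star, P):
    """Real exchange rate given the nominal rate and the two price levels."""
    return e_N * P_star / P

# Toy numbers: nominal rate 1.25, foreign price level 100, home price level 110.
base = real_exchange_rate(1.25, 100.0, 110.0)

# A rise in the home price level P, with e_N and P* fixed, lowers e_R:
# a real appreciation, exactly as the definition implies.
assert real_exchange_rate(1.25, 100.0, 120.0) < base
```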

This is a useful definition to know — though of course it’s not as straightforward as it seems, since as we’ve discussed before there are various possible Ps, and once we are dealing with more than two countries we have to decide how to weight them, with different statistical agencies using different weightings. But set all that aside. What I want to talk about now is what a nice little example this equation offers of a structuralist perspective on the economy.

As given above, the equation is an accounting identity. It’s always exactly true, simply because that’s how we’ve defined the real exchange rate. As an accounting identity, it doesn’t in itself say anything about causation. But that doesn’t mean it’s vacuous. After all, we picked this particular definition because we think it is associated with some causal story. [1] The question is, what story? And that’s where things get interesting.

Since we have one equation, we should have one endogenous (or dependent) variable. But which one depends on the context.

If we are telling a story about exchange rate determination, we might think that the endogenous variable is e_N. If price levels are determined by the evolution of aggregate supply and demand (or the growth of the money stock, if you prefer) in each country, and if arbitrage in the goods market enforces something like Purchasing Power Parity (PPP), then the nominal exchange rate will have to adjust to keep the real price of a comparable basket of goods from diverging across countries.

On the other hand, we might not think PPP holds, at least in the short run, and we might think that the nominal exchange rate cannot adjust freely. (A fixed exchange rate is the obvious reason, but it’s also possible that the forex markets could push the nominal exchange rate to some arbitrary level.) In that case, it’s the real exchange rate that is endogenous, so we can see changes in the price of comparable goods in one country relative to another. This is implicitly the causal structure that people have in mind when they argue that China is pursuing a mercantilist strategy by pegging its nominal exchange rate, that devaluation would improve current account balances in the European periphery, or that the US could benefit from a lower (nominal) dollar. Here the causal story runs from e_N to e_R.

Alternatively, maybe the price level is endogenous. This is less intuitive, but there’s at least one important story where it’s the case. Anti-inflation programs in a number of countries, especially in Latin America, have made use of a fixed exchange rate as a “nominal anchor.” The idea here is that in a small open economy, especially where high inflation has led to widespread use of a foreign currency as the unit of account, the real exchange rate is effectively fixed. So if the nominal exchange rate can also be effectively fixed, then, like it or not, the domestic price level P will have to be fixed as well. Here’s Jeffrey Sachs on the Bolivian stabilization:

The sudden end of a 60,000 percent inflation seems almost miraculous… Thomas Sargent (1986) argued that such a dramatic change in price inflation results from a sudden and drastic change in the public’s expectations of future government policies… I suggest, in distinction to Sargent, that the Bolivian experience highlights a different and far simpler explanation of the very rapid end of hyperinflations. By August 1985,… prices were set either explicitly or implicitly in dollars, with transactions continuing to take place in peso notes, at prices determined by the dollar prices converted at the spot exchange rate. Therefore, by stabilizing the exchange rate, domestic inflation could be made to revert immediately to the US dollar inflation rate. 

So here the causal story runs from e_N to P.

In the three cases so far, we implicitly assume that P* is fixed, or at least exogenous. This makes sense; since a single country is much smaller than the world as a whole, we don’t expect anything it does to affect the world price level much. So the last logical possibility, P* as the endogenous variable, might seem to lack a corresponding real world story. But an individual country is not always so much smaller than the world as a whole, at least not if the individual country is the United States. It’s legitimate to ask whether a change in our price level or exchange rate might not show up as inflation or deflation elsewhere. This is particularly likely if we are focusing on a bilateral relationship. For instance, it might well be that a devaluation of the dollar relative to the renminbi would simply (or mostly) produce corresponding deflation [2] in China, leaving the real exchange rate unchanged.
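The four stories can be made concrete by solving the same identity for whichever variable each closure treats as endogenous (a schematic sketch, not a model of any actual economy):

```python
# The identity e_R = e_N * P_star / P, closed four different ways.
# Each closure takes three variables as given and solves for the fourth.

def close_for_e_N(e_R, P_star, P):
    # PPP story: the nominal rate adjusts so real prices don't diverge.
    return e_R * P / P_star

def close_for_e_R(e_N, P_star, P):
    # Fixed or sticky nominal rate: the real rate is whatever falls out.
    return e_N * P_star / P

def close_for_P(e_N, e_R, P_star):
    # Nominal-anchor story: the domestic price level does the adjusting.
    return e_N * P_star / e_R

def close_for_P_star(e_N, e_R, P):
    # Large-country case: the foreign price level adjusts.
    return e_R * P / e_N

# All four closures are consistent with the same accounting identity:
e_N, P_star, P = 1.25, 100.0, 110.0
e_R = close_for_e_R(e_N, P_star, P)
assert abs(close_for_e_N(e_R, P_star, P) - e_N) < 1e-12
assert abs(close_for_P(e_N, e_R, P_star) - P) < 1e-12
assert abs(close_for_P_star(e_N, e_R, P) - P_star) < 1e-12
```

The identity itself cannot distinguish among these functions; only the causal story we attach to it can.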

Here, of course, we have only one equation. But if we interpret it causally, that is already a model, and the question of “what adjusts?” can be rephrased as the choice between alternative model closures. With multiple-equation models, that choice gets trickier — and it can be tricky enough with one equation.

In my opinion, sensitivity to alternative model closures is at the heart of structuralist economics, and is the great methodological innovation of Keynes. The specific application that defines the General Theory is the model closure that endogenizes aggregate income — the interest rate, which was supposed to equilibrate savings and investment, is pinned down by the supply and demand of liquidity, so total income is what adjusts — but there’s a more general methodological principle. “Thinking like an economist,” that awful phrase, should mean being able to choose among different stories — different model closures — based on the historical context and your own interests. It should mean being able to look at a complex social reality and judge which logical relationships represent the aspects of it you’re currently interested in, and which accounting identities are most relevant to the story you want to tell. Or as Keynes put it, economics should be thought of as

a branch of logic, a way of thinking … in terms of models, joined to the art of choosing models which are relevant to the contemporary world. … [The goal is] not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems.

Much of mainstream macroeconomics assumes there is a “true” model of the world. Connected to this, there’s an insistence — shared even by a lot of heterodox macro — on regarding some variables as being strictly exogenous and others as strictly endogenous, so that in every story causality runs the same way. In the canonical story, tastes, technology and endowments (one can’t help hearing: by the Creator) are perfectly fixed, and everything else is perfectly adjustable. [3]

Better to follow Keynes, and think about models as more or less useful for clarifying the logic of particular stories.

EDIT: Of course not everyone who recognizes the methodological distinction I’m making here agrees that the eclecticism of structuralism is an advantage. Here is my teacher Peter Skott (with Ben Zipperer):

The `heterodox’ tradition in macroeconomics contains a wide range of models. Kaleckian models treat the utilization rate as an accommodating variable, both in the short and the long run. Goodwin’s celebrated formalization of Marx, by contrast, take the utilization rate as fixed and looks at the interaction between employment and distribution. Distribution is also central to Kaldorian and Robinsonian theories which, like Goodwin, endogenize the profit share and take the utilization rate as structurally determined in the long run but, like the Kaleckians, view short-run variations in utilization as an intrinsic part of the cycle. The differences in these and other areas are important, and this diversity of views on core issues is no cause for celebration.

EDIT 2: Trygve Haavelmo, quoted by Leijonhufvud:

There is no reason why the form of a realistic model (the form of its equations) should be the same under all values of its variables. We must face the fact that the form of the model may have to be regarded as a function of the values of the variables involved. This will usually be the case if the values of some of the variables affect the basic conditions of choice under which the behavior equations in the model are derived.

That’s what I’m talking about. There is no “true” model of the economy. The behavioral relationships change depending on where we are in economic space.

Also, Bruce Wilder has a long and characteristically thoughtful comment below. I don’t agree with everything he says — it seems a little too hopeless about the possibility of useful formal analysis even in principle — but it’s very worth reading.

[1] “Accounting identities don’t tell causal stories” is a bit like “correlation doesn’t imply causation.” Both statements are true in principle, but the cases we’re interested in are precisely the cases where we have some reason to believe that it’s not true. And for both statements, the converse does not hold. A causal story that violates accounting identities, or for which there is no corresponding correlation, has a problem.

[2] Or lower real wages, the same thing in this context.

[3] Or you sometimes get a hierarchy of “fast” and “slow” variables, where the fast ones are supposed to fully adjust before the slow ones change at all.

More Anti-Krugmanism

[Some days it feels like that could be the title for about 40 percent of the posts on here.]

Steve Keen takes up the cudgels. (Via.)

There is a pattern to neoclassical attempts to increase the realism of their models… The author takes the core model – which cannot generate the real world phenomenon under discussion – and then adds some twist to the basic assumptions which, hey presto, generate the phenomenon in some highly stylised way. The mathematics (or geometry) of the twist is explicated, policy conclusions (if any) are then drawn, and the paper ends.

The flaw with this game is the very starting point, and since Minsky put it best, I’ll use his words to explain it: “Can ‘It’ – a Great Depression – happen again? And if ‘It’ can happen, why didn’t ‘It’ occur in the years since World War II? … To answer these questions it is necessary to have an economic theory which makes great depressions one of the possible states in which our type of capitalist economy can find itself.”

The flaw in the neoclassical game is that it never achieves Minsky’s final objective, because the “twists” that the author adds to the basic assumptions of the neoclassical model are never incorporated into its core. The basic theory therefore remains one in which the key phenomenon under investigation … cannot happen. The core theory remains unaltered – rather like a dog that learns how to walk on its hind legs, but which then reverts to four legged locomotion when the performance is over.

Right.

Any theory is an abstraction of the real world, but the question is which features of the world you can abstract from, and which, for the purposes of theory, are fundamental. Today’s consensus macroeconomics [1] treats intertemporal maximization of a utility function (with consumption and labor as the only arguments) under given endowments and production functions, and unique, stable market-clearing equilibria as the essential features that any acceptable theory has to start from. It treats firms (profit-maximizing or otherwise), money, credit, uncertainty, the existence of classes, and technological change as non-essential features that need to be derived from intertemporal maximization by households, can be safely ignored, or at best added in an ad hoc way. And change is treated in terms of comparative statics rather than dynamic processes or historical evolution.

Now people will say, But can’t you make the arguments you want to within standard techniques? And in that case, shouldn’t you? Even if it’s not strictly necessary, isn’t it wise to show your story is compatible with the consensus approach, since that way you’ll be more likely to convince other economists, have more political influence, etc.?

If you’re a super smart micro guy (as are the two friends I’ve recently had this conversation with) then there’s probably a lot of truth to this. The type of work you do if you genuinely want to understand a labor market question, say, and the type of work you do if you want to win an argument within the economics profession about labor markets, may not be exactly the same, but they’re reasonably compatible. Maybe the main difference is that you need fancier econometrics to convince economists than to learn about the world?

But if you’re doing macroeconomics, the concessions you have to make to make your arguments acceptable are more costly. When you try to force Minsky into a DSGE box, as Krugman does; or when half of your paper on real exchange rates is taken up with models of utility maximization by households; then you’re not just wasting an enormous amount of time and brainpower. You’re arguing against everyone else trying to do realistic work on other questions, including yourself on other occasions. And you’re ensuring that your arguments will have a one-off, ad hoc quality, instead of being developed in a systematic way.

(Not to mention that the consensus view isn’t even coherent on its own terms. Microfoundations are a fraud, since the representative household can’t be derived from a more general model of utility maximizing agents; and it seems clear that intertemporal maximization and comparative statics are logically incompatible.) 

If we want to get here, we shouldn’t start from there. We need an economics whose starting points are production for profit by firms employing wage labor, under uncertainty, in a monetary economy, that evolves in historical time. That’s what Marx, Keynes and Schumpeter in their different ways were all doing. They, and their students, have given us a lot to build on. But to do so, we [2] have to give up on trying to incorporate their insights piecemeal into the consensus framework, and cultivate a separate space to develop a different kind of economics, one that starts from premises corresponding to the fundamental features of modern capitalist economies.

[1] I’ve decided to stop using “mainstream” in favor of “consensus”, largely because the latter term is used by people to describe themselves.

[2] By “we,” again, I mean heterodox macroeconomists specifically. I’m not sure how much other economists face the same sharp tradeoff between winning particular debates within the economics profession and building an economics that gives us genuine, useful knowledge about the world.

Summers on Microfoundations

From The Economist’s report on this weekend’s Institute for New Economic Thinking conference at Bretton Woods:

The highlight of the first evening’s proceedings was a conversation between Harvard’s Larry Summers, till recently President Obama’s chief economic advisor, and Martin Wolf of the Financial Times. Much of the conversation centred on Mr Summers’s assessments of how useful economic research had been in recent years. Paul Krugman famously said that much of recent macroeconomics had been “spectacularly useless at best, and positively harmful at worst”. Mr Summers was more measured… But in its own way, his assessment of recent academic research in macroeconomics was pretty scathing. For instance, he talked about all the research papers that he got sent while he was in Washington. He had a fairly clear categorisation for which ones were likely to be useful: read virtually all the ones that used the words leverage, liquidity, and deflation, he said, and virtually none that used the words optimising, choice-theoretic or neoclassical (presumably in the titles or abstracts). His broader point—reinforced by his mentions of the knowledge contained in the writings of Bagehot, Minsky, Kindleberger, and Eichengreen—was, I think, that while it would be wrong to say economics or economists had nothing useful to say about the crisis, much of what was the most useful was not necessarily the most recent, or even the most mainstream. Economists knew a great deal, he said, but they had also forgotten a great deal and been distracted by a lot. Even more scathing, perhaps, was his comment that as a policymaker he had found essentially no use for the vast literature devoted to providing sound micro-foundations to macroeconomics.

Pretty definitive, no?

And that’s it — I promise! — on microfoundations, methodology, et hoc genus omne in these parts, at least for a while. I have a couple of new posts at least purporting to offer concrete analysis of the concrete situation, just about ready to go.