Lecture Notes for Research Methods

I’m teaching a new class this semester, a masters-level class on research methods. It could be taught as simply the second semester of an econometrics sequence, but I’m taking a different approach, trying to think about what will help students do effective empirical work in policy/political settings. We’ll see how it works.

For anyone interested, here are the slides I will use on the first day. I’m not sure it’s all right; in fact, I’m sure some of it is wrong. But that is how you figure out what you really think and know and don’t know about something, by teaching it.

After we’ve talked through this, we will discuss this old VoxEU piece as an example of effective use of simple scatterplots to make an economic argument.

I gave a somewhat complementary talk on methodology and heterodox macroeconomics at the Eastern Economics Association meetings last year. I’ve been meaning to transcribe it into a blogpost, but in the meantime you can listen to a recording, if you’re interested.


The Wit and Wisdom of Trygve Haavelmo

I was talking some time ago with my friend Enno about Merijn Knibbe’s series of articles on the disconnect between the variables used in economic models and the corresponding variables in the national accounts.1 Enno mentioned Trygve Haavelmo’s 1944 article The Probability Approach in Econometrics; he thought Haavelmo’s distinction between “theoretical variables,” “true variables,” and “observable variables” could be a useful way of thinking about the slippages between economic reality, economic data and economic theory.

I finally picked up the Haavelmo article, and it turns out to be a deep and insightful piece — for the reason Enno mentioned, but also more broadly on how to think about empirical economics. It’s especially interesting coming from someone who won the Nobel Prize for his foundational work in econometrics. Another piece of evidence that orthodox economists in the mid-20th century thought more deeply and critically about the nature of their project than their successors do today.

It’s a long piece, with a lot of mathematical illustrations that someone reading it today can safely skip. The central argument comes down to three overlapping points. First, economic models are tools, developed to solve specific problems. Second, economic theories have content only insofar as they’re associated with specific procedures for measurement. Third, we have positive economic knowledge only insofar as we can make unconditional predictions about the distribution of observable variables.

The first point: We study economics in order to “become master of the happenings of real life.” This is on some level obvious, or vacuous, but it’s important; it functions as a kind of “he who has ears, let him hear.” It marks the line between those who come to economics as a means to some other end — a political commitment, for many of us; but it could just as well come from a role in business or policy — and those for whom economic theory is an end in itself. Economics education must, obviously, be organized on the latter principle. As soon as you walk into an economics classroom, the purpose of your being there is to learn economics. But you can’t, from within the classroom, make any judgement about what is useful or interesting for the world outside. Or as Hayek put it, “One who is only an economist, cannot be a good economist.”2

Here is what Haavelmo says:

Theoretical models are necessary tools in our attempts to understand and explain events in real life. … Whatever be the “explanations” we prefer, it is not to be forgotten that they are all our own artificial inventions in a search for an understanding of real life; they are not hidden truths to be “discovered.”

It’s an interesting question, which we don’t have to answer here, whether or to what extent this applies to the physical sciences as well. Haavelmo thinks this pragmatic view of scientific laws applies across the board:

The phrase “In the natural sciences we have laws” means not much more and not much less than this: The natural sciences have chosen fruitful ways of looking upon physical reality.

We don’t need to decide here whether we want to apply this pragmatic view to the physical sciences. It is certainly the right way to look at economic models, in particular the models we construct in econometrics. The “data generating process” is not an object existing out in the world. It is a construct you have created for one or both of these reasons: It is an efficient description of the structure of a specific matrix of observed data; it allows you to make predictions about some specific yet-to-be-observed outcome. The idea of a data-generating process is obviously very useful in thinking about the logic of different statistical techniques. It may be useful to do econometrics as if there were a certain data generating process. It is dangerously wrong to believe there really is one.

Speaking of observation brings us to Haavelmo’s second theme: the meaninglessness of economic theory except in the context of a specific procedure for observation. It might naively seem, he says, that

since the facts we want to study present themselves in the form of numerical measurement, we shall have to choose our models from … the field of mathematics. But the concepts of mathematics obtain their quantitative meaning implicitly through the system of logical operations we impose. In pure mathematics there really is no such problem as quantitative definition of a concept per se …

When economists talk about the problem of quantitative definitions of economic variables, they must have something in mind which has to do with real economic phenomena. More precisely, they want to give exact rules how to measure certain phenomena of real life.

Anyone who got a B+ in real analysis will have no problem with the first part of this statement. For the rest, this is the point: economic quantities come into existence only through some concrete human activity that involves someone writing down a number. You can ignore this, most of the time; but you should not ignore it all of the time. Because without that concrete activity there’s no link between economic theory and the social reality it hopes to help us master or make sense of.

Haavelmo has some sharp observations on the kind of economics that ignores the concrete activity that generates its data, which seem just as relevant to economic practice today:

Does a system of equations become less mathematical and more economic in character just by calling x “consumption,” y “price,” etc.? There are certainly many examples of studies to be found that do not go very much further than this, as far as economic significance is concerned.

There certainly are!

An equation, Haavelmo continues,

does not become an economic theory just by using economic terminology to name the variables involved. It becomes an economic theory when associated with the rule of actual measurement of economic variables.

I’ve seen plenty of papers where the thought process seems to have been something like, “I think this phenomenon is cyclical. Here is a set of difference equations that produce a cycle. I’ll label the variables with names of parts of the phenomenon. Now I have a theory of it!” With no discussion of how to measure the variables or in what sense the objects they describe exist in the external world.

What makes a piece of mathematical economics not only mathematics but also economics is this: When we set up a system of theoretical relationships and use economic names for the otherwise purely theoretical variables involved, we have in mind some actual experiment, or some design of an experiment, which we could at least imagine arranging, in order to measure those quantities in real economic life that we think might obey the laws imposed on their theoretical namesakes.

Right. A model has positive content only insofar as we can describe the concrete set of procedures that gets us from the directly accessible evidence of our senses to the quantities in the model. In my experience this comes through very clearly if you talk to someone who actually works in the physical sciences. A large part of their time is spent close to the interface with concrete reality — capturing that lizard, calibrating that laser. The practice of science isn’t simply constructing a formal analog of physical reality, a model trainset. It’s actively pushing against unknown reality and seeing how it pushes back.

Haavelmo:

When considering a theoretical setup … it is common to ask about the actual meaning of this or that variable. But this question has no sense within the theoretical model. And if the question applies to reality it has no precise answer … we will always need some willingness among our fellow research workers to agree “for practical purposes” on questions of definitions and measurement … A design of experiments … is an essential appendix to any quantitative theory.

With respect to macroeconomics, the “design of experiments” means, in the first instance, the design of the national accounts. Needless to say, national accounting concepts cannot be treated as direct observations of the corresponding terms in economic theory, even if they have been reconstructed with that theory in mind. Cynamon and Fazzari’s paper on the measurement of household spending gives some perfect examples of this. There can’t be many contexts in which Medicare payments to hospitals, for example, are what people have in mind when they construct models of household consumption. But nonetheless that’s what they’re measuring when they use consumption data from the national accounts.

I think there’s an important sense in which the actual question of any empirical macroeconomics work has to be: What concrete social process led the people working at the statistics office to enter these particular values in the accounts?

Or as Haavelmo puts it:

There is hardly an economist who feels really happy about identifying the current series of “national income,” “consumption,” etc. with the variables by those names in his theories. Or, conversely, he would think it too complicated or perhaps uninteresting to try to build models … [whose] variables would correspond to those actually given by current economic statistics. … The practical conclusion … is the advice that economists hardly ever fail to give, but that few actually follow, that one should study very carefully the actual series considered and the conditions under which they were produced, before identifying them with the variables of a particular theoretical model.

Good advice! And, as he says, hardly ever followed.

I want to go back to the question of the “meaning” of a variable, because this point is so easy to miss. Within a model, the variables have no meaning; we simply have a set of mathematical relationships that are either tautologous, arbitrary, or false. The variables only acquire meaning insofar as we can connect them to concrete social phenomena. It may be unclear to you, as a blog reader, why I’m banging on this point so insistently. Go to an economics conference and you’ll see.

The third central point of the piece is that meaningful explanation requires being able to identify a few causal links as decisive, so that all the other possible ones can be ignored.

Think back to that Paul Romer piece on what’s wrong with modern macroeconomics. One of the most interesting parts of it, to me, was its insistent Humean skepticism about the possibility of a purely inductive economics, or for that matter science of any kind. Paraphrasing Romer: suppose we have n variables, any of which may potentially influence the others. Well then, we have n equations, one for each variable, and n² parameters (counting intercepts). In general, we are not going to be able to estimate this system based on data alone. We have to restrict the possible parameter space either on the basis of theory, or by “experiments” (natural or otherwise) that let us set most of the parameters to zero on the grounds that there is no independent variation in those variables between observations. I’m not sure that Romer fully engages with this point, whose implications go well beyond the failings of real business cycle theory. But it’s a central concern for Haavelmo:

A theoretical model may be said to be simply a restriction upon the joint variations of a system of quantities … which otherwise might have any value. … Our hope in economic theory and research is that it may be possible to establish constant and relatively simple relations between dependent variables … and a relatively small number of independent variables. … We hope that for each variable y to be explained, there is a relatively small number of explaining factors the variations of which are practically decisive in determining the variations of y. … If we are trying to explain a certain observable variable, y, by a system of causal factors, there is, in general, no limit to the number of such factors that might have a potential influence upon y. But Nature may limit the number of factors that have a non-negligible factual influence to a relatively small number. Our hope for simple laws in economics rests upon the assumption that we may proceed as if such natural limitations of the number of relevant factors exist.

One way or another, to do empirical economics, we have to ignore most of the logically possible relationships between our variables. Our goal, after all, is to explain variation in the dependent variable. Meaningful explanation is possible only if the number of relevant causal factors is small. If someone asks “why is unemployment high,” a meaningful answer is going to involve at most two or three causes. If you say, “I have no idea, but all else equal wage regulations are making it higher,” then you haven’t given an answer at all. To be masters of the happenings of real life, we need to focus on causes of effects, not effects of causes.
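To make the identification problem concrete, here is a minimal simulation (my construction, not Romer’s or Haavelmo’s) with just two variables, each of which enters the other’s equation. Regressing one on the other recovers neither structural parameter; the restriction that lets us estimate anything has to come from outside the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed structural model, for illustration only:
#   x = a*y + u,   y = b*x + v,   with independent shocks u, v.
a, b = 0.5, 0.8
u, v = rng.normal(size=n), rng.normal(size=n)

# What we actually observe is the reduced form, solving both
# equations simultaneously.
x = (u + a * v) / (1 - a * b)
y = (v + b * u) / (1 - a * b)

# Regressing y on x does not recover b, because causation runs both
# ways: x is correlated with the shock v to the y equation.
C = np.cov(x, y)
print(f"true b = {b}, regression slope = {C[0, 1] / C[0, 0]:.3f}")
# prints a slope of about 1.04 -- a mixture of the two structural
# parameters, not an estimate of either one.
```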

To put the same point another way: ceteris paribus knowledge isn’t knowledge at all. Only unconditional claims count — but they don’t have to be predictions of a single variable, they can be claims about the joint distribution of several. But in any case we have positive knowledge only to the extent we can unconditionally say that future observations will fall entirely in a certain part of the state space. This fails if we have a ceteris paribus condition, or if our empirical work “corrects” for factors whose distribution and the nature of whose influence we have not investigated.3 Applied science is useful because it gives us knowledge of the kind, “If I don’t turn the key, the car will not start; if I do turn the key, it will — or if it doesn’t there is a short list of possible reasons why not.” It doesn’t give us knowledge like “All else equal, the car is more likely to start when the key is turned than when it isn’t.”4

If probability distributions are simply tools for making unconditional claims about specific events, then it doesn’t make sense to think of them as existing out in the world. They are, as Keynes also emphasized, simply ways of describing our own subjective state of belief:

We might interpret “probability” simply as a measure of our a priori confidence in the occurrence of a certain event. Then the theoretical notion of a probability distribution serves us chiefly as a tool for deriving statements that have a very high probability of being true.

Another way of looking at this. Research in economics is generally framed in terms of uncovering universal laws, for which the particular phenomenon being studied merely serves as a case study.5 But in the real world, it’s more often the other way: We are interested in some specific case, often the outcome of some specific action we are considering. Or as Haavelmo puts it,

As a rule we are not particularly interested in making statements about a large number of observations. Usually, we are interested in a relatively small number of observation points; or perhaps even more frequently, we are interested in a practical statement about just one single new observation.

We want economics to answer questions like, “What will happen if the US imposes tariffs on China?” The question of what effects tariffs have on trade in the abstract is, itself, uninteresting and unanswerable.

What do we take from this? What, according to Haavelmo, should empirical economics look like?

First, the goal of empirical work is to explain concrete phenomena — what happened, or will happen, in some particular case.

Second, the content of a theory is inseparable from the procedures for measuring the variables in it.

Third, empirical work requires restrictions on the logically possible space of parameters, some of which have to be imposed a priori.

Finally, prediction (the goal) means making unconditional claims about the joint distribution of one or more variables. “Everything else equal” means “I don’t know.”

All of this based on the idea that we study economics not as an end in itself, but in response to the problems forced on us by the world.

Varieties of the Phillips Curve

In this post, I first talk about a variety of ways that we can formalize the relationship between wages, inflation and productivity. Then I talk briefly about why these links matter, and finally how, in my view, we should think about the existence of a variety of different possible relationships between these variables.

*

My Jacobin piece on the Fed was, on a certain abstract level, about varieties of the Phillips curve. The Phillips curve is any of a family of graphs with either unemployment or “real” GDP on the X axis, and either the level or the change of nominal wages, the level of prices, or the level or change of inflation on the Y axis. In any of the various permutations (some of which naturally are more common than others) this purports to show a regular relationship between aggregate demand and prices.

This apparatus is central to the standard textbook account of monetary policy transmission. In this account, a change in the amount of base money supplied by the central bank leads to a change in market interest rates. (Newer textbooks normally skip this part and assume the central bank sets “the” interest rate by some unspecified means.) The change in interest rates leads to a change in business and/or housing investment, which results via a multiplier in a change in aggregate output. [1] The change in output then leads to a change in unemployment, as described by Okun’s law. [2] This in turn leads to a change in wages, which is passed on to prices. The Phillips curve describes the last one or two or three steps in this chain.

Here I want to focus on the wage-price link. What are the kinds of stories we can tell about the relationship between nominal wages and inflation?

*

The starting point is this identity:

(1) w = y + p + s

That is, the percentage change in nominal wages (w) is equal to the sum of the percentage changes in real output per worker (y; also called labor productivity), in the price level (p, or inflation) and in the labor share of output (s). [3] This is the essential context for any Phillips curve story. This should be, but isn’t, one of the basic identities in any intermediate macroeconomics textbook.

Now, let’s call the increase in “real” or inflation-adjusted wages r. [4] That gives us a second, more familiar, identity:

(2) r = w – p

The increase in real wages is equal to the increase in nominal wages less the inflation rate.

As always with these kinds of accounting identities, the question is “what adjusts”? What economic processes ensure that individual choices add up in a way consistent with the identity? [5]

Here we have five variables and two equations, so three more equations are needed for it to be determined. This means there are a large number of possible closures. I can think of five that come up, explicitly or implicitly, in actual debates.

Closure 1:

First is the orthodox closure familiar from any undergraduate macroeconomics textbook.

(3a) w = pE + f(U); f’ < 0

(4a) y = y*

(5a) p = w – y

Equation 3a says that labor-market contracts between workers and employers result in nominal wage increases that reflect expected inflation (pE) plus an additional increase, or decrease, that reflects the relative bargaining power of the two sides. [6] The curve described by f is the Phillips curve, as originally formulated — a relationship between the unemployment rate and the rate of change of nominal wages. Equation 4a says that labor productivity growth is given exogenously, based on technological change. 5a says that since prices are set as a fixed markup over costs (and since there is only labor and capital in this framework) they increase at the same rate as unit labor costs — the difference between the growth of nominal wages and labor productivity.

It follows from the above that

(6a) w – p = y

and

(7a) s = 0

Equation 6a says that the growth rate of real wages is just equal to the growth of average labor productivity. This implies 7a — that the labor share remains constant. Again, these are not additional assumptions, they are logical implications from closing the model with 3a-5a.

This closure has a couple other implications. There is a unique level of unemployment U* at which f(U*) = y*; only at this level of unemployment will actual inflation equal expected inflation. Assuming inflation expectations are based on inflation rates realized in the past, any departure from this level of unemployment will cause inflation to rise or fall without limit. This is the familiar non-accelerating inflation rate of unemployment, or NAIRU. [7] Also, an improvement in workers’ bargaining position, reflected in an upward shift of f(U), will do nothing to raise real wages, but will simply lead to higher inflation. Even more: If an inflation-targeting central bank is able to control the level of output, stronger bargaining power for workers will leave them worse off, since unemployment will simply rise enough to keep nominal wage growth in line with y* and the central bank’s inflation target.

Finally, notice that while we have introduced three new equations, we have also introduced a new variable, pE, so the model is still underdetermined. This is deliberate. The orthodox view is that the same set of “real” values is consistent with any constant rate of inflation, whatever that rate happens to be. It follows that a departure of the unemployment rate from U* will cause a permanent change in the inflation rate. It is sometimes suggested, not quite logically, that this is an argument in favor of making price stability the overriding goal of policy. [8]
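A quick simulation shows how this closure behaves once we add adaptive expectations, with pE set to last period’s inflation. The linear form of f and all the parameter values below are my own illustrative assumptions; the accelerationist result does not depend on them.

```python
# Closure 1 with adaptive expectations: pE(t) = p(t-1).
# f(U) = y_star + k*(U_star - U) is an assumed linear Phillips curve,
# constructed so that f(U_star) = y_star, i.e. U_star is the NAIRU.
y_star, U_star, k = 0.02, 0.05, 0.5

def f(U):
    return y_star + k * (U_star - U)

for U in (0.05, 0.04):            # hold unemployment at U*, then below it
    p = 0.02                      # initial inflation
    path = []
    for _ in range(5):
        w = p + f(U)              # (3a): wages = expected inflation + f(U)
        p = w - y_star            # (5a): prices rise with unit labor costs
        path.append(round(p, 3))
    print(f"U = {U:.2f}: inflation path = {path}")
# At U = U*, inflation stays at 2 percent forever; at U below U*, it
# rises every period without limit.
```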

If you pick up an undergraduate textbook by Carlin and Soskice, Krugman and Wells, or Blanchard, this is the basic structure you find. But there are other possibilities.

Closure 2: Bargaining over the wage share

A second possibility is what Anwar Shaikh calls the “classical” closure. Here we imagine the Phillips curve in terms of the change in the wage share, rather than the change in nominal wages.

(3b) s = f(U); f’ < 0

(4b) y = y*

(5b) p = p*

Equation 3b says that the wage share rises when unemployment is low, and falls when unemployment is high. In this closure, inflation as well as labor productivity growth are fixed exogenously. So again, we imagine that low unemployment improves the bargaining position of workers relative to employers, and leads to more rapid wage growth. But now there is no assumption that prices will follow suit, so higher nominal wages instead translate into higher real wages and a higher wage share. It follows that:

(6b) w = f(U) + p + y

Or as Shaikh puts it, both productivity growth and inflation act as shift parameters for the nominal-wage Phillips curve. When we look at it this way, it’s no longer clear that there was any breakdown in the relationship during the 1970s.

If we like, we can add an additional equation making the change in unemployment a function of the wage share, writing the change in unemployment as u.

(7b) u = g(s); g’ > 0 or g’ < 0

If unemployment is a positive function of the wage share (because a lower profit share leads to lower investment and thus lower demand), then we have the classic Marxist account of the business cycle, formalized by Goodwin. But of course, we might imagine that demand is “wage-led” rather than “profit-led” and make U a negative function of the wage share — a higher wage share leads to higher consumption, higher demand, higher output and lower unemployment. Since lower unemployment will, according to 3b, lead to a still higher wage share, closing the model this way leads to explosive dynamics — or more reasonably, if we assume that g’ < 0 (or impose other constraints), to two equilibria, one with a high wage share and low unemployment, the other with high unemployment and a low wage share. This is what Marglin and Bhaduri call a “stagnationist” regime.
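Here is a minimal discrete-time sketch of the profit-led case, in the spirit of Goodwin’s formalization rather than a reproduction of it. The linear forms, the reference values, and the coefficients are all illustrative assumptions.

```python
# Closure 2 plus (7b) with g' > 0: a Goodwin-style distributive cycle.
# s and U are levels here; (3b) makes the *change* in the wage share a
# declining function of unemployment, (7b) makes the *change* in
# unemployment an increasing function of the wage share.
s, U = 0.62, 0.05            # start with the wage share above its reference value
s_bar, U_bar = 0.60, 0.05    # assumed reference values where the system is at rest
alpha, beta = 0.5, 0.5       # assumed response coefficients

for t in range(12):
    s = s + alpha * (U_bar - U)   # (3b): low unemployment raises the wage share
    U = U + beta * (s - s_bar)    # (7b): a high wage share raises unemployment
    print(f"t={t:2d}  wage share={s:.3f}  unemployment={U:.3f}")
# The two variables chase each other in a closed loop: distribution
# leads, employment follows, and neither settles down.
```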

Let’s move on.

Closure 3: Real wage fixed.

I’ll call this the “Classical II” closure, since it seems to me that the assumption of a fixed “subsistence” wage is used by Ricardo and Malthus and, at times at least, by Marx.

(3c) w – p = 0

(4c) y = y*

(5c) p = p*

Equation 3c says that real wages are constant: the change in nominal wages is just equal to the change in the price level. [9] Here again the change in prices and in labor productivity are given from outside. It follows that

(6c) s = -y

Since the real wage is fixed, increases in labor productivity reduce the wage share one for one. Similarly, falls in labor productivity will raise the wage share.

This latter, incidentally, is a feature of the simple Ricardian story about the declining rate of profit. As lower quality land is brought into use, the average productivity of labor falls, but the subsistence wage is unchanged. So the share of output going to labor, as well as to landlords’ rent, rises as the profit share goes to zero.

Closure 4:

(3d) w = f(U); f’ < 0

(4d) y = y*

(5d) p = p*

This is the same as the second one except that now it is the nominal wage, rather than the wage share, that is set by the bargaining process. We could think of this as the naive model: nominal wages, inflation and productivity are all just whatever they are, without any regular relationships between them. (We could even go one step more naive and just set wages exogenously too.) Real wages then are determined as a residual by nominal wage growth and inflation, and the wage share is determined as a residual by real wage growth and productivity growth. Now, it’s clear that this can’t apply when we are talking about very large changes in prices — real wages can only be eroded by inflation so far. But it’s equally clear that, for sufficiently small short-run changes, the naive closure may be the best we can do. The fact that real wages are not entirely a passive residual does not mean they are entirely fixed; presumably there is some domain over which nominal wages are relatively fixed and their “real” purchasing power depends on what happens to the price level.

Closure 5:

One more.

(3e) w = f(U) + a pE; f’ < 0; 0 < a < 1

(4e) y = b (w – p); 0 < b < 1

(5e) p = c (w – y); 0 < c < 1

This is more generic. It allows for an increase in nominal wages to be distributed in some proportion between higher inflation, an increase in the wage share, and faster productivity growth. The last possibility is some version of Verdoorn’s law. The idea that scarce labor, or equivalently rising wages, will lead to faster growth in labor productivity is perfectly admissible in an orthodox framework. But somehow it doesn’t seem to make it into policy discussions.

In other words, lower unemployment (or a stronger bargaining position for workers more generally) will lead to an increase in the nominal wage. This will in turn increase the wage share, to the extent that it does not induce higher inflation and/or faster productivity growth:

(6e) s = (1 – b – c) w

This closure includes the first two as special cases: closure 1 if we set a = 0, b = 0, and c = 1, closure 2 if we set a = 1, b = 0, and c < 1. It’s worth framing the more general case to think clearly about the intermediate possibilities. In Shaikh’s version of the classical view, tighter labor markets are passed through entirely to a higher labor share. In the conventional view, they are passed through entirely to higher inflation. There is no reason in principle why it can’t be some to each, and some to higher productivity as well. But somehow this general case doesn’t seem to get discussed.

Here is a typical example of the excluded middle in the conventional wisdom: “economic theory suggests that increases in labor costs in excess of productivity gains should put upward pressure on prices; hence, many models assume that prices are determined as a markup over unit labor costs.” Notice the leap from the claim that higher wages put some pressure on prices, to the claim that wage increases are fully passed through to higher prices. Or in terms of this last framework: theory suggests that c should be greater than zero, so let’s assume c is equal to one. One important consequence is to implicitly exclude the possibility of a change in the wage share.
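To explore the intermediate cases, here is a small sketch (mine, with arbitrary parameter values) that takes a given increase in nominal wages and solves 4e and 5e simultaneously for the split between inflation, productivity growth, and the wage share. Solving exactly gives s = (1 – b)(1 – c)/(1 – bc) w, which reduces to 6e when the cross-term bc is negligible.

```python
import numpy as np

def wage_push_split(b, c, w=1.0):
    """Split a nominal wage increase w between inflation p, productivity
    growth y, and the wage share s, by solving (4e) y = b*(w - p) and
    (5e) p = c*(w - y) as a pair of simultaneous equations."""
    A = np.array([[1.0, b],     # y + b*p = b*w   (4e rearranged)
                  [c, 1.0]])    # c*y + p = c*w   (5e rearranged)
    y, p = np.linalg.solve(A, np.array([b * w, c * w]))
    s = w - p - y               # identity (1): s = w - p - y
    return p, y, s

# Conventional closure (c = 1), classical closure (c = 0), and an
# arbitrary intermediate case.
for b, c in [(0.0, 1.0), (0.0, 0.0), (0.2, 0.4)]:
    p, y, s = wage_push_split(b, c)
    print(f"b={b}, c={c}: inflation={p:.2f}, productivity={y:.2f}, wage share={s:.2f}")
```

The first case reproduces closure 1, the second closure 2; everything in between is the excluded middle.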

*

So what do we get from this?

First, the identity itself. On one level it is obvious. But too many policy discussions — and even scholarship — talk about various forms of the Phillips curve without taking account of the logical relationship between wages, inflation, productivity and factor shares. This is not unique to this case, of course. It seems to me that scrupulous attention to accounting relationships, and to logical consistency in general, is one of the few unambiguous contributions economists make to the larger conversation with historians and other social scientists. [10]

For example: I had some back and forth with Phil Pilkington in comments and on twitter about the Jacobin piece. He made some valid points. But at one point he wrote: “Wages>inflation + productivity = trouble!” Now, wages > inflation + productivity growth just means an increasing labor share. It’s two ways of saying the same thing. But I’m pretty sure that Phil did not intend to write that an increase in the labor share always means trouble. And if he did seriously mean that, I doubt one reader in a hundred would understand it from what he wrote.

More consequentially, austerity and liberalization are often justified by the need to prevent “real unit labor costs” from rising. What’s not obvious is that “real unit labor costs” is simply another name for the labor share: by definition, the change in real unit labor costs is just the change in nominal wages less the sum of inflation and productivity growth. Felipe and Kumar make exactly this point in their critique of the use of unit labor costs as a measure of competitiveness in Europe: “unit labor costs calculated with aggregate data are no more than the economy’s labor share in total output multiplied by the price level.” As they note, one could just as well compute “unit capital costs,” whose movements would be just the opposite. But no one ever does; instead they pretend that a measure of distribution is a measure of technical efficiency.

Second, the various closures. To me the question of which behavioral relations we combine the identity with — that is, which closure we use — is not about which one is true, or best in any absolute sense. It’s about the various domains in which each applies. Probably there are periods, places, timeframes or policy contexts in which each of the five closures gives the best description of the relevant behavioral links. Economists, in my experience, spend more time working out the internal properties of formal systems than exploring rigorously where those systems apply. But a model is only useful insofar as you know where it applies, and where it doesn’t. Or as Keynes put it in a quote I’m fond of, the purpose of economics is “to provide ourselves with an organised and orderly method of thinking out particular problems” (my emphasis); it is “a way of thinking … in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Or in the words of Trygve Haavelmo, as quoted by Leijonhufvud:

There is no reason why the form of a realistic model (the form of its equations) should be the same under all values of its variables. We must face the fact that the form of the model may have to be regarded as a function of the values of the variables involved. This will usually be the case if the values of some of the variables affect the basic conditions of choice under which the behavior equations in the model are derived.

I might even go a step further. It’s not just that to use a model we need to think carefully about the domain over which it applies. It may even be that the boundaries of its domain are the most interesting thing about it. As economists, we’re used to thinking of models “from the inside” — taking the formal relationships as given and then asking what the world looks like when those relationships hold. But we should also think about them “from the outside,” because the boundaries within which those relationships hold are also part of the reality we want to understand. [11] You might think about it like laying a flat map over some curved surface. Within a given region, the curvature won’t matter, the flat map will work fine. But at some point, the divergence between trajectories in our hypothetical plane and on the actual surface will get too large to ignore. So we will want to have a variety of maps available, each of which minimizes distortions in the particular area we are traveling through — that’s Keynes’ and Haavelmo’s point. But even more than that, the points at which the map becomes unusable are precisely how we learn about the curvature of the underlying territory.

Some good examples of this way of thinking are found in the work of Lance Taylor, which often situates a variety of model closures in various particular historical contexts. I think this kind of thinking was also very common in an older generation of development economists. A central theme of Arthur Lewis’ work, for example, could be thought of in terms of poor-country labor markets that look like what I’ve called Closure 3 and rich-country labor markets that look like Closure 5. And of course, what’s most interesting is not the behavior of these two systems in isolation, but the way the boundary between them gets established and maintained.

To put it another way: Dialectics, which is to say science, is a process of moving between the concrete and the abstract — from specific cases to general rules, and from general rules to specific cases. As economists, we are used to grounding concrete in the abstract — to treating things that happen at particular times and places as instances of a universal law. The statement of the law is the goal, the stopping point. But we can equally well ground the abstract in the concrete — treat a general rule as a phenomenon of a particular time and place.


[1] In graduate school you then learn to forget about the existence of businesses and investment, and instead explain the effect of interest rates on current spending by a change in the optimal intertemporal path of consumption by a representative household, as described by an Euler equation. This device keeps academic macroeconomics safely quarantined from contact with discussion of real economies.

[2] In the US, Okun’s law looks something like ΔU = 0.5(2.5 – g), where ΔU is the change in the unemployment rate and g is inflation-adjusted growth in GDP. These parameters vary across countries but seem to be quite stable over time. In my opinion this is one of the more interesting empirical regularities in macroeconomics. I’ve blogged about it a bit in the past and perhaps will write more in the future.
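To put numbers on it: plugging g = 4.5 into this rule gives ΔU = 0.5(2.5 – 4.5) = –1, so a year of 4.5 percent real growth lowers the unemployment rate by about one point, while a year of zero growth raises it by 1.25 points.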

[3] To see why this must be true, write L for total employment, Z for the level of nominal GDP, Y for real output per worker, W for the average wage, and P for the price level. The labor share S is by definition equal to total wages divided by GDP:

S = WL / Z

Real output per worker is given by

Y = (Z/P) / L

Now combine the equations and we get W = P Y S. This is in levels, not changes. But recall that small percentage changes can be approximated by log differences. So if we take the log of both sides and then the change from one period to the next, writing the percentage change of each variable in lowercase, we get w = y + p + s. For the kinds of changes we observe in these variables, the approximation will be very close.
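As a quick numeric check (the levels below are made up for illustration), the identity holds exactly in log differences, and to a close approximation in ordinary percentage changes:

```python
import numpy as np

# Two periods of made-up levels, with W = P * Y * S by construction.
P0, Y0, S0 = 1.00, 50.0, 0.600
P1, Y1, S1 = 1.02, 51.0, 0.610   # 2% inflation, 2% productivity growth,
                                 # and a 1.7% rise in the labor share
W0, W1 = P0 * Y0 * S0, P1 * Y1 * S1

# In log differences the identity is exact.
w = np.log(W1 / W0)
p, y, s = np.log(P1 / P0), np.log(Y1 / Y0), np.log(S1 / S0)
print(f"log diffs:   w = {w:.5f}   p + y + s = {p + y + s:.5f}")

# In ordinary percentage changes it holds approximately.
w_pct = W1 / W0 - 1
sum_pct = (P1 / P0 - 1) + (Y1 / Y0 - 1) + (S1 / S0 - 1)
print(f"pct changes: w = {w_pct:.5f}   p + y + s = {sum_pct:.5f}")
```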

[4] I won’t keep putting “real” in quotes. But it’s important not to uncritically accept the dominant view that nominal quantities like wages are simply reflections of underlying non-monetary magnitudes. In fact the use of “real” in this way is deeply ideological.

[5] A discovery that seems to get made over and over again is that, since an identity is true by definition, nothing needs to adjust to maintain its equality. But it certainly does not follow, as people sometimes claim, that this means you cannot use accounting identities to reason about macroeconomic outcomes. The point is that we are always using the identities along with some other — implicit or explicit — claims about the choices made by economic units.

[6] Note that it’s not necessary to use a labor supply curve here, or to make any assumption about the relationship between wages and marginal product.

[7] Often confused with Milton Friedman’s natural rate of unemployment. But in fact the concepts are completely different. In Friedman’s version, causality runs the other way, from the inflation rate to the unemployment rate. When realized inflation is different from expected inflation, in Friedman’s story, workers are deceived about the real wage they are being offered and so supply the “wrong” amount of labor.

[8] Why a permanently rising price level is inconsequential but a permanently rising inflation rate is catastrophic, is never explained. Why are real outcomes invariant to the first derivative of the price level, but not to the second derivative? We’re never told — it’s an article of faith that money is neutral and super-neutral but not super-super-neutral. And even if one accepts this, it’s not clear why we should pick a target of 2%, or any specific number. It would seem more natural to think inflation should follow a random walk, with the central bank holding it at its current level, whatever that is.

[9] We could instead use w – p = r*, with an exogenously given rate of increase in real wages. The logic would be the same. But it seems simpler and more true to the classics to use the form in 3c. And there do seem to be domains over which constant real wages are a reasonable assumption.

[10] I was just starting grad school when I read Robert Brenner’s long article on the global economy, and one of the things that jumped out at me was that he discussed the markup and the wage share as if they were two independent variables, when of course they are just two ways of describing the same thing. Using s still as the wage share, and m as the average markup of prices over wages, s = 1 / (1 + m). This is true by definition (unless there are shares other than wages or profits, but no others figure in Brenner’s analysis). The markup may reflect the degree of monopoly power in product markets while the labor share may reflect bargaining power within the firm, but these are two different explanations of the same concrete phenomenon. I like to think that this is a mistake an economist wouldn’t make.
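(For a sense of the magnitudes: a markup of m = 0.5 implies a wage share of s = 1/1.5 ≈ 0.67, while a rise in the markup to m = 1 would push the wage share down to 0.5.)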

[11] The Shaikh piece mentioned above is very good. I should add, though, the last time I spoke to Anwar, he criticized me for “talking so much about the things that have changed, rather than the things that have not” — that is, for focusing so much on capitalism’s concrete history rather than its abstract logic. This is certainly a difference between Shaikh’s brand of Marxism and whatever it is I do. But I’d like to think that both approaches are called for.

 

EDIT: As several people pointed out, some of the equations were referred to by the wrong numbers. Also, Equation 5a and 5e had inflation-expectation terms in them that didn’t belong. Fixed.

EDIT 2: I referred to an older generation of development economists, but I think this awareness that the territory requires various different maps is still more common in development than in most other fields. I haven’t read Dani Rodrik’s new book, but based on reviews it sounds like it puts forward a pretty similar view of economics methodology.

What Drives Trade Flows? Mostly Demand, Not Prices

I just participated (for the last time, thank god) in the UMass-New School economics graduate student conference, which left me feeling pretty good about the next generation of heterodox economists. [1] A bunch of good stuff was presented, but for my money, the best and most important work was Enno Schröder’s: “Aggregate Demand (Not Competitiveness) Caused the German Trade Surplus and the U.S. Deficit.” Unfortunately, the paper is not yet online — I’ll link to it the moment it is — but here are his slides.

The starting point of his analysis is that, as a matter of accounting, we can write the ratio of a country’s exports to imports as:

X/M = (m*/m) (D*/D)

where X and M are export and import volumes, m* is the fraction of foreign expenditure spent on the home country’s goods, m is the fraction of home expenditure spent on foreign goods, and D* and D are total foreign and home expenditure.

This is true by definition. But the advantage of thinking of trade flows this way is that it allows us to separate the changes in trade attributable to expenditure switching (including, of course, the effect of relative price changes) and the changes attributable to different growth rates of expenditure. In other words, it lets us distinguish the changes in trade flows that are due to changes in how each dollar is spent in a given country, from changes in trade flows that are due to changes in the distribution of dollars across countries.

(These look similar to price and income elasticities, but they are not the same. Elasticities are estimated, while this is an accounting decomposition. And changes in m and m*, in this framework, capture all factors that lead to a shift in the import share of expenditure, not just relative prices.)

The heart of the paper is an exercise in historical accounting, decomposing changes in trade ratios into m*/m and D*/D. We can think of these as counterfactual exercises: How would trade look if growth rates were all equal, and each country’s distribution of spending across countries evolved as it did historically; and how would trade look if each country had had a constant distribution of spending across countries, and growth rates were what they were historically? The second question is roughly equivalent to: How much of the change in trade flows could we predict if we knew expenditure growth rates for each country and nothing else?
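Here is a sketch of the decomposition itself, my own minimal implementation with made-up numbers rather than Schröder’s code or data. Holding the expenditure shares at their sample averages isolates the demand effect; holding relative expenditure at its initial value isolates the competitiveness effect.

```python
import numpy as np

# Made-up illustrative series: six years of expenditure shares and
# total expenditures for a home country and the rest of the world.
m      = np.array([0.30, 0.30, 0.31, 0.31, 0.32, 0.32])  # home spending on foreign goods
m_star = np.array([0.20, 0.20, 0.20, 0.19, 0.19, 0.19])  # foreign spending on home goods
D      = np.array([100., 102., 104., 106., 108., 110.])  # home expenditure
D_star = np.array([400., 416., 432., 450., 468., 486.])  # foreign expenditure, growing faster

actual = (m_star / m) * (D_star / D)

# Counterfactual 1: freeze expenditure switching at its sample average,
# so only relative demand growth moves the trade ratio.
demand_only = (m_star.mean() / m.mean()) * (D_star / D)

# Counterfactual 2: let all countries grow in lockstep, so only
# "competitiveness" (the m's) moves the trade ratio.
competitiveness_only = (m_star / m) * (D_star[0] / D[0])

for t in range(len(m)):
    print(f"t={t}:  X/M actual={actual[t]:.2f}  demand only={demand_only[t]:.2f}  "
          f"competitiveness only={competitiveness_only[t]:.2f}")
```

In this made-up example the home country’s surplus is sustained entirely by its trade partners’ faster expenditure growth, while on the competitiveness side it is actually losing ground — the qualitative pattern Enno finds for Germany.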

The key results are in the figure below. Look particularly at Germany, in the middle right of the first panel:

The dotted line is the actual ratio of exports to imports. Since Germany has recently had a trade surplus, the line lies above one — over the past decade, German exports have exceeded German imports by about 10 percent. The dark black line is the counterfactual ratio if the division of each country’s expenditures among various countries’ goods had remained fixed at their average level over the whole period. When the dark black line is falling, that indicates a country growing more rapidly than the countries it exports to; with the share of expenditure on imports fixed, higher income means more imports and a trade balance moving toward deficit. Similarly, when the black line is rising, that indicates a country’s total expenditure growing more slowly than expenditure in its export markets, as was the case for Germany from the early 1990s until 2008. The light gray line is the other counterfactual — the path trade would have followed if all countries had grown at an equal rate, so that trade depended only on changes in competitiveness. When the dotted line and the heavy black line move more or less together, we can say that shifts in trade are mostly a matter of aggregate demand; when the dotted line and the gray line move together, mostly a matter of competitiveness (which, again, includes all factors that cause people to shift expenditure between different countries’ goods, including but not limited to exchange rates.)

The point here is that if you only knew the growth of income in Germany and its trade partners, and nothing at all about German wages or productivity, you could fully explain the German trade surplus of the past decade. In fact, based on income growth alone you would predict an even larger surplus; the fraction of the world’s dollars falling on German goods actually fell. Or as Enno puts it: During the period of the German export boom, Germany became less, not more, competitive. [2] The cases of Spain, Portugal and Greece (tho not Italy) are symmetrical: Despite the supposed loss of price competitiveness they experienced under the euro, the share of expenditure falling on these countries’ goods and services actually rose during the periods when their trade balances worsened; their growing deficits were entirely a product of income growth more rapid than their trade partners’.

These are tremendously important results. In my opinion, they are fatal to the claim (advanced by Krugman among others) that the root of the European crisis is the inability to adjust exchange rates, and that a devaluation in the periphery would be sufficient to restore balanced trade. (It is important to remember, in this context, that southern Europe was running trade deficits for many years before the establishment of the euro.) They also imply a strong criticism of free trade. If trade flows depend mostly or entirely on relative income, and if large trade imbalances are unsustainable for most countries, then relative growth rates are going to be constrained by import shares, which means that most countries are going to grow below their potential. (This is similar to the old balance-of-payments constrained growth argument.) But the key point, as Enno stresses, is that both the “left” argument about low German wage growth and the “right” argument about high German productivity growth are irrelevant to the historical development of German export surpluses. Slower income growth in Germany than its trade partners explains the whole story.

I really like the substantive argument of this paper. But I love the methodology. There is an econometrics section, which is interesting (among other things, he finds that the Marshall-Lerner condition is not satisfied for Germany, another blow to the relative-prices story of the euro crisis.) But the main conclusions of the paper don’t depend in any way on it. In fact, the thing can be seen as an example of an alternative methodology to econometrics for empirical economics: historical accounting, or decomposition analysis. This is the same basic approach that Arjun Jayadev and I take in our paper on household debt, and which has long been used to analyze the historical evolution of public debt. Another interesting application of this kind of historical accounting: the decomposition of changes in the profit rate into the effects of the profit share, the utilization rate, and the technologically-determined capital-output ratio, an approach pioneered by Thomas Weisskopf, and developed by others, including Ed Wolff, Erdogan Bakir, and my teacher David Kotz.

People often say that these accounting exercises can’t be used to establish claims about causality. And strictly speaking this is true, though they certainly can be used to reject certain causal stories. But that’s true of econometrics too. It’s worth taking a step back and remembering that no matter how fancy our econometrics, all we are ever doing with those techniques is describing the characteristics of a matrix. We have the observations we have, and all we can do is try to summarize the relationships between them in some useful way. When we make causal claims using econometrics, it’s by treating the matrix as if it were drawn from some stable underlying probability distribution function (pdf). One of the great things about these decomposition exercises — or about other empirical techniques, like principal component analysis — is that they limit themselves to describing the actual data. In many cases — lots of labor economics, for instance — the fiction of a stable underlying pdf is perfectly reasonable. But in other cases — including, I think, almost all interesting questions in macroeconomics — the conventional econometrics approach is a bit like asking, If a whale were the top of an island, what would the underlying geology look like? It’s certainly possible to come up with an answer to that question. But it is probably not the simplest way of describing the shape of the whale.

[1] A perennial question at these things is whether we should continue identifying ourselves as “heterodox,” or just say we’re doing economics. Personally, I’ll be happy to give up the distinct heterodox identity just as soon as economists are willing to give up their distinct identity and dissolve into the larger population of social scientists, or of guys with opinions.

[2] The results for the US are symmetrical with those for Germany: the growing US trade deficit since 1990 is fully explained by more rapid US income growth relative to its trade partners. But it’s worth noting that China is not: Knowing only China’s relative income growth, which has been of course very high, you would predict that China would be moving toward trade deficits, when in fact it has been moving toward surplus. This is consistent with a story that explains China’s trade surpluses by an undervalued currency, tho it is consistent with other stories as well.