A Quick Point on Models

According to Keynes the purpose of economics is “to provide ourselves with an organised and orderly method of thinking out particular problems”; it is “a way of thinking … in terms of models joined to the art of choosing models which are relevant to the contemporary world.” (Quoted here.)

I want to amplify on that just a bit. The test of a good model is not whether it corresponds to the true underlying structure of the world, but whether it usefully captures some of the regularities in the concrete phenomena we observe. There are lots of different regularities, more or less bounded in time, space and other dimensions, so we are going to need lots of different models, depending on the questions we are asking and the setting we are asking them in. Thus the need for the “art of choosing”.

I don’t think this point is controversial in the abstract. But people often lose sight of it. Obvious case: Piketty and “capital”. A lot of the debate between Piketty and his critics on the left has focused on whether there really is, in some sense, a physical quantity of capital, or not. I don’t think we need to have this argument.

We observe “capital” as a set of money claims, whose aggregate value varies in relation to other observable monetary aggregates (like income) over time and across space. There is a component of that variation that corresponds to the behavior of a physical stock — increasing based on identifiable inflows (investment) and decreasing based on identifiable outflows (depreciation). Insofar as we are interested in that component of the observed variation, we can describe it using models of capital as a physical stock. The remaining components (the “residual” from the point of view of a model of physical K) will require a different set of models or stories. So the question is not, is there such a thing as a physical capital stock? It’s not even, is it in general useful to think about capital as a physical stock? The question is, how much of the particular variation we are interested in is accounted for by the component corresponding to the evolution of a physical stock? And the answer will depend on which variation we are interested in.
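
Just to fix ideas, here is a minimal sketch of that decomposition, with made-up numbers (the series, the depreciation rate, and the variable names are all hypothetical): each period's change in observed capital is split into the part a perpetual-inventory physical stock would imply (investment less depreciation) and a residual that captures revaluation and everything else.

```python
# A toy decomposition of changes in observed "capital" into a physical-stock
# component (perpetual inventory: investment minus depreciation) and a
# residual (revaluation etc.). All numbers are invented, purely illustrative.

K = [300.0, 330.0, 310.0]   # observed market value of capital claims
Y = [100.0, 103.0, 101.0]   # income
I = [22.0, 21.0]            # gross investment during each period
delta = 0.05                # assumed depreciation rate

for t in range(1, len(K)):
    physical = I[t - 1] - delta * K[t - 1]      # change implied by a physical stock
    residual = (K[t] - K[t - 1]) - physical     # what the stock-flow model leaves over
    print(f"period {t}: dK = {K[t] - K[t - 1]:+.1f}, "
          f"physical = {physical:+.1f}, residual = {residual:+.1f}, "
          f"K/Y = {K[t] / Y[t]:.2f}")
```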

For example, Piketty could say “It’s true that my model, which treats K as a physical stock, does not explain much of the historical variation in capital-output ratios at decadal frequencies, like the fall and rise over the course of the 20th century. But I believe it does explain very long-frequency variation, and in particular captures important long-run possibilities for the future.” (I think he has in fact said something like this, though I can’t find the quote at the moment.) You don’t have to agree with him — you could dispute that his model is a good fit for even the longest-frequency historical variation, or you could argue that the shorter frequency variation is more interesting (and is what his book often seems to be about). But it would be pointless to criticize him on the grounds that there isn’t “really” such a thing as a physical capital stock, or that there is no consistent way in principle to measure it. That, to me, would show a basic misunderstanding of what models are.

An example of good scientific practice along these lines is biologists’ habit of giving genes names for what happens when gross mutations are induced in them experimentally. Names like eyeless or shaggy or buttonhead: the fly lacks eyes, grows extra hair, or has a head without segments if the gene is removed. It might seem weird to describe genes in terms of what goes wrong when they are removed, as opposed to what they do normally, but I think this practice shows good judgement about what we do and don’t know. In particular, it avoids any claim about what the gene is “for.” There are many, many relationships between a given locus in the genome and the phenotype, and no sense in which any of them is more or less important in an absolute sense. Calling it the “eye gene” would obscure that, making it sound like this is the relationship that exists out in the world, when for all we know the variation in eye development in wild populations is driven by variation at entirely different loci. Calling it eyeless makes it clear that the name refers to what you observe in a particular experimental context.

EDIT: I hate discussions of methodology. I should not have written this post. (I only did because I liked the gene-naming analogy.) That said, if you, unlike me, enjoy this sort of thing, Tom Hickey wrote a long and thoughtful response to it. He mentions, among others, Tony Lawson, whom I would certainly want to read more of if I were going to write about this stuff.

New Keynesians Don’t Believe Their Models

Here’s the thing about saltwater, New Keynesian economists: They don’t believe their own theory.

Via John Cochrane, here is a great example. In the NBER Macroeconomics Annual a couple years ago, Gauti Eggertsson laid out the canonical New Keynesian case for the effectiveness of fiscal policy when interest rates are at the Zero Lower Bound. In the model Eggertsson describes there — the model that is supposed to provide the intellectual underpinnings for fiscal stimulus — the multiplier on government spending at the ZLB is indeed much larger than in normal conditions, 2.3 rather than 0.48. But the same model says that at the ZLB, cuts in taxes on labor are contractionary, with a multiplier of -1. Every dollar of “stimulus” from the Making Work Pay tax credit, in other words, actually reduced GDP by a dollar. Or as Eggertsson puts it, “Cutting taxes on labor … is contractionary under the circumstances the United States is experiencing today.”

Now, obviously there are reasons why one might believe this. For instance, maybe lower payroll taxes just allow employers to reduce wages by the same amount, and then in response to their lower costs they reduce prices, which is deflationary. There’s nothing wrong with that story in principle. No, the point isn’t that the New Keynesian claim that payroll tax cuts reduce demand is wrong — though I think that it is. The point is that nobody actually believes it.

In the debates over the stimulus bill back at the beginning of 2009, everyone agreed that payroll tax cuts were stimulus just as much as spending increases. The CBO certainly did. There were plenty of “New Keynesian” economists involved in that debate, and while they may have said that tax cuts would boost demand less than direct government spending, I’m pretty sure that not one of them said that payroll tax cuts would actually reduce demand. And when the payroll tax cuts were allowed to expire at the end of 2012, did anyone make the case that this was actually expansionary? Of course not. The conventional wisdom was that the payroll tax cuts had a large, positive effect on demand, with a multiplier around positive 1. Regardless of good New Keynesian theory.

As a matter of fact, even Eggertsson doesn’t seem to believe that raising taxes on labor will boost demand, whether or not it’s what the math says. The “natural result” of his model, he admits, is that any increase in government spending should be financed by higher taxes. But:

There may, however, be important reasons outside the model that suggest that an increase in labor and capital taxes may be unwise and/or impractical. For these reasons I am not ready to suggest, based on this analysis alone, that raising capital and labor taxes is a good idea at zero interest rates. Indeed, my conjecture is that a reasonable case can be made for a temporary budget deficit to finance a stimulus plan… 

Well, yeah. I think most of us can agree that raising payroll taxes in a recession is probably not the best idea. But at this point, what are we even doing here? If you’re going to defer to arguments “outside the model” whenever the model says something inconvenient or surprising, why bother with the model at all?

EDIT: I put this post up a few days ago, then took it down because it seemed a little thin and I thought I would add another example or two of the same kind of thing. But I’m feeling now that more criticism of mainstream economics is not a good use of my time. If that’s what you want, you should check out this great post by Noah Smith. Noah is so effective here for the same reason that he’s sometimes so irritatingly wrong — he’s writing from inside the mainstream. The truth is, to properly criticize these models, you have to have a deep knowledge of them, which he has and I do not.

Arjun and I have a piece in an upcoming Economic and Political Weekly on how liberal, “saltwater” economists share the blame for creating an intellectual environment favorable to austerian arguments, however much they oppose them in particular cases. I feel pretty good about it — I’ll link here when it comes out — but I think for me, that’s enough criticism of modern macro. In general, the problem with radical economists is that they spend too much time on negative criticism of the economics profession, and not enough making a positive case for an alternative. This criticism applies to me too. My comparative advantage in econblogging is presenting interesting Keynesian and Marxist work.

One thing one learns working at a place like Working Families is that the hard thing is not convincing people that shit is fucked up and bullshit; the hard thing is convincing them there’s anything they can do about it. Same deal here: The real challenge isn’t showing the other guys are wrong, it’s showing that we have something better.

What Drives Trade Flows? Mostly Demand, Not Prices

I just participated (for the last time, thank god) in the UMass-New School economics graduate student conference, which left me feeling pretty good about the next generation of heterodox economists. [1] A bunch of good stuff was presented, but for my money, the best and most important work was Enno Schröder’s: “Aggregate Demand (Not Competitiveness) Caused the German Trade Surplus and the U.S. Deficit.” Unfortunately, the paper is not yet online — I’ll link to it the moment it is — but here are his slides.

The starting point of his analysis is that, as a matter of accounting, we can write the ratio of a country’s exports to imports as:

X/M = (m*/m) (D*/D)

where X and M are export and import volumes, m* is the fraction of foreign expenditure spent on the home country’s goods, m is the fraction of the home expenditure spent on foreign goods, and D* and D are total foreign and home expenditure.

This is true by definition. But the advantage of thinking of trade flows this way is that it allows us to separate the changes in trade attributable to expenditure switching (including, of course, the effect of relative price changes) from the changes attributable to different growth rates of expenditure. In other words, it lets us distinguish the changes in trade flows that are due to changes in how each dollar is spent in a given country from changes in trade flows that are due to changes in the distribution of dollars across countries.

(These look similar to price and income elasticities, but they are not the same. Elasticities are estimated, while this is an accounting decomposition. And changes in m and m*, in this framework, capture all factors that lead to a shift in the import share of expenditure, not just relative prices.)

The heart of the paper is an exercise in historical accounting, decomposing changes in trade ratios into m*/m and D*/D. We can think of these as counterfactual exercises: How would trade look if growth rates were all equal, and each country’s distribution of spending across countries evolved as it did historically; and how would trade look if each country had had a constant distribution of spending across countries, and growth rates were what they were historically? The second question is roughly equivalent to: How much of the change in trade flows could we predict if we knew expenditure growth rates for each country and nothing else?
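
To show the mechanics, here is a rough sketch of the decomposition with invented numbers (the shares and expenditure series below are hypothetical, not Enno's data). The "demand-only" path holds the import shares at their period averages, and the "competitiveness-only" path holds relative expenditure fixed at its starting value:

```python
# Decompose the export-import ratio X/M = (m*/m) * (D*/D) into a
# demand counterfactual (shares held at their period averages) and a
# competitiveness counterfactual (relative expenditure held fixed).
# All series are invented, purely to illustrate the mechanics.

m_star = [0.20, 0.21, 0.22, 0.22]      # share of foreign expenditure on home goods
m      = [0.25, 0.24, 0.24, 0.23]      # share of home expenditure on foreign goods
D_star = [100.0, 104.0, 109.0, 115.0]  # total foreign expenditure
D      = [100.0, 102.0, 103.0, 104.0]  # total home expenditure

avg_share_ratio = sum(ms / mm for ms, mm in zip(m_star, m)) / len(m)
base_demand_ratio = D_star[0] / D[0]

for t in range(len(m)):
    actual = (m_star[t] / m[t]) * (D_star[t] / D[t])
    demand_only = avg_share_ratio * (D_star[t] / D[t])           # shares fixed
    competitiveness_only = (m_star[t] / m[t]) * base_demand_ratio  # relative demand fixed
    print(f"t={t}: X/M actual={actual:.3f}  demand-only={demand_only:.3f}  "
          f"competitiveness-only={competitiveness_only:.3f}")
```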

The key results are in the figure below. Look particularly at Germany, in the middle right of the first panel:

The dotted line is the actual ratio of exports to imports. Since Germany has recently had a trade surplus, the line lies above one — over the past decade, German exports have exceeded German imports by about 10 percent. The dark black line is the counterfactual ratio if the division of each country’s expenditures among various countries’ goods had remained fixed at their average level over the whole period. When the dark black line is falling, that indicates a country growing more rapidly than the countries it exports to; with the share of expenditure on imports fixed, higher income means more imports and a trade balance moving toward deficit. Similarly, when the black line is rising, that indicates a country’s total expenditure growing more slowly than expenditure in its export markets, as was the case for Germany from the early 1990s until 2008. The light gray line is the other counterfactual — the path trade would have followed if all countries had grown at an equal rate, so that trade depended only on changes in competitiveness. When the dotted line and the heavy black line move more or less together, we can say that shifts in trade are mostly a matter of aggregate demand; when the dotted line and the gray line move together, mostly a matter of competitiveness (which, again, includes all factors that cause people to shift expenditure between different countries’ goods, including but not limited to exchange rates).
The point here is that if you only knew the growth of income in Germany and its trade partners, and nothing at all about German wages or productivity, you could fully explain the German trade surplus of the past decade. In fact, based on income growth alone you would predict an even larger surplus; the fraction of the world’s dollars falling on German goods actually fell. Or as Enno puts it: During the period of the German export boom, Germany became less, not more, competitive. [2] The cases of Spain, Portugal and Greece (tho not Italy) are symmetrical: Despite the supposed loss of price competitiveness they experienced under the euro, the share of expenditure falling on these countries’ goods and services actually rose during the periods when their trade balances worsened; their growing deficits were entirely a product of income growth more rapid than their trade partners’.
These are tremendously important results. In my opinion, they are fatal to the claim (advanced by Krugman among others) that the root of the European crisis is the inability to adjust exchange rates, and that a devaluation in the periphery would be sufficient to restore balanced trade. (It is important to remember, in this context, that southern Europe was running trade deficits for many years before the establishment of the euro.) They also imply a strong criticism of free trade. If trade flows depend mostly or entirely on relative income, and if large trade imbalances are unsustainable for most countries, then relative growth rates are going to be constrained by import shares, which means that most countries are going to grow below their potential. (This is similar to the old balance-of-payments constrained growth argument.) But the key point, as Enno stresses, is that both the “left” argument about low German wage growth and the “right” argument about high German productivity growth are irrelevant to the historical development of German export surpluses. Slower income growth in Germany than its trade partners explains the whole story.
I really like the substantive argument of this paper. But I love the methodology. There is an econometrics section, which is interesting (among other things, he finds that the Marshall-Lerner condition is not satisfied for Germany, another blow to the relative-prices story of the euro crisis). But the main conclusions of the paper don’t depend in any way on it. In fact, the thing can be seen as an example of an alternative to econometrics as a methodology for empirical economics: historical accounting, or decomposition analysis. This is the same basic approach that Arjun Jayadev and I take in our paper on household debt, and which has long been used to analyze the historical evolution of public debt. Another interesting application of this kind of historical accounting: the decomposition of changes in the profit rate into the effects of the profit share, the utilization rate, and the technologically-determined capital-output ratio, an approach pioneered by Thomas Weisskopf, and developed by others, including Ed Wolff, Erdogan Bakir, and my teacher David Kotz.
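
To fix ideas, here is the shape of that profit-rate decomposition as a sketch, again with invented numbers: write the profit rate as the product of the profit share, the utilization rate, and the capacity-capital ratio, and the log change in the profit rate splits exactly into the three contributions.

```python
import math

# r = (profit share) * (utilization rate) * (capacity-to-capital ratio)
# Hypothetical start-of-period and end-of-period values, for illustration only.
start = {"profit share": 0.30, "utilization": 0.85, "capacity/capital": 0.60}
end   = {"profit share": 0.27, "utilization": 0.80, "capacity/capital": 0.62}

r0 = math.prod(start.values())
r1 = math.prod(end.values())
print(f"profit rate: {r0:.3f} -> {r1:.3f}, log change {math.log(r1 / r0):+.3f}")

# The three contributions sum exactly to the total log change in r.
for k in start:
    print(f"  contribution of {k}: {math.log(end[k] / start[k]):+.3f}")
```
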
People often say that these accounting exercises can’t be used to establish claims about causality. And strictly speaking this is true, though they certainly can be used to reject certain causal stories. But that’s true of econometrics too. It’s worth taking a step back and remembering that no matter how fancy our econometrics, all we are ever doing with those techniques is describing the characteristics of a matrix. We have the observations we have, and all we can do is try to summarize the relationships between them in some useful way. When we make causal claims using econometrics, it’s by treating the matrix as if it were drawn from some stable underlying probability distribution function (pdf). One of the great things about these decomposition exercises — or about other empirical techniques, like principal component analysis — is that they limit themselves to describing the actual data. In many cases — lots of labor economics, for instance — the fiction of a stable underlying pdf is perfectly reasonable. But in other cases — including, I think, almost all interesting questions in macroeconomics — the conventional econometrics approach is a bit like asking, If a whale were the top of an island, what would the underlying geology look like? It’s certainly possible to come up with an answer to that question. But it is probably not the simplest way of describing the shape of the whale.
[1] A perennial question at these things is whether we should continue identifying ourselves as “heterodox,” or just say we’re doing economics. Personally, I’ll be happy to give up the distinct heterodox identity just as soon as economists are willing to give up their distinct identity and dissolve into the larger population of social scientists, or of guys with opinions.
[2] The results for the US are symmetrical with those for Germany: the growing US trade deficit since 1990 is fully explained by more rapid US income growth relative to its trade partners. But it’s worth noting that China’s are not: Knowing only China’s relative income growth, which has been of course very high, you would predict that China would be moving toward trade deficits, when in fact it has been moving toward surplus. This is consistent with a story that explains China’s trade surpluses by an undervalued currency, tho it is consistent with other stories as well.

Only Ever Equilibrium?

Roger Farmer has a somewhat puzzling guest post up at Noah Smith’s place, arguing that economics is right to limit discussion to equilibrium:

An economic equilibrium, in the sense of Nash, is a situation where a group of decision makers takes a sequence of actions that is best, (in a well defined sense), on the assumption that every other decision maker in the group is acting in a similar fashion. In the context of a competitive economy with a large number of players, Nash equilibrium collapses to the notion of perfect competition.  The genius of the rational expectations revolution, largely engineered by Bob Lucas, was to apply that concept to macroeconomics by successfully persuading the profession to base our economic models on Chapter 7 of Debreu’s Theory of Value… In Debreu’s vision, a commodity is indexed by geographical location, by date and by the state of nature.  Once one applies Debreu’s vision of general equilibrium theory to macroeconomics, disequilibrium becomes a misleading and irrelevant distraction. 

The use of equilibrium theory in economics has received a bad name for two reasons. 

First, many equilibrium environments are ones where the two welfare theorems of competitive equilibrium theory are true, or at least approximately true. That makes it difficult to think of them as realistic models of a depression, or of a financial collapse… Second, those macroeconomic models that have been studied most intensively, classical and new-Keynesian models, are ones where there is a unique equilibrium. Equilibrium, in this sense, is a mapping from a narrowly defined set of fundamentals to an outcome, where  an outcome is an observed temporal sequence of unemployment rates, prices, interest rates etc. Models with a unique equilibrium do not leave room for non-fundamental variables to influence outcomes… 

Multiple equilibrium models do not share these shortcomings… [But] a model with multiple equilibria is an incomplete model. It must be closed by adding an equation that explains the behavior of an agent when placed in an indeterminate environment. In my own work I have argued that this equation is a new fundamental that I call a belief function.

(Personally, I might just call it a convention.)

Some recent authors have argued that rational expectations must be rejected and replaced by a rule that describes how agents use the past to forecast the future. That approach has similarities to the use of a belief function to determine outcomes, and when added to a multiple equilibrium model of the kind I favor, it will play the same role as the belief function. The important difference of multiple equilibrium models, from the conventional approach to equilibrium theory, is that the belief function can coexist with the assumption of rational expectations. Agents using a rule of this kind, will not find that their predictions are refuted by observation. …

So his point here is that in a model with multiple equilibria, there is no fundamental reason why the economy should occupy one rather than another. You need to specify agents’ expectations independently, and once you do, whatever outcome they expect, they’ll be correct. This allows for an economy to experience involuntary unemployment, for example, as expectations of high or low income lead to increased or curtailed expenditure, which results in expected income, whatever it was, being realized. This is the logic of the Samuelson Cross we teach in introductory macro. But it’s not, says Farmer, a disequilibrium in any meaningful way:

If by disequilibrium, I am permitted to mean that the economy may deviate for a long time, perhaps permanently, from a social optimum; then I have no trouble with championing the cause. But that would be an abuse of the term ‘disequilibrium’. If one takes the more normal use of disequilibrium to mean agents trading at non-Walrasian prices, … I do not think we should revisit that agenda. Just as in classical and new-Keynesian models where there is a unique equilibrium, the concept of disequilibrium in multiple equilibrium models is an irrelevant distraction.

I quote this at such length because it’s interesting. But also because, to me at least, it’s rather strange. There’s nothing wrong with the multiple equilibrium approach he’s describing here, which seems like a useful way of thinking about a number of important questions. But to rule out a priori any story in which people’s expectations are not fulfilled rules out a lot of other useful ways of thinking about important questions.

At INET in Berlin, the great Axel Leijonhufvud gave a talk where he described the defining feature of a crisis as the existence of inconsistent contractual commitments, so that some of them would have to be voided or violated.

What is the nature of our predicament? The web of contracts has developed serious inconsistencies. All the promises cannot possibly be fulfilled. Insisting that they should be fulfilled will cause a collapse of very large portions of the web.

But Farmer is telling us that economists not only don’t need to, but positively should not, attempt to understand crises in this sense. It’s an “irrelevant distraction” to consider the case where people entered into contracts with inconsistent expectations, which will not all be capable of being fulfilled. Farmer can hardly be unfamiliar with these ideas; after all he edited Leijonhufvud’s festschrift volume. So why is he being so dogmatic here?

I had an interesting conversation with Rajiv Sethi after Leijonhufvud’s talk; he said he thought that the inability to consider cases where plans were not realized was a fundamental theoretical shortcoming of mainstream macro models. I don’t disagree.

The thing about the equilibrium approach, as Farmer presents it, isn’t just that it rules out the possibility of people being systematically wrong; it rules out the possibility that they disagree. This strikes me as a strong and, importantly, empirically false proposition. (Keynes suggested that the effectiveness of monetary policy depends on the existence of both optimists and pessimists in financial markets.) In Farmer’s multiple equilibrium models, whatever outcome is set by convention, that’s the outcome expected by everyone. This is certainly reasonable in some cases, like the multiple equilibria of driving on the left or the right side of the road. Indeed, I suspect that the fact that people are irrationally confident in these kinds of conventions, and expect them to hold even more consistently than they do, is one of the main things that stabilizes these kinds of equilibria. But not everything in economics looks like that.

Here’s Figure 1 from my Fisher dynamics paper with Arjun Jayadev:

See those upward slopes way over on the left? Between 1929 and 1933, household debt relative to GDP rose by about 40 percent, and nonfinancial business debt relative to GDP nearly doubled. This is not, of course, because families and businesses were borrowing more in the Depression; on the contrary, they were paying down debt as fast as they could. But in the classic debt-deflation story, falling prices and output meant that incomes were falling even faster than debt, so leverage actually increased.
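
The arithmetic is simple enough to show with a toy example (these are my numbers, not the ones behind the figure): even with net repayment, the debt-income ratio rises if nominal income falls faster than debt.

```python
# Toy debt-deflation arithmetic: leverage can rise even while debt is paid down,
# if nominal income falls faster. Numbers are invented for illustration.
debt, income = 75.0, 100.0          # start: debt is 75 percent of income
debt_after   = debt - 5.0           # households repay, on net
income_after = income * (1 - 0.30)  # nominal income falls 30 percent

print(f"leverage before: {debt / income:.2f}")               # 0.75
print(f"leverage after:  {debt_after / income_after:.2f}")   # 1.00, higher despite repayment
```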

Roger Farmer, if I’m understanding him correctly, is saying that we must see this increase in debt-income ratios as an equilibrium phenomenon. He is saying that households and businesses taking out loans in 1928 must have known that their incomes were going to fall by half over the next five years, while their debt payments would stay unchanged, and chose to borrow anyway. He is saying not just that he believes that, but that as economists we should not consider any other view; we can rule out on methodological grounds the possibility that the economic collapse of the early 1930s caught people by surprise. To Irving Fisher, to Keynes, to almost anyone, to me, the rise in debt ratios in the early 1930s looks like a pure disequilibrium phenomenon; people were trading at false prices, signing nominal contracts whose real terms would end up being quite different from what they expected. It’s one of the most important stories in macroeconomics, but Farmer is saying that we should forbid ourselves from telling it. I don’t get it.

What am I missing here?

What Adjusts?

More teaching: We’re starting on the open economy now. Exchange rates, trade, international finance, the balance of payments. So one of the first things you have to explain is the definition of real and nominal exchange rates:

e_R = e_N P*/P 

where P and P* are the home and foreign price levels respectively, and the exchange rate e is defined as the price of foreign exchange (so an appreciation means that e falls and a depreciation means that it rises).

This is a useful definition to know — though of course it’s not as straightforward as it seems, since as we’ve discussed before there are various possible Ps, and once we are dealing with more than two countries we have to decide how to weight them, with different statistical agencies using different weightings. But set all that aside. What I want to talk about now is what a nice little example this equation offers of a structuralist perspective on the economy.

As given above, the equation is an accounting identity. It’s always exactly true, simply because that’s how we’ve defined the real exchange rate. As an accounting identity, it doesn’t in itself say anything about causation. But that doesn’t mean it’s vacuous. After all, we picked this particular definition because we think it is associated with some causal story. [1] The question is, what story? And that’s where things get interesting.

Since we have one equation, we should have one endogenous (or dependent) variable. But which one depends on the context.

If we are telling a story about exchange rate determination, we might think that the endogenous variable is e_N. If price levels are determined by the evolution of aggregate supply and demand (or the growth of the money stock, if you prefer) in each country, and if arbitrage in the goods market enforces something like Purchasing Power Parity (PPP), then the nominal exchange rate will have to adjust to keep the real price of a comparable basket of goods from diverging across countries.

On the other hand, we might not think PPP holds, at least in the short run, and we might think that the nominal exchange rate cannot adjust freely. (A fixed exchange rate is the obvious reason, but it’s also possible that the forex markets could push the nominal exchange rate to some arbitrary level.) In that case, it’s the real exchange rate that is endogenous, so we can see changes in the price of comparable goods in one country relative to another. This is implicitly the causal structure that people have in mind when they argue that China is pursuing a mercantilist strategy by pegging its nominal exchange rate, that devaluation would improve current account balances in the European periphery, or that the US could benefit from a lower (nominal) dollar. Here the causal story runs from e_N to e_R.

Alternatively, maybe the price level is endogenous. This is less intuitive, but there’s at least one important story where it’s the case. Anti-inflation programs in a number of countries, especially in Latin America, have made use of a fixed exchange rate as a “nominal anchor.” The idea here is that in a small open economy, especially where high inflation has led to widespread use of a foreign currency as the unit of account, the real exchange rate is effectively fixed. So if the nominal exchange rate can also be effectively fixed, then, like it or not, the domestic price level P will have to be fixed as well. Here’s Jeffrey Sachs on the Bolivian stabilization:

The sudden end of a 60,000 percent inflation seems almost miraculous… Thomas Sargent (1986) argued that such a dramatic change in price inflation results from a sudden and drastic change in the public’s expectations of future government policies… I suggest, in distinction to Sargent, that the Bolivian experience highlights a different and far simpler explanation of the very rapid end of hyperinflations. By August 1985,… prices were set either explicitly or implicitly in dollars, with transactions continuing to take place in peso notes, at prices determined by the dollar prices converted at the spot exchange rate. Therefore, by stabilizing the exchange rate, domestic inflation could be made to revert immediately to the US dollar inflation rate. 

So here the causal story runs from e_N to P.

In the three cases so far, we implicitly assume that P* is fixed, or at least exogenous. This makes sense; since a single country is much smaller than the world as a whole, we don’t expect anything it does to affect the world price level much. So the last logical possibility, P* as the endogenous variable, might seem to lack a corresponding real world story. But an individual country is not always so much smaller than the world as a whole, at least not if the individual country is the United States. It’s legitimate to ask whether a change in our price level or exchange rate might not show up as inflation or deflation elsewhere. This is particularly likely if we are focusing on a bilateral relationship. For instance, it might well be that a devaluation of the dollar relative to the renminbi would simply (or mostly) produce corresponding deflation [2] in China, leaving the real exchange rate unchanged.
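
One way to see the question concretely is to treat the identity as a little solver: declare one of the four variables endogenous and compute it from the other three. This is only a sketch, and the function and argument names are mine:

```python
# The identity e_R = e_N * P_star / P, treated as a one-equation "model":
# choose which variable is endogenous and solve for it from the other three.

def closure(e_R=None, e_N=None, P_star=None, P=None):
    """Solve e_R = e_N * P_star / P for whichever argument is left as None."""
    if e_N is None:
        return e_R * P / P_star       # PPP-style story: the nominal rate adjusts
    if e_R is None:
        return e_N * P_star / P       # pegged nominal rate: the real rate is what moves
    if P is None:
        return e_N * P_star / e_R     # "nominal anchor": domestic prices adjust
    if P_star is None:
        return e_R * P / e_N          # large-country case: foreign prices adjust
    raise ValueError("leave exactly one variable as None")

# Example: with the nominal rate and both price levels given, the real rate follows.
print(closure(e_N=1.0, P_star=110.0, P=100.0))   # -> 1.1
```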

Here, of course, we have only one equation. But if we interpret it causally, that is already a model, and the question of “what adjusts?” can be rephrased as the choice between alternative model closures. With multiple-equation models, that choice gets trickier — and it can be tricky enough with one equation.

In my opinion, sensitivity to alternative model closures is at the heart of structuralist economics, and is the great methodological innovation of Keynes. The specific application that defines the General Theory is the model closure that endogenizes aggregate income — the interest rate, which was supposed to equilibrate savings and investment, is pinned down by the supply and demand of liquidity, so total income is what adjusts — but there’s a more general methodological principle. “Thinking like an economist,” that awful phrase, should mean being able to choose among different stories — different model closures — based on the historical context and your own interests. It should mean being able to look at a complex social reality and judge which logical relationships represent the aspects of it you’re currently interested in, and which accounting identities are most relevant to the story you want to tell. Or as Keynes put it, economics should be thought of as

a branch of logic, a way of thinking … in terms of models, joined to the art of choosing models which are relevant to the contemporary world. … [The goal is] not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organised and orderly method of thinking out particular problems.

Much of mainstream macroeconomics assumes there is a “true” model of the world. Connected to this, there’s an insistence — shared even by a lot of heterodox macro — on regarding some variables as being strictly exogenous and others as strictly endogenous, so that in every story causality runs the same way. In the canonical story, tastes, technology and endowments (one can’t help hearing: by the Creator) are perfectly fixed, and everything else is perfectly adjustable. [3]

Better to follow Keynes, and think about models as more or less useful for clarifying the logic of particular stories.

EDIT: Of course not everyone who recognizes the methodological distinction I’m making here agrees that the eclecticism of structuralism is an advantage. Here is my teacher Peter Skott (with Ben Zipperer):

The ‘heterodox’ tradition in macroeconomics contains a wide range of models. Kaleckian models treat the utilization rate as an accommodating variable, both in the short and the long run. Goodwin’s celebrated formalization of Marx, by contrast, takes the utilization rate as fixed and looks at the interaction between employment and distribution. Distribution is also central to Kaldorian and Robinsonian theories which, like Goodwin, endogenize the profit share and take the utilization rate as structurally determined in the long run but, like the Kaleckians, view short-run variations in utilization as an intrinsic part of the cycle. The differences in these and other areas are important, and this diversity of views on core issues is no cause for celebration.

EDIT 2: Trygve Haavelmo, quoted by Leijonhufvud:

There is no reason why the form of a realistic model (the form of its equations) should be the same under all values of its variables. We must face the fact that the form of the model may have to be regarded as a function of the values of the variables involved. This will usually be the case if the values of some of the variables affect the basic conditions of choice under which the behavior equations in the model are derived.

That’s what I’m talking about. There is no “true” model of the economy. The behavioral relationships change depending on where we are in economic space.

Also, Bruce Wilder has a long and characteristically thoughtful comment below. I don’t agree with everything he says — it seems a little too hopeless about the possibility of useful formal analysis even in principle — but it’s very worth reading.

[1] “Accounting identities don’t tell causal stories” is a bit like “correlation doesn’t imply causation.” Both statements are true in principle, but the cases we’re interested in are precisely the cases where we have some reason to believe a causal story is there. And for both statements, the converse does not hold. A causal story that violates accounting identities, or for which there is no corresponding correlation, has a problem.

[2] Or lower real wages, the same thing in this context.

[3] Or you sometimes get a hierarchy of “fast” and “slow” variables, where the fast ones are supposed to fully adjust before the slow ones change at all.

More Anti-Krugmanism

[Some days it feels like that could be the title for about 40 percent of the posts on here.]

Steve Keen takes up the cudgels. (Via.)

There is a pattern to neoclassical attempts to increase the realism of their models… The author takes the core model – which cannot generate the real world phenomenon under discussion – and then adds some twist to the basic assumptions which, hey presto, generate the phenomenon in some highly stylised way. The mathematics (or geometry) of the twist is explicated, policy conclusions (if any) are then drawn, and the paper ends.

The flaw with this game is the very starting point, and since Minsky put it best, I’ll use his words to explain it: “Can ‘It’ – a Great Depression – happen again? And if ‘It’ can happen, why didn’t ‘It’ occur in the years since World War II? … To answer these questions it is necessary to have an economic theory which makes great depressions one of the possible states in which our type of capitalist economy can find itself.”

The flaw in the neoclassical game is that it never achieves Minsky’s final objective, because the “twists” that the author adds to the basic assumptions of the neoclassical model are never incorporated into its core. The basic theory therefore remains one in which the key phenomenon under investigation … cannot happen. The core theory remains unaltered – rather like a dog that learns how to walk on its hind legs, but which then reverts to four legged locomotion when the performance is over.

Right.

Any theory is an abstraction of the real world, but the question is which features of the world you can abstract from, and which, for the purposes of theory, are fundamental. Today’s consensus macroeconomics [1] treats intertemporal maximization of a utility function (with consumption and labor as the only arguments) under given endowments and production functions, and unique, stable market-clearing equilibria as the essential features that any acceptable theory has to start from. It treats firms (profit-maximizing or otherwise), money, credit, uncertainty, the existence of classes, and technological change as non-essential features that need to be derived from intertemporal maximization by households, can be safely ignored, or at best added in an ad hoc way. And change is treated in terms of comparative statics rather than dynamic processes or historical evolution.

Now people will say, But can’t you make the arguments you want to within standard techniques? And in that case, shouldn’t you? Even if it’s not strictly necessary, isn’t it wise to show your story is compatible with the consensus approach, since that way you’ll be more likely to convince other economists, have more political influence, etc.?

If you’re a super smart micro guy (as are the two friends I’ve recently had this conversation with) then there’s probably a lot of truth to this. The type of work you do if you genuinely want to understand a labor market question, say, and the type of work you do if you want to win an argument within the economics profession about labor markets, may not be exactly the same, but they’re reasonably compatible. Maybe the main difference is that you need fancier econometrics to convince economists than to learn about the world?

But if you’re doing macroeconomics, the concessions you have to make to make your arguments acceptable are more costly. When you try to force Minsky into a DSGE box, as Krugman does; or when half of your paper on real exchange rates is taken up with models of utility maximization by households; then you’re not just wasting an enormous amount of time and brainpower. You’re arguing against everyone else trying to do realistic work on other questions, including yourself on other occasions. And you’re ensuring that your arguments will have a one-off, ad hoc quality, instead of being developed in a systematic way.

(Not to mention that the consensus view isn’t even coherent on its own terms. Microfoundations are a fraud, since the representative household can’t be derived from a more general model of utility maximizing agents; and it seems clear that intertemporal maximization and comparative statics are logically incompatible.) 

If we want to get here, we shouldn’t start from there. We need an economics whose starting points are production for profit by firms employing wage labor, under uncertainty, in a monetary economy that evolves in historical time. That’s what Marx, Keynes and Schumpeter in their different ways were all doing. They, and their students, have given us a lot to build on. But to do so, we [2] have to give up on trying to incorporate their insights piecemeal into the consensus framework, and cultivate a separate space to develop a different kind of economics, one that starts from premises corresponding to the fundamental features of modern capitalist economies.

[1] I’ve decided to stop using “mainstream” in favor of “consensus”, largely because the latter term is used by people to describe themselves.

[2] By “we,” again, I mean heterodox macroeconomists specifically. I’m not sure how much other economists face the same sharp tradeoff between winning particular debates within the economics profession and building an economics that gives us genuine, useful knowledge about the world.

Summers on Microfoundations

From The Economist’s report on this weekend’s Institute for New Economic Thinking conference at Bretton Woods:

The highlight of the first evening’s proceedings was a conversation between Harvard’s Larry Summers, till recently President Obama’s chief economic advisor, and Martin Wolf of the Financial Times. Much of the conversation centred on Mr Summers’s assessments of how useful economic research had been in recent years. Paul Krugman famously said that much of recent macroeconomics had been “spectacularly useless at best, and positively harmful at worst”. Mr Summers was more measured… But in its own way, his assessment of recent academic research in macroeconomics was pretty scathing.

For instance, he talked about all the research papers that he got sent while he was in Washington. He had a fairly clear categorisation for which ones were likely to be useful: read virtually all the ones that used the words leverage, liquidity, and deflation, he said, and virtually none that used the words optimising, choice-theoretic or neoclassical (presumably in the titles or abstracts). His broader point—reinforced by his mentions of the knowledge contained in the writings of Bagehot, Minsky, Kindleberger, and Eichengreen—was, I think, that while it would be wrong to say economics or economists had nothing useful to say about the crisis, much of what was the most useful was not necessarily the most recent, or even the most mainstream. Economists knew a great deal, he said, but they had also forgotten a great deal and been distracted by a lot.

Even more scathing, perhaps, was his comment that as a policymaker he had found essentially no use for the vast literature devoted to providing sound micro-foundations to macroeconomics.

Pretty definitive, no?

And that’s it — I promise! — on microfoundations, methodology, et hoc genus omne in these parts, at least for a while. I have a couple new posts at least purporting to offer concrete analysis of the concrete situation, just about ready to go.

On Other Blogs, Other Wonders

… or at least some interesting posts.

1. What Kind of Science Would Economics Be If It Really Were a Science?

Peter Dorman is one of those people who I agree with on the big questions but find myself strenuously disagreeing with on many particulars. So it’s nice to wholeheartedly approve this piece on economics and the physical sciences.

The post is based on this 2008 paper that argues that there is no reason that economics cannot be scientific in the same rigorous sense as geology, biology, etc., but only if economists learn to (1) emphasize mechanisms rather than equilibrium and (2) strictly avoid Type I error, even at the cost of Type II error. Type I error is accepting a false claim, Type II is failing to accept a true one. Which is not the same as rejecting it — one can simply be uncertain. Science’s progressive character comes from its rigorous refusal to accept any proposition until every possible effort to disprove it has failed. Of course this means that on many questions, science can take no position at all (an important distinction from policy and other forms of practical activity, where we often have to act one way or another without any very definite knowledge). It sounds funny to say that ignorance is the heart of the practice of science, but I think it’s right. Unfortunately, says Dorman, rather than seeing science as the systematic effort to limit our knowledge claims to things we can know with (near-)certainty, “economists have been seduced by a different vision … that the foundation of science rests on … deduction from top-level theory.”

The mechanisms vs. equilibria point is, if anything, even more important, since it has positive content for how we do economics. Rather than focusing our energy on elucidating theoretical equilibria, we should be thinking about concrete processes of change over time. For example:

Consider the standard supply-and-demand diagram. The professor draws this on the chalkboard, identifies the equilibrium point, and asks for questions. One student asks, are there really supply and demand curves? … Yes, in principle these curves exist, but they are not directly observed in nature. …

there is another way the answer might proceed. … we can use them to identify two other things that are real, excess supply and excess demand. We can measure them directly in the form of unsold goods or consumers who are frustrated in their attempts to make a purchase. And not only can we measure these things, we can observe the actions that buyers and sellers take under conditions of surplus or shortage.

One of the best brief discussions of economics methodology I’ve read.

2. Beware the Predatory Pro Se Borrower!

In general, I assume that anyone here interested in Yves Smith is already reading her, so there’s no point in a post pointing to a post there. But this one really must be read.

It’s a presentation from a law firm representing mortgage servicers, with the Dickensian name LockeLordBissell, meant for servicers conducting foreclosures that meet with legal challenges. That someone would even choose to go to court to avoid being thrown out of their house needs special explanation; it must be a result of “negative press surrounding mortgage lenders” and outside agitators on the Internet. People even think they can assert their rights without a lawyer; they “do not want to pay for representation,” it being inconceivable that someone facing foreclosure might, say, have lost their job and not be able to afford a lawyer. “Predatory borrowers” are “unrealistic and unreasonable borrowers who are trying to capitalize on the current industry turmoil and are willing to employ any tactic to obtain a free home,” including demands to see the note, claims of lack of standing by the servicer, and “other Internet-based machinations.” What’s the world coming to when any random loser has access to the courts? And imagine, someone willing to employ tactics like asking for proof that the company trying to take their home has a legal right to it! What’s more, these stupid peasants “are emotionally tied to their cases [not to mention their houses]; the more a case progresses, the less reasonable the plaintiff becomes.” Worst of all, “pro se cases are expensive to defend because the plaintiff’s lack of familiarity with the legal process often creates more work for the defendant.”

If you want an illustration of how our masters think of us, you couldn’t ask for a clearer example. Our stubborn idea that we have rights or interests of our own is just an annoying interference with their prerogatives.

Everyone knows about bucket lists. At the bar last weekend, someone suggested we should keep bat lists — the people whose heads you’d take a Louisville slugger to, if you knew you just had a few months to live. This being the Left Forum, my friend had “that class traitor Andy Stern” at the top of his list. But I’m putting the partners at LockeLordBissell high up on mine.

3. Palin and Playing by the Rules

Jonathan Bernstein, on why Sarah Palin isn’t going to be the Republican nominee:

For all one hears about efforts to market candidates to mass electorates (that’s what things like the “authenticity” debate are all about), the bulk of nomination politics is retail, not wholesale — and the customers candidates are trying to reach are a relatively small group of party elites…. That’s what Mitt Romney and Tim Pawlenty have been doing for the last two-plus years… It’s what, by every report I’ve seen since November 2008, Sarah Palin has just not done.

Are you telling me that [Republican Jewish Committee] board members are going to be so peeved that Sarah Palin booked her Israel trip with some other organization that they’re [going to] turn it into a presidential nomination preference, regardless of how Palin or any other candidate actually stands on issues of public policy?

Yup. And even more: I’ll tell you that it’s not petty. They’re correct to do so. … if you’re a party leader, what can you do? Sure, you can collect position papers, but you know how meaningless those are going to be…. Much better, even if still risky, is assessing the personal commitment the candidates have to your group. What’s the rapport like? Who has the candidate hired on her staff that has a history of working with you? Will her White House take your calls? …

It’s how presidential nominees are really chosen. … Candidates do have to demonstrate at least some ability to appeal to mass electorates, but first and foremost they need to win the support of the most active portions of the party.

It’s not a brilliant or especially original point, but it’s a very important one. My first-hand experience of electoral politics is limited to state and local races, but I’ve worked on quite a few of those, and Bernstein’s description fits them exactly. I don’t see any reason to think national races are different.

It’s part of the narcissism of intellectuals to imagine politics as a kind of debating society, with the public granting authority to whoever makes the best arguments — what intellectuals specialize in. And it’s natural for people whose only engagement with politics comes through the mass media to suppose that what happens in the media is very important, or even all there is. But Bernstein is right: That stuff is secondary, and the public comes in as object, not subject.

Not always, of course — there are moments when the people does become an active political subject, and those are the most important political moments there are. But they’re very rare. That’s why someone like Luciano Canfora makes a sharp distinction between the institutions and electoral procedures conventionally referred to as democracy, on the one hand, and genuine democracy, on the other — those relatively brief moments of “ascendancy of the demos,” which “may assert itself within the most diverse political-constitutional forms.” For Canfora, democracy can’t be institutionalized through elections; it’s inherently “an unstable phenomenon: the temporary ascendancy of the poorer classes in the course of an endless struggle for equality—a concept which itself widens with time to include ever newer, and ever more strongly challenged, ‘rights’“. (Interestingly, a liberal like Brad DeLong would presumably agree that elections have nothing to do with democracy, but are a mechanism for the circulation of elites.)

I don’t know how far Bernstein would go with Canfora, but he’s taken the essential first step; it would be a good thing for discussions of electoral politics if more people followed him.

EDIT: Just to be clear, Bernstein’s point is a bit more specific than the broad only-elites-matter argument. What candidates are selling to elites isn’t so much a basket of policy positions or desirable personal qualities, but relationships based on trust. It’s interesting, I think it’s true; it doesn’t contradict my gloss, but it does go beyond it.

Microfoundations, Again

Sartre has a wonderful bit in the War Diaries about his childhood discovery of atheism:

One day at La Rochelle, while waiting for the Machado girls who used to keep me company every morning on my way to the lycee, I grew impatient with their lateness and, to while away the time, decided to think about God. “Well,” I said, “he doesn’t exist.” It was something authentically self-evident, although I have no idea any more what it was based on. And then it was over and done with…

Similarly with microfoundations: First of all, they don’t exist. But this rather important point tends to get lost when we follow the conceptual questions too far out into the weeds.

Yes, your textbook announces that “Nothing appears in this book that is not based on explicit microfoundations.” But then 15 pages later, you find that “We assume that all individuals in the economy are identical,” and that these identical individuals have intertemporally-additive preferences. How is this representative agent aggregated up from a market composed of many individuals with differing preferences? It’s not. And in general, it can’t be. As Sonnenschein, Mantel and Debreu showed decades ago, there is no mathematical way to consistently aggregate a set of individual demand functions into a well-behaved aggregate demand function, let alone one consistent with temporally additive preferences.

So let’s say we are interested in the relationship between aggregate income and consumption. The old Keynesian (or structuralist) approach is to stipulate a relationship like C = cY, where c < 1 in the short run and approaches 1 over longer horizons; while the modern approach is to derive the relationship explicitly from a representative agent maximizing utility intertemporally. But since there’s no way to get that representative agent by aggregating heterogeneous individuals — and since even the representative agent approach doesn’t produce sensible results unless we impose restrictive conditions on its preferences — there is no sense in which the latter is any more microfounded than the former.

So if the representative agent can’t actually be derived from any model of individual behavior, why is it used? The Obstfeld and Rogoff book I quoted before at least engages the question; it considers various answers before concluding that “Fundamentally,” a more general approach “would yield few concrete behavioral predictions.” Which is really a pretty damning admission of defeat for the microfoundations approach. Microeconomics doesn’t tell us anything about what to expect at a macro level, so macroeconomics has to be based on observations of macro phenomena; the “microfoundations” are bolted on afterward.

None of this is at all original. If Cosma Shalizi and Daniel Davies didn’t explicitly say this, it’s because they assume anyone interested in this debate knows it already. Why the particular mathematical formalism misleadingly called microfoundations has such a hold on the imagination of economists is a good question, for which I don’t have a good answer. But the unbridgeable gap between our supposed individual-level theory of economic behavior and the questions addressed by macroeconomics is worth keeping in mind before we get too carried away with discussions of principle.
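
Here is a toy illustration of the aggregation problem (a sketch of the general point, not the Sonnenschein-Mantel-Debreu result itself): with two households that have different marginal propensities to consume, aggregate consumption depends on how a given total income is distributed, so there is no stable function from aggregate income to aggregate consumption, and no single "representative" consumer standing behind one. The propensities and incomes below are made up.

```python
# Two households with different marginal propensities to consume.
# Aggregate consumption is not a function of aggregate income alone:
# the same total income gives different total consumption depending on
# how it is split. Numbers are purely illustrative.

mpc = [0.9, 0.3]   # a hand-to-mouth-ish household and a high saver

def aggregate_consumption(incomes):
    return sum(c * y for c, y in zip(mpc, incomes))

print(aggregate_consumption([50.0, 50.0]))   # total income 100 -> C = 60.0
print(aggregate_consumption([80.0, 20.0]))   # total income 100 -> C = 78.0
```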

Are Microfoundations Necessary?

A typically thoughtful piece by Cosma Shalizi, says Arjun.

And it is thoughtful, for sure, and smart, and interesting. But I think it concedes too much to really existing economics. In particular:

Obviously, macroeconomic phenomena are the aggregated (or, if you like, the emergent) consequences of microeconomic interactions.

No, that isn’t obvious at all. Two arguments.

First, for the moment, let’s grant that macroeconomics, as a theory of aggregate behavior, needs some kind of micro foundations in a theory of individual behavior. Does it need specifically microeconomic foundations? I don’t think so. Macroeconomics studies the dynamics of aggregate output and price level, distribution and growth. Microeconomics studies the dynamics of allocation and the formation of relative prices. It’s not at all clear — it certainly shouldn’t be assumed — that the former are emergent phenomena of the latter. Of course, even if not, one could say that means we have the wrong microeconomics. (Shalizi sort of gestures in that direction.) But if we’re going to use the term microeconomics the way that it’s used, then it’s not at all obvious that, even modified and extended, it’s the right microfoundation for macroeconomics. Even if valid in its own terms, it may not be studying the domains of individual behavior from which the important macro behavior is aggregated.

Second, more broadly, does macroeconomics need microfoundations at all? In other words, do we really know a priori that, since macroeconomics is a theory of aggregate behavior, it must be a special case of a related but more general theory of individual behavior?

We’re used to a model of science where simpler, more general, finer-scale sciences are aggregated up to progressively coarser, more particular and more contingent sciences. Physics -> chemistry -> geology; physics -> chemistry -> biology -> psychology. (I guess many people would put economics at the end of that second chain.) And this model certainly works well in many contexts. The higher-scale sciences deal with emergent phenomena and have their own particular techniques and domain-specific theories, but they are understood to be, at the end of the day, approximations to the dynamics of the more precise and universal theories microfounding them.

It’s not an epistemological given, though, that domains of knowledge will always be nested in this logical way. It is perfectly possible, especially when we’re talking about societies of rational beings, for the regular to emerge from the contingent, rather than vice versa. I would argue, somewhat tentatively, that economics is, with law, the prime example of this — in effect, a locally lawlike system, i.e. one that can be studied scientifically within certain bounds but whose regularities become less lawlike rather than more lawlike as they are generalized.

Let me give a more concrete example: chess. Chess exhibits many lawlike regularities and has given rise to a substantial body of theory. Since this theory deals with the entire game, the board and all the pieces considered together, does it “obviously” follow that it must be founded in a microtheory of chess, studying the behavior of individual pieces or individual squares on the board? No; that’s silly, and no such microtheory is possible. Individual chess pieces, qua chess pieces, don’t exist outside the context of the game. Chess theory does ultimately have microfoundations in the relevant branches of math. But if you want to understand the origins of the specific rules of chess as a game, there’s no way to derive them from a disaggregated theory of individual chess pieces. Rather, you’d have to look at the historical process by which the game as a whole evolved into its current form. And unlike the case in the physical sciences, where we expect the emergent phenomena to have a greater share of contingent, particular elements than the underlying phenomenon (just ask anyone who’s studied organic chemistry!), here the emergent phenomenon — chess with its rules — is much simpler and more regular than the more general phenomenon it’s grounded in.

And that’s how I think of macroeconomics. It’s not an aggregating-up of a more general theory of “how people make choices,” as you’re told in your first undergrad economics class. It is, rather, a theory about the operation of capitalism. And while capitalism is lawlike in much of its operations, those laws don’t arise out of some more general laws of individual behavior. Rather, they arose historically, as a whole, through a concrete, contingent process. Microeconomics is as likely to arise from macroeconomics as the reverse. The profit-maximizing behavior of firms, for example, is not, as it’s often presented, a mere aggregating-up of the utility-maximizing behavior of individuals. [1] Rather, firms are profit maximizers because of the process of accumulation, whereby the survival or growth of the firm in later periods depends on the profits of the firm in earlier periods. There’s no analogous sociological basis for maximization by individuals. [2] Utility-maximizing individuals aren’t the basis of profit-maximizing firms; they’re their warped reflection in the imagination of economists. Profit maximization by capitalist firms, on the other hand, is a very powerful generalization, explaining endless features of the social landscape. And yet the funny thing is, when you try to look behind it, and ask how it’s maintained, you find yourself moving toward more particular, historically specific explanations. Profit maximization is a local peak of lawlikeness.

Descend from the peak, and you’re in treacherous territory. But I just don’t think there’s any way to fit (macro)economics into the mold of positivism. And there’s some comfort in knowing that Keynes seems to have thought along similar lines. Economics, he wrote, is a “moral rather than a pseudo-natural science, a branch of logic, a way of thinking … in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Its purpose is “not to provide a machine or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly way of thinking about our particular problems.” (Quoted in Carabelli and Cedrini.)

[1] Economist readers will know that most mainstream macro models, including the “saltwater” ones, don’t include firms at all, but conduct the whole analysis in terms of utility-maximizing households.

[2] This point is strangely neglected, even by radicals. I heard someone offer recently, as a critique of a paper, that it assumed that employers behaved “rationally,” in the sense of maximizing some objective function, while workers did not. But from a Marxist point of view, that asymmetry is exactly what one should expect.

EDIT: Just to amplify one point a bit:

I suggest in the post that universal laws founded in historical contingency are characteristic of (some) social phenomena, whereas in natural science particular cases always arise from more general laws. But there seems to be one glaring exception. As far as we know, the initial condition of the universe is the mother of all historical contingencies, in the sense that many features of the universe (in particular, the arrow of time) depend on its beginning in its lowest-entropy (least probable) state, a brute fact for which there is not (yet) any further explanation. So if we imagine a graph with the coarse-grainedness of phenomena on the x-axis and the generality of the laws governing them on the y-axis, we would mostly see the smoothly descending curve implied by the idea of microfoundations. But we would see an anomalous spike out toward the coarse-grained end corresponding to economics, and another, somewhat smaller one corresponding to law (which, despite the adherents of legal realism, natural law, law and economics, etc., remains an ungrounded island of order). And we would see a huge dip at the fine-grained end corresponding to the boundary conditions of the universe.

FURTHER EDIT: Daniel Davies agrees, so this has got to be right.