Summers on Microfoundations

From The Economist’s report on this weekend’s Institute for New Economic Thinking conference at Bretton Woods:

The highlight of the first evening’s proceedings was a conversation between Harvard’s Larry Summers, till recently President Obama’s chief economic advisor, and Martin Wolf of the Financial Times. Much of the conversation centred on Mr Summers’s assessments of how useful economic research had been in recent years. Paul Krugman famously said that much of recent macroeconomics had been “spectacularly useless at best, and positively harmful at worst”. Mr Summers was more measured… But in its own way, his assessment of recent academic research in macroeconomics was pretty scathing.

For instance, he talked about all the research papers that he got sent while he was in Washington. He had a fairly clear categorisation for which ones were likely to be useful: read virtually all the ones that used the words leverage, liquidity, and deflation, he said, and virtually none that used the words optimising, choice-theoretic or neoclassical (presumably in the titles or abstracts). His broader point—reinforced by his mentions of the knowledge contained in the writings of Bagehot, Minsky, Kindleberger, and Eichengreen—was, I think, that while it would be wrong to say economics or economists had nothing useful to say about the crisis, much of what was the most useful was not necessarily the most recent, or even the most mainstream. Economists knew a great deal, he said, but they had also forgotten a great deal and been distracted by a lot.

Even more scathing, perhaps, was his comment that as a policymaker he had found essentially no use for the vast literature devoted to providing sound micro-foundations to macroeconomics.

Pretty definitive, no?

And that’s it — I promise! — on microfoundations, methodology, et hoc genus omne in these parts, at least for a while. I have a couple of new posts at least purporting to offer concrete analysis of the concrete situation, just about ready to go.

On Other Blogs, Other Wonders

… or at least some interesting posts.

1. What Kind of Science Would Economics Be If It Really Were a Science?

Peter Dorman is one of those people who I agree with on the big questions but find myself strenuously disagreeing with on many particulars. So it’s nice to be able to wholeheartedly endorse this piece on economics and the physical sciences.

The post is based on this 2008 paper that argues that there is no reason that economics cannot be scientific in the same rigorous sense as geology, biology, etc., but only if economists learn to (1) emphasize mechanisms rather than equilibrium and (2) strictly avoid Type I error, even at the cost of Type II error. Type I error is accepting a false claim, Type II is failing to accept a true one. Which is not the same as rejecting it — one can simply be uncertain. Science’s progressive character comes from its rigorous refusal to accept any proposition until every possible effort to disprove it has failed. Of course this means that on many questions, science can take no position at all (an important distinction from policy and other forms of practical activity, where we often have to act one way or another without any very definite knowledge). It sounds funny to say that ignorance is the heart of the practice of science, but I think it’s right. Unfortunately, says Dorman, rather than seeing science as the systematic effort to limit our knowledge claims to things we can know with (near-)certainty, “economists have been seduced by a different vision … that the foundation of science rests on … deduction from top-level theory.”
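To make the Type I/Type II distinction concrete, here is a toy simulation (my own illustration, not from Dorman’s paper): we test for an effect in noisy data, and a stricter standard of acceptance trades false positives for false negatives.

```python
# A toy sketch of the Type I / Type II tradeoff (my illustration, not
# Dorman's): test a null hypothesis "mean = 0" against samples drawn
# either from the null or from a real effect, at varying strictness.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 30, 10_000

def error_rates(z_threshold):
    # Type I: the null is true, but we accept the (false) claim of an effect.
    null = rng.normal(0.0, 1.0, (trials, n))
    z_null = null.mean(axis=1) / (null.std(axis=1, ddof=1) / np.sqrt(n))
    type_i = np.mean(np.abs(z_null) > z_threshold)

    # Type II: a real effect exists, but we fail to accept the (true) claim.
    effect = rng.normal(0.5, 1.0, (trials, n))
    z_eff = effect.mean(axis=1) / (effect.std(axis=1, ddof=1) / np.sqrt(n))
    type_ii = np.mean(np.abs(z_eff) <= z_threshold)
    return type_i, type_ii

for z in (1.64, 1.96, 2.58, 3.29):
    t1, t2 = error_rates(z)
    print(f"threshold z={z}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
```

Raising the threshold drives the Type I rate toward zero at the cost of failing to accept more true claims, which is exactly the bargain Dorman says science makes: on many questions, no position at all.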

The mechanisms vs. equilibria point is, if anything, even more important, since it has positive content for how we do economics. Rather than focusing our energy on elucidating theoretical equilibria, we should be thinking about concrete processes of change over time. For example:

Consider the standard supply-and-demand diagram. The professor draws this on the chalkboard, identifies the equilibrium point, and asks for questions. One student asks, are there really supply and demand curves? … Yes, in principle these curves exist, but they are not directly observed in nature. …

there is another way the answer might proceed. … we can use them to identify two other things that are real, excess supply and excess demand. We can measure them directly in the form of unsold goods or consumers who are frustrated in their attempts to make a purchase. And not only can we measure these things, we can observe the actions that buyers and sellers take under conditions of surplus or shortage.

One of the best brief discussions of economics methodology I’ve read.
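Dorman’s second answer is easy to animate. Here is a minimal sketch (my own illustration, with made-up linear curves): the curves themselves are never observed, only the shortages and surpluses, and the price moves in response to them.

```python
# A toy sketch of the "mechanisms, not equilibria" reading of supply and
# demand (my illustration, not Dorman's): the observables are unsold goods
# and frustrated buyers, and the concrete process of price adjustment.
def demand(p):  # quantity buyers attempt to purchase at price p
    return max(0.0, 100 - 2 * p)

def supply(p):  # quantity sellers bring to market at price p
    return max(0.0, 3 * p)

p = 5.0  # start far from the p = 20 equilibrium
for step in range(12):
    shortage = demand(p) - supply(p)  # > 0: frustrated buyers; < 0: unsold goods
    print(f"step {step}: price {p:6.2f}, excess demand {shortage:7.2f}")
    p += 0.05 * shortage  # sellers raise prices under shortage, cut them under surplus
```

The equilibrium point never appears in the code; what appears is the measurable disequilibrium and the behavior it provokes, period by period.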

2. Beware the Predatory Pro Se Borrower!

In general, I assume that anyone here interested in Yves Smith is already reading her, so there’s no point in a post pointing to a post there. But this one really must be read.

It’s a presentation from a law firm representing mortgage servicers, with the Dickensian name LockeLordBissell, meant for servicers conducting foreclosures that meet with legal challenges. That someone would even choose to go to court to avoid being thrown out of their house needs special explanation; it must be a result of “negative press surrounding mortgage lenders” and outside agitators on the Internet. People even think they can assert their rights without a lawyer; they “do not want to pay for representation,” it being inconceivable that someone facing foreclosure might, say, have lost their job and not be able to afford a lawyer. “Predatory borrowers” are “unrealistic and unreasonable borrowers who are trying to capitalize on the current industry turmoil and are willing to employ any tactic to obtain a free home,” including demands to see the note, claims of lack of standing by the servicer, and “other Internet-based machinations.” What’s the world coming to when any random loser has access to the courts? And imagine, someone willing to employ tactics like asking for proof that the company trying to take their home has a legal right to it! What’s more, these stupid peasants “are emotionally tied to their cases [not to mention their houses]; the more a case progresses, the less reasonable the plaintiff becomes.” Worst of all, “pro se cases are expensive to defend because the plaintiff’s lack of familiarity with the legal process often creates more work for the defendant.”

If you want an illustration of how our masters think of us, you couldn’t ask for a clearer example. Our stubborn idea that we have rights or interests of our own is just an annoying interference with their prerogatives.

Everyone knows about bucket lists. At the bar last weekend, someone suggested we should keep bat lists — the people whose heads you’d take a Louisville Slugger to, if you knew you just had a few months to live. This being the Left Forum, my friend had “that class traitor Andy Stern” at the top of his list. But I’m putting the partners at LockeLordBissell high up on mine.

3. Palin and Playing by the Rules

Jonathan Bernstein, on why Sarah Palin isn’t going to be the Republican nominee:

For all one hears about efforts to market candidates to mass electorates (that’s what things like the “authenticity” debate are all about), the bulk of nomination politics is retail, not wholesale — and the customers candidates are trying to reach are a relatively small group of party elites…. That’s what Mitt Romney and Tim Pawlenty have been doing for the last two-plus years… It’s what, by every report I’ve seen since November 2008, Sarah Palin has just not done.

Are you telling me that [Republican Jewish Committee] board members are going to be so peeved that Sarah Palin booked her Israel trip with some other organization that they’re [going to] turn it into a presidential nomination preference, regardless of how Palin or any other candidate actually stands on issues of public policy?

Yup. And even more: I’ll tell you that it’s not petty. They’re correct to do so. … if you’re a party leader, what can you do? Sure, you can collect position papers, but you know how meaningless those are going to be…. Much better, even if still risky, is assessing the personal commitment the candidates have to your group. What’s the rapport like? Who has the candidate hired on her staff that has a history of working with you? Will her White House take your calls? …

It’s how presidential nominees are really chosen. … Candidates do have to demonstrate at least some ability to appeal to mass electorates, but first and foremost they need to win the support of the most active portions of the party.

It’s not a brilliant or especially original point, but it’s a very important one. My first-hand experience of electoral politics is limited to state and local races, but I’ve worked on quite a few of those, and Bernstein’s description fits them exactly. I don’t see any reason to think national races are different.

It’s part of the narcissism of intellectuals to imagine politics as a kind of debating society, with the public granting authority to whoever makes the best arguments — what intellectuals specialize in. And it’s natural for people whose only engagement with politics comes through the mass media to suppose that what happens in the media is very important, or even all there is. But Bernstein is right: That stuff is secondary, and the public comes in as object, not subject.

Not always, of course — there are moments when the people does become an active political subject, and those are the most important political moments there are. But they’re very rare. That’s why someone like Luciano Canfora makes a sharp distinction between the institutions and electoral procedures conventionally referred to as democracy, on the one hand, and genuine democracy, on the other — those relatively brief moments of “ascendancy of the demos,” which “may assert itself within the most diverse political-constitutional forms.” For Canfora, democracy can’t be institutionalized through elections; it’s inherently “an unstable phenomenon: the temporary ascendancy of the poorer classes in the course of an endless struggle for equality—a concept which itself widens with time to include ever newer, and ever more strongly challenged, ‘rights’.” (Interestingly, a liberal like Brad DeLong would presumably agree that elections have nothing to do with democracy, but are a mechanism for the circulation of elites.)

I don’t know how far Bernstein would go with Canfora, but he’s taken the essential first step; it would be a good thing for discussions of electoral politics if more people followed him.

EDIT: Just to be clear, Bernstein’s point is a bit more specific than the broad only-elites-matter argument. What candidates are selling to elites isn’t so much a basket of policy positions or desirable personal qualities as relationships based on trust. It’s an interesting point, and I think a true one; it doesn’t contradict my gloss, but it does go beyond it.

Microfoundations, Again

Sartre has a wonderful bit in the War Diaries about his childhood discovery of atheism:

One day at La Rochelle, while waiting for the Machado girls who used to keep me company every morning on my way to the lycée, I grew impatient with their lateness and, to while away the time, decided to think about God. “Well,” I said, “he doesn’t exist.” It was something authentically self-evident, although I have no idea any more what it was based on. And then it was over and done with…

Similarly with microfoundations: First of all, they don’t exist. But this rather important point tends to get lost when we follow the conceptual questions too far out into the weeds.

Yes, your textbook announces that “Nothing appears in this book that is not based on explicit microfoundations.” But then 15 pages later, you find that “We assume that all individuals in the economy are identical,” and that these identical individuals have intertemporally additive preferences. How is this representative agent aggregated up from a market composed of many individuals with differing preferences? It’s not. And in general, it can’t be. As Sonnenschein, Mantel and Debreu showed decades ago, there is no mathematical way to consistently aggregate a set of individual demand functions into a well-behaved aggregate demand function, let alone one consistent with intertemporally additive preferences.

So let’s say we are interested in the relationship between aggregate income and consumption. The old Keynesian (or structuralist) approach is to stipulate a relationship like C = cY, where c < 1 in the short run and approaches 1 over longer horizons; the modern approach is to derive the relationship explicitly from a representative agent maximizing utility intertemporally. But since there’s no way to get that representative agent by aggregating heterogeneous individuals (and since even the representative-agent approach doesn’t produce sensible results unless we impose restrictive conditions on its preferences), there is no sense in which the latter is any more microfounded than the former.

So if the representative agent can’t actually be derived from any model of individual behavior, why is it used? The Obstfeld and Rogoff book I quoted before at least engages the question; it considers various answers before concluding that “Fundamentally,” a more general approach “would yield few concrete behavioral predictions.” Which is really a pretty damning admission of defeat for the microfoundations approach. Microeconomics doesn’t tell us anything about what to expect at a macro level, so macroeconomics has to be based on observations of macro phenomena; the “microfoundations” are bolted on afterward.

None of this is at all original. If Cosma Shalizi and Daniel Davies didn’t explicitly say it, that’s because they assume anyone interested in this debate knows it already. Why the particular mathematical formalism misleadingly called microfoundations has such a hold on the imagination of economists is a good question, for which I don’t have a good answer. But the unbridgeable gap between our supposed individual-level theory of economic behavior and the questions addressed by macroeconomics is worth keeping in mind before we get too carried away with discussions of principle.
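The distributional side of the aggregation problem can be shown in a few lines. A toy sketch (my own, with made-up propensities to consume, not from Obstfeld and Rogoff): two households, the same aggregate income, different aggregate consumption.

```python
# A toy sketch of why aggregation fails (my illustration): with
# heterogeneous marginal propensities to consume, aggregate consumption
# depends on the *distribution* of income, not just its total, so no
# representative agent with a stable C(Y) can stand in for the population.
c = [0.95, 0.60]  # two households' marginal propensities to consume

def aggregate_consumption(incomes):
    return sum(ci * yi for ci, yi in zip(c, incomes))

# Same aggregate income Y = 100, two different distributions of it:
print(aggregate_consumption([80, 20]))  # 0.95*80 + 0.60*20 = 88.0
print(aggregate_consumption([20, 80]))  # 0.95*20 + 0.60*80 = 67.0
```

With heterogeneous households there is simply no function C(Y) to be found; any “aggregate consumption function” is a fact about the income distribution as much as about preferences.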

Are Microfoundations Necessary?

A typically thoughtful piece by Cosma Shalizi, says Arjun.

And it is thoughtful, for sure, and smart, and interesting. But I think it concedes too much to really existing economics. In particular:

Obviously, macroeconomic phenomena are the aggregated (or, if you like, the emergent) consequences of microeconomic interactions.

No, that isn’t obvious at all. Two arguments.

First, for the moment, let’s grant that macroeconomics, as a theory of aggregate behavior, needs some kind of microfoundations in a theory of individual behavior. Does it need specifically microeconomic foundations? I don’t think so. Macroeconomics studies the dynamics of aggregate output and price level, distribution and growth. Microeconomics studies the dynamics of allocation and the formation of relative prices. It’s not at all clear — it certainly shouldn’t be assumed — that the former are emergent phenomena of the latter. Of course, even if not, one could say that means we have the wrong microeconomics. (Shalizi sort of gestures in that direction.) But if we’re going to use the term microeconomics the way that it’s used, then it’s not at all obvious that, even modified and extended, it’s the right microfoundation for macroeconomics. Even if valid in its own terms, it may not be studying the domains of individual behavior from which the important macro behavior is aggregated.

Second, more broadly, does macroeconomics need microfoundations at all? In other words, do we really know a priori that since macroeconomics is a theory of aggregate behavior, it must be a special case of a related but more general theory of individual behavior?

We’re used to a model of science where simpler, more general, finer-scale sciences are aggregated up to progressively coarser, more particular and more contingent sciences. Physics -> chemistry -> geology; physics -> chemistry -> biology -> psychology. (I guess many people would put economics at the end of that second chain.) And this model certainly works well in many contexts. The higher-scale sciences deal with emergent phenomena and have their own particular techniques and domain-specific theories, but they are understood to be, at the end of the day, approximations to the dynamics of the more precise and universal theories microfounding them.

It’s not an epistemological given, though, that domains of knowledge will always be nested in this logical way. It is perfectly possible, especially when we’re talking about societies of rational beings, for the regular to emerge from the contingent, rather than vice versa. I would argue, somewhat tentatively, that economics is, with law, the prime example of this — in effect, a locally lawlike system, i.e. one that can be studied scientifically within certain bounds but whose regularities become less lawlike rather than more lawlike as they are generalized.

Let me give a more concrete example: chess. Chess exhibits many lawlike regularities and has given rise to a substantial body of theory. Since this theory deals with the entire game, the board and all the pieces considered together, does it “obviously” follow that it must be founded in a microtheory of chess, studying the behavior of individual pieces or individual squares on the board? No, that’s silly; no such microtheory is possible. Individual chess pieces, qua chess pieces, don’t exist outside the context of the game. Chess theory does ultimately have microfoundations in the relevant branches of math. But if you want to understand the origins of the specific rules of chess as a game, there’s no way to derive them from a disaggregated theory of individual chess pieces. Rather, you’d have to look at the historical process by which the game as a whole evolved into its current form. And unlike the case in the physical sciences, where we expect the emergent phenomena to have a greater share of contingent, particular elements than the underlying phenomena (just ask anyone who’s studied organic chemistry!), here the emergent phenomenon — chess with its rules — is much simpler and more regular than the more general phenomenon it’s grounded in.

And that’s how I think of macroeconomics. It’s not an aggregating-up of a more general theory of “how people make choices,” as you’re told in your first undergrad economics class. It is, rather, a theory about the operation of capitalism. And while capitalism is lawlike in much of its operation, those laws don’t arise out of some more general laws of individual behavior. Rather, they arose historically, as a whole, through a concrete, contingent process. Microeconomics is as likely to arise from macroeconomics as the reverse.

The profit-maximizing behavior of firms, for example, is not, as it’s often presented, a mere aggregating-up of the utility-maximizing behavior of individuals. [1] Rather, firms are profit maximizers because of the process of accumulation, whereby the survival or growth of the firm in later periods depends on the profits of the firm in earlier periods. There’s no analogous sociological basis for maximization by individuals. [2] Utility-maximizing individuals aren’t the basis of profit-maximizing firms; they’re their warped reflection in the imagination of economists.

Profit maximization by capitalist firms, on the other hand, is a very powerful generalization, explaining endless features of the social landscape. And yet the funny thing is, when you try to look behind it, and ask how it’s maintained, you find yourself moving toward more particular, historically specific explanations. Profit maximization is a local peak of lawlikeness.
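The selection story is easy to simulate. A toy sketch (entirely my own, with made-up numbers): no firm here chooses to maximize anything; reinvestment does the selecting.

```python
# A toy sketch of profit maximization as a product of accumulation rather
# than psychology (my illustration): firms follow fixed rules of thumb,
# none "optimizes", but reinvested profits make the most profitable rule
# dominate the population of capital over time.
markups = [0.05, 0.10, 0.20]  # three pricing rules of thumb
# Assumed, made-up mapping from rule to realized profit rate
# (the highest markup loses too many sales to be most profitable):
profit_rate = {0.05: 0.02, 0.10: 0.06, 0.20: 0.03}

capital = {m: 1.0 for m in markups}  # equal initial capital under each rule
for year in range(50):
    for m in markups:
        capital[m] *= 1 + profit_rate[m]  # growth financed out of profits

total = sum(capital.values())
for m in markups:
    print(f"markup {m:.2f}: share of capital {capital[m] / total:.2%}")
```

After fifty rounds of accumulation, the firms following the most profitable rule hold most of the capital; the population of firms behaves “as if” it maximizes profits, whatever is going on in anyone’s head.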

Descend from the peak, and you’re in treacherous territory. But I just don’t think there’s any way to fit (macro)economics into the mold of positivism. And there’s some comfort in knowing that Keynes seems to have thought along similar lines. Economics, he wrote, is a “moral rather than a pseudo-natural science, a branch of logic, a way of thinking … in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Its purpose is “not to provide a machine or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly way of thinking about our particular problems.” (Quoted in Carabelli and Cedrini.)

[1] Economist readers will know that most mainstream macro models, including the “saltwater” ones, don’t include firms at all, but conduct the whole analysis in terms of utility-maximizing households.

[2] This point is strangely neglected, even by radicals. I heard someone offer recently, as a critique of a paper, that it assumed that employers behaved “rationally,” in the sense of maximizing some objective function, while workers did not. But that asymmetry is exactly what a Marxist should expect.

EDIT: Just to amplify one point a bit:

I suggest in the post that universal laws founded in historical contingency are characteristic of (some) social phenomena, whereas in natural science particular cases always arise from more general laws. But there seems to be one glaring exception. As far as we know, the initial condition of the universe is the mother of all historical contingencies, in the sense that many features of the universe (in particular, the arrow of time) depend on its beginning in its lowest-entropy (least probable) state, a brute fact for which there is not (yet) any further explanation. So if we imagine a graph with the coarse-grainedness of phenomena on the x-axis and the generality of the laws governing them on the y-axis, we would mostly see the smoothly descending curve implied by the idea of microfoundations. But we would see an anomalous spike out toward the coarse-grained end corresponding to economics, and another, somewhat smaller one corresponding to law (which, despite the adherents of legal realism, natural law, law and economics, etc., remains an ungrounded island of order). And we would see a huge dip at the fine-grained end corresponding to the boundary conditions of the universe.

FURTHER EDIT: Daniel Davies agrees, so this has got to be right.

The Lucas Critique: A Critique

Old-fashioned economic models (multiplier-accelerator models of the business cycle, for example) operate in historical time: outcomes in one period determine decisions in the next period. That is, agents are backward-looking. The Lucas critique is that this assumes people cannot predict what will happen in the future. The analyst, on the other hand, can derive later outcomes from earlier ones (or we would not be able to tell a causal story), so why can’t the agents in the model?

Lucas says this is an unacceptable contradiction, and resolves it by attributing to the agents the model used by the analyst. (Interestingly, some Post Keynesians (e.g. Shackle) seem to see the same contradiction but resolve it the other way, taking the inability to predict the future attributed to the agents in the model as a fundamental feature of the universe, applicable to the analyst too.) But is the idea of predictable but unpredicted outcomes such an unacceptable contradiction?

One reason to say no is that the idea that agents must know as much as the analyst rests on a sociological foundation – that institutions are such as to foster knowledge of the best estimate of future outcomes. This need not be the case. For example, consider the owners of an asset that has recently appreciated in value, where there is some doubt about whether the appreciation is transitory or permanent, or whether further appreciation should be expected. Those asset-owners who have a convincing story of why further appreciation is likely will be most successful at selling at a higher price, and so will increase their weight in the market. And to have a convincing story you should yourself be convinced by it – this is true both logically and psychologically. Similarly with various arm’s-length relationships that must be periodically renewed – the most accurate promises are not necessarily the most likely to bring success. Or, on the other side, classes and organizations need their members to hold certain beliefs in order to maintain their coherence. This could take the deep form of ideology of various kinds, or the simple form of the practical requirements of organizational decision-making implying a limited set of inputs.

The other reason comes if you carry the Lucas critique through to its logical conclusion. Those who accept rational expectations also use the method of comparative statics, where the transition from one equilibrium to another is the result of “shocks”. One set of technologies, tastes, endowments, policies, etc. yields equilibrium A. Then a shock changes those parameters, and now there’s equilibrium B. Joan Robinson objected to this procedure on the grounds that it ignored the dynamics of the transition from A to B, but there is another problem. Evidently B is a possible future of A. The analyst knows this. So why don’t the inhabitants of A? Unless the shock is literally divine intervention, presumably its probability can be affected by their actions, so doesn’t the analysis of A have to take that into account? Or, even if the shock is indeed an act of God, its possibility must be known – since it is known to the analyst – and so it must affect decisions made in A. But in that case the effects of the shock can be hedged, and nothing happens when the shock arrives; there are no longer two equilibria, just one. So we either have agents with perfect knowledge of everything except shocks, which must literally be divine interventions; or we have only a single equilibrium, which nothing can change; or we become nihilists like Shackle; or we reject the Lucas critique and accept that there are regularities in economic behavior that are not anticipated by the actors involved.
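For reference, the kind of backward-looking model the Lucas critique targets can be written in a few lines. Here is a minimal sketch of the Samuelson multiplier-accelerator model (the standard textbook equations; the parameter values are my own, chosen to give a damped cycle):

```python
# Samuelson's multiplier-accelerator model in historical time: agents are
# backward-looking, so output follows a cycle the analyst can compute in
# advance but the model's agents never anticipate. Parameters are arbitrary.
c, v, G = 0.75, 1.2, 100.0  # propensity to consume, accelerator, gov't spending

Y = [300.0, 320.0]  # arbitrary initial two periods of output
for t in range(2, 40):
    C = c * Y[t - 1]            # consumption out of last period's income
    I = v * (C - c * Y[t - 2])  # investment driven by the *change* in consumption
    Y.append(C + I + G)

for t, y in enumerate(Y):
    print(f"period {t}: output {y:7.2f}")
```

With these values, output cycles and converges toward the stationary level Y = G/(1 − c) = 400. The whole path is deducible by the analyst from the first two periods, which is precisely the “predictable but unpredicted” structure at issue.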

Fragment of an Argument

David Colander, in Beyond Microfoundations: Post Walrasian Macroeconomics, explains that Post Walrasian macro is based on the idea that the complexity of the macro economy and the limits to individual rationality mean that there will not be a unique equilibrium. Institutions and non-price coordinating mechanisms are needed to constrain the available degrees of freedom and produce stability. But “while many past critics of Walrasian economics have based their criticism on the excessive mathematical nature of Walrasian models, I want to be clear that this is not the Post Walrasian criticism; if anything, the post Walrasian criticism is that the mathematics used in Walrasian models is too simple. … The reason Marshall stuck with partial equilibrium was not that he did not understand the interrelationships among markets…. Instead Marshall felt that general equilibrium issues should be dealt with informally until the math appropriate for them was developed. That has only recently happened.” I heard something very similar from Duncan Foley last week: heterodox macro needs to be more mathematically sophisticated than the mainstream, with nonlinear regressions and models using statistical mechanics drawn from physics.

Sorry, I’m not buying it. Colander and Foley are right that it’s not possible to construct a consistent, tractable, intuitive model of the economy using linear equations. But the solution is not to construct intractable, non-intuitive models using more complex math. It’s to abandon the search for a general model and focus instead on locally stable aggregate relationships that allow us to tell causally meaningful stories about particular developments. We don’t need a theory of institutions in the abstract, but historically grounded accounts of specific institutions.