The Wit and Wisdom of Trygve Haavelmo

I was talking some time ago with my friend Enno about Merijn Knibbe’s series of articles on the disconnect between the variables used in economic models and the corresponding variables in the national accounts.1 Enno mentioned Trygve Haavelmo’s 1944 article The Probability Approach in Econometrics; he thought Haavelmo’s distinction between “theoretical variables,” “true variables,” and “observable variables” could be a useful way of thinking about the slippages between economic reality, economic data and economic theory.

I finally picked up the Haavelmo article, and it turns out to be a deep and insightful piece — for the reason Enno mentioned, but also more broadly on how to think about empirical economics. It’s especially interesting coming from someone who won the Nobel Prize for his foundational work in econometrics. Another piece of evidence that orthodox economists in the mid-20th century thought more deeply and critically about the nature of their project than their successors do today.

It’s a long piece, with a lot of mathematical illustrations that someone reading it today can safely skip. The central argument comes down to three overlapping points. First, economic models are tools, developed to solve specific problems. Second, economic theories have content only insofar as they’re associated with specific procedures for measurement. Third, we have positive economic knowledge only insofar as we can make unconditional predictions about the distribution of observable variables.

The first point: We study economics in order to “become master of the happenings of real life.” This is on some level obvious, or vacuous, but it’s important; it functions as a kind of “he who has ears, let him hear.” It marks the line between those who come to economics as a means to some other end — a political commitment, for many of us; but it could just as well come from a role in business or policy — and those for whom economic theory is an end in itself. Economics education must, obviously, be organized on the latter principle. As soon as you walk into an economics classroom, the purpose of your being there is to learn economics. But you can’t, from within the classroom, make any judgement about what is useful or interesting for the world outside. Or as Hayek put it, “One who is only an economist, cannot be a good economist.”2

Here is what Haavelmo says:

Theoretical models are necessary tools in our attempts to understand and explain events in real life. … Whatever be the “explanations” we prefer, it is not to be forgotten that they are all our own artificial inventions in a search for an understanding of real life; they are not hidden truths to be “discovered.”

It’s an interesting question, which we don’t have to answer here, whether or to what extent this applies to the physical sciences as well. Haavelmo thinks this pragmatic view of scientific laws applies across the board:

The phrase “In the natural sciences we have laws” means not much more and not much less than this: The natural sciences have chosen fruitful ways of looking upon physical reality.

We don’t need to decide here whether we want to apply this pragmatic view to the physical sciences. It is certainly the right way to look at economic models, in particular the models we construct in econometrics. The “data generating process” is not an object existing out in the world. It is a construct you have created for one or both of these reasons: It is an efficient description of the structure of a specific matrix of observed data; it allows you to make predictions about some specific yet-to-be-observed outcome. The idea of a data-generating process is obviously very useful in thinking about the logic of different statistical techniques. It may be useful to do econometrics as if there were a certain data generating process. It is dangerously wrong to believe there really is one.
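
To make the point concrete, here is a toy illustration of my own (nothing like it appears in Haavelmo): two different “data generating processes” that imply exactly the same joint distribution of the observables. No amount of data can distinguish them; choosing between them is a modeling decision, not a discovery.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # DGP 1: x causes y, slope 0.5, noise variance 0.75
    x1 = rng.normal(0.0, 1.0, n)
    y1 = 0.5 * x1 + rng.normal(0.0, np.sqrt(0.75), n)

    # DGP 2: y causes x, same slope and noise variance
    y2 = rng.normal(0.0, 1.0, n)
    x2 = 0.5 * y2 + rng.normal(0.0, np.sqrt(0.75), n)

    # Both imply the same bivariate normal: unit variances, correlation 0.5.
    print(np.corrcoef(x1, y1)[0, 1], np.corrcoef(x2, y2)[0, 1])  # both ~0.5
    print(y1.var(), x2.var())                                    # both ~1.0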

Speaking of observation brings us to Haavelmo’s second theme: the meaninglessness of economic theory except in the context of a specific procedure for observation. It might naively seem, he says, that

since the facts we want to study present themselves in the form of numerical measurement, we shall have to choose our models from … the field of mathematics. But the concepts of mathematics obtain their quantitative meaning implicitly through the system of logical operations we impose. In pure mathematics there really is no such problem as quantitative definition of a concept per se …

When economists talk about the problem of quantitative definitions of economic variables, they must have something in mind which has to do with real economic phenomena. More precisely, they want to give exact rules how to measure certain phenomena of real life.

Anyone who got a B+ in real analysis will have no problem with the first part of this statement. For the rest, this is the point: economic quantities come into existence only through some concrete human activity that involves someone writing down a number. You can ignore this, most of the time; but you should not ignore it all of the time. Because without that concrete activity there’s no link between economic theory and the social reality it hopes to help us master or make sense of.

Haavelmo has some sharp observations on the kind of economics that ignores the concrete activity that generates its data, which seem just as relevant to economic practice today:

Does a system of equations become less mathematical and more economic in character just by calling x “consumption,” y “price,” etc.? There are certainly many examples of studies to be found that do not go very much further than this, as far as economic significance is concerned.

There certainly are!

An equation, Haavelmo continues,

does not become an economic theory just by using economic terminology to name the variables involved. It becomes an economic theory when associated with the rule of actual measurement of economic variables.

I’ve seen plenty of papers where the thought process seems to have been something like, “I think this phenomenon is cyclical. Here is a set of difference equations that produce a cycle. I’ll label the variables with names of parts of the phenomenon. Now I have a theory of it!” With no discussion of how to measure the variables or in what sense the objects they describe exist in the external world.

What makes a piece of mathematical economics not only mathematics but also economics is this: When we set up a system of theoretical relationships and use economic names for the otherwise purely theoretical variables involved, we have in mind some actual experiment, or some design of an experiment, which we could at least imagine arranging, in order to measure those quantities in real economic life that we think might obey the laws imposed on their theoretical namesakes.

Right. A model has positive content only insofar as we can describe the concrete set of procedures that gets us from the directly accessible evidence of our senses to the values of its variables. In my experience this comes through very clearly if you talk to someone who actually works in the physical sciences. A large part of their time is spent close to the interface with concrete reality — capturing that lizard, calibrating that laser. The practice of science isn’t simply constructing a formal analog of physical reality, a model trainset. It’s actively pushing against unknown reality and seeing how it pushes back.

Haavelmo:

When considering a theoretical setup … it is common to ask about the actual meaning of this or that variable. But this question has no sense within the theoretical model. And if the question applies to reality it has no precise answer … we will always need some willingness among our fellow research workers to agree “for practical purposes” on questions of definitions and measurement … A design of experiments … is an essential appendix to any quantitative theory.

With respect to macroeconomics, the “design of experiments” means, in the first instance, the design of the national accounts. Needless to say, national accounting concepts cannot be treated as direct observations of the corresponding terms in economic theory, even if they have been reconstructed with that theory in mind. Cynamon and Fazzari’s paper on the measurement of household spending gives some perfect examples of this. There can’t be many contexts in which Medicare payments to hospitals, for example, are what people have in mind when they construct models of household consumption. But nonetheless that’s what they’re measuring when they use consumption data from the national accounts.

I think there’s an important sense in which the actual question of any empirical macroeconomics work has to be: What concrete social process led the people working at the statistics office to enter these particular values in the accounts?

Or as Haavelmo puts it:

There is hardly an economist who feels really happy about identifying the current series of “national income,” “consumption,” etc. with the variables by those names in his theories. Or, conversely, he would think it too complicated or perhaps uninteresting to try to build models … [whose] variables would correspond to those actually given by current economic statistics. … The practical conclusion… is the advice that economists hardly ever fail to give, but that few actually follow, that one should study very carefully the actual series considered and the conditions under which they were produced, before identifying them with the variables of a particular theoretical model.

Good advice! And, as he says, hardly ever followed.

I want to go back to the question of the “meaning” of a variable, because this point is so easy to miss. Within a model, the variables have no meaning; we simply have a set of mathematical relationships that are either tautologous, arbitrary, or false. The variables only acquire meaning insofar as we can connect them to concrete social phenomena. It may be unclear to you, as a blog reader, why I’m banging on this point so insistently. Go to an economics conference and you’ll see.

The third central point of the piece is that meaningful explanation requires being able to identify a few causal links as decisive, so that all the other possible ones can be ignored.

Think back to that Paul Romer piece on what’s wrong with modern macroeconomics. One of the most interesting parts of it, to me, was its insistent Humean skepticism about the possibility of a purely inductive economics, or for that matter science of any kind. Paraphrasing Romer: suppose we have n variables, any of which may potentially influence the others. Well then, we have n equations, one for each variable, and n² parameters (counting intercepts). In general, we are not going to be able to estimate this system based on data alone. We have to restrict the possible parameter space either on the basis of theory, or by “experiments” (natural or otherwise) that let us set most of the parameters to zero on the grounds that there is no independent variation in those variables between observations. I’m not sure that Romer fully engages with this point, whose implications go well beyond the failings of real business cycle theory. But it’s a central concern for Haavelmo:

A theoretical model may be said to be simply a restriction upon the joint variations of a system of quantities … which otherwise might have any value. … Our hope in economic theory and research is that it may be possible to establish constant and relatively simple relations between dependent variables … and a relatively small number of independent variables. … We hope that for each variable y to be explained, there is a relatively small number of explaining factors the variations of which are practically decisive in determining the variations of y. … If we are trying to explain a certain observable variable, y, by a system of causal factors, there is, in general, no limit to the number of such factors that might have a potential influence upon y. But Nature may limit the number of factors that have a nonnegligible factual influence to a relatively small number. Our hope for simple laws in economics rests upon the assumption that we may proceed as if such natural limitations of the number of relevant factors exist.
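
To make Romer’s counting argument concrete, here is the system he has in mind, written out in my notation rather than his:

    y_1 = a_1 + b_12 y_2 + b_13 y_3 + … + b_1n y_n
    y_2 = a_2 + b_21 y_1 + b_23 y_3 + … + b_2n y_n
    …
    y_n = a_n + b_n1 y_1 + b_n2 y_2 + … + b_n,n-1 y_n-1

Each of the n equations has one intercept and n − 1 slope coefficients, so the unrestricted system has n × n = n² parameters. And the standard simultaneous-equations result is that, without restrictions imposed from outside the data — theory, or “experiments” that justify setting some of the b’s to zero a priori — no quantity of observations on the y’s will identify the rest.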

One way or another, to do empirical economics, we have to ignore most of the logically possible relationships between our variables. Our goal, after all, is to explain variation in the dependent variable. Meaningful explanation is possible only if the number of relevant causal factors is small. If someone asks “why is unemployment high”, a meaningful answer is going to involve at most two or three causes. If you say, “I have no idea, but all else equal wage regulations are making it higher,” then you haven’t given an answer at all. To be masters of the happenings of real life, we need to focus on causes of effects, not effects of causes.

In other words, ceteris paribus knowledge isn’t knowledge at all. Only unconditional claims count — but they don’t have to be predictions of a single variable, they can be claims about the joint distribution of several. But in any case we have positive knowledge only to the extent we can unconditionally say that future observations will fall entirely in a certain part of the state space. This fails if we have a ceteris paribus condition, or if our empirical work “corrects” for factors whose distribution and the nature of whose influence we have not investigated.3 Applied science is useful because it gives us knowledge of the kind, “If I don’t turn the key, the car will not start; if I do turn the key, it will — or if it doesn’t, there is a short list of possible reasons why not.” It doesn’t give us knowledge like “All else equal, the car is more likely to start when the key is turned than when it isn’t.”4

If probability distributions are simply tools for making unconditional claims about specific events, then it doesn’t make sense to think of them as existing out in the world. They are, as Keynes also emphasized, simply ways of describing our own subjective state of belief:

We might interpret “probability” simply as a measure of our a priori confidence in the occurrence of a certain event. Then the theoretical notion of a probability distribution serves us chiefly as a tool for deriving statements that have a very high probability of being true.

Another way of looking at this: Research in economics is generally framed in terms of uncovering universal laws, for which the particular phenomenon being studied merely serves as a case study.5 But in the real world, it’s more often the other way: We are interested in some specific case, often the outcome of some specific action we are considering. Or as Haavelmo puts it,

As a rule we are not particularly interested in making statements about a large number of observations. Usually, we are interested in a relatively small number of observation points; or perhaps even more frequently, we are interested in a practical statement about just one single new observation.

We want economics to answer questions like, “What will happen if the US imposes tariffs on China?” The question of what effects tariffs have on trade in the abstract is, itself, uninteresting and unanswerable.

What do we take from this? What, according to Haavelmo, should empirical economics look like?

First, the goal of empirical work is to explain concrete phenomena — what happened, or will happen, in some particular case.

Second, the content of a theory is inseparable from the procedures for measuring the variables in it.

Third, empirical work requires restrictions on the logically possible space of parameters, some of which have to be imposed a priori.

Finally, prediction (the goal) means making unconditional claims about the joint distribution of one or more variables. “Everything else equal” means “I don’t know.”

All of this is based on the idea that we study economics not as an end in itself, but in response to the problems forced on us by the world.

Links for October 6

More methodenstreit. I finally read the Romer piece on the trouble with macro. Some good stuff in there. I’m glad to see someone of his stature making the point that the Solow residual is simply the part of output growth that is not explained by a production function. It has no business being dressed up as “total factor productivity” and treated as a real thing in the world. Probably the most interesting part of the piece was the discussion of identification, though I’m not sure how much it supports his larger argument about macro. The impossibility of extracting causal relationships from statistical data would seem to strengthen the argument for sticking with strong theoretical priors. And I found it a bit odd that his modus ponens for reality-based macro was accepting that the Fed brought down output and (eventually) inflation in the early 1980s by reducing the money supply — the mechanisms and efficacy of conventional monetary policy are not exactly settled questions. (Funnily enough, Krugman’s companion piece makes just the opposite accusation of orthodoxy — that they assumed an increase in the money supply would raise inflation.) Unlike Brian Romanchuk, I think Romer has some real insights into the methodology of economics. There are also, of course, some broadsides against the policy views of various rightwing economists. I’m sympathetic to both parts but not sure they don’t add up to less than their sum.

David Glasner’s interesting comment on Romer makes in passing a point that’s bugged me for years — that you can’t talk about transitions from one intertemporal equilibrium to another; there’s only the one. Or equivalently, you can’t have a model with rational expectations and then talk about what happens if there’s a “shock.” To say there is a shock in one period is just to say that expectations in the previous period were wrong. Glasner:

the Lucas Critique applies even to micro-founded models, those models being strictly valid only in equilibrium settings and being unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

Here’s another take on the state of macro, from the estimable Marc Lavoie. I have to admit, I don’t care for the way it’s framed around “the crisis”. It’s not like DSGE models were any more useful before 2008.

Steve Keen has his own view of where macro should go. I almost gave up on reading this piece, given Forbes’ decision to ban adblockers (Ghostery reports 48 different trackers in their “ad-light” site) and to split the article up over six pages. But I persevered and … I’m afraid I don’t see any value in what Keen proposes. Perhaps I’ll leave it at that. Roger Farmer doesn’t see the value either.

In my opinion, the way forward, certainly for people like me — or, dear reader, like you — who have zero influence on the direction of the economics profession, is to forget about finding the right model for “the economy” in the abstract, and focus more on quantitative description of concrete historical developments. I expressed this opinion in a bunch of tweets, storified here.

 

The Gosplan of capitalism. Schumpeter described banks as capitalism’s equivalent of the Soviet planning agency — a bank loan can be thought of as an order allocating part of society’s collective resources to a particular project. This applies even more to the central banks that set the overall terms of bank lending, but this conscious direction of the economy has been hidden behind layers of ideological obfuscation about the natural rate, policy rules and so on. As DeLong says, central banks are central planners that dare not speak their name. This silence is getting harder to maintain, though. Every day there seems to be a new news story about central banks intervening in some new credit market or administering some new price. Via Ben Bernanke, here is the Bank of Japan announcing it will start targeting the yield of 10-year Japanese government bonds, instead of limiting itself to the very short end where central banks have traditionally operated. (Although as he notes, they “muddle the message somewhat” by also announcing quantities of bonds to be purchased.) Bernanke adds:

there is a U.S. precedent for the BOJ’s new strategy: The Federal Reserve targeted long-term yields during and immediately after World War II, in an effort to hold down the costs of war finance.

And in the FT, here is the Bank of England announcing it will begin buying corporate bonds, an unambiguous step toward direct allocation of credit:

The bank will conduct three “reverse auctions” this week, each aimed at buying the bonds from particular sectors. Tuesday’s auction focuses on utilities and industries. Individual companies include automaker Rolls-Royce, oil major Royal Dutch Shell and utilities such as Thames Water.

 

Inflation or socialism. That interventions taken in the heat of a crisis to stabilize financial markets can end up being steps toward “a more or less comprehensive socialization of investment,” may be more visible to libertarians, who are inclined to see central banks as a kind of socialism already. At any rate, Scott Sumner has been making some provocative posts lately about a choice between “inflation or socialism”. Personally I don’t have much use for NGDP targeting — Sumner’s idée fixe — or the analysis that underlies it, but I do think he is onto something important here. To translate the argument into Keynes’ terms, the problem is that the minimum return acceptable to wealth owners may be, under current conditions, too high to justify the level of investment consistent with the minimum level of growth and employment acceptable to the rest of society. Bridging this gap requires the state to increasingly take responsibility for investment, either directly or via credit policy. That’s the socialism horn of the dilemma. Or you can get inflation, which, in effect, forces wealth owners to accept a lower return; or, to put it more positively, as Sumner does, makes it more attractive to hold wealth in forms that finance productive investment. The only hitch is that the wealthy — or at least their political representatives — seem to hate inflation even more than they hate socialism.

 

The corporate superorganism. One more for the “finance-as-socialism” files. Here’s an interesting working paper from Jose Azar on the rise of cross-ownership of US corporations, thanks in part to index funds and other passive investment vehicles.

The probability that two randomly selected firms in the same industry from the S&P 1500 have a common shareholder with at least 5% stakes in both firms increased from less than 20% in 1999Q4 to around 90% in 2014Q4 (Figure 1). Thus, while there has been some degree of overlap for many decades, and overlap started increasing around 2000, the ubiquity of common ownership of large blocks of stock is a relatively recent phenomenon. The increase in common ownership coincided with the period of fastest growth in corporate profits and the fastest decline in the labor share since the end of World War II…

A common element of theories of the firm boundaries is that … either firms are separately owned, or they combine. In stock market economies, however, the forces of portfolio diversification lead to … blurring firm boundaries… In the limit, when all shareholders hold market portfolios, the ownership of the firms becomes exactly identical. From the point of view of the shareholders, these firms should act “in unison” to maximize the same objective function… In this situation the firms have in some sense become branches of a larger corporate superorganism.

The same assumptions that generate the “efficiency” of market outcomes imply that public ownership could be just as efficient — or more so in the case of monopolies.

The present paper provides a precise efficiency rationale for … consumer and employee representation at firms… Consumer and employee representation can reduce the markdown of wages relative to the marginal product of labor and therefore bring the economy closer to a competitive outcome. Moreover, this provides an efficiency rationale for wealth inequality reduction — reducing inequality makes control, ownership, consumption, and labor supply more aligned… In the limit, when agents are homogeneous and all firms are commonly owned, … stakeholder representation leads to a Pareto efficient outcome … even though there is no competition in the economy.

As Azar notes, cross-ownership of firms was a major concern for progressives in the early 20th century, expressed through things like the Pujo committee. But cross-ownership also has been a central theme of Marxists like Hilferding and Lenin. Azar’s “corporate superorganism” is basically Hilferding’s finance capital, with index funds playing the role of big banks. The logic runs the same way today as 100 years ago. If production is already organized as a collective enterprise run by professional managers in the interest of the capitalist class as a whole, why can’t it just as easily be managed in a broader social interest?

 

Global pivot? Gavyn Davies suggests that there has been a global turn toward more expansionary fiscal policy, with average rich-country fiscal balances shifting about 1.5 points toward deficit between 2013 and 2016. As he says,

This seems an obvious path at a time when governments can finance public investment programmes at less than zero real rates of interest. Even those who believe that government programmes tend to be inefficient and wasteful would have a hard time arguing that the real returns on public transport, housing, health and education are actually negative.

I don’t know about that last bit, though — they don’t seem to find it that hard.

 

Taylor rule toy. The Atlanta Fed has a cool new gadget that lets you calculate the interest rate under various versions of the Taylor Rule. It will definitely be useful in the classroom. Besides the obvious pedagogical value, it also dramatizes a larger point — that macroeconomic variables like “inflation” aren’t objects simply existing in the world, but depend on all kinds of non-obvious choices about measurement and definition.
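
For the curious, the canonical Taylor (1993) specification, just one of the variants the Atlanta Fed tool implements, fits in a few lines. The 0.5 coefficients and the 2 percent values for the neutral real rate and the inflation target are the standard textbook ones; the tool lets you vary all of them, along with the measures of inflation and slack.

    # Canonical Taylor (1993) rule; all arguments in percentage points.
    # Which "inflation" and which "output gap" to feed in is exactly the
    # kind of measurement choice the post is talking about.
    def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                    a_pi=0.5, a_y=0.5):
        return inflation + r_star + a_pi * (inflation - pi_star) + a_y * output_gap

    print(taylor_rate(inflation=1.5, output_gap=-2.0))  # -> 2.25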

 

The new royalists. DeLong summarizes the current debates about monetary policy:

1. Do we accept economic performance that all of our predecessors would have characterized as grossly subpar—having assigned the Federal Reserve and other independent central banks a mission and then kept from them the policy tools they need to successfully accomplish it?

2. Do we return the task of managing the business cycle to the political branches of government—so that they don’t just occasionally joggle the elbows of the technocratic professionals but actually take on a co-leading or a leading role?

3. Or do we extend the Federal Reserve’s toolkit in a structured way to give it the tools it needs?

This is a useful framework, as is the discussion that precedes it. But what jumped out to me is how he reflexively rejects option two. When it comes to the core questions of economic policy — growth, employment, the competing claims of labor and capital — the democratically accountable branches of government must play no role. This is all the more striking given his frank assessment of the performance of the technocrats who have been running the show for the past 30 years: “they—or, rather, we, for I am certainly one of the mainstream economists in the rough consensus—were very, tragically, dismally and grossly wrong.”

I think the idea that monetary policy is a matter of neutral, technical expertise was always a dodge, a cover for class interests. The cover has gotten threadbare in the past decade, as the range and visibility of central bank interventions has grown. But it’s striking how many people still seem to believe in a kind of constitutional monarchy when it comes to central banks. They can see people who call for epistocracy — rule by knowers — rather than democracy as slightly sinister clowns (which they are). And they can simultaneously see central bank independence as essential to good government, without feeling any cognitive dissonance.

 

Did extending unemployment insurance reduce employment? Arin Dube, Ethan Kaplan, Chris Boone and Lucas Goodman have a new paper on “Unemployment Insurance Generosity and Aggregate Employment.” From the abstract:

We estimate the impact of unemployment insurance (UI) extensions on aggregate employment during the Great Recession. Using a border discontinuity design, we compare employment dynamics in border counties of states with longer maximum UI benefit duration to contiguous counties in states with shorter durations between 2007 and 2014. … We find no statistically significant impact of increasing unemployment insurance generosity on aggregate employment. … Our point estimates vary in sign, but are uniformly small in magnitude and most are estimated with sufficient precision to rule out substantial impacts of the policy…. We can reject negative impacts on the employment-to-population ratio … in excess of 0.5 percentage points from the policy expansion.

Media advisory with synopsis is here.

 

On other blogs, other wonders

Larry Summers: Low labor force participation is mainly about weak demand, not demographics or other supply-side factors.

Nancy Folbre on Greg Mankiw’s claims that the one percent deserves whatever it gets.

At Crooked Timber, John Quiggin makes some familiar — but correct and important! — points about privatization of public services.

In the Baffler, Sam Kriss has some fun with the new atheists. I hadn’t encountered Kierkegaard’s parable of the madman who tells everyone who will listen “the world is round!” but it fits perfectly.

A valuable article in the Washington Post on cobalt mining in Africa. Tracing out commodity chains is something we really need more of.

Buzzfeed on Blue Apron. The reality of the robot future is often, as here, just that production has been reorganized to make workers less visible.

At Vox, Rachelle Sampson has a piece on corporate short-termism. Supports my sense that this is an area where there may be space to move left in a Clinton administration.

Sven Beckert has edited a new collection of essays on the relationship between slavery and the development of American capitalism. Should be worth looking at — his Empire of Cotton is magnificent.

At Dissent, here’s an interesting review of Jefferson Cowie’s and Robert Gordon’s very different but complementary books on the decline of American growth.

I Don’t See Any Method At All

I’ve felt for a while that most critiques of economics miss the mark. They start from the premise that economics is a systematic effort to understand the concrete social phenomena we call “the economy,” an effort that has gone wrong in some way.

I don’t think that’s the right way to think about it. I think McCloskey was right to say that economics is just what economists do. Economic theory is essentially a closed formal system; it’s a historical accident that there is some overlap between its technical vocabulary and the language used to describe concrete economic phenomena. Economics the discipline is to the economy the sphere of social reality as chess theory is to medieval history: The statement, say, that “queens are most effective when supported by strong bishops” might be reasonable in both domains, but studying its application in the one case will not help at all in applying it in the other. A few years ago Richard Posner said that he used to think economics meant the study of “rational” behavior in whatever domain, but after the financial crisis he decided it should mean the study of the behavior of the economy using whatever methodologies. (I can’t find the exact quote.) Descriptively, he was right the first time; but the point is, these are two different activities. Or to steal a line from my friend Suresh, the best way to think about what most economists do is as a kind of constrained-maximization poetry. It makes no more sense to ask “is it true” of their work than of a haiku.

One consequence of this is, as I say, that radical criticisms of the realism or logical consistency of orthodox economics do nothing to get us closer to a positive understanding of the economy. How is a raven unlike a writing desk? An endless number of ways, and enumerating them will leave you no wiser about either corvids or carpentry. Another consequence, the topic of the remainder of this post, is that when we turn to concrete economic questions there isn’t really a “mainstream” at all. Left critics want to take academic orthodoxy, a right-wing political vision, and the economic policy preferred by the established authorities, and roll them into a coherent package. But I don’t think you can. I think there is a mix of common-sense opinions, political prejudices, conventional business practice, and pragmatic rules of thumb, supported in an ad hoc, opportunistic way by bits and pieces of economic theory. It’s not possible to deduce the whole tottering pile from a few foundational texts.

More concretely: An economics education trains you to think in terms of real exchange — in terms of agents who (somehow or other) have come into possession of a bundle of goods, which they trade with each other. You can only use this framework to make statements about real economic phenomena if they are understood in terms of the supply side — if economic outcomes are understood in terms of different endowments of goods, or different real uses for them. Unless you’re in a position to self-consciously take another perspective, fitting your understanding of economic phenomena into a broader framework is going to mean expressing it as this kind of story, about the limited supply of real resources available, and the unlimited demands on them to meet real human needs. But there may be no sensible story of that kind to tell.

More concretely: What are the major macroeconomic developments of the past ten to twenty years, compared, say, with the previous fifty? For the US and most other developed countries, the list might look like:

– low and falling inflation

– low and falling interest rates

– slower growth of output

– slower growth of employment

– low business investment

– slower growth of labor productivity

– a declining share of wages in income

If you pick up an economics textbook and try to apply it to the world around you, these are some of the main phenomena you’d want to explain. What does the orthodox, supply-side theory tell us?

The textbook says that lower inflation is normally the result of a positive supply shock — an increase in real resources or an improvement in technology. OK. But then what do we make of the slowdown in output and productivity?

The textbook says that, over the long run, interest rates must reflect the marginal product of capital — the central bank (and monetary factors in general) can only change interest rates in the short run, not over a decade or more. In the Walrasian world, the interest rate and the return on investment are the same thing. So a sustained decline in interest rates must mean a decline in the marginal product of capital.

OK. So in combination with the slowdown in output growth, that suggests a negative technological shock. But that should mean higher inflation. Didn’t we just say that lower inflation implies a positive technological shock?

Employment growth in this framework is normally determined by demographics, or perhaps by structural changes in labor markets that change the effective labor supply. Slower employment growth means a falling labor supply — but that should, again, be inflationary. And it should be associated with higher wages: If labor is becoming relatively scarce, its price should rise. Yes, the textbook combines a bargaining model of wage determination for the short run with a marginal product story for the long run, without ever explaining how they hook up, but in this case it doesn’t matter, the two stories agree. A fall in the labor supply will result in a rise in the marginal product of labor as it’s withdrawn from the least productive activities — that’s what “marginal” means! So either way the demographic story of falling employment is inconsistent with low inflation, with a falling wage share, and with the slowdown in productivity growth.

Slower growth of labor productivity could be explained by an increase in labor supply — but then why has employment decelerated so sharply? More often it’s taken as technologically determined. Slower productivity growth then implies a slowdown in innovation — which at least is consistent with low interest rates and low investment. But this “negative technology shock” should, again, be inflationary. And it should be associated with a fall in the return to capital, not a rise.

On the other hand, the decline in the labor share is supposed to reflect a change in productive technology that encourages substitution of capital for labor, robots and all that. But how is this reconciled with the fall in interest rates, in investment and in labor productivity? To replace workers with robots, someone has to make the robots, and someone has to buy them. And by definition this raises the productivity of the remaining workers.

Which subset of these mutually incompatible stories does the “mainstream” actually believe? I don’t know that they consistently believe any of them. My impression is that people adopt one or another based on the question at hand, while avoiding any systematic analysis through violent abuse of the ceteris paribus condition.

To paraphrase Leijonhufvud, on Mondays and Wednesdays wages are low because technological progress has slowed down, holding down labor productivity. On Tuesdays and Thursdays wages are low because technological progress has sped up, substituting capital for labor. Students may come away a bit confused but the main takeaway is clear: Low wages are the result of inexorable, exogenous technological change, and not of any kind of political choice. And certainly not of weak aggregate demand.

Larry Summers, in this actually quite good Washington Post piece, at least is no longer talking about robots. But he can’t completely resist the supply-side lure: “The situation is worse in other countries with more structural issues and slower labor-force growth.” Wait, why would they be worse? As he himself says, “our problem today is insufficient inflation,” so what’s needed “is to convince people that prices will rise at target rates in the future,” which will “require … very tight markets.” If that’s true, then restrictions on labor supply are a good thing — they make it easier to generate wage and price increases. But that is still an unthought.

I admit, Summers does go on to say:

In the presence of chronic excess supply, structural reform has the risk of spurring disinflation rather than contributing to a necessary increase in inflation. There is, in fact, a case for strengthening entitlement benefits so as to promote current demand. The key point is that the traditional OECD-type recommendations cannot be right as both a response to inflationary pressures and deflationary pressures. They were more right historically than they are today.

That’s progress, for sure — “less right” is a step toward “completely wrong”. The next step will be to say what his argument logically requires. If the problem is as he describes it then structural “problems” are part of the solution.

Posts in Three Lines

There is no long run. This short note from the Fed suggests that the failure of output to return to its earlier trend following the Great Recession is not an anomaly; historically, recessions normally involve permanent output losses. This working paper by Lawrence Summers and Lant Pritchett argues that it is very hard to find persistent growth differences between countries. From opposite directions, these results suggest that there is no reason to think that supposedly “slow” variables are more stable than “fast” ones; in other words, there is no economically meaningful long run.

*
Krugman on the archaeology of “price stability.” Here is Paul Krugman’s talk from the same Roosevelt Institute/AFR/EPI event I spoke at last month. The whole thing is quite good but the most interesting part to me was on the (quite recent) origins of the idea that price stability means 2 percent inflation. From Adam Smith until the 1990s, price stability meant just that, zero inflation; but in the postwar decades it was more or less accepted that that was one objective to trade off against others, rather than the sine qua non of policy success.
*
Capital is back — or is it? Here’s an interesting figure from Piketty and Zucman’s 2013 paper, showing the long-term evolution of capital and labor shares in the UK and France:
What we see is not a stable or rising capital share, but rather a secular shift in favor of labor income, presumably reflecting the long-term growth of the political power of working people from the early 19th century, when unions were illegal, labor legislation was unknown and only property owners could vote. What’s funny is that this long-term decline in the power of capital is so clearly visible in Piketty’s data, but so invisible in the discussion of his book.
*
Orange is the big lie. Like lots of people, I watched the Netflix show Orange Is the New Black and initially enjoyed it, enough to read the memoir on which it’s based. It’s not often you see ideology operating so visibly: The show systematically omits the book’s depictions of abuse and racism among the guards and solidarity among the prisoners, and introduces violence from the prisoners and compassion from the authorities that is not present in the book. For example, both book and show feature an affair between a female prisoner and a male guard, but in the show nothing happens to the prisoner while the guard is fired and prosecuted, while in reality the prisoner was thrown into solitary confinement and there were no consequences for the guard.

Cochrane on Economic Orthodoxies

John Cochrane has a good post saying something I’ve been thinking about for a while. There are two disjoint orthodoxies in economics, one in policy and one in scholarship. Both are secure on their own territory, but they have little connection with each other. This isn’t obvious from the outside since many of the same institutions and even individuals contribute to the reproduction of both orthodoxies, but as intellectual projects they are entirely distinct.

Cochrane:

There is … a sharp divide between macroeconomics used in the top levels of policy circles, and that used in academia. 

Static ISLM / ASAD modeling and thinking really did pretty much disappear from academic research economics around 1980. You won’t find it taught in any PhD programs, you won’t find it at any conferences …, you won’t find it in any academic journals…  “New-Keynesian” DSGE (Dynamic Stochastic General Equilibrium) models are much in vogue, but have really nothing to do with static Keynesian ISLM modeling. Many authors would like it to be so, but when you read the equations you will find these are just utterly different models. 

Static ISLM thinking pervades the upper reaches of the policy world. … If you read the analysis guiding policy at the IMF, the Fed, the OECD, the CBO; and the larger policy debate in the pages of the Economist, New York Times, and quite often even the Wall Street Journal, policy analysis is pretty much unchanged from the Keynesian ISLM, ASAD, analysis I learned from Dornbusch and Fischer’s textbook, taught in Bob Solow’s undergraduate Macro class at MIT about 1978.

Note that Cochrane is agnostic about which of these projects is on the wrong track. This is a habit of mind we should all try to cultivate: The interesting questions are the ones where we can seriously imagine more than one answer.

On Other Blogs, Other Wonders

Some things worth reading:

1. Ozgur Orhangazi on the coming balance of payments crisis in Turkey:

Since the late 1970s, “developing and emerging economies” (DEEs) have experienced boom-bust cycles of private capital flows… The latest boom began in the aftermath of the 2008 U.S. financial crisis, fed by the quantitative easing policies of the Federal Reserve and the European Central Bank (ECB). Much of the Fed’s injections of credit into the system ended up in the stock markets of advanced economies and even more in the DEEs.
This latest wave of capital flows into DEEs led to currency appreciations, growing current account deficits, credit expansions and asset bubbles. …  

Turkey is a case in point. While in the last decade it gained praise for its strong growth performance, now it is considered to be among the “fragile five”, together with Brazil, India, Indonesia, and South Africa. The biggest concern is the large current account deficit (reaching as much as 7.5 percent of the GDP by the end of November 2013) that is being financed mostly by short-term volatile capital inflows. … 

Starting right after the Fed’s announcement in May 2013, the Turkish lira began losing its value. … From May to the beginning of this week, the lira lost about 30 percent of its value, forcing the Central Bank, which so far has been resistant to increasing interest rates, to sharply increase interest rates at midnight after an emergency meeting on January 28. 

What is next? While the Central Bank’s sharp interest rate hike has, at least for now, stopped the free fall of the lira, the future does not seem very rosy… the depreciation of the lira will create serious problems for the firms that have borrowed in foreign currencies. … This is likely to lead to, at the very least, payment problems; and most probably to many bankruptcies…

There’s been a lot of discussion of whether low interest rates in the US and other developed countries contribute to excessive credit expansion and asset bubbles here. But it’s worth remembering that these problems are much worse for developing countries, which — in a world of free financial flows — share in the credit conditions of the rich countries whether they want to or not. One can be skeptical of particular forms of the “far too low for far too long” argument and still recognize that if aggregate expenditure is to be stabilized solely through conventional monetary policy, the shifts in interest rates required may be big enough to be destabilizing on other dimensions.
2. Martin Rapetti on the coming balance of payments crisis in Argentina: 

In July 2011, the stock of FX reserves was around U$ 52 billions. The exchange rate was 4.1 pesos per dollar… In August 2011 —three months before the presidential election— the Central Bank started losing reserves. In November 2011, just a few weeks after Cristina Kirchner was re-elected, FX reserves were U$ 46 billions. Since the depletion of FX reserves showed no sign of stopping, the authorities started to implement a series of measures to limit the demand for foreign currency. The controls triggered the blossoming of black markets. FX reserves kept falling, very rapidly since early 2013. In November 2013, after a poor mid-term election, the president fired the governor of the Central Bank and put in charge a new economic team. FX reserves were U$ 33 billions and the exchange rate had reached 6 pesos per dollar…. a week ago, the price of US dollar jumped to 8 pesos. FX reserves are now close to U$ 28 billions. The Central Bank has managed to keep the exchange rate at 8 pesos at the cost of loosing U$ 150/200 millions of reserves per day. There are still widespread expectations of further devaluation and few believe that authorities can sustain the exchange rate at the new level, especially because the rate of inflation has accelerated above 30% annually. In short, FX reserves have so far fallen by 46% and the exchange rate has risen by 95%. If it looks like a dog, walks like a dog and barks like a dog, then… it’s probably a balance of payments crisis.

In Martin’s telling, the root of the problem here is not monetary policy in the developed world, but rather the difficulty of using the exchange rate as a nominal anchor to control inflation.
3. Laura Tanenbaum on how nostalgia, real estate and race have shaped the idea of “Brooklyn”:

Brooklyn nostalgia has done more than sell hot dogs and baseball memorabilia. … in the early 1960s a flourishing literature of … “urban pastoralism” challenged developers and urban renewal through nostalgic appeals to the authenticity of “urban villages” and daily street life. These writings by artists, activists, and academics, most famously Jane Jacobs’s The Death and Life of Great American Cities, inspired and shaped the views of “brownstoners,” homeowners who sought to resist the tide of suburbanization and white flight. But by the 1970s, … coalitions between brownstoners and low-income residents had unraveled; Democratic New York City mayor Ed Koch turned this politics toward conservative ends. Koch “reveled in ethnic kitsch and cultivated a folksy image of a neighborhood New Yorker” while complaining about “poverty pimps.” It was the Southern strategy with an outer-borough accent. 

More than thirty years later, the crush of development marches forward. Brooklyn is a global brand: overpriced trinkets to be sold at Brooklyn Pizza in Manila, Brooklyn Coffeeshop in Curitiba, Brazil, or one of the other global Brooklyns featured in New York magazine’s retrospective of the Bloomberg era. But hip culture in the United States has long had a deep romantic and nostalgic streak, and the hipster’s most recent incarnation, central to the current branding of Brooklyn, has been no exception. 

The role of artists and hipsters in gentrification — specifically, their degree of complicity in the resulting displacement of low-income residents — has been endlessly debated. Were they the developers’ dupes, their victims, or their willing accomplices? Less often commented upon is the implicit equation behind these questions: associating artists with gentrification suggests that an artist has to be white. This is a particularly bitter irony for a borough and a city that has historically been and remains home to some of the most important African-American artists, musicians, filmmakers, writers, and intellectuals in the country. Looking at hipster culture in relationship to their work offers alternative ways of thinking about how the city changes, how we remember it, and what it might mean to oppose the march of neoliberal development without relying on a nostalgia that reflects our memories and desires more than our actual history.

It’s a great piece. Read the whole thing, as they say.

What Drives Trade Flows? Mostly Demand, Not Prices

I just participated (for the last time, thank god) in the UMass-New School economics graduate student conference, which left me feeling pretty good about the next generation of heterodox economists. [1] A bunch of good stuff was presented, but for my money, the best and most important work was Enno Schröder’s: “Aggregate Demand (Not Competitiveness) Caused the German Trade Surplus and the U.S. Deficit.” Unfortunately, the paper is not yet online — I’ll link to it the moment it is — but here are his slides.

The starting point of his analysis is that, as a matter of accounting, we can write the ratio of a country’s exports to imports as:

X/M = (m*/m) (D*/D)

where X and M are export and import volumes, m* is the fraction of foreign expenditure spent on the home country’s goods, m is the fraction of the home expenditure spent on foreign goods, and D* and D are total foreign and home expenditure.
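
To see why this is pure accounting: exports are the home-goods share of foreign spending, X = m* D*, and imports are the foreign-goods share of home spending, M = m D. Dividing the first equation by the second gives X/M = (m*/m) (D*/D).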

This is true by definition. But the advantage of thinking of trade flows this way is that it allows us to separate the changes in trade attributable to expenditure switching (including, of course, the effect of relative price changes) and the changes attributable to different growth rates of expenditure. In other words, it lets us distinguish the changes in trade flows that are due to changes in how each dollar is spent in a given country, from changes in trade flows that are due to changes in the distribution of dollars across countries.

(These look similar to price and income elasticities, but they are not the same. Elasticities are estimated, while this is an accounting decomposition. And changes in m and m*, in this framework, capture all factors that lead to a shift in the import share of expenditure, not just relative prices.)

The heart of the paper is an exercise in historical accounting, decomposing changes in trade ratios into m*/m and D*/D. We can think of these as counterfactual exercises: How would trade look if growth rates were all equal, and each country’s distribution of spending across countries evolved as it did historically; and how would trade look if each country had had a constant distribution of spending across countries, and growth rates were what they were historically? The second question is roughly equivalent to: How much of the change in trade flows could we predict if we knew expenditure growth rates for each country and nothing else?
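
Here is a minimal sketch of the decomposition with made-up numbers, invented purely for illustration (this is not Schröder’s data or code). The “demand-only” counterfactual freezes the expenditure shares at their period averages; the “switching-only” counterfactual equalizes demand growth across countries.

    import numpy as np

    # Invented toy data: home expenditure grows slowly, foreign expenditure
    # grows fast, and the foreign share spent on home goods drifts down.
    years  = np.arange(2000, 2011)
    D      = np.linspace(100.0, 115.0, len(years))    # home expenditure
    D_star = np.linspace(1000.0, 1400.0, len(years))  # foreign expenditure
    m      = np.full(len(years), 0.30)                # home share spent on imports
    m_star = np.linspace(0.030, 0.028, len(years))    # foreign share spent on home goods

    actual = (m_star / m) * (D_star / D)              # the trade ratio X/M

    # Counterfactual 1: shares frozen at period averages, so only relative
    # demand growth moves the trade ratio.
    demand_only = (m_star.mean() / m.mean()) * (D_star / D)

    # Counterfactual 2: demand growth equalized, so only expenditure
    # switching ("competitiveness") moves the trade ratio.
    switching_only = (m_star / m) * (D_star[0] / D[0])

    for y, a, d, s in zip(years, actual, demand_only, switching_only):
        print(f"{y}: actual {a:.2f}  demand-only {d:.2f}  switching-only {s:.2f}")

In this toy run the trade ratio rises from 1.00 to about 1.14 even though the share measure of competitiveness, m*/m, falls throughout, and the demand-only counterfactual alone would predict a still larger surplus. That is the German pattern in the paper.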

The key results are in the figure below. Look particularly at Germany, in the middle right of the first panel:

The dotted line is the actual ratio of exports to imports. Since Germany has recently had a trade surplus, the line lies above one — over the past decade, German exports have exceeded German imports by about 10 percent. The dark black line is the counterfactual ratio if the division of each country’s expenditures among various countries’ goods had remained fixed at their average level over the whole period. When the dark black line is falling, that indicates a country growing more rapidly than the countries it exports to; with the share of expenditure on imports fixed, higher income means more imports and a trade balance moving toward deficit. Similarly, when the black line is rising, that indicates a country’s total expenditure growing more slowly than expenditure in its export markets, as was the case for Germany from the early 1990s until 2008. The light gray line is the other counterfactual — the path trade would have followed if all countries had grown at an equal rate, so that trade depended only on changes in competitiveness. When the dotted line and the heavy black line move more or less together, we can say that shifts in trade are mostly a matter of aggregate demand; when the dotted line and the gray line move together, mostly a matter of competitiveness (which, again, includes all factors that cause people to shift expenditure between different countries’ goods, including but not limited to exchange rates).
The point here is that if you only knew the growth of income in Germany and its trade partners, and nothing at all about German wages or productivity, you could fully explain the German trade surplus of the past decade. In fact, based on income growth alone you would predict an even larger surplus; the fraction of the world’s dollars falling on German goods actually fell. Or as Enno puts it: During the period of the German export boom, Germany became less, not more, competitive. [2] The cases of Spain, Portugal and Greece (tho not Italy) are symmetrical: Despite the supposed loss of price competitiveness they experienced under the euro, the share of expenditure falling on these countries’ goods and services actually rose during the periods when their trade balances worsened; their growing deficits were entirely a product of income growth more rapid than their trade partners’.
These are tremendously important results. In my opinion, they are fatal to the claim (advanced by Krugman among others) that the root of the European crisis is the inability to adjust exchange rates, and that a devaluation in the periphery would be sufficient to restore balanced trade. (It is important to remember, in this context, that southern Europe was running trade deficits for many years before the establishment of the euro.) They also imply a strong criticism of free trade. If trade flows depend mostly or entirely on relative income, and if large trade imbalances are unsustainable for most countries, then relative growth rates are going to be constrained by import shares, which means that most countries are going to grow below their potential. (This is similar to the old balance-of-payments constrained growth argument.) But the key point, as Enno stresses, is that both the “left” argument about low German wage growth and the “right” argument about high German productivity growth are irrelevant to the historical development of German export surpluses. Slower income growth in Germany than its trade partners explains the whole story.
I really like the substantive argument of this paper. But I love the methodology. There is an econometrics section, which is interesting (among other things, he finds that the Marshall-Lerner condition is not satisfied for Germany, another blow to the relative-prices story of the euro crisis). But the main conclusions of the paper don’t depend in any way on it. In fact, the paper can be seen as an example of an alternative to econometrics as a methodology for empirical economics: historical accounting, or decomposition analysis. This is the same basic approach that Arjun Jayadev and I take in our paper on household debt, and it has long been used to analyze the historical evolution of public debt. Another interesting application of this kind of historical accounting is the decomposition of changes in the profit rate into the effects of the profit share, the utilization rate, and the technologically determined capital-output ratio, an approach pioneered by Thomas Weisskopf and developed by others, including Ed Wolff, Erdogan Bakir, and my teacher David Kotz.
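For concreteness, the profit-rate decomposition just mentioned rests on a simple identity. This is the standard Weisskopf-style formulation as I understand it; the notation here is mine ($\Pi$ for profits, $Y$ for output, $Y^*$ for potential output, $K$ for the capital stock):

$$ r \equiv \frac{\Pi}{K} = \frac{\Pi}{Y} \cdot \frac{Y}{Y^*} \cdot \frac{Y^*}{K} $$

That is, the profit rate is the profit share, times the utilization rate, times the potential output-capital ratio. Taking log differences, a change in $r$ splits additively into changes in the three components, which is what lets one attribute the historical movement of the profit rate over any period to each factor separately.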
People often say that these accounting exercises can’t be used to establish claims about causality. And strictly speaking this is true, though they certainly can be used to reject certain causal stories. But that’s true of econometrics too. It’s worth taking a step back and remembering that no matter how fancy our econometrics, all we are ever doing with those techniques is describing the characteristics of a matrix. We have the observations we have, and all we can do is try to summarize the relationships between them in some useful way. When we make causal claims using econometrics, it’s by treating the matrix as if it were drawn from some stable underlying probability distribution function (pdf). One of the great things about these decomposition exercises — or about other empirical techniques, like principal component analysis — is that they limit themselves to describing the actual data. In many cases — lots of labor economics, for instance — the fiction of a stable underlying pdf is perfectly reasonable. But in other cases — including, I think, almost all interesting questions in macroeconomics — the conventional econometrics approach is a bit like asking: If a whale were the top of an island, what would the underlying geology look like? It’s certainly possible to come up with an answer to that question. But it is probably not the simplest way of describing the shape of the whale.
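A small illustration of the contrast: a principal component analysis is nothing but a re-description of the data matrix you actually have. Here is a minimal sketch in Python, on a made-up matrix; the names and numbers are placeholders of mine, not anyone’s actual data.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))        # stand-in for an observed data matrix
Xc = X - X.mean(axis=0)              # center each column

# SVD of the centered matrix gives the principal components directly.
U, svals, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * svals                   # the observations re-expressed in component space
explained = svals**2 / (svals**2).sum()   # share of observed variance per component
print(explained)
```

Nothing in this computation appeals to an underlying probability distribution; the components and their variance shares are facts about this particular matrix, full stop. Treating them as properties of data not yet observed is an additional assumption, exactly the “stable pdf” move described above.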
[1] A perennial question at these things is whether we should continue identifying ourselves as “heterodox,” or just say we’re doing economics. Personally, I’ll be happy to give up the distinct heterodox identity just as soon as economists are willing to give up their distinct identity and dissolve into the larger population of social scientists, or of guys with opinions.
[2] The results for the US are symmetrical with those for Germany: the growing US trade deficit since 1990 is fully explained by more rapid US income growth relative to its trade partners. But it’s worth noting that China is not: Knowing only China’s relative income growth, which has of course been very high, you would predict that China would be moving toward trade deficits, when in fact it has been moving toward surplus. This is consistent with a story that explains China’s trade surpluses by an undervalued currency, tho it is consistent with other stories as well.

… or Possibly Social Scientists?

On the other hand!

If you’re thinking, yes, economists have a reflexive pro-market bias, then there is one question in the Booth polls to which the answer will be surprising: Are CEOs overpaid? Remarkably, a large majority says Yes.

As a friend points out, this result would almost certainly have been very different a few years ago. The financial crisis, OWS and the new political discourse of the 99%, or something else? I don’t know, but progress is progress, and it should be acknowledged, and celebrated. We on the left are a little too invested in our grumpiness sometimes, I think. A lot of people, ok, were overly optimistic about the extent to which the orthodoxies in macro would be discredited by the crisis and Great Recession. But still, something has changed.

Case in point: This paper (ht: AD) by Thomas Philippon and Ariell Reshef, on wages in the financial industry. It’s the same paper, but look how the abstract changes from pre- to post-Lehman. Before:

Over the past 60 years, the U.S. financial sector has grown from 2.3% to 7.7% of GDP. While the growth in the share of value added has been fairly linear, it hides a dramatic change in the composition of skills and occupations. In the early 1980s, the financial sector started paying higher wages and hiring more skilled individuals than the rest of economy. These trends reflect a shift away from low-skill jobs and towards market-oriented activities within the sector. Our evidence suggests that technological and financial innovations both played a role in this transformation. We also document an increase in relative wages, controlling for education, which partly reflects an increase in unemployment risk: Finance jobs used to be safer than other jobs in the private sector, but this is no longer the case.

And after:

We use detailed information about wages, education and occupations to shed light on the evolution of the U.S. financial sector over the past century. We uncover a set of new, interrelated stylized facts: financial jobs were relatively skill intensive, complex, and highly paid until the 1930s and after the 1980s, but not in the interim period. We investigate the determinants of this evolution and find that financial deregulation and corporate activities linked to IPOs and credit risk increase the demand for skills in financial jobs. Computers and information technology play a more limited role. Our analysis also shows that wages in finance were excessively high around 1930 and from the mid 1990s until 2006. For the recent period we estimate that rents accounted for 30% to 50% of the wage differential between the financial sector and the rest of the private sector.

Look at the phrases that appear in one abstract but not the other. In 2007, we have a story of skills, technology and compensating differentials. A year and change later, the star players are deregulation and rents, with technology demoted to “a limited role.” I don’t say this as a criticism of Philippon and Reshef, who deserve only credit for being willing to publicly change their beliefs in the face of new evidence. But this can cut both ways. That the same data can be, and is, used to tell such different stories is a sign that something besides disinterested social science is going on here.

Economists: Actively Evil Neoliberal Ideologues or Soulless Technocratic Hacks?

… or in other words, does economics (as it’s currently constituted) inherently promote a vision of markets for everything and no rights but property rights? (A vision that, obviously, conforms nicely to the interests of the owners of capital.) Or is the role of economics in upholding neoliberalism mainly the work of apolitical technicians, administrators and scientists manqués, who could just as comfortably supply arguments for more regulation and a larger public sector if that’s what those in power were asking for?

Well, like nature vs. nurture, or whether to get the sweet brunch or the savory, it’s a debate that will never be fully resolved. (Go with savory, unless you’re, like, 12 years old.) But new evidence does sometimes come in.

Like those polls of economists that the University of Chicago business school does; has everybody seen those? For those of us who’ve been debating this question, these things are a gold mine.

The latest question, on rent control, has Peter Dorman rightly exercised. As he points out, the question — whether rent regulations have had “a positive impact over the past three decades on the amount and quality of broadly affordable rental housing in cities that have used them” — omits the genuine goal of rent regulation, neighborhood stabilization:

The most compelling argument for rent control is neighborhood stabilization, the idea that social capital in an urban environment requires stable residence patterns.  If prices are volatile, and this leads to a lot of residential turnover, the result can be a less desirable neighborhood for everyone.  … not a single textbook treatment of rent control mentions stabilization as an objective, even though this is a standard element in the real-world rhetoric surrounding this issue. 

I would just add that a diversity of income levels in a neighborhood is also a goal of rent regulation, as is recognizing the legitimate interest of long-time tenants in staying in their homes. (Not all rights are property rights!) So by framing the question purely in terms of the housing supply, the Booth people have already disconnected it from actual policy debates in a way favorable to orthodoxy. Anyway, no surprise, orthodoxy wins, with only a single respondent favoring rent regulation. (And I think that one might be a typo.) My favorite answer is the person who said, “Rent control will have similar effects to any price control.” That’s the beauty of economics, isn’t it? — all markets are exactly the same.

Some of the other ones are even better. Check out the one on education, which asks if all money currently being spent on K-12 education should be given out as vouchers instead. (Why not cash?) By a margin of 36 to 19 (or 41 to 23 when the answers are weighted by confidence) the economists vote, Hells yeah, let’s abolish the public school system. Presumably they’re mostly reasoning along the same lines as Michael Greenstone of MIT: “Competition is likely beneficial on average. Less clear that all students would benefit leading to tough questions about social welfare functions” — which doesn’t stop him from signing up in favor of vouchers. The presumed benefits of competition are dispositive, while distributional questions, though “tough” in principle, can evidently be ignored in practice. On the other hand, props to Nancy Stokey of the U of C (strongly agree, confidence 9 out of 10) for spelling it right out: “It’s the only way to break the unions.” (Yes, that’s what she wrote.) So, hardly definitive, but definitely some ammo for Team Ideologue.

People sometimes say that academic economists just reflect the views of the country at large, or even the more-liberal-than-median views of other academics or educated professionals. And on some issues, that’s certainly true. (Booth also gets a solid majority in favor of drug law reform.) But come on. Replacing the public school system with vouchers is a far-right, fringe position in almost any significant demographic — except, it would seem, professional economists.

Back to rent control. Jodi Beggs enthusiastically endorses the consensus, but her conscience then compels her to add:

Techhhhhnically speaking [1], if none of the housing in an area was deemed “affordable” before the price ceiling, then the price ceiling could, I suppose, increase the quantity of affordable housing. (In fact, Pinelopi Goldberg specifically points this out.) [True. Goldberg’s answers in general are a beacon of sanity.] In most realistic cases, however, the rent control laws are going to make builders think twice about putting up residential properties and make potential landlords think twice about getting into the rental business. 

It’s awfully hard not to read the drawn-out adverb as a parapraxis, indicating resistance to the heretical thought that, in fact, economic theory gives no answer to the question of whether rent control laws increase or decrease the supply of affordable housing. More concretely, Beggs’s “realistic cases” are a figment of her imagination, or rather her ideology; in all actual cases, rent control laws in the major American cities that have them (there are only a couple) apply only to units built before a certain date. In New York City, for instance, rent regulations DO NOT APPLY to anything built since 1974. Hard to believe that builders are thinking twice about putting up new buildings because of rent stabilization, when it hasn’t applied to new buildings in nearly 40 years. But hey, why should you have to actually know something about the policy you’re discussing, to walk through the old familiar supply and demand graphs showing why Price Controls Are Bad?

“Nothing dulls the mind,” says Feyerabend, “as thoroughly as a sequence of familiar notions.”

(I can’t resist putting down the quote in full:

Writing… [is] almost like composing a work of art. There is some overall pattern, very vague at first, but sufficiently well defined to provide … a starting point. Then come the details — arranging the words in sentences and paragraphs. I choose my words very carefully — they must sound right, must have the right rhythm, and their meaning must be slightly off-center; nothing dulls the mind as thoroughly as a sequence of familiar notions. Then comes the story. It should be interesting and comprehensible, and it should have some unusual twists. I avoid “systematic” analyses. The elements hang together beautifully, but the argument itself is from outer space, as it were, unless it is connected with the lives and interests of individuals or special groups. Of course, it is already so connected, otherwise it would not be understood, but the connection is concealed, which means that, strictly speaking, a “systematic” analysis is a fraud. So why not avoid the fraud by using stories right away?)