Eich on Marx on Money

I’ve been using some of Stefan Eich’s The Currency of Politics in the graduate class I’m teaching this semester. (I read it last year, after seeing a glowing mention of it by Adam Tooze.) This week, we talked about his chapter on Marx, which reminded me that I wrote some notes on it when I first read it. I thought it might be worthwhile turning them into a blogpost, incorporating some points that came out in the discussion in today’s class.

Eich begins with one commonly held idea of Marx’s views of money: that he was “a more or less closeted adherent of metallism who essentially accepted … gold-standard presumptions” — specifically, that the relative value of commodities is prior to whatever we happen to use for units of account and payments, that the value of gold (or whatever is used for money) is determined just like that of any other commodity, and that changes to the monetary system can’t have any effects on real activity (or at least, only disruptive ones). Eich’s argument is that while Marx’s theoretical views on money were more subtle and complex than this, he did share the operational conclusion that monetary reform was a dead end for political action. In Eich’s summary, while at the time of the Manifesto Marx still believed in a public takeover of the banking system as part of a socialist program, by the 1860s he had come to believe that “any activist monetary policy to alter the level of investment, let alone … shake off exploitation, was futile.”

Marx’s arguments on money of course developed in response to the arguments of Proudhon and similar socialists like Robert Owen. For these socialists (in Eich’s telling; but it seems right to me) scarcity of gold and limits on credit were “obstacles to reciprocal exchange,” preventing people from undertaking all kinds of productive activity on a cooperative basis and creating conditions of material scarcity and dependence on employers. “A People’s Bank,” as Eich writes channeling Proudhon, “was the only way to guarantee the meaningfulness of the right to work.” Ordinary people are capable of doing much more socially useful (and remunerative) work than whatever jobs they are offered. But under the prevailing monopoly of credit, they have no way to convert their capacity to work into access to the means of production they would need to realize it.

Why, we can imagine Proudhon asking, do you need to work for a boss? Because he owns the factory. And why does he own the factory? Is it because only he had the necessary skills, dedication, and ambition to establish it? No, of course not. It’s because only he had the money to pay for it. Democratize money, and you can democratize production.

Marx turned this around. Rather than money being the reason why a small group of employers control the means of production, it is, under capitalism, simply an expression of that fact. And if we are going to attribute this control to a prior monopoly, it should be to land and the productive forces of nature, not money. The capitalist class inherits its coercive power from the landlord side of its family tree, not the banker side.

In Marx’s view, Proudhon had turned the fundamental reality of life under capitalism — that people are free to exchange their labor power for any other commodity — into an ideal. He attributed the negative consequences of organizing society around market exchange to monopolies and other deviations from it. (This is a criticism that might also be leveled against many subsequent reformers, including the “market socialists” of our own time.)

That labor time is the center of gravity for prices is not a universal fact about commodities. It is a tendency — only a tendency — under capitalism specifically, as a result of several concrete social developments. First, again, production is carried out by wage labor. Second, wage labor is deskilled, homogenized, proletarianized. The equivalence of one hour of anyone’s labor for one hour of anyone else’s is a sociological fact reflecting the fact that workers really are interchangeable. Just as important, production must be carried out for profit, because capitalists compete both in the markets for their product and for the means of production. It is the objective need for them to produce at the lowest possible cost, or else cease being capitalists, that ensures that production is carried out with the socially necessary labor time and no more.

The equivalence of commodities produced by the same amount of labor is the result of proletarianization on the one side and the hard budget constraint on the other. The compulsion of the market, enforced by the “artificial” scarcity of money, is not an illegitimate deviation from the logic of equal exchange but its precondition. The need for money plays an essential coordinating function. This doesn’t mean that no other form of coordination is possible. But if you want to dethrone money-owners from control of the production process, you have to first create another way to organize it.

So one version of Marx’s response to Proudhon might go like this. In a world where production was not organized on capitalist lines, we could still have market exchange of various things. But the prices would be more or less conventional. Productive activity, on the other side, would be embedded in all kinds of other social relationships. We would not have commodities produced for sale by abstract labor, but particular use values produced by particular forms of activity carried out by particular people. Given the integration of production with the rest of life, there would be no way to quantitatively compare the amount of labor time embodied in different objects of exchange; and even if there were, the immobility of embedded labor means there would be no tendency for prices to adjust in line with those quantities. The situation that Proudhon is setting up as the ideal — prices corresponding to labor time, which can be freely exchanged for commodities of equal value — reflects a situation where labor is already proletarianized. Only when workers have lost any social ties to their work, and labor has been separated from the rest of life, does labor time become commensurable. 

In the real world, the owners of the means of production have harnessed all our collective efforts into the production of commodities by wage labor for sale in the market, in order to accumulate more means of production – that is to say, capital. In this world, and only in this world, quantitative comparisons in terms of money must reflect the amount of labor required for production. Changes to the money system cannot change these relative values. At the same time, it’s only the requirement to produce for the market that ensures that one hour of labor really is equivalent to any other. Proudhon’s system of labor chits, in which anyone who spent an hour doing something could get a claim on the product of an hour of anyone else’s labor, would destroy the equivalence that the chits are supposed to represent. (A similar criticism might be made of job guarantee proposals today.)

For the mature Marx, money is merely “the form of appearance of the measure of value which is immanent in commodities, namely labor time.” There is a great deal to unpack in a statement like this. But the conclusion that changes in the quantity or form of money can have no effect on relative prices does indeed seem to be shared with the gold-standard orthodoxy of his time (and of ours). 

The difference is that for Marx, that quantifiable labor time was not a fact of nature. People’s productive activities become uniform and homogeneous only as work is proletarianized, deskilled, and organized in pursuit of profit. It is not a general fact about exchange. Money might be neutral in the sense of not entering into the determination of relative prices, which are determined by labor time. But the existence of money is essential for there to be relative prices at all. The possibility of transforming authority over particular production processes into claims on the social product in general is a precondition for generalized wage labor to exist. 

While Marx does look like a commodity money theorist in some important ways, he shared with the credit-money theorists, and greatly developed, the idea — mostly implicit until then — that the productive capacities of a society are not something that exist prior to exchange, but develop only through the generalization of monetary exchange. Much more than earlier writers, or than Keynes and later Keynesians, he foregrounded the qualitative transformation of society that comes with the organization of production around the pursuit of money.

You could get much of this from any number of writers on Marx. What is a bit more distinctive in the Eich chapter is the links he makes between the theory and Marx’s political engagement. When Marx was writing his critique of Proudhon’s monetary-reform proposals in the 1840s, Eich observes, he and Engels still believed that public ownership of the banks was an important plank in the socialist program. Democratically-controlled banks would “make it possible to regulate the credit system in the interest of the people as a whole, and … undermine the dominion of the great money men. Further, by gradually substituting paper money for gold and silver coin, the universal means of exchange … will be cheapened.” At this point they still held out the idea that public credit could both alleviate monetary bottlenecks on production and be a move toward the regulation of production “according to the general interest of society as represented in the state.”

By the 1850s, however, Marx had grown skeptical of the relevance of money and banking for a socialist program. In a letter to Engels, he wrote that the only way forward was to “cut himself loose from all this ‘money shit’”; a few years later, he said, in an address to the First International, that “the currency question has nothing at all to do with the subject before us.” In the Grundrisse he asked rhetorically, “Can the existing relations of production and the relations of distribution which correspond to them be revolutionized by a change in the instrument of circulation…? Can such a transformation be undertaken without touching the existing relations of production and social relations which rest on them?” The answer, obviously, is No.

The reader of Marx’s published work might reasonably come away with something like this understanding of money: Generalized use of money is a precondition of wage labor, and leads to qualitative transformations of human life. But control over money is not the source of capitalists’ power, and the logic of capitalism doesn’t depend on the specific workings of the financial system. To understand the sources of conflict and crises under capitalism, and its transformative power and development over time, one should focus on the organization of production and the hierarchical relationships within the workplace. Capitalism is essentially a system of hierarchical control over labor. Money and finance are at best second order. 

Eich doesn’t dispute this, as a description of what Marx actually wrote. But he argues that this rejection of finance as a site of political action was based on the specific conditions of the times. Today, though, the power and salience of organized labor have diminished. Meanwhile, central banks are more visible as sites of power, and the allocation of credit is a major political issue. A Marx writing now, he suggests, might take a different view on the value of monetary reform to a socialist program. I’m not sure, though, if this is a judgment that many people inspired by Marx would share.

In Praise of Profiteering

Of the usefulness of the concept, that is.1

 In my comments on inflation, I’ve emphasized supply disruptions more than market power. But as I’ll explain in this post, I think the market power or profiteering frame is also a valid and useful one.

Thanks in large part to Lindsay Owens and her team at the Groundwork Collaborative, the idea that corporate profiteering is an important part of today’s inflation is getting a surprising amount of traction, including from the administration. So it’s no surprise that it’s attracted some hostile pushback. This sneering piece by Catherine Rampell in the Washington Post is typical, so let’s start from there.

For critics like Rampell, the profiteering claim isn’t just wrong, but “conspiracy theory”, vacuous and incoherent:

The theory goes something like this: The reason prices are up so much is that companies have gotten “greedy” and are conspiring to “pad their profits,” “profiteer” and “price-gouge.” No one has managed to define “profiteering” and “price-gouging” more specifically than “raising prices more than I’d like.” 

The problem with this narrative is that it’s just a pejorative tautology. Yes, prices are going up because companies are raising prices. Okay. This is the economic equivalent of saying “It’s raining because water is falling from the sky.” 

The interesting thing about the profiteering story, to me, is precisely that it’s not a tautology. As a matter of logic, one might just as easily say “prices are going up because consumers are paying more.” It is not an axiomatic truth that it is businesses who decide on prices. It is not a feature of textbook economics (where firms are price takers), nor is it empirically true of all markets. As for profiteering, there is a straightforward definition — price increases that don’t reflect any change in the costs of production. Both economically and in the common-sense morality that terms like “price gouging” appeal to, there’s a distinction between price increases that reflect higher costs and ones that do not. And there’s nothing novel or strange about policies to limit the latter.

These two points are related. If prices were set straightforwardly as a markup over marginal costs, it wouldn’t make sense to say that “companies are raising prices.” And there wouldn’t be any question of price-gouging. The starting point here is, that’s not necessarily how prices are set. And once we agree that prices are a decision variable for firms, rather than an automatic market outcome, it’s not obvious why there shouldn’t be a public interest in how that decision gets made.

Think about water. It’s a commonplace that big increases in the price of bottled water in a disaster zone should not be allowed. The marginal cost of selling a bottle of water already on the shelf is no higher than in normal times. Nor are high prices for bottled water serving a function as signals — the premise is precisely that the quantity available is temporarily fixed. And everyone agrees that in these settings, willingness to pay is not a good measure of need. 

What about water in normal times? In most of the United States, piped water is provided by local government. But in some places, it is provided by private water companies. And in those cases, invariably, its price is tightly regulated by a public utility commission, with price increases limited to cases where an increase in costs has been established. According to this recent GAO report, states with private water utilities all “rely on the same standard formula … to set private for-profit water rates. The formula relies on the actual costs of the utility …. including capital invested in its facilities, operations and maintenance costs, taxes, and other adjustments.” 
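For concreteness, here is a simplified sketch of the cost-of-service approach the GAO is describing; the notation is mine, not the GAO’s, and real rate cases add many refinements. The regulator first calculates an allowed revenue requirement,

$$\text{revenue requirement} \;=\; \text{O\&M costs} \;+\; \text{depreciation} \;+\; \text{taxes} \;+\; r \times \text{rate base},$$

where the rate base is the capital the utility has invested in its facilities and $r$ is an allowed rate of return. Rates are then set so that expected sales at those rates just recover that amount, and no more. A price increase, in other words, has to be justified by a documented increase in costs.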

The principle in these types of regulations — which, again, are ubiquitous and uncontroversial — is that in the real world prices may or may not track costs of production. Price increases that reflect higher costs are legitimate, and should be permitted; ones that do not are not, and should not.

Rent control is very controversial, both among economists and the general public. But I have never heard “water rate control” brought up as an example of an illegitimate government interference in the market, or seen a study of how much more water would be provided if utilities could charge what the market would bear. (Maybe some enterprising young economist will take that on.)

The same goes for many other public utilities — electricity, gas, and so on. Here in New York, a utility that wants to raise its electricity rates has to submit a filing to the Public Service Commission documenting its operating and capital costs; if the proposed increase doesn’t reflect the company’s costs, it is not allowed. Obviously this isn’t so simple in practice, and the system certainly has its critics. But the point is, no one thinks that electricity — an industry that combines very high fixed costs, concentration and very inelastic demand, and which is an essential input to all kinds of other activity — is something where prices can be left to the market.

So the question is not: Should prices be regulated or controlled? Nor is it whether some price increases are unreasonable. The answers to those questions are obviously, uncontroversially Yes. The question is whether the price regulation of utilities, and the economic analysis behind it, should be extended to other areas, or to prices in general.

*

People like Rampell are not thinking in terms of our world of production by large organizations using specialized tools and techniques. They are imagining an Econ 101 world where there is a fixed stock of stuff, and the market price is the one where people just want to buy that much. There are, to be sure, cases where this is a reasonable first approximation — used car dealers, say. But it is not a good description of most of the economy. Markets are not allocating a given stock of stuff, but guiding production. This production is carried out by large enterprises with substantial market power. They are not price takers. For most goods and services, price is a decision variable for producers, involving tradeoffs on a number of margins.2 

In the models taught in introductory microeconomics, producers are price takers; they choose a quantity of output which they will sell at the going price. Given rising marginal costs — each additional unit of output costs more to produce than the last one — firms will carry out production just to the point where marginal cost equals the market price. This model is in principle consistent with the existence of fixed as well as marginal costs: Free entry and exit ensures that revenue at the market price, over and above variable costs, just covers fixed costs plus the normal profit (whatever that is).

The usual situation in a modern economy, however, is flat or declining marginal costs. Non-increasing marginal costs, nonzero fixed costs, and competitive pricing cannot coexist: In the absence of increasing marginal costs, a price equal to marginal cost leaves nothing to cover fixed costs. Modern industries, which invariably involve substantial fixed costs and flat or declining marginal costs at normal levels of output, require some degree of monopoly power in order to survive. This is the economic logic behind patents and copyrights — developing a new idea is costly, but disseminating it is cheap. So if we are relying on private businesses for this, they must be granted some degree of monopoly.3
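To spell out the arithmetic behind that claim (the notation is mine, purely for illustration): suppose a firm faces a fixed cost $F > 0$ and a constant marginal cost $c$ per unit, so that producing $q$ units costs $F + cq$. At the competitive price $p = c$, its profit is

$$\pi = (p - c)\,q - F = -F < 0,$$

no matter how much it sells. Only a price somewhere above marginal cost (that is, some degree of market power) lets it recover the fixed cost.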

The problem is, once we agree that some degree of market power is necessary in order for industries without declining returns to cover their fixed costs, how do we know how much market power is enough? Too much market power, and firms can make super-normal profits by holding prices above the level required to cover their costs, reducing access to whatever socially useful thing they supply. Too little market power, and competing firms will be inefficiently small, drive each other to bankruptcy, or simply decline to enter, depriving society of the useful thing entirely. Returning to the IP example: To the extent that copyrights and patents serve an economic function, it is possible for them to be either too long or too short.

The problem gets worse when we think about what fixed costs mean concretely. On the one hand, the decision to pay for a particular long-lived means of production is irreversible and taken in historical time; producers don’t know in advance whether their margins over costs of production will be enough to recoup the outlay. But on the other hand, the form these costs take is financial: A company has, typically, borrowed to pay for its plant, equipment and intellectual property; the concrete ongoing costs it faces are debt service payments. These may change after the fact, by, for example, being discharged in bankruptcy — which does not in general prevent the firm from continuing to operate. So there may be a very wide space between a price high enough to induce new firms to enter and a price low enough to induce existing firms to exit.

In addition, concerns over market share, public opinion, financing constraints, strategic interaction with competitors and other considerations mean that the price chosen within this space will not necessarily be the one that maximizes short-term profits (to the extent that this can even be known). A lower price might allow a firm to gain market share, but risk retaliation from competitors. A higher price might allow for increased payments to shareholders, but risk a backlash from regulators or bad press. Narrowly economic factors may set some broad limits to pricing, but within them there is a broad range for strategic choices by sellers.

*

These issues were central to economic debates around the turn of the last century, particularly in the context of railroads. In the second half of the 19th century, railroads were the overwhelmingly dominant industrial businesses. And they clearly did not fit the models of competitive producers pricing according to marginal cost that the economics profession was then developing.

Railroads provided an essential function, for which there were no good alternatives. A single line on a given route had an effective monopoly, while two lines in parallel were almost perfect substitutes. The largest part of costs were fixed. But on the other hand, a firm that failed to meet its fixed costs would see its debt discharged in bankruptcy and then continue operating under new ownership. The result was cycles of price gouging and ruinous competition, in which farmers and small businesses could (much of the time) reasonably complain that they were being crushed by rapacious railroad owners, and railroads could (some of the time) reasonably complain they were being driven to the wall by cutthroat competition. Or as Alfred Chandler puts it,

Railroad competition presented an entirely new business phenomenon. Never before had a very small number of very large enterprises competed for the same business. And never before had competitors been saddled with such high fixed costs. In the 1880s fixed costs…averaged two-thirds of total cost. The relentless pressure of such costs quickly convinced railroad managers that uncontrolled competition of through traffic would be “ruinous”. As long as a road had cars available to carry freight, the temptation to attract traffic by reducing rates was always there. … To both the railroad managers and investors, the logic of such competition would be bankruptcy for all.4

As Michael Perelman explains in his excellent books The End of Economics and Railroading Economics (from which the following quotes are drawn), the problem of the railroads was the problem for the first generation of American professional economists. As these economists were developing models in which prices set in competitive markets would guarantee both a rational allocation of society’s resources and a normatively fair distribution of incomes, it was clear that in the era’s dominant industry, market prices did not work at all.

Already in the 1870s, Charles Francis Adams could observe:

The traditions of political economy,…notwithstanding, there are functions of modern life, the number of which is also continually increasing, which necessarily partake in their essence of the character of monopolies…. Now it is found that, whenever this characteristic exists, the effect of competition is not to regulate cost or equalize production, but under a greater or less degree of friction to bring about combination and a closer monopoly. This law is invariable. It knows no exceptions. 

Arthur Hadley, an early president of the American Economic Association, made a similar argument. Where railroads competed, prices fell to a level that was too low to recover fixed costs, eventually sending one or both lines into bankruptcy. In the absence of competition, railroads could charge monopoly prices, which might be much higher than fixed costs. Equating prices to marginal costs made sense in an economy of small farmers or artisans. But in industries where most costs took the form of large, irreversible investments in fixed capital, there was no automatic process that would bring prices in line with costs. In Perelman’s summary:

 In order to attract new capital into the business, rates must be high enough to pay not merely operating expenses, but fixed charges on both old and new capital. But, when capital is once invested, it can afford to make rates hardly above the level of operating expenses rather than lose a given piece of business. This “fighting rate” may be only one-half or one-third of a rate which would pay fixed charges. Based on his knowledge of the railroads, [Hadley] concluded that “survival of the fittest is only possible when the unfittest can be physically removed—a thing which is impossible in the case of an unfit trunk line.”

Perelman continues:

The root of the problem, for Hadley, was that to build a new line, owners had to expect rates high enough to cover not only the costs of operating it but the costs of constructing it, the financing charges, and a premium for risk; while to continue running an existing line, rates only had to cover operating costs. And these costs were essentially invariant to the volume of traffic on the line. 

Or as John Bates Clark put it in 1901: “There is often a considerable range within which trusts can control prices without calling potential competition into positive activity.”

These were some of the leading figures in the economics profession around the turn of the century, so it’s striking how unambiguously they rejected the Marshallian orthodoxy of equilibrium prices. When the American Economic Association met for the first time, its proposed statement of principles included the line: “While we recognize the necessity of individual initiative in industrial life, we hold that the doctrine of laissez-faire is unsafe in politics and unsound in morals.” Politically, they were not socialists or radicals. They rejected competitive markets, but not private ownership. That, however, left the question: how should prices be regulated?

For a conservative economist like Hadley, the answer was social norms:

This power [of the trusts] is so great that it can only be controlled by public opinion—not by statute…. There are means enough. Don’t let him come to your house. Disqualify him socially. You may say that it is not an operative remedy. This is a mistake. Whenever it is understood that certain practices are so clearly against public need and public necessity that the man who perpetrates them is not allowed to associate on even terms with his fellow men, you have in your hands an all-powerful remedy.

Unfortunately, in practice, the withholding of dinner party invitations is not always an operative remedy.

In principle, there are many other ways to solve the problem. Intellectually, one can assume it away by simply insisting on declining returns to scale; or one can allow constant returns but have firms rent the services of undifferentiated capital, so there are no fixed costs. If the problem is not assumed away — a more practical option for theorists than for policymakers — it could in principle be solved by somehow ensuring that producers enjoy just the right degree of monopoly. This is what patents and copyrights are presumably supposed to do. Another possible answer is to say that where competition is not possible, that is an activity that should be carried out by the public. That was, of course, where urban rail systems ended up. For someone like Oskar Lange, it was a decisive argument for socializing production more broadly.5

Alternatively, one can accept cartels or monopolies (perhaps under the tutelage of dominant banks) in the hopes that social pressure or norms will limit prices, or on the grounds that a useful service provided at monopoly prices is still better than it not being provided at all. This was, broadly, the view of figures like Hadley, Ely and Clark, and arguably a big part of how things worked out. 

But the main resolution to the problem, at least in the case of railroads, came from the increasing public pressure to regulate prices. The Interstate Commerce Commission was established to regulate railroad rates in 1887; its authority was initially limited, and it faced challenges from hostile Gilded-Age courts. But it was strengthened over the ensuing decades. The guiding principle was that rates should be high enough to cover a railroad’s full costs and a reasonable return, but no higher. This required railroads, among other things, to adopt more systematic and consistent accounting for capital costs.

Indeed, there’s a sense in which the logic of Langean socialism describes much of the evolution of private markets over the 20th century. The spread of cost-based price regulation forced firms to systematically measure and account for marginal costs in a way they might not have done otherwise. Mark Wilson, in his fascinating Destructive Creation, describes how the use of cost-plus contracts during World War II rationalized accounting in a broad range of industries. Systems of railroad-like rate regulation were applied to a number of more or less utility-like businesses both before and after the war, imposing from above the rational relationship between costs and prices that the market could not. Many of these regulations have been rolled back since the 1970s, but as noted earlier, many others remain in place.

*

Late 19th-century debates over railroad regulation might not be the most obvious place to look for guidance on today’s inflation debates. But as Axel Leijonhufvud points out in a beautiful essay on “The Uses of the Past,” economics is not progressive in the way that physical sciences are — we can’t assume that the useful contributions of the past are all incorporated into today’s thought. Economists’ thinking often changes for reasons of politics or fashion, while the questions posed by reality are changing as well, often in quite different ways. Older ideas may be more relevant to new problems than the current state of the art. History of economic thought becomes useful, Leijonhufvud writes,

when the road that took you to the ‘frontier of the field’ ends in a swamp or blind alley. A lot of them do. … Back there, in the past, there were forks in the road and it is possible, even plausible, that some roads were more passable than the one that looked most promising at the time.

The road I want to take from those earlier debates leads to this conclusion: in a setting of high fixed costs and pervasive market power, how businesses set prices is a legitimate question, both as an object of inquiry and as a target for policy. One of the central insights of the railroad economists is that in modern capital-intensive industries, there is a wide range over which prices are, in an economic sense, indeterminate. Depending on competitive conditions and the strategic choices of firms, prices can be persistently too high or too low relative to costs. This indeterminacy means that pricing decisions are, at least potentially, a political question.

It’s worth emphasizing here that in empirical studies of how firms actually set prices — which admittedly are rather rare in the economics literature — an important factor in these decisions often seems to be norms around price-setting. In a classic paper on sticky prices, Alan Blinder surveyed business decision-makers on why they don’t change prices more frequently. The most common answer was, “it would antagonize customers.” In a recent ECB survey, one of the top two answers to the same question from businesses selling to the public was, similarly, that “customers expect prices to remain roughly the same.” (The other one was fear that competitors would not follow suit.)

This kind of survey data supports the idea, relied on by the Groundwork team, that businesses with substantial market power might be reluctant to use it in normal times. Those inhibitions would be lifted in an environment like that of the pandemic recovery, where individual price hikes are less likely to be seen as norm violations, or to be noticed at all. (And are more likely to be matched by competitors.)

Even more: It suggests that the moralizing language that critics like Rampell object to can, itself, be a form of inflation control. If fear of antagonizing customers is normally an important restraint on price increases, then maybe we need to stoke up that antagonism! The language of “greedflation,” which I admit I didn’t originally care for, can be seen as an updated version of Arthur Hadley’s proposal to “disqualify socially” any business owner who raised prices too much. It is also, of course, useful in the fight for more direct price regulation, which is unlikely to get far on the basis of dispassionate analysis alone.

And this, I think, is a big source of the hostility toward Groundwork and toward others making the greedflation argument, like Isabella Weber.6 They are taking something that has been understood as a neutral, objective market outcome and reframing it as a moral and political question. This is, in Keynes’ terms, a question about the line between the Agenda and the Non-Agenda of political debates; and disputes over where that line falls are often more acrimonious than ones where the legitimacy of the question itself is accepted by everyone, however much they may disagree on the answer.

By the same token, I think this line-shifting is a central contribution of the profiteering work. The 2022-23 inflation seems on its way to coming to an end on its own as supply disruptions gradually resolve themselves, just as (albeit more slowly than) Team Transitory always predicted. But even if the aggregate price level is behaving itself, rising prices can remain burdensome and economically costly in all kinds of areas (as can ruinous competition and underinvestment in others). Prices will remain an important political question, even if inflation is not.

My neighbor Stephanie Luce, who spent many years working in the Living Wage movement, often points out that the direct impact of those measures was in general quite small. But that does not mean that all the hard work and organizing that went into them was wasted. A more important contribution, she argues, is that they establish a moral vision and language around wages. Beyond their direct effects, living wage campaigns help shift discussions of wage-setting from economic criteria to questions of fairness and justice. In the same way, establishing price setting as a legitimate part of the political agenda is a step forward that will have lasting value even after the current bout of inflation is long over.

 

Keynes on Newton and the Methods of Science

I’ve just been reading Keynes’ short sketches of Isaac Newton in Essays in Biography. (Is there any topic he wasn’t interesting on?) His thesis is that Newton was not so much the first modern scientist as “the last of the magicians” — “a magician who believed that by intense concentration of mind on traditional hermetics and revealed books he could discover the secrets of nature and the course of future events, just as by the pure play of mind on a few facts of observation he had unveiled the secrets of the heavens.”

The two pieces are fascinating in their own right, but they also crystallized something I’ve been thinking about for a while about the relationship between the methods and the subject matter of the physical sciences.

It’s no secret that Newton had an interest in the occult, astrology and alchemy and so on. Keynes’ argument is that this was not a sideline to his “scientific” work, but was his project, of which his investigations into mathematics and the physical world formed just a part. In Keynes’ words,

He looked on the whole universe and all that is in it as a riddle, as a secret which could be read by applying pure thought to … mystic clues which God had laid about the world to allow a sort of philosopher’s treasure hunt to the esoteric brotherhood. He believed that these clues were to be found partly in the evidence of the heavens and in the constitution of elements… but also partly in certain papers and traditions … back to the original cryptic revelation in Babylonia. …

In Keynes’ view — supported by the vast collection of unpublished papers Newton left after his death, which Keynes made it his mission to recover for Cambridge — Newton looked for a mathematical pattern in the movements of the planets in exactly the same way as one would look for the pattern in a coded message or a secret meaning in an ancient text. Indeed, Keynes says, Newton did look in the same way for secret messages in ancient texts, with the same approach and during the same period in which he was developing calculus and his laws of motion.

There was extreme method in his madness. All his unpublished works on esoteric and theological matters are marked by careful learning, accurate method and extreme sobriety of statement. They are just as sane as the Principia, if their whole matter and purpose were not magical. They were nearly all composed during the same twenty-five years of his mathematical studies. 

Even in his alchemical research, which superficially resembled modern chemistry, he was looking for secret messages. He was, says Keynes, “almost entirely concerned, not in serious experiment, but in trying to read the riddle of tradition, to find meaning in cryptic verses, to imitate the alleged but largely imaginary experiments of the initiates of past centuries.”

There’s an interesting parallel here to Foucault’s discussion in The Order of Things of 16th century comparative anatomy. When someone like Pierre Belon carefully compares the structures of a bird’s skeleton to a human one, it superficially resembles modern biology, but really “belongs to the same analogical cosmography as the comparison between apoplexy and tempests,” reflecting the idea that man “stands in proportion to the heavens just as he does to animals and plants.”

Newton’s “scientific” work was, similarly, an integral part of his search for ancient secrets and, perhaps, for him, not the most important part. Keynes approvingly quotes the words that George Bernard Shaw (drawing on some of the same material) puts in Newton’s mouth:

There are so many more important things to be worked at: the transmutations of matter, the elixir of life, the magic of light and color, above all the secret meaning of the Scriptures. And when I should be concentrating my mind on these I find myself wandering off into idle games of speculation about numbers in infinite series, and dividing curves into indivisibly short triangle bases. How silly!

None of this, Keynes insists, is to diminish Newton’s greatness as a thinker or the value of his achievements. His scientific accomplishments flowed from this same conviction that the world was a puzzle that would reveal some simple, logical, in retrospect obvious solution if one stared at it long enough. His greatest strength was his power of concentration, his ability to

hold a problem in his mind for hours and days and weeks until it surrendered to him its secret. Then being a supreme mathematical technician he could dress it up… for purposes of exposition, but it was his intuition which was pre-eminent … The proofs … were not the instrument of discovery. 

There is the story of how he informed Halley of one of his most fundamental discoveries of planetary motion. ‘Yes,’ replied Halley, ‘but how do you know that? Have you proved it?’ Newton was taken aback—’Why, I’ve known it for years,’ he replied. ‘ If you’ll give me a few days, I’ll certainly find you a proof of it’—as in due course he did. 

This is a style of thinking that we are probably all familiar with — the conviction that a difficult problem must have an answer, and that once we see it in a flash of insight we’ll know that it’s right. (In movies and TV shows, intellectual work is almost never presented in any other way.) Some problems really do have answers like this. Many, of course, do not. But you can’t necessarily know in advance which is which.

Which brings me to the larger point I want to draw out of these essays. Newton was not wrong to think that if the motion of the planets could be explained by a simple, universal law expressible in precise mathematical terms, other, more directly consequential questions might be explained the same way. As Keynes puts it,

He did read the riddle of the heavens. And he believed that by the same powers of his introspective imagination he would read the riddle of the Godhead, the riddle of past and future events divinely fore-ordained, the riddle of the elements…, the riddle of health and of immortality. 

It’s a cliché that economists suffer from physics envy. There is definitely some truth to this (though how much the object of envy resembles actual physics I couldn’t say). The positive content of this envy might be summarized as follows: The techniques of the physical sciences have yielded good results where they have been applied, in physics, chemistry, etc. So we should expect similar good results if we apply the same techniques to human society. If we don’t have a hard science of human society, it’s simply because no one has yet done the work to develop one. (Economists, it’s worth noting, are not alone in believing this.)

In Robert Solow’s critical but hardly uninformed judgement,

the best and the brightest in the profession proceed as if economics is the physics of society. There is a single universal model of the world. It only needs to be applied. You could drop a modern economist from a time machine … at any time in any place, along with his or her personal computer; he or she could set up in business without even bothering to ask what time and which place. In a little while, the up-to-date economist will have maximized a familiar-looking present-value integral, made a few familiar log-linear approximations, and run the obligatory familiar regression. 

It’s not hard to find examples of this sort of time-machine economics. David Romer’s widely-used macroeconomics textbook, for example, offers pre-contact population density in Australia and Tasmania (helpfully illustrated with a figure going back to one million BC) as an illustration of endogenous growth theory. Whether you’re asking about GDP growth next year, the industrial revolution or the human population in the Pleistocene, it’s all the same equilibrium condition.

Romer’s own reflections on economics methodology (in an interview with Snowdon and Vane) are a perfect example of what I am talking about. 

As a formal or mathematical science, economics is still very young. You might say it is still in early adolescence. Remember, at the same time that Einstein was working out the theory of general relativity in physics, economists were still talking to each other using ambiguous words and crude diagrams. 

In other words, people who studied physical reality embraced precise mathematical formalism early, and had success. The people who studied society stuck with “ambiguous words and crude diagrams” and did not. Of course, Romer says, that is now being corrected. But it’s not surprising that with its late start, economics hasn’t yet produced knowledge as definite and useful as the physical sciences have.

This is where Newton comes in. His occult interests are a perfect illustration of why the Romer view gets it backward. The same techniques of mathematical formalization, the same effort to build up from an axiomatic foundation, the same search for precisely expressible universal laws, have been applied to the whole range of domains right from the beginning — often, as in Newton’s case, by the same people. We have not, it seems to me, gained useful knowledge of orbits and atoms because that’s where the techniques of physical science happen to have been applied. Those techniques have been consistently applied there precisely because that’s where they turned out to yield useful knowledge.

In the interview quoted above, Romer defends the aggregate production function (that “drove Robinson to distraction”) and Real Business Cycle theory as the sort of radical abstraction science requires. You have “to strip things down to their bare essentials” and thoroughly grasp those before building back up to a more realistic picture.

There’s something reminiscent of Newton the mystic-scientist in this conviction that things like business cycles or production in a capitalist economy have an essential nature which can be grasped and precisely formalized without all the messy details of observable reality. It’s tempting to think that there must be one true signal hiding in all that noise. But I think it’s safe to say that there isn’t. As applied to certain physical phenomena, the idea that apparently disparate phenomena are united by a single beautiful mathematical or geometric structure has been enormously productive. As applied to business cycles or industrial production, or human health and longevity, or Bible exegesis, it yields nonsense and crankery. 

In his second sketch, Keynes quotes a late statement of Newton’s reflecting on his own work:

I do not know what I may appear to the world; but to myself I seem to have been only like a boy, playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me. 

I’m sure this quote is familiar to anyone who’s read anything about Newton, but it was new and striking to me. One way of reading it is as support for the view that Newton’s scientific work was, in his mind, a sideshow to the really important inquiries which he had set aside. But another way is as a statement of what I think is arguably the essence of a scientific mindset – the willingness to accept ignorance and uncertainty. My friend Peter Dorman once made an observation about science that has always stuck with me – that what distinguishes scientific thought is the disproportionate priority put on avoiding Type I errors (accepting a false claim) over avoiding Type II errors (rejecting a true claim). Until an extraordinary degree of confidence can be reached, one simply says “I don’t know”.

It seems to me that if social scientists are going to borrow something from the practices of Newton and his successors, it shouldn’t be an aversion to “ambiguous words,” the use of calculus or geometric proofs, or the formulation of universal mathematical laws. It should be his recognition of the vast ocean of our ignorance. We need to accept that on most important questions we don’t know the answers and probably cannot know them. Then maybe we can recognize the small pebbles of knowledge that are accessible to us.

At Jacobin: Review of Beth Popp Berman’s Thinking Like an Economist

(This review appeared in the Summer 2022 edition of Jacobin.)

After the passage of Medicare and Medicaid, universal health insurance seemed to be on its way. In 1971, the New York Times observed that “Americans from all strata of society … are swinging over to the idea that good health care, like good education, ought to be a fundamental right of citizenship.” That same year, Ted Kennedy introduced a bill providing universal coverage with no payments at the point of service, on the grounds that “health care for all our people must now be recognized as a right.” The bill didn’t pass, but it laid down a marker for future health care reform.

But when Democratic presidents and congresses took up health care in later years they chose a different path. Rather than pitching health care as a right of citizenship, the goal was better-functioning markets for health care as a commodity. From the “consumer choice health plan” proposed by Alain Enthoven in the Carter administration, through the 1993 Clinton plan, down to Obama’s ACA, the goal of reform was no longer the universal provision of health care, but addressing certain specific failures in the market for health insurance.

The intellectual roots of this shift are the subject of Beth Popp Berman’s new book Thinking Like an Economist: How Efficiency Replaced Equity in U.S. Public Policy. A distinct style of thinking, she argues, reshaped ideas about how government should work and what it could achieve. This “economic style” of thinking, originating among Democrats rather than on the Right, “centered efficiency and cost-effectiveness, choice and incentives, and competition and the market mechanism… Its implicit theory of politics imagined that disinterested technocrats could make reasonably neutral, apolitical policy decisions.” Rather than see particular domains of public life, like health care or the environment, as embodying their own distinct goals and logics, they were imagined in terms of an idealized market, where the question was what specific market failure, if any, the government should correct.

The book traces this evolution in various policy domains, focusing on the microeconomic questions of regulation, social provision and market governance rather than the higher-profile debates among macroeconomists. Covering mainly the period of the Kennedy through Reagan administrations, with brief discussions of more recent developments, the book documents how the economic style of reasoning displaced alternative ways of thinking about policy questions. The first generation of environmental regulation, for example, favored high, inflexible standards such as simply forbidding emission of certain substances. Workplace and consumer safety laws similarly favored categorical prohibitions and requirements.

But to regulators trained in economics, this made no sense. To an economist, “the optimal level of air pollution, worker illness, or car accidents might be lower than its current level, but it was probably not zero.” As economist Marc Roberts wrote with frustration of the Clean Water Act, “There is to be no case-by-case balancing of costs and benefits, no attempt at ‘fine-tuning’ the process of resource allocation.”

The book has aroused hostility from economists, who insist that this is an unfairly one-sided portrayal of their profession. I think Berman has the better of the argument here. As anyone who has taken an economics course in college can confirm, there really is such a thing as “thinking like an economist,” even if not every economist thinks that way. Framing every question as a problem of optimization under constraints is a very particular style of reasoning. And, as Berman observes, the most important site of this thinking is not the work of professional economists with their “frontier research,” but the undergraduate classes and schools of public policy where those in government, non-profits, and the press acquire this perspective.

Berman also is right to link this distinctive economic style of reasoning to a narrowing of American political horizons. At the same time, she is appropriately cautious about attributing too much independent influence to it — ideas matter, she suggests, but as tools of power rather than sources of it.

The problem with the book is not that she is unfair to economists; it’s that she concedes too much ground to them. Thinking Like an Economist is attentive to the shifting backgrounds of leaders and staff in federal agencies — if you’re wondering who was the first economics PhD to head the Justice Department’s Antitrust Division, this is the book for you. But this institutional history, while important, sometimes crowds out critical engagement with the ideas being discussed.

Take the term efficiency, which seems to occur on almost every page of the book, starting with the cover. The essence of the economic style, says Berman, is that government should make decisions “to promote efficiency.” But what does that mean?

We know what “efficient” means as applied to, say, a refrigerator. It means comparing a measurable input (electricity, in this case) to a well-defined outcome (a given volume maintained at a given temperature). There is nothing distinct to economics in preferring a more energy-efficient to a less energy-efficient appliance. Unions planning organizing campaigns, socialists running in elections, or public housing administrators all similarly face the problem of getting the most out of their scarce resources.

But what if the question is whether you should have a refrigerator in the first place, or if refrigerators ought to be privately owned? What could “efficient” mean here?

To an economist, the answer is the one that maximizes “utility” or “welfare.” These things, of course, are unobservable. So the measurement of inputs and outputs that defines efficiency in the everyday sense is impossible.

Instead, what we do is start with an abstract model in which all choices involve using or trading property claims, and people know and care about only their own private interests. Then we show that in this model, exchange at market prices will satisfy a particular definition of efficiency — either Pareto, where no one can get a better outcome without someone else getting a worse one, or Kaldor-Hicks, where improvements to one person’s situation at the expense of another’s are allowed as long as the winners could, in principle, make the losers whole. Finally, in a sort of argument by homonym, this specialized and near-tautological meaning of “efficiency” is imported back into real-world settings, where it is used interchangeably with the everyday doing-more-with-less one.
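A toy numerical example (my own, not Berman’s) may make the distinction concrete. Suppose a policy change gives person A a gain worth $10 and imposes a loss worth $4 on person B:

$$\Delta W_A = +10, \qquad \Delta W_B = -4.$$

This is not a Pareto improvement, since B ends up worse off. But it passes the Kaldor-Hicks test, since A could in principle hand B $4 in compensation and still come out $6 ahead, whether or not any such compensation is actually paid.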

When someone steeped in the economic style of thinking says “efficiency,” they mean something quite different from what normal people would. Rather than a favorable ratio of measurable outputs to inputs, they mean a desirable outcome in terms of unmeasurable welfare or utility, which is simply assumed to be reached via markets. A great part of the power of economics in policy debates comes through the conflation of these two meanings. A common-sensical wish to get better outcomes with fewer resources gets turned into a universal rule that economic life should be organized around private property and private exchange.

Berman is well aware of the ambiguities of her key term, and the book contains some good discussions of these different meanings. But that understanding seldom makes it into the primary narrative of the book, where economists are allowed to pose as advocates of an undifferentiated “efficiency,” as opposed to non-economic social and political values. This forces Berman into the position of arguing that making government programs work well is in conflict with making them fair, when in reality an ideological preference for markets is often in conflict with both.

To be sure, there are cases where Berman’s frame works. Health care as a right is fundamentally different from a good that should be delivered efficiently, by whatever meaning. But in other cases, it leads her seriously astray. There are many things to criticize in the United States’ threadbare welfare state. But is one of them really that it focuses too much on raising recipients’ incomes, as opposed to relieving their “feelings of anomie and alienation”? Or again, there are many reasons to prefer 1960s and ‘70s style environmental regulation, with simple categorical rules, to the more recent focus on incentives and flexibility. But I am not sure that “the sacredness of Mother Earth” is the most convincing one.

That last phrase is Berman’s, from the introduction. It’s noteworthy that in her long and informative chapter on environmental regulation, we never hear the case for strong, inflexible standards being made in such terms. Rather, the first generation of regulators “built ambitious and relatively rigid rules … because they saw inflexibility as a tool for preventing capture” by industry, and because they believed that “setting high, even seemingly unrealistic standards … could drive rapid improvements” in technology. Meanwhile, their economics-influenced opponents like Charles Schultze (a leading economist in the Johnson and Carter administrations, and a central figure in the book) and Carter EPA appointee Bill Drayton, seem to have been motivated less by measurable policy outcomes than by objections on principle to “command and control” regulation. As one colleague described Drayton’s belief that companies should be allowed to offset emissions at one plant with reductions elsewhere, “What was driving Bill was pure intellectual conviction that this was a truly elegant approach — The Right Approach, with a capital ’T’ and ‘R’.” This does not look like a conflict between the values of equity and efficiency. It looks like a conflict between the goal of making regulation effective on one side, versus a preference for markets as such on the other.

On anti-trust regulation, the subject of another chapter in the book, the efficiency-versus-equity frame also obscures more than it reveals. The fundamental shift here was, as Berman says, away from a concern with size or market share, toward a narrower focus on horizontal agreements between competitors. And it is true that this shift was sometimes justified in terms of the supposed greater efficiency of dominant firms. But we shouldn’t take this justification at face value. As critical anti-trust scholars like Sanjukta Paul have shown, courts were not really interested in evidence for (or against) such efficiency. Rather, the guiding principle was a preference for top-down coordination by owners over other forms of economic coordination. This is why centralized price-setting by Amazon is acceptable, but an effort to bargain jointly with it by publishers was unacceptable; or why manufacturers’ prohibitions on resale of their products were acceptable but the American Medical Association’s limits on advertising by physicians were unacceptable. The issue here is not efficiency versus equity, or even centralized versus decentralized economic decision making. It’s about what kind of authority can be exercised in the economic sphere.

Berman ends the book with the suggestion that rebuilding the public sector calls for rethinking the language in which policies are understood and evaluated. On this, I fully agree. Readers who were politically active in the 2000s may recall the enormous mobilizations against George W. Bush’s proposals for Social Security privatization — and the failure, after those were abandoned, to translate this defensive program into a positive case for expanding social insurance. More recently, we’ve seen heroic labor actions by public school teachers across the country. But while these have sometimes succeeded in their immediate goals, they haven’t translated into a broader argument for the value of public services and civil service protections.

As Berman says, it’s not enough to make the case for particular public programs; what we need is better language to make the positive case for the public sector in general.

No Maestros: Further Thoughts

One of the things we see in the questions of monetary policy transmission discussed in my Barron’s piece is the real cost of an orthodox economics education. If your vision of the economy is shaped by mainstream theory, it is impossible to think about what central banks actually do.

The models taught in graduate economics classes feature an “interest rate” that is the price of goods today in terms of identical goods in the future. Agents in these models are assumed to be able to freely trade off consumption today against consumption at any point in the future, and to distribute income from any time in the future over their lifetime as they see fit, subject only to the “no Ponzi” condition that over infinite time their spending must equal their income. This is a world, in other words, of infinite liquidity. There are no credit markets as such, only real goods at different dates.1
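As a minimal sketch of the kind of model in question (the notation here is mine, not drawn from any particular textbook), the representative household solves something like:

$$\max_{\{c_t\}} \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{s.t.} \quad a_{t+1} = (1+r_t)(a_t + y_t - c_t), \qquad \lim_{T \to \infty} \frac{a_T}{\prod_{s=0}^{T-1}(1+r_s)} \ge 0$$

Here $r_t$ is “the interest rate”: the rate at which goods at date $t$ trade for goods at date $t+1$. There is no money and no credit market as such, only the requirement that consumption and income have equal present values over the infinite horizon.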

Monetary policy in this framework is then thought of in terms of changing the terms at which goods today trade for goods tomorrow, with the goal of keeping these terms at some “natural” level. It’s not at all clear how the central bank is supposed to set the terms of all these different transactions, or what frictions cause the time premium to deviate from the natural level, or whether the existence of those frictions might have broader consequences.2 But there’s no reason to get distracted by this imaginary world, because it has nothing at all to do with what real central banks do.

In the real world, there are not, in general, markets where goods today trade for identical goods at some future date. But there are credit markets, which is where the price we call “the interest rate” is found. The typical transaction in a credit market is a loan — for example, a mortgage. A mortgage does not involve any trading-off of future against present income. Rather, it is income-positive for both parties in every period.

The borrower is getting a flow of housing services and making a flow of mortgage payments, both of which are the same in every period. Presumably they are getting more or better housing services for their mortgage payment than they would for an equivalent rental payment in every period (otherwise, they wouldn’t be buying the house). Far from getting present consumption at the expense of future consumption, the borrower probably expects to benefit more from owning the house in the future, when rents will be higher but the mortgage payment is the same.

The bank, meanwhile, is getting more income in every period from the mortgage loan than it is paying to the holder of the newly-created deposit. No one associated with the bank is giving up any present consumption — the loan just involves creating two offsetting entries on the bank’s books. Both parties to the transaction are getting higher income over the whole life of the mortgage.

So no one, in the mortgage transaction, is trading off the present against the future. The transaction will raise the income of both sides in every period. So why not make more mortgages to infinity? Because what both parties are giving up in exchange for the higher income is liquidity. For the homeowner, the mortgage payments yield more housing services than equivalent rent payments, but they are also harder to adjust if circumstances change. Renting gives you less housing for your buck, but it’s easier to move if it turns out you’d rather live somewhere else. For the bank, the mortgage loan (its asset) carries a higher interest rate than the deposit (its liability), but involves the risk that the borrower will not repay, and also the risk that, in a crisis, ownership of the mortgage cannot be turned into immediate cashflows while the deposit is payable on demand.
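To make the point concrete, here is a toy calculation. All the numbers are invented for illustration; nothing hangs on them.

```python
# Illustrative per-period cashflows for a simple interest-only mortgage.
# Every number below is made up for the sake of the example.
principal = 300_000
mortgage_rate = 0.05   # rate the bank charges the borrower
deposit_rate = 0.01    # rate the bank pays on the deposit it creates

annual_mortgage_payment = principal * mortgage_rate  # ignoring amortization
equivalent_rent = 18_000  # what comparable housing might rent for per year

# Borrower: value of housing services received minus the mortgage payment.
borrower_net = equivalent_rent - annual_mortgage_payment
# Bank: interest received on the loan minus interest paid on the deposit.
bank_net = principal * (mortgage_rate - deposit_rate)

print(f"Borrower comes out ahead by {borrower_net:,.0f} per year versus renting")
print(f"Bank earns a spread of {bank_net:,.0f} per year")
# Both figures are positive in every period: neither side is trading present
# income for future income. What each side gives up instead is liquidity.
```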

In short, the fundamental tradeoff in credit markets – what the interest rate is the price of – is not less now versus more later, but income versus liquidity and safety.3

Money and credit are hierarchical. Bank deposits are an asset for us – they are money – but are a liability for banks. They must settle their own transactions with a different asset, which is a liability for the higher level of the system. The Fed sits at the top of this hierarchy. That is what makes its actions effective. It’s not that it can magically change the terms of every transaction that involves things happening at different dates. It’s that, because its liabilities are what banks use to settle their obligations to each other, it can influence how easy or difficult banks find it to settle those obligations, and hence how willing they are to take on the risk of expanding their balance sheets.

So when we think about the transmission of monetary policy, we have to think about two fundamental questions. First, how much do central bank actions change liquidity conditions within the financial system? And second, how much does real activity depend on the terms on which credit is available?

We might gloss this as supply and demand for credit. The mortgage, however, is typical of credit transactions in another way: It involves a change in ownership of an existing asset rather than the current production of goods and services. This is by far the most common case. So some large part of monetary policy transmission is presumably via changes in prices of assets rather than directly via credit-financed current production.4 There are only small parts of the economy where production is directly sensitive to credit conditions.

One area where current production does seem to be sensitive to interest rates is housing construction. This is, I suppose, because on the one hand developers are not large corporations that can finance investment spending internally, and on the other hand land and buildings are better collateral than other capital goods. My impression – tho I’m getting well outside my area of expertise here – is that some significant part of construction finance is shorter maturity loans, where rates will be more closely linked to the policy rate. And then of course the sale price of the buildings will be influenced by prevailing interest rates as well. As a first approximation you could argue that this is the channel by which Fed actions influence the real economy. Or as this older but still compelling article puts it, “Housing IS the business cycle.”

Of course there are other possible channels. For instance, it’s sometimes argued that during the middle third of the 20th century, when reserve requirements really bound, changes in the quantity of reserves had a direct quantitative effect on the overall volume of lending, without the interest rate playing a central role one way or the other. I’m not sure how true this is — it’s something I’d like to understand better — but in any case it’s not relevant to monetary policy today. Robert Triffin argued that inventories of raw materials and imported commodities were likely to be financed with short term debt, so higher interest rates would put downward pressure on their prices specifically. This also is probably only of historical interest.

The point is, deciding how much, how quickly and how reliably changes in the central bank’s policy rate will affect real activity (and then, perhaps, inflation) would seem to require fairly fine-grained institutional knowledge about the financial system and the financing needs of real activity. The models taught in graduate macroeconomics are entirely useless for this purpose. Even for people not immersed in academic macro, the fixation on “the” interest rate as opposed to credit conditions broadly is a real problem.

These are not new debates, of course. I’ve linked before to Juan Acosta’s fascinating article about the 1950s debates between Paul Samuelson and various economists associated with the Fed.5 The lines of debate then were a bit different from now, with the academic economists more skeptical of monetary policy’s ability to influence real economic outcomes. What Fed economist Robert Roosa seems to have eventually convinced Samuelson of is that monetary policy works not so much through the interest rate — which then as now didn’t seem to have a big effect on investment decisions. It works rather by changing the willingness of banks to lend — what was then known as “the availability doctrine.” This is reflected in later editions of his textbook, which added an explanation of monetary policy in terms of credit rationing.

Even if a lender should make little or no change in the rate of interest that he advertises to his customers, there may probably still be the following important effect of “easy money.” …  the lender will now be rationing out credit much more liberally than would be the case if the money market were very tight and interest rates were tending to rise. … Whenever in what follows I speak of a lowering of interest rates, I shall also have in mind the equally important relaxation of the rationing of credit and general increase in the availability of equity and loan capital to business.

The idea that “the interest rate” is a metaphor or synecdoche for a broader easing of credit conditions is an important step toward realism. But as so often happens, the nuance has gotten lost and the metaphor gets taken literally.

Alternative Visions of Inflation

Like many people, I’ve been thinking a bit about inflation lately. One source of confusion, it seems to me, is that the underlying concept has shifted in a rather fundamental way, but the full implications of this shift haven’t been taken on board.

I was talking with my Roosevelt colleague Lauren Melodia about inflation and alternative policies to manage it, which is a topic I hope Roosevelt will be engaging in more in the later part of this year. In the course of our conversation, it occurred to me that there’s a basic source of confusion about inflation. 

Many of our ideas about inflation originated in the context of a fixed quantity of money. The original meaning of the term “inflation” was an increase in the stock of money, not a general increase in the price level. Over there you’ve got a quantity of stuff; over here you’ve got a quantity of money. When the stock of money grows rapidly and outpaces the growth of stuff, that’s inflation.

 In recent decades, even mainstream economists have largely abandoned the idea of the money stock as a meaningful economic quantity, and especially the idea that there is a straightforward relationship between money and inflation.

Here is what a typical mainstream macroeconomics textbook — Olivier Blanchard’s, in this case; but most are similar — says about inflation today. (You can just read the lines in italics.) 

There are three stories about inflation here: one based on expected inflation, one based on markup pricing, and one based on unemployment. We can think of these as corresponding to three kinds of inflation in the real world — inertial, supply-driven, and demand-driven. What there is not, is any mention of money. Money comes into the story only in the way that it did for Keynes: as an influence on the interest rate.
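For reference, the relationship being summarized is something like the standard expectations-augmented Phillips curve (reconstructed here from memory, not quoted from the book):

$$\pi_t = \pi_t^e + (m + z) - \alpha u_t$$

with expected inflation $\pi_t^e$, the markup $m$, a catch-all $z$ for other influences on wage setting, and the unemployment rate $u_t$ corresponding, roughly, to the inertial, supply-driven and demand-driven stories respectively.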

To be fair, the book does eventually bring up the idea of a direct link between the money supply and inflation, but only to explain why it is obsolete and irrelevant for the modern world:

Until the 1980s, the strategy was to choose a target rate of money growth and to allow for deviations from that target rate as a function of activity. The rationale was simple. A low target rate of money growth implied a low average rate of inflation. … 

That strategy did not work well.

First, the relation between money growth and inflation turned out to be far from tight, even in the medium run. … Second, the relation between the money supply and the interest rate in the short run also turned out to be unreliable. …

Throughout the 1970s and 1980s, frequent and large shifts in money demand created serious problems for central banks. … Starting in the early 1990s, a dramatic rethinking of monetary policy took place based on targeting inflation rather than money growth, and the use of an interest rate rule.

Obviously, I don’t endorse everything in the textbook.1 (The idea of a tight link between unemployment and inflation is not looking much better than the idea of a tight link between inflation and the money supply.) I bring it up here just to establish that the absence of a link between money growth and inflation is not radical or heterodox, but literally the textbook view.

One way of thinking about the first Blanchard passage above is that the three stories about inflation correspond to three stories about price setting. Prices may be set based on expectations of where prices will be, or prices may be set based on market power (the markup), or prices may be set based on costs of production. 

This seems to me to be the beginning of wisdom with respect to inflation: Inflation is just an increase in prices, so for every theory of price setting there’s a corresponding theory of inflation. There is wide variation in how prices get set across periods, countries and markets, so there must be a corresponding variety of inflations. 

Besides the three mentioned by Blanchard, there’s one other story about inflation that is perhaps even more widespread. We could call this too much spending chasing too little production.

The too-much-spending view of inflation corresponds to a ceiling on output, rather than a floor on unemployment, as the inflationary barrier. As the NAIRU has given way to potential output as the operational form of supply constraints on macroeconomic policy, this understanding of inflation has arguably become the dominant one, even if without formalization in textbooks. It overlaps with the unemployment story in making current demand conditions a key driver of inflation, even if the transmission mechanism is different. 

Superficially, “too much spending relative to production” sounds a lot like “too much money relative to goods.” (As, to a lesser extent, does “too much wage growth relative to productivity growth.”) But while these formulations sound similar, they have quite different implications. Intuitions formed by the old quantity-of-money view don’t work for the new stories.

The older understanding of inflation, which runs more or less unchanged from David Hume through Irving Fisher to Milton Friedman and contemporary monetarists, goes like this. There’s a stock of goods, which people can exchange for their mutual benefit. For whatever reasons, goods don’t exchange directly for other goods, but only for money. Money in turn is only used for purchasing goods. When someone receives money in exchange for a good, they turn around and spend it on some good themselves — not instantly, but after some delay determined by the practical requirements of exchange. (Imagine you’ve collected your earnings from your market stall today, and can take them to spend at a different market tomorrow.) The total amount of money, meanwhile, is fixed exogenously — the quantity of gold in circulation, or equivalently the amount of fiat tokens created by the government via its central bank.

Under these assumptions, we can write the familiar equation

MV = PY

If Y, the level of output, is determined by resources, technology and other “real” factors, and V is a function of the technical process of exchange — how long must pass between the receipt of money and its spending — then we’re left with a direct relationship between the change in M and the change in P. “Inflation is always and everywhere a monetary phenomenon.”2
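In growth-rate form (a standard manipulation, not specific to any one author), the same equation reads

$$\%\Delta M + \%\Delta V = \%\Delta P + \%\Delta Y,$$

so with velocity fixed by the technology of exchange and output fixed by real factors, any growth of money in excess of output growth shows up one for one in prices.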

I think something like this underlies most folk wisdom about inflation. And as is often the case, the folk wisdom has outlived whatever basis in reality it may once have had.3

Below, I want to sketch out some ways in which the implications of the excessive-spending-relative-to-production vision of inflation are importantly different from those of the excessive-money-relative-to-goods vision. But first, a couple of caveats.

First, the idea of a given or exogenous quantity of money isn’t wrong a priori, as a matter of logic; it’s an approximation that happens not to fit the economy in which we live. Exactly what range of historical settings it does fit is a tricky question, which I would love to see someone try to answer. But I think it’s safe to say that many important historical inflations, both under metallic and fiat regimes, fit comfortably enough in a monetarist framework. 

Second, the fact that the monetarist understanding of inflation is wrong (at least for contemporary advanced economies) doesn’t mean that the modern mainstream view is right. There is no reason to think there is one general theory of inflation, any more than there is one general etiology of a fever. Lots of conditions can produce the same symptom. In general, inflation is a persistent, widespread rise in prices, so for any theory of price-setting there’s a corresponding theory of inflation. And the expectations-based propagation mechanism of inertial inflation — where prices are raised in the expectation that prices will rise — is compatible with many different initial inflationary impulses. 

That said — here are some important cleavages between the two visions.

1. Money vs spending. More money is just more money, but more spending is always more spending on something in particular. This is probably the most fundamental difference. When we think of inflation in terms of money chasing a given quantity of goods, there is no connection between a change in the quantity of money and a change in individual spending decisions. But when we think of it in terms of spending, that’s no longer true — a decision to spend more is a decision to spend more on some specific thing. People try to carry over intuitions from the former case to the latter, but it doesn’t work. In the modern version, you can’t tell a story about inflation rising that doesn’t say who is trying to buy more of what; and you can’t tell a story about controlling inflation without saying whose spending will be reduced. Spending, unlike money, is not a simple scalar.

The same goes for the wages-markup story of the textbook. In the model, there is a single wage and a single production process. But in reality, a fall in unemployment or any other process that “raises the wage” is raising the wages of somebody in particular.

2. Money vs prices. There is one stock of money, but there are many prices, and many price indices. Which means there are many ways to measure inflation. As I mentioned above, inflation was originally conceived of as definitionally an increase in the quantity of money. Closely related to this is the idea of a decrease in the purchasing power of money, a definition which is still sometimes used. But a decrease in the value of money is not the same as an increase in the prices of goods and services, since money is used for things other than purchasing goods and services.  (Merijn Knibbe is very good on this.4) Even more problematically, there are many different goods and services, whose prices don’t move in unison. 

This wasn’t such a big deal for the old concept of inflation, since one could say that all else equal, a one percent increase in the stock of money would imply an additional point of inflation, without worrying too much about which specific prices it showed up in. But in the new concept, there’s no stock of money, only the price changes themselves. So picking the right index is very important. The problem is, there are many possible price indexes, and they don’t all move in unison. It’s no secret that inflation as measured by the CPI averages about half a point higher than that measured by the PCE. But why stop there? Those are just two of the infinitely many possible baskets of goods one could construct price indexes for. Every individual household, every business, every unit of government has its own price index and corresponding inflation rate. If you’ve bought a used car recently, your personal inflation rate is substantially higher than that of people who haven’t. We can average these individual rates together in various ways, but that doesn’t change the fact that there is no true inflation rate out there, only the many different price changes of different commodities.
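A toy calculation shows how much the choice of basket can matter. The price changes and weights below are invented, not taken from any actual index.

```python
# The same set of price changes implies different inflation rates for
# different baskets. All numbers are invented for illustration.
price_changes = {"rent": 0.06, "used cars": 0.20, "online services": 0.01, "food": 0.04}

basket_a = {"rent": 0.40, "used cars": 0.10, "online services": 0.20, "food": 0.30}
basket_b = {"rent": 0.25, "used cars": 0.02, "online services": 0.43, "food": 0.30}

def inflation_rate(weights, changes):
    # A simple fixed-weight index: the weighted average of price changes.
    return sum(weights[good] * changes[good] for good in weights)

print(f"Household A's inflation rate: {inflation_rate(basket_a, price_changes):.1%}")
print(f"Household B's inflation rate: {inflation_rate(basket_b, price_changes):.1%}")
# Same prices, different baskets, different inflation rates. There is no
# "true" rate behind them, only different ways of averaging price changes.
```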

3. Inflation and relative prices. In the old conception, money is like water in a pool. Regardless of where you pour it in, you get the same rise in the overall level of the pool.

Inflation conceived of in terms of spending doesn’t have that property. First, for the reason above — more spending is always more spending on something. If, let’s say for sake of argument, over-generous stimulus payments are to blame for rising inflation, then the inflation must show up in the particular goods and services that those payments are being used to purchase — which will not be a cross-section of output in general. Second, in the new concept, we are comparing desired spending not to a fixed stock of commodities, but to the productive capacity of the economy. So it matters how elastic output is — how easily production of different goods can be increased in response to stronger demand. Prices of goods in inelastic supply — rental housing, let’s say — will rise more in response to stronger demand, while prices of goods supplied elastically — online services, say — will rise less. It follows that inflation, as a concrete phenomenon, will involve not an across-the-board increase in prices, but a characteristic shift in relative prices.

This is a different point than the familiar one that motivates the use of “core” inflation — that some prices (traditionally, food and energy) are more volatile or noisy, and thus less informative about sustained trends. It’s that  when spending increases, some goods systematically rise in price faster than others.
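A toy supply-and-demand calculation makes this concrete. The elasticities below are invented for illustration; the point is only that the same demand impulse moves different prices by very different amounts.

```python
# Toy log-linear markets: the same outward shift in demand moves prices very
# differently depending on supply elasticity. All numbers are invented.
demand_shift = 0.05        # a 5% increase in desired spending on each good
demand_elasticity = 1.0

supply_elasticities = {
    "rental housing": 0.2,     # hard to expand supply quickly
    "consumer durables": 1.0,
    "online services": 10.0,   # output expands almost costlessly
}

for good, eps_supply in supply_elasticities.items():
    # In a log-linear market, the price change needed to clear the market is
    # the demand shift divided by the sum of the two elasticities.
    price_change = demand_shift / (eps_supply + demand_elasticity)
    print(f"{good}: price rises about {price_change:.1%}")
# The demand impulse shows up mostly in the prices of inelastically supplied
# goods: a shift in relative prices, not a uniform increase in all prices.
```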

This recent paper by Stock and Watson, for example, suggests that housing, consumer durables and food have historically seen prices vary strongly with the degree of macroeconomic slack, while prices for gasoline, health care, financial services, clothing and motor vehicles do not, or even move the opposite way. They suggest that the lack of a cyclical component in health care and finance reflects the distinct ways that prices are set (or imputed) in those sectors, while the lack of a cyclical component in gas, clothing and autos reflects the fact that these are heavily traded goods whose prices are set internationally. This interpretation seems plausible enough, but if you believe these numbers they have a broader implication: We should not think of cyclical inflation as an across-the-board increase in prices, but rather as an increase in the price of a fairly small set of market-priced, inelastically supplied goods relative to others.

4. Inflation and wages. As I discussed earlier in the post, the main story about inflation in today’s textbooks is the Phillips curve relationship where low unemployment leads to accelerating inflation. Here it’s particularly clear that today’s orthodoxy has abandoned the quantity-of-money view without giving up the policy conclusions that followed from it.

In the old monetarist view, there was no particular reason that lower unemployment or faster wage growth should be associated with higher inflation. Wages were just one relative price among others. A scarcity of labor would lead to higher real wages, while an exogenous increase in wages would lead to lower employment. But absent a change in the money supply, neither should have any effect on the overall price level. 

It’s worth noting here that altho Milton Friedman’s “natural rate of unemployment” is often conflated with the modern NAIRU, the causal logic is completely different. In Friedman’s story, high inflation caused low unemployment, not the reverse. In the modern story, causality runs from lower unemployment to faster wage growth to higher inflation. In that story, prices are set as a markup over marginal costs. If the markup is constant, and all wages are part of marginal cost, and all marginal costs are wages, then a change in wages will just be passed through one to one to inflation.

We can ignore the stable markup assumption for now — not because it is necessarily reasonable, but because it’s not obvious in which direction it’s wrong. But if we relax the other assumptions, and allow for non-wage costs of production and fixed wage costs, that unambiguously implies that wage changes are passed through less than one for one to prices. If production inputs include anything other than current labor, then low unemployment should lead to a mix of faster inflation and faster real wage growth. And why on earth should we expect anything else? Why should the 101 logic of “reduced supply of X leads to a higher relative price of X” be uniquely inapplicable to labor?5
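A minimal version of that arithmetic, in my notation rather than the textbook’s: write the price as a markup over a marginal cost that includes a materials input,

$$p = (1+\mu)\,(w a_L + p_m a_M), \qquad \frac{\partial \ln p}{\partial \ln w} = \frac{w a_L}{w a_L + p_m a_M} < 1,$$

where $a_L$ and $a_M$ are labor and materials requirements per unit of output. Holding the markup $\mu$ and the materials price $p_m$ fixed, a wage increase raises the price only by the wage share of marginal cost, so real wages rise.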

There’s an obvious political-ideological reason why textbooks should teach that low unemployment can’t actually make workers better off. But I think it gets a critical boost in plausibility — a papering-over of the extreme assumptions it rests on — from intuitions held over from the old monetarist view. If inflation really was just about faster money growth, then the claim that it leaves real incomes unchanged could work as a reasonable first approximation. Whereas in the markup-pricing story it really doesn’t. 

5. Inflation and the central bank.  In the quantity-of-money vision, it’s obvious why inflation is the special responsibility of the central bank. In the textbooks, managing the supply of money is often given as the first defining feature of a central bank. Clearly, if inflation is a function of the quantity of money, then primary responsibility for controlling it needs to be in the hands of whoever is in charge of the money supply, whether directly, or indirectly via bank lending. 

But here again, it seems to me, the policy conclusion is being asked to bear weight even after the logical scaffolding supporting it has been removed.

Even if we concede for the sake of argument that the central bank has a special relationship with the quantity of money, it’s still just one of many influences on the level of spending. Indeed, when we think about all the spending decisions made across the economy, “at what interest rate will I borrow the funds for it” is going to be a central consideration in only a few of them. Whether our vision of inflation is too much spending relative to the productive capacity of the economy, or wages increasing faster than productivity, many factors are going to play a role beyond interest rates or central bank actions more broadly.

One might believe that compared with other macro variables, the policy interest rate has a uniquely strong and reliable link to the level of spending and/or wage growth; but almost no one, I think, does believe this. The distinct responsibility of the central bank for inflation gets justified not on economic grounds but political-institutional ones: the central bank can act more quickly than the legislature, it is free of undue political influence, and so on. These claims may or may not be true, but they have nothing in particular to do with inflation. One could justify authority over almost any area of macroeconomic policy on similar grounds.

Conversely, once we fully take on board the idea that the central bank’s control over inflation runs through the volume of credit creation to the level of spending (and then perhaps via unemployment to wage growth), there is no basis for the distinction between monetary policy proper and other central bank actions. All kinds of regulation and lender-of-last-resort operations equally change the volume and direction of credit creation, and so influence aggregate spending just as monetary policy in the narrow sense does.

6. The costs of inflation. If inflation is a specifically monetary phenomenon, the costs of inflation presumably involve the use of money. The convenience of quoting relative prices in money becomes a problem when the value of money is changing.

An obvious example is the fixed denominations of currency — monetarists used to talk about “shoe leather costs” — the costs of needing to go more frequently to the bank (as one then did) to restock on cash. A more consequential example is public incomes or payments fixed in money terms. As recently as the 1990s, one could find FOMC members talking about bracket creep and eroded Social Security payments as possible costs of higher inflation — albeit with some embarrassment, since the schedules of both were already indexed by then. More broadly, in an economy organized around money payments, changes in what a given flow of money can buy will create problems. Here’s one way to think about these problems:

Social coordination requires a mix of certainty and flexibility. It requires economic units to make all kinds of decisions in anticipation of the choices of other units — we are working together; my plans won’t work out if you can change yours too freely. But at the same time, you need to have enough space to adapt to new developments — as with train cars, there needs to be some slack in the coupling between economic units for things to run smoothly. One dimension of this slack is the treatment of some extended period as if it were a single instant.

This is such a basic, practical requirement of contracting and management that we hardly think about it. For example, budgets — most organizations budget for periods no shorter than a quarter, which means that as far as internal controls and reporting are concerned, anything that happens within that quarter happens at the same time.6 Similarly, invoices normally require payment in 30 or 60 days, thus treating shorter durations as instantaneous. Contracts of all kinds are signed for extended periods on fixed money terms. All these arrangements assume that changes in prices over a few months or a year are small enough to be safely ignored. They can be modified when inflation is high enough to make the fiction that 30, 60 or 90 days is an instant untenable. But social coordination strongly benefits from the convention that price changes over such short periods can be ignored, which means people behave in practice as if they expect inflation over those periods to be zero.

Axel Leijonhufvud’s mid-70s piece on inflation is one of the most compelling accounts of this kind of cost of inflation — the breakdown of social coordination — that I have seen. For him, the stability of money prices is the sine qua non of decentralized coordination through markets.

In largely nonmonetary economies, important economic rights and obligations will be inseparable from particularized relationships of social status and political allegiance and will be in some measure permanent, inalienable and irrevocable. … In monetary exchange systems, in contrast, the value to the owner of an asset derives from rights, privileges, powers and immunities against society generally rather than from the obligation of some particular person. …

Neoclassical theories rest on a set of abstractions that separate “economic” transactions from the totality of social and political interactions in the system. For a very large set of problems, this separation “works”… But it assumes that the events that we make the subject of … the neoclassical model of the “economic system” do not affect the “social-political system” so as … to invalidate the institutional ceteris paribus clauses of that model. …

 Double-digit inflation may label a class of events for which this assumption is a bad one. … It may be that … before the “near-neutral” adjustments can all be smoothly achieved, society unlearns to use money confidently and reacts by restrictions on “the circles people shall serve, the prices they shall charge, and the goods they can buy.”

One important point here is that inflation has a much greater impact than in conventional theory because of the price-stability assumption incorporated into any contract that is denominated in money terms and not settled instantly — which is to say, pretty much any contract. So whatever expectations of inflation people actually hold, the whole legal-economic system is constructed in a way that makes it behave as if inflation expectations were biased toward zero:

The price stability fiction — a dollar is a dollar is a dollar — is as ingrained in our laws as if it were a constitutional principle. Indeed, it may be that no real constitutional principle permeates the Law as completely as does this manifest fiction.

The market-prices-or-feudalism tone of this seems more than a little overheated from today’s perspective, and when Arjun and I asked him about this piece a few years ago, he seemed a bit embarrassed by it. But I still think there is something to it. Market coordination, market rationality, the organization of productive activity through money payments and commitments, really does require the fiction of a fixed relationship between quantities of money and real things. There is some level of inflation at which this is no longer tenable.

So I have no problem with the conventional view that really high inflations — triple digits and above — can cause far-reaching breakdowns in social coordination. But this is not relevant to the question of inflation of 1 or 2 or 5 or probably even 10 percent. 

In this sense, I think the mainstream paradoxically both understates and overstates the real costs of inflation. They exaggerate the importance of small differences in inflation. But at the same time, because they completely naturalize the organization of life through markets, they are unable to talk about the possibility that it could break down.

But again, this kind of breakdown of market coordination is not relevant for the sorts of inflation seen in the United States or other rich countries in modern times. 

It’s easier to talk about the costs (and benefits) of inflation when we see it as a change in relative prices, and redistribution of income and wealth. If inflation is typically a change in relative prices, then the costs are experienced by those whose incomes rise more slowly than their payments. Keynes emphasized this point in an early article on “Social Consequences of a Change in the Value of Money.”7

A change in the value of money, that is to say in the level of prices, is important to Society only in so far as its incidence is unequal. Such changes have produced in the past, and are producing now, the vastest social consequences, because, as we all know, when the value of money changes, it does not change equally for all persons or for all purposes. … 

Keynes sees the losers from inflation as passive wealth owners, while the winners are active businesses and farmers; workers may gain or lose depending on the degree to which they are organized. For this reason, he sees moderate inflation as being preferable to moderate deflation, though both as evils to be avoided — until well after World War II, the goal of price stability meant what it said.

Let’s return for a minute to the question of wages. As far as I can tell, the experience in modern inflations is that wage changes typically lag behind prices. If you plot nominal wage growth against inflation, you’ll see a clear positive relationship, but with a slope well below 1. This might seem to contradict what I said under point 4. But my point there was that insofar as inflation is driven by increased worker bargaining power, it should be associated with faster real wage growth. In fact, the textbook is wrong not just on logic but on facts. In principle, a wage-driven inflation would see a rise in real wages. But most real inflations are not wage-driven.

In practice, the political costs of inflation are probably mostly due to a relatively small number of highly salient prices. 

7. Inflation and production. The old monetarist view had a fixed quantity of money confronting a fixed quantity of goods, with the price level ending up at whatever equated them. As I mentioned above, the fixed-quantity-of-money part of this has been largely abandoned by modern mainstream as well as heterodox economists. But what about the other side? Why doesn’t more spending call forth more production?

The contemporary mainstream has, it seems to me, a couple of ways of answering the question. One is the approach of a textbook like Blanchard’s. There, higher spending does lead to higher employment and output and lower unemployment. But unless unemployment is at a single unique level — the NAIRU — inflation will rise or fall without limit. It’s exceedingly hard to find anything that looks like a NAIRU in the data, as critics have been pointing out for a long time. Even Blanchard himself rejects it when he’s writing for central bankers rather than undergraduates.

There’s a deeper conceptual problem as well. In this story, there is a tradeoff between unemployment and inflation. Unemployment below the NAIRU does mean higher real output and income. The cost of this higher output is an inflation rate that rises steadily from year to year. But even if we believed this, we might ask, how much inflation acceleration is too much? Can we rule out that a permanently higher level of output might be worth a slowly accelerating inflation rate?

Think about it: In the old days, the idea that the price level could increase without limit was considered crazy. After World War II, the British government imposed immense costs on the country not just to stabilize inflation, but to bring the price level back to its prewar level. In the modern view, this was crazy — the level of prices is completely irrelevant. The first derivative of prices — the inflation rate — is also inconsequential, as long as it is stable and predictable. But the second derivative — the change in the rate of inflation — is apparently so consequential that it must be kept at exactly zero at all costs. It’s hard to find a good answer, or indeed any answer, for why this should be so.

The more practical mainstream answer is to say that, rather than a tradeoff between unemployment and inflation with one unambiguously best choice, there is no tradeoff at all. In this story, there is a unique level of potential output (not a feature of the textbook model) at which the relationship between demand, unemployment and inflation changes. Below potential, more spending calls forth more production and employment; above potential, more spending only calls forth higher inflation. This looks better as a description of real economies, particularly given the recent experience of long periods of elevated unemployment that have not, contrary to the NAIRU prediction, resulted in ever-accelerating deflation. But it begs the question of why there should be such a sharp line.

The alternative view would be that investment, technological change, and other determinants of “potential output” also respond to demand. Supply constraints, in this view, are better thought of in terms of the speed with which supply can respond to demand, rather than an absolute ceiling on output.

Well, this post has gotten too long, and has been sitting in the virtual drawer for quite a while as I keep adding to it. So I am going to break off here. But it seems to me that this is where the most interesting conversations around inflation are going right now — the idea that supply constraints are not absolute but respond to demand with varying lags, and that inflation should often be seen as a temporary cost of adjustment to a new, higher level of capacity. And the corollary, that anti-inflation policy should aim at identifying supply constraints as much as, or more than, restraining demand.

“Monetary Policy in a Changing World”

While looking for something else, I came across this 1956 article on monetary policy by Erwin Miller. It’s a fascinating read, especially in light of current discussions about, well, monetary policy in a changing world. Reading the article was yet another reminder that, in many ways, debates about central banking were more sophisticated and far-reaching in the 1950s than they are today.

The recent discussions have been focused mainly on what the goals or targets of monetary policy should be. While the rethinking there is welcome — higher wages are not a reliable sign of rising inflation; there are good reasons to accept above-target inflation, if it developed — the tool the Fed is supposed to be using to hit these targets is the overnight interest rate faced by banks, just as it’s been for decades. The mechanism by which this tool works is basically taken for granted — economy-wide interest rates move with the rate set by the Fed, and economic activity reliably responds to changes in these interest rates. If this tool has been ineffective recently, that’s just about the special conditions of the zero lower bound. Still largely off limits are the ideas that, when effective, monetary policy affects income distribution and the composition of output and not just its level, and that, to be effective, monetary policy must actively direct the flow of credit within the economy and not just control the overall level of liquidity.

Miller is asking a more fundamental question: What are the institutional requirements for monetary policy to be effective at all? His answer is that conventional monetary policy makes sense in a world of competitive small businesses and small government, but that different tools are called for in a world of large corporations and a public sector that accounts for a substantial part of economic activity. It’s striking that the assumptions he already thought were outmoded in the 1950s still guide most discussions of macroeconomic policy today.1

From his point of view, relying on the interest rate as the main tool of macroeconomic management is just an unthinking holdover from the past — the “normal” world of the 1920s — without regard for the changed environment that would favor other approaches. It’s just the same today — with the one difference that you’ll no longer find these arguments in the Quarterly Journal of Economics.2

Rather than resort unimaginatively to traditional devices whose heyday was one with a far different institutional environment, authorities should seek newer solutions better in harmony with the current economic ‘facts of life.’ These newer solutions include, among others, real estate credit control, consumer credit control, and security reserve requirements…, all of which … restrain the volume of credit available in the private sector of the economy.

Miller has several criticisms of conventional monetary policy, or as he calls it, “flexible interest rate policies” — the implicit alternative being the wartime policy of holding key rates fixed. One straightforward criticism is that changing interest rates is itself a form of macroeconomic instability. Indeed, insofar as both interest rates and inflation describe the terms on which present goods trade for future goods, it’s not obvious why stable inflation should be a higher priority than stable interest rates.

A second, more practical problem is that to the extent that a large part of outstanding debt is owed by the public sector, the income effects of interest rate changes will become more important than the price effects. In a world of large public debts, conventional monetary policy will affect mainly the flow of interest payments on existing debt rather than new borrowing. Or as Miller puts it,

If government is compelled to borrow on a large scale for such reasons of social policy — i.e., if the expenditure programs are regarded as of such compelling social importance that they cannot be postponed merely for monetary considerations — then it would appear illogical to raise interest rates against government, the preponderant borrower, in order to restrict credit in the private sphere.

Arguably, this consideration applied more strongly in the 1950s, when government accounted for the majority of all debt outstanding; but even today governments (federal plus state and local) account for over a third of total US debt. And the same argument goes for many forms of private debt as well.

As a corollary to this argument — and my MMT friends will like this — Miller notes that a large fraction of federal debt is held by commercial banks, whose liabilities in turn serve as money. This two-step process is, in some sense, equivalent to simply having the government issue the money — except that the private banks get paid interest along the way. Why would inflation call for an increase in this subsidy?

Miller:

The continued existence of a large amount of that bank-held debt may be viewed as a sop to convention, a sophisticated device to issue needed money without appearing to do so. However, it is a device which requires that a subsidy (i.e., interest) be paid the banks to issue this money. It may therefore be argued that the government should redeem these bonds by an issue of paper money (or by an issue of debt to the central bank in exchange for deposit credit). … The upshot would be the removal of the governmental subsidy to banks for performing a function (i.e., creation of money) which constitutionally is the responsibility of the federal government.

Finance franchise, anyone?

This argument, I’m sorry to say, does not really work today — only a small fraction of federal debt is now owned by commercial banks, and there’s no longer a link, if there ever was, between their holdings of federal debt and the amount of money they create by lending. There are still good arguments for a public payments system, but they have to be made on other grounds.

The biggest argument against using a single interest rate as the main tool of macroeconomic management is that it doesn’t work very well. The interesting thing about this article is that Miller doesn’t spend much time on this point. He assumes his readers will already be skeptical:

There remains the question of the effectiveness of interest rates as a deterrent to potential private borrowing. The major arguments for each side of this issue are thoroughly familiar and surely demonstrate most serious doubt concerning that effectiveness.

Among other reasons, interest is a small part of overall cost for most business activity. And in any situation where macroeconomic stabilization is needed, it’s likely that expected returns will be moving for other reasons much faster than a change in interest rates can compensate for. Keynes says the same thing in the General Theory, though Miller doesn’t mention it.3 (Maybe in 1956 there wasn’t any need to.)

Because the direct link between interest rates and activity is so weak, Miller notes, more sophisticated defenders of the central bank’s stabilization role argue that it’s not so much a direct link between interest rates and activity as the effect of changes in the policy rate on banks’ lending decisions. These arguments “skillfully shift the points of emphasis … to show how even modest changes in interest rates can bring about significant credit control effects.”

Here Miller is responding to arguments made by a line of Fed-associated economists from his contemporary Robert Roosa through Ben Bernanke. The essence of these arguments is that the main effect of interest rate changes is not on the demand for credit but on the supply. Banks famously lend long and borrow short, so a bank’s lending decisions today must take into account financing conditions in the future.4 A key piece of this argument — which makes it an improvement on orthodoxy, even if Miller is ultimately right to reject it — is that the effect of monetary policy can’t be reduced to a regular mathematical relationship, like the interest-output semi-elasticity of around 1 found in contemporary forecasting models. Rather, the effect of policy changes today depends on their effects on beliefs about policy tomorrow.

There’s a family resemblance here to modern ideas about forward guidance — though people like Roosa understood that managing market expectations was a trickier thing than just announcing a future policy. But even if one granted the effectiveness of this approach, an instrument that depends on changing beliefs about the long-term future is obviously unsuitable for managing transitory booms and busts.

A related point is that insofar as rising rates make it harder for banks to finance their existing positions, there is a chance this will create enough distress that the Fed will have to intervene — which will, of course, have the effect of making credit more available again. Once the focus shifts from the interest rate to credit conditions, there is no sharp line between the Fed’s monetary policy and lender of last resort roles.

A further criticism of conventional monetary policy is that it disproportionately impacts more interest-sensitive or liquidity-constrained sectors and units. Defenders of conventional monetary policy claim (or more often tacitly assume) that it affects all economic activity equally. This supposedly uniform effect is what makes it seem an effective tool for macroeconomic management, and it helps resolve the ideological tension between the need for such management and the belief in a self-regulating market economy. But of course the effect is not uniform. This is both because debtors and creditors are different, and because interest makes up a different share of the cost of different goods and services.

In particular, investment, especially investment in housing and other structures, is more sensitive to interest and liquidity conditions than current production. Or as Miller puts it, “Interest rate flexibility uses instability of one variety to fight instability of a presumably more serious variety: the instability of the loanable funds price-level and of capital values is employed in an attempt to check commodity price-level and employment instability.” (emphasis added)

The point that interest rate changes, and monetary conditions generally, change the relative price of capital goods and consumption goods is important. Like much of Miller’s argument, it’s an unacknowledged borrowing from Keynes; more strikingly, it’s an anticipation of Minsky’s famous “two price” model, where the relative price of capital goods and current output is given a central role in explaining macroeconomic dynamics.

If we take a step back, of course, it’s obvious that some goods are more illiquid than others, and that liquidity conditions, or the availability of financing, will matter more for production of these goods than for the more immediately saleable ones. Which is one reason that it makes no sense to think that money is ever “neutral.”5

Miller continues:

In inflation, e.g., employment of interest rate flexibility would have as a consequence the spreading of windfall capital losses on security transactions, the impairment of capital values generally, the raising of interest costs of governmental units at all levels, the reduction in the liquidity of individuals and institutions in random fashion without regard for their underlying characteristics, the jeopardizing of the orderly completion of financing plans of nonfederal governmental units, and the spreading of fear and uncertainty generally.

Some businesses have large debts; when interest rates rise, their earnings fall relative to businesses that happen to have less debt. Some businesses depend on external finance for investment; when interest rates rise, their costs rise relative to businesses that are able to finance investment internally. In some industries, like residential construction, interest is a big part of overall costs; when interest rates rise, these industries will shrink relative to ones that don’t finance their current operations with credit.
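A toy comparison of two firms makes the distributional point explicit. The figures are invented; only the contrast matters.

```python
# Two otherwise identical firms, one with floating-rate debt and one with
# none, facing the same rise in interest rates. All numbers are invented.
operating_income = 100.0
debt = 1_000.0

def earnings_after_interest(rate, debt_outstanding):
    # Earnings after interest, holding operating income fixed.
    return operating_income - rate * debt_outstanding

for rate in (0.02, 0.05):
    indebted = earnings_after_interest(rate, debt)
    debt_free = earnings_after_interest(rate, 0.0)
    print(f"interest rate {rate:.0%}: indebted firm earns {indebted:.0f}, "
          f"debt-free firm earns {debt_free:.0f}")
# The same rate increase leaves the debt-free firm untouched while cutting the
# indebted firm's earnings from 80 to 50. The policy does not restrain all
# activity uniformly; it redistributes income and activity across units.
```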

In all these ways, monetary policy is a form of central planning, redirecting activity from some units and sectors to other units and sectors. It’s just a concealed, and in large part for that reason crude and clumsy, form of planning.

Or as Miller puts it, conventional monetary policy

discriminates between those who have equity funds for purchases and those who must borrow to make similar purchases. … In so far as general restrictive action successfully reduces the volume of credit in use, some of those businesses and individuals dependent on bank credit are excluded from purchase marts, while no direct restraint is placed on those capable of financing themselves.

In an earlier era, Miller suggests, most borrowing was for business investment; most investment was externally financed; and business cycles were driven by fluctuations in investment. So there was a certain logic to focusing on interest rates as a tool of stabilization. Honestly, I’m not sure if that was ever true. But I certainly agree that by the 1950s — let alone today — it was not.

In a footnote, Miller offers a more compelling version of this story, attributing to the British economist R. S. Sayers the idea of

sensitive points in an economy. [Sayers] suggests that in the English economy mercantile credit in the middle decades of the nineteenth century and foreign lending in the later decades of that century were very sensitive spots and that the bank rate technique was particularly effective owing to its impact upon them. He then suggests that perhaps these sensitive points have given way to newer ones, namely, stock exchange speculation and consumer credit. Hence he concludes that central bank instruments should be employed which are designed to control these newer sensitive areas.

This, to me, is a remarkably sophisticated view of how we should think about monetary policy and credit conditions. It’s not an economywide increase or decrease in activity, which can be imagined as a representative household shifting their consumption over time; it’s a response of whatever specific sectors or activities are most dependent on credit markets, which will be different in different times and places. Which suggests that a useful education on monetary policy requires less calculus and more history and sociology.

Finally, we get to Miller’s own proposals. In part, these are for selective credit controls — direct limits on the volume of specific kinds of lending are likely to be more effective at reining in inflationary pressures, with less collateral damage. Yes, these kinds of direct controls pick winners and losers — no more than conventional policy does, just more visibly. As Miller notes, credit controls imposed for macroeconomic stabilization wouldn’t be qualitatively different from the various regulations on credit that are already imposed for other purposes — tho admittedly that argument probably went further in a time when private credit was tightly regulated than in the permanent financial Purge we live in today.

His other proposal is for comprehensive security reserve requirements — in effect generalizing the limits on bank lending to financial positions of all kinds. The logic of this idea is clear, but I’m not convinced — certainly I wouldn’t propose it today. I think when you have the kind of massive, complex financial system we have today, rules that have to be applied in detail, at the transaction level, are very hard to make effective. It’s better to focus regulation on the strategic high ground — but please don’t ask me where that is!

More fundamentally, I think the best route to limiting the power of finance is for the public sector itself to take over functions private finance currently provides, as with a public payments system, public investment banks, etc. This also has the important advantage of supporting broader steps toward an economy built around human needs rather than private profit. And it’s the direction that, grudgingly but steadily, the response to various crises is already pushing us, with the Fed and other authorities reluctantly stepping in to perform various functions that the private financial system fails to provide. But this is a topic for another time.

Miller himself is rather tentative in his positive proposals. And he forthrightly admits that they are “like all credit control instruments, likely to be far more effective in controlling inflationary situations than in stimulating revival from a depressed condition.” This should be obvious — even Ronald Reagan knew you can’t push on a string. This basic asymmetry is one of the many everyday insights that were lost somewhere in the development of modern macro.

The conversation around monetary policy and macroeconomics is certainly broader and more realistic today than it was 15 or 20 years ago, when I started studying this stuff. And Jerome Powell — and even more the activists and advocates who’ve been shouting at him — deserves credit for the Fed’s tentative moves away from the reflexive fear of full employment that has governed monetary policy for so long. But when you take a longer look and compare today’s debates to earlier decades, it’s hard not to feel that we’re still living in the Dark Ages of macroeconomics.

In Jacobin: A Demystifying Decade for Economics

(The new issue of Jacobin has a piece by me on the state of economics ten years after the crisis. The published version is here. I’ve posted a slightly expanded version below. Even though Jacobin was generous with the word count and Seth Ackerman’s edits were as always superb, they still cut some material that, as king of the infinite space of this blog, I would rather include.)

 

For Economics, a Demystifying Decade

Has economics changed since the crisis? As usual, the answer is: It depends. If we look at the macroeconomic theory of PhD programs and top journals, the answer is clearly no. Macroeconomic theory remains the same self-contained, abstract art form that it has been for the past twenty-five years. But despite its hegemony over the peak institutions of academic economics, this mainstream is not the only mainstream. The economics of the mainstream policy world (central bankers, Treasury staffers, Financial Times editorialists), only intermittently attentive to the journals at the best of times, has gone its own way; the pieties of a decade ago have much less of a hold today. And within the elite academic world, there’s plenty of empirical work that responds to the developments of the past ten years, even if it doesn’t — yet — add up to any alternative vision.

For a socialist, it’s probably a mistake to see economists primarily as either carriers of valuable technical expertise or systematic expositors of capitalist ideology. They are participants in public debates just like anyone else. The profession as a whole is more often found trailing after political developments than advancing them.

***

The first thing to understand about macroeconomic theory is that it is weirder than you think. The heart of it is the idea that the economy can be thought of as a single infinite-lived individual trading off leisure and consumption over all future time. For an orthodox macroeconomist – anyone who hoped to be hired at a research university in the past 30 years – this approach isn’t just one tool among others. It is macroeconomics. Every question has to be expressed as finding the utility-maximizing path of consumption and production over all eternity, under a precisely defined set of constraints. Otherwise it doesn’t scan.

This approach is formalized in something called the Euler equation, the first-order condition for maximizing an infinite series of discounted future values. Some version of this equation is the basis of most articles on macroeconomic theory published in a mainstream journal in the past 30 years. It might seem like an odd default, given the obvious fact that real economies contain households, businesses, governments and other distinct entities, none of whom can turn income in the far distant future into spending today. But it has the advantage of fitting macroeconomic problems — which at face value involve uncertainty, conflicting interests, coordination failures and so on — into the scarce-means-and-competing-ends Robinson Crusoe vision that has long been economics’ home ground.
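
For readers who haven’t seen it, here is the equation in its most common textbook form (a generic sketch, not any particular paper’s specification). The representative agent chooses a consumption path c_t to maximize the discounted sum \sum_t \beta^t u(c_t), and the Euler equation is the resulting first-order condition:

u'(c_t) = \beta \, E_t\left[ (1 + r_{t+1}) \, u'(c_{t+1}) \right]

where \beta is the discount factor, r_{t+1} the real interest rate, and u'(\cdot) marginal utility. It says the agent shifts consumption between today and tomorrow until no further gain is possible from doing so; the rest of a typical model is built out from this condition.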

There’s a funny history to this technique. It was invented by Frank Ramsey, a young philosopher and mathematician in Keynes’ Cambridge circle in the 1920s, to answer the question: If you were organizing an economy from the top down and had to choose between producing for present needs and investing to allow more production later, how would you decide the ideal mix? The Euler equation offers a convenient tool for expressing the tradeoff between production in the future and production today.

This makes sense as a way of describing what a planner should do. But through one of those transmogrifications intellectual history is full of, the same formalism was picked up and popularized after World War II by Solow and Samuelson as a description of how growth actually happens in capitalist economies. The problem of macroeconomics has continued to be framed as how an ideal planner should direct consumption and production to produce the best outcomes for everyone, often with the “ideal planner” language intact. Pick up any modern economics textbook and you’ll find that substantive questions can’t be asked except in terms of how a far-sighted agent would choose this path of consumption as the best possible one allowed by the model.

There’s nothing wrong with adopting a simplified formal representation of a fuzzier and more complicated reality. As Marx said, abstraction is the social scientist’s substitute for the microscope or telescope. But these models are not simple by any normal human definition. The models may abstract away from features of the world that non-economists might think are rather fundamental to “the economy” — like the existence of businesses, money, and government — but the part of the world they do represent — the optimal tradeoff between consumption today and consumption tomorrow — is described in the greatest possible detail. This combination of extreme specificity on one dimension and extreme abstraction on the others might seem weird and arbitrary. But in today’s profession, if you don’t at least start from there, you’re not doing economics.

At the same time, many producers of this kind of model do have a quite realistic understanding of the behavior of real economies, often informed by first-hand experience in government. The combination of tight genre constraints and real insight leads to a strange style of theorizing, where the goal is to produce a model that satisfies the conventions of the discipline while arriving at a conclusion that you’ve already reached by other means. Michael Woodford, perhaps the leading theorist of “New Keynesian” macroeconomics, more or less admits that the purpose of his models is to justify the countercyclical interest rate policy already pursued by central banks in a language acceptable to academic economists. Of course the central bankers themselves don’t learn anything from such an exercise — and you will scan the minutes of Fed meetings in vain for discussion of first-order ARIMA technology shocks — but they presumably find it reassuring to hear that what they already thought is consistent with the most modern economic theory. It’s the economic equivalent of the college president in Randall Jarrell’s Pictures from an Institution:

About anything, anything at all, Dwight Robbins believed what Reason and Virtue and Tolerance and a Comprehensive Organic Synthesis of Values would have him believe. And about anything, anything at all, he believed what it was expedient for the president of Benton College to believe. You looked at the two beliefs, and lo! the two were one. Do you remember, as a child without much time, turning to the back of the arithmetic book, getting the answer to a problem, and then writing down the summary hypothetical operations by which the answer had been, so to speak, arrived at? It is the only method of problem-solving that always gives correct answers…

The development of theory since the crisis has followed this mold. One prominent example: After the crash of 2008, Paul Krugman immediately began talking about the liquidity trap and the “perverse” Keynesian claims that become true when interest rates are stuck at zero. Fiscal policy was now effective, there was no danger of inflation from increases in the money supply, a trade deficit could cost jobs, and so on. He explicated these ideas with the help of the “IS-LM” models found in undergraduate textbooks — genuinely simple abstractions that haven’t played a role in academic work in decades.

Some years later, he and Gauti Eggertsson unveiled a model in the approved New Keynesian style, which showed that, indeed, if interest rates were fixed at zero then fiscal policy, normally powerless, now became highly effective. This exercise may have been a display of technical skill (I suppose; I’m not a connoisseur), but what do we learn from it? After all, generating that conclusion was the announced goal from the beginning. The formal model was retrofitted to generate the argument that Krugman and others had been making for years, and lo! the two were one.

It’s a perfect example of Joan Robinson’s line that economic theory is the art of taking a rabbit out of a hat, when you’ve just put it into the hat in full view of the audience. I suppose what someone like Krugman might say in his defense is that he wanted to find out if the rabbit would fit in the hat. But if you do the math right, it always does.

(What’s funnier in this case is that the rabbit actually didn’t fit, but they insisted on pulling it out anyway. As the conservative economist John Cochrane gleefully pointed out, the same model also says that raising taxes on wages should boost employment in a liquidity trap. But no one believed that before writing down the equations, so they didn’t believe it afterward either. As Krugman’s coauthor Eggertsson judiciously put it, “there may be reasons outside the model” to reject the idea that increasing payroll taxes is a good idea in a recession.)

Left critics often imagine economics as an effort to understand reality that’s gotten hopelessly confused, or as a systematic effort to uphold capitalist ideology. But I think both of these claims are, in a way, too kind; they assume that economic theory is “about” the real world in the first place. Better to think of it as a self-constrained art form, whose apparent connections to economic phenomena are results of a confusing overlap in vocabulary. Think about chess and medieval history: The statement that “queens are most effective when supported by strong bishops” might be reasonable in both domains, but its application in the one case will tell you nothing about its application in the other.

Over the past decade, people (such as, famously, Queen Elizabeth) have often asked why economists failed to predict the crisis. As a criticism of economics, this is simultaneously setting the bar too high and too low. Too high, because crises are intrinsically hard to predict. Too low, because modern macroeconomics doesn’t predict anything at all. As Suresh Naidu puts it, the best way to think about what most economic theorists do is as a kind of constrained-maximization poetry. It makes no more sense to ask “is it true” of such a model than of a haiku.

***

While theory buzzes around in its fly-bottle, empirical macroeconomics, more attuned to concrete developments, has made a number of genuinely interesting departures. Several areas have been particularly fertile: the importance of financial conditions and credit constraints; government budgets as a tool to stabilize demand and employment; the links between macroeconomic outcomes and the distribution of income; and the importance of aggregate demand even in the long run.

Not surprisingly, the financial crisis spawned a new body of work trying to assess the importance of credit, and financial conditions more broadly, for macroeconomic outcomes. (Similar bodies of work were produced in the wake of previous financial disruptions; these however don’t get much cited in the current iteration.) A large number of empirical papers tried to assess how important access to credit was for household spending and business investment, and how much of the swing from boom to bust could be explained by the tighter limits on credit. Perhaps the outstanding figures here are Atif Mian and Amir Sufi, who assembled a large body of evidence that the boom in lending in the 2000s reflected mainly an increased willingness to lend on the part of banks, rather than an increased desire to borrow on the part of families; and that the subsequent debt overhang explained a large part of depressed income and employment in the years after 2008.

While Mian and Sufi occupy solidly mainstream positions (at Princeton and Chicago, respectively), their work has been embraced by a number of radical economists who see vindication for long-standing left-Keynesian ideas about the financial roots of economic instability. Markus Brunnermeier (also at Princeton) and his coauthors have also done interesting work trying to untangle the mechanisms of the 2008 financial crisis and to generalize them, with particular attention to the old Keynesian concept of liquidity. That finance is important to the economy is not, in itself, news to anyone other than economists; but this new empirical work is valuable in translating this general awareness into concrete usable form.

A second area of renewed empirical interest is fiscal policy — the use of the government budget to manage aggregate demand. Even more than with finance, economics here has followed rather than led the policy debate. Policymakers were turning to large-scale fiscal stimulus well before academics began producing studies of its effectiveness. Still, it’s striking how many new and sophisticated efforts there have been to estimate the fiscal multiplier — the increase in GDP generated by an additional dollar of government spending.

In the US, there’s been particular interest in using variation in government spending and unemployment across states to estimate the effect of the former on the latter. The outstanding work here is probably that of Gabriel Chodorow-Reich. Like most entries in this literature, Chodorow-Reich’s suggests fiscal multipliers that are higher than almost any mainstream economist would have accepted a decade ago, with each dollar of government spending adding perhaps two dollars to GDP. Similar work has been published by the IMF, which acknowledged that past studies had “significantly underestimated” the positive effects of fiscal policy. This mea culpa was particularly striking coming from the global enforcer of economic orthodoxy.
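
To fix terms (this is the generic definition, not the estimating equation of any of the studies mentioned here): if \Delta G is the change in government spending and \Delta Y the resulting change in output, the multiplier is

k \equiv \frac{\Delta Y}{\Delta G}

so an estimate of k \approx 2 means that, say, an extra $100 billion of spending in a slump raises GDP by roughly $200 billion while it is in effect.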

The IMF has also revisited its previously ironclad opposition to capital controls — restrictions on financial flows across national borders. More broadly, it has begun to offer, at least intermittently, a platform for work challenging the “Washington Consensus” it helped establish in the 1980s, though this shift predates the crisis of 2008. The changed tone coming out of the IMF’s research department has so far been only occasionally matched by a change in its lending policies.

Income distribution is another area where there has been a flowering of more diverse empirical work in the past decade. Here of course the outstanding figure is Thomas Piketty. With his collaborators (Gabriel Zucman, Emmanuel Saez and others) he has practically defined a new field. Income distribution has always been a concern of economists, of course, but it has typically been assumed to reflect differences in “skill.” The large differences in pay that appeared to be unexplained by education, experience, and so on, were often attributed to “unmeasured skill.” (As John Eatwell used to joke: Hegemony means you get to name the residual.)

Piketty made distribution — between labor and capital, not just across individuals — into something that evolves independently, and that belongs to the macro level of the economy as a whole rather than the micro level of individuals. When his book Capital in the 21st Century was published, a great deal of attention was focused on the formula “r > g,” supposedly reflecting a deep-seated tendency for capital accumulation to outpace economic growth. But in recent years there’s been an interesting evolution in the empirical work Piketty and his coauthors have published, focusing on countries like Russia/USSR and China, which didn’t feature in the original survey. Political and institutional factors like labor rights and the legal forms taken by businesses have moved to center stage, while the formal reasoning of “r > g” has receded — sometimes literally to a footnote. While no longer embedded in the grand narrative of Capital in the 21st Century, this body of empirical work is extremely valuable, especially since Piketty and company are so generous in making their data publicly available. It has created space for younger scholars to make similar long-run studies of the distribution of income and wealth in countries that the Piketty team hasn’t yet reached, like Rishabh Kumar’s superb work on India. And it has been extended by other empirical economists, like Lukas Karabarbounis and coauthors, who have looked at changes in income distribution through the lens of market power and the distribution of surplus within the corporation — not something a University of Chicago economist would have been likely to study a decade ago.

A final area where mainstream empirical work has wandered well beyond its pre-2008 limits is the question of whether aggregate demand — and money and finance more broadly — can affect long-run economic outcomes. The conventional view, still dominant in textbooks, draws a hard line between the short run and the long run, more or less meaning a period longer than one business cycle. In the short run, demand and money matter. But in the long run, the path of the economy depends strictly on “real” factors — population growth, technology, and so on.

Here again, the challenge to conventional wisdom has been prompted by real-world developments. On the one hand, weak demand — reflected in historically low interest rates — has seemed to be an ongoing rather than a cyclical problem. Lawrence Summers dubbed this phenomenon “secular stagnation,” reviving a phrase used in the 1940s by the early American Keynesian Alvin Hansen.

On the other hand, it has become increasingly clear that the productive capacity of the economy is not something separate from current demand and production levels, but dependent on them in various ways. Unemployed workers stop looking for work; businesses operating below capacity don’t invest in new plant and equipment or develop new technology. This has manifested itself most clearly in the fall in labor force participation over the past decade, which has been considerably greater than can be explained on the basis of the aging population or other demographic factors. The bottom line is that an economy that spends several years producing less than it is capable of, will be capable of producing less in the future. This phenomenon, usually called “hysteresis,” has been explored by economists like Laurence Ball, Summers (again) and Brad DeLong, among others. The existence of hysteresis, among other implications, suggests that the costs of high unemployment may be greater than previously believed, and conversely that public spending in a recession can pay for itself by boosting incomes and taxes in future years.
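
One stylized way to write the idea down (an illustrative sketch, not the specification used in the papers by Ball or DeLong and Summers): let y_t and y^*_t be the logs of actual and potential output, and suppose potential grows at a trend rate g but is dragged down by past shortfalls,

y^*_t = y^*_{t-1} + g - \eta \left( y^*_{t-1} - y_{t-1} \right), \qquad \eta \ge 0.

With \eta = 0 the long run is untouched by demand, as in the textbook dichotomy; with \eta > 0, every year spent below potential permanently lowers the path of potential itself, which is the hysteresis case.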

These empirical lines are hard to fit into the box of orthodox theory — not that people don’t try. But so far they don’t add up to more than an eclectic set of provocative results. The creativity in mainstream empirical work has not yet been matched by any effort to find an alternative framework for thinking of the economy as a whole. For people coming from non-mainstream paradigms — Marxist or Keynesian — there is now plenty of useful material in mainstream empirical macroeconomics to draw on – much more than in the previous decade. But these new lines of empirical work have been forced on the mainstream by developments in the outside world that were too pressing to ignore. For the moment, at least, they don’t imply any systematic rethinking of economic theory.

***

Perhaps the central feature of the policy mainstream a decade ago was a smug and, in retrospect, remarkable complacency: a confidence that the macroeconomic problem had been solved by independent central banks like the Federal Reserve. For a sense of the pre-crisis consensus, consider this speech by a prominent economist in September 2007, just as the US was heading into its worst recession since the 1930s:

One of the most striking facts about macropolicy is that we have progressed amazingly. … In my opinion, better policy, particularly on the part of the Federal Reserve, is directly responsible for the low inflation and the virtual disappearance of the business cycle in the last 25 years. … The story of stabilization policy of the last quarter century is one of amazing success.

You might expect the speaker to be a right-wing Chicago type like Robert Lucas, whose claim that “the problem of depression prevention has been solved” was widely mocked after the crisis broke out. But in fact it was Christina Romer, soon headed to Washington as the Obama administration’s top economist. In accounts of the internal debates over fiscal policy that dominated the early days of the administration, Romer often comes across as one of the heroes, arguing for a big program of public spending against more conservative figures like Summers. So it’s especially striking that in the 2007 speech she spoke of a “glorious counterrevolution” against Keynesian ideas. Indeed, she saw the persistence of the idea of using deficit spending to fight unemployment as the one dark spot in an otherwise cloudless sky. There’s more than a little irony in the fact that opponents of the massive stimulus Romer ended up favoring drew their intellectual support from exactly the arguments she had been making just a year earlier. But it’s also a vivid illustration of a consistent pattern: ideas have evolved more rapidly in the world of practical policy than among academic economists.

For further evidence, consider a 2016 paper by Jason Furman, Obama’s final chief economist, on “The New View of Fiscal Policy.” As chair of the White House Council of Economic Advisers, Furman embodied the policy-economics consensus ex officio. Though he didn’t mention his predecessor by name, his paper was almost a point-by-point rebuttal of Romer’s “glorious counterrevolution” speech of a decade earlier. It starts with four propositions shared until recently by almost all respectable economists: that central banks can and should stabilize demand all by themselves, with no role for fiscal policy; that public deficits raise interest rates and crowd out private investment; that budget deficits, even if occasionally called for, need to be strictly controlled with an eye on the public debt; and that any use of fiscal policy must be strictly short-term.

None of this is true, suggests Furman. Central banks cannot reliably stabilize modern economies on their own, increased public spending should be a standard response to a downturn, worries about public debt are overblown, and stimulus may have to be maintained indefinitely. While these arguments obviously remain within a conventional framework in which the role of the public sector is simply to maintain the flow of private spending at a level consistent with full employment, they nonetheless envision much more active management of the economy by the state. It’s a remarkable departure from textbook orthodoxy for someone occupying such a central place in the policy world.

Another example of orthodoxy giving ground under the pressure of practical policymaking is Narayana Kocherlakota. When he was appointed as President of the Federal Reserve Bank of Minneapolis, he was on the right of debates within the Fed, confident that if the central bank simply followed its existing rules the economy would quickly return to full employment, and rejecting the idea of active fiscal policy. But after a few years on the Fed’s governing Federal Open Market Committee (FOMC), he had moved to the far left, “dovish” end of opinion, arguing strongly for a more aggressive approach to bringing unemployment down by any means available, including deficit spending and more aggressive unconventional tools at the Fed. This meant rejecting much of his own earlier work, perhaps the clearest example of a high-profile economist repudiating his views after the crisis; in the process, he got rid of many of the conservative “freshwater” economists in the Minneapolis Fed’s research department.

The reassessment of central banks themselves has run on parallel lines but gone even farther.

For twenty or thirty years before 2008, the orthodox view of central banks offered a two-fold defense against the dangerous idea — inherited from the 1930s — that managing the instability of capitalist economies was a political problem. First, any mismatch between the economy’s productive capabilities (aggregate supply) and the desired purchases of households and businesses (aggregate demand) could be fully resolved by the central bank; the technicians at the Fed and its peers around the world could prevent any recurrence of mass unemployment or runaway inflation. Second, they could do this by following a simple, objective rule, without any need to balance competing goals.

During those decades, Alan Greenspan personified the figure of the omniscient central banker. Venerated by presidents of both parties, Greenspan was literally sanctified in the press — a 1990 cover of The International Economy had him in papal regalia, under the headline, “Alan Greenspan and His College of Cardinals.” A decade later, he would appear on the cover of Time as the central figure in “The Committee to Save the World,” flanked by Robert Rubin and the ubiquitous Summers. And the following year he showed up as the subject of Bob Woodward’s Maestro.

In the past decade, this vision of central banks and central bankers has eroded from several sides. The manifest failure to prevent huge falls in output and employment after 2008 is the most obvious problem. The deep recessions in the US, Europe and elsewhere make a mockery of the “virtual disappearance of the business cycle” that people like Romer had held out as the strongest argument for leaving macropolicy to central banks. And while Janet Yellen or Mario Draghi may be widely admired, they command nothing like the authority of a Greenspan.

The pre-2008 consensus is even more profoundly undermined by what central banks did do than by what they failed to do. During the crisis itself, the Fed and other central banks decided which financial institutions to rescue and which to allow to fail, which creditors would get paid in full and which would face losses. Both during the crisis and in the period of stagnation that followed, central banks also intervened in a much wider range of markets, on a much larger scale. In the US, perhaps the most dramatic moment came in late summer 2008, when the commercial paper market — the market for short-term loans used by the largest corporations — froze up, and the Fed stepped in with a promise to lend on its own account to anyone who had previously borrowed there. This watershed moment took the Fed from its usual role of regulating and supporting the private financial system, to simply replacing it.

That intervention lasted only a few months, but in other markets the Fed has largely replaced private creditors for a number of years now. Even today, it is the ultimate lender for about 20 percent of new mortgages in the United States. Policies of quantitative easing, in the US and elsewhere, greatly enlarged central banks’ weight in the economy — in the US, the Fed’s assets jumped from 6 percent of GDP to 25 percent, an expansion that is only now beginning to be unwound.  These policies also committed central banks to targeting longer-term interest rates, and in some cases other asset prices as well, rather than merely the overnight interest rate that had been the sole official tool of policy in the decades before 2008.

While critics (mostly on the Right) have objected that these interventions “distort” financial markets, this makes no sense from the perspective of a practical central banker. As central bankers like the Fed’s Ben Bernanke or the Bank of England’s Adam Posen have often said in response to such criticism, there is no such thing as an “undistorted” financial market. Central banks are always trying to change financial conditions to whatever they think favors full employment and stable prices. But as long as the interventions were limited to a single overnight interest rate, it was possible to paper over the contradiction between active monetary policy and the idea of a self-regulating economy, and pretend that policymakers were just trying to follow the “natural” interest rate, whatever that is. The much broader interventions of the past decade have brought the contradiction out into the open.

The broad array of interventions central banks have had to carry out over the past decade has also provoked some second thoughts about the functioning of financial markets even in normal times. If financial markets can get things wrong so catastrophically during crises, shouldn’t that affect our confidence in their ability to allocate credit the rest of the time? And if we are not confident, that opens the door for a much broader range of interventions — not only to stabilize markets and maintain demand, but to affirmatively direct society’s resources in better ways than private finance would do on its own.

In the past decade, this subversive thought has shown up in some surprisingly prominent places. Wearing his policy rather than his theory hat, Paul Krugman sees

… a broader rationale for policy activism than most macroeconomists—even self-proclaimed Keynesians—have generally offered in recent decades. Most of them… have seen the role for policy as pretty much limited to stabilizing aggregate demand. … Once we admit that there can be big asset mispricing, however, the case for intervention becomes much stronger… There is more potential for and power in [government] intervention than was dreamed of in efficient-market models.

From another direction, the notion that macroeconomic policy does not involve conflicting interests has become harder to sustain as inflation, employment, output and asset prices have followed diverging paths. A central plank of the pre-2008 consensus was the aptly named “divine coincidence,” in which the same level of demand would fortuitously and simultaneously lead to full employment, low and stable inflation, and production at the economy’s potential. Operationally, this was embodied in the “NAIRU” — the level of unemployment below which, supposedly, inflation would begin to rise without limit.
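
In its usual accelerationist form (a textbook sketch, not any particular central bank’s specification), the NAIRU is the u^* in a Phillips curve like

\pi_t = \pi_{t-1} - \alpha \left( u_t - u^* \right), \qquad \alpha > 0

where \pi_t is inflation and u_t the unemployment rate: hold unemployment below u^* and inflation ratchets up year after year; hold it above and inflation falls. On this view, steering demand to keep u_t at u^* is supposed to deliver stable inflation and full employment at the same time.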

Over the past decade, as estimates of the NAIRU have fluctuated almost as much as the unemployment rate itself, it’s become clear that the NAIRU is too unstable and hard to measure to serve as a guide for policy, if it exists at all. It is striking to see someone as prominent as Olivier Blanchard, until recently the IMF’s chief economist, write (in 2016) that “the US economy is far from satisfying the ‘divine coincidence’,” meaning that stabilizing inflation and minimizing unemployment are two distinct goals. But if there’s no clear link between unemployment and inflation, it’s not clear why central banks should worry about low unemployment at all, or how they should trade off the risks of prices rising undesirably fast against the risk of too-high unemployment. With surprising frankness, high officials at the Fed and other central banks have acknowledged that they simply don’t know what the link between unemployment and inflation looks like today.

To make matters worse, a number of prominent figures — most vocally at the Bank for International Settlements — have argued that we should not be concerned only with conventional price inflation, but also with the behavior of asset prices, such as stocks or real estate. This “financial stability” mandate, if it is accepted, gives central banks yet another mission. The more outcomes central banks are responsible for, and the less confident we are that they all go together, the harder it is to treat central banks as somehow apolitical, as not subject to the same interplay of interests as the rest of the state.

Given the strategic role occupied by central banks in both modern capitalist economies and economic theory, this rethinking has the potential to lead in some radical directions. How far it will actually do so, of course, remains to be seen. Accounts of the Fed’s most recent conclave in Jackson Hole, Wyoming suggest a sense of “mission accomplished” and a desire to get back to the comfortable pieties of the past. Meanwhile, in Europe, the collapse of the intellectual rationale for central banks has been accompanied by the development of the most powerful central bank-ocracy the world has yet seen. So far the European Central Bank has not let its lack of democratic mandate stop it from making coercive intrusions into the domestic policies of its member states, or from serving as the enforcement arm of Europe’s creditors against recalcitrant debtors like Greece.

One thing we can say for sure: Any future crisis will bring the contradictions of central banks’ role as capitalism’s central planners into even sharper relief.

***

Many critics were disappointed that the crisis of 2008 did not lead to an intellectual revolution on the scale of the 1930s. It’s true that it didn’t. But the image of stasis you’d get from looking at the top journals and textbooks isn’t the whole picture — the most interesting conversations are happening somewhere else. For a generation, leftists in economics have struggled to change the profession, some by launching attacks (often well aimed, but ignored) from the outside, others by trying to make radical ideas parsable in the orthodox language. One lesson of the past decade is that both groups got it backward.

Keynes famously wrote that “Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” It’s a good line. But in recent years the relationship seems to have been more the other way round. If we want to change the economics profession, we need to start changing the world. Economics will follow.

Thanks to Arjun Jayadev, Ethan Kaplan, Mike Konczal and Suresh Naidu for helpful suggestions and comments.

Links for May 25, 2016

Deliberately. The IMF has released its new Debt Sustainability Analysis for Greece. Frances Coppola has the details, and they are something. Per the IMF,

Demographic projections suggest that working age population will decline by about 10 percentage points by 2060. At the same time, Greece will continue to struggle with high unemployment rates for decades to come. Its current unemployment rate is around 25 percent, the highest in the OECD, and after seven years of recession, its structural component is estimated at around 20 percent. Consequently, it will take significant time for unemployment to come down. Staff expects it to reach 18 percent by 2022, 12 percent by 2040, and 6 percent only by 2060.

Frances adds:

For Greece’s young people currently out of work, that is all of their working life. A whole generation will have been consigned to the scrapheap. …

The truth is that seven years of recession has wrecked the Greek economy. It is no longer capable of generating enough jobs to employ its population. The IMF estimates that even in good times, 20 percent of adults would remain unemployed. To generate the jobs that are needed there will have to be large numbers of new businesses, perhaps even whole new industries. Developing such extensive new productive capacity takes time and requires substantial investment – and Greece is not the most attractive of investment prospects. Absent something akin to a Marshall Plan, it will take many, many years to repair the damage deliberately inflicted on Greece by European authorities and the IMF in order to bail out the European banking system.

For some reason, that reminds me of this. Good times.

Also, here’s the Economist, back in 2006:

The core countries of Europe are not ready to make the economic reforms they so desperately need—and that will change, alas, only after a diabolic economic crisis. … The sad truth is that voters are not yet ready to swallow the nasty medicine of change. Reform is always painful. And there are too many cosseted insiders—those with secure jobs, those in the public sector—who see little to gain and much to lose. … One reason for believing that reform can happen … is that other European countries have shown the way. Britain faced economic and social meltdown in 1979; there followed a decade of Thatcherite reform. … The real problem, not just for Italy and France but also for Germany, is that, so far, life has continued to be too good for too many people.

I bet they’re pretty pleased right now.

 

 

Polanyism. At Dissent, Mike Konczal and Patrick Iber have a very nice introduction to Karl Polanyi. One thing I like about this piece is that they present Polanyi as a sort of theoretical back-formation for the Sanders campaign.

The vast majority of Sanders’s supporters … are, probably without knowing it, secret followers of Karl Polanyi. …

One of the divides within the Democratic primary between Bernie Sanders and Hillary Clinton has been between a social-democratic and a “progressive” but market-friendly vision of addressing social problems. Take, for example, health care. Sanders proposes a single-payer system in which the government pays for health care directly, and he frames it explicitly in the language of rights: “healthcare is a human right and should be guaranteed to all Americans regardless of wealth or income.” … Sanders offers a straightforward defense of decommodification—the idea that some things do not belong in the marketplace—that is at odds with the kind of politics that the leadership of the Democratic Party has offered … Polanyi’s particular definition of socialism sounds like one Sanders would share.

 

Obamacare and the insurers. On the subject of health care and decommodification, I liked James Kwak’s piece on Obamacare.

The dirty not-so-secret of Obamacare … is that sometimes the things we don’t like about market outcomes aren’t market failures—they are exactly what markets are supposed to do. …  at the end of the day, Obamacare is based on the idea that competition is good, but tries to prevent insurers from competing on all significant dimensions except the one that the government is better at anyway. We shouldn’t be surprised when insurance policies get worse and health care costs continue to rise.

It’s too bad so many intra-Democratic policy debates are conducted in terms of the radical-incremental binary, which isn’t really meaningful. You can do more or less of anything. It would be better to focus on this non-market versus market question.

In this context, I wish there’d been some discussion in the campaign of New York’s new universal pre-kindergarten, which is a great example of incremental decommodification in practice. Admittedly I’m a bit biased — I live in New York, and my son will be starting pre-K next year. Still: Here’s an example of a social need being addressed not through vouchers, or tax credits, or with means tests, but through a universal public service, provided — not entirely, but mainly and increasingly — by public employees. Why isn’t this a model?

 

The prehistory of the economics profession. I really liked this long piece by Marshall Steinbaum and Bernard Weisberger on the early history of the American Economic Association. The takeaway is that the AEA’s early history was surprisingly radical, both intellectually and in its self-conception as part of a larger political project. (Another good discussion of this is in Michael Perelman’s Railroading Economics.) This is history more people should know, and Steinbaum and Weisberger tell it well. I also agree with their conclusion:

That [the economics profession] abandoned “advocacy” under the banner of “objectivity” only raises the question of what that distinction really means in practice. Perhaps actual objectivity does not require that the scholar noisily disclaim advocacy. It may, in fact, require the opposite.

The more I struggle with this stuff, the more I think this is right. A field or discipline needs its internal standards to distinguish valid or well-supported claims from invalid or poorly supported ones. But evaluations of relevance, importance, and correspondence to the relevant features of reality can never be made on the basis of internal criteria. They require the standpoint of some outside commitment, some engagement with the concrete reality you are studying distinct from your formal representations of it. Of course that engagement doesn’t have to be political. Hyman Minsky’s work for the Mark Twain Bank in Missouri, for example, played an equivalent role; and as Perry Mehrling observes in his wonderful essay on Minsky, “It is significant that the fullest statement of his business cycle theory was published by the Joint Economic Committee of the U.S. Congress.” But it has to be something. In economics, I think, even more than in other fields, the best scholarship is not going to come from people who are only scholars.

 

Negative rates, so what. Here’s a sensible look at the modest real-world impact of negative rates from Brian Romanchuk. It’s always interesting to see how these things look from the point of view of market participants. The importance of a negative policy rate has nothing to do with the terms on which present consumption trades off against future consumption; it’s about one component of the return on some assets relative to others.

 

I’m number 55. Someone made a list of the top 100 economics blogs, and put me on it. That was nice.

Alvin Hansen on Monetary Policy

The more you read in the history of macroeconomics and monetary theory, the more you find that current debates are reprises of arguments from 50, 100 or 200 years ago.

I’ve just been reading Perry Mehrling’s The Money Interest and the Public Interest, which is one of the two best books I know of on this subject. (The other is Arie Arnon’s Monetary Theory and Policy Since David Hume and Adam Smith.) About a third of the book is devoted to Alvin Hansen, and it inspired me to look up some of Hansen’s writings from the 1940s and 50s. I was especially struck by this 1955 article on monetary policy. It not only anticipates much of the current discussion of monetary policy — quantitative easing, the maturity structure of public debt, the need for coordination between fiscal and monetary policy, and more broadly, the limits of a single interest rate instrument as a tool of macroeconomic management — but mostly takes them for granted as starting points for its analysis. It’s hard not to feel that macro policy debates have regressed over the past 60 years.

The context of the argument is the Treasury-Federal Reserve Accord of 1951, following which the Fed was no longer committed to maintaining fixed rates on treasury bonds of various maturities. [1] The freeing of the Fed from the overriding responsibility of stabilizing the market for government debt led to scholarly and political debates about the new role for monetary policy. In this article, Hansen is responding to several years of legislative debate on this question, most recently the 1954 Senate hearings, which included testimony from the Treasury department, the Fed Board’s Open Market Committee, and the New York Fed.

Hansen begins by expressing relief that none of the testimony raised

the phony question whether or not the government securities market is “free.” A central bank cannot perform its functions without powerfully affecting the prices of government securities.

He then expresses what he sees as the consensus view that the main object of monetary policy is the quantity of credit, as opposed to either the quantity of money (a non-issue) or the price of credit, that is, the interest rate (a real but secondary issue).

Perhaps we could all agree that (however important other issues may be) control of the credit base is the gist of monetary management. Wise management, as I see it, should ensure adequate liquidity in the usual case, and moderate monetary restraint (employed in conjunction with other more powerful measures) when needed to check inflation. No doubt others, who see no danger in rather violent fluctuations in interest rates (entailing also violent fluctuations in capital values), would put it differently. But at any rate there is agreement, I take it, that the central bank should create a generous dose of liquidity when resources are not fully employed. From this standpoint the volume of reserves is of primary importance.

Given that the interest rate is also an object of policy, the question becomes, which interest rate?

The question has to be raised: where should the central bank enter the market – short-term only, or all along the gamut of maturities?

I don’t believe this is a question that economists asked much in the decades before the Great Recession. In most macro models I’m familiar with, there is simply “the interest rate,” with the implicit assumption that the whole rate structure moves together so it doesn’t matter which specific rate the monetary authority targets. For Hansen, by contrast, the structure of interest rates — the term and “risk” premiums — is just as natural an object for policy as the overall level of rates. And since there is no assumption that the whole structure moves together, it makes a difference which particular rate(s) the central bank targets. What’s even more striking is that Hansen not only believes that it matters which rate the central bank targets, he is taking part in a conversation where this belief is shared on all sides.

Obviously it would make little difference what maturities were purchased or sold if any change in the volume of reserve money influenced merely the level of interest rates, leaving the internal structure of rates unaffected. … In the controversy here under discussion, the Board leans toward the view that … new impulses in the short market transmit themselves rapidly to the longer maturities. The New York Reserve Bank officials, on the contrary, lean toward the view that the lags are important. If there were no lags whatever, it would make no difference what maturities were dealt in. But of course the Board does not hold that there are no lags.

Not even the most conservative pole of the 1950s debate goes as far as today’s New Keynesian orthodoxy that monetary policy can be safely reduced to the setting of a single overnight interest rate.

The direct targeting of long rates is the essential innovation of so-called quantitative easing. [2] But to Hansen, the idea that interest rate policy should directly target long as well as short rates was obvious. More than that: As Hansen points out, the same point was made by Keynes 20 years earlier.

If the central bank limits itself to the short market, and if the lags are serious, the mere creation of large reserves may not lower the long-term rate. Keynes had this in mind when he wrote: “Perhaps a complex offer by the central bank to buy and sell at stated prices gilt-edged bonds of all maturities, in place of the single bank rate for short-term bills, is the most important practical improvement that can be made in the technique of monetary management. . . . The monetary authority often tends in practice to concentrate upon short-term debts and to leave the price of long-term debts to be influenced by belated and imperfect reactions from the price of short-term debts.” Keynes, it should be added, wanted the central bank to deal not only in debts of all maturities, but also “to deal in debts of varying degrees of risk,” i.e., high grade private securities and perhaps state and local issues.

That’s a quote from The General Theory, with Hansen’s gloss.

Fast-forward to 2014. Today we find Benjamin Friedman — one of the smartest and most interesting orthodox economists on these issues — arguing that the one great change in central bank practices in the wake of the Great Recession is intervention in a range of securities beyond the shortest-term government debt. As far as I can tell, he has no idea that this “profound” innovation in the practice of monetary policy was already proposed by Keynes in 1936. But then, as Friedman rightly notes, “Macroeconomics is a field in which theory lags behind experience and practice, not the other way around.”

Even more interesting, the importance of the rate structure as a tool of macroeconomic policy was recognized not only by the Federal Reserve, but by the Treasury in its management of debt issues. Hansen continues:

Monetary policy can operate on two planes: (1) controlling the credit base – the volume of reserve balances – and (2) changing the interest rate structure. The Federal Reserve has now backed away from the second. The Treasury emphasized in these hearings that this is its special bailiwick. It supports, so it asserts, the System’s lead, by issuing short-terms or long-terms, as the case may be, according to whether the Federal Reserve is trying to expand or contract credit … it appears that we now have (whether by accident or design) a division of monetary management between the two agencies – a sort of informal cartel arrangement. The Federal Reserve limits itself to control of the volume of credit by operating exclusively in the short end of the market. The Treasury shifts from short-term to long-term issues when monetary restraint is called for, and back to short-term issues when expansion is desired.

This is amazing. It’s not that Keynesians like Hansen propose that Treasury should issue longer or shorter debt based on macroeconomic conditions. Rather, it is taken for granted that the Treasury does choose maturities this way. And this is the conservative side in the debate, opposed to the side that says the central bank should manage the term structure directly.

Many Slackwire readers will have recently encountered the idea that the maturities of new debt should be evaluated as a kind of monetary policy. It’s on offer as the latest evidence for the genius of Larry Summers. Proposing that Treasury should issue short or long term debt based on goals for the overall term structure of interest rates, and not just on minimizing federal borrowing costs, is the main point of Summers’ new Brookings paper, which has attracted its fair share of attention in the business press. No reader of that paper would guess that its big new idea was a commonplace of policy debates in the 1950s. [3]

Hansen goes on to raise some highly prescient concerns about the exaggerated claims being made for narrow monetary policy.

The Reserve authorities are far too eager to claim undue credit for the stability of prices which we have enjoyed since 1951. The position taken by the Board is not without danger, since Congress might well draw the conclusion that if monetary policy is indeed as powerful as indicated, nonmonetary measures [i.e. fiscal policy and price controls] are either unnecessary or may be drawn upon lightly.

This is indeed the conclusion that was drawn, more comprehensively than Hansen feared. The idea that setting an overnight interest rate is always sufficient to hold demand at the desired level has conquered the economics profession “as completely as the Holy Inquisition conquered Spain,” to coin a phrase. If you talk to a smart young macroeconomist today, you’ll find that the terms “aggregate demand was too low” and “the central bank set the interest rate too high” are used interchangeably. And if you ask, which interest rate?, they react the way a physicist might if you asked, the mass of which electron?

Faced with the argument that the inflation of the late 1940s, and price stability of the early 1950s, was due to bad and good interest rate policy respectively, Hansen offers an alternative view:

I am especially unhappy about the implication that the price stability which we have enjoyed since February-March 1951 (and which everyone is justifiably happy about) could quite easily have been purchased for the entire postwar period (1945 to the present) had we only adopted the famous accord earlier … The postwar cut in individual taxes and the removal of price, wage, and other controls in 1946 … did away once and for all with any really effective restraint on consumers. Under these circumstances the prevention of price inflation … [meant] restraint on investment. … Is it really credible that a drastic curtailment of investment would have been tolerated any more than the continuation of wartime taxation and controls? … In the final analysis, of course, the then prevailing excess of demand was confronted with a limited supply of productive resources.

Inflation always comes down to this mismatch between “demand,” i.e. desired expenditure, and productive capacity.

Now we might say in response to such mismatches: Well, attempts to purchase more than we can produce will encourage increased capacity, and inflation is just a temporary transitional cost. Alternatively, we might seek to limit spending in various ways. In this second case, there is no difference of principle between an engineered rise in the interest rate, and direct controls on prices or spending. It is just a question of which particular categories of spending you want to hold down.

The point: Eighty years ago, Keynes suggested that what today is called quantitative easing should be a routine tool of monetary policy. Sixty years ago, Alvin Hansen believed that this insight had been accepted by all sides in macroeconomic debates, and that the importance of the term structure for macroeconomic activity guided the debt-issuance policies of Treasury as well as the market interventions of the Federal Reserve. Today, these seem like new discoveries. As the man says, the history of macroeconomics is mostly a great forgetting.

[1] I was surprised by how minimal the Wikipedia entry is. One of these days, I am going to start having students improve economics Wikipedia pages as a class assignment.

[2] What is “quantitative” about this policy is that the Fed buys a quantity of bonds, evidently in the hopes of forcing their price up, but does not announce an explicit target for the price. On the face of it, this is a strangely inefficient way to go about things. If the Fed announced a target for, say, 10-year Treasury bonds, it would have to buy far fewer of them — maybe none — since market expectations would do more of the work of moving the price. Why the Fed has hobbled itself in this way is a topic for another post.

[3] I am not the world’s biggest Larry Summers fan, to say the least. But I worry I’m giving him too hard a time in this case. Even if the argument of the paper is less original than it’s made out to be, it’s still correct, it’s still important, and it’s still missing from today’s policy debates. He and his coauthors have made a real contribution here. I also appreciate the Hansenian spirit in which Summers derides his opponents as “central bank independence freaks.”