The Slack Wire

Demand and Competitiveness: Germany and the EU

I put up a post the other day about Enno Schröder’s excellent work on accounting for changes in trade flows. Based on the comments, there’s some confusion about the methodology. That’s not surprising: It’s not complicated, but it’s also not a familiar way of looking at this stuff, either within or outside the economics profession. Maybe a numerical example will help?

Let’s consider two trading partners, in this case Germany and the rest of the EU. (Among other things, having just two partners avoids the whole weighting issue.) The first line of the table below shows total demand in each — that is, all private consumption, government consumption, and investment — in billions of euros. (As usual, this is final demand — transfers and intermediate goods are excluded.) So, for instance, in the year 2000 all spending by households, firms and governments in Germany totaled 2.04 trillion euros. The next two lines show the part of that expenditure that went to imports — from the rest of the EU for Germany, from Germany for the rest of the EU, and from the rest of the world for both. The final two lines of each panel then show the share of total expenditure in each place that went to German and rest-of-EU goods respectively. The table looks at 2000 and 2009, a period of growing surpluses for Germany.

                                                  2000      2009
Germany
  Total demand (billions of euros)               2,041     2,258
  Imports from rest of EU                          340       429
  Imports from rest of world                       198       235
  Share of spending on German goods                74%       71%
  Share of spending on rest-of-EU goods            17%       19%
EU ex-Germany
  Total demand (billions of euros)               9,179    11,633
  Imports from Germany                             387       501
  Imports from rest of world                       795       998
  Share of spending on German goods                 4%        4%
  Share of spending on rest-of-EU goods            87%       87%
Ratio, German exports to EU / imports from EU     1.14      1.17
German surplus with EU, percent of German GDP     2.27      3.02

So what do we see? In 2000, 74 cents out of every euro spent in Germany went for German goods and services, and 17 cents for goods and services from the rest of the EU. Nine years later, 71 cents out of each German euro went to German stuff, and 19 cents to stuff from the rest of the EU. German households, businesses and government agencies were buying more from the rest of Europe, and less from their own country. Meanwhile, the rest of Europe was spending 4 cents out of every euro on goods and services from Germany — exactly the same fraction in 2009 as in 2000.

If Germans were buying more from the rest of the EU, and non-German Europeans were buying the same amount from Germany, how could it be that the German trade surplus with the rest of Europe increased? And by nearly one percent of German GDP, a significant amount? The answer is that total expenditure was rising much faster in the rest of Europe — by 2.7 percent a year, compared with 1.1 percent a year in Germany. This is what it means to say that the growing German surplus is entirely accounted for by demand, and that Germany actually lost competitiveness over this period.
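
If you want to check the arithmetic yourself, here is a minimal Python sketch using just the numbers in the table above. (The sketch expresses the surplus as a share of total German demand rather than GDP, so the last figures come out slightly different from the table’s 2.27 and 3.02 percent.)

```python
# Back-of-the-envelope check of the table above, using the Eurostat figures
# quoted in the post (billions of euros). Demand stands in for GDP here, so
# the surplus ratios differ a little from the table.

data = {
    2000: {"de_demand": 2041, "de_imports_eu": 340,
           "eu_demand": 9179, "eu_imports_de": 387},
    2009: {"de_demand": 2258, "de_imports_eu": 429,
           "eu_demand": 11633, "eu_imports_de": 501},
}

for year, d in data.items():
    m_star = d["eu_imports_de"] / d["eu_demand"]   # share of rest-of-EU spending on German goods
    m = d["de_imports_eu"] / d["de_demand"]        # share of German spending on rest-of-EU goods
    surplus = d["eu_imports_de"] - d["de_imports_eu"]
    print(f"{year}: m* = {m_star:.3f}, m = {m:.3f}, surplus = {surplus} "
          f"({100 * surplus / d['de_demand']:.2f}% of German demand)")

# Average annual growth of total expenditure, 2000-2009
years = 9
g_de = (data[2009]["de_demand"] / data[2000]["de_demand"]) ** (1 / years) - 1
g_eu = (data[2009]["eu_demand"] / data[2000]["eu_demand"]) ** (1 / years) - 1
print(f"Expenditure growth: Germany {g_de:.1%}/yr, rest of EU {g_eu:.1%}/yr")
```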

Again, these are not estimates, they are the actual numbers as reported by Eurostat. It is simply a matter of historical fact that Germans spent more of their income on goods from the rest of the EU, and less on German goods, in 2009 than in 2000, and that the rest of the EU spent the same fraction of its income on German goods in the two years. Obviously, this does not rule out the possibility that German goods were becoming cheaper relative to the rest of Europe’s, if you postulate some other factor that would have reduced Germany’s exports without a growing cost advantage. (This is not so easy, since Germany’s exports are the sort of high-end manufactures which usually have a high income elasticity, i.e. for which demand rises more than proportionately as income grows.) And it is also compatible with a story where German export prices fell, but export demand is price-inelastic, so that lower prices did nothing to raise export earnings. But it is absolutely not compatible with a simple story where the most important driver of German trade imbalances is changing relative prices. For that story to work, the main factor in Germany’s growing surpluses would have to have been expenditure switching from other countries’ goods to Germany’s. And that didn’t happen.

NOTE: This is my table, not Enno’s. The data is from Eurostat, while he uses the Penn World Tables, and he does not look at intra-European trade specifically.

UPDATE: There’s another question, which no one asked but which you should always try to answer: Why does it matter? The truth is, a big reason I care about this is that I’m curious how capitalist economies work, and this stuff seems to shed some light on that, in terms of both the specific content and the methodology. But more specifically:

First, seeing trade flows as driven by income as well as price fits better with a vision of the economy that has many different possible states of rest. It fits better with a vision of economies evolving in historical time, rather than gravitating toward an equilibrium which is both natural and optimal. In this particular case, there is no reason to suppose that the relative growth rates consistent with full employment in each country are also the relative growth rates consistent with balanced trade. A world in which trade flows respond mainly to relative prices is a world where macropolicy doesn’t pose any fundamentally different challenges in an open economy than in a closed one. Whatever mechanisms operated to ensure full employment continue to do so, and then the exchange rate adjusts to keep trade flows balanced (or appropriately unbalanced, for a country with a good reason to export or import capital). Whereas when the main relationship is between income and trade, the growth rates consistent with full employment and the growth rates consistent with balanced trade cannot be chosen independently.

Second, there are important implications for policy. Krugman keeps saying that Germany needs higher relative prices, i.e., higher inflation. Even leaving aside the political difficulties with such a program, it makes sense on its own terms only if there is a fixed pool of European demand. To say that the only way you can have an adequate level of demand in Greece is for prices to fall relative to Germany, is to accept, on a European or global level, the structural theory of unemployment that Krugman rejects so firmly (and rightly) for the US. By contrast, if competitiveness didn’t cause the problem, we shouldn’t assume competitiveness is involved in the solution. The historical evidence suggests that more rapid income growth in Germany will be sufficient to move its current account back to balance. The implications for domestic demand in Germany are the opposite of those in the relative-prices case: Fixing the current account problem means more jobs and orders for German workers and firms, not higher inflation in Germany. [1]

So if you buy this story, you should be more pessimistic about a Greek exit from the euro — since there’s less reason to think that flexible exchange rates will lead to balanced trade — but more optimistic about a solution within the euro.

I don’t understand why, for economists like Krugman and Dean Baker, Keynesianism always seems to stop at the water’s edge. Why does their analysis of international trade always implicitly [2] assume a world economy continually at full capacity, where a demand shortfall in one country or region implies excess demand somewhere else? They know perfectly well that the question of unemployment in one country cannot be reduced to the question of who is getting paid too much; why do they forget it as soon as exchange rates come into the picture? Perhaps it’s for the same reasons — whatever they are — that so many economists who support all kinds of domestic regulation are ardent supporters of free trade, even though that’s just laissez-faire at the global level. In the particular case of Krugman, I think part of the problem is that his own scholarly work is in trade. So when the conversation turns to trade he loses one of the biggest assets he brings to discussions of domestic policy — a willingness to forget all the “progress” in economic theory over the past 30 or 40 years.

[1] A more reasonable version of the higher-prices-in-Germany claim is that Germany must be willing to accept higher inflation in order to raise demand. In some times and places this could certainly be true. But I don’t think it is for Germany, given the evident slack in labor markets implied by stagnant wages. And in any case that’s not what Krugman is saying — for him, higher inflation is the solution, not an unfortunate side effect.

[2] Or sometimes explicitly — e.g. this post has Germany sitting on a vertical aggregate supply curve.

What Drives Trade Flows? Mostly Demand, Not Prices

I just participated (for the last time, thank god) in the UMass-New School economics graduate student conference, which left me feeling pretty good about the next generation of heterodox economists. [1] A bunch of good stuff was presented, but for my money, the best and most important work was Enno Schröder’s: “Aggregate Demand (Not Competitiveness) Caused the German Trade Surplus and the U.S. Deficit.” Unfortunately, the paper is not yet online — I’ll link to it the moment it is — but here are his slides.

The starting point of his analysis is that, as a matter of accounting, we can write the ratio of a country’s exports to imports as:

X/M = (m*/m) (D*/D)

where X and M are export and import volumes, m* is the fraction of foreign expenditure spent on the home country’s goods, m is the fraction of home expenditure spent on foreign goods, and D* and D are total foreign and home expenditure.

This is true by definition. But the advantage of thinking of trade flows this way is that it allows us to separate the changes in trade attributable to expenditure switching (including, of course, the effect of relative price changes) from the changes attributable to different growth rates of expenditure. In other words, it lets us distinguish the changes in trade flows that are due to changes in how each dollar is spent in a given country from the changes that are due to changes in the distribution of dollars across countries.

(These look similar to price and income elasticities, but they are not the same. Elasticities are estimated, while this is an accounting decomposition. And changes in m and m*, in this framework, capture all factors that lead to a shift in the import share of expenditure, not just relative prices.)

The heart of the paper is an exercise in historical accounting, decomposing changes in trade ratios into m*/m and D*/D. We can think of these as counterfactual exercises: How would trade look if growth rates were all equal, and each country’s distribution of spending across countries evolved as it did historically; and how would trade look if each country had had a constant distribution of spending across countries, and growth rates were what they were historically? The second question is roughly equivalent to: How much of the change in trade flows could we predict if we knew expenditure growth rates for each country and nothing else?
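
To make the decomposition concrete, here is a small sketch of how the two counterfactuals can be computed from the identity above. The series here are made up for illustration, and holding the shares (or relative expenditure) fixed at their period averages is one choice among several; the paper’s exact procedure may differ.

```python
import numpy as np

def decompose_trade_ratio(m_star, m, D_star, D):
    """Decompose X/M = (m*/m) * (D*/D) into two counterfactual paths.
    All arguments are arrays indexed by year."""
    actual = (m_star / m) * (D_star / D)
    # Counterfactual 1: expenditure shares fixed at their period averages,
    # so only relative expenditure growth moves the trade ratio.
    demand_only = (m_star.mean() / m.mean()) * (D_star / D)
    # Counterfactual 2: relative expenditure fixed at its period average,
    # so only expenditure switching ("competitiveness") moves the ratio.
    switching_only = (m_star / m) * (D_star / D).mean()
    return actual, demand_only, switching_only

# Toy illustration: home expenditure grows more slowly than foreign, while
# the share of foreign spending falling on home goods drifts down slightly.
years = np.arange(2000, 2010)
D = 2000.0 * 1.011 ** (years - 2000)                 # home expenditure
D_star = 9000.0 * 1.027 ** (years - 2000)            # foreign expenditure
m = np.full(len(years), 0.17)                        # home import share
m_star = np.linspace(0.043, 0.041, len(years))       # foreign share on home goods

actual, demand_only, switching_only = decompose_trade_ratio(m_star, m, D_star, D)
print(np.round(actual, 3))           # what actually happens to X/M
print(np.round(demand_only, 3))      # rising: demand alone pushes toward surplus
print(np.round(switching_only, 3))   # falling: switching alone pushes toward deficit
```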

The key results are in the figure below. Look particularly at Germany, in the middle right of the first panel:

The dotted line is the actual ratio of exports to imports. Since Germany has recently had a trade surplus, the line lies above one — over the past decade, German exports have exceeded German imports by about 10 percent. The dark black line is the counterfactual ratio if the division of each country’s expenditures among various countries’ goods had remained fixed at their average level over the whole period. When the dark black line is falling, that indicates a country growing more rapidly than the countries it exports to; with the share of expenditure on imports fixed, higher income means more imports and a trade balance moving toward deficit. Similarly, when the black line is rising, that indicates a country’s total expenditure growing more slowly than expenditure in its export markets, as was the case for Germany from the early 1990s until 2008. The light gray line is the other counterfactual — the path trade would have followed if all countries had grown at an equal rate, so that trade depended only on changes in competitiveness. When the dotted line and the heavy black line move more or less together, we can say that shifts in trade are mostly a matter of aggregate demand; when the dotted line and the gray line move together, mostly a matter of competitiveness (which, again, includes all factors that cause people to shift expenditure between different countries’ goods, including but not limited to exchange rates.)

The point here is that if you only knew the growth of income in Germany and its trade partners, and nothing at all about German wages or productivity, you could fully explain the German trade surplus of the past decade. In fact, based on income growth alone you would predict an even larger surplus; the fraction of the world’s dollars falling on German goods actually fell. Or as Enno puts it: During the period of the German export boom, Germany became less, not more, competitive. [2] The cases of Spain, Portugal and Greece (tho not Italy) are symmetrical: Despite the supposed loss of price competitiveness they experienced under the euro, the share of expenditure falling on these countries’ goods and services actually rose during the periods when their trade balances worsened; their growing deficits were entirely a product of income growth more rapid than their trade partners’.

These are tremendously important results. In my opinion, they are fatal to the claim (advanced by Krugman among others) that the root of the European crisis is the inability to adjust exchange rates, and that a devaluation in the periphery would be sufficient to restore balanced trade. (It is important to remember, in this context, that southern Europe was running trade deficits for many years before the establishment of the euro.) They also imply a strong criticism of free trade. If trade flows depend mostly or entirely on relative income, and if large trade imbalances are unsustainable for most countries, then relative growth rates are going to be constrained by import shares, which means that most countries are going to grow below their potential. (This is similar to the old balance-of-payments constrained growth argument.) But the key point, as Enno stresses, is that both the “left” argument about low German wage growth and the “right” argument about high German productivity growth are irrelevant to the historical development of German export surpluses. Slower income growth in Germany than its trade partners explains the whole story.

I really like the substantive argument of this paper. But I love the methodology. There is an econometrics section, which is interesting (among other things, he finds that the Marshall-Lerner condition is not satisfied for Germany, another blow to the relative-prices story of the euro crisis.) But the main conclusions of the paper don’t depend in any way on it. In fact, the thing can be seen as an example of an alternative methodology to econometrics for empirical economics: historical accounting, or decomposition analysis. This is the same basic approach that Arjun Jayadev and I take in our paper on household debt, and which has long been used to analyze the historical evolution of public debt. Another interesting application of this kind of historical accounting: the decomposition of changes in the profit rate into the effects of the profit share, the utilization rate, and the technologically-determined capital-output ratio, an approach pioneered by Thomas Weisskopf and developed by others, including Ed Wolff, Erdogan Bakir, and my teacher David Kotz.

People often say that these accounting exercises can’t be used to establish claims about causality. And strictly speaking this is true, though they certainly can be used to reject certain causal stories. But that’s true of econometrics too. It’s worth taking a step back and remembering that no matter how fancy our econometrics, all we are ever doing with those techniques is describing the characteristics of a matrix. We have the observations we have, and all we can do is try to summarize the relationships between them in some useful way. When we make causal claims using econometrics, it’s by treating the matrix as if it were drawn from some stable underlying probability distribution function (pdf). One of the great things about these decomposition exercises — or about other empirical techniques, like principal component analysis — is that they limit themselves to describing the actual data. In many cases — lots of labor economics, for instance — the fiction of a stable underlying pdf is perfectly reasonable. But in other cases — including, I think, almost all interesting questions in macroeconomics — the conventional econometrics approach is a bit like asking, If a whale were the top of an island, what would the underlying geology look like? It’s certainly possible to come up with an answer to that question. But it is probably not the simplest way of describing the shape of the whale.

[1] A perennial question at these things is whether we should continue identifying ourselves as “heterodox,” or just say we’re doing economics. Personally, I’ll be happy to give up the distinct heterodox identity just as soon as economists are willing to give up their distinct identity and dissolve into the larger population of social scientists, or of guys with opinions.

[2] The results for the US are symmetrical with those for Germany: the growing US trade deficit since 1990 is fully explained by more rapid US income growth relative to its trade partners. But it’s worth noting that China is not: Knowing only China’s relative income growth, which has been of course very high, you would predict that China would be moving toward trade deficits, when in fact it has been moving toward surplus. This is consistent with a story that explains China’s trade surpluses by an undervalued currency, tho it is consistent with other stories as well.

In Comments: The Lessons of Fukushima

I’d like to promise a more regular posting schedule here. On the other hand, seeing as I’m on the academic job market this fall (anybody want to hire a radical economist?) I really shouldn’t be blogging at all. So on balance we’ll probably just keep staggering along as usual.

But! In lieu of new posts, I really strongly recommend checking out the epic comment thread on Will Boisvert’s recent post on the lessons of Fukushima. It’s well over 100 comments, which, while no big deal for real blogs, is off the charts here. But more importantly, they’re almost all serious & thoughtful, and a number evidently come from people with real expertise in nuclear and/or alternative energy. Which just goes to show: If you bring data and logic to the conversation, people will respond in kind.

You Eat Mitt Romney’s Salt

Don’t you love the Romney video? I’m not going to deny it, right now I am with Team Dem. It’s true, we usually say “the bosses have two parties”; but it’s not usual for them to run for office themselves. And when they do, wow, what a window onto how they really think.

It’s hard to even imagine the mindset where the person sitting in the back of the town car is the “maker” and the person upfront driving is just lazing around; where the guys maintaining the hedges and manning the security gates at the mansion are idle parasites, while the person living in it, just by virtue of that fact, is working; where the person who owns the dressage horse is the producer and the people who groom it and feed it and muck it are the layabouts. As some on the right have pointed out, it’s weird, also, that “producing” is now equated with paying federal taxes. Isn’t working in the private sector supposed to be productive? Isn’t a successful business contributing something to society besides checks to the IRS?

It is weird. But as we’re all realizing, the 47 percent/53 percent rhetoric has a long history on the Right. (It would be interesting to explore this via the rounding-up of 46.4 percent to 47, the same way medievalists trace the dissemination of a text by the propagation of copyists’ errors.) Naturally, brother Konczal is on the case, with a great post tracing out four lineages of the 47 percent. His preferred starting point, like others’, is the Wall Street Journal‘s notorious 2002 editorial on the “lucky duckies” who pay no income tax.

That’s a key reference point, for sure. But I think this attitude goes back a bit further. The masters of mankind, it seems to me, have always cultivated a funny kind of solipsism, imagining that the people who fed and clothed and worked and fought for them, were somehow living off of them instead.

Here, as transcribed in Peter Laslett’s The World We Have Lost, is Gregory King’s 1688 “scheme” of the population of England. It’s fascinating to see the careful gradations of status (early-moderns were nothing if not attentive to “degree”); we’ll be pleased to see, for instance, that “persons in liberal arts and sciences” come above shopkeepers, though below farmers. But look below that to the “general account” at the bottom. We have 2.675 million people “increasing the wealth of the kingdom,” and 2.825 million “decreasing the wealth of the kingdom.” The latter group includes not only the vagrants, gypsies and thieves, but common seamen, soldiers, laborers, and “cottagers,” i.e. landless farmworkers. So in three centuries, the increasers are up from 49 percent to 53 percent, and the lucky duckies are down from 51 percent to 47. That’s progress, I guess.

One can’t help wondering how the wealth of the kingdom would hold up if the eminent traders by sea couldn’t find common seamen, if the farmers had to do without laborers, if there were officers but no common soldiers. 

Young Alexander conquered India.
He alone?
Caesar beat the Gauls.
Was there not even a cook in his army?

Always more where they come from, I suppose Gregory King might say.

Here, also from Laslett, is a similar division from 100 years earlier, by Sir Thomas Smith:

1. ‘The first part of the Gentlemen of England called Nobilitas Major.’ This is the nobility, or aristocracy proper.
2. ‘The second sort of Gentlemen called Nobilitas Minor.’ This is the gentry and Smith further divides it into Knights, Esquires and gentlemen.
3. ‘Citizens, Burgesses and Yeomen.’
4. ‘The fourth sort of men which do not rule.’

Of this last group, Smith explains:

The fourth sort or class amongst us is of those which the old Romans called capite sensu proletarii or operarii, day labourers, poor husbandmen, yea merchants or retailers which have no free land, copyholders, and all artificers, as tailors, shoemakers, carpenters, brick- makers, brick-layers, etc. These have no voice nor authority in our commonwealth and no account is made of them, but only to be ruled and not to rule others.

In other words, Elizabethan Mitt Romney, your job is not to worry about those people.

Smith’s contemporary Shakespeare evidently had distinctions like these in mind when he wrote Coriolanus. (A remarkably radical play; I think it was the only Shakespeare Brecht approved of.) The title character’s overriding passion is his contempt for the common people, those “geese that bear the shapes of men,” who “sit by th’ fire and presume to know what’s done i’ the Capitol.” He hates them specifically because they are, as it were, dependent, and think of themselves as victims.

They said they were an-hungry; sigh’d forth proverbs, —
That hunger broke stone walls, that dogs must eat,
That meat was made for mouths, that the gods sent not
Corn for the rich men only: — with these shreds
They vented their complainings…

He has no patience for this idea that people are entitled to enough to eat:

Would the nobility lay aside their ruth
And let me use my sword, I’d make a quarry
With thousands of these quarter’d slaves, as high
As I could pick my lance.

One more instance. Did everybody read Daniyal Mueenuddin‘s In Other Rooms, Other Wonders?

It’s a magnificent, but also profoundly conservative, work of fiction. In Mueenuddin’s world the social hierarchy is so natural, so unquestioned, that any crossing of its boundaries can only be understood as a personal, moral failing, which of course always comes at a great personal cost. There’s one phrase in particular that occurs repeatedly in the book: “They eat your salt,” “you ate his salt,” etc. The thing about this evidently routine expression is that the eater is always someone of lower status, and the person whose salt is being eaten is always a landlord or aristocrat. “Oh what could be the matter in your service? I’ve eaten your salt all my life,” says the electrician who has, in fact, spent all his life keeping the pumps going on the estates of the man he’s petitioning. Somehow, in this world, the person who sits in a mansion in Lahore or Karachi is entitled as a matter of course to all the salt and all the good things of life, and the person who physically produces the salt should be grateful to get any of it.

Mueenuddin describes this world vividly and convincingly, in part because he is a writer of great talent, but also clearly in part because he shares its essential values. Just in case we haven’t got the point, the collection’s final story, “A Spoiled Man,” is about how an old laborer’s life is ruined when his master’s naive American wife gets the idea that he deserves a paycheck and a proper place to sleep, giving him the disastrous idea that he has rights. You couldn’t write fiction like that in this country, I don’t think. Hundreds of years of popular struggle have reshaped the culture in ways that no one with the sensitivity to write good fiction could ignore. A Romney is a different story.

UPDATE: Krugman today is superb on this. (Speaking of being on Team Dem.)

Thomas Sargent Abandons Rational Expectations, Converts to Post Keynesianism

From Keynes (1937), “The General Theory of Employment”:

We have, as a rule, only the vaguest idea of any but the most direct consequences of our acts. Sometimes we are not much concerned with their remoter consequences, even though time and chance may make much of them. But sometimes we are intensely concerned with them… The whole object of the accumulation of wealth is to produce results, or potential results, at a comparatively distant, and sometimes indefinitely distant, date. Thus the fact that our knowledge of the future is fluctuating, vague and uncertain, renders wealth a peculiarly unsuitable subject for the methods of the classical economic theory. … 

By ‘uncertain’ knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty… Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of an European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

Better late than never, I suppose…

UPDATE: OK, ok, Sargent has written a lot about uncertainty and expectations.  Harry Konstantidis points out this 1972 article Rational Expectations and the Term Structure of Interest Rates, which is … really interesting, actually. It’s an empirical test of whether the behavior of interest rates over time is consistent with bond prices being set by a process of rational expectations. The conclusions are surprisingly negative:

The evidence summarized above implies that it is difficult to maintain both that only expectations determine the yield curve and that expectations are rational in the sense of efficiently incorporating available information. The predictions of the random walk version of the model are fairly decisively rejected by the data, particularly for forward rates with less than five years term to maturity. … 

It is clear that our conclusions apply with equal force to the diluted form of the expectations hypothesis that allows forward rates to be determined by expectations plus time-invariant liquidity premiums. … On the other hand, it would clearly be possible to determine a set of time-dependent “liquidity premiums” that could be used to adjust the forward rates so that the required sequences would display “white” spectral densities. … While this procedure has its merits in certain instances, it is essentially arbitrary, there being no adequate way to relate the “liquidity premiums” so derived to objective characteristics of markets… 

An alternative way to “save” the doctrine that expectations alone determine the yield curve in the face of empirical evidence like that presented above is to abandon the hypothesis that expectations are rational. Once that is done, the model becomes much freer, being capable of accommodating all sorts of ad hoc, plausible hypotheses about the formation of expectations. Yet salvaging the expectations theory in that way involves building a model of the term structure that … permits expectations to be formed via a process that could utilize available information more efficiently and so enhance profits. That seems … extremely odd.

In other words: Observed yields are inconsistent with a simple version of rational expectations. They could be made consistent, but only by trivializing the theory by adding ad hoc adjustments that could fit anything. We might conclude that expectations are not rational, but that’s too scary. So … who knows.

One unambiguous point, though, is that under rational expectations interest rates should follow a random walk, and your best prediction of interest rates on a given instrument at some future date should be the rate today. Just saying “I don’t know” is not consistent with rational expectations — for one thing, it gives you no basis for deciding whether a product like the one being advertised here is priced fairly. Sargent’s “No” is consistent, though, with fundamental uncertainty in the Keynes sense. In that case the decision to buy or not buy is based on nonrational behavioral rules — like, say, looking at endorsements of recognized authorities. (Keynes: “Knowing that our individual judgment is worthless, we endeavour to fall back on the judgment of the rest of the world which is perhaps better informed.”) So I stand by my one-liner.
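
To illustrate what the random-walk claim means (this is just an illustration of the definition, not Sargent’s test): if the short rate follows a random walk, the minimum-mean-squared-error forecast at any horizon is simply today’s rate. A quick simulation with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many paths of a short rate that follows a pure random walk,
# all starting from today's rate of 4 percent.
n_paths, horizon = 100_000, 12
r0 = 4.0
shocks = rng.normal(0.0, 0.25, size=(n_paths, horizon))
paths = r0 + shocks.cumsum(axis=1)

# The average outcome at any future date is just today's rate: under the
# random-walk hypothesis, "my best guess is the current rate" is the only
# forecast consistent with rational expectations.
print(paths[:, -1].mean())   # ~4.0
print(paths[:, -1].std())    # uncertainty grows with the horizon, ~0.25*sqrt(12)
```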

In Defense of Debt

I have a new post up at the Jacobin, responding to Mike Beggs’ critical review of David Graeber’s Debt. It’s a much longer, and hopefully more convincing, version of some arguments I was having with Mike and others over at Crooked Timber last month. Mike thinks there is no useful economics in Debt; I think that on the contrary, the book fits well with important strands of heterodox economics going back to Marx and Keynes (not to mention Schumpeter and Wicksell).

In particular, I think the historical and anthropological material in Debt helps put concrete social flesh on two key analytic points. First, that we need to think of capitalism as primarily organized around the accumulation of money, with economic decisions taken in terms of money flows and money commitments, not as a system for the mutually beneficial exchange of goods. And second, that within capitalism, we can distinguish between economies where the medium of exchange is primarily commodity or fiat money, and economies where it is primarily bank-created credit money. Textbook economic analysis tends to work strictly in terms of the former, but both kinds of economies have existed historically and they behave quite differently.

(There’s a lot more in the book than this, of course, but what I am trying to do — I don’t know how successfully — is clarify the points where Debt contributes most directly to economics debates about money and credit.)

If this sounds at all interesting, you should first read Mike’s review, if you haven’t, and then read my very long response.

… and then, you should read all the other great stuff at The Jacobin. For my money, it’s the most exciting new political journal to come along in a while.

Bring Back Butlerism

From Eric Foner’s A Short History of Reconstruction:

Even more outrageous than Tweed … was Massachusetts Congressman Benjamin F. Butler, who flamboyantly supported causes that appalled reformers such as the eight-hour day, inflation, and payment of the national debt in greenbacks. He further horrified respectable opinion by embracing women’s suffrage, Irish nationalism, and the Paris Commune.

Or, as a horrified Nation put it, Butlerism was

the embodiment in political organization of a desire for the transfer of power to the ignorant and poor, and the use of government to carry out the poor and ignorant man’s view of the nature of society.

Labor law, inflation, women’s rights, anti-imperialism, and small-c communism, not to mention government by the poor? We could use a little more of that 1870s spirit today. People on the left who want central banks to do more, in particular, could talk more about loose money’s radical pedigree.

So who was this guy? The internet is mainly interested in his Civil War career. Made a general on the basis of his pro-union, anti-slavery politics, he was, not surprisingly, pretty crap at it; but it does appear that he was the first Union officer to refuse to return fugitive slaves to their masters, and the first to successfully enlist black troops in the South. That was enough for Jefferson Davis to order that if he were captured, he should be executed on the spot. So he didn’t know how to lead a cavalry charge; sounds like a war hero to me.

In the current Jacobin (which everyone should be reading), Seth Ackerman offers emancipation and Reconstruction as a usable past for the Occupy left, unfavorably contrasting “the heavily prefigurative and antipolitical style of activism practiced by William Lloyd Garrison” with the pragmatic abolitionists who

saw that a strategic approach to abolition was required, one in which the “cause of the slave” would be harnessed to a wider set of appeals. At each stage of their project, from the Liberty Party to the Free Soil Party and finally the Republican Party, progressively broader coalitions were formed around an emerging ideology of free labor that merged antislavery principles with the economic interests of ordinary northern whites.

Today’s left, he suggests, could learn from this marriage of radical commitments and practical politics. Absolutely right.

There is, though, a problem: Reconstruction wasn’t just defeated in the South, it was abandoned by the North, largely by these same practical politicians, whose liberalism was transposed in just a few years from the key of anti-slavery to the key of “free trade, the law of supply and demand, the gold standard and limited government” (that’s Foner again), and who turned out to be less frightened by the restoration of white supremacy in the South than by “schemes for interference with property.”

If we must, as we must, “conjure up the spirits of the past …, borrowing from them names, battle slogans, and costumes in order to present this new scene in world history in time-honored disguise and borrowed language,” then certainly, we could do worse than the Civil-War era Republicans who successfully yoked liberalism to the cause of emancipation (though I’m not sure why Seth name-checks Salmon P. Chase, an early opponent of Reconstruction). But personally, I’d prefer to dress up as a populist who continued to support the rights of working people even after liberalism had decisively gone its own way, and who ended up representing “all that the liberals considered unwholesome in American politics.” Anybody for a revival of Butlerism?

Fukushima Update: How Safe Can a Nuclear Meltdown Get?

by Will Boisvert

Last summer I posted an essay here arguing that nuclear power is a lot safer than people think—about a hundred times safer than our fossil fuel-dominated power system. At the time I predicted that the impact of the March, 2011 Fukushima Daiichi nuclear plant accident in Japan would be small. A year later, now that we have a better fix on the consequences of the Fukushima meltdowns, I’ll have to revise “small” to “microscopic.” The accumulating data and scientific studies on the Fukushima accident reveal that radiation doses are and will remain low, that health effects will be minor and imperceptible, and that the traumatic evacuation itself from the area around the plant may well have been unwarranted. Far from the apocalypse that opponents of nuclear energy anticipated, the Fukushima spew looks like a fizzle, one that should drastically alter our understanding of the risks of nuclear power.

Anti-nuke commentators like Arnie Gundersen continue to issue forecasts of a million or more long-term casualties from Fukushima radiation. (So far there have been none.) But the emerging scientific consensus is that the long-term health consequences of the radioactivity, particularly cancer fatalities, will be modest to nil. At the high end of specific estimates, for example, Princeton physicist Frank von Hippel, writing in the nuke-dreading Bulletin of the Atomic Scientists, reckons an eventual one thousand fatal cancers arising from the spew.

Now there’s a new peer-reviewed paper by Stanford’s Mark Z. Jacobson and John Ten Hoeve that predicts remarkably few casualties. (Jacobson, you may remember, wrote a noted Scientific American article proposing an all-renewable energy system for the world.) They used a supercomputer to model the spread of radionuclides from the Fukushima reactors around the globe, and then calculated the resulting radiation doses and cancer cases through the year 2061. Their result: a probable 130 fatal cancers, with a range from 15 to 1300, in the whole world over fifty years. (Because radiation exposures will have subsided to insignificant levels by then, these cases comprise virtually all that will ever occur.) They also simulated a hypothetical Fukushima-scale meltdown of the Diablo Canyon nuclear power plant in California, and calculated a likely cancer death toll of 170, with a range from 24 to 1400.

To put these figures in context, pollution from American coal-fired power plants alone kills about 13,000 people every year. The Stanford estimates therefore indicate that the Fukushima spew, the only significant nuclear accident in 25 years, will likely kill fewer people over five decades than America’s coal-fired power plants kill every five days to five weeks. Worldwide, coal plants kill over 200,000 people each year—150 times more deaths than the high-end Fukushima forecasts predict over a half century.
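
The comparison is easy to reproduce. Here is the arithmetic, a rough sketch using only the figures quoted above:

```python
# Rough arithmetic behind the coal comparison, using the figures quoted in
# the text (treat them as the post's assumptions, not independent data).

fukushima_deaths_50yr = {"low": 15, "central": 130, "high": 1300}
us_coal_deaths_per_year = 13_000
world_coal_deaths_per_year = 200_000

us_coal_deaths_per_day = us_coal_deaths_per_year / 365
for label, n in fukushima_deaths_50yr.items():
    days = n / us_coal_deaths_per_day
    print(f"{label} estimate: equal to about {days:.1f} days of US coal deaths")

# One year of worldwide coal deaths vs. the high-end 50-year Fukushima estimate
print(world_coal_deaths_per_year / fukushima_deaths_50yr["high"])   # ~150x
```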

We’ll probably never know whether these projected Fukushima fatalities come to pass or not. The projections are calculated by multiplying radiation doses by standard risk factors derived from high-dose exposures; these risk factors are generally assumed—but not proven—to hold up at the low doses that nuclear spews emit. Radiation is such a weak carcinogen that scientists just can’t tell for certain whether it causes any harm at all below a dose of 100 millisieverts (100 mSv). Even if it does, it’s virtually impossible to discern such tiny changes in cancer rates in epidemiological studies. Anti-nukes give that fact a paranoid spin by warning of “hidden cancer deaths.” But if you ask me, risks that are too small to measure are too small to worry about.

The Stanford study relied on a computer simulation, but empirical studies of radiation doses support the picture of negligible effects from the Fukushima spew.

In a direct measurement of radiation exposure, officials in Fukushima City, about 40 miles from the nuclear plant, made 37,000 schoolchildren wear dosimeters around the clock during September, October and December, 2011, to see how much radiation they soaked up. Over those three months, 99 percent of the participants absorbed less than 1 mSv, with an average external dose of 0.26 mSv. Doubling that to account for internal exposure from ingested radionuclides, and scaling the three-month figure up to a full year, gives an annual dose of 2.08 mSv. That’s a pretty small dose, about one third the natural radiation dose in Denver, with its high altitude and abundant radon gas, and many times too small to cause any measurable up-tick in cancer rates. At the time, the outdoor air-dose rate in Fukushima was about 1 microsievert per hour (or about 8.8 mSv per year), so the absorbed external dose was only about one eighth of the ambient dose. That’s because the radiation is mainly gamma rays emanating from radioactive cesium in the soil, which are absorbed by air and blocked by walls and roofs. Since people spend most of their time indoors at a distance from soil—often on upper floors of houses and apartment buildings—they are shielded from most of the outdoor radiation.
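
The dose arithmetic in that paragraph is worth making explicit; here it is as a short sketch, using only the numbers quoted above:

```python
# Dose arithmetic from the Fukushima City dosimeter survey described above.
# A rough sketch with the post's numbers, not a dosimetry model.

external_3mo = 0.26          # average external dose over three months, mSv
internal_factor = 2.0        # post's rule of thumb: double to cover ingested dose
quarters_per_year = 4

annual_total = external_3mo * internal_factor * quarters_per_year
print(annual_total)          # 2.08 mSv per year

# Compare the absorbed external dose with the ambient outdoor dose rate.
ambient_mSv_per_hour = 0.001                        # ~1 microsievert per hour
ambient_per_year = ambient_mSv_per_hour * 24 * 365  # ~8.8 mSv per year
external_per_year = external_3mo * quarters_per_year
print(external_per_year / ambient_per_year)         # ~0.12, about one eighth
```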

Efforts to abate these low-level exposures will be massive—and probably redundant. The Japanese government has budgeted $14 billion for cleanup over thirty years and has set an immediate target of reducing radiation levels by 50 percent over two years. But most of that abatement will come from natural processes—radioactive decay and weathering that washes radio-cesium deep into the soil or into underwater sediments, where it stops irradiating people—that will reduce radiation exposures on their own by 40% over two years. (Contrary to the centuries-of-devastation trope, cesium radioactivity clears from the land fairly quickly.) The extra 10 percent reduction the cleanup may achieve over two years could be accomplished by simply doing nothing for three years. Over 30 years the radioactivity will naturally decline by at least 90 percent, so much of the cleanup will be overkill, more a political gesture than a substantial remediation. Little public-health benefit will flow from all that, because there was little radiation risk to begin with.

How little? Well, an extraordinary wrinkle of the Stanford study is that it calculated the figure of 130 fatal cancers by assuming that there had been no evacuation from the 20-kilometer zone around the nuclear plant. You may remember the widely televised scenes from that evacuation, featuring huddled refugees and young children getting wanded down with radiation detectors by doctors in haz-mat suits. Those images of terror and contagion reinforced the belief that the 20-km zone is a radioactive killing field that will be uninhabitable for eons. The Stanford researchers endorse that notion, writing in their introduction that “the radiation release poisoned local water and food supplies and created a dead-zone of several hundred square kilometers around the site that may not be safe to inhabit for decades to centuries.”

But later in their paper Jacobson and Ten Hoeve actually quantify the deadliness of the “dead-zone”—and it turns out to be a reasonably healthy place. They calculate that the evacuation from the 20-km zone probably prevented all of 28 cancer deaths, with a lower bound of 3 and an upper bound of 245. Let me spell out what that means: if the roughly 100,000 people who lived in the 20-km evacuation zone had not evacuated, and had just kept on living there for 50 years on the most contaminated land in Fukushima prefecture, then probably 28 of them—and at most 245—would have incurred a fatal cancer because of the fallout from the stricken reactors. At the very high end, that’s a fatality risk of 0.245 %, which is pretty small—about half as big as an American’s chances of dying in a car crash. Jacobson and Ten Hoeve compare those numbers to the 600 old and sick people who really did die during the evacuation from the trauma of forced relocation. “Interestingly,” they write, “the upper bound projection of lives saved from the evacuation is lower than the number of deaths already caused by the evacuation itself.”

That observation sure is interesting, and it raises an obvious question: does it make sense to evacuate during a nuclear meltdown?

In my opinion—not theirs—it doesn’t. I don’t take the Stanford study as gospel; its estimate of risks in the evacuation zone (EZ) strikes me as a bit too low. Taking its numbers into account along with new data on cesium clearance rates and the discrepancy between ambient external radiation and absorbed doses, I think a reasonable guesstimate of ultimate cancer fatalities in the EZ, had it never been evacuated, would be several hundred up to a thousand. (Again, probably too few to observe in epidemiological studies.) The crux of the issue is whether immediate radiation exposures from inhalation outweigh long-term exposures emanating from radioactive soil. Do you get more cancer risk from breathing in the radioactive cloud in the first month of the spew, or from the decades of radio-cesium “groundshine” after the cloud disperses? Jacobson and Ten Hoeve’s model assigns most of the risk to the cloud, while other calculations, including mine, give more weight to groundshine.

But from the standpoint of evacuation policy, the distinction may be moot. If the Stanford model is right, then evacuations are clearly wrong—the radiation risks are trivial and the disruptions of the evacuation too onerous. But if, on the other hand, cancer risks are dominated by cesium groundshine, then precipitate forced evacuations are still wrong, because those exposures only build up slowly. The immediate danger in a spew is thyroid cancer risk to kids exposed to iodine-131, but that can be counteracted with potassium iodide pills or just by barring children from drinking milk from cows feeding on contaminated grass for the three months it takes the radio-iodine to decay away. If that’s taken care of, then people can stay put for a while without accumulating dangerous exposures from radio-cesium.

Data from empirical studies of heavily contaminated areas support the idea that rapid evacuations are unnecessary. The Japanese government used questionnaires correlated with air-dose readings to estimate the radiation doses received in the four months immediately after the March meltdown in the townships of Namie, Iitate and Kawamata, a region just to the northwest of the 20-kilometer exclusion zone. This area was in the path of an intense fallout plume and incurred contamination comparable to levels inside the EZ; it was itself evacuated starting in late May. The people there were the most irradiated in all Japan, yet even so the radiation doses they received over those four months, at the height of the spew, were modest. Out of 9747 people surveyed, 5636 got doses of less than 1 millisievert, 4040 got doses between 1 and 10 mSv and 71 got doses between 10 and 23 mSv. Assuming everyone was at the high end of their dose category and a standard risk factor of 570 cancer fatalities per 100,000 people exposed to 100 mSv, we would expect to see a grand total of three cancer deaths among those 10,000 people over a lifetime from that four-month exposure. (As always, these calculated casualties are purely conjectural—far too few to ever “see” in epidemiological statistics.)
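
The expected-deaths figure follows directly from those survey numbers and the stated risk factor; a quick sketch of the calculation:

```python
# Expected fatal cancers from the Namie/Iitate/Kawamata dose survey quoted
# above, on the post's assumptions: everyone at the top of their dose band,
# and a linear risk of 570 fatal cancers per 100,000 people per 100 mSv.

dose_bands = [      # (number of people, assumed dose in mSv)
    (5636, 1),
    (4040, 10),
    (71, 23),
]
risk_per_person_per_mSv = 570 / 100_000 / 100

person_mSv = sum(n * dose for n, dose in dose_bands)
expected_deaths = person_mSv * risk_per_person_per_mSv
print(person_mSv, round(expected_deaths, 1))   # ~47,700 person-mSv, ~2.7 deaths
```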

Those numbers indicate that cancer risks in the immediate aftermath of a spew are tiny, even in very heavily contaminated areas. (Provided, always, that kids are kept from drinking iodine-contaminated milk.) Hasty evacuations are therefore needless. There’s time to make a considered decision about whether to relocate—not hours and days, but months and years.

And that choice should be left to residents. It makes no sense to roust retirees from their homes because of radiation levels that will raise their cancer risk by at most a few percent over decades. People can decide for themselves—to flee or not to flee—based on fallout in their vicinity and any other factors they think important. Relocation assistance should be predicated on an understanding that most places, even close to a stricken plant, will remain habitable and fit for most purposes. The vast “costs” of cleanup and compensation that have been attributed to the Fukushima accident are mostly an illusion or the product of overreaction, not the result of any objective harm caused by radioactivity.

Ultimately, the key to rational policy is to understand the kind of risk that nuclear accidents pose. We have a folk-conception of radiation as a kind of slow-acting nerve gas—the merest whiff will definitely kill you, if only after many years. That risk profile justifies panicked flight and endless quarantine after a radioactivity release, but it’s largely a myth. In reality, nuclear meltdowns present a one-in-a-hundred chance of injury. On the spectrum of threat they occupy a fairly innocuous position: somewhere above lightning strikes, in the same ballpark as driving a car or moving to a smoggy city, considerably lower than eating junk food. And that’s only for people residing in the maximally contaminated epicenter of a once-a-generation spew. For everyone else, including almost everyone in Fukushima prefecture itself, the risks are negligible, if they exist at all.

Unfortunately, the Fukushima accident has heightened public misunderstanding of nuclear risks, thanks to long-ingrained cultural associations of fission with nuclear war, the Japanese government’s hysterical evacuation orders and haz-mat mobilizations, and the alarmism of anti-nuke ideologues. The result is anti-nuclear backlash and the shutdown of Japanese and German nukes, which is by far the most harmful consequence of the spew. These fifty-odd reactors could be brought back on line immediately to displace an equal gigawattage of coal-fired electricity, and would prevent the emission of hundreds of millions of tons of carbon dioxide each year, as well as thousands of deaths from air pollution. But instead of calling for the restart of these nuclear plants, Greens have stoked huge crowds in Japan and elsewhere into marching against them. If this movement prevails, the environmental and health effects will be worse than those of any pipeline, fracking project or tar-sands development yet proposed.

But there may be a silver lining if the growing scientific consensus on the effects of the Fukushima spew triggers a paradigm shift. Nuclear accidents, far from being the world-imperiling crises of popular lore, are in fact low-stakes, low-impact events with consequences that are usually too small to matter or even detect. There’s been much talk over the past year about the need to digest “the lessons of Fukushima.” Here’s the most important and incontrovertible one: even when it melts down and blows up, nuclear power is safe.

Does the Fed Control Interest Rates?

Casey Mulligan goes to the New York Times to say that monetary policy doesn’t work. This annoys Brad DeLong:

THE NEW YORK TIMES PUBLISHES CASEY MULLIGAN AS A JOKE, DOESN’T IT? 

… The third joke is the entire third paragraph: since the long government bond rate is made up of the sum of (a) an average of present and future short-term rates and (b) term and risk premia, if Federal Reserve policy affects short rates then–unless you want to throw every single vestige of efficient markets overboard and argue that there are huge profit opportunities left on the table by financiers in the bond market–Federal Reserve policy affects long rates as well. 

Casey B. Mulligan: Who Cares About Fed Funds?: New research confirms that the Federal Reserve’s monetary policy has little effect on a number of financial markets, let alone the wider economy…. Eugene Fama of the University of Chicago recently studied the relationship between the markets for overnight loans and the markets for long-term bonds…. Professor Fama found the yields on long-term government bonds to be largely immune from Fed policy changes…

Krugman piles on [1]; the only problem with DeLong’s post, he says, is that

it fails to convey the sheer numbskull quality of Mulligan’s argument. Mulligan tries to refute people like, well, me, who say that the zero lower bound makes the case for fiscal policy. … Mulligan’s answer is that this is foolish, because monetary policy is never effective. Huh? 

… we have overwhelming empirical evidence that monetary policy does in fact “work”; but Mulligan apparently doesn’t know anything about that.

Overwhelming evidence? Citation needed, as the Wikipedians say.

Anyway, I don’t want to defend Mulligan — I haven’t even read the column in question — but on this point, he’s got a point. Not only that: He’s got the more authentic Keynesian position.

Textbook macro models, including the IS-LM that Krugman is so fond of, feature a single interest rate, set by the Federal Reserve. The actual existence of many different interest rates in real economies is hand-waved away with “risk premia” — market rates are just equal to “the” interest rate plus a premium for the expected probability of default of that particular borrower. Since the risk premia depend on real factors, they should be reasonably stable, or at least independent of monetary policy. So when the Fed Funds rate goes up or down, the whole rate structure should go up and down with it. In which case, speaking of “the” interest rate as set by the central bank is a reasonable shorthand.

How’s that hold up in practice? Let’s see:

The figure above shows the Federal Funds rate and various market rates over the past 25 years. Notice how every time the Fed changes its policy rate (the heavy black line) the market rates move right along with it?

Yeah, not so much.

In the two years after June 2007, the Fed lowered its rate by a full five points. In this same period, the rate on Aaa bonds fell by less than 0.2 points, and rates for Baa and state and local bonds actually rose. On a naive look at the data, the “overwhelming” evidence for the effectiveness of monetary policy is not immediately obvious.

Ah but it’s not current short rates that long rates are supposed to follow, but expected short rates. This is what our orthodox New Keynesians would say. My first response is, So what? Bringing expectations in might solve the theoretical problem but it doesn’t help with the practical one. “Monetary policy doesn’t work because it doesn’t change expectations” is just a particular case of “monetary policy doesn’t work.”

But it’s not at all obvious that long rates follow expected short rates either. Here’s another figure. This one shows the spreads between the 10-Year Treasury and the Baa corporate bond rates, respectively, and the (geometric) average Fed Funds rate over the following 10 years.

If DeLong were right that “the long government bond rate is made up of the sum of (a) an average of present and future short-term rates and (b) term and risk premia” then the blue bars should be roughly constant at zero, or slightly above it. [2] Not what we see at all. It certainly looks as though the markets have been systematically overestimating the future level of the Federal Funds rate for decades now. But hey, who are you going to believe, the efficient markets theory or your lying eyes? Efficient markets plus rational expectations say that long rates must be governed by the future course of short rates, just as stock prices must be governed by future flows of dividends. Both claims must be true in theory, which means they are true, no matter how stubbornly they insist on looking false.
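
For anyone who wants to reproduce a figure like this, here is a sketch of the calculation. Loading the monthly rate series (say, from FRED) is left out, the series names are placeholders, and the exact construction behind the figure above may differ in details like the averaging window.

```python
import pandas as pd

def forward_avg_spread(long_rate: pd.Series, fed_funds: pd.Series,
                       years: int = 10) -> pd.Series:
    """Long rate today minus the realized geometric average of the Fed Funds
    rate over the following `years` years. Both series are monthly, in
    percent, and aligned on the same dates."""
    months = 12 * years
    growth = 1 + fed_funds / 100
    # Geometric average of the *next* `months` observations at each date:
    # compute it over a trailing window, then shift it back to the window start.
    fwd_geo_avg = (
        growth.rolling(months)
        .apply(lambda x: x.prod() ** (1 / months), raw=True)
        .shift(-(months - 1))
        .sub(1)
        .mul(100)
    )
    return long_rate - fwd_geo_avg

# Usage, with series you have loaded yourself (names are placeholders):
# treasury_spread = forward_avg_spread(ten_year_treasury, fed_funds)
# baa_spread = forward_avg_spread(baa_corporate, fed_funds)
```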

Of course if you want to believe that the inherent risk premium on long bonds is four points higher today than it was in the 1950s, 60s and 70s (despite the fact that the default rate on Treasuries, now as then, is zero) and that the risk premium just happens to rise whenever the short rate falls, well, there’s nothing I can do to stop you.

But what’s the alternative? Am I really saying that players in the bond market are leaving huge profit opportunities on the table? Well, sometimes, maybe. But there’s a better story, the one I was telling the other day.

DeLong says that if rates are set by rational, profit-maximizing agents, then — setting aside default risk — long rates should be equal to the average of short rates over their term. This is a standard view; everyone learns it. But it’s not strictly correct. What profit-maximizing bond traders do is set long rates equal to the expected future value of long rates.

I went through this in that other post, but let’s do it again. Take a long bond — we’ll call it a perpetuity to keep the math simple, but the basic argument applies to any reasonably long bond. Say it has a coupon (annual payment) of $40 per year. If that bond is currently trading at $1000, that implies an interest rate of 4 percent. Meanwhile, suppose the current short rate is 2 percent, and you expect that short rate to be maintained indefinitely. Then the long bond is a good deal — you’ll want to buy it. And as you and people like you buy long bonds, their price will rise. It will keep rising until it reaches $2000, at which point the long interest rate is 2 percent, meaning that the expected return on holding the long bond is identical to that on rolling over short bonds, so there’s no incentive to trade one for the other. This is the arbitrage that is supposed to keep long rates equal to the expected future value of short rates. If bond traders don’t behave this way, they are missing out on profitable trades, right?

Not necessarily. Suppose the situation is as described above — 4 percent long rate, 2 percent short rate which you expect to continue indefinitely. So buying a long bond is a no-brainer, right? But suppose you also believe that the normal or usual long rate is 5 percent, and that it is likely to return to that level soon. Maybe you think other market participants have different expectations of short rates, maybe you think other market participants are irrational, maybe you think… something else, which we’ll come back to in a second. For whatever reason, you think that short rates will be 2 percent forever, but that long rates, currently 4 percent, might well rise back to 5 percent. If that happens, the long bond currently trading for $1000 will fall in price to $800. (Remember, the coupon is fixed at $40, and 5% = 40/800.) You definitely don’t want to be holding a long bond when that happens. That would be a capital loss of 20 percent. Of course every year that you hold short bonds rather than buying the long bond at its current price of $1000, you’re missing out on $20 of interest; but if you think there’s even a moderate chance of the long bond falling in value by $200, giving up $20 of interest to avoid that risk might not look like a bad deal.
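
The trade-off in that paragraph, again as a quick calculation. Same numbers as the text; the 15 percent reversion probability in the last line is my own illustrative assumption, not a figure from the post.

coupon = 40.0
price_now = 1000.0                       # implies a 4 percent long rate
price_if_normal = coupon / 0.05          # 800.0 if the long rate snaps back to 5 percent

capital_loss = price_now - price_if_normal        # 200, i.e. 20 percent of the price
forgone_interest = price_now * (0.04 - 0.02)      # 20 per year of holding short bonds instead
print(capital_loss, capital_loss / price_now, forgone_interest)

# Even a modest chance of the snap-back outweighs a year of extra interest:
print(0.15 * capital_loss, ">", forgone_interest)  # 30.0 > 20.0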

Of course, even if you think the long bond is likely to fall in value to $800, that doesn’t mean you won’t buy it at any price above that. If the current price is only a bit above $800 (that is, if the current interest rate is only a bit below the “normal” level of 5 percent), you might decide the extra interest you get from buying the long bond is enough to compensate you for the modest risk of a capital loss. So in this situation, the equilibrium long rate won’t be at the normal level, but slightly below it. And if the situation continues long enough, people will presumably adjust their view of the “normal” long rate toward this equilibrium, allowing the new equilibrium rate to fall further. In this way, if short rates are kept far enough from long rates for long enough, long rates will eventually follow. We are seeing a bit of this process now. But adjusting expectations in this way is too slow to be practical for countercyclical policy. Starting in 1998, the Fed reduced rates by 4.5 points, and maintained them at this low level for a full six years. Yet this was only enough to reduce Aaa bond rates (which shouldn’t include any substantial default risk premium) by slightly over one point.
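
The equilibrium described at the start of that paragraph can be put the same way: it is the price at which the long bond’s extra interest just offsets the expected capital loss from a return to the “normal” rate. A back-of-the-envelope sketch, with reversion probabilities that are my own illustrative assumptions:

coupon, r_short, r_normal = 40.0, 0.02, 0.05
price_normal = coupon / r_normal         # 800: the price at the "normal" 5 percent rate

def breakeven_price(p):
    # Indifference over one year between the long bond and rolling over short bonds:
    #   (coupon + p * (price_normal - P)) / P = r_short, solved for P.
    return (coupon + p * price_normal) / (r_short + p)

for p in (0.1, 0.3, 0.5):
    P = breakeven_price(p)
    print(p, round(P), round(100 * coupon / P, 2))   # price and implied long rate, percent
# The more firmly the market expects a return to 5 percent, the closer the equilibrium
# long rate stays to that conventional level, whatever the short rate does.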

In my previous post, I pointed out that for policy to affect long rates, it must include (or be believed to include) a substantial permanent component, so stabilizing the economy this way will involve a secular drift in interest rates — upward in an economy facing inflation, downward in one facing unemployment. (As Steve Randy Waldman recently noted, Michal Kalecki pointed this out long ago.) That’s important, but I want to make another point here.

If the primary influence on current long rates is the expected future value of long rates, then there is no sense in which long rates are set by fundamentals. There are a potentially infinite number of self-fulfilling expected levels for long rates. And again, no one needs to behave irrationally for these conventions to sustain themselves. The more firmly anchored the expected level of long rates is, the more rational it is for individual market participants to act so as to maintain that level. That’s the “other thing” I suggested above. If people believe that long rates can’t fall below a certain level, then they have an incentive to trade bonds in a way that will in fact prevent rates from falling much below that level. Which means they are right to believe it. It’s just like driving on the right or left side of the street: if everyone else is doing it, it is rational for you to do it as well, which ensures that everyone will keep doing it, even if it’s not the best response to the “fundamentals” in a particular context.

Needless to say, the idea that the long-term rate of interest is basically a convention comes straight from Keynes. As he puts it in Chapter 15 of The General Theory,

The rate of interest is a highly conventional … phenomenon. For its actual value is largely governed by the prevailing view as to what its value is expected to be. Any level of interest which is accepted with sufficient conviction as likely to be durable will be durable; subject, of course, in a changing society to fluctuations for all kinds of reasons round the expected normal. 

You don’t have to take Keynes as gospel, of course. But if you’ve gotten as much mileage as Krugman has out of the particular extract of Keynes’ ideas embodied in the IS-LM model, wouldn’t it make sense to at least wonder why the man thought this about interest rates, and whether there might not be something to it?

Here’s one more piece of data. This table shows the average spread between various market rates and the Fed Funds rate.

Spreads over Fed Funds by decade, in percentage points
10-Year Treasuries Aaa Corporate Bonds Baa Corporate Bonds State & Local Bonds
1940s n/a 2.2 3.3 n/a
1950s 1.0 1.3 2.0 0.7
1960s 0.5 0.8 1.5 -0.4
1970s 0.4 1.1 2.2 -1.1
1980s 0.6 1.4 2.9 -0.9
1990s 1.5 2.6 3.3 0.9
2000s 1.5 3.0 4.1 1.8
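
For what it’s worth, a table along these lines can be roughly reproduced from FRED data, as in the sketch below. The series codes are standard FRED identifiers, but the Fed Funds series only begins in mid-1954, I don’t know of an obvious FRED counterpart for the state and local bond series, and the exact figures will differ a little depending on the data vintage — so treat this as an illustration of the recipe rather than the table’s actual source.

import pandas as pd
from pandas_datareader import data as pdr

# Monthly yields, in percent: 10-year Treasury, Aaa and Baa corporates, Fed Funds.
df = pdr.DataReader(["GS10", "AAA", "BAA", "FEDFUNDS"], "fred",
                    start="1954-07-01", end="2009-12-31")

# Spread of each market rate over the Fed Funds rate, averaged by decade.
spreads = df[["GS10", "AAA", "BAA"]].sub(df["FEDFUNDS"], axis=0)
by_decade = spreads.groupby((spreads.index.year // 10) * 10).mean()
print(by_decade.round(1))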

Treasuries carry no default risk; a given bond rating should imply a fixed level of default risk, with the default risk on Aaa bonds being practically negligible. [3] Yet the 10-year Treasury spread has increased by a full point, and the corporate bond spreads by about two points, compared with the postwar era. (Municipal spreads have risen by even more, but there may be an element of genuinely increased risk there.) Brad DeLong might argue that society’s risk-bearing capacity has declined so catastrophically since the 1960s that even the tiny quantum of risk in Aaa bonds requires two full additional points of interest to compensate its quaking, terrified bearers. And that this has somehow happened without requiring any more compensation for the extra risk in Baa bonds relative to Aaa. I don’t think even DeLong would argue this, but when the honor of efficient markets is at stake, people have been known to do strange things.

Wouldn’t it be simpler to allow that maybe long rates are not, after all, set as “the sum of (a) an average of present and future short-term rates and (b) [relatively stable] term and risk premia,” but that they follow their own independent course, set by conventional beliefs that the central bank can only shift slowly, unreliably and against considerable resistance? That’s what Keynes thought. It’s what Alan Greenspan thinks. [4] And also it’s what seems to be true, so there’s that.

[1] Prof. T. asks what I’m working on. A blogpost, I say. “Let me guess — it says that Paul Krugman is great but he’s wrong about this one thing.” Um, as a matter of fact…

[2] There’s no risk premium on Treasuries, and it is not theoretically obvious why term premia should be positive on average, though in practice they generally are.

[3] Despite all the — highly deserved! — criticism the agencies got for their credulous ratings of mortgage-backed securities, they do seem to be good at assessing corporate default risk. The cumulative ten-year default rate for Baa bonds issued in the 1970s was 3.9 percent. Two decades later, the cumulative ten-year default rate for Baa bonds issued in the 1990s was … 3.9 percent. (From here, Exhibit 42.)

[4] Greenspan thinks that the economically important long rates “had clearly delinked from the fed funds rate in the early part of this decade.” I would only add that this was just the endpoint of a longer trend.