In Comments: The Lessons of Fukushima

I’d like to promise a more regular posting schedule here. On the other hand, seeing as I’m on the academic job market this fall (anybody want to hire a radical economist?) I really shouldn’t be blogging at all. So on balance we’ll probably just keep staggering along as usual.

But! In lieu of new posts, I really strongly recommend checking out the epic comment thread on Will Boisvert’s recent post on the lessons of Fukushima. It’s well over 100 comments, which while no big deal for real blogs, is off the charts here. But more importantly, they’re almost all serious & thoughtful, and a number evidently come from people with real expertise in nuclear and/or alternative energy. Which just goes to show: If you bring data and logic to the conversation, people will respond in kind.

You Eat Mitt Romney’s Salt

Don’t you love the Romney video? I’m not going to deny it, right now I am with Team Dem. It’s true, we usually say “the bosses have two parties”; but it’s not usual for them to run for office themselves. And when they do, wow, what a window onto how they really think.

It’s hard to even imagine the mindset where the person sitting in the back of the town car is the “maker” and the person up front driving is just lazing around; where the guys maintaining the hedges and manning the security gates at the mansion are idle parasites, while the person living in it, just by virtue of that fact, is working; where the person who owns the dressage horse is the producer and the people who groom it and feed it and muck it are the layabouts. As some on the right have pointed out, it’s weird, also, that “producing” is now equated with paying federal taxes. Isn’t working in the private sector supposed to be productive? Isn’t a successful business contributing something to society besides checks to the IRS?

It is weird. But as we’re all realizing, the 47 percent/53 percent rhetoric has a long history on the Right. (It would be interesting to explore this via the rounding-up of 46.4 percent to 47, the same way medievalists trace the dissemination of a text by the propagation of copyists’ errors.) Naturally, brother Konczal is on the case, with a great post tracing out four lineages of the 47 percent. His preferred starting point, like others’, is the Wall Street Journal‘s notorious 2002 editorial on the “lucky duckies” who pay no income tax.

That’s a key reference point, for sure. But I think this attitude goes back a bit further. The masters of mankind, it seems to me, have always cultivated a funny kind of solipsism, imagining that the people who fed and clothed and worked and fought for them were somehow living off of them instead.

Here, as transcribed in Peter Laslett’s The World We Have Lost, is Gregory King’s 1688 “scheme” of the population of England. It’s fascinating to see the careful gradations of status (early-moderns were nothing if not attentive to “degree”); we’ll be pleased to see, for instance, that “persons in liberal arts and sciences,” come above shopkeepers, though below farmers. But look below that to the “general account” at the bottom. We have 2.675 million people “increasing the wealth of the kingdom,” and 2.825 million “decreasing the wealth of the kingdom.” The latter group includes not only the vagrants, gypsies and thieves, but common seamen, soldiers, laborers, and “cottagers,” i.e. landless farmworkers. So in three centuries, the increasers are up from 49 percent to 53 percent, and the lucky duckies are down from 51 percent to 47. That’s progress, I guess.

One can’t help wondering how the wealth of the kingdom would hold up if the eminent traders by sea couldn’t find common seamen, if the farmers had to do without laborers, if there were officers but no common soldiers. 

Young Alexander conquered India.
He alone?
Caesar beat the Gauls.
Was there not even a cook in his army?

Always more where they come from, I suppose Gregory King might say.

Here, also from Laslett, is a similar division from 100 years earlier, by Sir Thomas Smith:

1. ‘The first part of the Gentlemen of England called Nobilitas Major.’ This is the nobility, or aristocracy proper.
2. ‘The second sort of Gentlemen called Nobilitas Minor.’ This is the gentry and Smith further divides it into Knights, Esquires and gentlemen.
3. ‘Citizens, Burgesses and Yeomen.’
4. ‘The fourth sort of men which do not rule.’

Of this last group, Smith explains:

The fourth sort or class amongst us is of those which the old Romans called capite sensu proletarii or operarii, day labourers, poor husbandmen, yea merchants or retailers which have no free land, copyholders, and all artificers, as tailors, shoemakers, carpenters, brick-makers, brick-layers, etc. These have no voice nor authority in our commonwealth and no account is made of them, but only to be ruled and not to rule others.

In other words, Elizabethan Mitt Romney, your job is not to worry about those people.

Smith’s contemporary Shakespeare evidently had distinctions like these in mind when he wrote Coriolanus. (A remarkably radical play; I think it was the only Shakespeare Brecht approved of.) The title character’s overriding passion is his contempt for the common people, those “geese that bear the shapes of men,” who “sit by th’ fire and presume to know what’s done i’ the Capitol.” He hates them specifically because they are, as it were, dependent, and think of themselves as victims.

They said they were an-hungry; sigh’d forth proverbs, —
That hunger broke stone walls, that dogs must eat,
That meat was made for mouths, that the gods sent not
Corn for the rich men only: — with these shreds
They vented their complainings…

He has no patience for this idea that people are entitled to enough to eat:

Would the nobility lay aside their ruth
And let me use my sword, I’d make a quarry
With thousands of these quarter’d slaves, as high
As I could pick my lance.

One more instance. Did everybody read Daniyal Mueenuddin’s In Other Rooms, Other Wonders?

It’s a magnificent, but also profoundly conservative, work of fiction. In Mueenuddin’s world the social hierarchy is so natural, so unquestioned, that any crossing of its boundaries can only be understood as a personal, moral failing, which of course always comes at a great personal cost. There’s one phrase in particular that occurs repeatedly in the book: “They eat your salt,” “you ate his salt,” etc. The thing about this evidently routine expression is that the eater is always someone of lower status, and the person whose salt is being eaten is always a landlord or aristocrat. “Oh what could be the matter in your service? I’ve eaten your salt all my life,” says the electrician who has, in fact, spent all his life keeping the pumps going on the estates of the man he’s petitioning. Somehow, in this world, the person who sits in a mansion in Lahore or Karachi is entitled as a matter of course to all the salt and all the good things of life, and the person who physically produces the salt should be grateful to get any of it.

Mueenuddin describes this world vividly and convincingly, in part because he is a writer of great talent, but also clearly in part because he shares its essential values. Just in case we haven’t got the point, the collection’s final story, “A Spoiled Man,” is about how an old laborer’s life is ruined when his master’s naive American wife gets the idea he deserves a paycheck and proper place to sleep, giving him the disastrous idea that he has rights. You couldn’t write fiction like that in this country, I don’t think. Hundreds of years of popular struggle have reshaped the culture in ways that no one with the sensitivity to write good fiction could ignore. A Romney is a different story.

UPDATE: Krugman today is superb on this. (Speaking of being on Team Dem.)

Thomas Sargent Abandons Rational Expectations, Converts to Post Keynesianism

From Keynes (1937), “The General Theory of Employment”:

We have, as a rule, only the vaguest idea of any but the most direct consequences of our acts. Sometimes we are not much concerned with their remoter consequences, even though time and chance may make much of them. But sometimes we are intensely concerned with them… The whole object of the accumulation of wealth is to produce results, or potential results, at a comparatively distant, and sometimes indefinitely distant, date. Thus the fact that our knowledge of the future is fluctuating, vague and uncertain, renders wealth a peculiarly unsuitable subject for the methods of the classical economic theory. … 

By ‘uncertain’ knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty… Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of an European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

Better late than never, I suppose…

UPDATE: OK, ok, Sargent has written a lot about uncertainty and expectations. Harry Konstantidis points out this 1972 article, “Rational Expectations and the Term Structure of Interest Rates,” which is … really interesting, actually. It’s an empirical test of whether the behavior of interest rates over time is consistent with bond prices being set by a process of rational expectations. The conclusions are surprisingly negative:

The evidence summarized above implies that it is difficult to maintain both that only expectations determine the yield curve and that expectations are rational in the sense of efficiently incorporating available information. The predictions of the random walk version of the model are fairly decisively rejected by the data, particularly for forward rates with less than five years term to maturity. … 

It is clear that our conclusions apply with equal force to the diluted form of the expectations hypothesis that allows forward rates to be determined by expectations plus time-invariant liquidity premiums. … On the other hand, it would clearly be possible to determine a set of time-dependent “liquidity premiums” that could be used to adjust the forward rates so that the required sequences would display “white” spectral densities. … While this procedure has its merits in certain instances, it is essentially arbitrary, there being no adequate way to relate the “liquidity premiums” so derived to objective characteristics of markets… 

An alternative way to “save” the doctrine that expectations alone determine the yield curve in the face of empirical evidence like that presented above is to abandon the hypothesis that expectations are rational. Once that is done, the model becomes much freer, being capable of accommodating all sorts of ad hoc, plausible hypotheses about the formation of expectations. Yet salvaging the expectations theory in that way involves building a model of the term structure that … permits expectations to be formed via a process that could utilize available information more efficiently and so enhance profits. That seems … extremely odd.

In other words: Observed yields are inconsistent with a simple version of rational expectations. They could be made consistent, but only by trivializing the theory by adding ad hoc adjustments that could fit anything. We might conclude that expectations are not rational, but that’s too scary. So … who knows.

One unambiguous point, though, is that under rational expectations interest rates should follow a random walk, and your best prediction of interest rates on a given instrument at some future date should be the rate today. Just saying “I don’t know” is not consistent with rational expectations — for one thing, it gives you no basis for deciding whether a product like the one being advertised here is priced fairly. Sargent’s “No” is consistent, though, with fundamental uncertainty in the Keynes sense. In that case the decision to buy or not buy is based on nonrational behavioral rules — like, say, looking at endorsements of recognized authorities. (Keynes: “Knowing that our individual judgment is worthless, we endeavour to fall back on the judgment of the rest of the world which is perhaps better informed.”) So I stand by my one-liner.
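The random-walk point is easy to see in a toy simulation (my own sketch, nothing from Sargent’s paper): if the rate follows a random walk, the average of many simulated future paths at any horizon is just today’s rate.

```python
import random

random.seed(42)

def simulate_rate(r0, steps, shock_sd=0.1):
    """Walk the rate forward: each period adds an independent zero-mean shock."""
    r = r0
    for _ in range(steps):
        r += random.gauss(0, shock_sd)
    return r

# Under the random-walk hypothesis the best forecast at any horizon
# is today's rate, since the expected cumulative shock is zero.
r0 = 3.0
for horizon in [1, 5, 20]:
    mean_future = sum(simulate_rate(r0, horizon) for _ in range(50_000)) / 50_000
    print(f"horizon {horizon:2d}: mean simulated rate = {mean_future:.3f}, forecast = {r0}")
```

Each simulated mean comes out within a basis point or two of 3.0: the theory makes today’s rate the forecast, not a shrug.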

In Defense of Debt

I have a new post up at the Jacobin, responding to Mike Beggs’ critical review of David Graeber’s Debt. It’s a much longer, and hopefully more convincing, version of some arguments I was having with Mike and others over at Crooked Timber last month. Mike thinks there is no useful economics in Debt; I think that on the contrary, the book fits well with important strands of heterodox economics going back to Marx and Keynes (not to mention Schumpeter and Wicksell).

In particular, I think the historical and anthropological material in Debt helps put concrete social flesh on two key analytic points. First, that we need to think of capitalism as primarily organized around the accumulation of money, with economic decisions taken in terms of money flows and money commitments; not as a system for the mutually beneficial exchange of goods. And second, that within capitalism, we can distinguish between economies where the medium of exchange is primarily commodity or fiat money, and economies where it is primarily bank-created credit money. Textbook economic analysis tends to work strictly in terms of the former, but both kinds of economies have existed historically and they behave quite differently.

(There’s a lot more in the book than this, of course, but what I am trying to do — I don’t know how successfully — is clarify the points where Debt contributes most directly to economics debates about money and credit.)

If this sounds at all interesting, you should first read Mike’s review, if you haven’t, and then read my very long response.

… and then, you should read all the other great stuff at The Jacobin. For my money, it’s the most exciting new political journal to come along in a while.

Bring Back Butlerism

From Eric Foner’s A Short History of Reconstruction:

Even more outrageous than Tweed … was Massachusetts Congressman Benjamin F. Butler, who flamboyantly supported causes that appalled reformers such as the eight-hour day, inflation, and payment of the national debt in greenbacks. He further horrified respectable opinion by embracing women’s suffrage, Irish nationalism, and the Paris Commune.

Or, as a horrified Nation put it, Butlerism was

the embodiment in political organization of a desire for the transfer of power to the ignorant and poor, and the use of government to carry out the poor and ignorant man’s view of the nature of society.

Labor law, inflation, women’s rights, anti-imperialism, and small-c communism, not to mention government by the poor? We could use a little more of that 1870s spirit today. People on the left who want central banks to do more, in particular, could talk more about loose money’s radical pedigree.

So who was this guy? The internet is mainly interested in his Civil War career. Made a general on the basis of his pro-Union, anti-slavery politics, he was, not surprisingly, pretty crap at it; but it does appear that he was the first Union officer to refuse to return fugitive slaves to their masters, and the first to successfully enlist black troops in the South. That was enough for Jefferson Davis to order that if he were captured, he should be executed on the spot. So he didn’t know how to lead a cavalry charge; sounds like a war hero to me.

In the current Jacobin (which everyone should be reading), Seth Ackerman offers emancipation and Reconstruction as a usable past for the Occupy left, unfavorably contrasting “the heavily prefigurative and antipolitical style of activism practiced by William Lloyd Garrison” with the pragmatic abolitionists who

saw that a strategic approach to abolition was required, one in which the “cause of the slave” would be harnessed to a wider set of appeals. At each stage of their project, from the Liberty Party to the Free Soil Party and finally the Republican Party, progressively broader coalitions were formed around an emerging ideology of free labor that merged antislavery principles with the economic interests of ordinary northern whites.

Today’s left, he suggests, could learn from this marriage of radical commitments and practical politics. Absolutely right.

There is, though, a problem: Reconstruction wasn’t just defeated in the South, it was abandoned by the North, largely by these same practical politicians, whose liberalism was transposed in just a few years from the key of anti-slavery to the key of “free trade, the law of supply and demand, the gold standard and limited government” (that’s Foner again), and who turned out to be less frightened by the restoration of white supremacy in the South than by “schemes for interference with property.”

If we must, as we must, “conjure up the spirits of the past …, borrowing from them names, battle slogans, and costumes in order to present this new scene in world history in time-honored disguise and borrowed language,” then certainly, we could do worse than the Civil-War era Republicans who successfully yoked liberalism to the cause of emancipation (though I’m not sure why Seth name-checks Salmon P. Chase, an early opponent of Reconstruction). But personally, I’d prefer to dress up as a populist who continued to support the rights of working people even after liberalism had decisively gone its own way, and who ended up representing “all that the liberals considered unwholesome in American politics.” Anybody for a revival of Butlerism?

Fukushima Update: How Safe Can a Nuclear Meltdown Get?

by Will Boisvert

Last summer I posted an essay here arguing that nuclear power is a lot safer than people think—about a hundred times safer than our fossil fuel-dominated power system. At the time I predicted that the impact of the March, 2011 Fukushima Daiichi nuclear plant accident in Japan would be small. A year later, now that we have a better fix on the consequences of the Fukushima meltdowns, I’ll have to revise “small” to “microscopic.” The accumulating data and scientific studies on the Fukushima accident reveal that radiation doses are and will remain low, that health effects will be minor and imperceptible, and that the traumatic evacuation itself from the area around the plant may well have been unwarranted. Far from the apocalypse that opponents of nuclear energy anticipated, the Fukushima spew looks like a fizzle, one that should drastically alter our understanding of the risks of nuclear power.

Anti-nuke commentators like Arnie Gundersen continue to issue forecasts of a million or more long-term casualties from Fukushima radiation. (So far there have been none.) But the emerging scientific consensus is that the long-term health consequences of the radioactivity, particularly cancer fatalities, will be modest to nil. At the high end of specific estimates, for example, Princeton physicist Frank von Hippel, writing in the nuke-dreading Bulletin of the Atomic Scientists, reckons an eventual one thousand fatal cancers arising from the spew.

Now there’s a new peer-reviewed paper by Stanford’s Mark Z. Jacobson and John Ten Hoeve that predicts remarkably few casualties. (Jacobson, you may remember, wrote a noted Scientific American article proposing an all-renewable energy system for the world.) They used a supercomputer to model the spread of radionuclides from the Fukushima reactors around the globe, and then calculated the resulting radiation doses and cancer cases through the year 2061. Their result: a probable 130 fatal cancers, with a range from 15 to 1300, in the whole world over fifty years. (Because radiation exposures will have subsided to insignificant levels by then, these cases comprise virtually all that will ever occur.) They also simulated a hypothetical Fukushima-scale meltdown of the Diablo Canyon nuclear power plant in California, and calculated a likely cancer death toll of 170, with a range from 24 to 1400.

To put these figures in context, pollution from American coal-fired power plants alone kills about 13,000 people every year. The Stanford estimates therefore indicate that the Fukushima spew, the only significant nuclear accident in 25 years, will likely kill fewer people over five decades than America’s coal-fired power plants kill every five days to five weeks. Worldwide, coal plants kill over 200,000 people each year—150 times more deaths than the high-end Fukushima forecasts predict over a half century.
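A quick back-of-the-envelope check of that comparison, using only the numbers in the paragraph above:

```python
us_coal_deaths_per_year = 13_000
coal_deaths_per_day = us_coal_deaths_per_year / 365        # ~36 deaths per day

# How many days of US coal mortality match each Fukushima estimate?
for label, deaths in [("low", 15), ("central", 130), ("high", 1300)]:
    days = deaths / coal_deaths_per_day
    print(f"{label:7s} estimate ({deaths:4d} deaths) ≈ {days:5.1f} days of US coal deaths")

# Worldwide coal kills ~200,000 people per year; ratio to the high-end estimate:
print(f"annual worldwide coal deaths / high-end Fukushima total ≈ {200_000 / 1300:.0f}x")
```

The central estimate works out to under four days of US coal mortality, and the worldwide annual ratio comes out around 150, matching the figures in the text.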

We’ll probably never know whether these projected Fukushima fatalities come to pass or not. The projections are calculated by multiplying radiation doses by standard risk factors derived from high-dose exposures; these risk factors are generally assumed—but not proven—to hold up at the low doses that nuclear spews emit. Radiation is such a weak carcinogen that scientists just can’t tell for certain whether it causes any harm at all below a dose of 100 millisieverts (100 mSv). Even if it does, it’s virtually impossible to discern such tiny changes in cancer rates in epidemiological studies. Anti-nukes give that fact a paranoid spin by warning of “hidden cancer deaths.” But if you ask me, risks that are too small to measure are too small to worry about.

The Stanford study relied on a computer simulation, but empirical studies of radiation doses support the picture of negligible effects from the Fukushima spew.

In a direct measurement of radiation exposure, officials in Fukushima City, about 40 miles from the nuclear plant, made 37,000 schoolchildren wear dosimeters around the clock during September, October and December, 2011, to see how much radiation they soaked up. Over those three months, 99 percent of the participants absorbed less than 1 mSv, with an average external dose of 0.26 mSv. Doubling that to account for internal exposure from ingested radionuclides gives an annual dose of 2.08 mSv. That’s a pretty small dose, about one third the natural radiation dose in Denver, with its high altitude and abundant radon gas, and many times too small to cause any measurable up-tick in cancer rates. At the time, the outdoor air-dose rate in Fukushima was about 1 microsievert per hour (or about 8.8 mSv per year), so the absorbed external dose was only about one eighth of the ambient dose. That’s because the radiation is mainly gamma rays emanating from radioactive cesium in the soil, which are absorbed by air and blocked by walls and roofs. Since people spend most of their time indoors at a distance from soil—often on upper floors of houses and apartment buildings—they are shielded from most of the outdoor radiation.
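The dose arithmetic in that paragraph is worth making explicit (a sketch using only the figures quoted above):

```python
external_3mo_mSv = 0.26                  # average measured external dose over three months

# Double for internal (ingested) exposure, then scale three months to a year
annual_dose_mSv = external_3mo_mSv * 2 * 4
print(f"estimated annual dose: {annual_dose_mSv:.2f} mSv")

# Compare the annualized external dose with the ambient outdoor rate of ~1 uSv/h
ambient_annual_mSv = 1.0 * 24 * 365 / 1000   # ~8.8 mSv/year
fraction = (external_3mo_mSv * 4) / ambient_annual_mSv
print(f"absorbed external dose is {fraction:.2f} of ambient, roughly one eighth")
```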

Efforts to abate these low-level exposures will be massive—and probably redundant. The Japanese government has budgeted $14 billion for cleanup over thirty years and has set an immediate target of reducing radiation levels by 50 percent over two years. But most of that abatement will come from natural processes—radioactive decay and weathering that washes radio-cesium deep into the soil or into underwater sediments, where it stops irradiating people—that will reduce radiation exposures on their own by 40 percent over two years. (Contrary to the centuries-of-devastation trope, cesium radioactivity clears from the land fairly quickly.) The extra 10 percent reduction the cleanup may achieve over two years could be accomplished by simply doing nothing for three years. Over 30 years the radioactivity will naturally decline by at least 90 percent, so much of the cleanup will be overkill, more a political gesture than a substantial remediation. Little public-health benefit will flow from all that, because there was little radiation risk to begin with.
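To see why waiting does most of the cleanup’s work, treat the 40 percent natural reduction over two years as a constant effective clearance rate (a simplification on my part: decay plus weathering is not really a single exponential) and extrapolate:

```python
retention_after_2yr = 0.60    # natural processes remove 40% of exposure in two years

def retention(years):
    """Fraction of radio-cesium exposure remaining after a given time,
    assuming a constant effective clearance rate."""
    return retention_after_2yr ** (years / 2)

for t in [2, 3, 30]:
    print(f"after {t:2d} years: {100 * (1 - retention(t)):.2f}% natural reduction")
```

On this assumption the natural reduction crosses 50 percent just before year three, and by year 30 it exceeds 99 percent, consistent with the “at least 90 percent” figure above.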

How little? Well, an extraordinary wrinkle of the Stanford study is that it calculated the figure of 130 fatal cancers by assuming that there had been no evacuation from the 20-kilometer zone around the nuclear plant. You may remember the widely televised scenes from that evacuation, featuring huddled refugees and young children getting wanded down with radiation detectors by doctors in haz-mat suits. Those images of terror and contagion reinforced the belief that the 20-km zone is a radioactive killing field that will be uninhabitable for eons. The Stanford researchers endorse that notion, writing in their introduction that “the radiation release poisoned local water and food supplies and created a dead-zone of several hundred square kilometers around the site that may not be safe to inhabit for decades to centuries.”

But later in their paper Jacobson and Ten Hoeve actually quantify the deadliness of the “dead-zone”—and it turns out to be a reasonably healthy place. They calculate that the evacuation from the 20-km zone probably prevented all of 28 cancer deaths, with a lower bound of 3 and an upper bound of 245. Let me spell out what that means: if the roughly 100,000 people who lived in the 20-km evacuation zone had not evacuated, and had just kept on living there for 50 years on the most contaminated land in Fukushima prefecture, then probably 28 of them—and at most 245—would have incurred a fatal cancer because of the fallout from the stricken reactors. At the very high end, that’s a fatality risk of 0.245%, which is pretty small—about half as big as an American’s chances of dying in a car crash. Jacobson and Ten Hoeve compare those numbers to the 600 old and sick people who really did die during the evacuation from the trauma of forced relocation. “Interestingly,” they write, “the upper bound projection of lives saved from the evacuation is lower than the number of deaths already caused by the evacuation itself.”

That observation sure is interesting, and it raises an obvious question: does it make sense to evacuate during a nuclear meltdown?

In my opinion—not theirs—it doesn’t. I don’t take the Stanford study as gospel; its estimate of risks in the EZ strikes me as a bit too low. Taking its numbers into account along with new data on cesium clearance rates and the discrepancy between ambient external radiation and absorbed doses, I think a reasonable guesstimate of ultimate cancer fatalities in the EZ, had it never been evacuated, would be several hundred up to a thousand. (Again, probably too few to observe in epidemiological studies.) The crux of the issue is whether immediate radiation exposures from inhalation outweigh long-term exposures emanating from radioactive soil. Do you get more cancer risk from breathing in the radioactive cloud in the first month of the spew, or from the decades of radio-cesium “groundshine” after the cloud disperses? Jacobson and Ten Hoeve’s model assigns most of the risk to the cloud, while other calculations, including mine, give more weight to groundshine.

But from the standpoint of evacuation policy, the distinction may be moot. If the Stanford model is right, then evacuations are clearly wrong—the radiation risks are trivial and the disruptions of the evacuation too onerous. But if, on the other hand, cancer risks are dominated by cesium groundshine, then precipitate forced evacuations are still wrong, because those exposures only build up slowly. The immediate danger in a spew is thyroid cancer risk to kids exposed to iodine-131, but that can be counteracted with potassium iodide pills or just by barring children from drinking milk from cows feeding on contaminated grass for the three months it takes the radio-iodine to decay away. If that’s taken care of, then people can stay put for a while without accumulating dangerous exposures from radio-cesium.

Data from empirical studies of heavily contaminated areas support the idea that rapid evacuations are unnecessary. The Japanese government used questionnaires correlated with air-dose readings to estimate the radiation doses received in the four months immediately after the March meltdown in the townships of Namie, Iitate and Kawamata, a region just to the northwest of the 20-kilometer exclusion zone. This area was in the path of an intense fallout plume and incurred contamination comparable to levels inside the EZ; it was itself evacuated starting in late May. The people there were the most irradiated in all Japan, yet even so the radiation doses they received over those four months, at the height of the spew, were modest. Out of 9747 people surveyed, 5636 got doses of less than 1 millisievert, 4040 got doses between 1 and 10 mSv and 71 got doses between 10 and 23 mSv. Assuming everyone was at the high end of their dose category and a standard risk factor of 570 cancer fatalities per 100,000 people exposed to 100 mSv, we would expect to see a grand total of three cancer deaths among those 10,000 people over a lifetime from that four-month exposure. (As always, these calculated casualties are purely conjectural—far too few to ever “see” in epidemiological statistics.)
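The three-deaths figure follows from a simple collective-dose calculation (a sketch with the survey numbers above, taking everyone at the top of their dose band):

```python
# (people, upper-bound dose in mSv) for each band of the government survey
dose_bands = [(5636, 1), (4040, 10), (71, 23)]

# Standard risk factor: 570 fatal cancers per 100,000 people per 100 mSv
risk_per_person_mSv = 570 / 100_000 / 100

collective_dose_mSv = sum(people * dose for people, dose in dose_bands)
expected_deaths = collective_dose_mSv * risk_per_person_mSv
print(f"collective dose: {collective_dose_mSv:,} person-mSv")
print(f"expected lifetime cancer deaths: {expected_deaths:.1f}")
```

That comes to about 2.7, which rounds to the “grand total of three” in the text, and that is already taking the most pessimistic reading of every dose band.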

Those numbers indicate that cancer risks in the immediate aftermath of a spew are tiny, even in very heavily contaminated areas. (Provided, always, that kids are kept from drinking iodine-contaminated milk.) Hasty evacuations are therefore needless. There’s time to make a considered decision about whether to relocate—not hours and days, but months and years.

And that choice should be left to residents. It makes no sense to roust retirees from their homes because of radiation levels that will raise their cancer risk by at most a few percent over decades. People can decide for themselves—to flee or not to flee—based on fallout in their vicinity and any other factors they think important. Relocation assistance should be predicated on an understanding that most places, even close to a stricken plant, will remain habitable and fit for most purposes. The vast “costs” of cleanup and compensation that have been attributed to the Fukushima accident are mostly an illusion or the product of overreaction, not the result of any objective harm caused by radioactivity.

Ultimately, the key to rational policy is to understand the kind of risk that nuclear accidents pose. We have a folk-conception of radiation as a kind of slow-acting nerve gas—the merest whiff will definitely kill you, if only after many years. That risk profile justifies panicked flight and endless quarantine after a radioactivity release, but it’s largely a myth. In reality, nuclear meltdowns present a one-in-a-hundred chance of injury. On the spectrum of threat they occupy a fairly innocuous position: somewhere above lightning strikes, in the same ballpark as driving a car or moving to a smoggy city, considerably lower than eating junk food. And that’s only for people residing in the maximally contaminated epicenter of a once-a-generation spew. For everyone else, including almost everyone in Fukushima prefecture itself, the risks are negligible, if they exist at all.

Unfortunately, the Fukushima accident has heightened public misunderstanding of nuclear risks, thanks to long-ingrained cultural associations of fission with nuclear war, the Japanese government’s hysterical evacuation orders and haz-mat mobilizations, and the alarmism of anti-nuke ideologues. The result is anti-nuclear backlash and the shutdown of Japanese and German nukes, which is by far the most harmful consequence of the spew. These fifty-odd reactors could be brought back on line immediately to displace an equal gigawattage of coal-fired electricity, and would prevent the emission of hundreds of millions of tons of carbon dioxide each year, as well as thousands of deaths from air pollution. But instead of calling for the restart of these nuclear plants, Greens have stoked huge crowds in Japan and elsewhere into marching against them. If this movement prevails, the environmental and health effects will be worse than those of any pipeline, fracking project or tar-sands development yet proposed.

But there may be a silver lining if the growing scientific consensus on the effects of the Fukushima spew triggers a paradigm shift. Nuclear accidents, far from being the world-imperiling crises of popular lore, are in fact low-stakes, low-impact events with consequences that are usually too small to matter or even detect. There’s been much talk over the past year about the need to digest “the lessons of Fukushima.” Here’s the most important and incontrovertible one: even when it melts down and blows up, nuclear power is safe.

Does the Fed Control Interest Rates?

Casey Mulligan goes to the New York Times to say that monetary policy doesn’t work. This annoys Brad DeLong:

THE NEW YORK TIMES PUBLISHES CASEY MULLIGAN AS A JOKE, DOESN’T IT? 

… The third joke is the entire third paragraph: since the long government bond rate is made up of the sum of (a) an average of present and future short-term rates and (b) term and risk premia, if Federal Reserve policy affects short rates then–unless you want to throw every single vestige of efficient markets overboard and argue that there are huge profit opportunities left on the table by financiers in the bond market–Federal Reserve policy affects long rates as well. 

Casey B. Mulligan: Who Cares About Fed Funds?: New research confirms that the Federal Reserve’s monetary policy has little effect on a number of financial markets, let alone the wider economy…. Eugene Fama of the University of Chicago recently studied the relationship between the markets for overnight loans and the markets for long-term bonds…. Professor Fama found the yields on long-term government bonds to be largely immune from Fed policy changes…

Krugman piles on [1]; the only problem with DeLong’s post, he says, is that

it fails to convey the sheer numbskull quality of Mulligan’s argument. Mulligan tries to refute people like, well, me, who say that the zero lower bound makes the case for fiscal policy. … Mulligan’s answer is that this is foolish, because monetary policy is never effective. Huh? 

… we have overwhelming empirical evidence that monetary policy does in fact “work”; but Mulligan apparently doesn’t know anything about that.

Overwhelming evidence? Citation needed, as the Wikipedians say.

Anyway, I don’t want to defend Mulligan — I haven’t even read the column in question — but on this point, he’s got a point. Not only that: He’s got the more authentic Keynesian position.

Textbook macro models, including the IS-LM model that Krugman is so fond of, feature a single interest rate, set by the Federal Reserve. The actual existence of many different interest rates in real economies is hand-waved away with “risk premia” — market rates are just equal to “the” interest rate plus a premium for the expected probability of default of that particular borrower. Since the risk premia depend on real factors, they should be reasonably stable, or at least independent of monetary policy. So when the Fed Funds rate goes up or down, the whole rate structure should go up and down with it. In which case, speaking of “the” interest rate as set by the central bank is a reasonable shorthand.

How’s that hold up in practice? Let’s see:

The figure above shows the Federal Funds rate and various market rates over the past 25 years. Notice how every time the Fed changes its policy rate (the heavy black line) the market rates move right along with it?

Yeah, not so much.

In the two years after June 2007, the Fed lowered its policy rate by a full five points. Over the same period, the rate on Aaa bonds fell by less than 0.2 points, and rates on Baa and state and local bonds actually rose. On a naive reading of the data, the “overwhelming” evidence for the effectiveness of monetary policy is not immediately obvious.

Ah but it’s not current short rates that long rates are supposed to follow, but expected short rates. This is what our orthodox New Keynesians would say. My first response is, So what? Bringing expectations in might solve the theoretical problem but it doesn’t help with the practical one. “Monetary policy doesn’t work because it doesn’t change expectations” is just a particular case of “monetary policy doesn’t work.”

But it’s not at all obvious that long rates follow expected short rates either. Here’s another figure. This one shows the spreads between the 10-Year Treasury and the Baa corporate bond rates, respectively, and the (geometric) average Fed Funds rate over the following 10 years.

If DeLong were right that “the long government bond rate is made up of the sum of (a) an average of present and future short-term rates and (b) term and risk premia” then the blue bars should be roughly constant at zero, or slightly above it. [2] Not what we see at all. It certainly looks as though the markets have been systematically overestimating the future level of the Federal Funds rate for decades now. But hey, who are you going to believe, the efficient markets theory or your lying eyes? Efficient markets plus rational expectations say that long rates must be governed by the future course of short rates, just as stock prices must be governed by future flows of dividends. Both claims must be true in theory, which means they are true, no matter how stubbornly they insist on looking false.
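The construction behind the blue bars can be sketched in a few lines. The numbers below are made up for illustration, not actual data; the real exercise would use something like the 10-year Treasury yield and the realized Fed Funds path from FRED:

```python
# Spread between a long rate at issue and the realized (geometric)
# average of short rates over the following term, as in the figure.
# All rates here are hypothetical illustrations, not actual data.

def geometric_average(annual_rates):
    """Geometric mean of a sequence of annual rates, in percent."""
    growth = 1.0
    for r in annual_rates:
        growth *= 1 + r / 100
    return (growth ** (1 / len(annual_rates)) - 1) * 100

# Hypothetical: a 10-year bond yielding 6% at issue, and the Fed Funds
# rate actually realized over the following ten years.
long_rate_at_issue = 6.0
realized_fed_funds = [5.0, 4.0, 3.0, 3.0, 4.0, 5.0, 5.5, 5.0, 4.5, 4.0]

avg_short = geometric_average(realized_fed_funds)
spread = long_rate_at_issue - avg_short

# Under the expectations hypothesis plus a small stable term premium,
# this spread should sit near zero; the figure suggests it has been
# persistently positive for decades instead.
print(f"realized average short rate: {avg_short:.2f}%")
print(f"spread over realized short rates: {spread:.2f} points")
```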

Of course if you want to believe that the inherent risk premium on long bonds is four points higher today than it was in the 1950s, 60s and 70s (despite the fact that the default rate on Treasuries, now as then, is zero) and that the risk premium just happens to rise whenever the short rate falls, well, there’s nothing I can do to stop you.

But what’s the alternative? Am I really saying that players in the bond market are leaving huge profit opportunities on the table? Well, sometimes, maybe. But there’s a better story, the one I was telling the other day.

DeLong says that if rates are set by rational, profit-maximizing agents, then — setting aside default risk — long rates should be equal to the average of short rates over their term. This is a standard view; everyone learns it. But it’s not strictly correct. What profit-maximizing bond traders actually do is set long rates equal to the expected future value of long rates.

I went through this in that other post, but let’s do it again. Take a long bond — we’ll call it a perpetuity to keep the math simple, but the basic argument applies to any reasonably long bond. Say it has a coupon (annual payment) of $40 per year. If that bond is currently trading at $1000, that implies an interest rate of 4 percent. Meanwhile, suppose the current short rate is 2 percent, and you expect that short rate to be maintained indefinitely. Then the long bond is a good deal — you’ll want to buy it. And as you and people like you buy long bonds, their price will rise. It will keep rising until it reaches $2000, at which point the long interest rate is 2 percent, meaning that the expected return on holding the long bond and rolling over short bonds is identical, so there’s no incentive to trade one for the other. This is the arbitrage that is supposed to keep long rates equal to the expected future value of short rates. If bond traders don’t behave this way, they are missing out on profitable trades, right?
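The arithmetic in this example is easy to check. A minimal sketch of the perpetuity pricing above, using the numbers from the text:

```python
# Perpetuity pricing: price = coupon / rate, and rate = coupon / price.

COUPON = 40.0  # fixed annual payment, in dollars

def implied_rate(price):
    """Yield implied by a given market price (as a decimal)."""
    return COUPON / price

def implied_price(rate):
    """Price consistent with a given yield."""
    return COUPON / rate

# Trading at $1000, the perpetuity yields 4 percent.
assert abs(implied_rate(1000) - 0.04) < 1e-12

# If the short rate is 2 percent and expected to stay there forever,
# the arbitrage described in the text bids the price up to $2000,
# where holding the perpetuity and rolling over short bonds pay the same.
assert abs(implied_price(0.02) - 2000) < 1e-6

# And at 5 percent -- the "normal" rate in the next paragraph --
# the price would be $800.
assert abs(implied_price(0.05) - 800) < 1e-6
print("perpetuity arithmetic checks out")
```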

Not necessarily. Suppose the situation is as described above — 4 percent long rate, 2 percent short rate which you expect to continue indefinitely. So buying a long bond is a no-brainer, right? But suppose you also believe that the normal or usual long rate is 5 percent, and that it is likely to return to that level soon. Maybe you think other market participants have different expectations of short rates, maybe you think other market participants are irrational, maybe you think… something else, which we’ll come back to in a second. For whatever reason, you think that short rates will be 2 percent forever, but that long rates, currently 4 percent, might well rise back to 5 percent. If that happens, the long bond currently trading for $1000 will fall in price to $800. (Remember, the coupon is fixed at $40, and 5% = 40/800.) You definitely don’t want to be holding a long bond when that happens. That would be a capital loss of 20 percent. Of course every year that you hold short bonds rather than buying the long bond at its current price of $1000, you’re missing out on $20 of interest; but if you think there’s even a moderate chance of the long bond falling in value by $200, giving up $20 of interest to avoid that risk might not look like a bad deal.

Of course, even if you think the long bond is likely to fall in value to $800, that doesn’t mean you won’t buy it for anything above that. If the current price is only a bit above $800 (that is, if the current interest rate is only a bit below the “normal” level of 5 percent), you might think the extra interest you get from buying a long bond is enough to compensate you for the modest risk of a capital loss. So in this situation, the equilibrium long rate won’t be at the normal level, but slightly below it. And if the situation continues long enough, people will presumably adjust their views of the “normal” level of the long rate to this equilibrium, allowing the new equilibrium to fall further. In this way, if short rates are kept far enough from long rates for long enough, long rates will eventually follow. We are seeing a bit of this process now. But adjusting expectations in this way is too slow to be practical for countercyclical policy. Starting in 1998, the Fed reduced rates by 4.5 points, and maintained them at this low level for a full six years. Yet this was only enough to reduce Aaa bond rates (which shouldn’t include any substantial default risk premium) by slightly over one point.
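The equilibrium described here, where the extra coupon income just offsets the expected capital loss, can be computed in a toy version. The one-year horizon and the 30 percent reversion probability below are assumptions for illustration, not numbers from the text:

```python
# A trader who expects the long rate to revert to a conventional
# "normal" level will hold the perpetuity only if the extra coupon
# compensates for the expected capital loss.
# Assumptions: one-year horizon, 2% short rate, $40 coupon, and an
# assumed 30% chance the long rate snaps back to 5% (price $800).

COUPON = 40.0
SHORT_RATE = 0.02
NORMAL_PRICE = 800.0   # price if the long rate reverts to 5 percent
q = 0.3                # assumed probability of reversion this year

def expected_return(price):
    """Expected one-year return on the perpetuity bought at `price`."""
    expected_end_price = q * NORMAL_PRICE + (1 - q) * price
    return (COUPON + expected_end_price - price) / price

# At $1000 (a 4% yield) the expected capital loss swamps the coupon:
print(f"expected return at $1000: {expected_return(1000):.3f}")

# Bisect for the indifference price, where the perpetuity's expected
# return just equals the short rate.
lo, hi = NORMAL_PRICE, 2000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if expected_return(mid) > SHORT_RATE:
        lo = mid
    else:
        hi = mid

print(f"indifference price: ${lo:.0f}, yield {COUPON / lo:.3%}")
# The equilibrium sits only modestly above the "normal" price of $800,
# even though short rates are expected to stay at 2 percent forever.
```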

In my previous post, I pointed out that for policy to affect long rates, it must include (or be believed to include) a substantial permanent component, so stabilizing the economy this way will involve a secular drift in interest rates — upward in an economy facing inflation, downward in one facing unemployment. (As Steve Randy Waldman recently noted, Michal Kalecki pointed this out long ago.) That’s important, but I want to make another point here.

If the primary influence on current long rates is the expected future value of long rates, then there is no sense in which long rates are set by fundamentals.  There are a potentially infinite number of self-fulfilling expected levels for long rates. And again, no one needs to behave irrationally for these conventions to sustain themselves. The more firmly anchored is the expected level of long rates, the more rational it is for individual market participants to act so as to maintain that level. That’s the “other thing” I suggested above. If people believe that long rates can’t fall below a certain level, then they have an incentive to trade bonds in a way that will in fact prevent rates from falling much below that level. Which means they are right to believe it. Just like driving on the right or left side of the street, if everyone else is doing it it is rational for you to do it as well, which ensures that everyone will keep doing it, even if it’s not the best response to the “fundamentals” in a particular context.

Needless to say, the idea that the long-term rate of interest is basically a convention comes straight from Keynes. As he puts it in Chapter 15 of The General Theory,

The rate of interest is a highly conventional … phenomenon. For its actual value is largely governed by the prevailing view as to what its value is expected to be. Any level of interest which is accepted with sufficient conviction as likely to be durable will be durable; subject, of course, in a changing society to fluctuations for all kinds of reasons round the expected normal. 

You don’t have to take Keynes as gospel, of course. But if you’ve gotten as much mileage as Krugman has out of the particular extract of Keynes’ ideas embodied in the IS-LM model, wouldn’t it make sense to at least wonder why the man thought this about interest rates, and whether there might not be something to it?

Here’s one more piece of data. This table shows the average spread between various market rates and the Fed Funds rate.

Spreads over Fed Funds by decade

Decade   10-Year Treasuries   Aaa Corporate Bonds   Baa Corporate Bonds   State & Local Bonds
1940s           —                    2.2                   3.3                    —
1950s          1.0                   1.3                   2.0                   0.7
1960s          0.5                   0.8                   1.5                  -0.4
1970s          0.4                   1.1                   2.2                  -1.1
1980s          0.6                   1.4                   2.9                  -0.9
1990s          1.5                   2.6                   3.3                   0.9
2000s          1.5                   3.0                   4.1                   1.8

Treasuries carry no default risk; a given bond rating should imply a fixed level of default risk, with the default risk on Aaa bonds being practically negligible. [3] Yet the 10-year Treasury spread has increased by a full point and the corporate bond spreads by about two points, compared with the postwar era. (Municipal rates have risen by even more, but there may be an element of genuine increased risk there.) Brad DeLong might argue that society’s risk-bearing capacity has declined so catastrophically since the 1960s that even the tiny quantum of risk in Aaa bonds requires two full additional points of interest to compensate its quaking, terrified bearers. And that this has somehow happened without requiring any more compensation for the extra risk in Baa bonds relative to Aaa. I don’t think even DeLong would argue this, but when the honor of efficient markets is at stake, people have been known to do strange things.

Wouldn’t it be simpler to allow that maybe long rates are not, after all, set as “the sum of (a) an average of present and future short-term rates and (b) [relatively stable] term and risk premia,” but that they follow their own independent course, set by conventional beliefs that the central bank can only shift slowly, unreliably and against considerable resistance? That’s what Keynes thought. It’s what Alan Greenspan thinks. [4] And also it’s what seems to be true, so there’s that.

[1] Prof. T. asks what I’m working on. A blogpost, I say. “Let me guess — it says that Paul Krugman is great but he’s wrong about this one thing.” Um, as a matter of fact…

[2] There’s no risk premium on Treasuries, and it is not theoretically obvious why term premia should be positive on average, though in practice they generally are.

[3] Despite all the — highly deserved! — criticism the agencies got for their credulous ratings of mortgage-backed securities, they do seem to be good at assessing corporate default risk. The cumulative ten-year default rate for Baa bonds issued in the 1970s was 3.9 percent. Two decades later, the cumulative ten-year default rate for Baa bonds issued in the 1990s was … 3.9 percent. (From here, Exhibit 42.)

[4] Greenspan thinks that the economically important long rates “had clearly delinked from the fed funds rate in the early part of this decade.” I would only add that this was just the endpoint of a longer trend.

The Pangolin

[This, by Marianne Moore, is one of my favorite poems.]



The Pangolin

Another armored animal–scale
lapping scale with spruce-cone regularity until they
form the uninterrupted central
tail row! This near artichoke with head and legs and
grit-equipped gizzard,
the night miniature artist engineer is,
yes, Leonardo da Vinci’s replica–
impressive animal and toiler of whom we seldom hear.
Armor seems extra. But for him,
the closing ear-ridge–
or bare ear licking even this small
eminence and similarly safe
contracting nose and eye apertures
impenetrably closable, are not;–a true ant-eater,
not cockroach-eater, who endures
exhausting solitary trips through unfamiliar ground at night,
returning before sunrise; stepping in the moonlight,
on the moonlight peculiarly, that the outside
edges of his hands may bear the weight and save the
claws
for digging. Serpentined about
the tree, he draws
away from danger unpugnaciously,
with no sound but a harmless hiss; keeping
the fragile grace of the Thomas-
of-Leighton Buzzard Westminster Abbey wrought-iron
vine, or
rolls himself into a ball that has
power to defy all effort to unroll it; strongly intailed, neat
head for core, on neck not breaking off, with curled-in feet.
Nevertheless he has sting-proof scales; and nest
of rocks closed with earth from inside, which he can
thus darken.
Sun and moon and day and night and man and beast
each with a splendor
which man in all his vileness cannot
set aside; each with an excellence!
“Fearful yet to be feared,” the armored
ant-eater met by the driver-ant does not turn back, but
engulfs what he can, the flattered sword-
edged leafpoints on the tail and artichoke set leg-and
body-plates
quivering violently when it retaliates
and swarms on him. Compact like the furled fringed frill
on the hat-brim of Gargallo’s hollow iron head of a
matador, he will drop and will
then walk away
unhurt, although if unintruded on,
he cautiously works down the tree, helped
by his tail. The giant-pangolin-
tail, graceful tool, as prop or hand or broom or ax, tipped like
an elephant’s trunk with special skin,
is not lost on this ant-and stone-swallowing uninjurable
artichoke which simpletons thought a living fable
whom the stones had nourished, whereas ants had done
so. Pangolins are not aggressive animals; between
dusk and day they have the not unchain-like machine-like
form and frictionless creep of a thing
made graceful by adversities, con-
versities. To explain grace requires
a curious hand. If that which is at all were not forever,
why would those who graced the spires
with animals and gathered there to rest, on cold luxurious
low stone seats–a monk and monk and monk–between the
thus
ingenious roof-supports, have slaved to confuse
grace with a kindly manner, time in which to pay a
debt,
the cure for sins, a graceful use
of what are yet
approved stone mullions branching out across
the perpendiculars? A sailboat
was the first machine. Pangolins, made
for moving quietly also, are models of exactness,
on four legs; on hind feet plantigrade,
with certain postures of a man. Beneath sun and moon,
man slaving
to make his life more sweet, leaves half the flowers worth
having,
needing to choose wisely how to use his strength;
a paper-maker like the wasp; a tractor of foodstuffs,
like the ant; spidering a length
of web from bluffs
above a stream; in fighting, mechanicked
like to pangolin; capsizing in
disheartenment. Bedizened or stark
naked, man, the self, the being we call human, writing-
master to this world, griffons a dark
“Like does not like like that is obnoxious”; and writes error
with four
r’s. Among animals, one has a sense of humor.
Humor saves a few steps, it saves years. Unignorant,
modest and unemotional, and all emotion,
he has everlasting vigor,
power to grow,
though there are few creatures who can make one
breathe faster and make one erecter.
Not afraid of anything is he,
and then goes cowering forth, tread paced to meet an obstacle
at every step. Consistent with the
formula–warm blood, no gills, two pairs of hands and a few
hairs–that
is a mammal; there he sits in his own habitat,
serge-clad, strong-shod. The prey of fear, he, always
curtailed, extinguished, thwarted by the dusk, work
partly done,
says to the alternating blaze,
“Again the sun!
anew each day; and new and new and new,
that comes into and steadies my soul.”

Ten Questions on Health Care Reform

Hello readers!

Knowing what a brilliant and well-informed bunch you all are, I’m hoping you can help with something. Is there somewhere out there a good critical assessment of the specific provisions of the Affordable Care Act?

I don’t mean an explanation of why single payer would be better. It would be better, much better, I know! But we also need to be able to talk to people about the law that passed. What is our best guess about how, concretely, it will affect access to health care, and the distribution of costs?

For just-the-facts, you can’t do better than the Kaiser Family Foundation — their comprehensive summary of the ACA is here. And of course there’s the Congressional Budget Office‘s reports, which include estimates of the impact on insurance status. But there are lots of more specific issues it would be nice to have an informed opinion on.

Here are some questions I’d like to see answers to:

1. What happens to people who still don’t have insurance? According to the CBO, even when fully implemented the ACA will leave over 20 million people — 8 percent of the population; 5 percent excluding undocumented immigrants — without health insurance. (Universal coverage, it ain’t.) What about these people’s access to health care? What happens when they show up at the emergency room? 

2. How will health insurance costs change? The exchange subsidies cover any premium costs above a certain fraction of income, which ranges from 2 percent for households at the poverty line up to 9.5 percent of income for households at 400% of the poverty line. Above that, no subsidies. There are additional subsidies to reduce out of pocket costs, again phasing out at 400% of poverty. It looks like enough to reduce the costs of insurance for everyone eligible for subsidies, but for people above the 400% FPL line, it all depends on what happens to premiums. There is some language in the law about limits on premium increases, but since that is supposed to happen at the state level, one is entitled to doubts. Anyway, I would like to see numbers — this would seem like a good way to make the public case for the law, though since the biggest benefits go to low-income people, maybe not.
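As a rough illustration of the subsidy structure described above, here is a sketch that linearly interpolates the premium cap between 2 percent of income at the poverty line and 9.5 percent at 400% of poverty. The statute actually uses a bracketed schedule rather than a straight line, and the poverty-line figure below is the 2012 value for a single person:

```python
# Simplified ACA exchange-subsidy sketch: households between 100% and
# 400% of the federal poverty line pay at most a capped share of income
# for the benchmark plan; the subsidy covers the rest. The linear
# interpolation is an illustrative simplification of the law's
# bracketed schedule.

FPL_SINGLE = 11_170  # 2012 federal poverty line, single person

def premium_cap_share(income_pct_fpl):
    """Max share of income owed for the benchmark premium, or None."""
    if income_pct_fpl > 400:
        return None  # no subsidy above 400% of poverty
    share = 0.02 + (0.095 - 0.02) * (income_pct_fpl - 100) / 300
    return max(0.02, min(share, 0.095))

def subsidy(income, benchmark_premium):
    """Dollar subsidy for a single person at this income."""
    pct_fpl = 100 * income / FPL_SINGLE
    share = premium_cap_share(pct_fpl)
    if share is None:
        return 0.0
    return max(0.0, benchmark_premium - share * income)

# A worker at twice the poverty line facing a $5,000 benchmark premium:
income = 2 * FPL_SINGLE
print(f"cap: {premium_cap_share(200):.3f} of income")
print(f"subsidy: ${subsidy(income, 5000):,.0f}")
```

The cliff at 400% of poverty is visible directly: one dollar of extra income above the line zeroes out the entire subsidy, which is part of why everything above that threshold "depends on what happens to premiums."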

3. How much will safety-net hospitals be hurt by the cuts in their funding? One of the less-discussed provisions of the law is its deep cuts in support for hospitals serving large numbers of uninsured patients, mainly Disproportionate Share Hospital (DSH) Medicaid and Medicare funding and Graduate Medical Education (GME) Medicare funding (which in practice goes mainly to hospitals with lots of poor patients). Medicaid DSH payments fall by about half under the law and Medicare DSH falls by 75 percent; it appears that cuts to GME will further reduce total Medicare payments by close to 10 percent for big-city hospitals. In theory, this will be compensated by many of these hospitals’ currently uninsured patients becoming insured, but it’s not clear that this will fully make up for the cuts, especially for hospitals that serve large numbers of undocumented immigrants. Which leads to…

4. How will undocumented immigrants be affected? As far as I can tell, documented immigrants will get the same benefits as citizens. Undocumented immigrants will of course get nothing. Since there will be less funding for health care for the uninsured, and since some employers will reduce health benefits, it seems likely that undocumented people will be worse off as a result of the law.

5. How will employer-provided health insurance change as a result of the law? Since low- and moderate-income workers will have access to subsidized insurance through the exchanges, and since the penalties for employers who don’t offer coverage are trivial, it seems likely that many employers will reduce or eliminate health benefits as a result of the ACA. On the other hand, there are subsidies for small employers and a temporary reinsurance program to reduce costs for insuring older workers, which push the other way. Of course even if employer coverage does fall as a result of the ACA (the CBO guesses it will, but just slightly; some people think it will, by a lot; in Massachusetts it hasn’t at all), that’s not necessarily a bad thing.

6. Will better insurance mean better access to care? Massachusetts’ 97 percent coverage is one of the more hopeful signs for the future of the ACA. But living there, I often heard stories about people who got subsidized insurance but couldn’t find doctors who accepted it; this is a long-standing problem for people with Medicaid as well. Plans on exchanges are required to have an “adequate provider network,” but what will this mean in practice? Is there any way of quantifying how big the gap could be between the number of people who gain insurance, and the number who gain reliable access to care?

7. Was the individual mandate really needed? Liberal conventional wisdom is that without the mandate the whole thing falls apart. I’ve never bought the conventional “adverse selection death spiral” argument, both because when states have implemented community rating (the same price for health insurance for everyone) it has not led to the collapse of their individual health insurance markets, contrary to the death-spiral theory; and because the theory hinges on people who don’t buy insurance having lower expected health costs than people who do, which I doubt. Then there’s the other argument, that it’s not about adverse selection, but about people delaying getting insurance until they have health problems. This is more plausible, but I’m skeptical of that one too. Anecdotally, the one time I’ve been to a hospital in recent years (I was hit by a car while biking to work), the first thing the paramedics asked me in the ambulance was whether I had insurance, presumably to decide where to take me; in that situation the right to buy insurance would not have been a good substitute for already having it. And of course many people want insurance to pay for routine care. But maybe I’m wrong about this. I’d love to see a good case for the need for the mandate that doesn’t — as they all seem to — just argue deductively from first principles.


8. How do the limits on medical loss ratios compare with the status quo? I recall people arguing that a big sleeper provision in the ACA was the requirement that medical loss ratios (the share of premiums paid to providers) be at least 85 percent for large-group plans and 80 percent for small-group and individual plans. This is already supposed to be in effect; is it binding?
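Mechanically, the rule described here is a rebate trigger. A simplified sketch, leaving out the adjustments the real regulation allows (quality-improvement spending, credibility adjustments for small insurers):

```python
# Simplified medical-loss-ratio check: insurers must spend at least
# 85% of premiums on care in the large-group market, 80% in the
# small-group and individual markets, and rebate any shortfall.
# The real rule counts quality-improvement spending toward the ratio
# and applies credibility adjustments; this omits both.

def mlr_rebate(premiums, claims, large_group):
    """Dollar rebate owed to policyholders, if any."""
    required = 0.85 if large_group else 0.80
    actual = claims / premiums
    if actual >= required:
        return 0.0
    return (required - actual) * premiums

# An individual-market insurer paying out only 72 cents of each
# premium dollar owes the shortfall back to its policyholders:
print(f"rebate owed: ${mlr_rebate(100_000_000, 72_000_000, large_group=False):,.0f}")
```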


9. Are state level single payer plans (or public options) feasible? Another sleeper provision is Section 1332, which allows states to devise their own plans for using the total subsidies available under ACA to achieve a higher level of coverage. In principle, this opens the way to pass state-level single-payer plans, as in Vermont. Are there non-obvious obstacles to pursuing this elsewhere?

10. What happens to insurance outcomes if states opt out of the Medicaid expansion? Half the ACA’s reduction in the numbers of uninsured comes from the Medicaid expansion, and presumably a large part of that is in states where opt-out is likely.

I’m sure there are a lot of other important issues I’m missing — this isn’t my area and I haven’t been paying careful attention. What I’d like to see is a good critical assessment of what the ACA is actually likely to achieve, for better or worse. Ideally from a left/progressive viewpoint, skeptical but not implacably hostile. Unfortunately debate on our side has become so polarized that that may be hard to find — all the people one would normally turn to seem to be either denouncing or defending the law in its entirety. Still, there’s got to be something out there, right?