The Slack Wire

Fukushima Update: How Safe Can a Nuclear Meltdown Get?

by Will Boisvert

Last summer I posted an essay here arguing that nuclear power is a lot safer than people think—about a hundred times safer than our fossil fuel-dominated power system. At the time I predicted that the impact of the March, 2011 Fukushima Daiichi nuclear plant accident in Japan would be small. A year later, now that we have a better fix on the consequences of the Fukushima meltdowns, I’ll have to revise “small” to “microscopic.” The accumulating data and scientific studies on the Fukushima accident reveal that radiation doses are and will remain low, that health effects will be minor and imperceptible, and that the traumatic evacuation itself from the area around the plant may well have been unwarranted. Far from the apocalypse that opponents of nuclear energy anticipated, the Fukushima spew looks like a fizzle, one that should drastically alter our understanding of the risks of nuclear power.

Anti-nuke commentators like Arnie Gundersen continue to issue forecasts of a million or more long-term casualties from Fukushima radiation. (So far there have been none.) But the emerging scientific consensus is that the long-term health consequences of the radioactivity, particularly cancer fatalities, will be modest to nil. At the high end of specific estimates, for example, Princeton physicist Frank von Hippel, writing in the nuke-dreading Bulletin of the Atomic Scientists, reckons an eventual one thousand fatal cancers arising from the spew.

Now there’s a new peer-reviewed paper by Stanford’s Mark Z. Jacobson and John Ten Hoeve that predicts remarkably few casualties. (Jacobson, you may remember, wrote a noted Scientific American article proposing an all-renewable energy system for the world.) They used a supercomputer to model the spread of radionuclides from the Fukushima reactors around the globe, and then calculated the resulting radiation doses and cancer cases through the year 2061. Their result: a probable 130 fatal cancers, with a range from 15 to 1300, in the whole world over fifty years. (Because radiation exposures will have subsided to insignificant levels by then, these cases comprise virtually all that will ever occur.) They also simulated a hypothetical Fukushima-scale meltdown of the Diablo Canyon nuclear power plant in California, and calculated a likely cancer death toll of 170, with a range from 24 to 1400.

To put these figures in context, pollution from American coal-fired power plants alone kills about 13,000 people every year. The Stanford estimates therefore indicate that the Fukushima spew, the only significant nuclear accident in 25 years, will likely kill fewer people over five decades than America’s coal-fired power plants kill every five days to five weeks. Worldwide, coal plants kill over 200,000 people each year—150 times more deaths than the high-end Fukushima forecasts predict over a half century.
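If you want to check that comparison, the arithmetic is simple enough to run yourself. A quick sketch in Python, using only the figures just quoted:

```python
# Fukushima's projected toll versus coal deaths, using the figures cited above.

us_coal_deaths_per_year = 13_000
fukushima_mid, fukushima_high = 130, 1300      # projected fatal cancers over ~50 years

us_coal_deaths_per_day = us_coal_deaths_per_year / 365
print(fukushima_mid / us_coal_deaths_per_day)    # ~3.7 days of US coal deaths
print(fukushima_high / us_coal_deaths_per_day)   # ~36.5 days, roughly five weeks

world_coal_deaths_per_year = 200_000
print(world_coal_deaths_per_year / fukushima_high)   # ~154, about 150 times the 50-year high end
```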

We’ll probably never know whether these projected Fukushima fatalities come to pass or not. The projections are calculated by multiplying radiation doses by standard risk factors derived from high-dose exposures; these risk factors are generally assumed—but not proven—to hold up at the low doses that nuclear spews emit. Radiation is such a weak carcinogen that scientists just can’t tell for certain whether it causes any harm at all below a dose of 100 millisieverts (100 mSv). Even if it does, it’s virtually impossible to discern such tiny changes in cancer rates in epidemiological studies. Anti-nukes give that fact a paranoid spin by warning of “hidden cancer deaths.” But if you ask me, risks that are too small to measure are too small to worry about.

The Stanford study relied on a computer simulation, but empirical studies of radiation doses support the picture of negligible effects from the Fukushima spew.

In a direct measurement of radiation exposure, officials in Fukushima City, about 40 miles from the nuclear plant, made 37,000 schoolchildren wear dosimeters around the clock during September, October and December, 2011, to see how much radiation they soaked up. Over those three months, 99 percent of the participants absorbed less than 1 mSv, with an average external dose of 0.26 mSv. Scaling that up to a full year and doubling it to account for internal exposure from ingested radionuclides gives an annual dose of about 2.08 mSv. That’s a pretty small dose, about one third the natural radiation dose in Denver, with its high altitude and abundant radon gas, and many times too small to cause any measurable uptick in cancer rates. At the time, the outdoor air-dose rate in Fukushima was about 1 microsievert per hour (or about 8.8 mSv per year), so the absorbed external dose was only about one eighth of the ambient dose. That’s because the radiation is mainly gamma rays emanating from radioactive cesium in the soil, which are absorbed by air and blocked by walls and roofs. Since people spend most of their time indoors at a distance from soil—often on upper floors of houses and apartment buildings—they are shielded from most of the outdoor radiation.
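The dose arithmetic is easy to reproduce. A minimal sketch, using only the figures quoted above:

```python
# Annualizing the Fukushima City dosimeter results described above.

external_3_months = 0.26                   # average external dose over three months, mSv
annual_external = external_3_months * 4    # ~1.0 mSv per year
annual_total = annual_external * 2         # double to allow for internal exposure
print(annual_total)                        # 2.08 mSv per year

ambient_uSv_per_hour = 1.0
ambient_annual = ambient_uSv_per_hour * 24 * 365 / 1000   # ~8.8 mSv per year
print(annual_external / ambient_annual)                    # ~0.12, about one eighth
```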

Efforts to abate these low-level exposures will be massive—and probably redundant. The Japanese government has budgeted $14 billion for cleanup over thirty years and has set an immediate target of reducing radiation levels by 50 percent over two years. But most of that abatement will come from natural processes—radioactive decay and weathering that washes radio-cesium deep into the soil or into underwater sediments, where it stops irradiating people—that will reduce radiation exposures on their own by 40 percent over two years. (Contrary to the centuries-of-devastation trope, cesium radioactivity clears from the land fairly quickly.) The extra 10 percent reduction the cleanup may achieve over two years could be accomplished by simply doing nothing for three years. Over 30 years the radioactivity will naturally decline by at least 90 percent, so much of the cleanup will be overkill, more a political gesture than a substantial remediation. Little public-health benefit will flow from all that, because there was little radiation risk to begin with.

How little? Well, an extraordinary wrinkle of the Stanford study is that it calculated the figure of 130 fatal cancers by assuming that there had been no evacuation from the 20-kilometer zone around the nuclear plant. You may remember the widely televised scenes from that evacuation, featuring huddled refugees and young children getting wanded down with radiation detectors by doctors in haz-mat suits. Those images of terror and contagion reinforced the belief that the 20-km zone is a radioactive killing field that will be uninhabitable for eons. The Stanford researchers endorse that notion, writing in their introduction that “the radiation release poisoned local water and food supplies and created a dead-zone of several hundred square kilometers around the site that may not be safe to inhabit for decades to centuries.”

But later in their paper Jacobson and Ten Hoeve actually quantify the deadliness of the “dead-zone”—and it turns out to be a reasonably healthy place. They calculate that the evacuation from the 20-km zone probably prevented all of 28 cancer deaths, with a lower bound of 3 and an upper bound of 245. Let me spell out what that means: if the roughly 100,000 people who lived in the 20-km evacuation zone had not evacuated, and had just kept on living there for 50 years on the most contaminated land in Fukushima prefecture, then probably 28 of them—and at most 245—would have incurred a fatal cancer because of the fallout from the stricken reactors. At the very high end, that’s a fatality risk of 0.245 %, which is pretty small—about half as big as an American’s chances of dying in a car crash. Jacobson and Ten Hoeve compare those numbers to the 600 old and sick people who really did die during the evacuation from the trauma of forced relocation. “Interestingly,” they write, “the upper bound projection of lives saved from the evacuation is lower than the number of deaths already caused by the evacuation itself.”

That observation sure is interesting, and it raises an obvious question: does it make sense to evacuate during a nuclear meltdown?

In my opinion—not theirs—it doesn’t. I don’t take the Stanford study as gospel; its estimate of risks in the EZ strikes me as a bit too low. Taking its numbers into account along with new data on cesium clearance rates and the discrepancy between ambient external radiation and absorbed doses, I think a reasonable guesstimate of ultimate cancer fatalities in the EZ, had it never been evacuated, would be several hundred up to a thousand. (Again, probably too few to observe in epidemiological studies.) The crux of the issue is whether immediate radiation exposures from inhalation outweigh long-term exposures emanating from radioactive soil. Do you get more cancer risk from breathing in the radioactive cloud in the first month of the spew, or from the decades of radio-cesium “groundshine” after the cloud disperses? Jacobson and Ten Hoeve’s model assigns most of the risk to the cloud, while other calculations, including mine, give more weight to groundshine.

But from the standpoint of evacuation policy, the distinction may be moot. If the Stanford model is right, then evacuations are clearly wrong—the radiation risks are trivial and the disruptions of the evacuation too onerous. But if, on the other hand, cancer risks are dominated by cesium groundshine, then precipitate forced evacuations are still wrong, because those exposures only build up slowly. The immediate danger in a spew is thyroid cancer risk to kids exposed to iodine-131, but that can be counteracted with potassium iodide pills or just by barring children from drinking milk from cows feeding on contaminated grass for the three months it takes the radio-iodine to decay away. If that’s taken care of, then people can stay put for a while without accumulating dangerous exposures from radio-cesium.

Data from empirical studies of heavily contaminated areas support the idea that rapid evacuations are unnecessary. The Japanese government used questionnaires correlated with air-dose readings to estimate the radiation doses received in the four months immediately after the March meltdown in the townships of Namie, Iitate and Kawamata, a region just to the northwest of the 20-kilometer exclusion zone. This area was in the path of an intense fallout plume and incurred contamination comparable to levels inside the EZ; it was itself evacuated starting in late May. The people there were the most irradiated in all Japan, yet even so the radiation doses they received over those four months, at the height of the spew, were modest. Out of 9747 people surveyed, 5636 got doses of less than 1 millisievert, 4040 got doses between 1 and 10 mSv and 71 got doses between 10 and 23 mSv. Assuming everyone was at the high end of their dose category and a standard risk factor of 570 cancer fatalities per 100,000 people exposed to 100 mSv, we would expect to see a grand total of three cancer deaths among those 10,000 people over a lifetime from that four-month exposure. (As always, these calculated casualties are purely conjectural—far too few to ever “see” in epidemiological statistics.)
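Here is that back-of-envelope spelled out, using the survey’s dose bands and the risk factor just quoted, and assuming everyone sits at the top of their band:

```python
# High-end reconstruction of the casualty arithmetic above: 570 fatal cancers
# per 100,000 people per 100 mSv, applied to the surveyed dose bands.

dose_bands = [(5636, 1.0), (4040, 10.0), (71, 23.0)]   # (people, top of dose band in mSv)
risk_per_person_per_mSv = 570 / 100_000 / 100          # 5.7e-5

collective_dose = sum(people * dose for people, dose in dose_bands)    # person-mSv
expected_fatal_cancers = collective_dose * risk_per_person_per_mSv
print(round(collective_dose), round(expected_fatal_cancers, 1))        # 47669, ~2.7 (call it three)
```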

Those numbers indicate that cancer risks in the immediate aftermath of a spew are tiny, even in very heavily contaminated areas. (Provided, always, that kids are kept from drinking iodine-contaminated milk.) Hasty evacuations are therefore needless. There’s time to make a considered decision about whether to relocate—not hours and days, but months and years.

And that choice should be left to residents. It makes no sense to roust retirees from their homes because of radiation levels that will raise their cancer risk by at most a few percent over decades. People can decide for themselves—to flee or not to flee—based on fallout in their vicinity and any other factors they think important. Relocation assistance should be predicated on an understanding that most places, even close to a stricken plant, will remain habitable and fit for most purposes. The vast “costs” of cleanup and compensation that have been attributed to the Fukushima accident are mostly an illusion or the product of overreaction, not the result of any objective harm caused by radioactivity.

Ultimately, the key to rational policy is to understand the kind of risk that nuclear accidents pose. We have a folk-conception of radiation as a kind of slow-acting nerve gas—the merest whiff will definitely kill you, if only after many years. That risk profile justifies panicked flight and endless quarantine after a radioactivity release, but it’s largely a myth. In reality, nuclear meltdowns present a one-in-a-hundred chance of injury. On the spectrum of threat they occupy a fairly innocuous position: somewhere above lightning strikes, in the same ballpark as driving a car or moving to a smoggy city, considerably lower than eating junk food. And that’s only for people residing in the maximally contaminated epicenter of a once-a-generation spew. For everyone else, including almost everyone in Fukushima prefecture itself, the risks are negligible, if they exist at all.

Unfortunately, the Fukushima accident has heightened public misunderstanding of nuclear risks, thanks to long-ingrained cultural associations of fission with nuclear war, the Japanese government’s hysterical evacuation orders and haz-mat mobilizations, and the alarmism of anti-nuke ideologues. The result is anti-nuclear backlash and the shutdown of Japanese and German nukes, which is by far the most harmful consequence of the spew. These fifty-odd reactors could be brought back on line immediately to displace an equal gigawattage of coal-fired electricity, and would prevent the emission of hundreds of millions of tons of carbon dioxide each year, as well as thousands of deaths from air pollution. But instead of calling for the restart of these nuclear plants, Greens have stoked huge crowds in Japan and elsewhere into marching against them. If this movement prevails, the environmental and health effects will be worse than those of any pipeline, fracking project or tar-sands development yet proposed.

But there may be a silver lining if the growing scientific consensus on the effects of the Fukushima spew triggers a paradigm shift. Nuclear accidents, far from being the world-imperiling crises of popular lore, are in fact low-stakes, low-impact events with consequences that are usually too small to matter or even detect. There’s been much talk over the past year about the need to digest “the lessons of Fukushima.” Here’s the most important and incontrovertible one: even when it melts down and blows up, nuclear power is safe.

Does the Fed Control Interest Rates?

Casey Mulligan goes to the New York Times to say that monetary policy doesn’t work. This annoys Brad DeLong:

THE NEW YORK TIMES PUBLISHES CASEY MULLIGAN AS A JOKE, DOESN’T IT? 

… The third joke is the entire third paragraph: since the long government bond rate is made up of the sum of (a) an average of present and future short-term rates and (b) term and risk premia, if Federal Reserve policy affects short rates then–unless you want to throw every single vestige of efficient markets overboard and argue that there are huge profit opportunities left on the table by financiers in the bond market–Federal Reserve policy affects long rates as well. 

Casey B. Mulligan: Who Cares About Fed Funds?: New research confirms that the Federal Reserve’s monetary policy has little effect on a number of financial markets, let alone the wider economy…. Eugene Fama of the University of Chicago recently studied the relationship between the markets for overnight loans and the markets for long-term bonds…. Professor Fama found the yields on long-term government bonds to be largely immune from Fed policy changes…

Krugman piles on [1]; the only problem with DeLong’s post, he says, is that

it fails to convey the sheer numbskull quality of Mulligan’s argument. Mulligan tries to refute people like, well, me, who say that the zero lower bound makes the case for fiscal policy. … Mulligan’s answer is that this is foolish, because monetary policy is never effective. Huh? 

… we have overwhelming empirical evidence that monetary policy does in fact “work”; but Mulligan apparently doesn’t know anything about that.

Overwhelming evidence? Citation needed, as the Wikipedians say.

Anyway, I don’t want to defend Mulligan — I haven’t even read the column in question — but on this point, he’s got a point. Not only that: He’s got the more authentic Keynesian position.

Textbook macro models, including the IS-LM that Krugman is so fond of, feature a single interest rate, set by the Federal Reserve. The actual existence of many different interest rates in real economies is hand-waved away with “risk premia” — market rates are just equal to “the” interest rate plus a premium for the expected probability of default of that particular borrower. Since the risk premia depend on real factors, they should be reasonably stable, or at least independent of monetary policy. So when the Fed Funds rate goes up or down, the whole rate structure should go up and down with it. In which case, speaking of “the” interest rate as set by the central bank is a reasonable shorthand.

How’s that hold up in practice? Let’s see:

The figure above shows the Federal Funds rate and various market rates over the past 25 years. Notice how every time the Fed changes its policy rate (the heavy black line) the market rates move right along with it?

Yeah, not so much.

In the two years after June 2007, the Fed lowered its rate by a full five points. In this same period, the rate on Aaa bonds fell by less than 0.2 points, and rates for Baa and state and local bonds actually rose. On a naive look at the data, the “overwhelming” evidence for the effectiveness of monetary policy is not immediately obvious.

Ah but it’s not current short rates that long rates are supposed to follow, but expected short rates. This is what our orthodox New Keynesians would say. My first response is, So what? Bringing expectations in might solve the theoretical problem but it doesn’t help with the practical one. “Monetary policy doesn’t work because it doesn’t change expectations” is just a particular case of “monetary policy doesn’t work.”

But it’s not at all obvious that long rates follow expected short rates either. Here’s another figure. This one shows the spreads between the 10-Year Treasury and the Baa corporate bond rates, respectively, and the (geometric) average Fed Funds rate over the following 10 years.
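For concreteness, here is roughly how a spread series like that can be built. This is a sketch with invented numbers rather than the actual data behind the figure; the only substantive piece is the forward-looking geometric average.

```python
import math

def forward_geometric_average(short_rates, months=120):
    """Geometric average of an annualized short rate (in percent) over the following `months`."""
    out = []
    for i in range(len(short_rates) - months + 1):
        window = short_rates[i:i + months]
        log_sum = sum(math.log(1 + r / 100) for r in window)
        out.append((math.exp(log_sum / months) - 1) * 100)
    return out

# invented monthly series: the long rate sits well above the short rate throughout
fed_funds = [6.0 - 0.02 * t for t in range(240)]
ten_year = [7.0 - 0.01 * t for t in range(240)]

forward_avg = forward_geometric_average(fed_funds)
spreads = [ten_year[i] - forward_avg[i] for i in range(len(forward_avg))]
print(spreads[:3])   # positive: the long rate exceeds the realized path of short rates
```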

If DeLong were right that “the long government bond rate is made up of the sum of (a) an average of present and future short-term rates and (b) term and risk premia” then the blue bars should be roughly constant at zero, or slightly above it. [2] Not what we see at all. It certainly looks as though the markets have been systematically overestimating the future level of the Federal Funds rate for decades now. But hey, who are you going to believe, the efficient markets theory or your lying eyes? Efficient markets plus rational expectations say that long rates must be governed by the future course of short rates, just as stock prices must be governed by future flows of dividends. Both claims must be true in theory, which means they are true, no matter how stubbornly they insist on looking false.

Of course if you want to believe that the inherent risk premium on long bonds is four points higher today than it was in the 1950s, 60s and 70s (despite the fact that the default rate on Treasuries, now as then, is zero) and that the risk premium just happens to rise whenever the short rate falls, well, there’s nothing I can do to stop you.

But what’s the alternative? Am I really saying that players in the bond market are leaving huge profit opportunities on the table? Well, sometimes, maybe. But there’s a better story, the one I was telling the other day.

DeLong says that if rates are set by rational, profit-maximizing agents, then — setting aside default risk — long rates should be equal to the average of short rates over their term. This is the standard view; everyone learns it. But it’s not strictly correct. What profit-maximizing bond traders actually do is set long rates equal to the expected future value of long rates.

I went through this in that other post, but let’s do it again. Take a long bond — we’ll call it a perpetuity to keep the math simple, but the basic argument applies to any reasonably long bond. Say it has a coupon (annual payment) of $40 per year. If that bond is currently trading at $1000, that implies an interest rate of 4 percent. Meanwhile, suppose the current short rate is 2 percent, and you expect that short rate to be maintained indefinitely. Then the long bond is a good deal — you’ll want to buy it. And as you and people like you buy long bonds, their price will rise. It will keep rising until it reaches $2000, at which point the long interest rate is 2 percent, meaning that the expected return on holding the long bond and rolling over short bonds is identical, so there’s no incentive to trade one for the other. This is the arbitrage that is supposed to keep long rates equal to the expected future value of short rates. If bond traders don’t behave this way, they are missing out on profitable trades, right?
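To make the arbitrage concrete, here is the perpetuity arithmetic as a few lines of Python, just a toy restatement of the numbers above:

```python
# A perpetuity's price is coupon / interest rate, so if short rates are expected
# to stay at 2 percent forever, buyers should bid the $1000 bond up toward $2000.

def perpetuity_price(coupon, rate):
    return coupon / rate

coupon = 40.0
print(perpetuity_price(coupon, 0.04))   # 1000.0, the 4 percent long rate
print(perpetuity_price(coupon, 0.02))   # 2000.0, the price at which long rate = short rate
```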

Not necessarily. Suppose the situation is as described above — 4 percent long rate, 2 percent short rate which you expect to continue indefinitely. So buying a long bond is a no-brainer, right? But suppose you also believe that the normal or usual long rate is 5 percent, and that it is likely to return to that level soon. Maybe you think other market participants have different expectations of short rates, maybe you think other market participants are irrational, maybe you think… something else, which we’ll come back to in a second. For whatever reason, you think that short rates will be 2 percent forever, but that long rates, currently 4 percent, might well rise back to 5 percent. If that happens, the long bond currently trading for $1000 will fall in price to $800. (Remember, the coupon is fixed at $40, and 5% = 40/800.) You definitely don’t want to be holding a long bond when that happens. That would be a capital loss of 20 percent. Of course every year that you hold short bonds rather than buying the long bond at its current price of $1000, you’re missing out on $20 of interest; but if you think there’s even a moderate chance of the long bond falling in value by $200, giving up $20 of interest to avoid that risk might not look like a bad deal.
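And here is that trade-off in numbers. It’s a sketch in which the probability of the long rate snapping back to 5 percent is a free parameter, since that expectation is exactly what’s at issue:

```python
# One-year expected return on the $1000 perpetuity (coupon $40) when there is
# some chance the long rate reverts to 5 percent, pushing the price to $800,
# versus simply rolling over short bonds at 2 percent.

price_now, price_if_normal, coupon, short_rate = 1000.0, 800.0, 40.0, 0.02

def expected_return(p_revert):
    expected_capital_change = p_revert * (price_if_normal - price_now)
    return (coupon + expected_capital_change) / price_now

for p in (0.05, 0.10, 0.20):
    print(p, round(expected_return(p), 3), "vs short rate", short_rate)
# Around a 10 percent chance of reversion, the extra coupon no longer covers the
# expected capital loss, and holding short bonds looks like the better deal.
```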

Of course, even if you think the long bond is likely to fall in value to $800, that doesn’t mean you won’t buy it for anything above that. If the current price is only a bit above $800 (the current interest rate is only a bit below the “normal” level of 5 percent) you might think the extra interest you get from buying a long bond is enough to compensate you for the modest risk of a capital loss. So in this situation, the equilibrium long rate won’t be at the normal level, but slightly below it. And if the situation continues long enough, people will presumably adjust their views of the “normal” level of the long rate to this equilibrium, allowing the new equilibrium rate to fall further. In this way, if short rates are kept far enough from long rates for long enough, long rates will eventually follow. We are seeing a bit of this process now. But adjusting expectations in this way is too slow to be practical for countercyclical policy. Starting in 1998, the Fed reduced rates by 4.5 points, and maintained them at this low level for a full six years. Yet this was only enough to reduce Aaa bond rates (which shouldn’t include any substantial default risk premium) by slightly over one point.

In my previous post, I pointed out that for policy to affect long rates, it must include (or be believed to include) a substantial permanent component, so stabilizing the economy this way will involve a secular drift in interest rates — upward in an economy facing inflation, downward in one facing unemployment. (As Steve Randy Waldman recently noted, Michal Kalecki pointed this out long ago.) That’s important, but I want to make another point here.

If the primary influence on current long rates is the expected future value of long rates, then there is no sense in which long rates are set by fundamentals.  There are a potentially infinite number of self-fulfilling expected levels for long rates. And again, no one needs to behave irrationally for these conventions to sustain themselves. The more firmly anchored is the expected level of long rates, the more rational it is for individual market participants to act so as to maintain that level. That’s the “other thing” I suggested above. If people believe that long rates can’t fall below a certain level, then they have an incentive to trade bonds in a way that will in fact prevent rates from falling much below that level. Which means they are right to believe it. Just like driving on the right or left side of the street, if everyone else is doing it it is rational for you to do it as well, which ensures that everyone will keep doing it, even if it’s not the best response to the “fundamentals” in a particular context.

Needless to say, the idea that the long-term rate of interest is basically a convention comes straight from Keynes. As he puts it in Chapter 15 of The General Theory,

The rate of interest is a highly conventional … phenomenon. For its actual value is largely governed by the prevailing view as to what its value is expected to be. Any level of interest which is accepted with sufficient conviction as likely to be durable will be durable; subject, of course, in a changing society to fluctuations for all kinds of reasons round the expected normal. 

You don’t have to take Keynes as gospel, of course. But if you’ve gotten as much mileage as Krugman has out of the particular extract of Keynes’ ideas embodied in the IS-LM model, wouldn’t it make sense to at least wonder why the man thought this about interest rates, and whether there might not be something to it?

Here’s one more piece of data. This table shows the average spread between various market rates and the Fed Funds rate.

Spreads over Fed Funds by decade (percentage points)

Decade    10-Year Treasuries    Aaa Corporate Bonds    Baa Corporate Bonds    State & Local Bonds
1940s             --                    2.2                    3.3                     --
1950s            1.0                    1.3                    2.0                    0.7
1960s            0.5                    0.8                    1.5                   -0.4
1970s            0.4                    1.1                    2.2                   -1.1
1980s            0.6                    1.4                    2.9                   -0.9
1990s            1.5                    2.6                    3.3                    0.9
2000s            1.5                    3.0                    4.1                    1.8

Treasuries carry no default risk; a given bond rating should imply a fixed level of default risk, with the default risk on Aaa bonds being practically negligible. [3] Yet the 10-year treasury spread has increased by a full point and the corporate bond spreads by about two points, compared with the postwar era. (Municipal rates have risen by even more, but there may be an element of genuine increased risk there.) Brad DeLong might argue that society’s risk-bearing capacity has declined so catastrophically since the 1960s that even the tiny quantum of risk in Aaa bonds requires two full additional points of interest to compensate its quaking, terrified bearers. And that this has somehow happened without requiring any more compensation for the extra risk in Baa bonds relative to Aaa. I don’t think even DeLong would argue this, but when the honor of efficient markets is at stake, people have been known to do strange things.

Wouldn’t it be simpler to allow that maybe long rates are not, after all, set as “the sum of (a) an average of present and future short-term rates and (b) [relatively stable] term and risk premia,” but that they follow their own independent course, set by conventional beliefs that the central bank can only shift slowly, unreliably and against considerable resistance? That’s what Keynes thought. It’s what Alan Greenspan thinks. [4] And also it’s what seems to be true, so there’s that.

[1] Prof. T. asks what I’m working on. A blogpost, I say. “Let me guess — it says that Paul Krugman is great but he’s wrong about this one thing.” Um, as a matter of fact…

[2] There’s no risk premium on Treasuries, and it is not theoretically obvious why term premia should be positive on average, though in practice they generally are.

[3] Despite all the — highly deserved! — criticism the agencies got for their credulous ratings of mortgage-backed securities, they do seem to be good at assessing corporate default risk. The cumulative ten-year default rate for Baa bonds issued in the 1970s was 3.9 percent. Two decades later, the cumulative ten-year default rate for Baa bonds issued in the 1990s was … 3.9 percent. (From here, Exhibit 42.)

[4] Greenspan thinks that the economically important long rates “had clearly delinked from the fed funds rate in the early part of this decade.” I would only add that this was just the endpoint of a longer trend.

The Pangolin

[This, by Marianne Moore, is one of my favorite poems.]



The Pangolin

Another armored animal–scale
lapping scale with spruce-cone regularity until they
form the uninterrupted central
tail row! This near artichoke with head and legs and
grit-equipped gizzard,
the night miniature artist engineer is,
yes, Leonardo da Vinci’s replica–
impressive animal and toiler of whom we seldom hear.
Armor seems extra. But for him,
the closing ear-ridge–
or bare ear licking even this small
eminence and similarly safe
contracting nose and eye apertures
impenetrably closable, are not;–a true ant-eater,
not cockroach-eater, who endures
exhausting solitary trips through unfamiliar ground at night,
returning before sunrise; stepping in the moonlight,
on the moonlight peculiarly, that the outside
edges of his hands may bear the weight and save the
claws
for digging. Serpentined about
the tree, he draws
away from danger unpugnaciously,
with no sound but a harmless hiss; keeping
the fragile grace of the Thomas-
of-Leighton Buzzard Westminster Abbey wrought-iron
vine, or
rolls himself into a ball that has
power to defy all effort to unroll it; strongly intailed, neat
head for core, on neck not breaking off, with curled-in feet.
Nevertheless he has sting-proof scales; and nest
of rocks closed with earth from inside, which he can
thus darken.
Sun and moon and day and night and man and beast
each with a splendor
which man in all his vileness cannot
set aside; each with an excellence!
“Fearful yet to be feared,” the armored
ant-eater met by the driver-ant does not turn back, but
engulfs what he can, the flattened sword-
edged leafpoints on the tail and artichoke set leg- and
body-plates
quivering violently when it retaliates
and swarms on him. Compact like the furled fringed frill
on the hat-brim of Gargallo’s hollow iron head of a
matador, he will drop and will
then walk away
unhurt, although if unintruded on,
he cautiously works down the tree, helped
by his tail. The giant-pangolin-
tail, graceful tool, as prop or hand or broom or ax, tipped like
an elephant’s trunk with special skin,
is not lost on this ant- and stone-swallowing uninjurable
artichoke which simpletons thought a living fable
whom the stones had nourished, whereas ants had done
so. Pangolins are not aggressive animals; between
dusk and day they have the not unchain-like machine-like
form and frictionless creep of a thing
made graceful by adversities, con-
versities. To explain grace requires
a curious hand. If that which is at all were not forever,
why would those who graced the spires
with animals and gathered there to rest, on cold luxurious
low stone seats–a monk and monk and monk–between the
thus
ingenious roof-supports, have slaved to confuse
grace with a kindly manner, time in which to pay a
debt,
the cure for sins, a graceful use
of what are yet
approved stone mullions branching out across
the perpendiculars? A sailboat
was the first machine. Pangolins, made
for moving quietly also, are models of exactness,
on four legs; on hind feet plantigrade,
with certain postures of a man. Beneath sun and moon,
man slaving
to make his life more sweet, leaves half the flowers worth
having,
needing to choose wisely how to use his strength;
a paper-maker like the wasp; a tractor of foodstuffs,
like the ant; spidering a length
of web from bluffs
above a stream; in fighting, mechanicked
like to pangolin; capsizing in
disheartenment. Bedizened or stark
naked, man, the self, the being we call human, writing-
master to this world, griffons a dark
“Like does not like like that is obnoxious”; and writes error
with four
r’s. Among animals, one has a sense of humor.
Humor saves a few steps, it saves years. Unignorant,
modest and unemotional, and all emotion,
he has everlasting vigor,
power to grow,
though there are few creatures who can make one
breathe faster and make one erecter.
Not afraid of anything is he,
and then goes cowering forth, tread paced to meet an obstacle
at every step. Consistent with the
formula–warm blood, no gills, two pairs of hands and a few
hairs–that
is a mammal; there he sits in his own habitat,
serge-clad, strong-shod. The prey of fear, he, always
curtailed, extinguished, thwarted by the dusk, work
partly done,
says to the alternating blaze,
“Again the sun!
anew each day; and new and new and new,
that comes into and steadies my soul.”

Ten Questions on Health Care Reform

Hello readers!

Knowing what a brilliant and well-informed bunch you all are, I’m hoping you can help with something. Is there somewhere out there a good critical assessment of the specific provisions of the Affordable Care Act?

I don’t mean an explanation of why single payer would be better. It would be better, much better, I know! But we also need to be able to talk to people about the law that passed. What is our best guess about how, concretely, it will affect access to health care, and the distribution of costs?

For just-the-facts, you can’t do better than the Kaiser Family Foundation — their comprehensive summary of the ACA is here. And of course there are the Congressional Budget Office’s reports, which include estimates of the impact on insurance status. But there are lots of more specific issues it would be nice to have an informed opinion on.

Here are some questions I’d like to see answers to:

1. What happens to people who still don’t have insurance? According to the CBO, even when fully implemented the ACA will leave over 20 million people — 8 percent of the population; 5 percent excluding undocumented immigrants — without health insurance. (Universal coverage, it ain’t.) What about these people’s access to health care? What happens when they show up at the emergency room? 

2. How will health insurance costs change? The exchange subsidies cover any premium costs above a certain fraction of income, which ranges from 2 percent for households at the poverty line up to 9.5 percent of income for households at 400% of the poverty line. Above that, no subsidies. There are additional subsidies to reduce out-of-pocket costs, again phasing out at 400% of poverty. It looks like enough to reduce the costs of insurance for everyone eligible for subsidies, but for people above the 400% FPL line, it all depends on what happens to premiums. There is some language in the law about limits on premium increases, but since that is supposed to happen at the state level, one is entitled to doubts. Anyway, I would like to see numbers — this would seem like a good way to make the public case for the law, though since the biggest benefits go to low-income people, maybe not. (I sketch the sliding scale in the snippet at the end of this list.)

3. How much will safety-net hospitals be hurt by the cuts in their funding? One of the less-discussed provisions of the law is its deep cuts in support for hospitals serving large numbers of uninsured patients, mainly Disproportionate Share Hospital (DSH) Medicaid and Medicare funding and Graduate Medical Education (GME) Medicare funding (which in practice goes mainly to hospitals with lots of poor patients). Medicaid DSH payments fall by about half under the law and Medicare DSH falls by 75 percent; it appears that cuts to GME will further reduce total Medicare payments by close to 10 percent for big-city hospitals. In theory, this will be compensated by many of these hospitals’ currently uninsured patients becoming insured, but it’s not clear that this will fully make up for the cuts, especially for hospitals that serve large numbers of undocumented immigrants. Which leads to…
4. How will undocumented immigrants be affected? As far as I can tell, documented immigrants will get the same benefits as citizens. Undocumented immigrants will of course get nothing. Since there will be less funding for health care for the uninsured, and since some employers will reduce health benefits, it seems likely that undocumented people will be worse off as a result of the law. 
5. How will employer-provided health insurance change as a result of the law? Since low- and moderate-income workers will have access to subsidized insurance through the exchanges, and since the penalties for employers who don’t offer coverage are trivial, it seems likely that many employers will reduce or eliminate health benefits as a result of the ACA. On the other hand, there are subsidies for small employers and a temporary reinsurance program to reduce costs for insuring older workers, which push the other way. Of course even if employer coverage does fall as a result of the ACA (the CBO guesses it will, but just slightly; some people think it will, by a lot; in Massachusetts it hasn’t at all), that’s not necessarily a bad thing.

6. Will better insurance mean better access to care? Massachusetts’ 97 percent coverage is one of the more hopeful signs for the future of the ACA. But living there, I often heard stories about people who got subsidized insurance but couldn’t find doctors who accepted it; this is a long-standing problem for people with Medicaid as well. Plans on exchanges are required to have an “adequate provider network,” but what will this mean in practice? Is there any way of quantifying how big the gap could be between the number of people who gain insurance, and the number who gain reliable access to care?

7. Was the individual mandate really needed? Liberal conventional wisdom is that without the mandate the whole thing falls apart. I’ve never bought the conventional “adverse selection death spiral” argument, both because when states have implemented community rating (the same price for health insurance for everyone) it has not led to the collapse of their individual health insurance markets, contrary to the death-spiral theory; and because the theory hinges on people who don’t buy insurance having lower expected health costs than people who do, which I doubt. Then there’s the other argument, that it’s not about adverse selection, but about people delaying getting insurance until they have health problems. This is more plausible but I’m skeptical of that one too. Anecdotally, the one time I’ve been to a hospital in recent years (I was hit by a car while biking to work), the first thing the paramedics asked me in the ambulance was whether I had insurance, presumably to decide where to take me; in that situation the right to buy insurance would not have been a good substitute for already having it. And of course many people want insurance to pay for routine care. But maybe I’m wrong about this. I’d love to see a good case for the need for the mandate that doesn’t — as they all seem to — just argue deductively from first principles.


8. How do the limits on medical loss ratios compare with the status quo? I recall people arguing that a big sleeper provision in the ACA was the requirement that medical loss ratios (the share of premiums paid to providers) be at least 85 percent for large-group plans and 80 percent for small-group and individual plans. This is already supposed to be in effect; is it binding?


9. Are state level single payer plans (or public options) feasible? Another sleeper provision is Section 1332, which allows states to devise their own plans for using the total subsidies available under ACA to achieve a higher level of coverage. In principle, this opens the way to pass state-level single-payer plans, as in Vermont. Are there non-obvious obstacles to pursuing this elsewhere?

10. What happens to insurance outcomes if states opt out of the Medicaid expansion? Half the ACA’s reduction in the numbers of uninsured comes from the Medicaid expansion, and presumably a large part of that is in states where opt-out is likely.
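Returning to question 2, here is a rough sketch of the sliding-scale subsidy described there. The linear interpolation between the 2 percent and 9.5 percent endpoints is an illustrative simplification, not the actual statutory brackets, and the dollar figures are invented.

```python
# Illustrative-only sketch of the exchange subsidies: households pay a capped
# share of income toward the benchmark premium, and the subsidy covers the rest.

def contribution_rate(income_pct_of_poverty):
    """Share of income a household is expected to pay; None above 400% of poverty."""
    if income_pct_of_poverty > 400:
        return None
    clamped = min(max(income_pct_of_poverty, 100), 400)
    return 0.02 + (clamped - 100) / 300 * (0.095 - 0.02)

def subsidy(income, income_pct_of_poverty, benchmark_premium):
    rate = contribution_rate(income_pct_of_poverty)
    if rate is None:
        return 0.0
    return max(0.0, benchmark_premium - rate * income)

print(subsidy(30_000, 130, 6_000))   # ~5175: the subsidy covers most of the premium
print(subsidy(90_000, 410, 6_000))   # 0.0: above 400% of poverty, no help
```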

I’m sure there are a lot of other important issues I’m missing — this isn’t my area and I haven’t been paying careful attention. What I’d like to see is a good critical assessment of what the ACA is actually likely to achieve, for better or worse. Ideally from a left/progressive viewpoint, skeptical but not implacably hostile. Unfortunately debate on our side has become so polarized that that may be hard to find — all the people one would normally turn to seem to be either denouncing or defending the law in its entirety. Still, there’s got to be something out there, right?

Posts in Three Lines

More virtual posts. Last batch, I ended up writing one (so far). Better this time? Maybe; anyway micro-posts are also things.


Hippie macroeconomics. Paul Krugman and Bill Mitchell both object to this Robert Samuelson column in the Washington Post, about how the real problem in economic policy is the if-it-feels-good-do-it macroeconomics of the 1960s. And indeed, it is objectionable. But remember, Christina Romer thinks the exact same thing.

Dedication. My friend Ben Balthaser recently published a book of poems, Dedication, based on recollections of  various American Communists from the 1940s and 50s, which I’d highly recommend even if I didn’t know him. It’s great poetry, but it’s also oral history, a bit like Vivian Gornick’s classic book on the inner life of American Communism. It really captures the spiritual appeal of Communism in the first half of the century and the moral heroism of so many people who heard that appeal, and also the almost mythic quality that world takes on in retrospect.

The debt-cycle cycle. Steve Keen’s work on the role of debt in boosting and then constraining aggregate demand is worth some careful attention. I wish, though, that there were more acknowledgement that this is not a new idea, but an old idea coming back into fashion. Very similar debt cycles have been described by Benjamin Friedman (1984 and 1986), Caskey and Fazzari (1991), Alfred Eichner (1991) and Tom Palley (1994 and 1997), to pick just some examples; Eichner, for instance, uses the equation E = F + ΔD − DS (aggregate expenditure equals cashflow plus debt growth minus debt service payments), which seems to me to state the key point of Keen’s “Walras-Schumpeter-Minsky’s Law” in a clearer and more straightforward way.

Free streets! The attempt to put a price on driving into Manhattan a few years ago failed, basically because Bloomberg tried to just cut a deal with the “three men in a room” and didn’t realize he needed to actually build support. But it didn’t help that the way it was pitched, drivers saw it as a punitive restriction on their freedom, when really — as anyone who finds themselves driving in Manhattan should be easily convinced — by far the biggest winners from fewer cars on the road are drivers themselves. I’d go all in on that point, and change the name from “congestion pricing” to “free streets.”

Crotty on owners and managers. In my “disgorge the cash” posts, I’ve usually pointed to chapter 6 of Wall Street as the best statement of the idea that financialization is fundamentally a political project by asset owners to claim a greater share of the surplus from nonfinancial firms. Another good (and more theoretical) discussion of the same idea is Jim Crotty’s article, “Owner-Manager Conflict and Financial Theories of Investment Instability.” Maybe I’ll type my notes on it here.

Okun’s Law. The less than proportionate response of employment to short-run changes in GDP is one of the few concrete empirical laws in macroeconomics. This is usually interpreted as the result of “labor hoarding” and the costs of hiring and firing workers. But it could also be explained by shifts of workers into higher-productivity sectors when demand is high, and into lower-productivity sectors when demand is low — Joan Robinson’s famous example is the person who loses a factory job and ends up selling pencils on the street.

Higgs: meh. I haven’t taken a physics class since my first year of college, but I’m enough of a science fan to share Stephen Wolfram’s disappointment that last week’s Higgs discovery just confirmed the 40-year-old Standard Model, without pointing the way toward anything new. Also, just to be clear: It is not true that the Higgs field is responsible for mass in general, only for the rest mass of fundamental particles, like quarks and electrons. Neutrons and protons, the massive particles that make up normal matter, get only a tiny fraction of their mass from the rest mass of their constituent quarks; almost all of it comes from the binding energy of the strong force between them, which the Higgs has nothing to do with.

Interest Rates and Expectations: Responses and Further Thoughts

Some good questions asked in comments to yesterday’s post.

Random Lurker doubts whether there is a strict inverse relationship between interest rates and bond values. Indeed there is not, apart from perpetuities (bonds with an infinite maturity, where the principal is never repaid). I should have been clearer in the post: I was talking about perpetuities just as a simplification of the general case of long assets. But I would argue it’s a reasonable simplification. If you think that the importance of interest rates is primarily for the valuation (rather than the financing) of capital goods, and you think that capital goods are effectively infinitely lived, then an analysis in terms of perpetuities is the strictly correct way to think about it. (Both assumptions are defensible, as a first approximation, and Keynes seems to have held both.) On the other hand, if you are thinking in terms of financing conditions for long but not infinitely lived assets, the perpetuity is only an approximation, but for long maturities it’s a reasonably close one. For example, a 30-year bond loses about 14% of its value when interest rates rise from 5% to 6%, compared with a loss of about 17% for a perpetuity. Qualitatively the story will hold as long as the assets whose interest rates matter are much longer-lived than the timescale of business cycles.
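For what it’s worth, those loss figures are easy to check. A quick sketch of the bond arithmetic:

```python
# Price a 30-year par bond and a perpetuity at a 5 percent yield, then reprice
# both at 6 percent, to confirm the losses quoted above.

def coupon_bond_price(coupon, ytm, years, face=100.0):
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + ytm) ** years

p_before = coupon_bond_price(5.0, 0.05, 30)     # 100.0, i.e. par
p_after = coupon_bond_price(5.0, 0.06, 30)      # ~86.2
print(round(1 - p_after / p_before, 3))         # ~0.138, about a 14% loss

perp_before, perp_after = 5.0 / 0.05, 5.0 / 0.06
print(round(1 - perp_after / perp_before, 3))   # ~0.167, about a 17% loss
```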

Max is confused about my use of “bull” and “bear.” Again, I should have been clearer: I am using the terms in the way that Keynes did, to refer to bullishness and bearishness about bond prices, not about the economy in general.

Finally, the shortest but most substantive comment, from Chris Mealy:

Forcing Bill Gross to lose billions in slow motion is a crazy way to get to full employment.

It is! And that is kind of the point.

I wrote this post mainly to clarify my own thinking, not to make any policy or political argument. But obviously the argument that comes out of this is that while monetary policy can help stabilize demand, it’s very weak at restoring demand once it’s fallen – and not just because short rates can’t go below zero, or because central banks are choosing the wrong target. (Although it is certainly true, and important, that central bankers are not really trying to reduce unemployment.)

Here is the thing: expectations of returns on investment are also conventional and moderately elastic. Stable full employment requires both that expected sales are equal to expenditure at full employment, and that interest rates are such that the full employment level of output is chosen by profit-maximizing businesses. But once demand has fallen – and especially if it has remained depressed for a while – expected sales fall, so the interest rate that would have been low enough to prevent the fall in activity is no longer low enough to reverse it. This is why you temporarily need lower rates than you will want when the economy recovers. But the expectation of long rates returning to their old level will prevent them from falling in the first place. “The power of the central bank to affect the long rate is limited by the opinions about its normal level inherited from the past.” This is why monetary policy cannot work in a situation like this without Bill Gross first losing billions – it’s the only way to change his opinion.

Leijonhufvud:

Suppose that a situation arises in which the State of Expectation happens to be “appropriate”… but that the long rate is higher than “optimal,” so that asset demand prices are too low for full employment… Then it seems quite reasonable to demand that the Central Bank should go to great lengths in trying to reduce the interest rate… If, however, the actual interest rate equals the “optimal” rate consistent with the suggested “neutral state,” while asset prices are too low due to a State of Expectation which is “inappropriately pessimistic”-what then? 

Consider what would happen if, in this situation, the long bond rate were forced down to whatever level was necessary to equate ex ante rates of saving and investment at full employment. This would mean that prices of bonds-assets with contractually fixed long receipt streams-would shoot up while equity prices remained approximately constant instead of declining. Through a succession of short periods, with aggregate money expenditures at the full employment level, initial opinions about the future yield on capital would be revealed as too pessimistic. Anticipated returns to capital go up. The contractually fixed return streams on bonds remain the same, and it now becomes inevitable that bond-holders take a capital loss (in real terms). 

The Central Bank now has two options. (a) It may elect to stand by [leaving rates at very low levels.] …  Since the situation is one of full employment, inflation must result and the “real value” of nominally fixed contracts decline. (b) It may choose … to increase market rate sufficiently to prevent any rise in [inflation]. Bond-holders lose again, since this means a reduction in the money value of bonds.

In other words, in our world of long-lived assets, if you rely only on monetary policy to get you out of depression, Bill Gross has to lose money. On a theoretical level, the fact that the lifetime of capital goods is long relative to the period over which we can reliably treat “fundamentals” as fixed means that the Marshallian long run, in which the capital stock is fully adjusted, does not apply to any actual economy. (This fact has many important implications beyond the scope of these posts.)

The key point for our purposes is that, in the slump, investment demand is lower than it will be once the economy recovers. So if the interest rate falls enough to end the recession, then you must have either a rise in rates or inflation once the slump ends. But either of those will mean losses for bondholders, anticipation of which will prevent long rates from falling in the first place. Only if you successfully fool bond market participants can monetary policy produce recovery on a timescale significantly less than average asset life. The alternative is to prove the pessimistic expectations of entrepreneurs wrong by directly raising incomes, but that seems to be off the table.

This point is obvious, but it’s strangely ignored, perhaps because discussion of monetary policy is almost entirely focused on how optimal policy can prevent slumps from occurring in the first place. The implicit assumption of Krugman’s IS-LM analysis, for instance, is that investment demand has permanently fallen, presumably for reasons unrelated to demand conditions themselves. So the new low rate is permanently appropriate. But — I feel it’s safe to say — Krugman, and certainly market participants, don’t really believe this. And if policy is going to be reversed on a timescale significantly shorter than the duration of the assets whose demand is supposed to be affected by monetary policy, then policy will not work at all.

At this point, though, it would seem that we have proven too much. The question becomes not “why isn’t monetary policy working now?” but “how did monetary policy ever work?” I can think of at least four answers, all of which probably have some truth to them.

1. It didn’t. The apparent stability of economies with active central banks is due to other factors. Changes in the policy rate have not been stabilizing, or have even been destabilizing. This is consistent with the strand of the Post Keynesian tradition that emphasizes the inflationary impact of rate increases, since short rates are a component of marginal costs; but it is also basically the view of Milton Friedman and his latter-day epigones in the Market Monetarist world. I’m sympathetic but don’t buy it; I think the evidence is overwhelming that high interest rates are associated with low income/output, and vice versa.

2. The focus on long-lived goods is a mistake. The real effect of short rates is not via long rates, but on stuff that is financed directly by short borrowing, particularly inventories and working capital.  I’m less sure about this one, but Keynes certainly did not think it was important; for now let’s follow him. A variation is income distribution, including corporate cashflow. Bernanke believes this. I’m doubtful that it’s the main story, but I presume there is something in it; how much is ultimately an empirical question.

3. The answer suggested by the analysis here: Monetary policy works well when the required interest rate variation stays within the conventional “normal” range. In this range, there are enough bulls and bears for the marginal bond buyer to expect the current level of interest to continue indefinitely, so that bond prices are not subject to stabilizing speculation and there is no premium for expected capital losses or gains; so long rates should move more or less one for one with short rates. This works on a theoretical level, but it’s not obvious that it particularly fits the data.

4. The most interesting possibility, to me: When countercyclical monetary policy seemed effective, it really was, but on different principles. Autonomous demand at prevailing interest rates was normally at a level *above* full employment, and stabilization was carried out via direct controls on credit creation, such as reserve requirements. A variation on this is that monetary policy has only ever worked through the housing market.

Regardless of the historical issue, the most immediately interesting question is how and whether monetary policy can work now. And here, we can safely say that channels 2, 3 and 4, even if real, are exhausted. So in the absence of fiscal policy, it really does come down to the capacity of sustained low short rates to bring expected long rates down. Sorry, Bill Gross!

UPDATE: I was just reading this rightly classic paper by Chari, Kehoe and McGrattan. They’re pure freshwater, everything I hate. But New Keynesians are just real business cycle theorists with a bad conscience, which means the RBCers pwn them every time in straight-up debate. As here. I’m not interested in that here, though the paper is worth reading if you want the flavor of what “modern macro” is all about. Rather, I’m interested in this subsidiary point in their argument:

as is well-known, during the postwar period, short rates and long rates have a very similar secular pattern. … Second, a large body of work in finance has shown that the level of the long rate is well-accounted for by the expectations hypothesis. … Combining these two features of the data implies that when the Fed alters the current short rate, private agents significantly adjust their long-run expectations of the future short rate, say, 30 years into the future. At an intuitive level, then, we see that Fed policy has a large random walk component to it.

In what sense this is true, I won’t venture to guess. It seems, at least, problematic, given that they also think that “interest rates … should be kept low on average.” The important point for my purposes, tho, is just that even the ultra-orthodox agree that for a change in monetary policy to be effective, it has to be believed to be permanent. “If that which is at all were not forever…”

Interest Rates and (In)elastic Expectations

[Apologies to any non-econ readers, this is even more obscure than usual.]

Brad DeLong observed last week that one of the most surprising things about the Great Recession is how far long-term interest rates have followed short rates toward zero.

I have gotten three significant pieces of the past four years wrong. Three things surprised and still surprise me: (1.) The failure of central banks to adopt a rule like nominal GDP targeting, or its equivalent. (2.) The failure of wage inflation in the North Atlantic to fall even farther than it has–toward, even if not to, zero. (3.) The failure of the yield curve to sharply steepen: federal funds rates at zero I expected, but 30-Year U.S. Treasury bond nominal rates at 2.7% I did not.

… The third… may be most interesting. 

Back in March 2009, the University of Chicago’s Robert Lucas confidently predicted that within three years the U.S. economy would be back to normal. A normal U.S. economy has a short-term nominal interest rate of 4%. Since the 10-Year U.S. Treasury bond rate tends to be one percentage point more than the average of expected future short-term interest rates over the next decade, even five expected years of a deeply depressed economy with essentially zero short-term interest rates should not push the 10-Year Treasury rate below 3%. (And, indeed, the Treasury rate fluctuated around 3 to 3.5% for the most part from late 2008 through mid 2011.) But in July of 2011 the 10-Year U.S. Treasury bond rate crashed to 2%, and at the start of June it was below 1.5%.

The possible conclusions are stark: either those investing in financial markets expect … [the] current global depressed economy to endure in more-or-less its current state for perhaps a decade, perhaps more; or … the ability of financial markets to do their job and sensibly price relative risks and returns at a rational level has been broken at a deep and severe level… Neither alternative is something I would have or did predict, or even imagine.

I also am surprised by this, and for similar reasons to DeLong. But I think the fact that it’s surprising has some important implications, which he does not draw out.

Here’s a picture:

The dotted black line is the Federal Funds rate, set, of course, by the central bank. The red line is the 10-year Treasury; it’s the dip at the far right in that one that surprises DeLong (and me). The green line is the 30-year Treasury, which behaves similarly but has fallen by less. Finally, the blue line is the BAA bond rate, a reasonable proxy for the interest rate faced by large business borrowers; the 2008 financial crisis is clearly visible. (All rates are nominal.) While the Treasury rates are most relevant for the expectations story, it’s the interest rates faced by private borrowers that matter for policy.

The recent fall in 10-year Treasuries is striking. But it’s at least as striking how slowly and incompletely they, and corporate bonds, respond to changes in Fed policy, especially recently. It’s hard to look at this picture and not feel a twinge of doubt about the extent to which the Fed “sets” “the” interest rate in any economically meaningful sense. As I’ve mentioned here before, when Keynes referred to the “liquidity trap,” he didn’t mean the technical zero lower bound on policy rates, but the delinking of the policy rate from the economically important long rates. Clearly, it makes no difference whether or not you can set a policy rate below zero if there’s reason to think that longer rates wouldn’t follow it down in any case. And I think there is reason to think that.

The snapping of the link between monetary policy and other rates was written about years ago by Benjamin Friedman, as a potential; it figured in my comrade Hasan Comert’s dissertation more recently, as an actuality. Both of them attribute the disconnect to institutional and regulatory changes in the financial system. And I agree, that’s very important. But after reading Leijonhufvud’s On Keynesian Economics and the Economics of Keynes [1], I think there may be a deeper structural explanation.

As DeLong says, in general we think that long interest rates should be equal to the average expected short rates over their term, perhaps plus a premium. [2] So what can we say about interest rate expectations? One obvious question is, are they elastic or inelastic? Elastic expectations change easily; in particular, unit-elastic expectations mean that whatever the current short rate is, it’s expected to continue indefinitely. Inelastic expectations change less easily; in the extreme case of perfectly inelastic interest rate expectations, your prediction for short-term interest rates several years from now is completely independent of what they are now.

Inelastic interest-rate expectations are central to Keynes’ vision of the economy. (Far more so than, for instance, sticky wages.) They are what limit the effectiveness of monetary policy in a depression or recession, with the liquidity trap simply the extreme case of the general phenomenon. [3] His own exposition is a little hard to follow, but the simplest way to look at it is to recall that when interest rates fall, bond prices rise, and vice versa. (In fact they are just two ways of describing the same thing.) So if you expect a rise in interest rates in the future that means you’ll expect a capital loss if you hold long-duration bonds, and if you expect a fall in interest rates you’ll expect a capital gain.  So the more likely it seems that short-term interest rates will revert to some normal level in the future, the less long rates should follow short ones.

This effect gets stronger as we consider longer maturities. In the limiting case of a perpetuity — a bond that makes a fixed dollar payment every period forever — the value of the bond is just p/i, where p is the payment in each period and i is the interest rate. So when you consider buying a bond, you have to consider not just the current yield, but the possibility that interest rates will change in the future. Because if they do, the value of the bonds you own will rise or fall, and you will experience a capital gain or loss. Of course future interest rates are never really known. But Keynes argued that there is almost always a strong convention about the normal or “safe” level of interest.

Note that the logic above means that the relationship between short and long rates will be different when rates are relatively high vs. when they are relatively low. The lower rates are, the greater the capital loss from a given increase in rates. As long rates approach zero, the price of a perpetuity grows without bound, and the potential capital loss from even a small rise in rates approaches the entire value of the bond, while the running yield available to offset that loss shrinks toward nothing.
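
To make that asymmetry concrete, here is a minimal numerical sketch (my own illustration, with the coupon normalized to one dollar): the same one-point rise in yields costs the holder of a perpetuity proportionally more the lower the starting rate.

```python
# Perpetuity pricing: price = payment / rate. The loop shows how a one-point
# rise in yields translates into a proportional capital loss at different
# starting rates (illustrative numbers only).

def perpetuity_price(payment, rate):
    """Price of a bond paying `payment` every period forever, at yield `rate`."""
    return payment / rate

for initial_rate in [0.08, 0.04, 0.02, 0.01]:
    new_rate = initial_rate + 0.01   # a one-point rise in yields
    loss = 1 - perpetuity_price(1, new_rate) / perpetuity_price(1, initial_rate)
    print(f"yield {initial_rate:.0%} -> {new_rate:.0%}: capital loss {loss:.1%}")

# yield 8% -> 9%: capital loss 11.1%
# yield 4% -> 5%: capital loss 20.0%
# yield 2% -> 3%: capital loss 33.3%
# yield 1% -> 2%: capital loss 50.0%
```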

Let’s make this concrete. If we write i_s for the short interest rate and i_l for the long interest rate, B for the current price of long bonds, and BE for the expected price of long bonds a year from now, then for all assets to be willingly held it must be the case that i_l = i_s - (BE/B - 1); that is, interest on the long bond must be just enough higher (or lower) than the short rate to cancel out the capital loss (or gain) expected from holding the long bond. If bondholders expect the long-run value of bond prices to be the same as the current value, then long and short rates should be the same. [*] Now for simplicity let’s assume we are talking about perpetuities (the behavior of long but finite bonds will be qualitatively similar), so B is just 1/i_l. [4] Then we can ask: how much do short rates have to fall to produce a one-point fall in long rates?

Obviously, the answer will depend on expectations. The standard economist’s approach to expectations is to say they are true predictions of the future state of the world, an approach with some obvious disadvantages for those of us without functioning time machines. A simpler, and more empirically relevant, way of framing the question is to ask how expectations change based on changes in the current state of the world — which, unlike the future, we can observe. Perfectly inelastic expectations mean that your best guess about interest rates at some future date is not affected at all by the current level of interest rates; unit-elastic expectations mean that your best guess changes one for one with the current level. And of course there are all the possibilities in between. Let’s quantify this as the subjective annual probability that a departure of interest rates from their current or “normal” level will subsequently be reversed. Now we can calculate the exact answer to the question posed above, as shown in the next figure.

For instance, suppose short rates are initially at 6 percent, and suppose this is considered the “normal” level, in the sense that the marginal participant in the bond market regards an increase or decrease as equally likely. Then the long rate will also be 6 percent. Now we want to get the long rate down to 5 percent. Suppose interest rate expectations are a bit less than unit elastic — i.e. when market rates change, people adjust their views of normal rates by almost but not quite as much. Concretely, say that the balance of expectations is that there is a net 5 percent annual chance that rates will return to their old normal level. If the long rate does rise back to 6 percent, people who bought bonds at 5 percent will suffer a capital loss of about 17 percent. A 5 percent chance of a roughly 17 percent loss equals an expected annual loss of nearly 1 percent, so long rates will need to be about a point higher than short rates for people to hold them. [5] So from a starting point of equality, for long rates to fall by one point, short rates must fall by roughly two points. You can see that on the blue line on the graph. You can also see that if expectations are more than a little inelastic, the change in short rates required for a one-point change in long rates is impossibly large unless rates are initially very high.
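
If you want to check the arithmetic, here is the same example as a small sketch built on the indifference condition from a couple of paragraphs back; the inputs are just the illustrative numbers from this paragraph, nothing more.

```python
# Indifference condition for holding a perpetuity: i_l = i_s - (BE/B - 1),
# i.e. the short rate consistent with a given long rate equals the long rate
# plus the expected capital gain (negative here, since a reversion upward
# means a loss).

def required_short_rate(long_rate, normal_rate, reversion_prob):
    """Short rate at which a perpetuity yielding `long_rate` is willingly held,
    when with probability `reversion_prob` per year the long rate snaps back
    to `normal_rate` (and otherwise stays where it is)."""
    price = 1 / long_rate                                    # B
    expected_price = (reversion_prob * (1 / normal_rate)
                      + (1 - reversion_prob) * price)        # BE
    expected_gain = expected_price / price - 1               # BE/B - 1
    return long_rate + expected_gain                         # i_s

i_s = required_short_rate(long_rate=0.05, normal_rate=0.06, reversion_prob=0.05)
print(f"short rate needed for a 5% long rate: {i_s:.2%}")    # about 4.17%
# So starting from 6 percent, the short rate has to fall by roughly two
# points to bring the long rate down by one, as in the text.
```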

It’s easy enough to do these calculations; the point is that unless expectations are perfectly elastic, we should always expect long rates to change less than one for one with short rates; the longer the maturities considered, the more inelastic the expectations, and the lower the initial rates, the less responsive long rates will be. At the longest end of the term structure — the limiting case of a perpetuity — it is literally impossible for interest rates to reach zero, since that would imply an infinite price.

This dynamic is what Keynes was talking about when he wrote:

If . . . the rate of interest is already as low as 2 percent, the running yield will only offset a rise in it of as little as 0.04 percent per annum. This, indeed, is perhaps the chief obstacle to a fall in the rate of interest to a very low level . . . [A] long-term rate of interest of (say) 2 percent leaves more to fear than to hope, and offers, at the same time, a running yield which is only sufficient to offset a very small measure of fear.

Respectable economists like DeLong believe that there is a true future path of interest rates out there, which current rates should reflect; either the best current-information prediction is of government policy so bad that the optimal interest rate will continue to be zero for many years to come, or else financial markets have completely broken down. I’m glad the second possibility is acknowledged, but there is a third option: There is no true future course of “natural” rates out there, so markets adopt a convention for normal interest rates based on past experience. Given the need to take forward-looking actions without true knowledge of the future, this is perfectly rational in the plain-English sense, if not in the economist’s.

A final point: For Keynes — a point made more clearly in the Treatise than in the General Theory — the effectiveness of monetary policy depends critically on the fact that there are normally market participants with differing expectations about future interest rates. What this means is that when interest rates rise, people who think the normal or long-run rate of interest is relatively low (“bulls”) can sell bonds to people who think the normal rate is high (“bears”), and similarly when interest rates fall the bears can sell to the bulls. Thus the marginal bond will be held by someone who thinks the current rate of interest is the normal one, and so does not require a premium for expected capital gains or losses. This is the same as saying that the market as a whole behaves as if expectations are unit-elastic, even though this is not the case for individual participants. [6] But when interest rates move too far, there will no longer be enough people who think the new rate is normal to willingly hold the stock of bonds without an interest-rate risk premium. In other words, you run out of bulls or bears. Keynes was particularly concerned that an excess of bear speculators relative to bulls could keep long interest rates permanently above the level compatible with full employment. The long rate, he warned,

may fluctuate for decades about a level which is chronically too high for full employment; – particularly if it is the prevailing opinion that the rate of interest is self-adjusting, so that the level established by convention is thought to be rooted in objective grounds much stronger than convention, the failure of employment to attain an optimum level being in no way associated, in the minds either of the public or of authority, with the prevalence of an inappropriate range of rates of interest.

If the belief that interest rates cannot fall below a certain level is sufficiently widespread, it becomes self-fulfilling. If people believe that long-term interest rates can never persistently fall below, say, 3 percent, then anyone who buys long bonds much below that is likely to lose money. And, as Keynes says, this kind of self-stabilizing convention is more likely to the extent that people believe that it’s not just a convention, but that there is some “natural rate of interest” fixed by non-monetary fundamentals.

So what does all this mean concretely?

1. It’s easy to see inelastic interest-rate expectations in the data. Long rates consistently lag behind short rates. During the 1960s and 1970s, when rates were secularly rising, long rates were often well below the Federal Funds rate, especially during tightening episodes; during the period of secularly falling rates since 1980, this has almost never happened, but very large term spreads have become more common, especially during loosening episodes.

2. For the central bank to move long rates, it must persuade markets that changes in policy are permanent, or at least very persistent; this is especially true when rates are low. (This is the main point of this post.) The central bank can change rates on 30-year bonds, say, only by persuading markets that average rates over the next 30 years will be different than previously believed. Over small ranges, the existence of varying beliefs in the bond market makes this not too difficult (since the central bank doesn’t actually have to change any individual’s expectations if bond sales mean the marginal bondholder is now a bull rather than a bear, or vice versa) but for larger changes it is more difficult. And it becomes extremely difficult to the extent that economic theory has taught people that there is a long run “natural” rate of interest that depends only on technology and time preferences, which monetary policy cannot affect.

Now, the obvious question is, how sure are we that long rates are what matters? I’ve been treating a perpetual bond as an approximation of the ultimate target of monetary policy, but is that reasonable? Well, one point on which Keynes and today’s mainstream agree is that the effect of interest rates on the economy comes through demand for long-lived assets — capital goods and housing. [7] According to the BEA, the average current-cost age of private fixed assets in the US is a bit over 21 years; since the average age of the stock should be very roughly half the service life of a new asset, that implies an expected lifetime of something like 40 years or more. For Keynes (Leijonhufvud stresses this point; it’s not so obvious in the original texts) the main effect of interest rates is not on the financing conditions for new fixed assets, as most mainstream and heterodox writers alike assume, but on the discount rate used in valuing the assets. In that case the maturity of the assets themselves is what matters. On the more common view, it’s the maturity of the debt used to finance them, which may be a bit less; but the maturity of debt is usually matched to the maturity of assets, so the conclusion is roughly the same. The relevant time horizon for fixed assets is long enough that perpetuities are a reasonable first approximation. [8]

3. So if long rates are finally falling now, it’s only because an environment of low rates is being established as the new normal. There’s a great deal of resistance to this, since if interest rates do return to their old normal levels, the capital losses to bondholders will be enormous. So to get long rates down, the Fed has to overcome intense resistance from bear speculators. Only after a great deal of money has been lost betting on a return of interest rates to old levels will market participants begin to accept that ultra-low rates are the new normal. The recent experience of Bill Gross of PIMCO (the country’s largest bond fund) is a perfect example of this story. In late 2010, he declared that interest rates could absolutely fall no further; it was the end of the 30-year bull market in bonds. A year later, he put his money where his mouth was and sold all his holdings of Treasuries. As it turned out, this was just before bond prices rose by 30 percent (the flipside of the fall in rates), a misjudgment that cost his investors billions. But Gross and the other “bears” had to suffer those kinds of losses for the recent fall in long rates to be possible. (It is also significant that they have resisted not only in the market, but politically as well.) The point is, outside a narrow range, changes in monetary policy are only effective when they cease to be perceived as merely countercyclical and come to be seen as carrying information about “the new normal.” Zero only matters if it’s permanent zero.

4. An implication of this is that in a world where the lifespan of assets is much longer than the scale of business-cycle fluctuations, we cannot expect interest rates to be stationary if monetary policy is the main stabilization tool. Unless expectations are very elastic, effective monetary policy requires secular drift in interest rates, since each short-term stabilization episode will result in a permanent change in interest rates. [9] You can see this historically: the falls in long rates in the 1990 and 2000 loosenings both look about equal to the permanent components of those changes. This is a problem for two reasons: First, because it means that monetary policy must be persistent enough to convince speculators that it does represent a permanent change, which means that it will act slower, and require larger changes in short rates (with the distortions those entail), than in the unit-elastic expectations case. And second, because if there is some reason to prefer one long-run level of interest rates to another (either because you believe in a “natural” rate, or because of the effects on income distribution, asset price stability, etc.) it would seem that maintaining that rate is incompatible with the use of monetary policy for short-run stabilization. And of course the problem is worse, the lower interest rates are.

5. One way of reading this is that monetary policy works better when interest rates are relatively high, implying that if we want to stabilize the economy with the policy tools we have, we should avoid persistently low interest rates. Perhaps surprisingly, given what I’ve written elsewhere, I think there is some truth to this. If “we” are social-welfare-maximizing managers of a capitalist economy, and we are reliant on monetary policy for short-run stabilization, then we should want full employment to occur in the vicinity of nominal rates around 10 percent rather than 5 percent. (One intuitive way of seeing this: Higher interest rates are equivalent to attaching a low value to events in the future, while low interest rates are equivalent to a high value on those events. Given the fundamental uncertainty about the far future, choices in the present will be more stable if they don’t depend much on far-off outcomes.) In particular — I think this is a special case of the logic I’ve been outlining here, though one would have to think it through — very low interest rates are likely to be associated with asset bubbles. But the conclusion, then, is not to accept a depressed real economy as the price of stable interest rates and asset prices, but rather to “tune” aggregate demand to a higher level of nominal interest rates. One way to do this, of course, is higher inflation; the other is a higher level of autonomous demand, either for business investment (the actual difference between the pre-1980 period and today, I think) or government spending.
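
To put a number on the intuition in that parenthesis, here is a toy discounting calculation (purely illustrative, not drawn from any data): the present value of a dollar thirty years out, at a few different rates.

```python
# Present value of $1 received 30 years from now, at different discount rates.
# The higher the rate, the less today's valuations depend on far-off outcomes.

for rate in [0.10, 0.05, 0.02]:
    pv = 1 / (1 + rate) ** 30
    print(f"discount rate {rate:.0%}: PV of $1 in 30 years = ${pv:.2f}")

# discount rate 10%: PV of $1 in 30 years = $0.06
# discount rate 5%: PV of $1 in 30 years = $0.23
# discount rate 2%: PV of $1 in 30 years = $0.55
# At 10 percent, guesses about the distant future barely register in present
# decisions; at 2 percent, they loom large.
```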

[1] The most invigorating economics book I’ve read in years. It’ll be the subject of many posts here in the future, probably.

[2] Why there should be a pure term premium is seldom discussed but actually not straightforward. It’s usually explained in terms of liquidity preference of lenders, but this invites the questions of (1) why liquidity preference outweighs “solidity preference”; and (2) why lenders’ preferences should outweigh borrowers’. Leijonhufvud’s answer, closely related to the argument of this post, is that the “excessively long” lifespan of physical capital creates chronic excess supply at the long end of the asset market. In any case, for the purpose of this post, we will ignore the pure premium and assume that long rates are simply the average of expected short rates.

[3] Keynes did not, as is sometimes suggested by MMTers and other left Keynesians, reject the effectiveness of monetary policy in general. But he did believe that it was much more effective at stabilizing full employment than at restoring full employment from a depressed state.

[4] I will do up these equations properly once the post is done.

[5] I anticipate an objection to reasoning on the basis of an equilibrium condition in asset markets. I could just say, Keynes does it. But I do think it’s legitimate, despite my rejection of the equilibrium methodology more generally. I don’t think there’s any sense in which human behavior can be described as maximizing some quantity called “utility,” not even as a rough approximation; but I do think that capitalist enterprises can be usefully described as maximizing profit. I don’t think that expectations in financial markets are “rational” in the usual economists’ sense, but I do think that one should be able to describe asset prices in terms of some set of expectations.

[6] We were talking a little while ago with Roger Farmer, Rajiv Sethi, and others about the desirability of limiting economic analysis to equilibria, i.e. states where all expectations are fulfilled. This implies, among other things, that all expectations must be identical. Keynes’ argument for why long rates are more responsive to short rates within some “normal” range of variation is — whether you think it’s right or not — an example of something you just can’t say within Farmer’s preferred framework.

[7] Despite this consensus, this may not be entirely the case; and in fact to the extent that monetary policy is effective in the real world, other channels, like income distribution, may be important. But let’s assume for now that demand for long-lived assets is what matters.

[8] Hicks had an interesting take on this, according to Leijonhufvud. Since the production process is an integrated whole, “capital” does not consist of particular goods but of a claim on the output of the process as a whole. Since this process can be expected to continue indefinitely, capital should be generally assumed to be infinitely-lived. When you consider how much of business investment is motivated by maintaining the firm’s competitive position — market share, up to date technology, etc. — it does seem reasonable to see investment as buying not a particular capital good but more of the firm as a whole.

[9] There’s an obvious parallel with the permanent inflation-temporary employment tradeoff of mainstream theory. Except, I think mine is correct!

In Which I Dare to Correct Felix Salmon

Felix Salmon is my favorite business blogger — super smart, cosmopolitan and impressively unimpressed by the Masters of the Universe he spends his days observing. In general, I’d expect him to be much more on top of current financial data than I am. But in today’s post on the commercial paper market, he makes an uncharacteristic mistake — or rather, uncharacteristic for him but highly characteristic of the larger conversation around finance.

According to Felix:

The commercial paper market has to a first approximation become an entirely financial market, a place for banks and shadow banks to do their short-term borrowing while the interbank market remains closed.

According to David S. Scharfstein of Harvard Business School, who also testified last week, of the 50 largest issuers of debt to money market funds today, only two are nonfinancial firms; the rest are banks and other financial companies, many of them foreign.

Once upon a time, before the financial crisis, money-market funds were a mechanism whereby individual investors could make safe, short-term loans to big corporates, disintermediating the banks. But all that has changed now. For one thing, says Davidoff, “about two-thirds of money market users are sophisticated finance investors”. For another, the corporates have evaporated away, to be replaced by financials. In the corporate world, it seems, the price mechanism isn’t working any more: either you’re a big and safe corporate and don’t want to run the refinancing risk of money-market funds suddenly drying up, or else you’re small enough and risky enough that the money market funds don’t want to lend to you at any price.

I’m sorry, but I don’t think that’s right.

Reading Felix, you get the clear impression that before the crisis, or anyway not too long ago, most borrowers in the commercial paper market were nonfinancial corporations. It has only “become an entirely financial market” relatively recently, he suggests, as nonfinancial borrowers have dropped out. But, to me at least, the real picture looks rather different.

Source: Flow of Funds

The graph shows outstanding financial and nonfinancial commercial paper on the left scale, and the financial share of the total on the right scale. As you can see, the story is almost the opposite of the one Felix tells. Financial borrowers have always dominated the commercial paper market, and their share has fallen, not risen, in the wake of the financial crisis and recession. Relative to the economy, nonfinancial commercial paper outstanding is close to where it was at the peak of the past cycle. But financial paper is down by almost two-thirds. As a result, the nonfinancial share of the commercial paper market has doubled, from 7 to 15 percent — the highest it’s been since the 1990s.
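
As a rough consistency check on those shares, using only the rounded figures just quoted rather than the underlying Flow of Funds series:

```python
# Back-of-the-envelope check: if nonfinancial paper is roughly flat relative
# to the economy while financial paper shrinks by "almost two-thirds" (taken
# here as 60 percent), how much should the nonfinancial share rise?

nonfin_share_then = 0.07              # nonfinancial share at the pre-crisis peak
fin_share_then = 1 - nonfin_share_then
fin_decline = 0.60                    # "down by almost two-thirds"

nonfin_now = nonfin_share_then                    # roughly unchanged
fin_now = fin_share_then * (1 - fin_decline)
nonfin_share_now = nonfin_now / (nonfin_now + fin_now)
print(f"implied nonfinancial share today: {nonfin_share_now:.0%}")   # about 16%
# In the same ballpark as the 15 percent in the text: the shift in composition
# comes from financial paper shrinking, not from nonfinancial paper growing.
```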

Why does this matter? Well, of course, it’s important to get these things right. But I think Felix’s mistake here is revealing of a larger problem.

One of the most dramatic features of the financial crisis of fall 2008, bringing the Fed as close as it got to socializing the means of intermediation, was the collapse of the commercial paper market. But as I’ve written here before, it was almost never acknowledged that the collapse was largely limited to financial commercial paper. Nonfinancial borrowers did not lose access to credit in the way that banks and shadow banks did. The gap between the financial and nonfinancial commercial paper markets wasn’t discussed, I believe, because of the way the crisis was seen entirely through the eyes of finance.

I suspect the same thing is happening with the evolution of the commercial paper market in the past few years. The Flow of Funds shows clearly that commercial paper borrowing by nonfinancial firms has held up reasonably well; the fall in commercial paper lending is limited to financial borrowers. But the idea that banks’ problems are everyone’s problems is taken for granted, or at most justified with a pious handwave about the importance of credit to the real economy.

And that’s the second assumption, again usually unstated, at issue here: that providing credit to households and businesses is normally the main activity of finance, with departures from that role an anomalous recent development. But what if the main action in the financial system has never been intermediating between ultimate lenders and borrowers? What if banks have always mostly been, not to put too fine a point on it, parasites?

During the crisis of 2008 one big question was whether it was possible to let the big banks fail, or whether the consequences for the real economy would be prohibitively awful. On the left, Dean Baker took the first position while Doug Henwood took the second, arguing that the alternative to bailouts could be a second Great Depression. I was ambivalent at the time, but I’ve been moving toward the let-them-fail view. (Especially if the counterfactual is governments and central banks putting comparable resources into sheltering the real economy from collapsing banks as they have put into propping them up.) The evolution of the commercial paper market looks to me like one more datapoint supporting that view. The collapse of interbank lending doesn’t seem to have affected nonbank borrowers much.

(Which brings us to a larger point, of whether the continued depressed state of the real economy is due to a lack of access to credit. Obviously I think not, but that’s beyond the scope of this post.)

An insidious feature of the world we live in is an unconscious tendency to adopt finance’s point of view. This is as true of intellectuals as of everyone else. An anthropologist of my acquaintance, for instance, did his fieldwork on the New York financial industry. Nothing wrong with that — he’s got some very smart things to say about it — but you really can’t imagine someone doing a similar project on any other industry, apart from high-tech internet stuff. In our culture, finance is just interesting in a way that other businesses are not. I’m not exempting myself from this, by the way. The financial crisis and its aftermath was the most exciting time in memory to be thinking about economics; I’m not going to deny it, it was fun. And there are plenty of people on the left who would say that a tendency, which I confess to, to let the conflict between Wall Street and the real economy displace the conflict between labor and capital in our political language, is a symptom — a kind of reaction-formation — of the same intellectual capture.

But that is perhaps over-broadening the point, which is just this: That someone as smart as Felix Salmon could so badly misread the commercial paper market is a sign of how hard we have to work to distinguish the state of the banks from the state of the economy.

The Story of Q

More posts on Greece, coming right up. But first I want to revisit the relationship between finance and nonfinancial business in the US.

Most readers of this blog are probably familiar with Tobin’s q. The idea is that if investment decisions are being made to maximize the wealth of shareholders, as theory and, sometimes, the law say they should be, then there should be a relationship between the value of financial claims on the firm and the value of its assets. Specifically, the former should be at least as great as the latter, since if investing another dollar in the firm does not increase its value to shareholders by at least a dollar, then that money would better have been returned to them instead.

As usual with anything interesting in macroeconomics, the idea goes back to Keynes, specifically Chapter 12 of the General Theory:

the daily revaluations of the Stock Exchange, though they are primarily made to facilitate transfers of old investments between one individual and another, inevitably exert a decisive influence on the rate of current investment. For there is no sense in building up a new enterprise at a cost greater than that at which a similar existing enterprise can be purchased; whilst there is an inducement to spend on a new project what may seem an extravagant sum, if it can be floated off on the Stock Exchange at an immediate profit. Thus certain classes of investment are governed by the average expectation of those who deal on the Stock Exchange as revealed in the price of shares, rather than by the genuine expectations of the professional entrepreneur.

It was this kind of reasoning that led Hyman Minsky to describe Keynes as having “an investment theory of the business cycle, and a financial theory of investment.” Axel Leijonhufvud, on the other hand, would warn us against taking the dramatis personae of this story too literally; the important point, he would argue, is the way in which investment responds to the shifts in the expected return on fixed investment versus the long-term interest rate. For better or worse, postwar Keynesians including the eponymous Tobin followed Keynes here in thinking of one group of decisionmakers whose expectations are embodied in share prices and another group setting investment within the firm. If shareholders are optimistic about the prospects for a business, or for business in general, the value of shares relative to the cost of capital goods will rise, a signal for firms to invest more; if they are pessimistic, share prices will fall relative to the cost of capital goods, a signal that further investment would be, from the point of view of shareholders, value-subtracting, and the cash should be disgorged instead.

There are various specifications of this relationship; for aggregate data, the usual one is the ratio of the value of corporate equity to corporate net worth, that is, to total assets minus total liabilities. In any case, q fails rather miserably, both at the aggregate and the firm level, in its original purpose of predicting investment decisions. Here is q for nonfinancial corporations in the US over the past 60 years, along with corporate investment.

The orange line is the standard specification of q; the dotted line is equity over fixed assets, which behaves almost identically. The black line shows nonfinancial corporations’ nonresidential fixed investment as a share of GDP. As you can see, apart from the late 90s tech boom, there’s no sign that high q is associated with high investment, or low q with low investment. In fact, the biggest investment boom in postwar history, in the late 1970s, comes when q was at its low point. [*]
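
For concreteness, the standard specification plotted above amounts to the following; the magnitudes here are made up for illustration, not taken from the data.

```python
# A minimal sketch of the aggregate q measure: market value of equity over
# corporate net worth. The inputs below are hypothetical, in $ trillions.

def tobins_q(equity_market_value, total_assets, total_liabilities):
    """Standard specification: value of corporate equity over net worth."""
    net_worth = total_assets - total_liabilities
    return equity_market_value / net_worth

q = tobins_q(equity_market_value=18.0, total_assets=45.0, total_liabilities=25.0)
print(f"q = {q:.2f}")   # 0.90: financial claims on the sector are worth less
                        # than the measured cost of its net assets
```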

The obvious way of looking at this is that, contra Tobin and (at least some readings of) Keynes, stock prices don’t seem to have much to do with fixed investment. Which is not so strange, when you think about it — it’s never been clear why managers and entrepreneurs should substitute the stock market’s beliefs about the profitability of some new investment for their own, presumably better-informed, ones. Just as well, given the unanchored gyrations of the stock market.

This is true as far as it goes, but there’s another way of looking at it. Because q isn’t just uncorrelated with investment; for most of the period, at least until the 1990s, it’s almost always well below 1. This is even more surprising when you consider that a well-run firm with an established market ought to have a q above one, since it will presumably have intangible assets — corporate culture, loyal customers and so on — that don’t show up on the balance sheet. In other words, measured net worth should look “too low” relative to the market value of the firm; in fact, it almost always looks too high. For most of the postwar period, it seems that corporations were systematically investing too much, at least from the point of view of maximizing shareholder value.

I was talking with Suresh the other day about labor, and about the way labor organizing can be seen as a kind of assertion of a property right. Whether shareholders are “the” residual claimants of a firm’s earnings is ultimately a political question, and in times and places where labor is strong, they are not. Same with tenant organizing — you could see it as an assertion that long-time tenants have a property right in their homes, which I think fits most people’s moral intuitions.

Seen from this angle, the fact that businesses were investing “too much” during much of the postwar decades is no longer a sign they were being irrational or made a mistake; it just suggests that they were considering the returns to claimants other than shareholders. Though one wouldn’t want to read too much into it, it’s interesting in this light that for the past dozen years aggregate q has been sitting at one, exactly where loyal agents for shareholders would try to keep it. In liberal circles, the relatively low business investment of the past decade is often considered a sign of something seriously wrong with the economy. But maybe it’s just a sign that corporations have learned to obey their masters.

EDIT: In retrospect, the idea of labor as residual claimant does not really belong in this argument; it just confuses things. I am not suggesting that labor was ever able to compel capitalist firms to invest more than they wanted, but rather that “capitalists” were more divided sociologically before the shareholder revolution, and that managers of firms chose a higher level of investment than was optimal from the point of view of owners of financial assets. Another, maybe more straightforward way of looking at this is that q is higher now — financial claims on a firm are more valuable relative to the cost of its assets — because it really is better to own financial claims on a productive enterprise today than in the pre-1980 period. You can reliably expect to receive a greater share of its surplus now than you could then.

[*] One of these days I really want to write something about the investment boom of the 1970s. Nobody seems to realize that the highest levels of business investment in modern US history came in 1978-1981, supposedly the last terrible days of stagflation. Given the general consensus that fixed capital formation is at the heart of economic growth, why don’t people ask what was going right then?

Part of it, presumably, must have been the kind of sociological factors pointed to here — this was just before the Revolt of the Rentiers got going, when businesses could still pursue growth, market share and innovation for their own sakes, without worrying much about what shareholders thought. Part must have been that the US was still able to successfully export in a range of industries that would become uncompetitive when the dollar appreciated in the 1980s. But I suspect the biggest factor may have been inflation. We always talk about investment being encouraged by stuff that makes it more profitable for capitalists to hold their wealth in the form of capital goods. But logically it should be just as effective to reduce the returns and/or safety of financial assets. Since neither nominal interest rates nor stock prices tracked inflation in the 1970s, wealthholders had no choice but to accept holding a greater part of their wealth in the form of productive business assets. The distributional case for tolerating inflation is a bit less off-limits in polite conversation than it was a few years ago, but the taboo on discussing its macroeconomic benefits is still strong. Would be nice to try violating that.

A Greek Myth

Most days, I’m a big fan of Paul Krugman’s columns.

Unlike his economics, which makes a few too many curtsies to orthodoxy, his political interventions are righteous in tone, right-on in content, and what’s more, strategic — unlike many leftish intellectuals, he clearly cares about being useful — about saying things that are not only true, but that contribute to the concrete political struggle of the moment. He’s so much better than almost all of his peers it’s not even funny.

But — well, you knew there had to be a but.

But this time, he’s gotten his economics in his politics. And the results are not pretty.

In today’s column, he rightly dismisses arguments that the root of the Euro crisis is that workers in Greece and the other peripheral countries are lazy, or unproductive, or that those countries have excessive regulation and bloated welfare states. “So how did Greece get into so much trouble?” he asks. His answer:

Blame the euro. Fifteen years ago Greece was no paradise, but it wasn’t in crisis either. Unemployment was high but not catastrophic, and the nation more or less paid its way on world markets, earning enough from exports, tourism, shipping and other sources to more or less pay for its imports. Then Greece joined the euro, and a terrible thing happened: people started believing that it was a safe place to invest. Foreign money poured into Greece, some but not all of it financing government deficits; the economy boomed; inflation rose; and Greece became increasingly uncompetitive.

I’m sorry, but the claim that Greece was earning enough to more or less pay for its imports just is not true. The rest of the passage is debatable, but that sentence is flat-out false. And it matters.

The analysis behind the “earning enough” claim is found on Krugman’s blog. He writes,

One of the things you keep hearing about Greece is that if it exits the euro one way or another there will be no gains, because Greece basically can’t export — so structural reform is the only way forward. But here’s the thing: if that were true, how did Greece pay its way before the big capital flows started coming? The truth is that before the euro and the capital flow bubble it created, Greece ran only small current account deficits (the broad definition of the trade balance, including services and factor income)

And he offers this graph from Eurostat:

The numbers in the graph are fine, as far as they go. And there is the first problem: how far they go. Here’s the same graph, but going back to 1980.

Starting the graph ten years earlier gives a different picture — now it seems that the near-balance on current account in 1993 and 1994 wasn’t the normal state before the euro, but an exceptional occurrence in just those two years. And note that while Greek deficits in the 1980s are small relative to those of the mid-2000s, they are still very far from anything you could reasonably describe as “the country more or less paid its way.” They are, for instance, significantly larger than the contemporaneous US current account deficits that were a central political concern in the 1980s here.

That’s the small problem; there’s a bigger one. Because, what are we looking at? The current account balance. Krugman glosses this as “the broadest measure of the trade balance,” but that’s not correct. (If he taught undergraduate macro, I’m sure he’d mark someone writing that wrong.) It’s broad, yes, but it’s a different concept, covering all international payments other than asset purchases, including some (transfers and income flows) that are not trade by any possible definition. The current account includes, for example, remittances by foreign workers to their home countries. So by Krugman’s logic here, the fact that there are lots of Mexican migrant workers in the US sending money home is a sign that Mexico is able to export successfully to the US, when in the real world it’s precisely a sign that it isn’t.

Most seriously, the current account includes transfers between governments. In the European context these are quite large. To call the subsidies that Greece received under the European Common Agricultural Policy export earnings is obviously absurd. Yet that’s what Krugman is doing.

The following graph shows how big a difference it makes when you count development assistance as export earnings.

The blue line is the current account balance, same as in Krugman’s graph, again extended back to 1980. The red line is the current account balance not counting intergovernmental transfers. And the green line is the current account not counting any transfers. [*] It’s clear from this picture that, contra Krugman, Greece was not earning enough money to pay for its imports before the creation of the euro, or at any time in the past 30 years. If the problem Greece has to solve is getting its foreign exchange payments in line with its foreign exchange earnings, then the bulk of the problem existed long before Greece joined the euro. The central claim of the column is simply false.
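
To see why the treatment of transfers matters so much for this comparison, here is a toy decomposition; the numbers are invented for illustration and are not Eurostat figures.

```python
# Current account = trade balance + net factor income + net transfers.
# Stripping transfers back out shows how far export earnings actually fall
# short of paying for imports (all figures hypothetical, in % of GDP).

trade_balance  = -8.0   # goods and services
net_income     = -1.0   # factor income (interest, dividends, wages)
gov_transfers  = +4.0   # e.g. EU agricultural and structural funds
priv_transfers = +1.0   # e.g. emigrants' remittances home

current_account = trade_balance + net_income + gov_transfers + priv_transfers
ex_gov = current_account - gov_transfers
ex_all = ex_gov - priv_transfers

print(f"headline current account: {current_account:+.1f}% of GDP")  # -4.0
print(f"excluding gov. transfers: {ex_gov:+.1f}% of GDP")           # -8.0
print(f"excluding all transfers:  {ex_all:+.1f}% of GDP")           # -9.0
# The headline number can look moderate even when earnings from exports fall
# well short of paying for imports.
```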

Again, it is true that Greece’s deficits got much bigger in the mid-2000s. I agree with Krugman that this must have been connected with the large capital flows from northern to peripheral Europe that followed the creation of the euro. It remains an open question, though, how much this was due to an increase in relative costs, and how much due to more rapid income growth. By assuming it was entirely the former, Krugman is implicitly, but characteristically, assuming that except in special circumstances economies can be assumed to be operating at full capacity.

But the key point is that the historical evidence does not support the view that current account imbalances only arise when governments interfere in the natural adjustment of foreign exchange markets. Fixed rates or floating, in the absence of very large flows of intergovernmental aid Greece has never come close to current account balance. According to Krugman, Greece’s

famous lack of competitiveness is a recent development, caused by massive post-euro inflows of capital that raised costs and prices. And that’s the kind of thing that currency devaluations can cure.

The historical evidence is not consistent with this claim. Or if it is, it’s only after you go well beyond normal massaging of the data, to something you’d see on The Client List.

* * *
So why does it matter? What’s at stake? I can’t very well go praising Krugman for writing not only what’s true but what is useful, and then justify a post criticizing him on the grounds of Someone Is Wrong On the Internet. No; but there’s something real at stake here.

The basic issue is, does price adjustment solve everything? Krugman won’t quite come out and say Yes, but clearly it’s what he believes. Is he being deliberately dishonest? No, I’m sure he’s not. But this is how ideology works. He’s committed to the idea that relative costs are the fundamental story when it comes to trade, so when he finds a bit of data that seems to conform to that, he repeats it, without giving five minutes of critical reflection to what it actually means.

The basic issue, again, is the need for structural as opposed to price adjustment. Now if “structural adjustment” means lower wages, then of course Godspeed to Krugman here. I’m against structural whatever in that sense too. But I can’t help feeling that he’s pulling in the wrong direction. Because if external devaluation cures the problem then internal devaluation does too, at least in principle.

The fundamental question remains how important relative costs are. The way I see it, if you look at what Greece imports, most of it is stuff Greece doesn’t produce at all. The textbook expenditure-switching vision implicitly endorsed by Krugman ignores the fact that there are different kinds of goods, or accepts what Paul Davidson calls the axiom of gross substitution, that every good is basically (convexly) interchangeable with every other. Hey, Greeks will have fewer computers and no oil, but they’ll spend more time at the beach, and in terms of utility it’s all the same. Except, you know, it’s not.

From where I’m sitting, the only way for Greece to achieve current account balance with income growth comparable to Germany is for Greece to develop new industries. This, not low wages, is the structural problem. This is the same problem faced by any developing country. And it raises the same problem that Krugman, I’m afraid, has never dealt with: how do you convince or compel the stratum that controls the social surplus to commit to the development of new industries? In the textbook world — which Krugman I’m afraid still occupies — a generic financial system channels savings to the highest-return available investment projects. In the real world, not so much. Figuring out how to get savings to investment is, on the contrary, an immensely challenging institutional problem.

So, as a first step toward dealing with it, you should read Gerschenkron. We know, anyway, that the rich probably prefer to hold their wealth in liquid form, or overseas, or both. And we know that even if — unlikely — they want to invest in domestic industries, they’ll choose those that are already cost-competitive, when, we know, the whole point of development is to do stuff where you don’t, right now, have comparative advantage. So, again, it’s a problem.

There are solutions to this problem. Banks, the developmental state, even industrial dynasties. But it is a problem, and it needs to be solved. Relative prices are second order. Or so it seems to me.

[*] Unfortunately Eurostat doesn’t seem to have data breaking down nongovernmental transfer payments to Greece. I suspect that the main form of private transfers is remittances from Greek workers elsewhere in Europe, but perhaps not.