Bottom Rail, Moving Up

David Harvey observed recently that this crisis was the first in modern times in which the periphery has not borne a disproportionate share of the costs. Dani Rodrik’s recent posts make a similar point.

And it’s true — over the past 40 years, we’ve seen repeated episodes when growth has slowed in the rich world, and collapsed catastrophically in the South. Neoliberalism has meant the tools the North uses to ameliorate slumps have been forbidden to poor countries. As my friend Doug Henwood says, in each of the crises of the past two decades, “the First World banks got a Minsky bailout while the Third World suffered a Fisher deflation.” But this time really seems to be different.
It’s interesting to compare the last of those episodes, the Asian crisis of 1997, with the most recent crisis.
Whatever you think the underlying causes of the 1997 crisis were (people sure liked saying “crony capitalism”), the basic facts are straightforward. East and Southeast Asia experienced a “sudden stop” of previously large financial inflows, leaving them unable to meet their foreign-currency obligations. As a result they were forced to abandon their currency pegs, abruptly raise interest rates to unheard-of levels, and eliminate their trade deficits with extreme prejudice. The result was severe economic disruption and brutal recessions. Indonesia, for example, didn’t regain its pre-crisis level of output for a full five years.
Fast forward ten years, and the same region is a bright spot in the global growth picture. What people don’t realize, though, is that many Asian countries experienced a sudden stop of financial inflows in 2007-08 even larger than the one that caused so much destruction in 1997. Add to that a collapse in export earnings as demand in the rich countries fell, and Asian countries faced a substantially larger shock to foreign exchange earnings in 2007-2008 than ten years before.
Change in Gross Flows as Percent of Peak GDP

                  1997Q2-1997Q4                    Peak-2008Q4
               Portfolio   All Forex          Portfolio   All Forex
               Inflows     Inflows            Inflows     Inflows
Indonesia        -10.6       -18.5              -6.1        -9.7
Korea             -5.9       -18.0             -10.1       -29.5
Philippines       -5.4       -15.9              -9.3       -30.1
Thailand          -2.5       -12.4              -6.2       -22.4

Source: IMF International Financial Statistics.
Notes: see footnote [1].
As the table shows, several of the newly industrializing countries of East Asia experienced a shock to their balance of payments in 2007-08 about double that of 1997. So why did the earlier shock have so much larger effects?
“Floating exchange rates” is the wrong answer. (“Fiscal responsibility” is worse; it’s not even wrong.) As captured in the well-known J-curve, even when exchange rates move in the right direction, they initially have the wrong effects on trade flows. Even in the most optimistic case it takes at least a year before a depreciation begins to improve the trade balance. Anyway, in the crisis this time, Asian exchange rates didn’t fall.
In the short run at least, trade flows respond to movements in incomes, not relative prices. Replacing the trade-price relationship with a trade-income relationship is probably the key contribution of Post Keynesian analysis to the study of international finance and trade. [2] Combined with the notion of liquidity constraints — despite what the textbook says, the supply of credit is not infinitely elastic at “the” interest rate — this means there are situations where a country needs to rapidly improve its balance of payments and the only tool available (once direct import restrictions are ruled out) is to reduce income, often by some large multiple of the gap to be closed. [3] In a nutshell, that’s what happened in 1997. So why not this time?
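
To make the arithmetic concrete, here is a minimal sketch of that income-adjustment logic (the formula is spelled out in footnote [3] below); the numbers are purely illustrative, not drawn from any of the countries in the table.

```python
# Illustrative numbers only: how big an income contraction closes a given
# balance-of-payments gap when imports are the only margin of adjustment.

def required_income_fall(bop_gap, marginal_propensity_to_import):
    """Fall in income needed to close a BOP gap via reduced imports.

    With trade responding to income rather than prices, imports fall by
    m * dY, so closing a gap G requires dY = G / m (see footnote [3]).
    """
    return bop_gap / marginal_propensity_to_import

# Suppose foreign-exchange inflows drop by 5 percent of GDP and the marginal
# propensity to import is 0.25 (both hypothetical values).
gap = 5.0   # percent of GDP
m = 0.25    # marginal propensity to import

print(required_income_fall(gap, m))  # -> 20.0: a fall in income of 20 percent of GDP
```
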
The answer is that the Asian countries entered this crisis, unlike the last one, with large current account surpluses and foreign-exchange reserves. Countries that can respond to a negative shock to foreign-exchange inflows by reducing their own accumulation of foreign assets, or by spending down their reserves, don’t have to reduce imports by pushing down income and output. Instead, they could and did raise domestic incomes via stimulus programs and interest-rate cuts, to offset the fall in export demand. And this is not only good news for them; it also dampens the process by which trade-induced contractions would otherwise propagate across borders.
Indeed, it’s probably precisely to be ready for this contingency that Asian countries committed themselves to running surpluses in the first place. There’s an old Martin Wolf column making this argument, which I’ll add to this post when/if I find it. It’s made more systematically in a couple recent articles by Jorg Bibow. (Bibow’s work is about the best I’ve seen on the whole question of “global imbalances”.) He argues that current account surpluses and reserve accumulation should be seen as a form of “self-insurance” by countries that have become disillusioned with the IMF as a provider of insurance against balance-of-payments shocks (its supposed raison d’etre). Bibow is fairly critical of this approach, which is natural from the point of view of someone steeped in Keynes’ ideas of a rational international order. We don’t, after all, think it would be such a great thing if people dispensed with health insurance and saved up money for future health expenses instead. But if your insurer insisted that you donate at least a kidney before they’d approve a blood transfusion, self-insurance might look like a better option.

There are a couple of important points here. First, the economic point:

The direct effects of trade flows on aggregate demand are usually dwarfed by the indirect effects, as government spending and investment adjust to accommodate the balance of payments constraint. This is why trade is not, in a Keynesian framework, a zero-sum game, and why the mewling of American economists about Asian “mercantilism” so misses the point. When capital flowed out of Asia in 1997, the whole development process had to be thrown into reverse in order to make up the shortfall in foreign exchange. That’s what happens when you’re pushed up against your balance of payments constraint. It didn’t happen this time because of their past ten years of self-insurance. In the US, on the other hand, the external balance doesn’t constrain expansionary fiscal policy at all; only the stupidity of our politics does.

Maybe even more important, the political point. How is it that these countries managed to reject the siren song of the Washington Consensus? After all, it promised (1) development would be so much faster with access to the savings of the rich world via unfettered financial flows; (2) if something did go wrong, the IMF loans were always available to bridge short-term foreign-exchange shortfalls; and, implicitly, (3) if things fell apart completely, unrestricted financial flows would ensure that elites could extract their wealth from a wrecked economy.

Around 1990, when I was first becoming aware of politics, we took it for granted that the IMF was one of the great forces for evil in the world. And you know what? I think we were right.

Which makes it all the more remarkable that some substantial fraction of the world has managed to tear itself free of those usurers. In principle, there’s the potential for progressive struggle whenever the sociological basis of a form of political consciousness requires it to cohere somewhere beneath the top of a value chain. But in practice, it’s hard to do. Much easier for the representatives of a subordinate class or geography to constitute themselves as an agent of the elites above rather than the masses below. So while self-insurance via reserve accumulation might seem like a small step towards socialism, I think it’s kind of a big deal. Economically, you have to recognize that Asian economies are not in depression now thanks to prudential state action, not to the pseudo-logic of “conditional convergence”. And politically it’s even more remarkable, in the scale of things, that Asian elites have been able even to this extent to identify themselves with their national economies rather than the global owning class.

[1] “All Forex Inflows” is the sum of gross portfolio inflows, inward FDI, other inward investment and exports. (The gross numbers are conceptually the correct ones, for reasons I can’t explain here but hopefully are obvious.) The peak quarter is 1997Q2 for the 1997 crisis. For the recent crisis it is 2008Q2 for Indonesia, 2007Q4 for Korea, 2007Q2 for the Philippines and 2008Q1 for Thailand. The IMF does not have data for Malaysia prior to 1999.

[2] As on so many topics, Joan Robinson’s contribution is essential and mostly unacknowledged.

[3] The ratio of the necessary fall in output to the balance of payments gap to be closed is equal to one over the marginal propensity to import. Countries in this situation almost always sharply raise the domestic rate of interest, which theoretically helps attract short-term financial flows to bridge the gap, but in practice is mostly just a mechanism to reduce domestic incomes.

How I Learned to Stop Worrying and Love Default

If the debt-ceiling negotiations (summarized here) drag on to the point where there is real doubt about the full repayment of Treasury securities, would that be a disaster? Or could it actually raise output and employment?

Nick Rowe has a very smart argument for the latter, on straightforward Keynesian grounds:

Take the standard ISLM model… Draw the LM curve horizontal; either because the central bank conducts monetary policy by setting a rate of interest, or because the economy is stuck in a liquidity trap where money and government bonds are perfect substitutes. Now let’s hit it with a shock.

The shock is that all the bond rating agencies downgrade the rating on government bonds. Specifically, the rating agencies say that whereas the previous default risk was zero, there is now a 1% chance per year that the government will renege on 100% of all promises to pay money on its bonds. (Or a 10% chance per year of reneging on 10% of repayments, whatever.) And assume that everybody believes the rating agencies. Further assume that the central bank holds the interest rate on government bonds constant… What happens?

My answer to this question is the same as the standard textbook answer to the question where the shock is an increase in expected inflation by 1%. … And if you: did believe in the ISLM model; and did believe that the economy was stuck in a liquidity trap; and did believe the economy needed an increase in Aggregate Demand; and did believe that fiscal policy either should or could not be used, for whatever reason; you would say that this increase in expected inflation is a good thing. At first sight, the answer to the question “what is the effect of a 1% increase in perceived risk on government bonds?” is exactly the same as the answer to the question “what is the effect of a 1% increase in expected inflation?”. … Both result in the same 1% fall in the real risk-adjusted rate of interest on private saving and investment and the same rise in AD and real income.

This is one of those points that seems bizarre and counterintuitive when you first hear it and completely obvious once you’ve thought about it for a moment. Suppose a bond already pays zero interest; how can its yield go lower? One way is for inflation to get higher, so the principal repayments are expected to be worth less. But another is for the default probability to rise, which also reduces the expected value of the principal repayments. At the level of abstraction where a lot of these debates happen, for important purposes the two should be identical. If you believe the zero lower bound story — that monetary policy would have brought unemployment down to normal levels by now, if it were just possible to reduce the federal funds rate below zero — then you should support a (temporarily) nonzero default risk on government debt for the exact same reason that you support (temporarily) higher inflation — it creates the economic equivalent of negative interest rates.
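
To see the equivalence in numbers, here is a toy calculation (my own illustrative figures, not Rowe’s): the expected real payoff to holding a zero-interest bond falls by about one percent whether you add a point of expected inflation or a one-percent chance of total default.

```python
# Expected real gross return on a one-year government bond paying nominal
# rate i, with expected inflation pi and probability p_default of losing
# everything. All numbers are illustrative.

def expected_real_return(i=0.0, pi=0.0, p_default=0.0):
    expected_nominal_payoff = (1 + i) * (1 - p_default)  # lose the whole claim with prob p_default
    return expected_nominal_payoff / (1 + pi) - 1

print(expected_real_return(i=0.0, pi=0.0,  p_default=0.0))   # baseline: 0.0
print(expected_real_return(i=0.0, pi=0.01, p_default=0.0))   # 1% expected inflation: about -0.0099
print(expected_real_return(i=0.0, pi=0.0,  p_default=0.01))  # 1% default risk: -0.01
```
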

There are lots of caveats in practice. (There are almost always lots of caveats.) And if you’re a ZLB skeptic — if you don’t believe even a negative interest rate would be effective in boosting demand — then default risk won’t help much either. But analytically, it’s still an important point. Among other things, it helps explain why the threat of default has not moved the price of Treasury securities at all.
I’d assumed, up until now, that it was because asset owners took it for granted that the debt ceiling would, in fact, be raised, or that, if not, debt payments would be prioritized over everything else.

I still suppose that’s true. But here’s a more general reason. Another way of thinking of the zero lower bound phenomenon is that the government’s commitment to issue zero-interest liabilities in the form of cash and reserves sets a ceiling on the price of its liabilities (remembering that price and yield move inversely). This price ceiling means there is excess demand. So if you have some exogenous factor that would normally lower the price of Treasuries, it doesn’t do so; at the margin, it just reduces the backlog of frustrated buyers at the current price.
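
A toy example of the price-yield relationship and the ceiling, assuming a one-year zero-coupon bill with a face value of 100 (illustrative numbers only):

```python
# One-year zero-coupon bill with face value 100. Price and yield move
# inversely; since cash yields exactly zero, nobody pays more than 100 for
# a claim on 100, so the price is capped and the yield floored at zero.

face = 100.0

def bill_yield(price):
    return face / price - 1

for price in (95.0, 99.0, 100.0):
    print(price, round(bill_yield(price), 4))
# 95.0  0.0526  -> positive yield
# 99.0  0.0101
# 100.0 0.0     -> at the cash-imposed price ceiling, the yield is zero
```
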

Cool.

UPDATE: And here, right next to the Rowe piece in Google Reader, is an FT Alphaville item about how settlement failures (not delivering a security at the date contracted) seem to be becoming increasingly deliberate in secondary markets for Treasuries, as a form of “unconventional financing.” It’s not an exact analogy, but there’s an important parallel: In private financial markets, when an interest rate is stuck at zero, how completely a debt is honored becomes the natural margin on which terms adjust.

Guest Post from Will Boisvert

Just above this is a long post by Will Boisvert on the relative risks of nuclear power in the light of the Fukushima disaster. It’s very long, but (in my opinion) very worth reading. I haven’t seen any comparably thorough discussion elsewhere.

For whatever it’s worth, while I’m not competent to evaluate every specific factual claim here, on the big picture I’m convinced. Boisvert is right. The practical alternative to nuclear power is fossil fuels, and by every metric fossil fuels are much worse, even setting climate change aside. (Include climate change and fossil fuels are much, much, much worse.) There are quite a few people I respect who don’t agree; I hope they’ll read this piece and take its arguments seriously. The takeaway: “Even if you accept [the worst-case estimates of the death tolls from past nuclear disasters], there is less than a one-in-25 chance that, next year, a Chernobyl-scale nuclear disaster will kill a quarter of a million people; there is a dead certainty that coal power will kill that many.”

I hope Will will post more here in the future, but as always, who knows.

Trying to Be for Nuclear Power When It Blows Up in Your Face

by Will Boisvert

Call me perverse, but ever since the Fukushima plant blew up and started spewing radiation into a depopulated countryside, I’ve been talking up nuclear power. In my estimation, nuclear is the only carbon-free energy source that can power the economy—wind and solar are too feeble and fickle—and we can’t stop global warming without it. But that’s a debate for another time, as are questions of costs, nuclear waste, and peak uranium. Nuclear stands up well on all these points, but here I’ll consider just the issue of safety because the association with apocalyptic, trans-historical death and devastation is what really motivates opposition to nukes. To me, that’s ironic, because safety is actually one of nuclear’s strongest suits, exploding plants and all. When you do the math, nuclear risks—Chernobyls and Fukushimas included—are modest in comparison with other risks that we take for granted. In particular, nuclear power is safer—far safer, statistically, by orders of magnitude—than the fossil-fuel-dominated power system we have now.

Coal vs. Nukes: The Body Count

The crux of the issue is a comparison of the safety and health impacts of coal-fired power plants and nuclear power plants. Global electricity generation is dominated by coal plants, which produce about 42 percent of the world’s supply, three times nuclear’s share. Replacing coal-generated power is a key step in decarbonizing the energy system, and it’s a task that nuclear is uniquely suited to accomplish. So how do the two energy sources stack up in terms of safety? Coal produces a steady stream of toxic emissions, while dangerous nuclear emissions come in waves, the main one being the Chernobyl disaster, the only nuclear accident before Fukushima to have killed appreciable numbers of civilians. Chernobyl was as bad as a nuclear catastrophe gets—an explosion and uncontained fire raging in the exposed heart of a reactor that lofted huge amounts of radioactive gas and soot into the sky for days. (I’ll argue below that Fukushima is nowhere near as bad.) So the key comparison to make is between the Chernobyl disaster and the steady-state performance of coal-fired power plants. And it turns out to be an open-and-shut case: when you put nuclear’s radioactive emissions, Chernobyl and all, beside the air pollution emitted by coal-burning plants, coal kills many times more people than does nuclear power.

First, the toll from coal. According to recent studies by the Clean Air Task Force and the American Lung Association, about 13,000 people die each year in the United States from air pollution from coal-burning power plants. It’s much worse in China; depending on the estimate, 300,000 to 700,000 people a year die from outdoor air pollution there, much of it from coal-burning boilers in power plants and factories. Worldwide, the World Health Organization estimates that about 1.2 million people die each year from outdoor air pollution. I couldn’t find a precise figure for the portion of those deaths caused by coal-fired power plants, but assuming that it’s the same as in the United States, 19 percent, then coal power is killing about 230,000 people a year. If you add in emissions from oil-fired power plants and the extensive water pollution from coal-burning, the toll from fossil-fueled electricity is higher still.
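
The arithmetic behind that 230,000 figure, spelled out; note that the 19 percent coal share is an assumption carried over from the US numbers, as the text says.

```python
# Worldwide deaths per year attributed to outdoor air pollution (the WHO
# figure cited above), and the assumed share caused by coal-fired power.
outdoor_air_pollution_deaths = 1_200_000
coal_power_share = 0.19  # assumed equal to the US share

print(outdoor_air_pollution_deaths * coal_power_share)  # 228,000, i.e. "about 230,000 people a year"
```
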

Now let’s look at Chernobyl. According to a 2008 study by the UN Scientific Committee on the Effects of Atomic Radiation, Chernobyl will have killed about 9,000 people once the radioactivity decays away, almost all of them cancer victims. Anti-nukes dispute that number with arguments both silly (conspiracies at the UN) and cogent (UNSCEAR left out some populations that got light dustings of Chernobyl fallout). Lisbeth Gronlund of the anti-nuke Union of Concerned Scientists recently estimated that the final Chernobyl death toll will be about 27,000, a number that’s in line with a mid-range consensus. At the high end, a recent book by Yablokov et al., Chernobyl: The Consequences of the Catastrophe for People and the Environment, which has been widely cited by greens, puts the Chernobyl cancer toll through the year 2056 at up to 264,000 deaths. (The Yablokov study has been strongly criticized by radiation scientists [see Radiation Protection Dosimetry (2010), vol. 1, issue 1, pp. 97-101] and other commentators, including Gronlund.)

Why the huge discrepancies on Chernobyl figures? Well, it’s hard to get firm empirical evidence of the Chernobyl cancer toll, because radiation is such a weak carcinogen that its effects at low doses can’t be distinguished from statistical noise. Epidemiological studies that count excess cancer deaths usually find no statistically significant increase above the normal background incidence. That doesn’t necessarily mean that they are not there, just that it’s impossible to discern some thousands of possible Chernobyl cancer deaths amid millions of ordinary cancer deaths. So cancer fatalities have to be estimated by multiplying estimates of the radiation dose that people received by an assumed risk factor extrapolated from studies of people who received large doses, like Hiroshima survivors. Gronlund, for example, starts by taking UNSCEAR’s estimate of the total dose of Chernobyl radiation incurred by everyone in the world, 465,000 person-Sieverts. She then multiplies that dosage by a risk factor, taken from the National Academy of Sciences’ report on the Biological Effects of Ionizing Radiation (BEIR-VII), of 570 cancer fatalities for every 100,000 people who each receive a dose of 100 milli-Sieverts (100 mSv). (That works out to 570 cancer deaths per 10,000 person-Sieverts; multiply 570 deaths/10,000 person-Svs by 465,000 person-Svs and you get 26,505 total deaths.) Gronlund’s method is pretty standard, but such calculations can yield wildly varying results depending on underlying guesstimates of the dosage and risk factor. (Pro-nukes even insist that there is a threshold below which small radiation doses pose no cancer risk.)
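
Gronlund’s calculation, restated as a few lines of code; the inputs are exactly the figures cited in the paragraph above.

```python
# Collective dose from Chernobyl (UNSCEAR) times the BEIR-VII risk factor.
collective_dose_person_sv = 465_000

# BEIR-VII: 570 fatal cancers per 100,000 people each receiving 100 mSv,
# which is 570 fatal cancers per 10,000 person-sieverts of collective dose.
deaths_per_person_sv = 570 / 10_000

print(collective_dose_person_sv * deaths_per_person_sv)  # 26,505, rounded above to ~27,000
```
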

But the controversy over the precise Chernobyl numbers is academic, in my view, because they all tell the same basic story: the catastrophic failure of nuclear power at Chernobyl was nowhere near as bad as the yearly routine of the fossil-fueled power system. In the 25 years since the Chernobyl accident in 1986, for example, coal-plant air pollution has killed over 325,000 people in the United States alone. That’s a substantially larger number than Yablokov’s exaggerated estimate for Chernobyl cancer deaths in the entire world through 2056. To put it another way, if you accept Yablokov’s estimate, there is less than a one-in-25 chance that, next year, a Chernobyl-scale nuclear disaster will kill a quarter of a million people; there is a dead certainty that coal power will kill that many. Even when you adjust for the larger number of coal plants, the risks of nuclear catastrophe are still much smaller than those of business-as-usual coal. And if you accept Gronlund’s consensus estimate of 27,000 Chernobyl deaths, which I do, then you have to conclude that the risks from nuclear catastrophe pale to insignificance. Gronlund’s figures suggest that the lingering effects of Chernobyl fallout were killing on average about a thousand people per year from 1986 to 2005; the remaining undecayed radiation is now killing perhaps a few hundred people a year. These numbers hardly register beside the hundreds of thousands of people killed every year in the fossil-fuel holocaust.

Besides Chernobyl-style spews, there is also the question of radioactivity released during normal operations, like the tritium leaks that greens regularly sound the alarm over. Nukes do routinely emit traces of radioactivity, but the amounts are so small that health risks are minuscule to none. The most comprehensive epidemiological study, by the National Cancer Institute, found no statistically significant excess cancer risk in counties with nuclear plants. Indeed, we can estimate the tininess of the risk by using Gronlund’s method. According to the EPA, the average American gets a radiation dose of less than 0.001 mSv (0.1 millirem) per year from nuclear power plants. Multiply that by 300,000,000 Americans and the BEIR-VII risk factor of 570 cancer deaths per 100 mSv dose per 100,000 people exposed, and you get a maximum of 17 Americans dying of cancer every year from routine nuclear plant radiation—less than half a day’s worth of American coal-pollution fatalities. A final irony is that coal plants actually release much more radioactive material into the environment than do nuclear plants under normal conditions. Embedded in the millions of tons of coal a single plant burns every year are hundreds of pounds of radioactive uranium, thorium and radon, which go up the smokestacks and into our lungs or get dumped in ash-heaps where they lie open to the elements and leach into streams and ground water. (McBride, J. P., et al., “Radiological Impact of Airborne Effluents of Coal and Nuclear Plants,” Science, 12/8/1978. Cited in http://www.ornl.gov/info/ornlreview/rev26-34/text/colmain.html).
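
And the same dose-times-risk-factor method applied to routine plant emissions, using the EPA dose figure quoted above:

```python
# Routine emissions: average US dose from nuclear plants times population,
# then the same BEIR-VII risk factor as above.
dose_per_person_sv = 0.001 / 1000      # 0.001 mSv/year, converted to sieverts
population = 300_000_000
deaths_per_person_sv = 570 / 10_000    # BEIR-VII risk factor

collective_dose = dose_per_person_sv * population   # 300 person-Sv per year
print(collective_dose * deaths_per_person_sv)       # ~17 deaths per year, the figure above
```
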

How Bad is Fukushima?

Fukushima has all the trappings of the nuclear nightmare scenario: creaky old reactors, it-can’t-happen-here hubris, corporate perfidy, Keystone Cops bumbling, explosions, spews and refugees. Despite all that, the expert consensus is that Fukushima is nowhere near as bad as Chernobyl. It’s easy in hindsight to castigate the nuclear establishment that lapsed so spectacularly at Fukushima, but there’s a more obscure yet equally important lesson there: because of steady advances in design and emergency response, nuclear disasters aren’t as disastrous as they used to be. That’s in part because light-water reactors like Fukushima’s are much better designed than the flimsy, volatile RBMK reactor at Chernobyl. LWRs have a “negative void coefficient,” which means that when the cooling water ran out, the fission chain reaction shut down. At Chernobyl, the RBMK’s positive void coefficient meant that a transient loss of coolant made the chain reaction speed up uncontrollably until the reactor itself exploded. Also, LWRs do not have combustible graphite in their cores to fuel a fire, as the RBMK did. Most importantly, unlike the RBMK, the Fukushima reactor had strong containment structures to curb the radioactive release (although, alas, not strong enough to entirely contain it). A second factor is that, unlike the Soviets, the Fukushima authorities did disaster-response by the book; for example, timely evacuations, distribution of potassium iodide and bans on milk-drinking mean that the Japanese will avoid the spike in thyroid cancers seen at Chernobyl.

The result of improved design and emergency response was that Fukushima’s three meltdowns generated a smaller and less damaging spew than did Chernobyl’s single reactor explosion. Estimates put the total release of Fukushima radioactivity at 770,000 tera-becquerels, about 15% of the Chernobyl spew of 5.2 million TBq. Extrapolating naively from Gronlund’s Chernobyl death toll of 27,000, we might expect perhaps 4,000 total cancer fatalities from the Fukushima spew. There’s reason to hope for even fewer casualties. Dozens of Chernobyl emergency workers died from acute radiation poisoning, which has killed no one in Japan. And much of the Fukushima radioactivity blew out to sea or was dumped into the Pacific where it will be infinitely diluted and harm nobody. Still, let’s chew a bit on that 4,000 figure. Fukushima Daiichi has been churning out 4.7 gigawatts of power since 1979. Coal power in the United States generates about 314 GW of power and causes roughly 13,000 deaths per year from air pollution, or about 41 deaths per GW per year. So if Fukushima had been a 4.7 GW coal-fired plant operating over the past 31 years, it probably would have killed about 6,000 people from air pollution. Thus, even counting the meltdown casualties, Fukushima Daiichi likely saved thousands of lives on balance over its operating lifetime by abating coal emissions.
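
The two back-of-the-envelope steps in that paragraph, made explicit; all inputs are the figures quoted above, and the scaling from Chernobyl is deliberately naive, as noted.

```python
# Step 1: scale Gronlund's Chernobyl estimate by the relative size of the spew.
chernobyl_release_tbq = 5_200_000
fukushima_release_tbq = 770_000
chernobyl_deaths = 27_000
fukushima_deaths = chernobyl_deaths * fukushima_release_tbq / chernobyl_release_tbq
print(round(fukushima_deaths))                        # ~4,000

# Step 2: what a 4.7 GW coal plant running for 31 years would have killed.
us_coal_deaths_per_year = 13_000
us_coal_capacity_gw = 314
deaths_per_gw_year = us_coal_deaths_per_year / us_coal_capacity_gw   # ~41
print(round(deaths_per_gw_year * 4.7 * 31))                          # ~6,000
```
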

It’s too soon to know exactly what the health effects of the Fukushima spew will be. Scientists will make estimates from detailed surveys of radiation exposure, and then monitor everyone for decades to try to empirically detect increased cancer incidence. But back-of-the-envelope calculations suggest that the effects will be small. Clean-up work at the plant, for example, isn’t quite the suicide mission it’s made out to be. As of June 18, the 3,514 workers who have worked on the cleanup since the tsunami had received a collective radiation exposure of 114 person-Sieverts, a dose that would cause 7 cancer fatalities among them over a lifetime. If they continue at that dose rate and it takes a year to bring the reactors to cold shutdown, they might incur 28 excess cancer fatalities over the 700 that would normally occur. If we assume that the roughly 90,000 Fukushima evacuees also somehow received the average three-month dose of a cleanup worker before they fled—a huge overestimate—that would result in 166 extra cancer deaths over the 18,000 they would normally suffer.
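
For anyone who wants to check those numbers, here is the same arithmetic in code; the scaling of the workers’ dose up to a full year is approximate, so that result lands near rather than exactly on the 28 quoted above.

```python
# BEIR-VII risk factor used throughout: 570 fatal cancers per 100,000 people
# per 100 mSv, i.e. 0.057 deaths per person-sievert of collective dose.
deaths_per_person_sv = 570 / 10_000

# Cleanup workers: 114 person-Sv of collective dose through June 18.
workers_dose = 114
print(workers_dose * deaths_per_person_sv)        # ~6.5, the "7 cancer fatalities" above

# If that dose rate continues for a full year (roughly four such periods):
print(4 * workers_dose * deaths_per_person_sv)    # ~26, close to the 28 quoted above

# Evacuees: assume all ~90,000 got the average worker's dose of 114/3,514 Sv.
evacuee_collective_dose = 90_000 * (114 / 3_514)  # ~2,920 person-Sv (a huge overestimate)
print(round(evacuee_collective_dose * deaths_per_person_sv))   # ~166 extra cancer deaths
```
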

Judging by the Japanese government’s radiation data, radiation in the rest of Japan should be a marginal concern. As of July 8, monitoring posts in Fukushima prefecture outside the 20 km evacuation zone were showing an average outdoor radiation reading of 0.61 micro-sieverts/hour, higher than the normal background reading of about 0.04 uSv/hour. Those readings are gradually falling, but currently they would add up to an extra radiation exposure above background of 5 milli-sieverts in a year. How dangerous is that? By comparison, residents of Denver get an extra 8 mSv of radiation per year over what they would receive living on the East Coast, because of the mile-high elevation and a local abundance of radon gas. So, outside the EZ, Fukushima Prefecture is substantially less radioactive than Denver. Elsewhere in Japan radiation has returned to normal background levels—and indeed was never elevated in most places—except in Miyagi and Ibaraki prefectures, where radiation readings are slightly elevated but way below Denver levels. There are no detectable quantities of radioactive isotopes in the drinking water anywhere in Japan outside Fukushima.
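
The conversion from an hourly reading to an annual dose, for anyone checking the arithmetic:

```python
# Convert the average outdoor reading outside the evacuation zone into an
# extra annual dose above background.
reading_usv_per_hour = 0.61
background_usv_per_hour = 0.04
hours_per_year = 24 * 365

extra_usv = (reading_usv_per_hour - background_usv_per_hour) * hours_per_year
print(round(extra_usv / 1000, 1))   # ~5.0 mSv per year above background
```
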

Inside the 20 km evacuation zone, and in a small plume to the northwest, radiation levels are much higher. The EZ is largely empty now, but it’s illuminating to try to estimate what the health effects would be if people were still living there. That exercise can give us a more realistic understanding of the scale of the disaster and of the various mechanisms that attenuate the harm it will cause; it shows us why radiation spews loom small in epidemiological studies of disease and mortality. Let’s look at the 10-20 km band of the EZ, where outdoor ambient radiation levels—the “external dose” of radiation that comes from outside the body—averaged 6.4 micro-sieverts/hr as of July 10. A person receiving that external dose rate for an 80-year Japanese life expectancy would get a total dose of 4.5 sieverts. That’s a lot of radiation; it would cause 25,000 extra cancer deaths per 100,000 people in addition to the roughly 20,000 that would normally occur, and thus more than double the cancer risk to an individual—raising it about as much as smoking does. But, for several reasons, actual radiation exposures will be much smaller. First, radioactive decay constantly reduces the quantity of ambient radionuclides. Almost all the radiation comes from soil depositions of cesium-134, with a half-life of 2 years, and cesium-137, with a half-life of 30 years, each of which is currently generating about half the radiation. As these isotopes decay, radiation exposures will dwindle accordingly. When you do the math—sorry, that means integrals of exponential functions—the estimated individual dose over 80 years drops to just 1.32 Sv (causing 7,500 cancer deaths per 100,000 people). There’s also soil migration: dirt blocks radiation, so as the radioactive cesium gradually percolates down beneath the surface of the ground, it stops irradiating people. Assuming very conservatively that soil migration attenuates ambient radiation by at least 5% every ten years, the effect further reduces the dose—more integrals!—to 0.97 Sv (5,500 deaths). Then, because floors and walls also block radiation and people generally keep soil out of buildings, the external dose people receive indoors is much lower than their outdoor exposure by a factor of 4 or more. Assuming that people spend at least three quarters of their time indoors, we should therefore cut the external dose estimate by 56% to 0.42 Sv (2,400 deaths). On the other side of the ledger, we have to add in the internal doses from radionuclides lodged inside the body when people ingest contaminated food and water or inhale radioactive dust. Chernobyl data suggest that these internal doses might be a third as much as the external dose, so that raises the total dose to 0.56 Sv (3,200 deaths per 100,000 people living there for 80 years). Okay, let’s stop now. There are other radio-abatement wrinkles that I don’t know how to model, but this is a serviceable ball-park estimate: spending one’s life in the 10-20 km band of the EZ elevates one’s cancer risk by perhaps 16%. In the 5-10 km band, radiation is averaging 10 uSv/hr, for a lifetime cancer risk of about 25% above normal; and in the 2-5 km band it’s running at 27 uSv/hr for a 68% elevated cancer risk. These are rough projections based on cautious assumptions that greatly overstate the risk; at this point we should just say that living in the EZ would impose a significant extra risk of cancer because of the radiation, but one that’s substantially less than the risk from smoking.
That makes the Fukushima spew a serious local health problem, not an apocalyptic one.
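
For readers who want to see the “integrals of exponential functions,” here is a rough reproduction of the first two steps of the 10-20 km calculation, under simple assumptions of my own: a 50/50 initial split between Cs-134 and Cs-137, half-lives of 2 and 30 years, and no soil migration or indoor shielding yet. With these assumptions the decay step alone brings the 80-year dose down to roughly 1.1 Sv, in the same ballpark as, though not identical to, the 1.32 Sv above; the exact figure depends on details of the isotope mix and the integration.

```python
import math

# 10-20 km band: 6.4 uSv/hr outdoor dose rate, 80-year exposure, with the rate
# decaying as Cs-134 (half-life ~2 y) and Cs-137 (half-life ~30 y) decay.
initial_rate_usv_per_hr = 6.4
hours_per_year = 24 * 365
years = 80
deaths_per_person_sv = 570 / 10_000      # BEIR-VII risk factor, as above

def integrated_dose_sv(fraction_of_rate, half_life_years):
    """Integrate an exponentially decaying dose rate over the exposure period."""
    lam = math.log(2) / half_life_years
    rate_sv_per_year = fraction_of_rate * initial_rate_usv_per_hr * hours_per_year / 1e6
    return rate_sv_per_year * (1 - math.exp(-lam * years)) / lam

undecayed = initial_rate_usv_per_hr * hours_per_year * years / 1e6
decayed = integrated_dose_sv(0.5, 2.0) + integrated_dose_sv(0.5, 30.0)

print(round(undecayed, 2))   # ~4.49 Sv, the "4.5 sieverts" above
print(round(decayed, 2))     # ~1.1 Sv under these assumptions (the post gets 1.32 Sv)
print(round(100_000 * decayed * deaths_per_person_sv))   # ~6,300 deaths per 100,000 (post: 7,500)
```
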

So even the EZ is something less than the radiological moonscape of anti-nuke hysteria. There are certainly large tracts of land there that are dangerously radioactive and should be fenced off for years; there are also cool spots that are half as radioactive as Denver. Time will tell when and how much of this area can be reoccupied. But the creation of vast exclusion zones is also a feature of coal power, through mining—and even more so of renewable technologies, like the solar power that the Japanese government said it would turn to after the tsunami. The 20-km Fukushima EZ encompasses 226 square miles of land (half of it is sea). Compare this with the size of a solar plant that could equal Fukushima Daiichi’s 4.7 gigawatt output and 90% capacity factor. The Martin Next Generation Solar Energy Center in Florida, for example, generates 75 megawatts from 500 acres of solar mirrors, with a capacity factor of 24%, outputting 155,000 megawatt-hours per year (pretty good for a solar plant). To generate the 37,000 gigawatt-hours per year of a Fukushima Daiichi, a similar solar plant would need to cover 186 square miles—an exclusion zone that’s 83% of the size of Fukushima’s, literally paved with mirrors sitting atop bulldozed, scraped-bare soil. And that’s every solar plant, not just the rare one that gets hit by a tsunami.
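
The land-area comparison, step by step; all inputs are the figures quoted in the paragraph above, and small differences from the quoted 186 square miles and 83% come only from rounding.

```python
# Martin Next Generation Solar Energy Center: 75 MW on 500 acres, 24% capacity factor.
martin_mwh_per_year = 75 * 24 * 365 * 0.24          # ~158,000 MWh (quoted above as ~155,000)
acres_per_mwh = 500 / martin_mwh_per_year

# Fukushima Daiichi: 4.7 GW at a 90% capacity factor.
fukushima_mwh_per_year = 4_700 * 24 * 365 * 0.90    # ~37,000 GWh per year

acres_needed = fukushima_mwh_per_year * acres_per_mwh
square_miles = acres_needed / 640                   # 640 acres per square mile
print(round(square_miles))                          # ~184, roughly the 186 square miles above
print(round(100 * square_miles / 226))              # ~81, roughly the "83%" of the 226 sq mi EZ
```
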

We’ll have to wait for detailed radiological surveys to get precise estimates of radiation depositions, doses and health effects. Still, it’s hard to see Fukushima fatalities exceeding a few thousand all told; they will likely be a small fraction of that. (And we will have to take them on faith, since civilian casualties will be far too few for epidemiological studies to discern.) That’s a tragedy, certainly, but it pales beside the contemporaneous death toll from coal pollution, which killed a thousand Americans and going on twenty thousand people worldwide in just the first month after the tsunami, as it does every month. The worst health effects of Fukushima will therefore stem from public anxiety that shuts down or slows the construction of nuclear plants, the only technology that can replace fossil fuels. Germany, for example, immediately closed its seven oldest nukes, which had been running trouble-free for decades. The resulting shortfall in electricity will be made up largely by burning more natural gas and coal, and by importing power from French nuclear plants.

The Banality of Radiation

When we shift the focus away from comparative mass body counts, we get a less fraught perspective on nuclear power as an ordinary and rather modest item on the list of marginal everyday risks. I could compare it to car crashes or beer or nitrite-laden barbecue. Instead, I’ll just compare it to other sources of radiation in which we blithely wallow even though they give us drastically larger doses than we get from nuclear plants.

Americans absorb an average radiation dose of 620 millirems (620 mrem) per year from natural and man-made sources. (A sievert is 100 rems.) The biggest sources are radon gas (200 mrem, including 9 mrem carried in with household natural gas) and medical procedures (about 300 mrem, some of which comes from just standing near radiotherapy patients). Naturally occurring radioactive potassium inside your body contributes 39 mrem. (Because bananas concentrate potassium, they are slightly radioactive at about 0.01 mrem per banana.) Moving to Colorado increases your cosmic radiation dose by about 67 mrem and your radon dose by a whopping 800 mrem. Everything that comes out of the ground is a bit radioactive, so if you live in a brick house instead of a wooden house you get an extra 7 mrem. TV and computer screens contribute 1 mrem. A coast-to-coast airplane ride gives you 2 mrem per flight—high elevation spells high radiation—so flight attendants get a bigger occupational radiation dose than do nuclear plant workers. All of these everyday radiation doses dwarf the 0.1 mrem per year that we get from nuclear plants, yet they rouse no concern at all.

Indeed, we avidly seek out radiation as a cosmetic. Ultraviolet light from sunlight and tanning beds has similar effects to ionizing radiation: a sun-burn is an acute radiation burn, and skin cancers caused by UV light kill thousands of people every year. Yet greens do not march on tanning salons, and we all blissfully bare ourselves to the nuclear furnace in the sky. We think of radiation as the stuff we crawl into cellars like rats to escape after a nuclear holocaust; we should as well think of it as the stuff prom queens bask in to get a healthy glow.

Better yet, we should simply think of radiation from nukes as an ordinary form of pollution, much like smoke-stack and tail-pipe fumes: at high doses it can kill; at modest doses it poses modest long-term risks; at tiny doses it is innocuous. We should regulate and abate it, as we would any pollutant. But we should also recognize that nuclear plants emit far less harmful pollution than competing power sources. Considerations of public health, as well as concerns about the slow-motion tsunami from melting ice caps, therefore dictate that we build nukes as fast as possible. We need a thirty-fold buildup of nuclear plants to nuclearize the energy supply, so let’s assume the average death rate from nuclear spews, currently under a thousand a year, scales accordingly to 30,000 a year. (There’s reason to anticipate that the nuclear industry will steadily improve upon that standard.) Sounds pretty gruesome, but what we get in exchange is the eradication of the combustion pollution that kills three million people every year—not just coal emissions but automobile exhaust (eliminated by electric cars) and smoke from coal- and wood-fired stoves in the third world (eliminated by electric ranges). Again, it’s an open-and-shut case: a complete switch-over from our current energy system to nuclear power would reduce the number of lives lost to energy production by 99 percent, a level of safety that no technology exceeds. (Even wind turbines have their catastrophic risks.) Nukes afford us prodigious amounts of clean energy—energy that liberates us from the manifold and very deadly ravages of fossil-fuel combustion.
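
The scaling argument in numbers (the thirty-fold factor and the mortality figures are the ones given in the paragraph above):

```python
# Scale current nuclear-accident mortality up thirty-fold and compare with
# the combustion pollution it would displace.
nuclear_deaths_per_year_now = 1_000        # "currently under a thousand a year"
buildup_factor = 30
combustion_deaths_per_year = 3_000_000

nuclear_deaths_scaled = nuclear_deaths_per_year_now * buildup_factor
reduction = 1 - nuclear_deaths_scaled / combustion_deaths_per_year
print(nuclear_deaths_scaled, round(100 * reduction))   # 30,000 deaths/year, a ~99% reduction
```
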


The Faustian Bargain: Nuclear Power and the Modern Predicament

Unfortunately, statistical arguments haven’t settled this debate. In part that’s because very rare catastrophic risks loom larger in people’s minds than statistically more dangerous routine risks. But deeper irrationalities are at play. Radiation and air pollution are identical in their effects—neither will make you drop dead, both may increase your risk of getting lung cancer 20 years down the road—yet we flee in panic from one while shrugging off the other. Psychology thus plays a huge role in shaping attitudes. Fallout is indelibly linked to nuclear war, while the hazmat suits of radiological emergencies evoke deadly pandemics. The very invisibility of radiation feeds the paranoid imagination; if it smoked and billowed it would seem less insidious.

Worse, nuclear power has become a towering symbol, especially on the environmentalist left, of the whole miasmatic sickness of advanced technological society. Greens think of global warming not just as a crisis to be solved but as a cosmic retribution for all the ills of modernity, the grand internal contradiction that will finally destroy industrial capitalism. The appeal of wind and solar to left romantics is that they promise to return us to a state of pastoral innocence by recasting civilization around a deep harmony with the natural elements, using human-scaled technologies that foster the sublime egalitarian community of the decentralized grid. Nuclear power clashes with this vision of social redemption through sustainability. It is a font of unnatural elements, the antithesis of everything organic; it is the spawn of the military-industrial complex and the perpetuator of the centralized power of corporate elites and Big Brother; it is the glib technological fix that puts off, and thus immeasurably worsens, the inevitable day of reckoning for a heedlessly overconsuming society. Hence the hostility on the part of most greens to a proven technology that offers enormous environmental benefits. To greens, a world saved by nuclear power is a world that’s not worth saving.

The underlying religious aspects of this mindset are captured in the ubiquitous green trope of nuclear power as a “Faustian bargain.” It’s an apt metaphor, suggesting the pursuit of unlimited power dredged from the underworld. Nuclear fission, the closest thing we have to hell-fire, certainly fits it, as does the trade-off between fleeting electricity and eternal waste. Nor are the overtones of hubris, sorcerer’s apprenticeship and always-pending catastrophe entirely misplaced. The lessons of Fukushima point to obvious and inexpensive fixes: build taller seawalls, raise high the diesel generators, vent the hydrogen gas. New plant designs are safer than the Fukushima models; they have bigger, stronger containment vessels and passive cooling systems that don’t need electricity. Nevertheless, engineers aren’t perfect, so nuclear power can never be perfectly safe. If we build nukes as frantically as I think we should, then in another twenty-five years—hopefully longer—there will be another Fukushima; the devices of man will fail and the devil will claim his due.

And yet we cannot reject the Faustian bargain, which is inseparable from progress itself. Consider air travel, another safe industry built around an intrinsically disastrous technology. (And a classically Faustian one in that it arrogates to mortals the divine power of flight.) Whenever I fly, I am filled with horror at the implications. I’m riding along seven miles in the sky, hurled forward at 500 miles per hour by an inferno of explosive gas. A single hair-line crack, missed by a hung-over mechanic, could cause any one of dozens of fan-blades to shatter and blow apart the engine. I pray that if that happens the supersonic shrapnel will kill me instantly and spare me the long, unbearable plunge to earth. And catastrophe can take a thousand other guises: a terrorist’s bomb, a depressed pilot, an electrical short, a flock of birds. But as I mull all this, hoping that my muscle rictus will somehow hold the plane together, all around me people are chatting, stewardesses are doling out snacks, toddlers are skipping up and down the aisle.

That’s what it means to live in the modern world. Daily life depends on the harnessing in delicate equipoise of titanic energies that would turn and annihilate us if they slipped the leash for but an instant. And occasionally they do: the plane crashes, the reactor spews. But dire as the menace of technology is, we embrace it out of necessity and convenience, and because the age-old demons of poverty and backwardness and powerlessness are worse than any of our own making. We understand that the Faustian bargain is a good one, struck at a fair price. When things go wrong we sift the wreckage for lessons and, if the stats warrant, carry on. So with air travel, and so with nuclear power: disasters will continue to happen, but they will become less frequent and destructive as engineers and regulators learn from past mistakes; the risks and harms, already small, will shrink further (but never entirely disappear). That’s what progress looks like, and we shouldn’t turn away from it. If we let irrational fears stymie this most important of technologies, and let imagined risks obscure real ones, we’re in for a hellish future.

Some Thoughts on Negotiation

I don’t claim to be any expert at the negotiating table. But I was, about ten years ago, the lead negotiator for my graduate employee union (~3,000 members). I’ve spent plenty of time around unions, before and since. And in my years at the Working Families Party, where I was the designated wonk, I inevitably had some involvement with negotiations over the terms of bills. From which I derive an observation that’s perhaps relevant to the debt-ceiling talks.

The principals are never at the table.

Maybe because they’re busy; very likely because they’re diffuse; or maybe they’re only constituted through the negotiating process. In any case, there are not one but three negotiations going on: between the two agents at the table, and between each of those agents and their respective principals.

So for instance, the question for me as Local 2322 representative was not just what I think of this deal, but whether I can sell it to the membership. Even worse when you’re trying to nail down health care legislation; the union at least has a defined membership roll but the coalition exists only insofar as there’s a chance of passing something. In either case, your problem as a negotiator is the same: When you go back to your constituents, you have to convince them that the deal (1) is good enough and (2) is the best you could have got. What I’m talking about here is the tension between 1 and 2.

Let’s say you’ve got some big demand — a 20 percent pay raise, let’s say. And let’s say, miraculously, the employer agrees to this the first day of negotiations. This tells you two things: First, the deal you have now is better than you expected; but also, that the payoff to continued negotiations might be better than expected too. If they gave you this for just sitting down, they must be desperate; who knows what they would give if you could really hold their feet to the fire. So it’s not actually easier for you to convince your principals to ink a deal at this point. You’ve got an easier time selling them on (1), but a harder time selling them on (2).

Now you might say this is irrational, unfortunate, people should know how to say, Yes. But I don’t agree. The principal is not negotiating just this once. So knowing which agents, or which kinds of agents, are reliable may be every bit as important as getting the best possible deal in this round. So they’re observing, Did the negotiators keep pushing until the other side was ready to walk away, as much as or more than, Is the outcome acceptable. From the principals’ point of view, an early concession from the other side is a signal that this side’s agents need to ask for more to prove they are doing their jobs.

In this sense, even if you (the agent now, like Obama) are happy to agree with everything the other agents have proposed to you, it’s in your shared interest (yours and theirs) to only concede it at the last moment, and in return for the most costly concessions, to help them sell it to their principals.

Bottom line: Even if you intend to concede X, it’s in everyone’s best interest—including the negotiators on the other side—that you don’t give up X until the last possible minute. Conceding it early actually makes it harder for the other side to accept. This sounds like a paradox but I think it’s really a perfectly logical and inevitable implication of the negotiating situation.

It’s a broadly applicable problem in economics that a change in price has opposite effects if it’s considered once-and-for-all versus as a proxy for future changes. Whatever the model says, one has to think a change in prices might continue. This is how bubbles get started. Just so, politically, current concessions make the current deal look better — statically. But the relevant question is, do they make the current deal look better with respect to some future deal? To the extent that conceding now improves expectations about that future deal too, the net effect is indeterminate.

So, Obama’s a bad negotiator, then? Maybe. I have to admit, there’s something appealing, in a poetic-justice sense, to the idea that the decline of the labor base of the Democratic Party has led to a fatal loss of practical negotiating skills. The other, more obvious possibility is that he is not trying to get the best outcome for the Democrats in the sense that most people understand it — that he shares the Republicans’ essential goals. But I might put it a little differently — Obama’s bargaining position is weak precisely because of his independence, the fact that he doesn’t answer to anyone. Digby asks, quite reasonably, if we have any idea at this point what the President’s principles are. I might put the question a little differently: who are his principals?

Red Light, Green Light, Who Cares?

Interesting piece in the FT on Chinese inflation, and the persistent divergence between inflation as measured by the CPI and the GDP deflator. (The differences between price indices are something we could pay more attention to in general.) But I was struck by an odd juxtaposition. First, we get a quote from some analyst saying that the GDP deflator is overstating inflation:

In the last three months of 2010, the deflator made inflation out to be 7.3%, compared with 4.7% using the CPI. … if the 7.3% inflation the GDP deflator calculates is an overestimation, this has to mean that real GDP growth is higher than the 9.8% Beijing officially clocked for the last quarter—which is dangerously high, strengthening the fear of overheating China bears have been raising of late.

Then we get another analyst saying no, the deflator is understating inflation:

Our guess is that the GDP deflator which probably averaged over 10% last year is now running at around 11%. The natural implication of that, of course, is that real Chinese GDP could be much lower than officially stated. And that inflation, as a result, is indeed a much greater concern for authorities than is currently being implied.

So if inflation is lower than the official number, that’s a reason to step on the brakes. And if inflation is higher than the official number, that’s a reason to step on the brakes too.
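
A quick sketch of why the two readings point in opposite directions for real growth; the numbers are the ones quoted above, and the decomposition of nominal growth is the standard one.

```python
# Real growth is roughly nominal growth deflated by GDP-deflator inflation,
# so the same nominal number supports opposite conclusions about real GDP.
def real_growth(nominal_growth, deflator_inflation):
    return (1 + nominal_growth) / (1 + deflator_inflation) - 1

# Official story: ~9.8% real growth with a 7.3% deflator implies ~17.8% nominal growth.
nominal = (1 + 0.098) * (1 + 0.073) - 1

# First analyst: true inflation is only 4.7%, so real growth is higher (overheating fear).
print(round(100 * real_growth(nominal, 0.047), 1))   # ~12.5%

# Second analyst: true inflation is ~11%, so real growth is lower, but inflation itself is the worry.
print(round(100 * real_growth(nominal, 0.110), 1))   # ~6.1%
```
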

It’s almost like support for austerity isn’t really motivated by concerns about inflation (or interest rates). But what else could it be?