The Slack Wire

Guest Post from Will Boisvert

Below is a long post by Will Boisvert on the relative risks of nuclear power in the light of the Fukushima disaster. It’s very long, but (in my opinion) very much worth reading. I haven’t seen any comparably thorough discussion elsewhere.

For whatever it’s worth, while I’m not competent to evaluate every specific factual claim here, on the big picture I’m convinced. Boisvert is right. The practical alternative to nuclear power is fossil fuels, and by every metric fossil fuels are much worse, even setting climate change aside. (Include climate change and fossil fuels are much, much, much worse.) There are quite a few people I respect who don’t agree; I hope they’ll read this piece and take its arguments seriously. The takeaway: “Even if you accept [the worst-case estimates of the death tolls from past nuclear disasters], there is less than a one-in-25 chance that, next year, a Chernobyl-scale nuclear disaster will kill a quarter of a million people; there is a dead certainty that coal power will kill that many.”

I hope Will will post more here in the future, but as always, who knows.

Trying to Be for Nuclear Power When It Blows Up in Your Face

by Will Boisvert

Call me perverse, but ever since the Fukushima plant blew up and started spewing radiation into a depopulated countryside, I’ve been talking up nuclear power. In my estimation, nuclear is the only carbon-free energy source that can power the economy—wind and solar are too feeble and fickle—and we can’t stop global warming without it. But that’s a debate for another time, as are questions of costs, nuclear waste, and peak uranium. Nuclear stands up well on all these points, but here I’ll consider just the issue of safety because the association with apocalyptic, trans-historical death and devastation is what really motivates opposition to nukes. To me, that’s ironic, because safety is actually one of nuclear’s strongest suits, exploding plants and all. When you do the math, nuclear risks—Chernobyls and Fukushimas included—are modest in comparison with other risks that we take for granted. In particular, nuclear power is safer—far safer, statistically, by orders of magnitude—than the fossil-fuel-dominated power system we have now.

Coal vs. Nukes: The Body Count

The crux of the issue is a comparison of the safety and health impacts of coal-fired power plants and nuclear power plants. Global electricity generation is dominated by coal plants, which produce about 42 percent of the world’s supply, three times nuclear’s share. Replacing coal-generated power is a key step in decarbonizing the energy system, and it’s a task that nuclear is uniquely suited to accomplish. So how do the two energy sources stack up in terms of safety? Coal produces a steady stream of toxic emissions, while dangerous nuclear emissions come in waves, the main one being the Chernobyl disaster, the only nuclear accident before Fukushima to have killed appreciable numbers of civilians. Chernobyl was as bad as a nuclear catastrophe gets—an explosion and uncontained fire raging in the exposed heart of a reactor that lofted huge amounts of radioactive gas and soot into the sky for days. (I’ll argue below that Fukushima is nowhere near as bad.) So the key comparison to make is between the Chernobyl disaster and the steady-state performance of coal-fired power plants. And it turns out to be an open-and-shut case: when you put nuclear’s radioactive emissions, Chernobyl and all, beside the air pollution emitted by coal-burning plants, coal kills many times more people than does nuclear power.

First, the toll from coal. According to recent studies by the Clean Air Task Force and the American Lung Association, about 13,000 people die each year in the United States from air pollution from coal-burning power plants. It’s much worse in China; depending on the estimate, 300,000 to 700,000 people a year die from outdoor air pollution there, much of it from coal-burning boilers in power plants and factories. Worldwide, the World Health Organization estimates that about 1.2 million people die each year from outdoor air pollution. I couldn’t find a precise figure for the portion of those deaths caused by coal-fired power plants, but assuming that it’s the same as in the United States, 19 percent, then coal power is killing about 230,000 people a year. If you add in emissions from oil-fired power plants and the extensive water pollution from coal-burning, the toll from fossil-fueled electricity is higher still.

Now let’s look at Chernobyl. According to a 2008 study by the UN Scientific Committee on the Effects of Atomic Radiation, Chernobyl will have killed about 9,000 people once the radioactivity decays away, almost all of them cancer victims. Anti-nukes dispute that number with arguments both silly (conspiracies at the UN) and cogent (UNSCEAR left out some populations that got light dustings of Chernobyl fallout). Lisbeth Gronlund of the anti-nuke Union of Concerned Scientists recently estimated that the final Chernobyl death toll will be about 27,000, a number that’s in line with a mid-range consensus. At the high end, a recent book by Yablokov et al, Chernobyl: The Consequences of the Catastrophe for People and the Environment, which has been widely cited by greens, puts the Chernobyl cancer toll through the year 2056 at up to 264,000 deaths. (The Yablokov study has been strongly criticized by radiation scientists (see Radiation Protection Dosimetry (2010) vol 1 issue 1 pp. 97-101) and other commentators, including Gronlund.)

Why the huge discrepancies on Chernobyl figures? Well, it’s hard to get firm empirical evidence of the Chernobyl cancer toll, because radiation is such a weak carcinogen that its effects at low doses can’t be distinguished from statistical noise. Epidemiological studies that count excess cancer deaths usually find no statistically significant increase above the normal background incidence. That doesn’t necessarily mean the excess deaths are not there, just that it’s impossible to discern some thousands of possible Chernobyl cancer deaths amid millions of ordinary cancer deaths. So cancer fatalities have to be estimated by multiplying estimates of the radiation dose that people received by an assumed risk factor extrapolated from studies of people who received large doses, like Hiroshima survivors. Gronlund, for example, starts by taking UNSCEAR’s estimate of the total dose of Chernobyl radiation incurred by everyone in the world, 465,000 person-Sieverts. She then multiplies that dosage by a risk factor, taken from the National Academy of Sciences’ Report on the Biological Effects of Ionizing Radiation (BEIR-VII), of 570 cancer fatalities for every 100,000 people who each receive a dose of 100 milli-Sieverts (100 mSv). (That works out to 570 cancer deaths per 10,000 person-Sieverts; multiply 570 deaths/10,000 person-Svs by 465,000 person-Svs and you get 26,505 total deaths.) Gronlund’s method is pretty standard, but such calculations can yield wildly varying results depending on underlying guesstimates of the dosage and risk factor. (Pro-nukes even insist that there is a threshold below which small radiation doses pose no cancer risk.)
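
To make the method concrete, here is that multiplication as a few lines of Python. It is only a restatement of the arithmetic in the parenthesis above, using the UNSCEAR collective dose and the BEIR-VII risk factor quoted there; nothing else is assumed.

```python
# Gronlund-style estimate: collective dose times a linear risk factor.
# Inputs are the UNSCEAR dose and BEIR-VII risk factor quoted above.

collective_dose_person_sv = 465_000   # worldwide Chernobyl dose, person-Sieverts
risk_per_person_sv = 570 / 10_000     # 570 deaths per 100,000 people per 100 mSv
                                      # = 570 deaths per 10,000 person-Sv

estimated_deaths = collective_dose_person_sv * risk_per_person_sv
print(f"Estimated Chernobyl cancer deaths: {estimated_deaths:,.0f}")  # ~26,500
```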

But the controversy over the precise Chernobyl numbers is academic, in my view, because they all tell the same basic story: the catastrophic failure of nuclear power at Chernobyl was nowhere near as bad as the yearly routine of the fossil-fueled power system. In the 25 years since the Chernobyl accident in 1986, for example, coal-plant air pollution has killed over 325,000 people in the United States alone. That’s a substantially larger number than Yablokov’s exaggerated estimate for Chernobyl cancer deaths in the entire world through 2056. To put it another way, if you accept Yablokov’s estimate, there is less than a one-in-25 chance that, next year, a Chernobyl-scale nuclear disaster will kill a quarter of a million people; there is a dead certainty that coal power will kill that many. Even when you adjust for the larger number of coal plants, the risks of nuclear catastrophe are still much smaller than those of business-as-usual coal. And if you accept Gronlund’s consensus estimate of 27,000 Chernobyl deaths, which I do, then you have to conclude that the risks from nuclear catastrophe pale to insignificance. Gronlund’s figures suggest that the lingering effects of Chernobyl fallout were killing on average about a thousand people per year from 1986 to 2005; the remaining undecayed radiation is now killing perhaps a few hundred people a year. These numbers hardly register beside the hundreds of thousands of people killed every year in the fossil-fuel holocaust.

Besides Chernobyl-style spews, there is also the question of radioactivity released during normal operations, like the tritium leaks that greens regularly sound the alarm over. Nukes do routinely emit traces of radioactivity, but the amounts are so small that health risks are minuscule to none. The most comprehensive epidemiological study, by the National Cancer Institute, found no statistically significant excess cancer risk in counties with nuclear plants. Indeed, we can estimate the tininess of the risk by using Gronlund’s method. According to the EPA, the average American gets a radiation dose of less than 0.001 mSv (0.1 millirem) per year from nuclear power plants. Multiply that by 300,000,000 Americans and the BEIR-VII risk factor of 570 cancer deaths per 100 mSv dose per 100,000 people exposed, and you get a maximum of 17 Americans dying of cancer every year from routine nuclear plant radiation—less than half a day’s worth of American coal-pollution fatalities. A final irony is that coal plants actually release much more radioactive material into the environment than do nuclear plants under normal conditions. Embedded in the millions of tons of coal a single plant burns every year are hundreds of pounds of radioactive uranium, thorium and radon, which go up the smokestacks and into our lungs or get dumped in ash-heaps where they lie open to the elements and leach into streams and ground water. (McBride, J. P., et al., “Radiological Impact of Airborne Effluents of Coal and Nuclear Plants,” Science, 12/8/1978. Cited in http://www.ornl.gov/info/ornlreview/rev26-34/text/colmain.html).

How Bad is Fukushima?

Fukushima has all the trappings of the nuclear nightmare scenario: creaky old reactors, it-can’t-happen-here hubris, corporate perfidy, Keystone Cops bumbling, explosions, spews and refugees. Despite all that, the expert consensus is that Fukushima is nowhere near as bad as Chernobyl. It’s easy in hindsight to castigate the nuclear establishment that lapsed so spectacularly at Fukushima, but there’s a more obscure yet equally important lesson there: because of steady advances in design and emergency response, nuclear disasters aren’t as disastrous as they used to be. That’s in part because light-water reactors like Fukushima’s are much better designed than the flimsy, volatile RBMK reactor at Chernobyl. LWRs have a “negative void coefficient,” which means that when the cooling water ran out at Fukushima, the fission chain reaction shut down. At Chernobyl, the RBMK’s positive void coefficient meant that a transient loss of coolant made the chain reaction speed up uncontrollably until the reactor itself exploded. Also, LWRs do not have combustible graphite in their cores to fuel a fire, as the RBMK did. Most importantly, unlike the RBMK, the Fukushima reactors had strong containment structures to curb the radioactive release (although, alas, not strong enough to entirely contain it). A second factor is that, unlike the Soviets, the Fukushima authorities did disaster-response by the book; for example, timely evacuations, distribution of potassium iodide and bans on milk-drinking mean that the Japanese will avoid the spike in thyroid cancers seen at Chernobyl.

The result of improved design and emergency response was that Fukushima’s three meltdowns generated a smaller and less damaging spew than did Chernobyl’s single reactor explosion. Estimates put the total release of Fukushima radioactivity at 770,000 tera-becquerels, about 15% of the Chernobyl spew of 5.2 million TBq. Extrapolating naively from Gronlund’s Chernobyl death toll of 27,000, we might expect perhaps 4,000 total cancer fatalities from the Fukushima spew. There’s reason to hope for even fewer casualties. Dozens of Chernobyl emergency workers died from acute radiation poisoning, which has killed no one in Japan. And much of the Fukushima radioactivity blew out to sea or was dumped into the Pacific where it will be infinitely diluted and harm nobody. Still, let’s chew a bit on that 4,000 figure. Fukushima Daiichi has been churning out 4.7 gigawatts of power since 1979. Coal power in the United States generates about 314 GW of power and causes roughly 13,000 deaths per year from air pollution, or about 41 deaths per GW per year. So if Fukushima had been a 4.7 GW coal-fired plant operating over the past 31 years, it probably would have killed about 6,000 people from air pollution. Thus, even counting the meltdown casualties, Fukushima Daiichi likely saved thousands of lives on balance over its operating lifetime by abating coal emissions.
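
For readers who want to check the arithmetic, here is the same back-of-the-envelope comparison in Python, using only the figures in the paragraph above (the naive scaling from the Chernobyl toll and the US coal death rate); it is a rough sketch, not a model.

```python
# Naive Fukushima estimate and the coal-plant counterfactual, from the figures above.

chernobyl_deaths = 27_000                 # Gronlund's consensus estimate
release_fraction = 770_000 / 5_200_000    # Fukushima TBq / Chernobyl TBq, ~15%
fukushima_deaths = chernobyl_deaths * release_fraction      # ~4,000

us_coal_gw = 314                          # US coal generation, GW
us_coal_deaths_per_year = 13_000          # deaths/year from coal air pollution
deaths_per_gw_year = us_coal_deaths_per_year / us_coal_gw   # ~41

plant_gw = 4.7                            # Fukushima Daiichi output
years_of_operation = 31                   # 1979 through 2010
coal_equivalent_deaths = deaths_per_gw_year * plant_gw * years_of_operation  # ~6,000

print(f"Naive Fukushima estimate:    {fukushima_deaths:,.0f} deaths")
print(f"Coal plant of the same size: {coal_equivalent_deaths:,.0f} deaths")
```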

It’s too soon to know exactly what the health effects of the Fukushima spew will be. Scientists will make estimates from detailed surveys of radiation exposure, and then monitor everyone for decades to try to empirically detect increased cancer incidence. But back-of-the-envelope calculations suggest that the effects will be small. Clean-up work at the plant, for example, isn’t quite the suicide mission it’s made out to be. As of June 18, the 3,514 workers who had worked on the cleanup since the tsunami had received a collective radiation exposure of 114 person-Sieverts, a dose that would cause 7 cancer fatalities among them over a lifetime. If they continue at that dose rate and it takes a year to bring the reactors to cold shut-down, they might incur 28 excess cancer fatalities over the 700 that would normally occur. If we assume that the roughly 90,000 Fukushima evacuees also somehow received the average three-month dose of a cleanup worker before they fled—a huge overestimate—that would result in 166 extra cancer deaths over the 18,000 they would normally suffer.
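
A sketch of that arithmetic in Python, using the worker and evacuee figures above; the small differences from the numbers in the text come from rounding and from the rough assumption that a year of cleanup means about four of the three-month periods elapsed so far.

```python
# Cleanup-worker and evacuee dose arithmetic, from the figures above.

risk_per_person_sv = 570 / 10_000             # BEIR-VII linear risk factor

workers = 3_514
worker_collective_sv = 114                    # person-Sieverts as of June 18
worker_deaths_so_far = worker_collective_sv * risk_per_person_sv        # ~7

# Assume a full year of cleanup at the same dose rate (~4x the elapsed period).
worker_deaths_full_year = 4 * worker_deaths_so_far                      # ~26-28

evacuees = 90_000
avg_worker_dose_sv = worker_collective_sv / workers                     # ~0.03 Sv each
evacuee_deaths = evacuees * avg_worker_dose_sv * risk_per_person_sv     # ~166

print(f"Worker cancer deaths so far:         {worker_deaths_so_far:.1f}")
print(f"Worker cancer deaths after a year:   {worker_deaths_full_year:.0f}")
print(f"Evacuee cancer deaths (upper bound): {evacuee_deaths:.0f}")
```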

Judging by the Japanese government’s radiation data, radiation in the rest of Japan should be a marginal concern. As of July 8, monitoring posts in Fukushima prefecture outside the 20 km evacuation zone were showing an average outdoor radiation reading of 0.61 micro-sieverts/hour, higher than the normal background reading of about 0.04 uSv/hour. Those readings are gradually falling, but currently they would add up to an extra radiation exposure above background of 5 milli-sieverts in a year. How dangerous is that? By comparison, residents of Denver get an extra 8 mSv of radiation per year over what they would receive living on the East Coast, because of the mile-high elevation and a local abundance of radon gas. So, outside the EZ, Fukushima Prefecture is substantially less radioactive than Denver. Elsewhere in Japan radiation has returned to normal background levels—and indeed was never elevated in most places—except in Miyagi and Ibaraki prefectures, where radiation readings are slightly elevated but way below Denver levels. There are no detectable quantities of radioactive isotopes in the drinking water anywhere in Japan outside Fukushima.
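
The 5 mSv figure is just the excess reading annualized; a short sketch (it ignores the gradual decline in the readings, so it slightly overstates the dose).

```python
# Annualizing the Fukushima prefecture readings outside the evacuation zone.
reading_usv_per_hr = 0.61        # average outdoor reading, micro-Sv/hr (July 8)
background_usv_per_hr = 0.04     # normal background
hours_per_year = 24 * 365

extra_msv_per_year = (reading_usv_per_hr - background_usv_per_hr) * hours_per_year / 1000
print(f"Extra dose above background: {extra_msv_per_year:.1f} mSv/year")  # ~5 mSv
# Compare: Denver residents get roughly 8 mSv/year more than East Coast residents.
```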

Inside the 20 km evacuation zone, and in a small plume to the northwest, radiation levels are much higher. The EZ is largely empty now, but it’s illuminating to try to estimate what the health effects would be if people were still living there. That exercise can give us a more realistic understanding of the scale of the disaster and of the various mechanisms that attenuate the harm it will cause; it shows us why radiation spews loom small in epidemiological studies of disease and mortality. Let’s look at the 10-20 km band of the EZ, where outdoor ambient radiation levels—the “external dose” of radiation that comes from outside the body—averaged 6.4 micro-sieverts/hr as of July 10. A person receiving that external dose rate for an 80-year Japanese life expectancy would get a total dose of 4.5 sieverts. That’s a lot of radiation; it would cause 25,000 extra cancer deaths per 100,000 people in addition to the roughly 20,000 that would normally occur, and thus more than double the cancer risk to an individual—raising it about as much as smoking does. But, for several reasons, actual radiation exposures will be much smaller. First, radioactive decay constantly reduces the quantity of ambient radionuclides. Almost all the radiation comes from soil depositions of cesium-134, with a half-life of 2 years, and cesium-137, with a half-life of 30 years, each of which is currently generating about half the radiation. As these isotopes decay, radiation exposures will dwindle accordingly. When you do the math—sorry, that means integrals of exponential functions—the estimated individual dose over 80 years drops to just 1.32 Sv (causing 7,500 cancer deaths per 100,000 people). There’s also soil migration: dirt blocks radiation, so as the radioactive cesium gradually percolates down beneath the surface of the ground, it stops irradiating people. Assuming very conservatively that soil migration attenuates ambient radiation by at least 5% every ten years, the effect further reduces the dose—more integrals!—to 0.97 Sv (5,500 deaths). Then, because floors and walls also block radiation and people generally keep soil out of buildings, the external dose people receive indoors is lower than their outdoor exposure by a factor of 4 or more. Assuming that people spend at least three quarters of their time indoors, we should therefore cut the external dose estimate by 56% to 0.42 Sv (2,400 deaths). On the other side of the ledger, we have to add in the internal doses from radionuclides lodged inside the body when people ingest contaminated food and water or inhale radioactive dust. Chernobyl data suggest that these internal doses might be a third as much as the external dose, so that raises the total dose to 0.56 Sv (3,200 deaths per 100,000 people living there for 80 years). Okay, let’s stop now. There are other radio-abatement wrinkles that I don’t know how to model, but this is a serviceable ball-park estimate: spending one’s life in the 10-20 km band of the EZ elevates one’s cancer risk by perhaps 16%. In the 5-10 km band, radiation is averaging 10 uSv/hr, for a lifetime cancer risk of about 25% above normal; and in the 2-5 km band it’s running at 27 uSv/hr for a 68% elevated cancer risk. These are rough projections based on cautious assumptions that greatly overstate the risk; at this point we should just say that living in the EZ would impose a significant extra risk of cancer because of the radiation, but one that’s substantially less than the risk from smoking.
That makes the Fukushima spew a serious local health problem, not an apocalyptic one.
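
For those who want to see the integrals, here is a Python sketch of the dose model just described: exponential decay of cesium-134 and cesium-137, slow attenuation from soil migration, indoor shielding, and an internal-dose top-up, with the parameters stated in the paragraph above. The intermediate decay-only figure comes out somewhat below the 1.32 Sv quoted, but the later steps land on essentially the same numbers; treat it as a ball-park reconstruction, not the original calculation.

```python
import math

# Inputs for the 10-20 km band of the evacuation zone, as stated above.
dose_rate_usv_hr = 6.4              # outdoor external dose rate, micro-Sv/hr (July 10)
years = 80                          # Japanese life expectancy
risk_per_sv = 570 / 100_000 / 0.1   # BEIR-VII: 570 deaths per 100,000 per 100 mSv = 5.7%/Sv

annual_dose_sv = dose_rate_usv_hr * 1e-6 * 24 * 365   # ~0.056 Sv/yr at time zero

# Decay constants (per year); half the radiation comes from each cesium isotope.
lam_cs134 = math.log(2) / 2.0       # Cs-134 half-life ~2 years
lam_cs137 = math.log(2) / 30.0      # Cs-137 half-life ~30 years
lam_soil = -math.log(0.95) / 10     # soil migration: at least 5% attenuation per decade

def lifetime_dose(extra_decay=0.0):
    """Integrate the decaying dose rate over `years`, split 50/50 between isotopes."""
    total = 0.0
    for lam in (lam_cs134, lam_cs137):
        k = lam + extra_decay
        total += 0.5 * annual_dose_sv * (1 - math.exp(-k * years)) / k
    return total

no_decay = annual_dose_sv * years          # ~4.5 Sv if nothing ever decayed
with_decay = lifetime_dose()               # radioactive decay only
with_soil = lifetime_dose(lam_soil)        # plus soil migration, ~0.97 Sv

# Indoor shielding: 75% of time indoors at one quarter of the outdoor dose rate.
occupancy_factor = 0.25 * 1.0 + 0.75 * 0.25     # = 0.4375, i.e. a 56% cut
external = with_soil * occupancy_factor          # ~0.42 Sv
total = external * (1 + 1 / 3)                   # internal dose ~1/3 of external, ~0.56 Sv

for label, dose in [("no decay", no_decay), ("radioactive decay", with_decay),
                    ("+ soil migration", with_soil), ("+ indoor shielding", external),
                    ("+ internal dose", total)]:
    deaths = dose * risk_per_sv * 100_000
    print(f"{label:>20}: {dose:.2f} Sv, {deaths:,.0f} deaths per 100,000")
```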

So even the EZ is something less than the radiological moonscape of anti-nuke hysteria. There are certainly large tracts of land there that are dangerously radioactive and should be fenced off for years; there are also cool spots that are half as radioactive as Denver. Time will tell when and how much of this area can be reoccupied. But the creation of vast exclusion zones is also a feature of coal power, through mining—and even more so of renewable technologies, like the solar power that the Japanese government said it would turn to after the tsunami. The 20-km Fukushima EZ encompasses 226 square miles of land (half of it is sea). Compare this with the size of a solar plant that could equal Fukushima Daiichi’s 4.7 gigawatt output and 90% capacity factor. The Martin Next Generation Solar Energy Center in Florida, for example, generates 75 megawatts from 500 acres of solar mirrors, with a capacity factor of 24%, outputting 155,000 megawatt-hours per year (pretty good for a solar plant). To generate the 37,000 gigawatt-hours per year of a Fukushima Daiichi, a similar solar plant would need to cover 186 square miles–an exclusion zone that’s 83% of the size of Fukushima’s, literally paved with mirrors sitting atop bulldozed, scraped-bare soil. And that’s every solar plant, not just the rare one that gets hit by a tsunami.
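
The land-area comparison is straightforward scaling from the Martin plant’s published output; a sketch using only the numbers above.

```python
# Scaling the Martin solar plant's footprint to Fukushima Daiichi's annual output.
martin_acres = 500
martin_mwh_per_year = 155_000
fukushima_gwh_per_year = 37_000

plants_needed = fukushima_gwh_per_year * 1_000 / martin_mwh_per_year   # ~239 Martin plants
square_miles = plants_needed * martin_acres / 640                      # 640 acres per sq mi
print(f"Solar footprint matching Fukushima Daiichi: {square_miles:.0f} square miles")  # ~186
# Compare: the 20-km evacuation zone covers about 226 square miles of land.
```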

We’ll have to wait for detailed radiological surveys to get precise estimates of radiation depositions, doses and health effects. Still, it’s hard to see Fukushima fatalities exceeding a few thousand all told; they will likely be a small fraction of that. (And we will have to take them on faith, since civilian casualties will be far too few for epidemiological studies to discern.) That’s a tragedy, certainly, but it pales beside the contemporaneous death toll from coal pollution, which killed a thousand Americans and going on twenty thousand people worldwide in just the first month after the tsunami, as it does every month. The worst health effects of Fukushima will therefore stem from public anxiety that shuts down or slows the construction of nuclear plants, the only technology that can replace fossil fuels. Germany, for example, immediately closed its seven oldest nukes, which had been running trouble-free for decades. The resulting shortfall in electricity will be made up largely by burning more natural gas and coal, and by importing power from French nuclear plants.

The Banality of Radiation

When we shift the focus away from comparative mass body counts, we get a less fraught perspective on nuclear power as an ordinary and rather modest item on the list of marginal everyday risks. I could compare it to car crashes or beer or nitrite-laden barbecue. Instead, I’ll just compare it to other sources of radiation in which we blithely wallow even though they give us drastically larger doses than we get from nuclear plants.

Americans absorb an average radiation dose of 620 millirems (620 mrem) per year from natural and man-made sources. (A sievert is 100 rems.) The biggest sources are radon gas (200 mrem, including 9 mrem carried in with household natural gas) and medical procedures (about 300 mrem, some of which comes from just standing near radiotherapy patients.) 39 mrem comes from naturally occurring radioactive potassium inside your body. (Because bananas concentrate potassium, they are slightly radioactive at about 0.01 mrem per banana.) Moving to Colorado increases your cosmic radiation dose by about 67 mrem and your radon dose by a whopping 800 mrems. Everything that comes out of the ground is a bit radioactive, so if you live in a brick house instead of a wooden house you get an extra 7 mrem. TV and computer screens contribute 1 mrem. A coast-to-coast airplane ride gives you 2 mrem per flight—high elevation spells high radiation—so flight attendants get a bigger occupational radiation dose than do nuclear plant workers. All of these everyday radiation doses dwarf the 0.1 mrem per year that we get from nuclear plants without rousing any concern at all.

Indeed, we avidly seek out radiation as a cosmetic. Ultraviolet light from sunlight and tanning beds has similar effects to ionizing radiation: a sun-burn is an acute radiation burn, and skin cancers caused by UV light kill thousands of people every year. Yet greens do not march on tanning salons, and we all blissfully bare ourselves to the nuclear furnace in the sky. We think of radiation as the stuff we crawl into cellars like rats to escape after a nuclear holocaust; we should as well think of it as the stuff prom queens bask in to get a healthy glow.

Better yet, we should simply think of radiation from nukes as an ordinary form of pollution, much like smoke-stack and tail-pipe fumes: at high doses it can kill; at modest doses it poses modest long-term risks; at tiny doses it is innocuous. We should regulate and abate it, as we would any pollutant. But we should also recognize that nuclear plants emit far less harmful pollution than competing power sources. Considerations of public health, as well as concerns about the slow-motion tsunami from melting ice caps, therefore dictate that we build nukes as fast as possible. We need a thirty-fold buildup of nuclear plants to nuclearize the energy supply, so let’s assume the average death rate from nuclear spews, currently under a thousand a year, scales accordingly to 30,000 a year. (There’s reason to anticipate that the nuclear industry will steadily improve upon that standard.) Sounds pretty gruesome, but what we get in exchange is the eradication of the combustion pollution that kills three million people every year—not just coal emissions but automobile exhaust (eliminated by electric cars) and smoke from coal- and wood-fired stoves in the third world (eliminated by electric ranges). Again, it’s an open-and-shut case: a complete switch-over from our current energy system to nuclear power would reduce the number of lives lost to energy production by 99 percent, a level of safety that no technology exceeds. (Even wind turbines have their catastrophic risks.) Nukes afford us prodigious amounts of clean energy—energy that liberates us from the manifold and very deadly ravages of fossil-fuel combustion.


The Faustian Bargain: Nuclear Power and the Modern Predicament

Unfortunately, statistical arguments haven’t settled this debate. In part that’s because very rare catastrophic risks loom larger in people’s minds than statistically more dangerous routine risks. But deeper irrationalities are at play. Radiation and air pollution are identical in their effects—neither will make you drop dead, both may increase your risk of getting lung cancer 20 years down the road—yet we flee in panic from one while shrugging off the other. Psychology thus plays a huge role in shaping attitudes. Fallout is indelibly linked to nuclear war, while the hazmat suits of radiological emergencies evoke deadly pandemics. The very invisibility of radiation feeds the paranoid imagination; if it smoked and billowed it would seem less insidious.

Worse, nuclear power has become a towering symbol, especially on the environmentalist left, of the whole miasmatic sickness of advanced technological society. Greens think of global warming not just as a crisis to be solved but as a cosmic retribution for all the ills of modernity, the grand internal contradiction that will finally destroy industrial capitalism. The appeal of wind and solar to left romantics is that they promise to return us to a state of pastoral innocence by recasting civilization around a deep harmony with the natural elements, using human-scaled technologies that foster the sublime egalitarian community of the decentralized grid. Nuclear power clashes with this vision of social redemption through sustainability. It is a font of unnatural elements, the antithesis of everything organic; it is the spawn of the military-industrial complex and the perpetuator of the centralized power of corporate elites and Big Brother; it is the glib technological fix that puts off, and thus immeasurably worsens, the inevitable day of reckoning for a heedlessly overconsuming society. Hence the hostility on the part of most greens to a proven technology that offers enormous environmental benefits. To greens, a world saved by nuclear power is a world that’s not worth saving.

The underlying religious aspects of this mindset are captured in the ubiquitous green trope of nuclear power as a “Faustian bargain.” It’s an apt metaphor, suggesting the pursuit of unlimited power dredged from the underworld. Nuclear fission, the closest thing we have to hell-fire, certainly fits it, as does the trade-off between fleeting electricity and eternal waste. Nor are the overtones of hubris, sorcerer’s apprenticeship and always-pending catastrophe entirely misplaced. The lessons of Fukushima point to obvious and inexpensive fixes: build taller seawalls, raise high the diesel generators, vent the hydrogen gas. New plant designs are safer than the Fukushima models; they have bigger, stronger containment vessels and passive cooling systems that don’t need electricity. Nevertheless, engineers aren’t perfect, so nuclear power can never be perfectly safe. If we build nukes as frantically as I think we should, then in another twenty-five years—hopefully longer—there will be another Fukushima; the devices of man will fail and the devil will claim his due.

And yet we cannot reject the Faustian bargain, which is inseparable from progress itself. Consider air travel, another safe industry built around an intrinsically disastrous technology. (And a classically Faustian one in that it arrogates to mortals the divine power of flight.) Whenever I fly, I am filled with horror at the implications. I’m riding along seven miles in the sky, hurled forward at 500 miles per hour by an inferno of explosive gas. A single hair-line crack, missed by a hung-over mechanic, could cause any one of dozens of fan-blades to shatter and blow apart the engine. I pray that if that happens the supersonic shrapnel will kill me instantly and spare me the long, unbearable plunge to earth. And catastrophe can take a thousand other guises: a terrorist’s bomb, a depressed pilot, an electrical short, a flock of birds. But as I mull all this, hoping that my muscle rictus will somehow hold the plane together, all around me people are chatting, stewardesses are doling out snacks, toddlers are skipping up and down the aisle.

That’s what it means to live in the modern world. Daily life depends on the harnessing in delicate equipoise of titanic energies that would turn and annihilate us if they slipped the leash for but an instant. And occasionally they do: the plane crashes, the reactor spews. But dire as the menace of technology is, we embrace it out of necessity and convenience, and because the age-old demons of poverty and backwardness and powerlessness are worse than any of our own making. We understand that the Faustian bargain is a good one, struck at a fair price. When things go wrong we sift the wreckage for lessons and, if the stats warrant, carry on. So with air travel, and so with nuclear power: disasters will continue to happen, but they will become less frequent and destructive as engineers and regulators learn from past mistakes; the risks and harms, already small, will shrink further (but never entirely disappear). That’s what progress looks like, and we shouldn’t turn away from it. If we let irrational fears stymie this most important of technologies, and let imagined risks obscure real ones, we’re in for a hellish future.

Some Thoughts on Negotiation

I don’t claim to be any expert at the negotiating table. But I was, about ten years ago, the lead negotiator for my graduate employee union (~3,000 members). I’ve spent plenty of time around unions, before and since. And in my years at the Working Families Party, where I was the designated wonk, I inevitably had some involvement with negotiations over the terms of bills. From which I derive an observation that’s perhaps relevant to the debt-ceiling talks.

The principals are never at the table.

Maybe because they’re busy; very likely because they’re diffuse; or maybe they’re only constituted through the negotiating process. In any case, there are not one but three negotiations going on: between the two agents at the table, and between each of those agents and their respective principals.

So for instance, the question for me as Local 2322 representative was not just what I think of this deal, but whether I can sell it to the membership. Even worse when you’re trying to nail down health care legislation; the union at least has a defined membership roll but the coalition exists only insofar as there’s a chance of passing something. In either case, your problem as a negotiator is the same: When you go back to your constituents, you have to convince them that the deal (1) is good enough and (2) is the best you could have got. What I’m talking about here is the tension between 1 and 2.

Let’s say you’ve got some big demand — a 20 percent pay raise, let’s say. And let’s say, miraculously, the employer agrees to this the first day of negotiations. This tells you two things: First, the deal you have now is better than you expected; but also, that the payoff to continued negotiations might be better than expected too. If they gave you this for just sitting down, they must be desperate; who knows what they would give if you could really hold their feet to the fire. So it’s not actually easier for you to convince your principals to ink a deal at this point. You’ve got an easier time selling them on (1), but a harder time selling them on (2).

Now you might say this is irrational, unfortunate, people should know how to say, Yes. But I don’t agree. The principal is not negotiating just this once. So knowing which agents, or which kinds of agents, are reliable may be every bit as important as getting the best possible deal in this round. So they’re observing, Did the negotiators keep pushing until the other side was ready to walk away, as much as or more than, Is the outcome acceptable. From the principals’ point of view, an early concession from the other side is a signal that this side’s agents need to ask for more to prove they are doing their jobs.

In this sense, even if you (the agent now, like Obama) are happy to agree with everything the other agents have proposed to you, it’s in your shared interest (yours and theirs) to only concede it at the last moment, and in return for the most costly concessions, to help them sell it to their principals.

Bottom line: Even if you intend to concede X, it’s in everyone’s best interest – including the negotiators on the other side — that you don’t give up X until the last possible minute. Conceding it early actually makes it harder for the other side to accept. This sounds like a paradox but I think it’s really a perfectly logical and inevitable implication of the negotiating situation.

It’s a broadly applicable problem in economics that a change in price has opposite effects if it’s considered once and for all vs. if it’s considered as a proxy for future changes. Whatever the model says, one has to think a change in prices might continue. This is how bubbles get started. Just so, politically, current concessions make the current deal look better — statically. But the relevant question is, do they make the current deal look better, with respect to some future deal? To the extent that conceding now makes both look better, that effect is indeterminate.

So, Obama’s a bad negotiator, then? Maybe. I have to admit, there’s something appealing, in a poetic-justice sense, to the idea that the decline of the labor base of the Democratic Party has led to a fatal loss of practical negotiating skills. The other, more obvious possibility is that he is not trying to get the best outcome for the Democrats in the sense that most people understand it — that he shares the Republicans’ essential goals. But I might put it a little differently — Obama’s bargaining position is weak precisely because of his independence, the fact that he doesn’t answer to anyone. Digby asks, quite reasonably, if we have any idea at this point what the President’s principles are. I might put it a little differently: the question is who his principals are.

Red Light, Green Light, Who Cares?

Interesting piece in the FT on Chinese inflation, and the persistent divergence between inflation as measured by the CPI and the GDP deflator. (The differences between price indices are something we could pay more attention to in general.) But I was struck by an odd juxtaposition. First, we get a quote from some analyst saying that the GDP deflator is overstating inflation:

In the last three months of 2010, the deflator made inflation out to be 7.3%, compared with 4.7% using the CPI. … if the 7.3% inflation the GDP deflator calculates is an overestimation, this has to mean that real GDP growth is higher than the 9.8% Beijing officially clocked for the last quarter—which is dangerously high, strengthening the fear of overheating China bears have been raising of late.

 Then we get another analyst saying no, the deflator is understating inflation:

Our guess is that the GDP deflator which probably averaged over 10% last year is now running at around 11%. The natural implication of that, of course, is that real Chinese GDP could be much lower than officially stated. And that inflation, as a result, is indeed a much greater concern for authorities than is currently being implied.

So if inflation is lower than the official number, that’s a reason to step on the brakes. And if inflation is higher than the official number, that’s a reason to step on the brakes too.

It’s almost like support for austerity isn’t really motivated by concerns about inflation (or interest rates). But what else could it be?

Fiscal Arithmetic: The Blanchard Rule

When we left off, we’d concluded that the relationship between g, the growth rate of GDP, and i, the after-tax interest rate on government debt, was central to the evolution of public debt. When g > i, any primary deficit is sustainable, in the sense that the debt-GDP ratio converges to a finite value; when i > g, no primary deficit is sustainable, and a primary surplus, while formally sustainable at a certain exact value, occupies a knife-edge. Which invites the natural question, so which is bigger, usually?

There are articles that discuss this (tho not as many as you might think). Here’s a good recent article by Jamie Galbraith; I also like this one by Tony Aspromourgos, and “The Intertemporal Budget Constraint and the Sustainability of Budget Deficits” by Arestis and Sawyer. (I’m sorry, I can’t find a version of it online). An earlier and more mainstream, but for our current purposes especially interesting, take is this piece by Olivier Blanchard.  Blanchard says:

If i - g were negative, the government would no longer need to generate primary surpluses to achieve sustainability. … The government could even run permanent primary deficits of any size, and these would eventually lead to a positive but constant level of debt… Theory suggests that this case, which corresponds to what is known as ‘dynamic inefficiency’, cannot be excluded, and that in such a case, a government should, on welfare grounds, probably issue more debt until the pressure on interest rates made them at least equal to the growth rate.

So much depends on whether the growth rate exceeds the interest rate, or not. Well, so, does it?

The funny thing about this passage in context is that Blanchard acknowledges that over most of the postwar period, the growth rate has exceeded the interest rate. But, he says, the professional consensus is that interest rates ought to equal or exceed growth rates, so he’ll stick with that assumption for the rest of the article. (There’s almost a genre of economics articles that freely admit a key assumption doesn’t seem to be consistently satisfied in practice, but then blithely go on assuming it. The Marshall-Lerner-Robinson condition is a favorite in this vein.) But we’re not here to mock; we’re here to christen the Blanchard rule: the prescription that, if i < g, the federal deficit ought to be higher.

Below are graphs of the growth rate and after-tax 10-year government bond rate for 10 OECD countries. Both are deflated by the CPI; the tax rate is the ratio of central government taxes to GDP. This is probably a bit high, but on the other hand the average maturity of government debt is less than 10 years in many OECD countries — in the US it is currently around 4.7 years — so these two biases might more or less cancel each other out, leaving the red line close to the economically relevant interest rate. Source is the OECD statistics site. I’ve excluded 2008-2010 since the Great Recession pulls growth rates sharply down in a (let’s hope!) misleading way. The lighter black line is the growth trend.


Clearly we can’t exclude the relevance of the Blanchard rule; for much of the time, for many rich countries, the growth rate of GDP has exceeded the 10-year interest rate. At other times, interest has exceeded growth. What we see in most cases is a fairly stable growth rate, combined with an interest rate that jumps sharply up around 1980 and then drifts downward from somewhere in the 1990s. At some point soon, I hope, I’ll produce decompositions of the change in the fiscal position into the interest rate, the growth rate, changes in taxes and expenditure induced by the growth rate, and autonomous changes in taxes and spending. I suspect the first will be the most important, and the last the least. But in the meantime, we can say just looking at these graphs that changing interest rates are an important component of fiscal dynamics, so it’s wrong to think just in terms of the primary balance.

Which suggests — coming back to the earlier debate with John Quiggin — that if we are concerned with the long-term fiscal position, we should spend at least as much time worrying about policies that affect the interest rate on government debt relative to the growth rate, as we should about taxes relative to expenditures. And we should not assume a priori that a primary deficit is unsustainable.

Trade: The New Normal Was the Old Normal Too

Matthew Yglesias is puzzled by

the fundamental weirdness of having so much savings flowing uphill from poor, fast-growing countries into the rich, mature economy of the United States. It ought to be the case that people in fast-growing countries are eager to consume more than they produce, knowing that they’ll be much richer in the near future. And it ought to be the case that people in rich countries are eager to invest in poor ones seeking higher returns. But it’s not what was happening pre-crisis and it’s not what’s been happening post-crisis.

He should have added: And it’s not what’s ever happened.

I’m not sure what “ought” is doing in this passage. If it expresses pious hope, fine. But if it’s supposed to be a claim about what’s normal or usual, as the contrast with “weirdness” would suggest, then it just ain’t so. Sure, in some very artificial textbook models savings flow from rich countries to poor ones. But it has never been the case, since the world economy came into being in the 19th century, that unregulated capital flows have behaved the way they “ought” to.

Albert Fishlow’s paper “Lessons from the Past: Capital Markets During the 19th Century and the Interwar Period” includes a series for the net resources transferred from creditor to debtor countries from the mid-19th century up to the second world war. (That is, new investment minus interest and dividends on existing investment.) This series turns negative sometime between 1870 and 1885, and remains so through the end of the 1930s. For 50 years — the Gold Standard age of stable exchange rates, flexible prices, free trade and unregulated capital flows — the poor countries were consistently transferring resources to the rich ones. In other words, what Yglesias sees as the “fundamental weirdness” of the current period is the normal historical pattern. Or as Fishlow puts it:

Despite the rapid prewar growth in the stock of foreign capital, at an annual average rate of 4.6 percent between 1870 and 1913, foreign investment did not fully keep up with the reflow of income from interest and dividends. Return income flowed at a rate close to 5 percent a year on outstanding balances, meaning that on average creditors transferred no resources to debtor nations over the period. … Such an aggregate result casts doubt on the conventional description of the regular debt cycle that capital recipients were supposed to experience. … most [developing] countries experienced only brief periods of import surplus [i.e. current account deficit]. For most of the time they were compelled to export more than they imported in order to meet their debt payments.

A similar situation existed for much of the post World War II period, especially after the secular increase in world interest rates around 1980.

There is a difference between the old pattern (which still applies to much of the global south) and the new one. Then, net-debtor poor countries  ran current account surpluses to make payments on their high-yielding liabilities to rich countries. Now, net-creditor (relatively-) poor countries run current account surpluses to accumulate low-yielding assets in rich countries. I would argue there are reasons to prefer the new pattern to the old one. But the flow of real resources is unchanged: from the periphery to the center. Meanwhile, those countries that have successfully industrialized, as scholars like Ha-Joon Chang have shown, have done so not by accessing foreign savings by connecting with the world financial system, but by keeping their own savings at home by disconnecting from it.

It seems that unregulated international finance doesn’t benevolently put the world’s collective savings to the best use for everyone, but instead channels wealth from the poor to the rich. That may not be the way things ought to be, but historically it’s pretty clearly the way things are.

Some Fiscal Arithmetic

If we’re going to discuss fiscal policy, we should be clear on the accounting relationships involved. So, here are some basic equations describing how the public debt evolves over time. I should say up front that the relationships I’m describing here, while they suggest an unorthodox skepticism about worries about debt “sustainability,” are themselves totally orthodox and noncontroversial. And they don’t make any behavioral assumptions — they’re true by definition.

We’re interested in the ratio of debt to GDP. What will this be at some time t?

Well, it will be equal to the ratio in the previous period, increased by the rate of interest, and decreased by the rate of growth of GDP (remember, we are talking about the debt-GDP ratio; increasing the denominator makes a fraction smaller), plus the previous period’s primary deficit, that is, the difference between spending on everything besides interest and revenues.

Let b be the government debt and d the primary deficit (i.e. the deficit exclusive of interest payments), both as shares of GDP. Let i be the after-tax interest rate on government borrowing and g the growth rate of GDP (both real or both nominal, it doesn’t matter). Then we can rewrite the paragraph above as:

b(t) = (1 + i - g) b(t-1) + d(t-1)

We can rearrange this to see how the debt changes from one period to the next:

b(t) - b(t-1) = (i - g) b(t-1) + d(t-1)

Now, what happens if a given primary deficit is maintained for a long time? Does the debt-GDP ratio converge to some stable level? We can answer this question by setting the left-hand side of the above equation to zero. That gives us:

b = d / (g - i)

What does this mean? There are three cases to consider. If the rate of GDP growth is equal to the interest on government debt net of taxes, then the only stable primary balance is zero; any level of primary deficit leads to the debt-GDP ratio rising without limit as long as it’s maintained. (And similarly, any level of primary surplus leads to the government eventually paying off its debt and then accumulating a positive net asset position that grows without limit.) If g > i, then for any level of primary deficit, there is a corresponding stable level of debt; in this sense, there is no such thing as an “unsustainable” deficit. On the other hand, if g < i, then, assuming the debt is positive, a constant debt requires a primary surplus.

There is a further difference between the cases. When g > i, the equilibrium is stable; if for whatever reason the debt rises above or falls below the level implied by the long-run average primary deficit, it will move back toward that level over time. But when g < i, if the debt is one dollar too high, it will rise without limit; if it is one dollar too low, it will fall without limit, to be eventually replaced by an endlessly growing positive net asset position.
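
To see the difference between the two regimes concretely, here is a small simulation that simply iterates the accounting identity above; the parameter values are illustrative round numbers, not forecasts.

```python
# Iterate b(t) = (1 + i - g) b(t-1) + d to show why g > i gives a stable
# debt-GDP ratio while g < i puts it on a knife-edge.

def debt_path(b0, d, i, g, periods=100):
    """Debt-GDP ratio over time, given a constant primary deficit d."""
    b = b0
    path = [b]
    for _ in range(periods):
        b = (1 + i - g) * b + d
        path.append(b)
    return path

d = 0.02     # primary deficit of 2% of GDP every year
b0 = 0.60    # starting debt of 60% of GDP

stable = debt_path(b0, d, i=0.02, g=0.04)     # g > i: converges to d/(g - i)
unstable = debt_path(b0, d, i=0.04, g=0.02)   # g < i: grows without limit

print(f"g > i: debt ratio after 100 years = {stable[-1]:.2f} "
      f"(limit d/(g - i) = {d / (0.04 - 0.02):.2f})")
print(f"g < i: debt ratio after 100 years = {unstable[-1]:.2f} and still rising")
```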

So, which of these three cases is most realistic? Good question! So good, in fact, I’m going to devote a whole nother post to it. The short answer: sometimes one, sometimes another. But in the US, GDP growth has exceeded pre-tax interest on 5-year Treasuries (the average maturity of US debt is around 5 years) in about 50 of the past 60 years.

The discussion up to now has been in terms of the primary balance. But nearly all public discussions of fiscal issues focus on the total deficit, which includes interest along with other categories of spending. We can rewrite the equations above in those terms, adding a superscript T to indicate we’re talking about the total deficit. In these equations, g is the nominal growth rate of GDP:

b(t) - b(t-1) = dT(t-1) - g b(t-1)

Again, we define equilibrium as a situation in which the debt-GDP ratio is constant. Then we have:

b = dT / g

In other words, any total deficit converges to a finite debt-GDP ratio. (And for every debt-GDP ratio, there is a total deficit that holds it stable.) So defining a sustainable total deficit requires picking a target debt-GDP ratio. Let’s say we expect nominal GDP growth to average 5% in the future. (That’s a bit low by historical standards, but it’s what the CBO assumes in its long-run budget forecasts.) Then 2010’s deficit of 8.8% of GDP implies a long-run debt-GDP ratio of about 175% — a number toward the top of the range observed historically in developed countries. 175% too high? Get the long-run average deficit down to 4%, and the debt-GDP ratio converges to 80%. Deficit of 3% of GDP, debt of 60% of GDP. (Yes, the Maastricht criteria apparently assume 5% growth in nominal GDP.) It is not at all clear what the criteria are for determining the best long-run debt-GDP ratio, but that’s what you’ve got to do before you can say whether the total deficit is too high — or too low.
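
For the record, here is the arithmetic behind those long-run ratios, assuming, as the CBO does, 5% nominal GDP growth.

```python
# Long-run debt-GDP ratio implied by a constant total deficit: b = dT / g.
g_nominal = 0.05    # assumed nominal GDP growth

for total_deficit in (0.088, 0.04, 0.03):    # 2010's deficit, plus two alternatives
    debt_ratio = total_deficit / g_nominal
    print(f"Total deficit of {total_deficit:.1%} of GDP -> long-run debt of {debt_ratio:.0%} of GDP")
# 8.8% -> 176%, 4% -> 80%, 3% -> 60%
```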

One last point: An implication of that last equation above is that if the total deficit averages zero over a long period, the debt-GDP ratio will also converge to zero. In other words, “Balance the budget over the business cycle” is another way of saying, “Pay off the whole federal debt.” Yet I doubt many of the people who argue for the former, would support the latter. Which only shows how important it is to get the accounting relationships clear.

EDIT: I should stress: There is nothing original here. Any economist who does anything remotely related to public finance would read this and say, yes, yes, so what, of course — or at least I sure hope they would. But you really do have to be clear on these relationships for terms like “sustainable” to have any meaning.

For instance, let’s go back to that Peterson budget summit. As far as I can tell, five of the six organizations that submitted budget proposals used the CBO’s assumptions for growth and interest rates. (EPI tweaked them somewhat.) But given those assumptions, only two of the budgets — EPI  and AEI — actually stabilize the debt-GDP ratio. (Interestingly, they do so at about the same level — 70% of GDP for AEI, and 80% of GDP for EPI.) The other four budgets describe a path on which the entire federal debt is retired, and the federal government accumulates a net asset position that grows without limit relative to GDP. Personally, I am all for public ownership of the means of production. But I didn’t realize that’s what people had in mind when they called a budget “sustainable”. Of course, presumably that is, indeed, not what the people at CAP, Heritage, or the Roosevelt Campus Network had in mind; presumably they just didn’t think through the long-term implications of their budget numbers. Which is sort of the point of this post.

UPDATE: … and not 12 hours after I post this, here’s John Quiggin at Crooked Timber writing that the US needs “a substantial increase in tax revenue in the long term” and backing it up with the claim, “I assume [the optimal debt-GDP ratio is] finite, which would not be the case under plausible scenarios with no new revenue and maintenance of current discretionary expenditure relative to national income.” As we’ve seen, given the historic pattern where GDP growth is above the interest rate, this statement is simply false.

Of course, John Q. might be assuming this historic relationship will be reversed in the future. But then you could just as logically say that the interest rate is too high, or inflation is too low, as that higher taxes are needed. The view that it must be taxes that adjust implicitly assumes that longer-term interest rates aren’t responsive to policy, and that deliberately raising inflation can’t even be discussed. In other words, while “surpluses later” is often presented as part of an argument for deficits now, the case for surpluses in the future rests on premises that also largely rule out more aggressive monetary stimulus in the present.

Help, I’m Stuck in a Fortune Cookie Factory

Remember that old joke?

The Slack Wire was hit by a bunch of spam comments just now, which I promptly deleted. Whatever, it’s a blog, happens every day, right? The usual, a bunch of links to sites selling dresses, shoes, thermometers. (Thermometers?)

Except: the text accompanying the links was not the usual spamglish (“Thank You for a Most interesting discution”) or – what the cleverer spambots do – quotes from earlier comments. It was: “This is my job. I am so sorry.”

I’m sorry too, “Amanda.” All the wonderful new forms of creative intellectual work that could be opened up by the Internet, and we’ve stuck you doing this.

An Ant Not Even Thinking About Pissing on Cotton

Over at Crooked Timber, they’re discussing Martha Nussbaum’s new book on “Why Democracy Needs the Humanities.” Sounds like a real stinker. (Altho the thread has alerted me to the fact that I urgently need to read Randall Jarrell’s Pictures from an Institution, so I guess Nussbaum is to thank for that.) Lots of good criticism of the book, there and at the discussion CT is responding to, but most everyone seems to accept at least the premise that there is a crisis in the humanities — a “silent crisis,” says Nussbaum. Or as a representative CT commenter puts it, “you won’t be able to get a BA degree from a land-grant university in twenty years.”

Really? This must be an extrapolation from the past 20 years, yes? So, ok, what’s happened to the humanities since 1990?

Here are the numbers, from the 2010 Digest of Education Statistics:

The federal government doesn’t designate particular subjects as “liberal arts,” as far as I know, so I’ve presented two possible definitions. The blue line is the narrower one: English, visual & performing arts, foreign languages, philosophy, and area/ethnic/gender studies. The red line includes all those, plus social sciences, psychology, interdisciplinary studies, and architecture.
What do we see? Well, there was a decline in the share of humanities degrees in the 1970s. But there was some recovery in the 1980s, and since 1990, the proportion has been flat: around 20% (for the broad measure) or a bit over 10% (for the narrow — basically English and its satellites — measure). Whether these proportions ought to be higher, I couldn’t say; but if crisis means a situation that can’t persist, then this is clearly not a crisis. Or at least, it’s a really, really silent one.

Political Economy 101

When he’s right, he’s right:

everything we’re seeing makes sense if you think of the Right as representing the interests of rentiers, of creditors who have claims from the past — bonds, loans, cash — as opposed to people actually trying to make a living through producing stuff. Deflation is hell for workers and business owners, but it’s heaven for creditors. … thinking of what’s happening as the rule of rentiers, who are getting their interests served at the expense of the real economy, helps make sense of the situation.

Or, almost right. Because it isn’t just the Right…

EDIT: It’s interesting to note how reflexively DeLong shied away from this thought when it occurred to him a while back, with the ludicrous-on-its-face argument that only “coupon-clippers with their portfolios 100% in government bonds” could have an interest in deflation. The existence of rentiers as a distinct social class is an unthought in respectable circles. Which shows how impressively disrespectable Krugman is becoming.