Posts in Three Lines

I haven’t been blogging much lately. I’ve been doing real work, some of which will be appearing soon. But if I were blogging, here are some of the posts I might write.


Lessons from the 1990s. I have a new paper coming out from the Roosevelt Institute, arguing that we’re not as close to potential as people at the Fed and elsewhere seem to believe, and as I’ve been talking with people about it, it’s become clear that your priors depend a lot on how you think of the rapid growth of the 1990s. If you think it was a technological one-off, with no value as precedent — a kind of macroeconomic Bush v. Gore — then you’re likely to see today’s low unemployment as reflecting an economy working at full capacity, despite the low employment-population ratio and very weak productivity growth. But if you think the mid-90s is a possible analogue to the situation facing policymakers today, then it seems relevant that the last sustained episode of 4 percent unemployment led not to inflation but to employers actively recruiting new entrants to the labor force among students, poor people, even prisoners.

Inflation nutters. The Fed, of course, doesn’t agree: Undeterred by the complete disappearance of the statistical relationship between unemployment and inflation, they continue to see low unemployment as a threatening sign of incipient inflation (or something) that must be nipped in the bud. Whatever other effects rate increases may have, the historical evidence suggests that one definite consequence will be rising private and public debt ratios. Economists focus disproportionately on the behavioral effects of interest rate changes and ignore their effects on the existing debt stock because “thinking like an economist” means, among other things, thinking in terms of a world in which decisions are made once and for all, in response to “fundamentals” rather than to conditions inherited from the past.

An army with only a signal corps. What are those other effects, though? Arguments for doubting central bankers’ control over macroeconomic outcomes have only gotten stronger since the 2000s, when they were already strong; at the same time, when the ECB says, “let the government of Spain borrow at 2 percent,” it carries only a little less force than the God of Genesis. I think we exaggerate the power of central banks over the real economy, but underestimate their power over financial markets (with the corollary that economists — heterodox as much as mainstream — see finance and real activity as much more tightly linked than they are).

It’s easy to be happy if you’re heterodox. This spring I was at a conference up at the University of Massachusetts, the headwaters of American heterodox economics, where I did my PhD. Seeing all my old friends reminded me what good prospects we in the heterodox world have – literally everyone I know from grad school has a good job. If you are wondering whether your prospects would be better at a nowhere-ranked heterodox economics program like UMass or a top-ranked program in some other social science, let me assure you, it’s the former by a mile — and you’ll probably have better drinking buddies as well.

The euro is not the gold standard. One of the topics I was talking about at the UMass conference was the euro which, I’ve argued, was intended to create something like a new gold standard, a hard financial constraint on governments. But that this was the intention doesn’t mean it’s the reality — in practice the TARGET2 system means that national central banks don’t face any binding constraint; unlike under the gold standard, the central bank is “outside” the national monetary membrane. In this sense the euro is structurally more like Keynes’ proposals at Bretton Woods; it’s just not Keynes running it.

Can jobs be guaranteed? In principle I’m very sympathetic to the widespread (at least among my friends on social media) calls for a job guarantee. It makes sense as a direction of travel, implying a commitment to a much lower unemployment rate, expanded public employment, organizing work to fit people’s capabilities rather than vice versa, and increasing the power of workers vis-a-vis employers. But I have a nagging doubt: A job is contingent by its nature – without the threat of unemployment, can there even be employment as we know it?

The wit and wisdom of Haavelmo. I was talking a while back about Merijn Knibbe’s articles on the disconnect between economic theory and the national accounts with my friend Enno, and he mentioned Trygve Haavelmo’s 1944 article on The Probability Approach in Econometrics, which I’ve finally gotten around to reading. One of the big points of this brilliant article is that economic variables, and the models they enter into, are meaningful only via the concrete practices through which the variables are measured. A bigger point is that we study economics in order to “become master of the happenings of real life”: You can contribute to economics in the course of advancing a political project, or making money in financial markets, or administering a government agency (Keynes did all three), but you will not contribute if you pursue economics as an end in itself.

Coney Island. Laura and I took the boy down to Coney Island a couple days ago, a lovely day, his first roller coaster ride, rambling on the beach, a Cyclones game. One of the wonderful things about Coney Island is how little it’s changed from a century ago — I was rereading Delmore Schwartz’s In Dreams Begin Responsibilities the other day, and the title story’s description of a young immigrant couple walking the boardwalk in 1909 could easily be set today — so it’s disconcerting to think that the boy will never take his grandchildren there. It will all be under water.

Links for July 27, 2016

Labor dynamism and demand. My colleagues Mike Konczal and Marshall Steinbaum have an important new paper out on the decline in new business starts and in labor mobility. They argue that the data don’t support a story where declining labor-market dynamism is the result of supply-side factors like occupational licensing. It looks much more like the result of chronically weak demand for labor, which for whatever reason is not picked up by the conventional unemployment rate. This is obviously relevant to the potential output question I’m interested in — a slowdown in the rate at which workers move to new firms is a natural channel by which weak demand could reduce labor productivity. It’s also a very interesting story in its own right.

Konczal and Steinbaum:

The decline of entrepreneurship and “business dynamism” has become an accepted fact … Explanations for these trends … broadly fall on the supply side: that increasingly onerous occupational licensing impedes entry into certain protected professions and restricts licensed workers to staying where they are; that the high cost of housing thanks to restrictions on development hampers individuals from moving… But we find that the data reject these supply-side explanations: If there were increased restrictions on changing jobs or starting a business, we would expect those few workers and entrepreneurs who do manage to move to enjoy increased wage gains relative to periods with higher worker flows, and we would expect aggressive hiring by employers with vacancies. … Instead, we see the opposite…

We propose a different organizing principle: Declining business dynamism and labor mobility are features of a slackening labor market … workers lucky enough to have formal employment stay where they are rather than striking out as entrepreneurs …

Also in Roosevelt news, here’s a flattering piece about us in the New York Times Magazine.


John Kenneth who? Real World Economics Review polled its subscribers on the most important economics books of the past 100 years. Here’s the top ten. Personally I suspect Debt will have more staying power than Capital in the 21st Century, and I think Minsky’s book John Maynard Keynes is a better statement of his vision than Stabilizing an Unstable Economy, a lot of which is focused on banking-sector developments of the 1970s and 1980s that aren’t of much interest today. But overall it’s a pretty good list. The only one I haven’t read is The Affluent Society. I wonder if anyone under the age of 50 picked that one?


Deflating the elephant. Here is a nice catch from David Rosnick. Branko Milanovic has a well-known graph of changes in global income distribution over 1988-2008. What we see is that, while within most countries there has been increased polarization, at the global level the picture is more complicated. Yes, the top of the distribution has gone way up, and the very bottom has gone down. But the big fall has been in the upper-middle of the distribution — between the 80th and 99th percentiles — while most of the lower part has risen, with the biggest gains coming around the 50th percentile. The decline near the high end is presumably working-class people in rich countries and most people in the former Soviet bloc — who were still near the top of the global distribution in 1988. A big part of the rise in the lower half is China. A natural question is, how much? — what would the distribution look like without China? Milanovic had suggested that the overall picture is still basically the same. But as Rosnick shows, this isn’t true — if you exclude China, the gains in the lower half are much smaller, and incomes over nearly half the distribution are lower in 2008 than 20 years before. It’s hard to see this as anything but a profoundly negative verdict on the Washington Consensus that has ruled the world over the past generation.


By the way, you cannot interpret this — as I at first wrongly did — as meaning that 40 percent of the world’s people have lower incomes than in 1988. It’s less than that. Faster population growth in poor countries would tend to shift the distribution downward even if every individual’s income was rising.
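A toy example may make this clearer. The numbers below are invented purely for illustration, not drawn from Milanovic’s data: every individual’s income rises, yet the income at a fixed percentile of the pooled distribution falls, simply because the poorer group’s population grows faster.

```python
# Hypothetical two-country world: everyone's income rises, but income at a
# fixed percentile falls because the poor country's population grows faster.

def income_at_percentile(populations, incomes, p):
    """Income of the person at percentile p (0-100) of the pooled distribution."""
    total = sum(populations)
    cutoff = p / 100 * total
    cum = 0
    for pop, inc in sorted(zip(populations, incomes), key=lambda x: x[1]):
        cum += pop
        if cum >= cutoff:
            return inc

# "1988": 50 people in a poor country earning 10, 50 in a rich country earning 100
p_then = income_at_percentile([50, 50], [10, 100], 60)   # -> 100

# "2008": every individual's income is up 20%, but the poor population has doubled
p_now = income_at_percentile([100, 50], [12, 120], 60)   # -> 12

print(p_then, p_now)  # the 60th-percentile income falls even as all incomes rise
```

So a falling income at some percentile does not imply that any actual person’s income fell.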


Does nuclear math add up? Over at Crooked Timber, there’s been an interesting comments-thread debate between Will Boisvert (known around here for his vigorous defense of nuclear power) and various nuke antis and skeptics. I’m the farthest thing from an expert, and I can’t claim to be any kind of arbiter. But personally my sympathies are with Will. One important thing he brings out, which I hadn’t thought about enough until now, is the difference between electricity and most other commodities. Part of the problem is the very large share of fixed costs — as the Crotty-Minsky-Perelman strain of Keynesians have emphasized, capitalism does badly with long-lived capital assets. A more distinctive problem is the time dimension — electricity produced at one time is not a good substitute for electricity produced at a different time, even just an hour before or after. Electricity cannot be stored economically at a meaningful scale, nor — given that almost everything in modern civilization uses it — can its consumption be easily shifted in time. This means that straightforward comparisons of cost per kilowatt-hour — hard enough to produce, given the predominance of fixed costs — can be misleading. Regardless of costs, intermittent sources — like wind or solar — have to be balanced by sources that can be turned on anytime — which, in the absence of nuclear, means fossil fuels.

Do you believe, as I do, that climate change is the great challenge facing humanity in the next generation? Then this is a very strong argument for nuclear power. Whatever its downsides, they are not as bad as boiling the oceans. Still, it’s not a decisive argument. The other big questions are the costs of power storage and of more extensive transmission networks — since when the sun isn’t shining and the wind isn’t blowing in one place, they probably are somewhere else. (I agree with Will that using the price mechanism to force electricity usage to conform to supply from renewables is definitely the wrong answer.) The CT debate doesn’t answer those questions. But it’s still an example of how informative blog debate can be when there are people on both sides with real expertise who are prepared to engage seriously with each other.


On other blogs, other wonders. Here is a fascinating post by Laura Tanenbaum on the end of sex-segregated job ads and the false dichotomy between “elite” and “grassroots” feminism.

This very interesting article by Jose Azar on the extent and economic significance of common ownership of corporate shares deserves a post of its own.

Here’s a nice little think piece from Bloomberg wondering what, if anything, is meant by “the natural rate” of interest. I’m glad to see some skepticism about this concept in the larger conversation. In my mind, the “natural rate” is one of the key patches covering over the disconnect between economic theory and the observable economy.

Bhenn Bhiorach has a funny post on the lengths people will go to to claim that low inflation is really high inflation.

Is Capital Being Reallocated to High-Tech Industries?

Readers of this blog are familiar with the “short-termism” position: Because of the rise in shareholder power, the marginal use of funds for many corporations is no longer fixed investment, but increased payouts in the form of dividends and share buybacks. We’re already seeing some backlash against this view; I expect we’ll be seeing lots more.

The claim on the other side is that increased payouts from established corporations are nothing to worry about, because they increase the funds available to newer firms and sectors. We are trying to explore the evidence on this empirically. In a previous post, I asked if the shareholder revolution had been followed by an increase in the share of smaller, newer firms. I concluded that it didn’t look like it. Now, in this post and the following one, we’ll look at things by industry.

In that earlier post, I focused on publicly traded corporations. I know some people don’t like this — new companies, after all, aren’t going to be publicly traded. Of course in an ideal world we would not limit this kind of analysis to publicly traded firms. But for the moment, this is where the data is; by their nature, publicly traded corporations are much more transparent than other kinds of businesses, so for a lot of questions that’s where you have to go. (Maybe one day I’ll get funding to purchase access to firm-level financial data for nontraded firms; but even then I doubt it would be possible to do the sort of historical analysis I’m interested in.) Anyway, it seems unlikely that the behavior of privately held corporations is radically different from publicly traded ones; I have a hard time imagining a set of institutions that reliably channel funds to smaller, newer firms but stop working entirely as soon as they are listed on a stock market. And I’m getting a bit impatient with people who seem to use the possibility that things might look totally different in the part of the economy that’s hard to see as an excuse for ignoring what’s happening in the parts we do see.

Besides, the magnitudes don’t work. Publicly traded corporations continue to account for the bulk of economic activity in the US. For example, we can compare the total assets of the nonfinancial corporate sector, including closely held corporations, with the total assets of publicly traded firms listed in the Compustat database. Over the past decade, the latter number is consistently around 90 percent of the former. Other comparisons will give somewhat different values, but no matter how you measure, publicly traded corporations account for the bulk of corporate activity in the US. Anyway, for better or worse, I’m again looking at publicly traded firms here.

In the simplest version of the capital-reallocation story, payouts from old, declining industries are, thanks to the magic of the capital markets, used to fund investment in new, technology-intensive industries. So the obvious question is, has there in fact been a shift in investment from the old smokestack industries to the newer high-tech ones?

One problem is defining investment. The accounting rules followed by American businesses generally allow an expense to be capitalized only when it is associated with a tangible asset. R&D spending, in particular, must be treated as a current cost. The BEA, however, has since 2013 treated R&D spending, along with other forms of intellectual property production, as a form of investment. R&D does have investment-like properties; arguably it’s the most relevant form of investment for some technology-intensive sectors. But the problem with redefining investment this way is that it creates inconsistencies with the data reported by individual companies, and with other aggregate data. For one thing, if R&D is capitalized rather than expensed, then profits have to be increased by the same amount. And then some assumptions have to be made about the depreciation rate of intellectual property, resulting in a pseudo asset in the aggregate statistics that is not reported on any company’s books. I’m not sure what the best solution is. [1]
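For concreteness, here is a minimal sketch of how a BEA-style perpetual-inventory capitalization of R&D works. The numbers are invented — a flat $100 per year of R&D and an assumed 20 percent geometric depreciation rate (the BEA’s actual rates vary by type of intellectual property) — but the mechanics show why profits get marked up and a pseudo-asset appears in the aggregate statistics:

```python
# Stylized perpetual-inventory capitalization of R&D (illustrative numbers only).
# Capitalizing raises measured profit while the R&D "stock" is building up,
# and creates a pseudo-asset that appears on no company's books.

def capitalize_rd(rd_spending, delta=0.2, years=30):
    """Return yearly (R&D stock, profit adjustment) under geometric depreciation delta."""
    stock = 0.0
    rows = []
    for _ in range(years):
        depreciation = delta * stock
        stock = stock + rd_spending - depreciation
        # Profit adjustment: R&D added back as investment, depreciation subtracted
        profit_adjustment = rd_spending - depreciation
        rows.append((stock, profit_adjustment))
    return rows

rows = capitalize_rd(rd_spending=100)
print(rows[0])    # year 1: stock of 100, profits boosted by the full 100
print(rows[-1])   # near steady state: stock approaches 500, adjustment approaches 0
```

In steady state the stock converges to spending divided by the depreciation rate (here 100/0.2 = 500), and the profit adjustment washes out; in the transition, though, measured profits are higher than anything any firm reports.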

Fortunately, companies do report R&D as a separate component of expenses, so it is possible to use either definition of investment with firm-level data from Compustat. The following figure shows the share of total corporate investment, under each definition, of a group of seven high-tech industries: drugs; computers; communications equipment; medical equipment; scientific equipment; other electronic goods; and software and data processing. [2]


As you can see, R&D spending is very important for these industries; for the past 20 years, it has consistently exceeded investment spending as traditionally defined. Using the older, narrow definition, these industries account for no greater share of investment in the US than they did 50 years ago; with R&D included, their share of total investment has more than doubled. But both measures show the high-tech share of investment peaking in the late 1990s; for the past 15 years, it has steadily declined.

Obviously, this doesn’t tell us anything about why investment has stalled in these industries since the end of the tech boom. But it does at least suggest some problems with a simple story in which financial markets reallocate capital from old industries to newer ones.

The next figure breaks out the industries within the high-tech group. Here we’re looking at the broad measure of investment, which includes R&D.


As you can see, the decline in high-tech investment is consistent across the high-tech sectors. While the exact timing varies, in the 1980s and 1990s all of these sectors saw a rising share of investment; in the past 15 years, none have. [3]  So we can safely say: In the universe of publicly traded corporations, the sectors we think would benefit from reallocation of capital were indeed investing heavily in the decades before 2000; but since then, they have not been. The decline in investment spending in the pharmaceutical industry — which, again, includes R&D spending on new drugs — is especially striking.

Where has investment been growing, then? Here:


The red lines show broad and narrow investment for oil and gas and related industries — SICs 101-138, 291-299, and 492. Either way you measure investment, the increase over the past 15 years has dwarfed that in any other industry. Note that oil and gas, unlike the high-tech industries, is less R&D-intensive than the corporate sector as a whole. Looking only at plant and equipment, fossil fuels account for 40 percent of total corporate investment; by this measure, in some recent years, investment here has exceeded that of all manufacturing together. With R&D included, by contrast, fossil fuels account for “only” a third of US investment.

In the next post, I’ll look at the other key financial flows — cashflow from operations, shareholder payouts, and borrowing — for the tech industries, compared with corporations in general. As we’ll see, while at one point payouts were lower in these industries than elsewhere, over the past 15 years they have increased even faster than for publicly traded corporations as a whole. In the meantime:

Very few of the people talking about the dynamic way American financial markets reallocate capital have, I suspect, a clear idea of the actual reallocation that is taking place. Save for another time the question of whether this huge growth in fossil fuel extraction is a good thing for the United States or the world. (Spoiler: It’s very bad.) I think it’s hard to argue with a straight face that shareholder payouts at Apple or GE are what’s funding fracking in North Dakota.


[1] This seems to be part of a larger phenomenon of the official statistical agencies being pulled into the orbit of economic theory and away from business accounting practices. It seems to me that allowing the official statistics to drift away from the statistics actually used by households and businesses creates all kinds of problems.

[2] Specifically, it is SICs 283, 357, 366, 367, 382, 384, and 737. I took this specific definition from Brown, Fazzari and Petersen. It seems to be standard in the literature.

[3] Since you are probably wondering: About two-thirds of that spike in software investment around 1970 is IBM, with Xerox and Unisys accounting for most of the rest.

In Comments: The Lessons of Fukushima

I’d like to promise a more regular posting schedule here. On the other hand, seeing as I’m on the academic job market this fall (anybody want to hire a radical economist?) I really shouldn’t be blogging at all. So on balance we’ll probably just keep staggering along as usual.

But! In lieu of new posts, I really strongly recommend checking out the epic comment thread on Will Boisvert’s recent post on the lessons of Fukushima. It’s well over 100 comments, which, while no big deal for real blogs, is off the charts here. But more importantly, they’re almost all serious & thoughtful, and a number evidently come from people with real expertise in nuclear and/or alternative energy. Which just goes to show: If you bring data and logic to the conversation, people will respond in kind.

Fukushima Update: How Safe Can a Nuclear Meltdown Get?

by Will Boisvert

Last summer I posted an essay here arguing that nuclear power is a lot safer than people think—about a hundred times safer than our fossil fuel-dominated power system. At the time I predicted that the impact of the March 2011 Fukushima Daiichi nuclear plant accident in Japan would be small. A year later, now that we have a better fix on the consequences of the Fukushima meltdowns, I’ll have to revise “small” to “microscopic.” The accumulating data and scientific studies on the Fukushima accident reveal that radiation doses are and will remain low, that health effects will be minor and imperceptible, and that the traumatic evacuation itself from the area around the plant may well have been unwarranted. Far from the apocalypse that opponents of nuclear energy anticipated, the Fukushima spew looks like a fizzle, one that should drastically alter our understanding of the risks of nuclear power.

Anti-nuke commentators like Arnie Gundersen continue to issue forecasts of a million or more long-term casualties from Fukushima radiation. (So far there have been none.) But the emerging scientific consensus is that the long-term health consequences of the radioactivity, particularly cancer fatalities, will be modest to nil. At the high end of specific estimates, for example, Princeton physicist Frank von Hippel, writing in the nuke-dreading Bulletin of the Atomic Scientists, reckons an eventual one thousand fatal cancers arising from the spew.

Now there’s a new peer-reviewed paper by Stanford’s Mark Z. Jacobson and John Ten Hoeve that predicts remarkably few casualties. (Jacobson, you may remember, wrote a noted Scientific American article proposing an all-renewable energy system for the world.) They used a supercomputer to model the spread of radionuclides from the Fukushima reactors around the globe, and then calculated the resulting radiation doses and cancer cases through the year 2061. Their result: a probable 130 fatal cancers, with a range from 15 to 1300, in the whole world over fifty years. (Because radiation exposures will have subsided to insignificant levels by then, these cases comprise virtually all that will ever occur.) They also simulated a hypothetical Fukushima-scale meltdown of the Diablo Canyon nuclear power plant in California, and calculated a likely cancer death toll of 170, with a range from 24 to 1400.

To put these figures in context, pollution from American coal-fired power plants alone kills about 13,000 people every year. The Stanford estimates therefore indicate that the Fukushima spew, the only significant nuclear accident in 25 years, will likely kill fewer people over five decades than America’s coal-fired power plants kill every five days to five weeks. Worldwide, coal plants kill over 200,000 people each year—150 times more deaths than the high-end Fukushima forecasts predict over a half century.
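The arithmetic behind that comparison, using the figures quoted above, is straightforward to check:

```python
# Checking the coal-vs-Fukushima comparison, using the figures quoted in the text.
coal_us_deaths_per_year = 13_000
fukushima_50yr = {"low": 15, "central": 130, "high": 1300}

deaths_per_day = coal_us_deaths_per_year / 365
print(5 * deaths_per_day)       # ~178: US coal deaths in five days vs. central estimate of 130
print(5 * 7 * deaths_per_day)   # ~1246: US coal deaths in five weeks vs. high estimate of 1300

world_coal_per_year = 200_000
print(world_coal_per_year / fukushima_50yr["high"])  # ~154: the "150 times" figure
```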

We’ll probably never know whether these projected Fukushima fatalities come to pass or not. The projections are calculated by multiplying radiation doses by standard risk factors derived from high-dose exposures; these risk factors are generally assumed—but not proven—to hold up at the low doses that nuclear spews emit. Radiation is such a weak carcinogen that scientists just can’t tell for certain whether it causes any harm at all below a dose of 100 millisieverts (100 mSv). Even if it does, it’s virtually impossible to discern such tiny changes in cancer rates in epidemiological studies. Anti-nukes give that fact a paranoid spin by warning of “hidden cancer deaths.” But if you ask me, risks that are too small to measure are too small to worry about.

The Stanford study relied on a computer simulation, but empirical studies of radiation doses support the picture of negligible effects from the Fukushima spew.

In a direct measurement of radiation exposure, officials in Fukushima City, about 40 miles from the nuclear plant, made 37,000 schoolchildren wear dosimeters around the clock during September, October and December, 2011, to see how much radiation they soaked up. Over those three months, 99 percent of the participants absorbed less than 1 mSv, with an average external dose of 0.26 mSv. Doubling that to account for internal exposure from ingested radionuclides gives an annual dose of 2.08 mSv. That’s a pretty small dose, about one third the natural radiation dose in Denver, with its high altitude and abundant radon gas, and many times too small to cause any measurable up-tick in cancer rates. At the time, the outdoor air-dose rate in Fukushima was about 1 microsievert per hour (or about 8.8 mSv per year), so the absorbed external dose was only about one eighth of the ambient dose. That’s because the radiation is mainly gamma rays emanating from radioactive cesium in the soil, which are absorbed by air and blocked by walls and roofs. Since people spend most of their time indoors at a distance from soil—often on upper floors of houses and apartment buildings—they are shielded from most of the outdoor radiation.
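For what it’s worth, the dose arithmetic in the paragraph above can be reproduced directly from the figures as given:

```python
# Reproducing the Fukushima City dose arithmetic (figures as given in the text).
external_3mo = 0.26                 # mSv, average external dose over three months
external_annual = external_3mo * 4  # annualized external dose: 1.04 mSv/year
total_annual = external_annual * 2  # doubled for internal exposure: 2.08 mSv/year

ambient_rate = 1e-3                       # mSv per hour outdoors (1 microsievert/hour)
ambient_annual = ambient_rate * 24 * 365  # ~8.8 mSv/year ambient

print(total_annual)                      # 2.08
print(external_annual / ambient_annual)  # ~0.12, i.e. about one eighth of ambient
```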

Efforts to abate these low-level exposures will be massive—and probably redundant. The Japanese government has budgeted $14 billion for cleanup over thirty years and has set an immediate target of reducing radiation levels by 50 percent over two years. But most of that abatement will come from natural processes—radioactive decay and weathering that washes radio-cesium deep into the soil or into underwater sediments, where it stops irradiating people—that will reduce radiation exposures on their own by 40% over two years. (Contrary to the centuries-of-devastation trope, cesium radioactivity clears from the land fairly quickly.) The extra 10 percent reduction the cleanup may achieve over two years could be accomplished by simply doing nothing for three years. Over 30 years the radioactivity will naturally decline by at least 90 percent, so much of the cleanup will be overkill, more a political gesture than a substantial remediation. Little public-health benefit will flow from all that, because there was little radiation risk to begin with.
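That do-nothing-for-three-years claim follows from simple exponential-decay arithmetic, if we assume a constant clearance rate calibrated to the 40-percent-in-two-years figure quoted above:

```python
# Exponential clearance calibrated to a 40% reduction over two years,
# as quoted in the text (an assumed constant-rate simplification).
import math

rate = -math.log(0.60) / 2  # per-year clearance rate implied by 40% in two years

def remaining(t):
    """Fraction of radiation exposure remaining after t years."""
    return math.exp(-rate * t)

print(1 - remaining(2))   # 0.40: the natural two-year reduction
print(1 - remaining(3))   # ~0.54: waiting one more year already beats the 50% target
print(1 - remaining(30))  # >0.99: consistent with "at least 90 percent" over 30 years
```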

How little? Well, an extraordinary wrinkle of the Stanford study is that it calculated the figure of 130 fatal cancers by assuming that there had been no evacuation from the 20-kilometer zone around the nuclear plant. You may remember the widely televised scenes from that evacuation, featuring huddled refugees and young children getting wanded down with radiation detectors by doctors in haz-mat suits. Those images of terror and contagion reinforced the belief that the 20-km zone is a radioactive killing field that will be uninhabitable for eons. The Stanford researchers endorse that notion, writing in their introduction that “the radiation release poisoned local water and food supplies and created a dead-zone of several hundred square kilometers around the site that may not be safe to inhabit for decades to centuries.”

But later in their paper Jacobson and Ten Hoeve actually quantify the deadliness of the “dead-zone”—and it turns out to be a reasonably healthy place. They calculate that the evacuation from the 20-km zone probably prevented all of 28 cancer deaths, with a lower bound of 3 and an upper bound of 245. Let me spell out what that means: if the roughly 100,000 people who lived in the 20-km evacuation zone had not evacuated, and had just kept on living there for 50 years on the most contaminated land in Fukushima prefecture, then probably 28 of them—and at most 245—would have incurred a fatal cancer because of the fallout from the stricken reactors. At the very high end, that’s a fatality risk of 0.245%, which is pretty small—about half as big as an American’s chances of dying in a car crash. Jacobson and Ten Hoeve compare those numbers to the 600 old and sick people who really did die during the evacuation from the trauma of forced relocation. “Interestingly,” they write, “the upper bound projection of lives saved from the evacuation is lower than the number of deaths already caused by the evacuation itself.”

That observation sure is interesting, and it raises an obvious question: does it make sense to evacuate during a nuclear meltdown?

In my opinion—not theirs—it doesn’t. I don’t take the Stanford study as gospel; its estimate of risks in the evacuation zone (EZ) strikes me as a bit too low. Taking its numbers into account along with new data on cesium clearance rates and the discrepancy between ambient external radiation and absorbed doses, I think a reasonable guesstimate of ultimate cancer fatalities in the EZ, had it never been evacuated, would be several hundred up to a thousand. (Again, probably too few to observe in epidemiological studies.) The crux of the issue is whether immediate radiation exposures from inhalation outweigh long-term exposures emanating from radioactive soil. Do you get more cancer risk from breathing in the radioactive cloud in the first month of the spew, or from the decades of radio-cesium “groundshine” after the cloud disperses? Jacobson and Ten Hoeve’s model assigns most of the risk to the cloud, while other calculations, including mine, give more weight to groundshine.

But from the standpoint of evacuation policy, the distinction may be moot. If the Stanford model is right, then evacuations are clearly wrong—the radiation risks are trivial and the disruptions of the evacuation too onerous. But if, on the other hand, cancer risks are dominated by cesium groundshine, then precipitate forced evacuations are still wrong, because those exposures only build up slowly. The immediate danger in a spew is thyroid cancer risk to kids exposed to iodine-131, but that can be counteracted with potassium iodide pills or just by barring children from drinking milk from cows feeding on contaminated grass for the three months it takes the radio-iodine to decay away. If that’s taken care of, then people can stay put for a while without accumulating dangerous exposures from radio-cesium.

Data from empirical studies of heavily contaminated areas support the idea that rapid evacuations are unnecessary. The Japanese government used questionnaires correlated with air-dose readings to estimate the radiation doses received in the four months immediately after the March meltdown in the townships of Namie, Iitate and Kawamata, a region just to the northwest of the 20-kilometer exclusion zone. This area was in the path of an intense fallout plume and incurred contamination comparable to levels inside the EZ; it was itself evacuated starting in late May. The people there were the most irradiated in all Japan, yet even so the radiation doses they received over those four months, at the height of the spew, were modest. Out of 9,747 people surveyed, 5,636 got doses of less than 1 millisievert, 4,040 got doses between 1 and 10 mSv, and 71 got doses between 10 and 23 mSv. Assuming everyone was at the high end of their dose category, and applying a standard risk factor of 570 cancer fatalities per 100,000 people exposed to 100 mSv, we would expect a grand total of three cancer deaths among those 10,000 people over a lifetime from that four-month exposure. (As always, these calculated casualties are purely conjectural—far too few to ever “see” in epidemiological statistics.)
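The expected-death figure above can be re-derived in a few lines. This sketch uses only the survey brackets and the 570-per-100,000-per-100-mSv risk factor quoted in the post, and follows its worst-case convention of putting everyone at the top of their dose bracket:

```python
# Risk factor: 570 fatal cancers per 100,000 people per 100 mSv,
# expressed here as deaths per person per mSv.
RISK_PER_PERSON_MSV = 570 / 100_000 / 100

# Survey brackets: (people, assumed dose in mSv, taken at bracket top).
brackets = [
    (5636, 1),    # reported < 1 mSv
    (4040, 10),   # 1-10 mSv
    (71, 23),     # 10-23 mSv
]

collective_dose = sum(n * dose for n, dose in brackets)  # person-mSv
expected_deaths = collective_dose * RISK_PER_PERSON_MSV
print(collective_dose, round(expected_deaths, 1))  # 47669 person-mSv, ~2.7 deaths
```

Rounding 2.7 up gives the post’s “grand total of three cancer deaths,” and the true figure would be lower still, since most people sat below the top of their bracket.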

Those numbers indicate that cancer risks in the immediate aftermath of a spew are tiny, even in very heavily contaminated areas. (Provided, always, that kids are kept from drinking iodine-contaminated milk.) Hasty evacuations are therefore needless. There’s time to make a considered decision about whether to relocate—not hours and days, but months and years.

And that choice should be left to residents. It makes no sense to roust retirees from their homes because of radiation levels that will raise their cancer risk by at most a few percent over decades. People can decide for themselves—to flee or not to flee—based on fallout in their vicinity and any other factors they think important. Relocation assistance should be predicated on an understanding that most places, even close to a stricken plant, will remain habitable and fit for most purposes. The vast “costs” of cleanup and compensation that have been attributed to the Fukushima accident are mostly an illusion or the product of overreaction, not the result of any objective harm caused by radioactivity.

Ultimately, the key to rational policy is to understand the kind of risk that nuclear accidents pose. We have a folk-conception of radiation as a kind of slow-acting nerve gas—the merest whiff will definitely kill you, if only after many years. That risk profile justifies panicked flight and endless quarantine after a radioactivity release, but it’s largely a myth. In reality, nuclear meltdowns present a one-in-a-hundred chance of injury. On the spectrum of threat they occupy a fairly innocuous position: somewhere above lightning strikes, in the same ballpark as driving a car or moving to a smoggy city, considerably lower than eating junk food. And that’s only for people residing in the maximally contaminated epicenter of a once-a-generation spew. For everyone else, including almost everyone in Fukushima prefecture itself, the risks are negligible, if they exist at all.

Unfortunately, the Fukushima accident has heightened public misunderstanding of nuclear risks, thanks to long-ingrained cultural associations of fission with nuclear war, the Japanese government’s hysterical evacuation orders and haz-mat mobilizations, and the alarmism of anti-nuke ideologues. The result is an anti-nuclear backlash and the shutdown of Japanese and German nukes, which is by far the most harmful consequence of the spew. These fifty-odd reactors could be brought back online immediately to displace an equal gigawattage of coal-fired electricity, and would prevent the emission of hundreds of millions of tons of carbon dioxide each year, as well as thousands of deaths from air pollution. But instead of calling for the restart of these nuclear plants, Greens have stoked huge crowds in Japan and elsewhere into marching against them. If this movement prevails, the environmental and health effects will be worse than those of any pipeline, fracking project or tar-sands development yet proposed.

But there may be a silver lining if the growing scientific consensus on the effects of the Fukushima spew triggers a paradigm shift. Nuclear accidents, far from being the world-imperiling crises of popular lore, are in fact low-stakes, low-impact events with consequences that are usually too small to matter or even detect. There’s been much talk over the past year about the need to digest “the lessons of Fukushima.” Here’s the most important and incontrovertible one: even when it melts down and blows up, nuclear power is safe.

Guest Post from Will Boisvert

Just above this is a long post by Will Boisvert on the relative risks of nuclear power in the light of the Fukushima disaster. It’s very long, but (in my opinion) very worth reading. I haven’t seen any comparably thorough discussion elsewhere.

For whatever it’s worth, while I’m not competent to evaluate every specific factual claim here, on the big picture I’m convinced. Boisvert is right. The practical alternative to nuclear power is fossil fuels, and by every metric fossil fuels are much worse, even setting climate change aside. (Include climate change and fossil fuels are much, much, much worse.) There are quite a few people I respect who don’t agree; I hope they’ll read this piece and take its arguments seriously. The takeaway: “Even if you accept [the worst-case estimates of the death tolls from past nuclear disasters], there is less than a one-in-25 chance that, next year, a Chernobyl-scale nuclear disaster will kill a quarter of a million people; there is a dead certainty that coal power will kill that many.”

I hope Will will post more here in the future, but as always, who knows.

Exterminate the Brutes, er, Zombies!

At the bar the other night, they had The Walking Dead on. We do seem to be in a zombie moment right now. One can’t but wonder what it means. I hadn’t seen the show before, but I did read the comic books it’s based on. (Whatever; I like comics.) The comic version is notable for having the least threatening zombies around; in one scene, a normal guy is trapped overnight in a room with dozens of zombies, and kills them all. With his bare hands. Sure, you don’t want them to bite you, but that goes for bedbugs too. (It’s also notable for its exceptionally blatant ripoffs of other zombie stories, like the opening lifted straight from 28 Days Later. But maybe that sort of borrowing is the sign of a vital popular form?)

More to the point, it, even more than the run of post-apocalypse survival tales, valorizes traditional, masculine authority. Not for nothing is it set in the South, with a cop for a main character; that’s a departure from most of these stories, which get their juice precisely from the ordinariness of their protagonists. My friend Ben makes the interesting observation that a very large proportion of horror movies are set in decaying industrial landscapes. But that’s not the case with The Walking Dead. There, the spaces the human characters defend against the zombies are iconic enclaves of order: a gated subdivision, a prison. Their central challenge, literal and metaphorical, is to keep the fences in place. And on the other side of the fences, the zombies.

The specific characteristic of the zombie, as opposed to other horror-genre monsters, is its lack of individuality. Zombies look human but have no minds, souls or personalities. Their behavior is mechanical, and they only ever appear in groups. The classic vampire story is of the monster stealthily infiltrating our society. You can’t tell that story about zombies; they have to be everywhere. Nor can you deter or manage them; they don’t follow the various rules vampires are supposed to. All you can do is kill them.

Indeed, one of the themes of the comic-book Walking Dead is the danger of empathizing with the zombies. In one plot arc, a group of farmers are keeping their zombified relatives and neighbors locked in a barn (again, these are some seriously wimpy zombies) in the hope that they’re somehow recoverable. The heroes, naturally, put aside sentimentality and exterminate them. They may look human, is the point, but they’re really just part of the formless, threatening mass.

The idea of a small group of civilized people holding some redoubt against a human-looking but impersonal mass is a familiar one in the culture, from Fort Apache to Fort Apache, the Bronx. (My father used to point out that the trope of the small band of white settlers facing a mass of Indians stretching across the horizon reversed the historical situation almost exactly.) In this sense zombies slot neatly into some important political myths as well. It’s not a coincidence that in Max Brooks’ World War Z, the most mainstream recent zombie book, the two countries best prepared to deal with the worldwide zombie plague are Israel and South Africa, the latter explicitly thanks to apartheid-era plans for defense of the white minority against the African hordes. In terms of the logic of zombie stories, Brooks made a good choice. The idea of a small group of fully human individuals defending themselves against a faceless, anonymous mass has deep roots, but it comes most clearly to the surface in settler societies. Here is Mario Vargas Llosa, for example, on the original confrontation between his Spanish ancestors and the ancestors of the Indian and mestizo poor all around him:

Men like Father Bartolome de Las Casas came to America with the conquistadores and abandoned the ranks in order to collaborate with the vanquished… This self-determination could not have been possible among the Incas or any of the other pre-Hispanic cultures. In these cultures, as in the other great civilizations of history foreign to the West, the individual could not morally question the social organism of which he was a part, because he existed only as an integral atom of that organism and because for him the dictates of the state could not be separated from morality.
It seems to me useless to ask … whether it would have been better for humanity if the individual had never been born and the tradition of the antlike societies had continued forever.

There’s the settler creed, with unusual frankness. We are capable of moral choices; they — that is, everyone “foreign to the West” — have no individual existence, but are only parts of a larger organism. We can sympathize with them; they can’t even sympathize with themselves. We are human; they are “antlike.” Or zombielike.

But why now? Well, of course the entertainment industry needs new material; vampires are mostly played out and werewolves don’t seem to touch any commercially viable anxieties. (Maybe this one will do better.) James Frey is betting on aliens; we’ll see.

But there might be a deeper reason. Look at that picture above, of the zombies pressing up against the fence. It doesn’t take a degree in semiology to see what that represents. But it’s not just the border. My friend Christian, who is finishing a book on the politics of global warming, describes one of the main forms of adaptation in the rich countries as the armed lifeboat. It’s adaptation to climate change as exclusion and repression, and that’s much easier if you can imagine the excluded as faceless ant people. If we don’t find a better way to translate climate change into a political vision that can mobilize people, then the white policeman with the gun, ruthlessly exterminating the masses outside the lager and strictly maintaining order inside it, is an idea we may be increasingly asked to become comfortable with. If so, one could read zombie tales like The Walking Dead as a warning — or, less charitably, as helping to prepare the way.

Wind, Rising

Here’s an interesting datapoint: According to the US Energy Information Administration, fully 50 percent of the net new electricity generation capacity added in 2008 was from wind power. (8,300 megawatts out of a total of 19,000 megawatts of new capacity; meanwhile, 2,600 megawatts’ worth of fossil-fuel capacity was retired.) This is very exciting; it’s clear that, despite some truly foolish opposition (what’s wrong with those people? wind turbines are beautiful), wind power has reached takeoff as a commercially viable industry.
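The “50 percent” is a net share, meaning wind additions divided by total additions minus retirements. A quick check, using only the EIA figures quoted above (all values in megawatts):

```python
# 2008 US generating-capacity changes, per the EIA figures in the post.
wind_added = 8_300
total_added = 19_000
fossil_retired = 2_600

net_added = total_added - fossil_retired       # 16,400 MW net new capacity
wind_share = wind_added / net_added            # wind's share of net additions
print(f"{wind_share:.0%}")  # prints 51%
```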

If we are going to preserve a habitable planet, a big challenge is threading the needle between complacency and despair. So it’s important to balance the bad news about the scope of the problem with good news about its solvability.

(If you want to bend the stick back the other way, you could pick up James Hansen’s Storms of My Grandchildren and read the chapter on the Venus syndrome. Terrifying.)