The Future of Health Care Reform

This is a guest post by Michael Kinnucan.

The Collapsing Center and Solidifying Periphery of the US Healthcare System

Contrary to what most people on the US left might tell you, there’s nothing intrinsically impossible about building a healthcare system that provides universal coverage on the foundation of employer-sponsored insurance. Germany and France and several other countries have done it, and we could do it too. The way you do it is to start with core-economy full-time workers and their families, and then steadily patch and regulate your way to universal coverage (“what about retirees? The unemployed? Freelancers? What happens when people change jobs? What about employers too small to offer coverage?” and so forth) until you’ve covered everyone. This kind of system will never be quite as seamless and efficient as single-payer, but it is workable. 

What has made this effort uniquely difficult in the US case, however, has been the spiraling overall cost of US healthcare. Virtually all healthcare systems in the developed world–including multi-payer systems like Germany’s–are built on a firm foundation of medical price control. US observers are acutely aware of this in the case of pharmaceuticals, but the situation is similar across the healthcare industry; Germany, for instance, sets the price of physicians’ services and hospital care through regional sectoral bargaining. 

The US, for political reasons, has proven incapable of imposing similar discipline on the healthcare market. Prices are negotiated in a medical marketplace where the sellers of healthcare hold significant market power, and this process is intrinsically inflationary. This inflation has been far more intense in the employer-insurance market than in the public sector, particularly since the mid-1980s; Medicare and private-insurance prices have diverged to the point where commercial insurers pay on average 254% of Medicare for the same procedures.

This inflationary dynamic has put continuous pressure on the employer-sponsored insurance market, with large, high-margin businesses complaining about the ever-growing cost of healthcare while smaller and lower-wage businesses simply restrict or cancel coverage.  Thus would-be US healthcare reformers have found themselves in the strange position of trying to “patch” marginal populations into a system centered on employer-based coverage even as the center of the system constantly threatens to collapse. 

Thus, while the US public tends to equate employer-based coverage with quality and stability and to imagine healthcare reform as the process of granting new populations access to that quality and stability, in fact employer-sponsored insurance has continuously declined in quality and occasionally been faced with a death spiral in the face of constantly increasing costs. And while proponents of universal healthcare tend to be motivated by the plight of those locked out of the employer-based healthcare system (the poor, the unemployed), major efforts at healthcare reform have often been driven not by the problems of these groups but by problems within the employer market.

The Cost Control Deadlock

This dynamic has shaped mainstream US healthcare reform efforts since the Carter administration. The ambition of reformers has been to simultaneously expand coverage and control costs. This double aim is frequently given a superficial fiscal gloss (coverage expansion is “paid for” through cost control), but its real logic is political. Proponents of this strategy hope to (1) use the promise of cost control (in the employer market) to guarantee business support for coverage expansion (generally through public programs), while simultaneously (2) using coverage expansion (providing more paying customers for the healthcare industry) to mitigate healthcare industry opposition to cost control (reducing aggregate payments to the healthcare industry).

The logic of this interlocking set of political bargains has proven more compelling in theory than successful in practice. More specifically, the US political system has revealed a systematic preference for simply spending more money to expand coverage without doing much to achieve cost control–particularly employer-market cost control. The healthcare lobby has shown itself to be very focused on opposing cost control and highly effective in doing so, to the point where even obviously egregious abuses that provoke nominally bipartisan opposition have taken decades to address (so-called “surprise billing,” for instance, or the blank check to pharmaceutical companies incorporated in Medicare Part D). The central political lesson of US healthcare reform efforts going all the way back to Truman is that it’s virtually impossible to pass major reform without buying off the provider lobbies.

For this reason reformers have tended to want to hide the ball on cost control, avoiding obvious and internationally well-known methods like price control and national budgeting in favor of Rube Goldberg “managed competition” and “value-based payment” schemes that are unpopular with patients, difficult for the public to understand and of questionable efficacy in any case. 

The business lobby, in turn–which reformers have for decades seen as the natural constituency for cost control–has tended to take the clear downsides of reform (higher taxes and more regulation) more seriously than the alleged upside of long-term cost control, and to put more faith in the tried-and-true method of shifting costs onto employees than in regulatory schemes to achieve savings. Business (at least big business) tends to like the idea of cost-control-oriented healthcare reform in theory, but in practice has proven a fickle ally for reformers.

Abandoning Cost Control and Achieving Coverage: The Legacy of the ACA

This situation represents a deadlock for what used to be called “comprehensive” healthcare reform, but no such deadlock applies to the far simpler project of simply using tax dollars to pay for expanded healthcare coverage. Such a strategy may face opposition from fiscal conservatives, but it is enduringly popular with the US public (who have long been committed to the idea of universal healthcare) and under the right conditions can easily win support from the healthcare lobbies (who stand to attract those public dollars). The political project of “comprehensive” healthcare reform died a famous death in 1993, but the political project of “spending public money to buy people healthcare” scored notable successes, including a steady expansion of Medicaid eligibility and the passage of CHIP during the Clinton administration and the passage of Medicare Part D under George W. Bush. 

The situation, in other words, was the very opposite of how progressives have sometimes described it–it’s not that the US political system wants universal health coverage but is too stingy to pay for it, but rather that the US political system is perfectly prepared to do universal coverage as long as no significant savings are attached.

The ACA was the culmination of this tradition. That likely wasn’t what its architects intended–healthcare wonks still dreamed of “bending the cost curve”–but it was what the law did. While the ACA is best remembered for creating the “individual market” with its famous three-legged stool, the real story is simpler: The ACA spent roughly a trillion dollars over 10 years to cover roughly 30 million people through a combination of free Medicaid and very heavily subsidized private insurance, with the funding coming not from comprehensive cost control but from tax revenue and suppression of Medicare cost increases.1

From McDonough, Inside National Health Reform, p. 282.

One wrinkle to this reform strategy was the risk of employer “dumping”: if the government was prepared to heavily subsidize working-class insurance coverage, why wouldn’t employers–and workers, for that matter–simply go where the subsidies were? This was a particularly significant risk for low-wage workers; such workers were eligible for very significant subsidies on the exchange, their employers would be eager to control costs, and their employer-sponsored coverage was often nothing to write home about. They might well have been better off on the exchanges or Medicaid.

One can imagine a version of the ACA that simply embraced this dynamic, moving millions of low-wage workers into heavily subsidized individual coverage–and that version would likely have been more progressive. It would also have been significantly more disruptive and costly. Instead, the ACA dealt with this problem primarily through the “employer mandate,” which required employers with over 50 employees to offer coverage or pay a significant penalty. Smaller employers were exempt, but the law also reformed the “small group” insurance market in which these firms purchased insurance, requiring community rating for these plans, which succeeded–for a time, at least–in preventing a looming death spiral in that market.

On its own terms, this general strategy was a success. The ACA insured millions of people (by buying them insurance) while avoiding “dumping.” The share of non-elderly Americans in employer coverage, which fell nearly 10 percentage points between 1999 and 2011, rose slightly as the economy recovered from the Great Recession and has remained quite steady ever since. While over 8% of Americans remain uninsured, progressives should not mistake this for a fundamental limitation in the ACA framework: many of the uninsured are in states that haven’t expanded Medicaid, or are eligible for coverage but not enrolled, or fall into various immigrant groups not covered by the law. Aggressive state action on enrollment and uptake within the ACA framework and a commitment to covering immigrants out of state funds could reduce uninsured rates to the vanishing point.

The Unfinished Business of Cost Control

What of cost control? There was one radical cost-control proposal on the table: the much-misunderstood “public option,” which in its original form would have introduced into the marketplace a public plan paying Medicare prices. This would effectively have imported public cost control into the private market, forcing private insurers to either slash their own payments to providers to Medicare levels or get out. The consequences of such a move would have upset the entire structure of the ACA; exchange insurance would have become far cheaper than employer insurance, drawing tens of millions of people out of employer insurance into the market and radically reshaping the US health insurance system. Clearly no such move was in the cards, and the public option was first modified to pay market prices (which would have defeated its purpose), then dropped entirely.

To the extent that the ACA did anything on cost control in the individual or employer market, it addressed the issue by inviting employers to make their insurance offerings worse. Employers were required to offer some form of insurance, but the standard for that insurance was very low indeed; employees could be charged up to nearly 10% of their income in premiums for coverage with high deductibles and extensive cost-sharing. More ambitiously, the ACA attempted to fulfill a longstanding bipartisan dream of healthcare policy wonks by rolling back the tax subsidy for employer-based insurance; the so-called “Cadillac tax” would have revoked the subsidy initially only for the most generous employer insurance, but would over time have come to apply to most insurance. This effort corresponded to a long-held belief in the healthcare policy community that the tax subsidy encouraged employers to offer excessively generous coverage, and that this coverage in turn drove US healthcare costs.

Charitably, these design choices represented an effort at cost control through the “skin in the game” strategy: when required to pay a larger portion of their healthcare costs, Americans would be less likely to go to the doctor just for fun. Less charitably, they were an invitation for employers to control their own healthcare costs, at any rate, by shifting a growing share of those costs onto employees. This safety valve was crucial, since employers would no longer be able to limit their costs as they had in the past, by dropping coverage.

The Unfinished Business of the ACA and the Coming Crisis in Employer Insurance

As I said above, the ACA worked on its own terms: the law actually passed, it greatly expanded coverage by providing government subsidies for those locked out of the employer market, and it did so without causing massive outflows or disruptions in employer insurance. The strategy of expanding coverage without controlling costs was effective.

But that was over a decade ago, and costs have continued to rise. The ACA left employer-based insurance untouched at the heart of the US healthcare system, without resolving the inflationary pressure in the employer market. This pressure continues to grow. The average premium for employer-sponsored individual coverage has risen by roughly 75 percent, from $4,824 in 2009 to $8,435 in 2023; for family coverage the 2023 figure is $23,968.

How have employers responded? First and foremost by shifting a growing share of medical costs onto their employees. Worker contributions to premium payment, although capped at around 9% of worker income by the ACA, have grown in tandem with total premiums. At the same time, so-called “cost sharing” in US health insurance takes many forms and is difficult to measure, but the simplest proxy–the annual deductible–has nearly tripled in nominal terms since the advent of the ACA, from $533 in 2009 to $1,568 in 2023, with workers at small firms paying $2,138. As recently as 2006, 45% of workers faced no deductible for their coverage; that figure is now less than 10%. Many workers face significant cost-sharing in the form of “coinsurance” even after they hit their deductibles; it is common for a worker to owe 20% of hospital costs up to an out-of-pocket max that can be well north of $10,000. The growth of cost-sharing is the major contributor to a growing medical debt crisis, as hospitals attempt to collect from patients who can’t pay despite having insurance.

It is important to note that cost-sharing has restrained premium increases; if employers had had to hold cost-sharing constant, premiums would have grown even faster. This strategy is quickly approaching its limits, however; for actuarial reasons, further increases in deductibles will yield diminishing returns in premium savings, and at some point employers will run up against even the ACA’s fairly low bar on coverage quality. These limits are already being reached in the low-wage labor market.

Where will employers turn next? One possibility is to skirt the limits of the ACA’s employer mandate–for example by offering plans that count as “minimum essential coverage” under the ACA but do not meet the ACA’s “minimum value” requirements because they leave employees with enormous out-of-pocket expenses. An employee misinformed enough to enroll in such coverage is effectively uninsured, but the employer pays only part of the penalty for not offering insurance. Another option is so-called “reference-based pricing” schemes, which do not have networks and do not negotiate prices with providers, instead paying a low standard rate for care. Employees with this kind of coverage may find most providers unwilling to treat them and may be “balance billed” for enormous amounts of money when they do receive care.

As a last resort–particularly if the loopholes I just described are closed by regulators, which they should be and which provider lobbies will demand that they be–some employers may choose to simply drop coverage and pay the penalty. The ACA’s employer mandate penalties are significant, but they’re not prohibitive; if premiums continue rising there will come a point when they’re cheaper than offering insurance. If this happens, employees will have no choice but to seek insurance on the individual market or (if they’re poor enough) enroll in Medicaid.

A tight labor market has limited these dynamics so far, but the next recession may prove a turning point. At that point, the dam the ACA set up to prevent employers from “dumping” employees into publicly subsidized coverage will have broken.

Progressive Strategy for the Next Healthcare Crisis

As employer insurance begins to unravel around the edges, progressives will be tempted to step in and save it. They should think twice before doing so. There’s a lot to be said for a situation in which a growing share of Americans receive health insurance through Medicaid and through public subsidy on the ACA exchanges.

Medicaid and (especially) the ACA exchange have their problems, but they already offer better and more affordable insurance than low-end employer plans, and more importantly their problems are far easier to fix than the problems of the employer market. If Medicaid pays too little to providers and has too few providers, its reimbursement rates can be raised. If ACA exchange insurance is too expensive, that insurance can be subsidized, at both the state and federal level. If exchange insurance has high cost-sharing and inadequate networks, states and the federal government have full power to set standards in these markets. Perhaps most importantly, states have proven quite effective at controlling costs for the non-elderly Medicaid population, and could do the same for the exchange population, as recent state experiments with so-called “public options” in Washington, New Mexico and elsewhere demonstrate. States can even find ways to expand Medicaid-like coverage for working-class people, as New York and Minnesota already do through Basic Health Plan programs.

All these policy aims are far more easily achieved in a single, centralized individual market than in the fragmented and opaque employer market–and they free policymakers from a sharp tradeoff where raising standards for working-class insurance coverage imposes costs on businesses or causes them to drop coverage. Non-employer insurance also offers far better opportunities for state-level policymaking than does the employer marketplace, since states are virtually banned from regulating employer insurance under ERISA. If ambitious healthcare reform is blocked at the federal level for the foreseeable future, progressives have ample opportunity to experiment with such reform in the states.

What would such an agenda look like? At the federal level, the Biden administration can likely raise the bar on employer insurance through regulatory action, taking a closer look at whether employer insurance meets “minimum essential coverage” and especially “minimum value” standards and whether employers are appropriately informing employees of their rights. Setting clearer minimum standards on employer insurance will cause some employers to stop offering it–and instead of fighting that dynamic, progressives should focus on ensuring that their employees have good options elsewhere, by instituting or expanding Basic Health Plan and Medicaid buy-in options, increasing subsidies and standards on state and federal exchanges, and implementing robust public options wherever possible.

Even if successful, this strategy wouldn’t spell the end of employer insurance overnight. 59% of non-elderly Americans receive insurance through their or their family’s employer; that’s a lot of people, and it would still be a lot of people even if employers began to drop coverage. But it’s easy to imagine a virtuous cycle where, as Medicaid and individual market populations grow, a large and diverse constituency grows for improving them. In the long run, the prospects for truly universal healthcare might be far better than they are today. 


Guest Post from Max Sawicky

by Max Sawicky

Hillary’s getting a huge free ride on her purported mastery of the mechanics of policy, in contrast to Bernie. I decided to look into just one of her campaign initiatives. She likes to throw around the phrase “universal child care” or “universal pre-K.” But she isn’t proposing universal either. She’s proposing new money for pre-K, which is fine, but a) false advertising, and b) it’s not clear how it would “work.”

Google “Hillary universal child care.” The first thing you get is one of her web pages. On it we are told “Her proposal would work to ensure that every 4-year old in America has access to high-quality preschool in the next 10 years. It would do so by providing new federal funding for states that expand access to quality preschool for all four-year olds.” There’s nothing else from the campaign as far as I looked — 4 or 5 pages of google results.

“Would work to” means “won’t” in this context.

Most of the page is about how great preschool will be, for those who get it, and I’d be the first to agree that it would be. Think Progress informs us, again under that “universal” headline, that HRC also favors a “middle class tax cut” to help parents pay for childcare. To be clear, both of these initiatives deserve praise and point in the right directions.

The rub is that they are no more specific or rigorously motivated than the Sanders proposals that people have been blathering about.

On the strength of rousing approval by a compliant Congress unavailable to Bernie, HRC would supposedly provide a grant to those same evil state governments who couldn’t be trusted to implement single-payer, under a defunct Sanders proposal. Who could say whether the results would be “universal”? Is the money adequate, assuming full participation by the states? Is there anything that would prevent them substituting the money for their own limited programs? These are the usual questions applying to grants-in-aid. There are no wonky answers on her web site.

A published journalist of my acquaintance thinks the page is a real policy proposal, rather than an advertisement. She couldn’t tell the difference. She thought I was talking about a ‘brief summary’ of the proposal and gave me a link to what I was going on about, which actually IS a brief summary.

Note that bumping up Head Start does not get you to universal either. It’s fine, but Head Start is a tiny program, relative to the relevant population.

How to “pay for it”? Forget it. They don’t say, not that I care. All the critics of “unpaid-for” single-payer BernieCare evidently don’t care either. Criticisms of Sanders’ vagueness on policy can be applied to HRC as well, if one delves just a little bit.

I look forward to all the deep-dive analyses of HRC’s projected path to universal health care coverage. Are there any? Why not? Because Hillary advocates are too busy blathering about Bernie. Those with policy expertise don’t apply it to Hillary’s treacle.

Guest Post on Portugal

(In comments to one of the posts here on Greece, Tiago Lemos Peixoto posted some observations on the situation in Portugal. In response to some questions from me, he sent the message below. We don’t hear much in the US about the political-economic situation in Portugal, and I thought Tiago’s discussion was interesting enough to post on the front page. I’ve posted his original comment first, and then the followup. My questions to him are in italics. JWM.)

I am a 37-year-old Portuguese, born merely 4 years after the end of a semi-fascist dictatorship that had compounded our already peripheral geographical position with political isolation. We joined the EEC 10 years after that, while still trying to recover from the aftermath of the democratic transition, on the promise of cooperation, solidarity, prosperity, and a helping hand in bringing our economy up to European standards of living.

Now, let’s forget for a while the fact that such a thing never happened. Instead of modernizing us and improving our competitiveness, the EEC in fact brought production quotas to our industries and agriculture, and effected a great many deals that turned out to be unilateral. One example out of many would be our milk and dairy production, which is capable of vast output but which, due to said quotas, is to this day often destroyed and wasted so that we don’t outperform.

We didn’t readily see that, however. Along with our joining in 1986 came Community funds, which were generously abundant, almost trickle-down. Why would we think about our lack of competitiveness when Europe paid our producers subsidies to cut down excess? Who could really complain when grand and bloated artistic endeavours and ambitious new infrastructure were being developed with Community funds? Or about the tearing down of the old borders, the freedom to move anywhere within the Schengen space? After the years of dictatorship and extreme poverty, after all that crippling isolation, it seemed like the dawn of a new age. And there are striking parallels to Greece here as well, Greece also being a peripheral country that survived a period of dictatorship.

Add to that the promise of the €. Now, that one was harder to sell, since it actually doubled a lot of prices, but hey, all seemed to be going so well, surely it would all adjust soon, and this was just another step toward a unified Europe of progress.

That’s what we’ve been pretty much conditioned to believe, and that is the mindset that made people equate the European project with prosperity. For the Greeks, it’s very likely that any Greek between 50 and 60 grew up in a dictatorship, and the same goes for every Portuguese between 40 and 90. Adults like myself grew up with the European Union and its promise of a better world and barely know any world outside of it. And Community funds, even if they’re almost universally badly applied, worked like a Skinner box of sorts.

And now, they threaten us, the adults who barely know a world outside EEC/EU, the young adults who grew up on the possibilities given by Erasmus projects, the older people who grew up or lived through overt dictatorships, that all this can go away. And that with it comes fire and brimstone, that we HAD to join the € because our currency was too weak (and what better argument to NOT join the Euro?) and would be even more worthless if we were to readopt it. They throw the ghosts of market anathema at us, “leaving the Euro would be catastrophic, the markets react just from your mere mention of it, you really want to go back to those bad old days? Do you really want to be out of the cool kids’ club, and have no EU funding for your arts association anymore?”

That is why we fear the alternative, even as we begin to see the bars that hold our gilded cage.

Is there any kind of organized challenge to austerity there? Is there any kind of left opposition party?

Well, there is, and there isn’t. The most consistent and organized opposition is the Portuguese Communist Party (PCP), which commands a respectable position in parliament (about 11%). What’s more significant, the PCP vote has been relatively unchanged since the 1974 revolution, so they’re a familiar presence in our democracy, and relatively well respected outside their own circles for having been the “vanguard” against Salazar’s regime. They’re incredibly well organized and have the support of the majority of the unions. They’re also somewhat less “orthodox” than something like the KKE, for the most part. However, despite all this, their voting base is incredibly stable, for better and for worse.

Then there’s the Bloco de Esquerda (Left Bloc, or BE), which could be seen as our own Syriza: a broad front of minor leftist parties from different traditions, disgruntled communists and modern socialists, formed 15 years ago and for a while on the rise in parliament. They took a huge hit in the last elections, though, through a combination of leadership bickering, the lack of a cohesive message, and the center-left Socialist Party (PS) being able to sideline them by taking up BE’s social agenda on issues like women’s and LGBT rights. Many saw BE as a bit redundant after PS pushed those issues through parliament. They managed around 12% at their best, and got 5-6% in the last election.

And that’s the extent of organized opposition. BE and PCP do work together, both in parliament and on the streets, a lot better than their counterparts in other countries, and are a lot closer than, say, KKE and Syriza. But whereas PCP’s strength and weakness both lie in the rigidity of its discourse, BE has failed to have an open discussion on issues like debt restructuring.

Social movements were strong for a while in 2010-2012, and have been an ineffectual, incohesive mess since then, I’m sad to say. There was a strong grassroots response to the austerity packages, with absolutely gigantic demonstrations occurring in 2011-2012. But there was also a failure to plainly articulate a political alternative, as well as the prevalent “sibling rivalries” that so often fracture these kinds of movements. The government and mainstream media narrative effectively pushed back with Thatcher’s “TINA”; it’s not so much that people bought the narrative as that they fail to see anyone presenting anything else.

Dairy has been an issue in Greece too, it’s specifically mentioned in the new agreement. It seems as tho the crude mercantilist arguments, which I think mostly miss the larger picture, really may be the story in agriculture.

There is an apparent mercantilist side to the way the EU is constructed. In many ways, it’s a paradox: its current ruling minds come from the neoliberal school of Thatcher and Reagan, but the actual construct is almost neo-colonial, with very apparent peripheries and semi-peripheries forming around the German epicenter. Any analysis of trade relations will show Germany as either the no. 1 or no. 2 exporter to other European countries: Spain, Italy, Greece and Portugal all have Germany among their top two import sources. Which is, I believe, all the more significant when you cross that data with the prevalence of production quotas.

Though I’d argue that in Portugal’s specific case, these quotas end up helping Spain, especially in terms of food products. Germany is our second-largest import source, but we import a staggering 27% from Spain alone. Now, though I’m no economist and my field is history, I do believe this is a rabbit hole worth chasing.

Has there been significant liberalization in Portugal over the past five years? Rolling back of labor laws, weakening of unions, cutting back pensions, etc.? In other words, has the crisis worked?

It has. Completely. Nearly everything that could be privatized, including energy, our airports, our national airline, our telecommunications grids, our energy infrastructure and our mail service, has been sold off as we speak. It’s worth mentioning that, while it could be argued that there were severe deficiencies in the management of these companies, nearly all of them actually turned a profit. So selling off, say, our energy company for €3 billion 5 years ago stops being an impressive feat when we see that the company’s profits were about €1 billion a year, and we’d have €2 billion more at our disposal by not selling.

Collective bargaining took a major hit. Most new jobs are being created on a temporary basis. People have in fact been “temps” without social welfare benefits for more than ten years now in many cases (this did not start with the crisis, but it did speed up considerably). Others work under weekly renewable temp contracts, which can drag on for months. Our minimum wage is €505/month net, but many make less than that by working 6.5-hour “part time” jobs for €300-400. Any company can hire you and sign temp contracts with you for a period of 3 years during which the rules are more or less “at will”; firing you sometime during those 3 years, waiting a couple of months and then calling you back to rehire you (obviously resetting that 3-year grace period) is a very common occurrence. Unpaid internships are the norm, with some of them taking surreal forms: unpaid internships for bartenders or hairdressers are things we see on job websites and in the classified ads.
Unemployment is predictably high, currently at 14% after an 18% peak. We should account for the fact that Portugal measures unemployment by the number of people enrolled in our unemployment centers. These are ineffective to Kafkaesque levels, often summoning you by letter to interviews that have already happened by the time you receive the letter, at which point you’re out of the center (and the stats) and have to justify your absence or wait four months before you can enroll again. Add to those numbers the fact that half a million people left the country in the last 5 years. In a country of 10 million, that means 5% of our population, most of them college-educated youths between 20 and 35, is absent from the active population.

But despite all that, despite the 6% GDP contraction, or the fact that the austerity measures actually sent the debt skyrocketing from 95% to 130% of GDP, we’re told (in an election year, no less) that we can rest assured in the fact that we’re not Greece (which, if one wants to argue that we’ve been put under a slow burn as opposed to Greece’s scorched earth, is a point I do concede), and that “though the Portuguese are worse off, the country is better” (an actual quote from Luis Montenegro, parliamentary leader of the majority party), and that the worst has passed, and austerity is a thing of the past… though they need to make an extra €500 million in pension cuts this year alone.

Fukushima Update: How Safe Can a Nuclear Meltdown Get?

by Will Boisvert

Last summer I posted an essay here arguing that nuclear power is a lot safer than people think—about a hundred times safer than our fossil fuel-dominated power system. At the time I predicted that the impact of the March, 2011 Fukushima Daiichi nuclear plant accident in Japan would be small. A year later, now that we have a better fix on the consequences of the Fukushima meltdowns, I’ll have to revise “small” to “microscopic.” The accumulating data and scientific studies on the Fukushima accident reveal that radiation doses are and will remain low, that health effects will be minor and imperceptible, and that the traumatic evacuation itself from the area around the plant may well have been unwarranted. Far from the apocalypse that opponents of nuclear energy anticipated, the Fukushima spew looks like a fizzle, one that should drastically alter our understanding of the risks of nuclear power.

Anti-nuke commentators like Arnie Gundersen continue to issue forecasts of a million or more long-term casualties from Fukushima radiation. (So far there have been none.) But the emerging scientific consensus is that the long-term health consequences of the radioactivity, particularly cancer fatalities, will be modest to nil. At the high end of specific estimates, for example, Princeton physicist Frank von Hippel, writing in the nuke-dreading Bulletin of the Atomic Scientists, reckons an eventual one thousand fatal cancers arising from the spew.

Now there’s a new peer-reviewed paper by Stanford’s Mark Z. Jacobson and John Ten Hoeve that predicts remarkably few casualties. (Jacobson, you may remember, wrote a noted Scientific American article proposing an all-renewable energy system for the world.) They used a supercomputer to model the spread of radionuclides from the Fukushima reactors around the globe, and then calculated the resulting radiation doses and cancer cases through the year 2061. Their result: a probable 130 fatal cancers, with a range from 15 to 1300, in the whole world over fifty years. (Because radiation exposures will have subsided to insignificant levels by then, these cases comprise virtually all that will ever occur.) They also simulated a hypothetical Fukushima-scale meltdown of the Diablo Canyon nuclear power plant in California, and calculated a likely cancer death toll of 170, with a range from 24 to 1400.

To put these figures in context, pollution from American coal-fired power plants alone kills about 13,000 people every year. The Stanford estimates therefore indicate that the Fukushima spew, the only significant nuclear accident in 25 years, will likely kill fewer people over five decades than America’s coal-fired power plants kill every five days to five weeks. Worldwide, coal plants kill over 200,000 people each year—150 times more deaths than the high-end Fukushima forecasts predict over a half century.
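For readers who want to check that comparison, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures quoted above (13,000 US and 200,000 worldwide coal deaths per year, and the Stanford central and high-end Fukushima projections); nothing here is new data.

```python
# Back-of-the-envelope check of the coal comparison, using the article's figures only.
US_COAL_DEATHS_PER_YEAR = 13_000
WORLD_COAL_DEATHS_PER_YEAR = 200_000
FUKUSHIMA_CENTRAL, FUKUSHIMA_HIGH = 130, 1_300   # projected deaths over ~50 years

us_coal_deaths_per_day = US_COAL_DEATHS_PER_YEAR / 365   # ~35.6 deaths per day

# How many days of US coal-plant deaths equal each Fukushima projection?
for label, deaths in [("central estimate", FUKUSHIMA_CENTRAL), ("high end", FUKUSHIMA_HIGH)]:
    days = deaths / us_coal_deaths_per_day
    print(f"{label}: {deaths} deaths ~ {days:.0f} days ({days / 7:.1f} weeks) of US coal pollution")

# One year of worldwide coal deaths vs. the high-end Fukushima total over 50 years
print(f"worldwide coal, one year / Fukushima high end: {WORLD_COAL_DEATHS_PER_YEAR / FUKUSHIMA_HIGH:.0f}x")
```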

We’ll probably never know whether these projected Fukushima fatalities come to pass or not. The projections are calculated by multiplying radiation doses by standard risk factors derived from high-dose exposures; these risk factors are generally assumed—but not proven—to hold up at the low doses that nuclear spews emit. Radiation is such a weak carcinogen that scientists just can’t tell for certain whether it causes any harm at all below a dose of 100 millisieverts (100 mSv). Even if it does, it’s virtually impossible to discern such tiny changes in cancer rates in epidemiological studies. Anti-nukes give that fact a paranoid spin by warning of “hidden cancer deaths.” But if you ask me, risks that are too small to measure are too small to worry about.

The Stanford study relied on a computer simulation, but empirical studies of radiation doses support the picture of negligible effects from the Fukushima spew.

In a direct measurement of radiation exposure, officials in Fukushima City, about 40 miles from the nuclear plant, made 37,000 schoolchildren wear dosimeters around the clock during September, October and December, 2011, to see how much radiation they soaked up. Over those three months, 99 percent of the participants absorbed less than 1 mSv, with an average external dose of 0.26 mSv. Doubling that to account for internal exposure from ingested radionuclides gives an annual dose of 2.08 mSv. That’s a pretty small dose, about one third the natural radiation dose in Denver, with its high altitude and abundant radon gas, and many times too small to cause any measurable up-tick in cancer rates. At the time, the outdoor air-dose rate in Fukushima was about 1 microsievert per hour (or about 8.8 mSv per year), so the absorbed external dose was only about one eighth of the ambient dose. That’s because the radiation is mainly gamma rays emanating from radioactive cesium in the soil, which are absorbed by air and blocked by walls and roofs. Since people spend most of their time indoors at a distance from soil—often on upper floors of houses and apartment buildings—they are shielded from most of the outdoor radiation.
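The dose arithmetic in that paragraph can be reproduced in a few lines. This is a sketch using only the figures just quoted (an average external dose of 0.26 mSv over roughly three months, a doubling to allow for internal exposure, and an ambient outdoor rate of about 1 microsievert per hour):

```python
# Dose arithmetic from the Fukushima City dosimeter survey described above.
avg_external_3mo_mSv = 0.26                  # average external dose over ~3 months
annual_external_mSv = avg_external_3mo_mSv * 4
annual_total_mSv = annual_external_mSv * 2   # doubled to allow for internal exposure

ambient_mSv_per_year = 1e-3 * 24 * 365       # 1 microsievert/hour expressed per year

print(f"annualized external dose: {annual_external_mSv:.2f} mSv")      # ~1.04
print(f"external + internal:      {annual_total_mSv:.2f} mSv")         # ~2.08
print(f"ambient outdoor dose:     {ambient_mSv_per_year:.1f} mSv/yr")  # ~8.8
print(f"absorbed/ambient ratio:   {annual_external_mSv / ambient_mSv_per_year:.2f}")  # ~1/8
```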

Efforts to abate these low-level exposures will be massive—and probably redundant. The Japanese government has budgeted $14 billion for cleanup over thirty years and has set an immediate target of reducing radiation levels by 50 percent over two years. But most of that abatement will come from natural processes—radioactive decay and weathering that washes radio-cesium deep into the soil or into underwater sediments, where it stops irradiating people—that will reduce radiation exposures on their own by 40% over two years. (Contrary to the centuries-of-devastation trope, cesium radioactivity clears from the land fairly quickly.) The extra 10 percent reduction the cleanup may achieve over two years could be accomplished by simply doing nothing for three years. Over 30 years the radioactivity will naturally decline by at least 90 percent, so much of the cleanup will be overkill, more a political gesture than a substantial remediation. Little public-health benefit will flow from all that, because there was little radiation risk to begin with.

How little? Well, an extraordinary wrinkle of the Stanford study is that it calculated the figure of 130 fatal cancers by assuming that there had been no evacuation from the 20-kilometer zone around the nuclear plant. You may remember the widely televised scenes from that evacuation, featuring huddled refugees and young children getting wanded down with radiation detectors by doctors in haz-mat suits. Those images of terror and contagion reinforced the belief that the 20-km zone is a radioactive killing field that will be uninhabitable for eons. The Stanford researchers endorse that notion, writing in their introduction that “the radiation release poisoned local water and food supplies and created a dead-zone of several hundred square kilometers around the site that may not be safe to inhabit for decades to centuries.”

But later in their paper Jacobson and Ten Hoeve actually quantify the deadliness of the “dead-zone”—and it turns out to be a reasonably healthy place. They calculate that the evacuation from the 20-km zone probably prevented all of 28 cancer deaths, with a lower bound of 3 and an upper bound of 245. Let me spell out what that means: if the roughly 100,000 people who lived in the 20-km evacuation zone had not evacuated, and had just kept on living there for 50 years on the most contaminated land in Fukushima prefecture, then probably 28 of them—and at most 245—would have incurred a fatal cancer because of the fallout from the stricken reactors. At the very high end, that’s a fatality risk of 0.245 %, which is pretty small—about half as big as an American’s chances of dying in a car crash. Jacobson and Ten Hoeve compare those numbers to the 600 old and sick people who really did die during the evacuation from the trauma of forced relocation. “Interestingly,” they write, “the upper bound projection of lives saved from the evacuation is lower than the number of deaths already caused by the evacuation itself.”
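For concreteness, the same numbers can be restated as an individual risk. A minimal sketch, using the article's figures (a 20-km-zone population of roughly 100,000 and the 3/28/245 range of prevented deaths):

```python
# Evacuation-zone projections re-expressed as individual lifetime risk.
EZ_POPULATION = 100_000   # approximate population of the 20-km zone
prevented_deaths = {"lower bound": 3, "central estimate": 28, "upper bound": 245}

for label, deaths in prevented_deaths.items():
    risk_percent = 100 * deaths / EZ_POPULATION
    print(f"{label}: {deaths} deaths over ~50 years -> {risk_percent:.3f}% per resident")
```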

That observation sure is interesting, and it raises an obvious question: does it make sense to evacuate during a nuclear meltdown?

In my opinion—not theirs—it doesn’t. I don’t take the Stanford study as gospel; its estimate of risks in the EZ strikes me as a bit too low. Taking its numbers into account along with new data on cesium clearance rates and the discrepancy between ambient external radiation and absorbed doses, I think a reasonable guesstimate of ultimate cancer fatalities in the EZ, had it never been evacuated, would be several hundred up to a thousand. (Again, probably too few to observe in epidemiological studies.) The crux of the issue is whether immediate radiation exposures from inhalation outweigh long-term exposures emanating from radioactive soil. Do you get more cancer risk from breathing in the radioactive cloud in the first month of the spew, or from the decades of radio-cesium “groundshine” after the cloud disperses? Jacobson and Ten Hoeve’s model assigns most of the risk to the cloud, while other calculations, including mine, give more weight to groundshine.

But from the standpoint of evacuation policy, the distinction may be moot. If the Stanford model is right, then evacuations are clearly wrong—the radiation risks are trivial and the disruptions of the evacuation too onerous. But if, on the other hand, cancer risks are dominated by cesium groundshine, then precipitate forced evacuations are still wrong, because those exposures only build up slowly. The immediate danger in a spew is thyroid cancer risk to kids exposed to iodine-131, but that can be counteracted with potassium iodide pills or just by barring children from drinking milk from cows feeding on contaminated grass for the three months it takes the radio-iodine to decay away. If that’s taken care of, then people can stay put for a while without accumulating dangerous exposures from radio-cesium.

Data from empirical studies of heavily contaminated areas support the idea that rapid evacuations are unnecessary. The Japanese government used questionnaires correlated with air-dose readings to estimate the radiation doses received in the four months immediately after the March meltdown in the townships of Namie, Iitate and Kawamata, a region just to the northwest of the 20-kilometer exclusion zone. This area was in the path of an intense fallout plume and incurred contamination comparable to levels inside the EZ; it was itself evacuated starting in late May. The people there were the most irradiated in all Japan, yet even so the radiation doses they received over those four months, at the height of the spew, were modest. Out of 9747 people surveyed, 5636 got doses of less than 1 millisievert, 4040 got doses between 1 and 10 mSv and 71 got doses between 10 and 23 mSv. Assuming everyone was at the high end of their dose category and a standard risk factor of 570 cancer fatalities per 100,000 people exposed to 100 mSv, we would expect to see a grand total of three cancer deaths among those 10,000 people over a lifetime from that four-month exposure. (As always, these calculated casualties are purely conjectural—far too few to ever “see” in epidemiological statistics.)
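The "three cancer deaths" figure follows directly from the assumptions stated above (everyone at the top of their dose band, and a risk factor of 570 fatal cancers per 100,000 people per 100 mSv). A short sketch of that calculation:

```python
# Expected fatal cancers from the four-month dose survey described above,
# assuming everyone sits at the top of their dose band and the stated
# risk factor of 570 fatal cancers per 100,000 people per 100 mSv.
dose_bands = [(5636, 1), (4040, 10), (71, 23)]   # (people, upper-bound dose in mSv)
risk_per_person_mSv = 570 / (100_000 * 100)      # = 5.7e-5 per person-mSv

collective_dose_person_mSv = sum(people * dose for people, dose in dose_bands)
expected_fatal_cancers = collective_dose_person_mSv * risk_per_person_mSv

print(f"collective dose: {collective_dose_person_mSv} person-mSv")   # 47,669
print(f"expected fatal cancers: {expected_fatal_cancers:.1f}")       # ~2.7, i.e. about three
```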

Those numbers indicate that cancer risks in the immediate aftermath of a spew are tiny, even in very heavily contaminated areas. (Provided, always, that kids are kept from drinking iodine-contaminated milk.) Hasty evacuations are therefore needless. There’s time to make a considered decision about whether to relocate—not hours and days, but months and years.

And that choice should be left to residents. It makes no sense to roust retirees from their homes because of radiation levels that will raise their cancer risk by at most a few percent over decades. People can decide for themselves—to flee or not to flee—based on fallout in their vicinity and any other factors they think important. Relocation assistance should be predicated on an understanding that most places, even close to a stricken plant, will remain habitable and fit for most purposes. The vast “costs” of cleanup and compensation that have been attributed to the Fukushima accident are mostly an illusion or the product of overreaction, not the result of any objective harm caused by radioactivity.

Ultimately, the key to rational policy is to understand the kind of risk that nuclear accidents pose. We have a folk-conception of radiation as a kind of slow-acting nerve gas—the merest whiff will definitely kill you, if only after many years. That risk profile justifies panicked flight and endless quarantine after a radioactivity release, but it’s largely a myth. In reality, nuclear meltdowns present a one-in-a-hundred chance of injury. On the spectrum of threat they occupy a fairly innocuous position: somewhere above lightning strikes, in the same ballpark as driving a car or moving to a smoggy city, considerably lower than eating junk food. And that’s only for people residing in the maximally contaminated epicenter of a once-a-generation spew. For everyone else, including almost everyone in Fukushima prefecture itself, the risks are negligible, if they exist at all.

Unfortunately, the Fukushima accident has heightened public misunderstanding of nuclear risks, thanks to long-ingrained cultural associations of fission with nuclear war, the Japanese government’s hysterical evacuation orders and haz-mat mobilizations, and the alarmism of anti-nuke ideologues. The result is anti-nuclear back-lash and the shut-down of Japanese and German nukes, which is by far the most harmful consequence of the spew. These fifty-odd reactors could be brought back on line immediately to displace an equal gigawattage of coal-fired electricity, and would prevent the emission of hundreds of millions of tons of carbon dioxide each year, as well as thousands of deaths from air pollution. But instead of calling for the restart of these nuclear plants, Greens have stoked huge crowds in Japan and elsewhere into marching against them. If this movement prevails, the environmental and health effects will be worse than those of any pipeline, fracking project or tar-sands development yet proposed.

But there may be a silver lining if the growing scientific consensus on the effects of the Fukushima spew triggers a paradigm shift. Nuclear accidents, far from being the world-imperiling crises of popular lore, are in fact low-stakes, low-impact events with consequences that are usually too small to matter or even detect. There’s been much talk over the past year about the need to digest “the lessons of Fukushima.” Here’s the most important and incontrovertible one: even when it melts down and blows up, nuclear power is safe.