At Barron’s: Thank Full Employment, Not AI, for Rising Productivity

(I write a monthly opinion piece for Barron’s. This one was published there in September. My previous pieces are here.)

The new productivity data are among the best recorded in recent years. That's good news for economic growth. But just as important, they offer support for the unorthodox idea that demand shapes the economy's productive potential. Taking this idea seriously would require us to rethink much conventional wisdom on macroeconomic policy.

Real output per hour grew 2.6% in 2023, according to the Bureau of Labor Statistics, exceeding the highest rates seen between 2010 and the eve of the pandemic. That said, productivity is one of the most challenging macroeconomic outcomes to measure. It is constructed from three distinct series—nominal output, prices, and hours worked. Short-term movements often turn out to be noise. It's an open question whether that high rate will be sustained. But if it is, that will tell us something important about economic growth.
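
To make the measurement point concrete, here is a minimal sketch, using invented numbers rather than actual BLS data, of how a labor-productivity growth rate is built from the three underlying series:

```python
# Illustrative only: these figures are invented, not BLS data.
nominal_output = {2022: 25_000, 2023: 26_900}  # billions of dollars
price_index    = {2022: 1.00,   2023: 1.04}    # output deflator, 2022 = 1.00
hours_worked   = {2022: 250,    2023: 252}     # billions of hours

def output_per_hour(year):
    """Real output per hour = (nominal output / price index) / hours worked."""
    real_output = nominal_output[year] / price_index[year]
    return real_output / hours_worked[year]

growth = output_per_hour(2023) / output_per_hour(2022) - 1
print(f"Productivity growth: {growth:.1%}")  # roughly 2.6% with these made-up inputs

# A revision to any one of the three underlying series changes the result,
# which is one reason short-run productivity numbers are so noisy.
```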

Discussions of productivity growth tend to treat it as the result of unpredictable scientific breakthroughs and new technologies, whose appearance has nothing to do with current economic conditions. This view of technological change as “exogenous,” in the jargon, is entrenched in economics textbooks. And it’s reinforced by the self-mythologizing barons of Silicon Valley, who are only too happy to take credit for economic good news. 

The economic conditions that lead companies to actually adopt new technologies get much less attention, as does the fact that much productivity growth comes from people shifting from lower-value to higher-value activities without the need for any new technology at all.

A recent New York Times article is typical. It discusses faster productivity growth almost entirely in terms of the new technologies — AI, Zoom, internet shopping — that might, or might not, be contributing. Not until 40 paragraphs in is there a brief mention of the strong labor market, and the incentives that rising wages create to squeeze more out of each hour of labor.

What if we didn’t treat this as an afterthought? There’s a case to be made that demand is, in fact, a central factor in productivity growth. 

The economic historian Gavin Wright has made this case for both the 1990s — the modern benchmark for productivity success stories — and the 1920s, an earlier period of rapid productivity growth and technological change. Wright considers the adoption of general-purpose technologies: electricity in the 1920s and computers in the 1990s. Both had existed for some time but weren't widely adopted until rising labor costs provided the right incentives. He observes that in both periods strong wage growth started before productivity accelerated.

In the retail sector, for instance, it was in the 1990s that IT applications like electronic monitoring of shelf levels, barcode scanning and electronic payments came into general use. None of these technologies were new at the time; what had changed was the tight market for retail employment that made automation worthwhile.

The idea that demand can have lasting effects on the economy's productive potential — what economists call hysteresis — has gotten attention in recent years. Discussions of hysteresis tend to focus on labor supply — people dropping out of the labor market when jobs are scarce, and re-entering when conditions improve. The effect of demand on productivity is less often discussed. But it may be even more important.

After the 2007-2009 recession, gross domestic product in the U.S. (and most other rich countries) failed to return to its pre-recession trend. By 2017, a decade after the recession began, real GDP was a full 10% below what prerecession forecasters had expected. There is wide agreement that much, if not all, of this shortfall was the result of the collapse of demand in the recession. Former Treasury Secretary Larry Summers at the time called the decisive role of demand in the slow growth of the 2010s a matter of “elementary signal identification.” 

Where did the shortfall come from? The CBO's last economic forecast before the recession predicted 6% growth in employment between 2007 and 2017. As it turned out, employment over those ten years grew by exactly 6%. The entire gap between actual GDP and the CBO's pre-recession forecast came from slower growth in output per worker. In other words, the shortfall was entirely due to lower productivity.
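
In rough growth-accounting terms, GDP growth is approximately the sum of employment growth and growth in output per worker. Here is a back-of-the-envelope sketch of that arithmetic, using the figures cited above:

```python
# Back-of-the-envelope decomposition; the 10% and 6% figures are the ones
# cited in the text, and the approximation ignores small interaction terms.
gdp_shortfall          = 0.10  # actual 2017 GDP vs. pre-recession forecast
employment_forecast    = 0.06  # CBO's pre-recession forecast, 2007-2017
employment_actual      = 0.06  # actual employment growth, 2007-2017

employment_shortfall   = employment_forecast - employment_actual   # 0.00
productivity_shortfall = gdp_shortfall - employment_shortfall      # 0.10

print(f"Shortfall from slower employment growth: {employment_shortfall:.0%} of GDP")
print(f"Shortfall from slower output per worker: {productivity_shortfall:.0%} of GDP")
```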

If you believe that slow growth in the 2010s was largely due to the lingering effects of the recession — and I agree with Summers that the evidence is overwhelming on this point — then what we saw in that decade was weak demand holding back productivity. And if depressed demand can slow down productivity growth, then, logically, we would expect strong demand to speed it up.

A few economists have consistently made the case for this link. Followers of John Maynard Keynes often discuss it under the name “Verdoorn’s law.” The law, as the Keynesian economist Matias Vernengo puts it in a new article, holds that “technical change is the result, and not the fundamental cause of economic growth.” Steve Fazzari, another Keynesian economist, has explored this idea in several recent papers. But for the most part, mainstream economists have yet to embrace it.

This perspective does occasionally make it into the world of policy debates. In a 2017 report, Josh Bivens of the Economic Policy Institute argued that “low rates of unemployment and rapid wage growth would likely induce faster productivity growth.” Skanda Amarnath and his colleagues at Employ America have made similar arguments. In a 2017 report for the Roosevelt Institute, I discussed a long list of mechanisms linking demand to productivity growth, as well as evidence that this was what explained slower growth since the recession.

If you take these sorts of arguments seriously, the recent acceleration in productivity should not be a surprise. And we don’t need to go looking for some tech startup to thank for it. It’s the natural result of a sustained period of tight labor markets and rising wages.

There are many good reasons for productivity growth to be faster in a tight labor market, as I discussed in the Roosevelt report. Businesses have a stronger incentive to adopt less labor-intensive techniques, and they are more likely to invest when they are running at full capacity. Higher-productivity firms can outbid lower-productivity ones for scarce workers. New firms are easier to start in a boom than in a slump.

When you think about it, it’s strange that concepts like Verdoorn’s law are not part of the economics mainstream. Shouldn’t they be common sense?

Nonetheless, the opposite view underlies much of policymaking, particularly at the Federal Reserve. At his most recent press conference, Fed Chair Jay Powell was asked whether he still thought that wage growth was too high for price stability. Powell confirmed that, indeed, he thought that wage gains were still excessively strong. But, he said, they were gradually moving back to levels “associated — given assumptions about productivity growth — with 2% inflation.”

The Fed’s view that price stability requires limiting workers’ bargaining power is a long-standing problem. But focus now on those assumptions. Taking productivity growth as given, unaffected by policy, risks making the Fed’s pessimism self-confirming. (This is something that Fed economists have worried about in the past.) If the Fed succeeds in getting wages down to the level consistent with the relatively slow productivity growth it expects, that itself may be what stops us from getting the faster productivity growth that the economy is capable of.

The good news is that, as I’ve written here before, the Fed is not all-powerful. The current round of rate hikes has not, so far, done much to cool off the labor market. If that continues to be the case, then we may be in for a period of sustained productivity growth and rising income.

The Future of Health Care Reform

This is a guest post by Michael Kinnucan.

The Collapsing Center and Solidifying Periphery of the US Healthcare System

Contrary to what most people on the US left might tell you, there’s nothing intrinsically impossible about building a healthcare system that provides universal coverage on the foundation of employer-sponsored insurance. Germany, France, and several other countries have done it, and we could do it too. The way you do it is to start with core-economy full-time workers and their families, and then steadily patch and regulate your way to universal coverage (“What about retirees? The unemployed? Freelancers? What happens when people change jobs? What about employers too small to offer coverage?” and so forth) until you’ve covered everyone. This kind of system will never be quite as seamless and efficient as single-payer, but it is workable.

What has made this effort uniquely difficult in the US case, however, has been the spiraling overall cost of US healthcare. Virtually all healthcare systems in the developed world–including multi-payer systems like Germany’s–are built on a firm foundation of medical price control. US observers are acutely aware of this in the case of pharmaceuticals, but the situation is similar across the healthcare industry; Germany, for instance, sets the price of physicians’ services and hospital care through regional sectoral bargaining. 

The US, for political reasons, has proven incapable of imposing similar discipline on the healthcare market. Prices are negotiated in a medical marketplace where the sellers of healthcare hold significant market power, and this process is intrinsically inflationary. This inflation has been far more intense in the employer-insurance market than in the public sector, particularly since the mid-1980s; Medicare and private-insurance prices have diverged to the point where commercial insurers pay on average 254% of Medicare for the same procedures.

This inflationary dynamic has put continuous pressure on the employer-sponsored insurance market, with large, high-margin businesses complaining about the ever-growing cost of healthcare while smaller and lower-wage businesses simply restrict or cancel coverage. Thus would-be US healthcare reformers have found themselves in the strange position of trying to “patch” marginal populations into a system centered on employer-based coverage even as the center of the system constantly threatens to collapse.

Thus, while the US public tends to equate employer-based coverage with quality and stability and to imagine healthcare reform as the process of granting new populations access to that quality and stability, in fact employer-sponsored insurance has continuously declined in quality and has occasionally threatened to tip into a death spiral as costs keep rising. And while proponents of universal healthcare tend to be motivated by the plight of those locked out of the employer-based healthcare system (the poor, the unemployed), major efforts at healthcare reform have often been driven not by the problems of these groups but by problems within the employer market.

The Cost Control Deadlock

This dynamic has shaped mainstream US healthcare reform efforts since the Carter administration. The ambition of reformers has been to simultaneously expand coverage and control costs. This double aim is frequently given a superficial fiscal gloss (coverage expansion is “paid for” through cost control), but its real logic is political. Proponents of this strategy hope to (1) use the promise of cost control (in the employer market) to guarantee business support for coverage expansion (generally through public programs), while simultaneously (2) using coverage expansion (providing more paying customers for the healthcare industry) to mitigate healthcare industry opposition to cost control (reducing aggregate payments to the healthcare industry).

The logic of this interlocking set of political bargains has proven more compelling in theory than successful in practice. More specifically, the US political system has revealed a systematic preference for simply spending more money to expand coverage without doing much to achieve cost control–particularly employer-market cost control. The healthcare lobby has shown itself to be very focused on opposing cost control and highly effective in doing so, to the point where even obviously egregious abuses that provoke nominally bipartisan opposition have taken decades to address (so-called “surprise billing,” for instance, or the blank check to pharmaceutical companies incorporated in Medicare Part D). The central political lesson of US healthcare reform efforts going all the way back to Truman is that it’s virtually impossible to pass major reform without buying off the provider lobbies.

For this reason reformers have tended to want to hide the ball on cost control, avoiding obvious and internationally well-known methods like price control and national budgeting in favor of Rube Goldberg “managed competition” and “value-based payment” schemes that are unpopular with patients, difficult for the public to understand and of questionable efficacy in any case. 

The business lobby, in turn–which reformers have for decades seen as the natural constituency for cost control–has tended to take the clear downsides of reform (higher taxes and more regulation) more seriously than the alleged upside of long-term cost control, and to put more faith in the tried-and-true method of shifting costs onto employees than in regulatory schemes to achieve savings. Business (at least big business) tends to like the idea of cost-control-oriented healthcare reform in theory, but in practice has proven a fickle ally for reformers.

Abandoning Cost Control and Achieving Coverage: The Legacy of the ACA

This situation represents a deadlock for what used to be called “comprehensive” healthcare reform, but no such deadlock applies to the far simpler project of simply using tax dollars to pay for expanded healthcare coverage. Such a strategy may face opposition from fiscal conservatives, but it is enduringly popular with the US public (who have long been committed to the idea of universal healthcare) and under the right conditions can easily win support from the healthcare lobbies (who stand to attract those public dollars). The political project of “comprehensive” healthcare reform died a famous death in 1993, but the political project of “spending public money to buy people healthcare” scored notable successes, including a steady expansion of Medicaid eligibility and the passage of CHIP during the Clinton administration and the passage of Medicare Part D under George W. Bush. 

The situation, in other words, was the very opposite of how progressives have sometimes described it–it’s not that the US political system wants universal health coverage but is too stingy to pay for it, but rather that the US political system is perfectly prepared to do universal coverage as long as no significant savings are attached.

The ACA was the culmination of this tradition. That likely wasn’t what its architects intended–healthcare wonks still dreamed of “bending the cost curve”–but it was what the law did. While the ACA is best remembered for creating the “individual market” with its famous three-legged stool, the real story is simpler: The ACA spent roughly a trillion dollars over 10 years to cover roughly 30 million people through a combination of free Medicaid and very heavily subsidized private insurance, with the funding coming not from comprehensive cost control but from tax revenue and suppression of Medicare cost increases.

[Table: From McDonough, Inside National Health Reform, p. 282.]

One wrinkle to this reform strategy was the risk of employer “dumping”: if the government was prepared to heavily subsidize working-class insurance coverage, why wouldn’t employers–and workers, for that matter–simply go where the subsidies were? This was a particularly significant risk for low-wage workers; such workers were eligible for very significant subsidies on the exchange, their employers would be eager to control costs, and their employer-sponsored coverage was often nothing to write home about. They might well have been better off on the exchanges or Medicaid.

One can imagine a version of the ACA that simply embraced this dynamic, moving millions of low-wage workers into heavily subsidized individual coverage–and that version would likely have been more progressive. It would also have been significantly more disruptive and costly. Instead, the ACA dealt with this problem primarily through the “employer mandate,” which required employers with over 50 employees to offer coverage or pay a significant penalty. Smaller employers were exempt, but the law also reformed the “small group” insurance market in which these firms purchased insurance, requiring community rating for these plans, which succeeded–for a time, at least–in preventing a looming death spiral in that market.

On its own terms, this general strategy was a success. The ACA insured millions of people (by buying them insurance) while avoiding “dumping.” The share of non-elderly Americans in employer coverage, which fell nearly 10 percentage points between 1999 and 2011, rose slightly as the economy recovered from the Great Recession and has remained quite steady ever since. While over 8% of Americans remain uninsured, progressives should not mistake this for a fundamental limitation of the ACA framework: many of the uninsured are in states that haven’t expanded Medicaid, or are eligible for coverage but not enrolled, or fall into various immigrant groups not covered by the law. Aggressive state action on enrollment and uptake within the ACA framework, plus a commitment to covering immigrants out of state funds, could reduce uninsured rates to the vanishing point.

The Unfinished Business of Cost Control

What of cost control? There was one radical cost-control proposal on the table: the much-misunderstood “public option,” which in its original form would have introduced into the marketplace a public plan paying Medicare prices. This would effectively have imported public cost control into the private market, forcing private insurers to either slash their own payments to providers to Medicare levels or get out. Such a move would have upset the entire structure of the ACA; exchange insurance would have become far cheaper than employer insurance, drawing tens of millions of people out of employer insurance into the individual market and radically reshaping the US health insurance system. Clearly no such move was in the cards, and the public option was first modified to pay market prices (which would have defeated its purpose), then dropped entirely.

To the extent that the ACA did anything on cost control in the individual or employer market, it addressed the issue by inviting employers to make their insurance offerings worse. Employers were required to offer some form of insurance, but the standard for that insurance was very low indeed; employees could be charged up to nearly 10% of their income in premiums for coverage with high deductibles and extensive cost-sharing. More ambitiously, the ACA attempted to fulfill a longstanding bipartisan dream of healthcare policy wonks by rolling back the tax subsidy for employer-based insurance; the so-called “Cadillac tax” would have revoked the subsidy initially only for the most generous employer insurance, but would over time have come to apply to most insurance. This effort corresponded to a long-held belief in the healthcare policy community that the tax subsidy encouraged employers to offer excessively generous coverage, and that this coverage in turn drove US healthcare costs.

Charitably, these design choices represented an effort at cost control through the “skin in the game” strategy: when required to pay a larger portion of their healthcare costs, Americans would be less likely to go to the doctor just for fun. Less charitably, they were an invitation for employers to control their own healthcare costs, at least, by shifting a growing share of those costs onto employees. This safety valve was crucial, since employers would no longer be able to limit their costs as they had in the past, by dropping coverage.

The Unfinished Business of the ACA and the Coming Crisis in Employer Insurance

As I said above, the ACA worked on its own terms: the law actually passed, it greatly expanded coverage by providing government subsidies for those locked out of the employer market, and it did so without causing massive outflows or disruptions in employer insurance. The strategy of expanding coverage without controlling costs was effective.

But that was over a decade ago, and costs have continued to rise. The ACA left employer-based insurance untouched at the heart of the US healthcare system, without resolving the inflationary pressure in the employer market. This pressure continues to grow. The average premium for employer-sponsored individual coverage has risen roughly 75%, from $4,824 in 2009 to $8,435 in 2023; for family coverage the 2023 figure is $23,968.

How have employers responded? First and foremost by shifting a growing share of medical costs onto their employees. Worker contributions to premiums, although capped at around 9% of worker income by the ACA, have grown in tandem with total premiums. At the same time, so-called “cost sharing” in US health insurance takes many forms and is difficult to measure, but the simplest proxy–the annual deductible–has nearly tripled in nominal terms since the advent of the ACA, from $533 in 2009 to $1,568 in 2023, with workers at small firms facing an average deductible of $2,138. As recently as 2006, 45% of workers faced no deductible for their coverage; that figure is now less than 10%. Many workers face significant cost-sharing in the form of “coinsurance” even after they hit their deductibles; it is common for a worker to owe 20% of hospital costs up to an out-of-pocket max that can be well north of $10,000. The growth of cost-sharing is the major contributor to a growing medical debt crisis, as hospitals attempt to collect from patients who can’t pay despite having insurance.

It is important to note that cost-sharing has restrained premium increases; if employers had had to hold cost-sharing constant, premiums would have grown even faster. This strategy is quickly approaching its limits, however; for actuarial reasons, further increases in deductibles will face diminishing returns in premium savings, and at some point employers will run up against even the ACA’s fairly low bar on coverage quality. These limits are already being reached in the low-wage labor market.
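
Here is a stylized illustration of that actuarial point, with a claims distribution invented purely for illustration: because most enrollees have small claims, each additional increment of deductible shifts fewer dollars from the plan to its members than the previous one, so the premium savings shrink.

```python
# Toy model of diminishing returns to higher deductibles. Premium savings
# roughly track the claims dollars shifted onto members, and those dollars
# grow more slowly as the deductible rises. All numbers are invented.
annual_claims = [0, 0, 0, 200, 400, 800, 1_500, 2_200, 8_000, 40_000]  # ten hypothetical enrollees

def dollars_shifted_to_members(deductible):
    """Claims dollars paid by members rather than the plan at a given deductible."""
    return sum(min(claim, deductible) for claim in annual_claims)

previous = 0
for deductible in (500, 1_000, 1_500, 2_000, 2_500):
    shifted = dollars_shifted_to_members(deductible)
    print(f"Deductible ${deductible:>5,}: members absorb ${shifted:>6,} "
          f"(+${shifted - previous:,} vs. the previous step)")
    previous = shifted
```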

Where will employers turn next? One possibility is to skirt the limits of the ACA’s employer mandate–for example by offering plans that cover “minimum essential benefits” under the ACA but do not meet the ACA’s “minimum value” requirements because they leave employees with enormous out-of-pocket expenses. An employee misinformed enough to enroll in such coverage is effectively uninsured, but the employer pays only part of the penalty for not offering insurance. Another option is so-called “reference-based pricing” schemes, which do not have networks and do not negotiate prices with providers, instead paying a low standard rate for care. Employees with this kind of coverage may find most providers unwilling to treat them and may be “balance billed” for enormous amounts of money when they do receive care.

As a last resort–particularly if the loopholes I just described are closed by regulators, which they should be, and which provider lobbies will demand–some employers may choose to simply drop coverage and pay the penalty. The ACA’s employer mandate penalties are significant, but they’re not prohibitive; if premiums continue rising, there will come a point when paying the penalty is cheaper than offering insurance. If this happens, employees will have no choice but to seek insurance on the individual market or (if they’re poor enough) enroll in Medicaid.

A tight labor market has limited these dynamics so far, but the next recession may prove a turning point. At that point, the dam the ACA set up to prevent employers from “dumping” employees into publicly subsidized coverage will have broken.

Progressive Strategy for the Next Healthcare Crisis

As employer insurance begins to unravel around the edges, progressives will be tempted to step in and save it. They should think twice before doing so. There’s a lot to be said for a situation in which a growing share of Americans receive health insurance through Medicaid and through public subsidy on the ACA exchanges.

Medicaid and (especially) the ACA exchange have their problems, but they already offer better and more affordable insurance than low-end employer plans, and more importantly their problems are far easier to fix than the problems of the employer market. If Medicaid pays too little to providers and has too few providers, its reimbursement rates can be raised. If ACA exchange insurance is too expensive, that insurance can be subsidized, at both the state and federal level. If exchange insurance has high cost-sharing and inadequate networks, states and the federal government have full power to set standards in these markets. Perhaps most importantly, states have proven quite effective at controlling costs for the non-elderly Medicaid population, and could do the same for the exchange population, as recent state experiments with so-called “public options” in Washington, New Mexico and elsewhere demonstrate. States can even find ways to expand Medicaid-like coverage for working-class people, as New York and Minnesota already do through Basic Health Plan programs.

All these policy aims are far more easily achieved in a single, centralized individual market than in the fragmented and opaque employer market–and they free policymakers from a sharp tradeoff where raising standards for working-class insurance coverage imposes costs on businesses or causes them to drop coverage. Non-employer insurance also offers far better opportunities for state-level policymaking than does the employer marketplace, since states are virtually banned from regulating employer insurance under ERISA. If ambitious healthcare reform is blocked at the federal level for the foreseeable future, progressives have ample opportunity to experiment with such reform in the states.

What would such an agenda look like? At the federal level, the Biden administration can likely raise the bar on employer insurance through regulatory action, taking a closer look at whether employer insurance meets “minimum essential coverage” and especially “minimum value” standards and whether employers are appropriately informing employees of their rights. Setting clearer minimum standards on employer insurance will cause some employers to stop offering it–and instead of fighting that dynamic, progressives should focus on ensuring that their employees have good options elsewhere, by instituting or expanding Basic Health Plan and Medicaid buy-in options, increasing subsidies and standards on state and federal exchanges, and implementing robust public options wherever possible.

Even if successful, this strategy wouldn’t spell the end of employer insurance overnight. Some 59% of non-elderly Americans receive insurance through their own or a family member’s employer; that’s a lot of people, and it would still be a lot of people even if employers began to drop coverage. But it’s easy to imagine a virtuous cycle in which, as the Medicaid and individual-market populations grow, a large and diverse constituency develops for improving them. In the long run, the prospects for truly universal healthcare might be far better than they are today.