(I write occasional opinion pieces for Barron’s. This one was published there in October 2024. My previous pieces are here.)
Not long ago, there was widespread agreement on how to think about monetary policy. When the Federal Reserve hikes, this story went, it makes credit more expensive, reducing spending on new housing and other forms of capital expenditure. Less spending means less demand for labor, which means higher unemployment. With unemployment higher, workers accept smaller wage gains, and slower wage growth is in turn passed on as slower growth in prices — that is, lower inflation.
This story, which you still find in textbooks, has some strong implications. One is that there is a unique level of unemployment consistent with stable 2% inflation — what is often called the “nonaccelerating inflation rate of unemployment,” or NAIRU.
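In stylized textbook form (my notation, not anything specific to the discussion below), the idea is an accelerationist Phillips curve:

$$\pi_t = \pi_{t-1} - \alpha\,(u_t - u^*), \qquad \alpha > 0,$$

where $\pi_t$ is inflation and $u_t$ is unemployment. Only at $u_t = u^*$ does inflation hold steady; hold unemployment below $u^*$ and inflation ratchets up period after period. That $u^*$ is the NAIRU.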
The textbook story also assumes that wage- and price-setting depend on expectations of future prices. So it’s critical for central banks to stabilize not only current inflation but beliefs about future inflation; this implies a commitment to head off any inflationary pressures even before prices accelerate. On the other hand, if there is a unique unemployment rate consistent with stable inflation, then the Fed’s mandate is dual in name only. In practice, full employment and price stability come to the same thing.
In the early 21st century, all this seemed sufficiently settled that fundamental debates over monetary policy could be treated as a question for history, not present-day economics.
The worldwide financial crisis of 2007-2009 unsettled the conversation. The crisis, and, even more, the glacial recovery that followed it, opened the door to alternative perspectives on monetary policy and inflation. Jerome Powell, who took office as Federal Open Market Committee chair in 2018, was more open than his predecessors to a broader vision of both the Fed’s goals and the means of achieving them. In the decade after the crisis, the idea of a unique, fundamentals-determined NAIRU came to seem less plausible.
These concerns were crystallized in the strategic review process the Fed launched in 2019. That review resulted, among other things, in a commitment to allow future overshooting of the 2% inflation target to make up for falling short of it. The danger of undershooting seemed greater than in the past, the Fed acknowledged.
One might wonder how much this represented a fundamental shift in the Fed’s thinking, and how much it was simply a response to the new circumstances of the 2010s. Had Fed decision-makers really changed how they thought about the economy?
Many of us try to answer these questions by parsing the publications and public statements of Fed officials.
A fascinating recent paper by three European political scientists takes this approach and carries it to a new level. The authors—Tobias Arbogast, Hielke Van Doorslaer and Mattias Vermeiren—take 120 speeches by FOMC members from 2012 through 2022, and systematically quantify the use of language associated with defense of the NAIRU perspective, and with various degrees of skepticism toward it. Their work allows us to put numbers on the shift in Fed thinking over the decade.
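To give a flavor of what this kind of coding involves, here is a minimal sketch in Python. The categories and phrase lists are invented for illustration — the authors’ actual codebook is richer and hand-validated — but the mechanics of dictionary-based text coding look roughly like this:

```python
from collections import Counter

# Hypothetical codebook, for illustration only; the paper's actual
# categories and term lists differ.
CODEBOOK = {
    "nairu_defense": ["natural rate", "preemptive", "anchor expectations"],
    "epistemic_doubt": ["uncertain", "data dependent", "wait and see"],
    "ontological_doubt": ["hysteresis", "benefits of a tight labor market"],
}

def code_speech(text: str) -> Counter:
    """Count how often each category's phrases appear in one speech."""
    text = text.lower()
    return Counter(
        {cat: sum(text.count(phrase) for phrase in phrases)
         for cat, phrases in CODEBOOK.items()}
    )

# Invented examples; aggregating counts by speaker and year is what
# lets you trace the shift over time and across FOMC members.
speeches = [
    ("Yellen", 2015, "Preemptive action is needed to anchor expectations..."),
    ("Powell", 2019, "The natural rate is uncertain; we can wait and see..."),
]
for speaker, year, text in speeches:
    print(speaker, year, dict(code_speech(text)))
```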
The paper substantiates the impression of a move away from the NAIRU framework in the decade after the financial crisis. By 2019-2020, references to the natural rate or to the need to preempt inflation had almost disappeared from the public statements of FOMC members, while expressions of uncertainty about the natural rate, of a wait-and-see attitude toward inflation, and of concern about hysteresis (long-term effects of demand shortfalls) had become more common. The mantra of “data dependence,” so often invoked by Powell and others, is also part of the shift away from the NAIRU framework, since it implies less reliance on unobservable parameters of economic models.
Just as interesting as the paper’s confirmation of a shift in Fed language is what it says about how the shift took place. It was only in small part the result of changes in the language used by individual FOMC members. A much larger part of the shift is explained by the changing composition of the FOMC, with members more committed to the NAIRU gradually replaced by members more open to alternative perspectives.
The contrast between Janet Yellen, who chaired the Fed from 2014 to 2018, and Powell is particularly noteworthy in this respect. Yellen, by the paper’s metric, was among the most conservative members of the FOMC: among those most committed to the idea of a fixed NAIRU and to the need to preemptively raise rates in response to a strong labor market. Powell is at the opposite extreme — along with former Vice Chair Lael Brainard, he is the member who has most directly rejected the NAIRU framework, and who is most open to the idea that tight labor markets have long-term benefits for income distribution and productivity growth. The paper’s authors suggest, plausibly, that Powell’s professional training as a lawyer rather than an economist means that he is less influenced by economic models; in any case, the contrast shows how insulated the politics of the Fed are from the larger partisan divide.
Does the difference in conceptual frameworks really matter? The article’s authors argue that it does, and I agree. FOMC members may sincerely believe that they are nonideological technicians, pragmatically responding to the latest data in the interests of society as a whole. But data and interests are always assessed through the lens of some particular worldview.
To take one important example: In the NAIRU framework, the economy’s productive potential is independent of monetary policy, while inflation expectations are unstable. This implies that missing the full employment target has at worst short-term effects, while missing the inflation target grows more costly over time. NAIRU, in other words, makes a preemptive strike on any sign of inflation seem reasonable.
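The asymmetry drops straight out of the stylized Phillips curve above. Iterating it forward,

$$\pi_T = \pi_0 - \alpha \sum_{t=1}^{T} (u_t - u^*),$$

so every period spent below $u^*$ adds permanently to the level of inflation, which then has to be worked back down by an offsetting stretch of unemployment above $u^*$. An employment shortfall, by contrast, is a one-time loss that leaves no residue once it ends, at least on the NAIRU view.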
On the other hand, if you think that hysteresis is real and important, and that inflation is at least sometimes a question of supply disruptions rather than unanchored expectations, then it may be the other way round. Falling short of the employment target may be the error with more lasting consequences. This is a perspective that some FOMC members, particularly Powell and Brainard, were becoming open to prior to the pandemic.
Perhaps even more consequential: if there is a well-defined NAIRU and we have at least a rough idea of what it is, then it makes sense to raise rates in response to a tight labor market, even if there is no sign yet of rising inflation. But if we don’t believe in the NAIRU, or at least don’t feel any confidence about its level, then it makes sense to focus more on actual inflation, and less on the state of the labor market.
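One way to see what is at stake is through a generic Taylor-type reaction function — a textbook sketch, not the Fed’s actual rule:

$$i_t = r^* + \pi_t + a\,(\pi_t - \pi^*) + b\,(u^* - u_t).$$

Confidence in a known $u^*$ argues for a substantial $b$: rates rise whenever the labor market looks tight, whatever current inflation is doing. Doubt about $u^*$ argues for shrinking $b$ toward zero and letting the response to realized inflation, the $a$ term, carry the weight.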
By the close of the 2010s, the Fed seemed to be well along the road away from the NAIRU framework. What about today? Was heterodox language on inflation merely a response to the decade of weak demand following the financial crisis, or did it represent a more lasting shift in how the Fed thinks about its mission?
On this question, the evidence is mixed. After inflation picked up in 2022, we did see some shift back to the older language at the Fed. You will not find, in Powell’s recent press conferences, any mention of the longer-term benefits of a tight labor market that he pointed to a few years ago. Hysteresis seems to have vanished from the lexicon.
On the other hand, the past few years have also not been kind to those who see a tight link between the unemployment rate and inflation. When inflation began rising at the start of 2021, unemployment was still over 6%; two years later, when high inflation was essentially over, unemployment was below 4%. If the Fed had focused on the unemployment rate, it would have gotten inflation wrong both coming and going.
This is reflected in the language of Powell and other FOMC members. One change in central-bank thinking that seems likely to last is a move away from the headline unemployment rate as a measure of slack. The core of the NAIRU framework is a tight link between labor-market conditions and inflation. But even if one accepts that link conceptually, there’s no reason to think that the official unemployment rate is the best measure of those conditions. In the future, we are likely to see discussion of a broader set of labor-market indicators.
The bigger question is whether the Fed will return to its old worldview, in which tight labor markets are seen as in themselves an inflationary threat. Or will it stick with its newer, agnostic and data-driven approach, and remain open to the possibility that labor markets can stay much stronger than we are used to without triggering rising inflation? Will it return to a single-minded focus on inflation, or has there been a permanent shift toward giving more independent weight to the full employment target? As we watch the Fed’s actions in coming months, it will be important to pay attention not just to what it does, but to why it says it is doing it.
FURTHER THOUGHTS: I really liked the Arbogast et al. paper, for reasons I couldn’t fully do justice to in the space of a column like this.
First of all, in addition to the new empirical stuff, it does an outstanding job laying out the intellectual framework within which the Fed operates. For better or worse, monetary policy is probably more reliant than most things that government does on a consciously held set of theories.
Second, it highlights — in a way I have also tried to — the ways that hysteresis is not just a secondary detail, but fundamentally undermines the conceptual foundation on which conventional macroeconomic policy operates. The idea that potential output and long-run growth (two sides of the same coin) are determined prior to, and independent of, current (demand-determined) output is what allows a basically Keynesian short-run framework to coexist with the long-run growth models that are the core of modern macro. If demand has lasting effects on the labor force, productivity growth and potential output, then that separation becomes untenable, and the whole Solow apparatus floats off into the ether. In a world of hysteresis, we no longer have a nice hierarchy of “fast” and “slow” variables; arguably there’s no economically meaningful long run at all.1
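One way to make the point concrete (my notation, a minimal sketch rather than anything in the paper): let potential output evolve as

$$\bar{y}_t = \bar{y}_{t-1} + g + \eta\,(y_{t-1} - \bar{y}_{t-1}).$$

With $\eta = 0$, potential grows at the exogenous rate $g$ and demand shortfalls eventually wash out; that is the conventional separation. With $\eta > 0$, every demand-driven gap feeds permanently into the level of potential, and the “long run” is nothing but the accumulated history of short runs.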
Arbogast and co. don’t put it exactly like this, but they do emphasize that the existence of hysteresis (and even more so of reverse hysteresis, where an “overheating” economy permanently raises potential) fundamentally undermines the conventional distinction between the short run and the long run.
This leads to one of the central points of the paper, which I wish I’d been able to highlight more: the difference between what they call “epistemological problematization” of the NAIRU, that is, doubts about how precisely we can know it and related “natural” parameters; and “ontological problematization,” or doubts that it is a relevant concept for policy at all. At a day-to-day operational level, the difference may not always be that great; but I think — as do the authors — that it matters a lot for the evolution of policy over longer horizons or in new conditions.
The difference is also important for those of us thinking and writing about the economy. The idea of some kind of “natural” or “structural” parameters, of a deeper model that abstracts from demand and money, deviations from which are both normatively bad and important only in the short term — this is an incubus that we need to dislodge if we want to move toward any realistic theorizing about capitalist economies. It substitutes an imaginary world with none of the properties of the world that matter for most of the questions we are interested in — a toy train set to play with instead of trying to solve the very real engineering problems we face.
I appreciate the paper’s concluding agnosticism about how far the Fed has actually moved away from this framework. As I mentioned in the piece, I was struck by their finding that among the past decade’s FOMC members, Powell has moved the furthest away from NAIRU and the rest of it. If nothing else, it vindicates some of my own kind words about him in the runup to his reappointment.2
This is also, finally, an example of what empirical work in economics ought to look like.3 First, it’s frankly descriptive. Second, it asks a question which has a quantitative answer, with substantively interesting variation (across both time and FOMC members, in this case). As Deirdre McCloskey stressed in her wonderful pamphlet The Secret Sins of Economics, the difference between quantitative and qualitative answers is the difference between progressive social science and … whatever economics is.
What kind of theory would actually contribute to an … inquiry into the world? Obviously, it would be the kind of theory for which actual numbers can conceivably be assigned. If Force equals Mass times Acceleration then you have a potentially quantitative insight into the flight of cannon balls, say. But the qualitative theorems (explicitly advocated in Samuelson’s great work of 1947, and thenceforth proliferating endlessly in the professional journals of academic economics) don’t have any place for actual numbers.
A qualitative question, in empirical work, is a question of the form “are these statistical results consistent or inconsistent with this theoretical claim?” The answer is yes, or no. The specific numbers — coefficients, p-values, and of course the tables of descriptive statistics people rush through on their way to the good stuff — are not important or even meaningful. All that matters is whether the null has been rejected.
McCloskey insists, correctly in my view, that this kind of work adds nothing to the stock of human knowledge. And I am sorry to say that it is just as common in heterodox work as in the mainstream.
To add to our knowledge of the world, empirical work must, to begin with, tell you something you didn’t know before you did it. “Successfully” confirming your hypothesis obviously fails this test. You already believed it! It also must yield particular factual claims that other people can make use of. In general, this means some number — it means answering a “how much” question and not just a “yes or no” question. And it needs to reveal variation in those quantities along some interesting dimension. Since there are no universal constants to uncover in social science, interesting results will always be about how something is bigger, or more important, in one time, one country, one industry, etc. than in another. Which means, of course, that the object of any kind of empirical work should be a concrete historical development, something that happened at a specific time and place.
One sign of good empirical work is that there are lots of incidental facts that are revealed along the way, besides the central claim. As Andrew Gelman observed somewhere, in a good visualization, the observations that depart from the relationship you’re illustrating should be as informative as the ones that fit it.
This paper delivers that. Along with the big question of a long-term shift, or not, in the Fed’s thinking, you can see other variation that may or may not be relevant to the larger question but is interesting as fact about the world in its own right. If, for example, you look at the specific examples of language they coded in each category, the paper’s figures show lots of interesting fine-grained variation over time.
Also, in passing, I appreciate the fact that they coded the terms themselves and didn’t outsource the job to ChatGPT. I’ve seen a couple of papers doing quantitative analysis of text that use chatbots to classify it. I really hope that does not become the norm!
Anyway, it’s a great paper, which I highly recommend, both for its content and as a model for what useful empirical work in economics should look like.