“Economic Growth, Income Distribution, and Climate Change”

In response to my earlier post on climate change and aggregate demand, Lance Taylor sends along his recent article “Economic Growth, Income Distribution, and Climate Change,” coauthored with Duncan Foley and Armon Rezai.

The article, which was published in Ecological Economics, lays out a structuralist growth model with various additions to represent the effects of climate change and possible responses to it. The bulk of the article works through the formal properties of the model; the last section shows the results of some simulations based on plausible values of the various parameters.1 I hadn’t seen the article before, but its conclusions are broadly parallel to my arguments in the previous two posts. It tells a story in which public spending on decarbonization not only avoids the costs and dangers of climate change itself, but leads to higher private output, income and employment – crowding in rather than crowding out.

Before you click through, a warning: there’s a lot of math there. We’ve got a short run where output and investment are determined by demand and distribution; a long run where the investment rate from the short-run dynamics is combined with exogenous population growth and endogenous productivity growth to yield a growth path; and an additional climate sector that interacts with the economic variables in various ways. How much the properties of a model like this change your views about the substantive question of climate change and economic growth will depend on how you feel about exercises like this in general. How much should the fact that one can write down a model where climate change mitigation more than pays for itself through higher output change our beliefs about whether this is really the case?

For some people (like me), the specifics of the model may be less important than the fact that one of the world’s most important heterodox macroeconomists thinks the conclusion is plausible. At the least, we can say that there is a logically coherent story in which climate change mitigation does not crowd out other spending, and that this represents an important segment of heterodox economics and not just an idiosyncratic personal view.

If you’re interested, the central conclusions of the calibrated model are shown below. The dotted red line shows the business-as-usual scenario with no public spending on climate change, while the other two lines show scenarios with more or less aggressive public programs to reduce and/or offset carbon emissions.

Here’s the paper’s summary of the outcomes along the business-as-usual trajectory:

Rapid growth generates high net emissions which translate into rising global mean temperature… As climate damages increase, the profit rate falls. Investment levels are insufficient to maintain aggregate demand and unemployment results. After this boom-bust cycle, output is back to its current level after 200 years but … employment relative to population falls from 40% to 15%. … Those lucky enough to find employment are paid almost three times the current wage rate, but the others have to rely on subsistence income or public transfers. Only in the very long run, as labor productivity falls in response to rampant unemployment, can employment levels recover. 

In the other scenarios, with a peak of 3-6% of world GDP spent on mitigation, we see continued exponential output growth in line with historical trends. The paper doesn’t make a direct comparison between the mitigation cases and a world where there was no climate change problem to begin with. But the structure of the model at least allows for the possibility that output ends up higher in the former case.

The assumptions behind these results are: that the economy is demand constrained, so that public spending on climate mitigation boosts output and employment in the short run; that investment depends on demand conditions as well as distributional conflict, allowing the short-run dynamics to influence the long-run growth path; that productivity growth is endogenous, rising with output and with employment; and that climate change affects the growth rate and not just the level of output, via lower profits and faster depreciation of existing capital.2
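To make the mechanism concrete, here is a minimal toy simulation of a demand-led growth path with a climate feedback. To be clear: this is a sketch in the spirit of the model, not the Foley-Rezai-Taylor model itself, and every parameter and functional form below is an illustrative assumption of my own. Still, it reproduces the qualitative story: under business as usual, growth stalls and reverses once climate damage erodes profitability, while a few percent of GDP spent on mitigation sustains growth.

```python
# Toy demand-led growth model with a crude climate feedback.
# NOT the Foley-Rezai-Taylor model: all parameters and functional
# forms below are invented for illustration.

def simulate(mitigation_share, years=200):
    """Return (output index, temperature anomaly) after `years` years."""
    output, temp = 1.0, 1.0   # output index (today = 1); deg C anomaly
    for _ in range(years):
        # Climate damage squeezes the profit rate as temperature rises.
        damage = 1.0 / (1.0 + 0.05 * temp ** 2)
        profit_rate = 0.25 * damage
        # Demand-determined accumulation: investment responds to
        # profitability, and public mitigation spending adds to demand
        # (crowding in) rather than subtracting from a fixed total.
        growth = 0.25 * (profit_rate - 0.15) + 0.5 * mitigation_share
        output *= 1.0 + growth
        # Net emissions scale with output; enough abatement spending
        # drives them to zero.
        emissions = 0.05 * output * (1.0 - 40.0 * mitigation_share)
        temp += 0.05 * max(emissions, 0.0)
    return output, temp

for label, m in [("business as usual", 0.0), ("3% of GDP on mitigation", 0.03)]:
    y, t = simulate(m)
    print(f"{label}: output index {y:,.0f}, temperature anomaly +{t:.1f} C")
```

Nothing hangs on the particular numbers; the point is only that once investment is demand-led and productivity endogenous, mitigation spending can raise the growth path rather than subtract from it.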

This is all very interesting. But again, we might ask how much we learn from this sort of simulation. Certainly it shouldn’t be taken as a prediction! To me there is at least one clear lesson: a simple cost-benefit framework is inadequate for thinking about the economic problem of climate change. Spending on decarbonization is not simply a cost. If we want to think seriously about its economic effects, we have to think about demand, investment, distribution and induced technological change. Whether or not you find this particular formalization convincing, these are the questions to ask.

Guns and Ice Cream

I’ve gotten some pushback on the line from my decarbonization piece that “wartime mobilization did not crowd out civilian production.” More than one person has told me they agree with the broader argument but don’t find that claim believable. Will Boisvert writes in comments:

Huh? The American war economy was an *austerity* economy. There was no civilian auto production or housing construction for the duration. There were severe housing shortages, and riots over housing shortages. Strikes were virtually banned. Millions of soldiers lived in barracks, tents or foxholes, on rations. So yeah, there were drastic trade-offs between guns and butter (which was rationed for civilians).

It’s true that there were no new cars produced during the war, and very little new housing.1 But this doesn’t tell us what happened to civilian output in general. For most of the war, wartime planning involved centralized allocation of a handful of key resources — steel, aluminum, rubber — that were the most important constraints on military production. This obviously ruled out making cars, but most civilian production wasn’t directly affected by wartime controls. 2 If we want to look at what happened to civilian production overall, we have to look at aggregate measures.

The most comprehensive discussions of this I’ve seen are in various pieces by Hugh Rockoff.3 Here’s the BEA data on real (inflation-adjusted) civilian and military production, as he presents it:

Civilian and military production in constant dollars. Source: H. Rockoff, ‘The United States: from ploughshares into swords’ in M. Harrison, ed, The Economics of World War II

As you can see, civilian and military production rose together in 1941, but civilian production fell in 1942, once the US was officially at war. So there does seem to be some crowding out. But looking at the big picture, I think my claim is defensible. From 1939 to its peak in 1944, annual military production increased by 80 percent of prewar GDP. The fall in real civilian production over this period was less than 4 percent of prewar GDP. So essentially none of the increase in military output came at the expense of civilian output; it was all additional to it. And civilian production began rising again before the end of the war; by 1945 it was well above 1939 levels.
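To put the two magnitudes side by side (the 80 and 4 percent figures are the ones just cited, with 1939 GDP normalized to 100):

```python
# Back-of-envelope comparison, in units of prewar (1939) GDP = 100.
# The two inputs are the figures quoted above from the BEA data as
# presented by Rockoff; the rest is arithmetic.
military_increase = 80   # rise in annual military production, 1939-1944
civilian_decline = 4     # fall in annual civilian production (at most)

offset_share = civilian_decline / military_increase
print(f"share of the military buildup offset by lower civilian output: "
      f"{offset_share:.0%}")
# -> 5% at most: some 95% of the buildup was a net addition to output.
```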

Production is not the same as living standards. As it happens, civilian investment fell steeply during the war – in 1943-44, it was only about one-third of its prewar level. But if we look at civilian consumption rather than output, we see a steady rise during the war. By the official numbers, real per-capita civilian consumption was 5 percent higher in 1944 – the peak of war production – than it had been in 1940. Rockoff believes that, although the BLS did try to correct for the distortions created by rationing and price controls, the official numbers still understate the inflation facing civilians. But even his preferred estimate shows a modest increase in per-capita civilian consumption over this period.

We can avoid the problems of aggregation if we look at physical quantities of particular goods. For example, shoes were rationed, but civilians nonetheless bought about 5 percent more shoes annually in 1942-1944 than they had in 1941. Civilian meat consumption increased by about 8 percent, from 142 pounds per person in 1940 to 154 pounds in 1944. As it happens, butter seems to be one of the few categories of food where consumption declined during the war. Here’s Rockoff’s discussion:

Consumption of edible fats, particularly butter, was down somewhat during the war. Thus in a strict sense the United States did not have guns and butter. The reasons are not clear, but the long-term decline in butter consumption probably played a role. Ice cream consumption, which had been rising for a long time, continued to rise. Thus, the United States did have guns and ice cream. The decline in edible fat consumption was a major concern, and the meat rationing system was designed to provide each family with an adequate fat ration. The concern about fats aside, [civilian] food production held up well.

As this passage suggests, rationing in itself should not be seen as a sign of increased scarcity. It is, rather, an alternative to the price mechanism for the allocation of scarce goods. In the wartime setting, it was introduced where demand would exceed supply at current prices, and where higher prices were considered undesirable. In this sense, rationing is the flipside of price controls. Rationing can also be used to deliver a more equitable distribution than prices would — especially important where we are talking about a necessity like food or shoes.

The fundamental reason why rationing was necessary in the wartime US was not that civilian production had fallen, but that civilian incomes were rising so rapidly. Civilian consumption might have been 5 percent higher in 1944 than in 1940, but aggregate civilian wages and salaries were 170 percent higher. Prices rose somewhat during the war years; but without price controls and rationing, inflation would undoubtedly have been much higher. Rockoff’s comment on meat probably applies to a wide range of civilian goods: “Wartime shortages … were the result of large increases in demand combined with price controls, rather than decreases in supply.”

Another issue, which Rockoff touches on only in passing, is the great compression of incomes during the war. Per Piketty and co., the income share of the top 10 percent dropped from 45 percent in 1940 to 33 percent in 1945. If civilian consumption rose modestly in the aggregate, it must have risen by more for the non-wealthy majority. So I think it’s pretty clear that in the US, civilian living standards generally rose during the war, despite the vast expansion of military production.
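A rough calculation shows how much the compression matters. The income shares are the Piketty figures just cited, the 5 percent is the official consumption growth discussed above, and the assumption that consumption moved in line with income shares is mine, purely for illustration:

```python
# Illustrative only: assume consumption was distributed roughly in
# proportion to income, so the bottom 90% got (1 - top 10% share).
top10_1940, top10_1945 = 0.45, 0.33   # Piketty et al. income shares
aggregate_growth = 0.05               # official per-capita consumption rise

bottom90_before = 1.00 * (1 - top10_1940)                     # = 0.55
bottom90_after = (1 + aggregate_growth) * (1 - top10_1945)    # ~ 0.70
print(f"implied consumption growth for the bottom 90%: "
      f"{bottom90_after / bottom90_before - 1:.0%}")
# -> roughly +28%, even though the aggregate rose only 5%.
```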

You might argue that even if civilian consumption rose, it’s still wrong to say there was no crowding out, since it could have risen even more without the war. Of course one can’t know what would have happened; even speculation depends on what the counterfactual scenario is. But certainly it didn’t look this way at the time. Real per capita income in the US increased by less than 2 percent in total over the decade 1929-1939.  So the growth of civilian consumption during the war was actually faster than in the previous decade. There was a reason for the popular perception that “we’ve never had it so good.”

It is true that there was already some pickup in growth in 1940, before the US entered the war – though rearmament was already under way. But there was no reason to think that faster growth was fated to happen regardless of military production. If you read stuff written at the time, it’s clear that most people believed the 1930s represented, at least to some degree, a new normal; and no one believed that the huge increase in production of the war years would have happened on its own.

Will also writes:

War production itself was profoundly irrational. Expensive capital goods were produced, thousands of tanks and warplanes and warships, whose service lives spanned just a few hours. Factories and production lines were built knowing that in a year or two there would be no market at all for their products.

I agree that military production itself is profoundly irrational. Abolishing the military is a program I fully support. But I don’t think the last sentence follows. Much wartime capital investment could be, and was, rapidly turned to civilian purposes afterwards. One obvious piece of evidence for this is the huge increase in civilian output in 1946; there’s no way that production could increase by one third in a single year except by redirecting plant and equipment built for the military.

And of course much wartime investment was in basic industries for which reconversion wasn’t even necessary. The last chapter of Mark Wilson’s Destructive Creation makes a strong case that postwar privatization of factories built during the war was very valuable for postwar businesses, and that acquiring them was a top priority for business leaders in the reconversion period. 4 By one estimate, in the late 1940s around a quarter of private manufacturing capital consisted of plant and equipment built by the government during the war and subsequently transferred to private business. In 1947, for example, about half the nation’s aluminum came from plants built by the government during the war for aircraft production. All synthetic rubber — about half total rubber production — came from plants built for the military. And so on. While not all wartime investment was useful after the war, it’s clear that a great deal was.

I think people are attracted to the idea of wartime austerity because we’ve all been steeped in the idea of scarcity – that economic problems consist of the allocation of scarce means among alternative ends, in Lionel Robbins’ famous phrase. Aggregate demand is, in that sense, a profoundly subversive idea – it suggests that what’s really scarce isn’t our means but our wants. Most people are doing far less than they could be, given the basic constraints of the material world, to meet real human needs. And markets are a weak and unreliable tool for redirecting our energies to something better. World War II is the biggest experiment to date on the limits of boosting output through a combination of increased market demand and central planning. And it suggests that, although supply constraints are real – wartime controls on rubber and steel were there for a reason – in general we are much, much farther from those constraints than we normally think.

Decarbonization: A Keynesian View

The International Economy has asked me to take part in a couple of their recent roundtables on economic policy. My first contribution, on productivity growth, is here (scroll down). My second one, on green investment, is below. But first, I want to explain a little more what I was trying to do with it.

I am not trying to minimize the challenge of dealing with climate change. But I do want to reject one common way of thinking about that challenge — as a “cost”, as some quantity of other needs that will have to go unmet. I reject it because output isn’t fixed — a serious effort to deal with climate change will presumably lead to a boom, with much higher levels of employment and investment. And more broadly, I reject it because it’s profoundly wrong to think of the complex activities of production as being equivalent to a certain quantity of “stuff”.

There’s a Marxist version of this, which I also reject — that the reproduction of capitalism requires an ever-increasing flow of material inputs and outputs, which rules out any kind of environmental sustainability. I think this mistakenly equates the situation facing the individual capitalist — the need to maximize money sales relative to money outlays — with the logic of the system as a whole. There is no necessary link between endless accumulation of money and any particular transformation of the material world. To me the real reason capitalism makes it so hard to address climate change isn’t any objective need to dump carbon into the atmosphere. It’s the obstacles that private property and the pursuit of profit — and their supporting ideologies — create for any kind of conscious reorganization of productive activity.

The question was, who will be the winners and losers from the transition away from carbon? Here’s what I wrote:

The response to climate change is often conceived as a form of austerity—how much consumption must we give up today to avoid the costs of an uninhabitable planet tomorrow? This way of thinking is natural for economists, brought up to think in terms of the allocation of scarce means among competing ends. By some means or other—prices, permits, or plans—part of our fixed stock of resources must, in this view, be used to prevent (or cope with) climate change, reducing the resources available to meet other needs.

The economics of climate change look quite different from a Keynesian perspective, in which demand constraints are pervasive and the fundamental economic problem is not scarcity but coordination. In this view, the real resources for decarbonization will not have to be withdrawn from other uses. They can come from an expansion of society’s productive capabilities, thanks to the demand created by clean-energy investment itself. Addressing climate change need not imply a lower standard of living—if it boosts employment and steps up the pace of technological change, it may well lead to a higher one.

People rightly compare the scale of the transition to clean technologies to the mobilization for World War II. Often forgotten, though, is that in countries spared the direct destruction of the fighting, like the United States, wartime mobilization did not crowd out civilian production. Instead, it led to a remarkable acceleration of employment and productivity growth. Production of a Liberty ship required some 1.2 million man-hours in 1941, but only 500,000 by 1944. These rapid productivity gains, spurred by the high-pressure economy of the war, meant there was no overall tradeoff between more guns and more butter.

At the same time, the degree to which all wartime economies—even the United States—were centrally planned reinforces a lesson that economic historians such as Alexander Gerschenkron and Alice Amsden have drawn from the experience of late industrializers: however effective decentralized markets may be at allocating resources at the margin, there is a limit to the speed and scale on which they can operate. The larger and faster the redirection of production, the more it requires conscious direction—though not necessarily by the state.

In a world where output is fundamentally limited by demand, action to deal with climate change doesn’t require sacrifice. Do we really live in such a world? Think back a few years, when macroeconomic discussions were all about secular stagnation and savings gluts. The headlines may have faded, but the conditions that prompted them have not. There’s good reason to think that the main limit to capital spending still is not scarce savings, but limited outlets for profitable investment, and that the key obstacle to faster growth is not technology or “structural” constraints, but the willingness of people to spend money. Bringing clean energy to scale will call forth new spending, both public and private, in abundance.

Of course, not everyone will benefit from the clean energy boom. The problem of stranded assets is real—any effective response to climate change will mean that much of the world’s coal and oil never comes out of the ground. But it’s not clear how far this problem extends beyond the fossil fuel sector. For manufacturers, even in the most carbon-intensive industries, only a small part of their value as enterprises comes from the capital equipment they own. More important is their role in coordinating production—a role that conventional economic models, myopically focused on coordination through markets, have largely ignored. Organizing complex production processes, and maintaining trust and cooperation among the various participants in them, are difficult problems, solved not by markets but by the firm as an ongoing social organism. This coordination function will retain its value even as production itself is transformed.
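A quick check on the Liberty ship numbers quoted above: cutting labor requirements from roughly 1.2 million man-hours per ship to 500,000 over three years implies productivity growth of about a third per year. A back-of-envelope calculation (the two figures are the ones from the text; the rest is arithmetic):

```python
# Liberty ship labor requirements, thousand man-hours per ship,
# as quoted above.
hours_1941, hours_1944 = 1200, 500
years = 3

# Output per man-hour is the inverse of man-hours per ship, so the
# implied average annual productivity gain over the three years is:
annual_gain = (hours_1941 / hours_1944) ** (1 / years) - 1
print(f"implied productivity growth: {annual_gain:.0%} per year")
# -> about 34% per year, an order of magnitude above peacetime norms.
```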

UPDATE: Follow-up post on the World War II experience here.

The Problem with Students Is They Work Too Hard

Something I’ve been thinking about lately: How many of the problems we have with students come from the fact that they have to take too many classes?

The standard requirement for a BA is 120 credit hours. At the typical three credits per class, that means taking five classes a semester, every semester, to graduate in four years. And that’s assuming you pass everything; that you don’t need any remedial or other non-credit classes; that all your transfer credits are accepted (an especially big issue for students who transfer to 4-year schools from community colleges); that you don’t have any problems with distribution requirements; and so on. Realistically, for many students even five classes a semester is not going to be enough.

And in many cases those five classes are on top of jobs, even full-time jobs, and on top of caring for children or other family members. Especially for nontraditional students, expecting a schedule of five classes every semester seems unrealistic. And remember, if you want to finish in four years, five classes a semester is the best case.

A lot of students should be taking lighter loads. But the thing is, financial aid is often contingent on maintaining full-time status. So even if someone knows they are not in a position to take five classes and put an acceptable level of effort into each one, the financial penalties for a more realistic schedule may be prohibitive.
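To see how big the penalty is, it’s worth spelling out the arithmetic of time to degree at different course loads (assuming the typical three-credit class, and that every class passes and counts):

```python
# Semesters (and years) to a 120-credit BA at different course loads.
# Assumes every class is 3 credits and every credit counts toward the
# degree -- the best case, as discussed above.
TOTAL_CREDITS = 120
CREDITS_PER_CLASS = 3

for classes_per_semester in (5, 4, 3):
    credits_per_semester = classes_per_semester * CREDITS_PER_CLASS
    semesters = -(-TOTAL_CREDITS // credits_per_semester)  # ceiling division
    print(f"{classes_per_semester} classes/semester: {semesters} semesters "
          f"= {semesters / 2:.0f} years")
# 5 classes -> 8 semesters (4 years); 4 -> 10 (5 years); 3 -> 14 (7 years).
```

Dropping from five classes to four adds a full year of tuition and living costs; dropping to three adds three. That’s the price of taking a load you can actually manage, even before any loss of financial aid.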

These aren’t such big problems at selective liberal arts colleges, where most students are traditional college-age without family responsibilities, don’t have to support themselves, aren’t taking remedial classes, get good advising and come in with AP credits. And students who don’t depend on financial aid have more flexibility about adjusting their courseload. But at most public universities, the system seems designed to ensure there will be a significant number of students taking more classes than they have time for.

Just speaking for myself, I never took five courses in a semester as an undergraduate. With lots of credit from AP exams and so on, I didn’t have to. I never worked during the semesters, except for things like writing for campus newspapers. And when I felt like I needed to cut back to part-time for personal reasons, I wasn’t financially penalized for it.

It’s a crazy setup, when you think about it. The combination of financial aid contingent on full-time status, AP credit, non-credit remediation courses, and problems with transferring credit from community colleges means that we end up demanding the least from the students with the most advantages, and the most from the students with the least advantages. AP credit and the like seem especially perverse: they literally mean that the better the high school you went to, the less work you are required to do to earn a BA.

We all know how excessive workloads undermine the quality of teaching, but I think we sometimes forget that the same goes for students. Just as with adjuncts as opposed to full-time faculty, students from weak high schools, or who start their college education at community colleges, are asked to do more work for the same reward. And then we get angry at them for not living up to the standards of the better-prepared students who are asked to do less. Just something to think about the next time someone doesn’t read the syllabus, or turn in their work, or show up for office hours.