Did Ending Unemployment Insurance Extensions Really Create 1.8 Million Jobs?

January 27, 2015

According to a new study by Marcus Hagedorn, Iourii Manovskii and Kurt Mitman (HMM), Congress failing to reauthorize the extension of unemployment insurance (UI) resulted in 1.8 million additional people getting jobs. But wait, how does that happen when only 1.3 million people had their benefits expire?

The answer is that the paper departs from the normal path of this research in its model, its techniques, and its data. Patrick Brennan has a nice write-up of the paper here, but it doesn't convey how different this paper is from the vast majority of the research. The authors made a well-criticized splash in 2013 by arguing that most of the rise in unemployment in the Great Recession was UI-driven; this new paper is a continuation of that approach.

Gold Standard Model. Before we go further, let's understand what the general standard in UI research looks like. The model here is that UI makes it easier for workers to pass up job offers. As a result, they'll take longer to find a job, which creates a larger pool of unemployed people and raises unemployment. To test this, researchers use longitudinal data to compare the length of job searches for individuals who receive UI with those who do not.
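To make that concrete, here is a minimal sketch of what the micro comparison looks like in practice: a discrete-time hazard regression on a person-month panel, asking whether receiving UI lowers the monthly probability of exiting unemployment. The file and column names are hypothetical; this illustrates the general method, not the actual specification in Rothstein or Farber and Valletta.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical person-month panel: one row per unemployed worker per month,
# with an indicator for whether they found a job that month and whether
# they were receiving UI benefits. All column names are illustrative.
df = pd.read_csv("unemployment_spells.csv")

# Discrete-time hazard model: does receiving UI lower the monthly
# probability of exiting unemployment, controlling for worker traits?
model = smf.logit(
    "found_job ~ receives_ui + age + education + months_unemployed + C(state)",
    data=df,
).fit()

# A negative coefficient means UI recipients take longer to find jobs.
print(model.params["receives_ui"])
```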

This is the standard in the two biggest UI studies from the Great Recession. Both essentially use individuals not receiving UI as a control group to see what getting UI does to people's job searches over time. Jesse Rothstein (2011) found that UI raised unemployment “by only about 0.1 to 0.5 percentage point.” Using a similar approach, Farber and Valletta (2013) later found “UI increased the overall unemployment rate by only about 0.4 percentage points.” These are generally accepted estimates.

And though small, they are real numbers. The question then becomes an analysis of the trade-offs between this higher unemployment and the positive effects of unemployment insurance, including income support, increased aggregate demand, and the efficiency gains from people taking enough time to find the best job for them.

This is not what HMM do in their research, either in terms of their data, which don't look at any individuals; their model, which tells a much different story than the one we traditionally understand; or their techniques, which add problems of their own. Let's start with the model.

Model Problems. The results HMM get are radically higher than those of these other studies. They argue that this is because they look at the “macro” effects of unemployment insurance. Instead of just people searching for a job, they argue that labor-search models imply that UI tightens the labor market, forcing employers to raise wages and, as a result, create fewer job openings.
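The mechanism is easiest to see in a textbook search-and-matching model. The sketch below is not HMM's model; it's a standard Pissarides-style setup with made-up parameter values, shown only to illustrate the channel they invoke: a more generous benefit b raises the Nash-bargained wage, squeezes the profit from a filled job, and lowers market tightness (vacancies per job seeker).

```python
from scipy.optimize import brentq

# Textbook search-and-matching model with illustrative, uncalibrated
# parameters: y = productivity, beta = worker bargaining power, c = vacancy
# posting cost, rs = discount plus separation rate, and q(theta) =
# A * theta**(-eta) is the rate at which vacancies are filled.
y, beta, c, rs, A, eta = 1.0, 0.5, 0.3, 0.05, 1.0, 0.5

def tightness(b):
    """Solve the free-entry condition c/q(theta) = (y - w)/(r + s),
    with the Nash-bargained wage w = (1-beta)*b + beta*(y + c*theta)."""
    f = lambda th: (1 - beta) * (y - b) - beta * c * th - c * rs * th**eta / A
    return brentq(f, 1e-9, 100.0)

for b in (0.4, 0.5, 0.6):  # more generous UI -> higher flow value b
    th = tightness(b)
    w = (1 - beta) * b + beta * (y + c * th)
    print(f"b={b:.1f}  tightness={th:.2f}  wage={w:.3f}")
# Higher b raises the bargained wage and lowers tightness (fewer vacancies
# per job seeker) -- the "macro" channel HMM appeal to.
```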

But in their study HMM only look at aggregate employment. If these labor-search dynamics were the mechanism, there should be something in the paper about actual wage data or job openings moving in response to this change. There is not. Indeed, their argument hinges entirely on the idea that the labor market was too tight in 2010-2013, with workers having too much bargaining power, and that the end of extended UI finally relaxed this. If that's the case, then where are the wage declines and corporate profit gains in 2014?

This isn’t an esoteric discussion. They are, in effect, taking a residual and calling it the “macro” effect of UI. But we shouldn’t take it for granted that search models can confirm these predictions without several different types of evidence; as Marshall Steinbaum wrote in his appreciation of these models, when it comes to predictions about business cycles and wages they are “an empirical disaster.”

Technique Problems. The model’s vagueness is amplified by the control issue. One of the nice things about the standard approach is that people without UI make a natural control group. Here, HMM simply compare states (and then counties) with high and low UI durations, without looking at any individuals. They argue that since the expiration was done by Congress, it is essentially a random change.

But a quick glance shows that their high-benefit states had an unemployment rate of 8.4 percent in 2012, while their low-benefit states had an unemployment rate of 6.5 percent. Not random. As the economy recovers, we’d naturally expect the states with a higher initial unemployment rate to recover faster. But that would just be “recovery,” not an argument about UI, much less about workers’ bargaining power.
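This is exactly the kind of thing a standard balance check would catch. The sketch below uses a hypothetical state-level file with illustrative column names; the point is just that a credible “natural experiment” comparison starts by verifying the treated and control groups looked alike before treatment.

```python
import pandas as pd

# Hypothetical state-level panel; column names are illustrative. Before
# treating the benefit expiration as "as good as random," compare the two
# groups on pre-period outcomes.
states = pd.read_csv("state_ui_panel.csv")

pre = states[states.year == 2012]
balance = pre.groupby("high_ui_duration")["unemployment_rate"].mean()
print(balance)
# If the high-duration group sits near 8.4% and the low-duration group near
# 6.5%, the groups differ before treatment -- and mean reversion alone would
# make the high-unemployment states improve faster during a recovery,
# mimicking a "UI cut created jobs" effect.
```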

Data Problems. Their county-by-county analysis is meant to cover for this, but the data are problematic here. As Dean Baker notes in an excellent post, the local-area data they use are noisy, confounded by whether workers are counted in the state where they work or the state where they live, and largely model-driven. The fact that much of the data is model-driven is especially problematic for their cross-state county comparisons.

Baker replaces their employment data with the more reliable CES employment data (the headline job-creation number you hear every month) and finds the opposite headline result.

It’s not encouraging that you can get the opposite result by switching from one data source to another. Baker isn’t the first to question the robustness of these results to even minor changes in the data: the Cleveland Fed, examining an earlier version of their argument, found that the results collapsed once the timeframe was extended and outliers were excluded. The fact that the paper doesn’t test its results against a variety of data sources and measures also isn’t encouraging.
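In code, the robustness check Baker is effectively running is trivial: estimate the same specification with each employment measure and see whether the coefficient survives. The sketch below uses a hypothetical county panel with illustrative column names, not HMM's or Baker's actual data or specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county panel with employment growth measured two ways: a
# model-driven local-area series (as HMM use) and the CES payroll series
# (as Baker substitutes). All names here are illustrative.
counties = pd.read_csv("border_county_pairs.csv")

for outcome in ("laus_emp_growth", "ces_emp_growth"):
    res = smf.ols(f"{outcome} ~ ui_expiration + C(border_pair)",
                  data=counties).fit()
    print(outcome, round(res.params["ui_expiration"], 3))
# A result that flips sign when you swap one employment measure for another
# is not robust; a credible paper would report both.
```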

So: data problems, control problems, and the vague sense that this is just them finding a residual and attributing all of it to their “macro” element without enough supporting evidence. Rather than overturning the vast research already done, I think it’s best to conclude as Robert Hall of Stanford and the Hoover Institution did about their earlier paper making a similar argument: “This paper has attracted a huge amount of attention, much of it skeptical. I think it is an imaginative and potentially important contribution, but needs a lot of work to convince a fair-minded skeptic (like me).” This newest version is no different.