Two Opposing Methods Tell Us the Too Big To Fail Subsidy Has Collapsed
June 11, 2015
By Mike Konczal
Is there a Too Big to Fail (TBTF) subsidy? If so, is it large, sustained, and institutionalized by Dodd-Frank, as many conservatives claim? Since this question always comes up in the debate over financial reform, and since both those who think Dodd-Frank should be repealed and those who think it didn’t go nearly far enough have an incentive to argue that the subsidy exists, let me put down my marker.
I think the TBTF subsidy was real in the aftermath of the crisis, when it was an obvious policy to prevent a collapse of the financial system. But, contrary to the conservative argument, the subsidy has since shrunk to a small amount, if it still exists at all. I also think the focus on it is a distraction. My reasoning comes less from any single study than from the fact that the two primary, yet methodologically opposite, quantitative techniques for measuring such a thing tell the exact same story, a point I don’t think has been noted.
This post is written for general readers, with the financial engineering in the footnotes. A TBTF subsidy just means that the markets view the largest firms as safer than their fundamentals warrant, because investors expect the government to step in if those firms fail. Since the market sees less credit risk, the largest firms get cheaper borrowing costs, and the price of insuring their debt, as measured by credit default swap (CDS) spreads, will be lower.
So how would you go about answering whether a bank has a TBTF subsidy? There are two general quantitative approaches. The first would be to compare that bank to other, non-TBTF banks, controlling for characteristics, and see whether or not it has cheaper funding. The second would be to look at the fundamentals of that bank by itself, estimate its chances of failing, and compare it to the market’s estimates. These approaches are, as a matter of methodology, the opposite of each other [1]. Yet they tell the same story. Let’s take them in turn.
First Method – Compare a Firm to Other Firms
The first approach is to simply compare TBTF firms with other firms and see if they enjoy lower funding costs. How do you do this? You get a ton of data across many different types of banks and look at the interest rates those banks get. You assume that the chances of default are random but can change based on characteristics [2]. You then run a lot of statistical regressions, trying to control for the relevant variables, and see whether being TBTF comes with a lower funding cost. This is what the GAO did last year.
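To make that concrete, here is a minimal sketch in Python of what such a regression might look like. Everything about it is hypothetical: the data file, the column names, the controls, and the simple $50 billion cutoff are stand-ins, and the GAO’s actual models are far more elaborate.

```python
# Sketch of the cross-firm (reduced-form) approach, assuming a hypothetical
# panel with one row per bank-quarter. All names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

banks = pd.read_csv("bank_panel.csv")  # hypothetical input data

# One possible TBTF indicator: a simple $50 billion total-asset threshold.
banks["tbtf"] = (banks["total_assets"] > 50e9).astype(int)

# Regress funding cost on the TBTF flag plus size and risk controls,
# clustering standard errors by bank.
model = smf.ols(
    "funding_cost ~ tbtf + np.log(total_assets) + leverage + roa + npl_ratio",
    data=banks,
).fit(cov_type="cluster", cov_kwds={"groups": banks["bank_id"]})

# A significantly negative coefficient on `tbtf` is read as a funding-cost
# advantage for the largest firms.
print(model.params["tbtf"], model.pvalues["tbtf"])
```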
One major problem with this technique is that you have to control for important variables. Is TBTF a matter of the size of assets, the square of the size of assets, or just a $50 billion threshold? How do you control for risks of the firm? Given that all the information comes from comparisons across firms, the way you compare a TBTF firm with a medium-sized firm matters.
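Here is the same hypothetical exercise, continuing the made-up panel from the sketch above, run across a handful of alternative TBTF definitions and control sets. It is the specification problem in miniature: every combination produces its own estimate of the funding advantage.

```python
# Sketch of the specification problem: alternative TBTF definitions and
# control sets, each yielding its own estimate. Names are illustrative.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

banks = pd.read_csv("bank_panel.csv")  # same hypothetical panel as above

tbtf_terms = {
    "assets > $50bn": "I(total_assets > 50e9)",
    "log assets": "np.log(total_assets)",
    "log assets squared": "I(np.log(total_assets) ** 2)",
}
control_sets = ["leverage + roa", "leverage + roa + npl_ratio"]

rows = []
for (label, term), controls in itertools.product(tbtf_terms.items(), control_sets):
    fit = smf.ols(f"funding_cost ~ {term} + {controls}", data=banks).fit()
    # The TBTF term is listed right after the intercept in each model.
    rows.append({"tbtf_measure": label, "controls": controls,
                 "estimate": fit.params.iloc[1]})

print(pd.DataFrame(rows))
```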
This is why you can end up with the GAO estimating 42 different models: they wanted to try all their variables. But which models are really the best? The graph below summarizes their results: they found a major subsidy in the aftermath of the crisis (dots below the line reflect models showing a subsidy) that fell back to near zero later.
Second Method – Compare a Firm to Itself
Let’s do the exact opposite with the second approach. Instead of comparing across firms, let’s take a “structural” approach that looks at the specific structure of the bank itself and estimates how likely it is to default [3]. We then compare that estimate with the market’s own estimate of default risk, as priced in credit default swaps. If there’s a TBTF subsidy, our estimate of the price of a credit default swap will be higher than the actual price, since the market thinks a loss is less likely than the fundamentals suggest.
How do we do this? We look at the bank’s balance sheet and figure out how likely it is that the value of the firm will be less than the debt. We can even phrase it like an option, which means we can hand it to the physicists to put on their Black-Scholes goggles and find a way to price it [details at 4]. The IMF recently took a crack at using the second approach and comparing the estimate to actual CDS prices.
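Here is a minimal sketch of what such a structural, Merton-style calculation might look like, with made-up balance-sheet numbers and an assumed recovery rate; real implementations, including the IMF’s, are considerably more sophisticated.

```python
# Sketch of the structural approach with illustrative numbers: treat equity
# as a call option on the firm's assets, back out a risk-neutral default
# probability, and convert it into a model-implied credit spread to compare
# against an observed CDS spread.
import numpy as np
from scipy.stats import norm

V = 2_000e9      # market value of assets (illustrative)
D = 1_800e9      # face value of debt, the option's "strike" (illustrative)
sigma = 0.05     # asset volatility (assumed)
r = 0.02         # risk-free rate (assumed)
T = 5.0          # horizon in years, matching a 5-year CDS
recovery = 0.40  # assumed recovery rate on the debt

d1 = (np.log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)

# Risk-neutral probability that assets finish below the debt, i.e. default.
p_default = norm.cdf(-d2)

# Crude annualized spread implied by that expected credit loss.
model_spread = -np.log(1 - (1 - recovery) * p_default) / T

observed_cds_spread = 0.0030  # 30 basis points, illustrative
print(f"model-implied spread: {model_spread * 1e4:.0f} bps")
print(f"observed CDS spread:  {observed_cds_spread * 1e4:.0f} bps")
# If the model-implied spread sits above the observed one, the market is
# pricing in less credit risk than fundamentals suggest -- the gap this
# post reads as a TBTF subsidy.
```

The Black-Scholes machinery here is exactly the “option” framing spelled out in footnote [4].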
Here’s what they found, where a positive value means the predicted price is larger than the actual price:
Opposite Strengths, Opposite Weaknesses
These two approaches aren’t just the opposite of each other; they also have opposite weaknesses. Under the first approach it’s never quite clear whether you’re controlling for size and risk properly at any given moment; the structural model sidesteps those issues by looking only at the TBTF firm itself. But structural models need CDS prices, which are often illiquid, introducing numerous pricing problems, while the first approach can draw on the bond market, which is quite deep. And the structural model requires a lot of financial-engineering assumptions, where the statistical approach requires far fewer. Let’s take a second and chart that out:

                           Cross-firm comparison (GAO)    Structural model (IMF)
  Data                     Bond market (deep, liquid)     CDS prices (often illiquid)
  Size and risk controls   Hard to get right              Not needed (firm vs. itself)
  Modeling assumptions     Far fewer                      Heavy financial engineering
Note again that the two approaches are the complete opposite of each other in theory, data, and relative merits, yet they both tell the same story. There was a subsidy that was real in the aftermath of the crisis but has been coming down and is now close to zero. What should we take away from this?
First, the mission isn’t done, but we are on the right path. Higher capital requirements, liquidity requirements, living wills, restructuring, derivatives clearing, and more are paying off, removing much of the concern that markets believe bailouts are a permanent policy.
You’ll hear many stories about this subsidy, but they will get almost all of their value from the 2009–2010 period. For those on the right who argued that Dodd-Frank would lock in a permanent, GSE-style regime of implicit guarantees, that isn’t what happened. The only question is whether we will go further and fully eliminate the subsidy, not whether it will be a permanent feature.
Second, we should remember that this subsidy focus was always a distraction. If Lehman Brothers had collapsed with no chaos, we’d still have millions of foreclosures, a securitization and credit market designed to rip off unsuspecting consumers, and a system of enforcement that doesn’t hold people accountable. The subsidy is only one of the major problems we have to deal with.
In addition, a subsidy of zero doesn’t mean we can ignore the issue. These models can’t tell the difference between a successful and an unsuccessful resolution; they only tell us that the market expects creditors to take a loss if a firm fails. They don’t tell us whether bankruptcy is a workable option, or whether a resolution would be swift, certain, well-funded, and likely to create minimal chaos for the economy. Those are the bigger concerns, and they aren’t the same question at all.
Third, rolling back major parts of Dodd-Frank, particularly when it comes to TBTF policy, is a bad idea. These results are fragile; it’s easy for us to return to 2010. It would be a shame to remove the policies that are actually working well.
[1] It’s not exactly “reduced-form versus structural”, but if you want to learn more about these two methods (and to confirm I’m not making this up) there’s an extensive literature on it.
[2] In the jargon, defaults are thought of as exogenous, with some characteristics making a random default more likely. This will become more apparent in the second approach, when we model the default as endogenous to the structure of the firm.
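A toy contrast, with entirely made-up numbers, may help: under the first framing default is a random event whose odds merely shift with the firm’s characteristics; under the second it is triggered mechanically whenever the firm’s asset value falls below its debt.

```python
# Toy contrast between exogenous (reduced-form) and endogenous (structural)
# default, using made-up numbers.
import numpy as np

rng = np.random.default_rng(0)

# Exogenous: default arrives at random; characteristics such as leverage
# only shift the probability of that random event.
leverage = 0.92
p_default = 0.01 + 0.05 * leverage           # hypothetical hazard function
default_exogenous = rng.random() < p_default

# Endogenous: default is triggered by the firm's own structure, i.e. when
# simulated asset value ends up below the debt owed.
assets, debt, vol = 100.0, 92.0, 0.10
assets_next_year = assets * np.exp(rng.normal(-0.5 * vol**2, vol))
default_endogenous = assets_next_year < debt

print(default_exogenous, default_endogenous)
```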
[3] Full disclosure, I used to work at Moody’s KMV, a pioneer in structural models. I bleed structural modeling; it is the best.
[4] Equity is worth the firm’s assets minus its debt, or zero if the assets are worth less than the debt. This is the exact same payout as a call option: the equity of the firm is simply a call option on the firm’s assets, with the face value of the debt as the strike price, and as such it can be modeled and priced like an option.
(For those really wedded to the myth that shareholders “own” the firm, note that in the world of Black-Scholes it’s just as true to say that debtholders “own” the firm, except they sell off a derivative on their ownership to someone else.)
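For readers who want footnote [4] in symbols, here is the standard Black-Scholes-Merton sketch, where V is the value of the firm’s assets, D the face value of its debt, T the debt’s maturity, sigma_V the asset volatility, and r the risk-free rate:

```latex
% Equity at the debt's maturity is a call payoff on the assets:
E_T = \max(V_T - D,\ 0)

% Priced as in Black-Scholes, today's equity value is
E_0 = V_0 \, N(d_1) - D e^{-rT} N(d_2),
\qquad
d_{1,2} = \frac{\ln(V_0 / D) + \left(r \pm \tfrac{1}{2}\sigma_V^2\right) T}{\sigma_V \sqrt{T}}

% The debt is worth the whole firm minus that call option, which is the sense
% in which debtholders "own" the firm but have sold off a derivative on it:
\text{Debt}_0 = V_0 - E_0
```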