Mike Konczal in the News
This One Weird Trick Makes Economies More Fair!, The Nation
Democrats Aim to Limit Corporate Windfall From Trump Tax Cut, The New York Times
We’re All ‘Socialists’ Now, New York Magazine
The recent bill introduced by Senate Majority Leader Mitch McConnell tries to stem the current economic crisis. In a previous document, “A Forward-Thinking Response to the Coronavirus Recession,” we outlined five elements that any response needs to include: (1) Help people directly by providing cash, (2) support workers, (3) help states and municipalities, (4) prevent
The coronavirus outbreak has led to a collapsing economy. The economic situation is deteriorating so fast that people are struggling in real time to understand fundamental questions and policy objectives. “A Forward-Thinking Policy Response to the Coronavirus Recession” is an overview of where things stand. We focus on the nature of the economic crisis, and
Over the last five decades, an empirical revolution in economics has undermined many of the assumptions of “neoliberalism,” the reigning approach to economic policy. Many of the guiding assumptions underlying neoliberal policymaking no longer speak to what is going on in the economy or our country more broadly. In “The Empirical Failures of Neoliberalism,” Roosevelt
Vox published an excellent discussion with economist Brad DeLong, where he makes the argument for why left-leaning neoliberals (who “use market means to social democratic ends when they are more effective, and they often are”) should be comfortable with the “baton rightly pass[ing] to our colleagues on our left. We are still here, but it
Despite record corporate profits and high stock prices, most Americans have not shared in the post-recession recovery. In a new Roosevelt Institute report, Fellows Mike Konczal and J.W. Mason discuss how the Great Recession changed the way the Federal Reserve (the Fed) uses macroeconomic monetary policy—a set of rules influencing the supply of credit and
A Deal Worth Opposing: Why Senator Crapo’s Proposal to Repeal or Undermine Key Parts of Dodd-Frank Is a Threat to Consumers and Financial Stability
The agreement reached between Senate Banking Committee Chairman Mike Crapo and ten Senate Democrats is billed as a necessary technical fix to Dodd-Frank and regulatory relief for community banks. But this proposal would cause more harm than many—including some allies—currently believe. It would increase risk at mid-sized banks, threaten the stability of the financial industry,
In response to the 2007-08 Financial Crisis that cost the United States more than $20 trillion, Congress passed the Dodd-Frank Wall Street Reform and Consumer Protection Act on July 21, 2010 with the aim of overhauling the dysfunctional regulatory regime. In the years since, the wide-reaching reforms mandated by Dodd-Frank have provided key protections to
We’re going to hear a lot more from Republicans about how a single, simple 10 percent leverage requirement can replace much of what Dodd-Frank does. This idea is central to the Republican CHOICE Act, and it was also reiterated recently in FDIC Vice-Chairman Thomas Hoenig’s plan for regulatory relief. Hoenig’s plan calls for Congress to remove
It’s impossible to look at any single financial regulation without understanding the problem it is trying to solve and how it would hang together with the rest of the financial regulatory regime. This is why cost-benefit analysis of financial rules isn’t very useful, as any rule depends on all the other rules. It also means
We—Marshall Steinbaum, who has recently joined the Roosevelt Institute as a visiting fellow, and Mike Konczal—have a new working paper out titled Declining Entrepreneurship, Labor Mobility, and Business Dynamism: A Demand-side Approach. We hope you check it out! We think it adds some important evidence on an unfolding debate. Here is a great write-up by Anna Louie Sussman
Academics and policymakers have recently focused on a worsening economic phenomenon commonly referred to as the decline in “business dynamism,” that is, the declining rate at which new businesses are formed and the rate at which they grow. This decline in dynamism and entrepreneurship accompanies a decline in overall labor market mobility, including quits and
Donald Trump has received considerable positive attention for his plan to raise taxes on investment firms by ending the much-maligned “carried interest” loophole. It’s one of the clearest things Trump himself has said about his tax plan, with statements like “I want to do something with the Wall Street guys because some of these guys
Today the Roosevelt Institute is launching a new report, Untamed: How to Check Corporate, Financial, and Monopoly Power (pdf), on which I’m a co-editor. It’s launching alongside another big Roosevelt report, Rewrite the Racial Rules: Building an Inclusive American Economy (pdf). You can live stream the launch event here beginning at noon. There’s a lot of fun stuff in Untamed, including a
Props to the House Republicans for releasing their policy platforms for 2017 as if everything is just fine, in the hopes that they’ll influence Donald Trump and the general election. A speech by Jeb Hensarling, the chair of the House Financial Services Committee, today previewed the Financial CHOICE Act, their replacement for Dodd-Frank. (The H in CHOICE
Untamed: How to Check Corporate, Financial, and Monopoly Power outlines a policy agenda designed to rewrite the rules that shape the corporate and financial sectors and improve implementation and enforcement of existing regulations. This report, edited by Nell Abernathy, Mike Konczal, and Kathryn Milani, builds on recent analysis of economic inequality and on our 2015
It’s been a roller coaster few weeks for financial reform. First, we finally saw the decision where Judge Rosemary Collyer of the D.C. Federal District Court ruled that regulators couldn’t declare the insurance giant Metlife to be a major financial risk, which would have required stricter regulations. Then yesterday the Federal Reserve and the FDIC
This is a guest post by Roosevelt Institute fellow J.W. Mason, cross-posted from his essential economics blog Slackwire. Last week, the Washington Post ran an article by Jim Tankersley on what would happen if Trump got his way and the US imposed steep tariffs on goods from Mexico and China. I ended up as the
Bernie Sanders gave some fairly normal answers on financial reform to the New York Daily News editorial board. Someone sent it to me, and as I read it I thought, “Yes, these are answers I’d expect for how Sanders approaches financial reform.” You wouldn’t know that from the coverage of it, which has argued that the
It’s worth taking a break from watching the implosion of the Republican Party to pay attention to intra-Democratic fighting. Liberal economics has had a pretty great run in the 2016 primary, and I’m optimistic about its chances going forward. I’m even more optimistic after reading this thrown-together op-ed from Jon Cowan, president of the centrist
Marco Rubio and Jeb Bush have been fighting for months in the GOP primaries to be the candidate representing Wall Street, hedge funds and the financial sector. Headlines like “Bush and Rubio race for Wall Street cash” dominated the fall coverage of their campaign, right next to headlines like “Donald Trump terrifies Wall Street” with
There are three points to keep in mind about policy debates: Policy specifics are only one of many considerations voters have when evaluating a candidate, and they’re unlikely to mobilize the base in a polarized age; ideological constraints often determine policy positions; and wonk policy “discussions” can easily become a barrier to exclude people and ideas from the conversation. This has become relevant in the Clinton versus Sanders race, which, as Matt Yglesias notes, is an ideological battle like “Ronald Reagan’s battle with Gerald Ford,” one in which wonks have less of a role to play.
That said, I will defend the wonk. I think the wonk analysis is an essential part of the ideological work currently being done and is capable of advancing the progressive project in crucial ways. Working the numbers and the specifics creates clarity, and it forces people to put their cards on the table.
In this specific moment, the work of the wonk forces one to justify constraints, lets you know if you are looking in the correct places, gives you a sense of whether the scope and scale of your changes is sufficient, and lets you know the obstacles and enemies you’ll face. And it can be fun! Or fun enough.
Let’s go through each of these points with specific examples. We’ll begin with the letter from former CEA chairs attacking economist Gerald Friedman’s estimates of the impact of Sanders’s plan, and then look at some cases where wonk analysis would help the Sanders campaign.
I have a new Score column at The Nation: Bernie Sanders should just adopt Hillary Clinton’s plan, then go further than it. Prioritizing these issues doesn’t distract from his core message of focusing on the largest players. If anything, it completes the left agenda on finance, since any such agenda needs to look at the activities of finance itself, as opposed to just the institutions, as well as the effects of finance on whom the corporation works for. Record stockholder payouts while investment funding starves are just as much a part of the problem of finance as Too Big To Fail. Clinton makes a good first step; Sanders could take the important second and third steps if he wanted.
Intro: “In advance of the Iowa primary, Hillary Clinton and Bernie Sanders have duked it out over who would tackle Wall Street best. Clinton’s reform package aims wide, extending scrutiny from the banks to smaller players who played an outsized role in the financial crisis. Sanders—who, unlike Clinton, has rejected Wall Street money—actually takes a narrower approach that favors a popular but insufficient strategy to “break up the banks.” If Sanders wants to challenge modern finance, he should incorporate and surpass Clinton’s plan.”
Is Ted Cruz right about the Great Recession and the Federal Reserve? From a November debate, Cruz argued that “in the third quarter of 2008, the Fed tightened the money and crashed those asset prices, which caused a cascading collapse.”
Fleshing that argument out in the New York Times are David Beckworth and Ramesh Ponnuru, backing and expanding Cruz’s theory that “the Federal Reserve caused the crisis by tightening monetary policy in 2008.”
But wait, didn’t the Federal Reserve lower rates during that time? Their argument is that the Federal Reserve didn’t lower them fast enough. “Through acts and omissions, the Fed kept interest rates and expected interest rates higher than appropriate, depressing the economy.” This is passive tightening, where the Federal Reserve didn’t respond throughout the summer of 2008 to the gathering storm. Without it, “[w]e could have had a decline in housing without a Great Recession.”
Beckworth has discussed this at length on his blog. If it becomes more central to the economic debate in 2016, there are four things to keep in mind.
Checking the Internet, I’m learning from David Dayen at The Fiscal Times that I’m part of “Clinton and her minions,” “trying on contradictory criticisms to make a political point” to deliver “a mortal wound to the cause of [Senator Elizabeth] Warren’s life.” Zach Carter, Jason Linkins, and Shahien Nasiripour at The Huffington Post note that I’m part of a crew “peddling a myth about how the financial crisis happened” and that it’s a “sad new world when respected liberals start echoing the arguments” of financial lobbyists.
Two weeks before a contested primary is probably not the time for subtlety and details, but I want to contest the arguments in these pieces. Though Dayen tries to catch me in a contradiction, I’ve long thought that the project of combating shadow banking was to extend banking regulations to financial activities rather than silo them. So there’s no inconsistency there. Though I’m supportive, I also think that “breaking up the banks” is being overplayed as a financial crisis issue, doing more work as a problem and a solution than its proponents say it does. It’s also a useful check on how my mind has and hasn’t changed since 2010.
Human capital contracts continue to be all the rage in higher education funding. Beth Akers of Brookings writes that they can tackle “the growing risk associated with investing in higher education.” They are also playing a role in the presidential debate over higher education. Greg Mankiw, discussing higher education costs, is excited that Marco Rubio “wants to establish a legal framework in which private investors help pay for a student’s education in exchange for a share of the student’s earnings after college. In essence, the student would finance college less with debt and more with equity.”
One thing never mentioned in these discussions is the way these kinds of financial instruments would exacerbate inequality. As we’ll see, even a preliminary financial model of these instruments shows that, when it comes to the percentage of income, women would pay 8–22 percent more relative to men, and a poor woman of color would easily pay 40 percent more relative to a rich white male, in order to attend college.
As usual, it’s tough to model an imaginary market that won’t exist at scale without extensive government intervention because of profound adverse selection problems. But let’s give it our financial engineering best. I’m following the format of “income-share agreements” (ISA) funded by private, profit-seeking markets, where tuition is paid upfront in exchange for a percentage of future earnings.
One of the most important parts of private ISAs to their advocates is that the percentage of future earnings you have to pay isn’t fixed, but instead is set depending on your school and predicted earnings. Many proponents say that this will drive people to better schools with higher graduation rates as well as in-demand majors. Why? Because, since students will end up making more money this way, the private ISA lender can charge them a lower rate.
It’s not clear if the consequences would be what proponents expect. A quick model I ran shows that there’s no reason to believe it would lead students to schools with higher graduation rates, because at reasonably high discount rates this instrument would prefer the smaller payments upfront that one would get from a dropout. More generally, it’s tough to model small changes in future payments from things you could discern at the age of 18. But there are three things you know at 18 that are correlated with future income: gender, race, and parental income.
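To make the discount-rate point concrete, here is a minimal present-value sketch. All the numbers (a 5 percent income share, the incomes, the payment horizons, and the discount rates) are hypothetical assumptions of my own for illustration, not figures from the model discussed in this post:

```python
# Why a profit-seeking ISA lender at a high discount rate might prefer a
# dropout's payment stream: the dropout pays a share of a lower income,
# but starts paying immediately; the graduate pays more per year, but
# only after four years of school. All parameters here are hypothetical.

def present_value(payments, discount_rate):
    """Discount a stream of annual payments back to year 0."""
    return sum(p / (1 + discount_rate) ** t
               for t, p in enumerate(payments, start=1))

income_share = 0.05
graduate = [0.0] * 4 + [income_share * 60_000] * 10  # pays years 5-14
dropout = [income_share * 35_000] * 14               # pays years 1-14

for r in (0.03, 0.15):
    print(f"discount rate {r:.0%}: "
          f"graduate PV = {present_value(graduate, r):,.0f}, "
          f"dropout PV = {present_value(dropout, r):,.0f}")
```

Under these made-up numbers, the graduate’s stream is worth more at a low discount rate, but at a reasonably high discount rate the dropout’s earlier, smaller payments come out ahead, which is the perverse incentive described above.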
Imagine there was no financial crisis. Lehman Brothers went into bankruptcy and the only sound was crickets chirping. No panic, no bailouts, no TARP. There’d be nothing to be mad about, right?
Actually there’s everything to be mad about. We’d still have six million foreclosures destroying communities and people’s lives. The Great Recession would have happened almost exactly as it did, throwing millions of people out of work and scarring their productive lives. And there still would have been a wave of individuals who profited enormously through bad mortgage instruments, leaving everyone else on the hook.
One of the many things I like about the new movie The Big Short is that it doesn’t focus on the financial crisis, which normally dominates all the stories about what happened. Instead it focuses on how the housing bubble was created and sustained, while previewing the toll it would take on the people whose homes were in those mortgage bonds.
Most of the reviews of The Big Short from the finance and economics community have a “yes, but” quality: they like the movie but then go on at length about how it doesn’t cover their particular financial bugaboos. But we are now getting the counter-narratives, arguing that the film is entirely wrong in its message. First came the crazytown bananapants stuff from the American Enterprise Institute, arguing that the whole film is a lie.
But now we have Michael Grunwald at Politico, arguing that the movie “whiffs on the big message of the crisis.” The crisis is just a story about a general housing mania, which all the attention the movie pays to the complicated mess of mortgage-backed securities and collateralized debt obligations (MBS, CDOs) needlessly complicates. The real problem was short-term leverage and the panics that ensued in the financial crisis. However those problems were handled well enough during the bailouts, and Dodd-Frank has made significant strides in fixing the problems the movie brings up.
I think these are all wrong, full-stop. And they are all wrong in a way that limits our ability to really understand the crisis, and where we are now. (Mild spoilers ahead.)
Liberals have spent the last eight years learning the limitations of presidential rhetoric, while conservatives have romanticized its possibilities. Beyond getting Obama to say “radical Islam” as a foreign policy objective, conservative anti-poverty programs have come to focus on cultural campaigns to promote marriage.
Take Jeb Bush’s new anti-poverty plan. Beyond block-granting anti-poverty programs in a way designed to weaken them, Bush will “promote marriage as the most reliable route to family stability and resources. As president, he will join with other political leaders, educators and civic leaders in being clear and direct about how hard it is to raise children without a committed co-parent.” This is a new focus for conservatives: professionals need to lead in promoting marriage. Since professionals themselves get married at higher rates, why shouldn’t they preach what they practice? 
But there’s an intellectual contradiction here that makes this whole project unworkable. According to this, professionals need to advocate for marriage to convince poor people to get married. Yet if you read deeper into the conservative literature, you find that one of their main diagnoses of why poor people don’t get married is because of the dominance of professional views of marriage over society. As we’ll see, they need this theory because the actual evidence shows that poor people already have a very positive view of being married. What’s stopping them from actually marrying is this professional “capstone” model of marriage, one appropriate for people for whom the economy works, but (supposedly) devastating for everyone else. 
How you diagnose a policy problem is often just as important as the solutions you propose. This is certainly true when it comes to financial reform, as competing theories of what is wrong with the financial sector tend to be more important than the technical solutions proposed. This has become even more relevant with a recent speech by Bernie Sanders that laid out his financial reform agenda in advance of the Democratic primaries, contrasting his approach even more strongly with Hillary Clinton’s.
As everyone awaits the Federal Reserve’s decision today, I recommend taking a look at a new paper by Haverford College economics professor and Roosevelt Institute Visiting Fellow Carola Binder titled Rewriting the Rules of the Federal Reserve for Broad and Stable Growth.
Bernie Sanders gave a major speech yesterday outlining his definition of democratic socialism and how it relates to both his candidacy and American history. In doing so, he also described an expansive vision of economic security and fairness. The speech is important because it shows some of the strengths and weaknesses of left-liberalism at this moment, both through what it describes and, more interesting, what it doesn’t. It also clarifies how he can better contrast with Hillary Clinton on policy.
Here are eight random thoughts about the speech.
Last night’s GOP debate was the first to have an in-depth financial reform discussion. Unfortunately, the Republicans seemed like they were being introduced to these issues for the first time rather than reflecting a deep understanding of debates that have been ongoing for eight years. (There was a weird detour into whether or not deposit insurance exists, which I’ll skip, and the less said about their embrace of the gold standard, the better.)
But it’s worthwhile to dig in now, as these talking points will be with us through the rest of the campaign. There are six statements I want to examine, the first four of which I believe to be outright wrong. This misdiagnosis causes Republicans to seek out the wrong solutions in the wrong places. The last two statements are interesting to debate. (Transcript via Washington Post.)
Today the Roosevelt Institute’s Financialization Project is releasing two new papers on short-termism:
Understanding Short-Termism: Questions and Consequences (pdf) by J.W. Mason.
Ending Short-Termism: An Investment Agenda for Growth (pdf) by Mike Konczal, J.W. Mason, and Amanda Page-Hoongrajok.
The first answers 12 common questions and complaints that are brought up when it comes to short-termism. It’s especially clear in showing that investment in the most recent recovery is the weakest since the Great Depression, and that there’s no way to understand buybacks and dividends as representing funding for new businesses.
The second is our policy agenda, which goes beyond simply tackling short-termism by itself. Instead, it focuses on rebalancing power overall, limiting bad actors but also empowering good ones. This trend can only be combated by emboldening countervailing power in the marketplace while also emphasizing a new role for government.
I hope you check them out! Below is the Table of Contents for the policy agenda items and the questions about short-termism we answer in the two documents.
Five years after the official end of the recession, economic activity in the U.S. remains below potential. One important reason is the slow growth in business investment, which remains weak, especially compared to previous recoveries. To an increasing number of observers, the weakness in investment appears related to the rise in what observers are calling
With so much attention on short-termism these days, I’m excited to announce the Roosevelt Institute’s Financialization Project is launching two papers this Friday, November 6th, in Washington DC.
The first paper is from J.W. Mason, and it answers the three most common criticisms on this topic: namely, that it isn’t happening; that it is happening but is great for growth; and that it may not be great but is appropriate and even necessary. J.W. pushes back on all three. (Sneak previews of his responses to the first and second issues are already floating out there.)
The second is from me, J.W. Mason, and Amanda Page-Hoongrajok, and it’s a ten-point policy agenda to combat short-termism. The points are broad, focusing on everything from pension guarantees to regulation of stock exchanges, but the story is simple: the agenda needs to focus on building countervailing power against the strength of short-term interest holders.
I’m looking forward to sharing these with you, and I hope they move the debate forward on this crucial but still understudied and underdeveloped area. To RSVP, please contact Eric Bernstein.
On Friday, November 6, please join Roosevelt Institute Fellows Mike Konczal and J.W. Mason, with keynote speaker Senator Tammy Baldwin and moderator Jim Tankersley of The Washington Post, at the National Press Club as we release two papers on corporate short-termism. Opening remarks by Senator Baldwin will be followed by a panel with Cornell University’s Lynn Stout and the AFL-CIO’s Heather Slavkin Corzo.
An Agenda to End Short-Termism
Location: National Press Club
529 14th Street Northwest, Washington, D.C.
Breakfast will be available at 8:30 a.m.,
with the program to begin promptly at 8:50 a.m.
Rising shareholder payouts and declining investment have touched off widespread concern and debate about the corporate focus on short-term profits over long-term growth. In this set of papers, Mike Konczal proposes a broad policy agenda for curbing short-termism and J.W. Mason presents new research showing the depth of the problem and addressing common critiques.
To RSVP, please contact Eric Bernstein.
The discussion over a Too Big To Fail (TBTF) subsidy, where the largest banks are able to borrow more cheaply as the result of potential future bailouts, is back in the news. Paul Krugman referenced it with a link to my review of two studies arguing the subsidy has largely declined since the crisis. Dean Baker has responded with critical thoughts on the studies.
My point isn’t to say that the subsidy is completely over. Nor, as I’ll explain in a bit, is it to say that TBTF is over. Instead, understanding this decline lets us know we should push forward with what we are doing. It debunks conservative narratives about Dodd-Frank being fundamentally a protective permanent bailout for the largest firms that we should scrap, and provides evidence against repealing it. And ideally it gets us to understand this subsidy as just one part of the more general TBTF problem that needs to be solved.
I’ll first respond to Dean Baker. Then I’ll map out four different ways of understanding what we mean by a TBTF subsidy, and what is and isn’t fixed, because that might clarify other responses I’ve been getting.
A remarkable thing happened: one of the largest banks slimmed down just a bit because of Dodd-Frank’s capital requirements. It’s another piece of evidence that the core parts are working, and if scaled up could make a dramatic structural change in the financial system. In July, the Federal Reserve finalized its rule on the special
Glass-Steagall (GS) has become a central part of the debate over financial reform in the 2016 election. Several commentators have portrayed it as a central objective of financial reform, verging on a litmus test for those who are serious about the topic. My opinion is that reinstating GS isn’t an important goal for financial reform.
After Hillary Clinton released her financial reform agenda in early October, Roosevelt’s Mike Konczal compared her plan and Bernie Sanders’ plan against the “Elizabeth Warren test” for financial reform. The Elizabeth Warren test calls for five key pieces of financial reform: defending Dodd-Frank, increasing enforcement, reining in the shadow banking industry, a financial transaction tax, and breaking
I have a new piece at Boston Review, Hail to the Pencil Pusher: American Bureaucracy’s Long and Useful History. It’s a review of four recent legal histories about the long rise of the administrative state and its place in securing and expanding liberty. Many of these histories, while academic, address the arguments that have become
There’s been a lot of new data and analysis of student loans and colleges in the past week, including a new Brookings paper and the launch of the College Scorecard by the Department of Education. And with so much data coming out, it’s becoming more important that we keep our questions open-ended. The Brookings Report,
One last note, following up on previous posts about human capital contracts (ISAs) and higher education. The first is about a NY Fed report that I believe argues ISAs would increase education costs. The second is that the features of ISAs that are meant to mitigate higher education costs aren’t likely to do so. I’ve received
Andrew Kelly and Kevin James, higher education researchers at the American Enterprise Institute and prominent defenders of human capital contracts (ISAs), take issue with my earlier post on that topic, arguing that I “miss the mark.” They argue that ISAs, or selling off a future percentage of your income to pay for college now, would
Sometimes you hear something that sounds so much like common sense that you end up missing how it overturns everything you were actually thinking, and points in a far more interesting and disturbing direction. That’s how I’m feeling about the coverage of a recent paper on student loans and college tuition coming out of the
Following the well-received Disgorge The Cash, this is really the foundational paper that outlines a working definition of financialization, some of the leading concerns, worries, and research topics in each area, and a plan for future research and action. Since this is what we are building from, we’d love feedback.
Prior to this, I couldn’t find a definition of financialization broad enough to account for several different trends and accessible enough for a general, nonacademic audience. So we set out to create our own solid definition of financialization that can serve as the foundation for future research and policy. That definition includes four core elements: savings, power, wealth, and society. Put another way, financialization is the growth of the financial sector, its increased power over the real economy, the explosion in the power of wealth, and the reduction of all of society to the realm of finance.
Each of these four elements is essential, and together they tell a story about the way the economy has worked, and how it hasn’t, over the past 35 years. This enables us to understand the daunting challenges involved in reforming the financial sector, document the influence of finance over society and the economy as a whole, and clarify how finance has compounded inequality and insecurity while creating an economy that works for fewer people.
Savings: The financial sector is responsible for taking our savings and putting them toward economically productive uses. However, this sector has grown larger, more profitable, and less efficient over the past 35 years. Its goal of providing needed capital to citizens and businesses has been forgotten amid an explosion of toxic mortgage deals and the predatory pursuit of excessive fees. Beyond wasting financial resources, the sector also draws talent and energy away from more productive fields. These changes constitute the first part of our definition of financialization.
Power: Perhaps more importantly, financialization is also about the increasing control and power of finance over our productive economy and traditional businesses. The recent intellectual, ideological, and legal revolutions that have pushed CEOs to prioritize the transfer of cash to shareholders over regular, important investment in productive expansion need to be understood as part of the expansion of finance.
These historically high payouts drain resources away from productive investment. But beyond investment, there are broader worries about firms that are too dominated by the short-term interests of shareholders. These dynamics increase inequality and have a negative impact on innovation. Firms only interested in shareholder returns may be less inclined to take on the long-term, risky investment in innovation that is crucial to growth. This has spillover effects on growth and wages that can create serious long-term problems for our economy. This also makes full employment more difficult to achieve, as the delinking of corporate investment from financing has posed a serious challenge for monetary policy.
Wealth: Wealth inequality has increased dramatically in the past 35 years, and financialization includes the ways in which our laws and regulations have been overhauled to protect and expand the interests of those earning income from their wealth at the expense of everyone else. Together, these factors dramatically redistribute power and wealth upward. They also put the less wealthy at a significant disadvantage.
More important than simply creating and expanding wealth claims, policy has prioritized wealth claims over competing claims on the economy, from labor to debtors to the public. This isn’t just about increasing the power of wealth; it’s about rewriting the rules of the economy to decrease the power of everyone else.
Society: Finally, following the business professor Gerald Davis, we focus on how financialization has brought about a “portfolio society,” one in which “entire categories of social life have been securitized, turned into a kind of capital” or an investment to be managed. We now view our education and labor as “human capital,” and we imagine every person as a little corporation set to manage his or her own investments. In this view, public functions and responsibilities are mere services that should be run for profit or privatized, or both.
This way of thinking results in a radical reworking of society. Social insurance once provided across society is now deemphasized in favor of individual market solutions; for example, students take on an ever-increasing amount of debt to educate themselves. Public functions are increasingly privatized and paid for through fees, creating potential rent-seeking enterprises and further redistributing income and wealth upward. This inequality spiral saps our democracy and our ability to collectively address the nation’s greatest problems.
We have a lot of future work coming from this set of definitions, including a policy agenda and FAQ on short-termism in the near future. I hope you check this out!
In honor of Dodd-Frank’s fifth birthday party last week, I wrote a 4,000-word summary of the major accomplishments of the financial reform act. It includes what is working as well as what is stalled, what needs to be amplified, and what isn’t yet tackled. There’s a focus on the CFPB, derivatives, capital, and ending Too Big To Fail. It’s aimed both at readers with little background and at people with some familiarity, so I hope you check it out and share.
On the 5th Anniversary of the Dodd-Frank financial reform law, Roosevelt Fellow Mike Konczal wrote an explainer in Vox looking at the three core pillars of the law and where additional reforms are needed. Dodd-Frank fixes the broken consumer finance system by creating the Consumer Financial Protection Bureau. It addresses out of control derivative trading by
Our Rewriting the Rules report is in the news as part of a debate over the more liberal push in economic thinking. Matt Yglesias argues that this report and the new agenda “reverse the neoliberal formula.” He coins the term “new paleoliberalism” to describe it. David Brooks adds to this, arguing that said paleoliberalism displays “a naïve faith” in government. I want to respond to these three points in turn.
First, Yglesias says the new agenda breaks with the consensus. The old consensus, to him, was that “[t]he main way the government can impact the pre-tax distribution of income is by providing high-quality education,” and if that fails, “progressive taxes should fund redistributive programs to produce a better outcome.”
I think focusing on a new consensus is correct, but I’d think about it a different way. For us, the old consensus was built around two economic folk theories: that as an economy matures, inequality will decrease and all incomes will go up; and that any effort to combat inequality has a serious negative impact on growth. (It’s not clear whether Kuznets or Okun, respectively, would have agreed with the extreme versions of their arguments that became this consensus.)
The new liberal economic consensus has three elements. To start, you can’t really distinguish between pre- and post-tax income the way these old arguments do. The market structures that determine final income, including taxes, are also a serious determinant of market income. This is pretty obvious if you say it in English: The rules of the economy matter. But this gets lost in the consistent idealization of abstract, perfect markets.
Also, in a world without perfect markets, efforts to fight inequality have fewer strict tradeoffs than people imagined, especially at the margins. We certainly see this internationally, with a wide variety of efforts to change the distribution of income and no obvious impact on growth. As a result, as economies grow, inequality can do any number of things; it is a choice, shaped by how the rules of the market are written.
The second question is whether the new liberal consensus is “paleo.” Inasmuch as the term implies nostalgia, recycled theories, and something bordering on revanchism, I like to think it is not.
The focus is very much a reaction to the facts on the ground, including a financial system that isn’t working to channel good investments, new forms of monopoly power, lack of institutions that support the working lives of women, a criminal justice system that has become too punitive, full employment in a period of weak demand, and so on.
The tools remain those that Franklin Roosevelt formalized: a mixed economy, a regulatory state, and social insurance as the bedrock of a thriving economy. Those are the right tools to build on. But how those tools are deployed changes with the times.
There is a strain of liberal thinking that imagines we can wish the labor movement of the 1940s or the 1890s back into existence. Our report has a detailed labor section that I think is really important. But it doesn’t simply imagine we can recreate an economy that no longer exists. Instead, it builds from where we are now.
As a third point, David Brooks, talking about Clinton but mentioning the same liberal economic consensus as Yglesias, asks if we have too much “unchastened faith in the power of government,” a faith that is “epistemologically naïve.”
What strikes me about this argument is that the Republicans have no less faith in the power of government. They have faith that the government can privatize social insurance in a way that won’t involve weaker security and higher costs. They have faith that if the government gives employers wage subsidies for poorer workers, employers won’t simply pocket them in wage bargaining. They have faith, against evidence, that the government having no taxes on capital will cause a boom in private investment. They have faith that the government cutting taxes will more than make up the lost revenue. Their faith leads them to conflate building a robust civil society and economic security with laissez-faire economics.
You could say that this is a faith in “the market.” Yet rules and institutions will always shape markets; the nature of rules is what determines what the economy will look like. The transfer of power to employers and owners isn’t “less government” in any real sense of the term. Structuring markets to give employers and owners more power based on a faith that this will usher in more prosperity is not just naïve; the past few decades have shown it to be a failure.
Derek Thompson has a 10,000-word cover story for The Atlantic, “A World Without Work,” about the possibilities of “post-work” in an economy where technology and capital have largely displaced labor. Though Thompson is careful to argue that this isn’t certain, as the “signs so far are murky and suggestive,” he takes the opportunity to describe how a post-work future might look.
There’s been a consistent trend of these stories going back decades, with a huge wave of them coming after the Great Recession. Thompson’s piece is likely to be the best of the bunch. It’s empathetic, well reported, and imaginative. I also hope it’s the last of these end-of-work stories for the time being.
At this point, the preponderance of stories about work ending is itself doing a certain kind of labor, one that distracts us and leads us away from questions we need to answer. These stories, beyond being untethered to the current economy, distract from current problems in the workforce, push laborers to identify with capitalists while ignoring deeper transitional matters, and fail to confront the radical questions of ownership such a world would raise.
Before we begin, I think it’s important to note how unlikely this scenario remains. We can imagine the Atlantic of the 1850s running a “The Post-Agriculture, Post-Work World” cover story, correctly predicting farming would go from 70 percent of the workforce to 20 percent over the next 100 years, yet incorrectly predicting this would end work. We don’t think of what happened afterward as “post-work.” The economy managed to continue on, finding new work and workers in the process.
There are other minor problems. Globalization and technological advancement are treated as the same thing, when they are not. There’s also a slippage common in the critical discussion of these articles (you can see it in this tweet from Thompson) between the argument that technology has weakened wages and excluded some workers in recent decades and the argument about the long-run trajectory of technology itself. These are two distinct stories: the first is as much about institutions as about actual technology, and evidence for the first certainly doesn’t prove the second.
We’ll Still Be Working
But what is the impact of these stories? In the short term, the most important is that they allow us to dream about a world where the current problems of labor don’t exist, because they’ve been magically solved. This is a problem, because the conditions and compensation of work are some of our biggest challenges. In these future scenarios, there’s no need to organize, seek full employment, or otherwise balance the relationship between labor and capital, because the former doesn’t exist anymore.
This is especially a problem when it leaves the “what if” fiction writings of op-eds, or provocative calls to reexamine the nature of work in our daily lives, and melds into organizational politics. I certainly see a “why does this matter, the robots are coming” mentality among the type of liberal infrastructure groups that are meant to mobilize resources and planning to build a more just economy. The more this comforting fiction takes hold, the more problematic it becomes and the easier it is for liberals to become resigned to low wages.
Because even if these scenarios pan out, work is around for a while. Let’s be aggressive with a scenario here: Let’s say the need for hours worked in the economy caps right now. This is it; this is the most we’ll ever work in the United States. (It won’t be.) In addition, the amount of hours worked decreases rapidly by 4 percent a year so that it is cut to around 25 percent of the current total in 34 years. (This won’t happen.)
Back of the envelope, during this time period people in the United States will work a total of around 2 billion work years. Or roughly 10,000 times as long as human beings have existed. What kinds of lives and experiences will those workers have?
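The back-of-the-envelope math above can be checked with a quick script. The roughly 150 million figure for current US workers and the 200,000-year span of human existence are round assumed numbers, not from the original post; the point is the order of magnitude.

```python
# Back-of-the-envelope: total work-years in the US if hours worked
# peaked today and then fell 4 percent per year for 34 years.
# The ~150 million workers figure is an assumed round number for
# the US labor force, not a figure from the post itself.

WORKERS_NOW = 150_000_000
DECLINE = 0.04
YEARS = 34

remaining = 1.0        # share of current work still being done
total_work_years = 0.0
for _ in range(YEARS):
    total_work_years += WORKERS_NOW * remaining
    remaining *= 1 - DECLINE

print(f"share of work left after {YEARS} years: {remaining:.2f}")
print(f"total work-years: {total_work_years / 1e9:.1f} billion")

# Compare with roughly 200,000 years of human existence:
ratio = total_work_years / 200_000
print(f"~{ratio:,.0f} times as long as humans have existed")
```

With these assumed inputs, the work remaining after 34 years lands near 25 percent of today’s total, and the cumulative figure lands in the low billions of work-years, consistent with the rough numbers in the text.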
Worker power matters, ironically, because it’s difficult to imagine the productivity growth necessary to get to this world without some sense that labor is strong. If wages are stagnant or even falling, what incentive is there to build the robots to replace those workers? Nothing is certain here, but you can see periods where low unemployment is correlated with faster productivity gains. The best way forward to a post-work atmosphere will probably be to embrace labor, not hope it goes away.
How Did We Get There?
Another major problem of this popular genre is that it immediately places us at the end of the story, with no explanation of the transition. Work has already disappeared, it’s over, so the only question that remains is how we can envision our lives in the new world. This has two major consequences.
First, by compressing this timeline and making it seem like only capital will be around after a short period, it preemptively identifies the interest of workers with the interests of capital and owners. If post-work is right around the corner, people won’t have any labor (or human capital, if you must) to allow them to survive, so it’s essential to turn them into miniature capitalists immediately. That’s why it’s not abnormal to see descriptions of post-work immediately call for the repeal of Sarbanes-Oxley or the privatization of Social Security.
Secondly, this story also doesn’t explain the transition of labor among workers as it disappears. As Seth Ackerman notes, decreases in the amount of work done can result either from some people leaving the labor force (extensive margin) or from decreasing the amount of work all people do (intensive margin). In other words, do we want some people to leave the workforce entirely, or for us all to work less overall? These are two different projects, with different assumptions and actions necessary to advance them. Resolving these questions would be the fundamental problem of an actual decline in labor force participation, but they tend to be abstracted away in these discussions.
Projecting the Past Forward
Going further, the idea that a post-work economy would involve simply choosing between a handful of quasi-utopias strikes me as completely naive. In Thompson’s piece, for instance, the problem seems to be whether post-work people would spend their time in intellectual pursuits or as independent artisans. But it’s just as likely people would spend their days as refugees trying not to starve.
You can get the sense that something is missing because virtually all of these articles consider radical forms of leisure instead of ownership. (Indeed, in assuming that prosperity leads to redistribution leads to leisure and public goods, it’s really a forward projection of the Keynesian-Fordism of the past.) I rarely see any of these mass media post-work scenarios tackle these issues head-on, much less talk about “post-ownership” instead of just “post-work.” (Friend of the blog Peter Frase is one of the few who does.)
It’s just as likely that the result will be a catastrophe for those who lose the value of their human capital. It seems unlikely that the political economy would become more conducive to redistribution, as these articles usually imply, because the value of capital assets would probably skyrocket. With that value high and ownership concentrated, it would potentially lead to a political economy more favorable to fascism than to robust egalitarianism. Who owns the robots, and what that even means in such a world, will be just as much a question as what we do to occupy ourselves; the first, really, will determine the second.
As a result, discussions of the idyllic robot future give working people a desire that is an obstacle to the actual flourishing of their lived conditions, and it remains an ideology completely divorced from the lived experiences of everyday people. I hereby nominate this as Pure Ideology. Who seconds the motion?
John Judis believes that Democrats are on the wrong path and the Roosevelt Institute is partially to blame. In his recent piece, “Dear Democrats: Populism Will Not Save You,” he attacks the growing liberal consensus on economic issues, using our recent Rewriting the Rules report as an example, on both substantive and electoral grounds.
Judis’s core argument is that it is crucial “to develop a sophisticated politics” to turn economic appeals into electoral success. I couldn’t agree more, though I feel this issue is caught in the crossfire of Judis walking back his previous Democratic demographic triumphalism. His second point is that the economic platform we’ve outlined is a terrible basis for a Democratic majority because voters are “fearful of big government, worried about new taxes, skeptical about programs they think are intended to aid someone else,” and otherwise not motivated by inequality and turned off by economic “populism.”
As a coauthor on our report and as someone involved throughout its creation, I’d like to address these criticisms. I can only speak to our report, which argues that public policy and the rules of the economy are more responsible for our tough economic situation than technology, globalization, sociology, or any of the other factors normally cited. Judis, I think, misses how robust this approach can be, how much it diverges from caricatures of big government liberalism, and how a lot of forces brought us to this point.
A Broader Vision
Roosevelt’s Rewriting the Rules plan isn’t simply centered around fighting inequality, and it’s not just about fairness, which is an argument that Judis says tends to turn off voters. Instead, we view it as tackling central concerns over investment, growth, opportunity, shared prosperity, and economic security. The subtitle of the report is “An Agenda for Growth and Shared Prosperity.” During the creation process, we kept two specific things in mind: First, nobody cares about inequality abstractly, they care about specific economic issues; and second, our vision can’t be simply returning to the past.
We do argue that you can’t address the economic concerns I mentioned without going big. You can’t tackle investment without looking at the financial sector, you can’t address opportunity without looking at structural discrimination, you can’t view economic security without looking at the labor market, and you don’t get growth without doing all of the above. But the liberal economic consensus isn’t about adjusting this or that statistical abstraction, or about building a time machine to return to an era that probably didn’t exist: It’s about solving real problems with long-term consequences for our future.
This is hard to balance, especially for an economic report that wants to highlight the latest research in inequality across fields. The team is full of economists, not political messengers. But since a forward, positive agenda is built into the DNA of these arguments, it is not hard for a talented political movement to use these economic arguments to talk about how Democrats can deliver the goods people care about when it comes to the future of the economy.
Deeper Dive into Markets
But isn’t it all just tax-and-spend and big government liberalism? As a second point, we think looking at the market structures that generate inequality in the first place is a way to both meaningfully address inequality and also move us in a different policy direction. The idea that the rules are rigged isn’t in the current dialogue, and it’s one worth testing out with the public as part of a comprehensive argument about the economy.
The policy section of the report is a call for further discussion (some of which will be elaborated on in future Roosevelt products), but what I think is important is that, in addition to higher top marginal tax rates and income support, there’s an entire suite of policies focused on the rules of the economy itself.
These are not trivial. We examine how changes to corporate governance encourage short-termism, how the ramp-up of the criminal justice system reduces wages and opportunities for people of color, and how the falling value of the minimum wage increases poverty. These “market conditioning” effects complement whatever emphasis political leaders put on tax-and-spend issues.
I think that’s worthwhile, because it gives Democrats an in to talk about economic problems with people who want to become rich or don’t think of themselves as class warriors, but do care about promoting broadly shared opportunity. It also short-circuits many of the libertarian arguments about the state, because people get that the economy needs rules—and that not having rules is still a form of rules.
(Ironically, by calling upon the work of Stephen Rose, who argues everything is fine with the macroeconomy because government transfers can just take care of any weaknesses, Judis is far more reliant on tax-and-spend liberalism than he accuses us of being. I’m fine with transfers, of course, as income support was crucial to fighting the Great Recession. But there are also electoral limits to this strategy.)
As for the electoral appeal, these ideas aren’t part of the normal policy discussion, but to the extent that they are, they are quite popular. The minimum wage is winning in red states, and financial reform is broadly popular as a topic.
The Natural Next Step
Judis’s narrative is focused on the idea that the Democrats have been hijacked (with ACORN a culprit, no less) with this agenda. At times, he forces this story into a symmetry with what is happening on the right to get some easy “pox on both houses” points.
But this doesn’t reflect the actual path we’ve taken to get here. One thing we tried to demonstrate in Rewriting the Rules is that the research has been moving in this direction for the past decade. Many of these policy items build on or expand what President Obama has proposed (infrastructure, financial reform, minimum wage, etc.)—proposals that still remain good ideas in 2016. The political success of the Fight for 15 workers has also shown that there’s energy at the local level that people are looking to help scale.
The other reason this agenda has gained traction is that the other approaches have collapsed in the past year. Education doesn’t look like the silver bullet people had believed it to be in the past. The idea that the Great Recession would be a minor hiccup and we’d be back on track has proved false. Centrist claims about the need for immediate austerity and a Grand Bargain have also failed to pan out.
Oddly, I’m not sure I’ve heard a compelling counter-strategy about how to describe the economy, and Judis proposes no such thing. The 2016 nominee won’t be able to run on an “overcoming partisanship” strategy like President Obama in 2008 or a “let’s give Obamacare and the recovery a chance” strategy like in 2012.
One could just downplay the economy, of course, and if next year gives us a large increase in wages the story will change with it. But polls now show economic issues are coming to dominate the discussion, median family incomes are down 7 percent since 2000, and the argument that President Obama pulled us back from an economic collapse and rebuilt jobs, and now we need to turn to a more secure future with better wages, investment, and security, seems the most natural transition.
Economic issues will dominate on the right, and while they mimic our language, their proposals will likely be very regressive. Even the GOP’s leading reformer, Marco Rubio, is calling for eliminating all taxes on capital and inheritances. Whoever wins the Republican nomination is going to be controlled by a base that wants the Ryan Plan immediately. But rather than simply calling out what’s wrong with the right’s approach, it will be essential for liberals to have their own vision of opportunity, investment, growth, and security. We think our report is a crucial building block for this.
Is there a Too Big to Fail (TBTF) subsidy? If so, is it large, sustained, and institutionalized by Dodd-Frank, as many conservatives claim? Since this is always coming up in the discussion over financial reform, and especially since both those who think Dodd-Frank should be repealed and those who think it didn’t go anywhere near far enough have an incentive to argue for it, let me put out my marker.
I think the TBTF subsidy was real in the aftermath of the crisis, when it was an obvious policy to prevent a collapse of the financial system. But, contrary to the conservative argument, the subsidy has since shrunk to a small amount, if it still exists at all. I also think the focus on it is a distraction. My reasoning comes less from any single study than from the fact that the two primary yet opposite quantitative techniques for determining such a thing both tell the same exact story, a point I don’t think has been noted.
This post is written for general readers, with the financial engineering in the footnotes. A TBTF subsidy just means that the largest firms are viewed by the markets as safer than they should be. Because the market sees less credit risk, these firms borrow more cheaply, and the price of insuring their debt, as measured by CDS spreads, is lower.
So how would you go about answering whether a bank has a TBTF subsidy? There are two general quantitative approaches. The first is to compare that bank to other, non-TBTF banks, controlling for characteristics, and see whether it has cheaper funding. The second is to look at the fundamentals of that bank by itself, estimate its chances of failing, and compare that estimate to the market’s. These approaches are, as a matter of methodology, the opposite of each other.[1] Yet they tell the same story. Let’s take them in turn.
First Method – Compare a Firm to Other Firms
The first approach is to simply compare TBTF firms with other firms and see if they enjoy lower funding costs. How do you do this? You gather a ton of data across many different types of banks and look at the interest rates those banks pay. You assume that the chances of default are random but can change based on characteristics.[2] You then run a lot of statistical regressions, trying to control for relevant variables, and see if the TBTF measure provides a lower funding cost. This is what the GAO did last year.
One major problem with this technique is that you have to control for important variables. Is TBTF a matter of the size of assets, the square of the size of assets, or just a $50 billion threshold? How do you control for risks of the firm? Given that all the information comes from comparisons across firms, the way you compare a TBTF firm with a medium-sized firm matters.
This is why you can end up with the GAO estimating 42 different models: they wanted to try all their variables. But which models are really the best? The graph below summarizes their results, where they found a major subsidy in the aftermath of the crisis that went back to near-zero later (dots below the line reflect models showing a subsidy).
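As a rough illustration of how this regression-based method works, one can build a synthetic dataset where the biggest banks get an artificial funding discount and check that a regression with controls recovers it. This is made-up data, not the GAO’s; the 30-basis-point “subsidy” is planted by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic bank-level data (illustrative only, not the GAO's dataset)
log_assets = rng.normal(9.0, 1.5, n)            # log of bank assets
risk = rng.normal(0.0, 1.0, n)                  # stand-in risk control
is_tbtf = (log_assets > 11.0).astype(float)     # e.g. a size threshold

true_subsidy_bp = 30.0   # funding advantage planted in the data, in basis points
funding_cost_bp = (300 + 10 * risk - 5 * (log_assets - 9)
                   - true_subsidy_bp * is_tbtf + rng.normal(0, 10, n))

# Regress funding cost on a TBTF dummy plus controls
X = np.column_stack([np.ones(n), is_tbtf, log_assets, risk])
beta, *_ = np.linalg.lstsq(X, funding_cost_bp, rcond=None)
print(f"estimated TBTF effect: {beta[1]:.1f} bp")  # planted value was -30 bp
```

The hard part in practice, as the text notes, is exactly what this toy version assumes away: whether size, the square of size, or a threshold is the right TBTF measure, and whether the risk controls are adequate.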
Second Method – Compare a Firm to Itself
Let’s do the exact opposite with the second approach. Instead of comparing across firms, let’s use a “structural” approach that looks at the specific structure of the bank, producing an estimate of how likely it is to default.[3] We then compare that estimate with actual market prices of default risk from credit default swaps. If there’s a TBTF subsidy, our estimated price of a credit default swap will be higher than the actual price, since the market thinks a loss is less likely.
How do we do this? We look at the bank’s balance sheet and figure out how likely it is that the value of the firm will fall below its debt. We can even phrase this as an option, which means we can hand it to the physicists to put on their Black-Scholes goggles and find a way to price it.[4] The IMF recently took a crack at using this second approach and comparing the estimate to actual CDS prices.
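A minimal sketch of that option-pricing step, in the spirit of a Merton-style structural model: the equity-as-call-option logic yields a risk-neutral default probability, which can be turned into a crude model-implied spread and compared to an observed CDS spread. All numbers here are illustrative assumptions; the recovery rate and spreads are made up for the example.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def merton_default_prob(assets, debt, vol, rate, horizon):
    """Risk-neutral probability that assets end up below the debt face value,
    treating equity as a call option on the firm's assets (Merton model)."""
    d1 = (log(assets / debt) + (rate + 0.5 * vol**2) * horizon) / (vol * sqrt(horizon))
    d2 = d1 - vol * sqrt(horizon)
    return norm_cdf(-d2)

# Illustrative numbers only
p_default = merton_default_prob(assets=100.0, debt=70.0, vol=0.15,
                                rate=0.02, horizon=1.0)
model_spread = p_default * (1 - 0.40)   # crude spread, assuming 40% recovery
market_spread = 0.0030                  # assumed observed CDS spread

print(f"model default prob: {p_default:.4f}")
print(f"model spread {model_spread:.4f} vs market {market_spread:.4f}")
if model_spread > market_spread:
    print("model sees more risk than the market prices in, "
          "consistent with a subsidy")
```

The IMF exercise is far more elaborate than this, but the comparison it runs is the same in spirit: a gap where the model’s spread exceeds the market’s is the subsidy signal.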
Here’s what they found, where a positive value means the predicted price is larger than the actual price:
Opposite Strengths, Opposite Weaknesses
These two approaches aren’t just the opposite of each other; they also have opposite weaknesses. Where the first approach leaves it unclear whether you are controlling for size and risk well at any moment, the structural model sidesteps these issues by looking at the TBTF firm itself. But structural models need CDS prices, which are often illiquid, introducing numerous pricing problems; the first approach draws on the bond market, which is quite deep. The structural model requires a lot of financial engineering assumptions, whereas the statistical approach requires virtually none. Let’s take a second and chart that out:
Note again that the two approaches are the complete opposite of each other in theory, data, and relative merits, yet they both tell the same story. There was a subsidy that was real in the aftermath of the crisis but has been coming down and is now close to zero. What should we take away from this?
First, the mission isn’t done, but we are on the right path. Higher capital requirements, liquidity requirements, living wills, restructuring, derivatives clearing, and more are paying off, removing much of the concern that the markets believe we have permanent bailouts.
You’ll hear many stories about this subsidy, but they will get all their value from the 2009–2010 period. For those on the right who argued that this would become a permanent GSE regime, this isn’t the case. The only question is whether we will go further to fully eliminate it, not whether it will be a permanent feature.
Second, we should remember that this subsidy focus was always a distraction. If Lehman Brothers had collapsed with no chaos, we’d still have millions of foreclosures, a securitization and credit market designed to rip off unsuspecting consumers, and a system of enforcement that doesn’t hold people accountable. The subsidy is only one of the major problems we have to deal with.
In addition, a subsidy of zero doesn’t mean we can ignore the issue. These models can’t tell the difference between a successful and an unsuccessful resolution. This conclusion just means there would be a credit loss; it doesn’t tell us whether bankruptcy is an option, or whether a resolution would be swift, certain, well-funded, and likely to create minimal chaos for the economy. Those are our bigger concerns, and they aren’t the same question at all.
Third, rolling back major parts of Dodd-Frank, particularly when it comes to TBTF policy, is a bad idea. These results are fragile; it’s easy for us to return to 2010. It would be a shame to remove the policies that are actually working well.
[1] It’s not exactly “reduced-form versus structural,” but if you want to learn more about these two methods (and to confirm I’m not making this up), there’s an extensive literature on it.
[2] In the jargon, defaults are thought of as exogenous, with some characteristics making a random default more likely. This will become more apparent in the second approach, when we model the default as endogenous to the structure of the firm.
[3] Full disclosure: I used to work at Moody’s KMV, a pioneer in structural models. I bleed structural modeling; it is the best.
[4] Equity is worth the firm’s assets minus debt, or zero if the assets are less than the debt. This is the same exact payout as a call option; the equity of the firm is simply a call option on the firm’s assets, with the debt as the strike price, and as such it can be modeled and priced like an option. (For those really wedded to the myth that shareholders “own” the firm, note that in the world of Black-Scholes it’s just as true to say that debtholders “own” the firm, except they sell off a derivative on their ownership to someone else.)
I have a piece at Rolling Stone about how Yale’s giant donation and the collapse of for-profit colleges under fraud charges both tell the same story: As we defund and privatize state public colleges, there is no set of good institutions to fill the void left behind.
Three quick follow-up points. First, a technical one responding to something several people have brought up. I argue: “how much will Yale increase its enrollment numbers as a result of this [Schwarzman $150 million donation]? We can make a good guess: zero. Yale’s freshman enrollment this past year [is] virtually the same as in 2003.”
Yale’s enrollment has not only been flat since 2003 but since around the 1970s, even though the number of students being educated overall has doubled over those 40 years. Some people have noted that there are plans by fall 2017 to increase Yale’s enrollment 15 percent. It’s true, though those plans have been in the works since before the financial crisis and have been significantly delayed, and are unrelated to the Schwarzman donation. The point very much stands.
Some thought this point was a cheap shot, but I think it is crucial to get out there in the debate. Private nonprofits pick and choose strategically how to expand enrollment to fulfill their private goals, and that’s great. But their goals do not line up with the public one of ensuring that all who qualify have access to quality, affordable higher education, and they certainly won’t step up as that system is pulled back.
Second, the for-profit stories are crazy. I need to be writing more about them, but keep an eye on their implosion, and what it means for privatization and running all government services through for-profit actors. The Corinthian debt-strikers are worth watching as well – here’s Annie Lowrey writing about them and Astra Taylor.
Third, two recommendations. Michelle Goldberg’s long Nation piece on the inequality-amplifying consequences of public disinvestment at the University of Arizona, which I link to, is fantastic and very much worth your time. I also tried to get in a great column by Andrew Hartman on how conservatives used to value mass higher education as a basis of Western civilization during the culture wars – Allan Bloom described it as “a space between the intellectual wasteland he has left behind and the inevitable dreary professional training that awaits him after the baccalaureate” – but have now traded that battle for one of defunding and privatization. It didn’t make the final piece, but check out my piece anyway!
I have a piece in The Nation discussing the Death of Centrism. A lot of people are discussing why the economic discussion has shifted to the left in liberal circles, and one of the big reasons is that the specific predictions centrists (as a movement, not a temperament) made about the economy didn’t pan out.
It’s very difficult to convey how different the conversation was back then. Here’s a 2010 op-ed by Peter Orszag arguing that “much more deficit reduction, enacted now, to take effect in two to three years” as well as “an improvement in the relationship between business and government” are both necessary to boost the short-term economy. (He also argues against QE2 because monetary expansion might help prevent a Grand Bargain on the budget.) When researching this piece, Josh Bivens reminded me the administration was freaking out in 2009 about how the “carry trade” could cause interest rates to spike at a moment’s notice, an argument that seems ridiculous with rates so low six years later.
All the centrists got was a counterproductive spending cut, one the GOP immediately reneged on, and none of their actual goals. Their arguments are now completely absent from the debate. I hope you check it out!
Inequality is a choice—one that we make with the rules we create to structure our society and economy. In this report, Roosevelt Chief Economist and Nobel laureate Joseph Stiglitz, joined by co-authors Nell Abernathy, Adam Hersh, Susan Holmberg, and Mike Konczal, exposes the link between the rapidly rising fortunes of America’s wealthiest citizens and increasing economic
I’m very excited to announce the release of “Rewriting the Rules of the American Economy” (pdf report), Roosevelt Institute’s new inequality agenda report by Joe Stiglitz. I’m thrilled to be one of the co-authors, as I think this report really tells a compelling story about inequality and the challenges the economy faces.
Recently there’s been a lot of discussion about a “new” conventional wisdom (“a force to be reckoned with” according to one observer), one in which choices about the rules of the economy are a major driver of the outcomes we see. This is in contrast to the normal narrative about inequality we hear, one in which globalization, technology, or individual choices are the only important parts. I like to think this report is a major advancement in this discussion, bringing together the best recent research on this topic.
As we argue, inequality is not inevitable: it is a choice that we’ve made with the rules that structure our economy. Over the past 35 years, the rules – the regulatory, legal, and institutional frameworks that make up the economy and condition the market – have changed. These rules are a major driver of the income distribution we see, including runaway top incomes and weak or precarious income growth for most others. Crucially, however, these changes in the rules have not made our economy better off than we would be otherwise; in many cases we are weaker for these changes. We also now know that “deregulation” is, in fact, “reregulation” – that is, a new set of rules for governing the economy that favors a specific set of actors – and that there is no way to avoid making these choices. But what were these changes?
Financial deregulation exploded both the size of finance and its incomes, roughly doubling the share of finance in the top 1 percent. However, finance grew as a result of intermediating credit in a “shadow banking” sector, which led to disastrous results. It also grew from asset management, a field in which pay is often determined by luck and by fees driven by the increasing prevalence of opaque alternative investment vehicles like hedge funds. For all the resources it uses, finance is no more efficient than it was a century ago.
Corporate governance also radically changed during this period, led by public policy decisions. CEO pay fundamentally shifted toward a high pay model in the 1980s. The shareholder revolution also changed the nature of investment. We now see finance acting as a mechanism for getting money out of firms rather than into them; similarly, private firms are investing more than public firms. CEOs regularly use buybacks to hit earnings targets and say they’d rather hit accounting goals than invest long-term, indicating that short-termism is now a serious problem for investment and its positive spillovers.
High marginal tax rates were cut, but there’s no evidence that the high-end marginal tax rate has any effect on growth; cutting it does, however, raise the share of income the top 1 percent takes home. Low taxes don’t just make the equalizing effects of taxes weaker; they also mean that CEOs and other executives in the top 1 percent have more of an incentive to bargain aggressively with boards or seek opportunities for extracting rents, all zero-sum games for the economy. Lowering capital taxes showed no impact on higher investment, but a positive effect on increased capital payouts; capital income growth is one of the main drivers of inequality during this time period.
During this time, the Federal Reserve’s focus moved toward low and stable inflation at the cost of higher unemployment. Unemployment from weak Federal Reserve action rises the most for low-skilled and minority workers. Inequality generally doesn’t come down unless unemployment is below 6 percent, and this has become less of a priority.
The rules changed, or were not updated, for the labor market as well. Decreasing unionization has taken a toll on workers’ wages. Men’s inequality, in particular, has risen due to collapsing unionization rates. Women’s inequality has suffered due to a falling minimum wage, which went from 54 percent of the average hourly wage in the late 1960s to just 35 percent now. Labor market protections and institutions that give workers voice and power, in general, have not been updated for a new world of service and care work.
Though not an effective driver of lower crime rates, a dramatic turn toward mass and punitive incarceration has reduced the employment prospects for millions of Americans, especially people of color. In particular, there’s a dense web of discriminatory codes for those with a record, which pushes them toward second-class citizenship. One estimate finds 38,000 such punitive statutes, with most of them related to employment and having no end date.
Our institutions and rules haven’t been updated to fully facilitate women’s ability to participate in the labor force. As a result of gender discrimination in the workplace, lack of paid sick and family leave, and the unavailability of affordable child care, women’s participation in the U.S. labor force has declined over the past 15 years, while it increased in most other OECD countries.
Many people agree inequality is a challenge, but would say that this is all driven by technology and globalization. We discuss this at length in the report, but we don’t find these traditional stories either convincing, in the case of technology, or sufficient, in the case of globalization. Both of these forces are playing out, in quite similar ways, in other advanced countries, whose growth of inequality nowhere mirrors our own. Technology and globalization don’t fall from the sky, but instead are determined in important ways by rules and institutions. This is especially important in the era of free trade agreements, which are really managed trade agreements. These agreements are less about trade and more about the regulatory environment corporations face.
But rules matter even in these straightforward stories about supply and demand for labor. Advancements in search theory tell us that supply and demand, rather than strictly determining wages, instead place boundaries or endzones on where wages can go. What determines where wages fall within those boundaries is a whole host of economic rules, including bargaining power, institutions, and social conventions. Even in the strong version of these arguments, the rules matter.
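A stylized version of that search-theory point: supply and demand pin down a band between the worker's reservation wage and the employer's marginal product, and a bargaining-power parameter standing in for unions, conventions, and outside options decides where in the band the wage lands. Everything here is a made-up illustration, not a calibrated model.

```python
def bargained_wage(reservation, marginal_product, worker_power):
    """Wage as a convex combination of the two bounds set by supply and demand.
    worker_power in [0, 1] is a catch-all for institutions and bargaining rules."""
    assert 0.0 <= worker_power <= 1.0
    assert reservation <= marginal_product
    return (1 - worker_power) * reservation + worker_power * marginal_product

# Same fundamentals, different rules: the wage can land anywhere in the band.
low_power = bargained_wage(reservation=12.0, marginal_product=30.0, worker_power=0.2)   # 15.6
high_power = bargained_wage(reservation=12.0, marginal_product=30.0, worker_power=0.7)  # 24.6
```

Nothing about supply or demand changed between the two calls; only the rules did, which is the report's point in miniature.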
This report describes what has happened, going far deeper than this summary. It also has a policy agenda focused on both taming the top and growing the rest of the economy. Some may emphasize certain pieces more than others, but no matter what, this argument about the rules is what is missing in the current debates over the economy. I hope you get a chance to check out the report!
The American Action Forum jumps into the financial reform debate with a letter on the growth consequences of Dodd-Frank penned by its president, Douglas Holtz-Eakin. This letter is a bad analysis, immediately violating the first thing you learn in corporate finance: capital structure doesn’t dictate funding costs. But there’s a deeper context that makes this letter reckless and a bad development, and I hope they are willing to walk back part of it.
Why reckless? It’s important to understand the role people like Holtz-Eakin play in the conservative movement. It is less about providing analysis (which is good, because this is a bad analysis), and more about signaling priorities. What should be done about Dodd-Frank if the Republicans win in 2016? This letter signals a new front I haven’t seen before on the right: one focused on going after higher capital requirements. Worse, going after them as if they were, using that conservative trigger word, a “tax.” I think that is a terrible move with serious consequences, and if they are going to do it, they need to do better than this.
A Bad Analysis
Americans for Financial Reform and David Dayen give us a solid overview of what is lacking in this analysis. It contains no benefits, confuses one-time and ongoing costs, assumes all costs derive from the cost of capital rather than profits, and so on. I’m also pretty sure there’s an error in the calculations, which would reduce the estimate by a third; I’m waiting for a response from them on that.
But I want to focus on capital requirements. Holtz-Eakin argues that the Solow growth model “can be used to transform the roughly 2 percentage point rise in the leverage ratio of the banking sector” into “a rise in the effective tax rate.” Wait, the tax rate? “The banking sector responded to Dodd-Frank by holding more equity capital,” writes Holtz-Eakin, “thus require it to have greater earnings to meet the market rate of return – the same impact as raising taxes.” Higher capital requirements, in this argument, function just like a tax.
He concludes that a 2 percentage point rise in capital requirements, much like what we just had, increases the cost of capital somewhere between 2 and 2.5 percent. (I believe that to be the argument, though the paper itself is quick and doesn’t cite any body of research.)
This is wrong, full stop. The Holtz-Eakin argument is predicated on the idea that capital structure directly affects funding costs. But our baseline assumption should be that there is virtually no impact of capital requirements on the cost of capital. Economics long ago debunked the notion that changes in aggregate funding mixes can affect the value of a business itself, much less have a widespread, durable, macroeconomic effect. This is a theorem they teach you in Corporate Finance 101: the Modigliani–Miller theorem. And this has been one of the most important arguments in financial reform, with Anat Admati being a particularly influential advocate on this point.
Just step back and think about what Holtz-Eakin’s model means. If Congress passed a law requiring companies to fund themselves with half as much equity as they did before, would the economy experience a giant growth spurt from changing the aggregate funding mix? No, of course not. The price of capital would simply adjust with this new balance; funding with more equity means funding with less debt, though the business is still the same. Investors are not stupid; they respond to a changing funding mix by simply changing the prices accordingly. This is how markets are supposed to work.
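A back-of-the-envelope illustration of the Modigliani–Miller logic, with frictionless numbers invented for the example: when a firm swaps debt for equity, the required return on equity adjusts, and the blended cost of capital is unchanged.

```python
def cost_of_capital(asset_return, debt_cost, debt_share):
    """M-M Proposition II: the required equity return rises with leverage,
        equity_return = asset_return + (D/E) * (asset_return - debt_cost)
    so the weighted average stays pinned at the return on assets."""
    equity_share = 1.0 - debt_share
    equity_return = asset_return + (debt_share / equity_share) * (asset_return - debt_cost)
    wacc = equity_share * equity_return + debt_share * debt_cost
    return equity_return, wacc

# A 2-point swing in the funding mix, like the one Holtz-Eakin analyzes:
high_leverage = cost_of_capital(asset_return=0.08, debt_cost=0.04, debt_share=0.95)
low_leverage = cost_of_capital(asset_return=0.08, debt_cost=0.04, debt_share=0.93)
# Equity "looks" more expensive at high leverage, but in both cases the
# overall funding cost is the same 8 percent.
```

In the real world, taxes and frictions create some slippage from this benchmark, which is exactly why careful estimates find small but nonzero costs rather than either zero or 2-plus percent.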
Of course, the real world doesn’t work exactly like these abstract economic models. If there’s a hierarchy of financing options, which seems reasonable, then moving up or down that ladder can impose some costs. Doug Elliott from Brookings, for instance, writes quite a bit arguing that the idea that equity and higher capital requirement is costless is a dangerous “myth” of financial reform. (Here is Admati responding.)
So Elliott’s not on the costless side, but does he agree with Holtz-Eakin’s numbers? Not even remotely. According to Elliott’s estimate, the cost of the entirety of Dodd-Frank increases the cost of capital 0.28 percent, and the “low levels of economic costs found here strongly suggest that the benefits in terms of less frequent and less costly financial crisis would indeed outweigh the costs.”
As shown in the graphic above, a model of higher capital requirements by Kashyap, Stein, and Hanson puts the estimate for a 2 percent capital increase at between 0.05 percent (driven by the tax effects) and 0.09 percent (driven by a large slippage from Modigliani-Miller they assume to get a high-end estimate). These are broadly in line with other estimates from the past several years. Even the most industry-driven estimates, designed to weaken capital requirements, don’t remotely approach this 2.00+ percent increase.
(As a coincidence, Elliott did estimate what it would take to make the cost of capital rise Holtz-Eakin’s estimated 2 percent. In his view, it would be capital requirements on the order of 30 percent, which is the reach goal for some. But when you analyze Dodd-Frank and get numbers consistent with 30 percent capital ratios, you are probably doing it wrong.)
A Worse Priority
So the estimate is wrong in a fundamental way; but this is less about a specific analysis than it is about setting priorities for the conservative movement when it comes to Dodd-Frank. And if attacking capital requirements becomes a major priority for conservatives, that’s a worrying sign. When conservatives start calling things “taxes,” institutional forces come into play beyond the control of any specific person, and that’s dangerous for a successful, broadly supported reform that is important for a better financial system.
A broad group of people has come together to argue for capital requirements. This includes important commentators across the spectrum, from Simon Johnson to John Cochrane to many others. And there’s good reason for this. The current capital requirement regime hits six birds with one stone: helping with solvency, balancing risk management, making resolution and the ending of Too Big to Fail more credible, preventing liquidity crises in shadow banking, right-sizing the scale and scope of the largest financial institutions, and macroeconomic prudential policy.
There are disagreements about specifics of what is the best way to do higher capital requirements—quite intense ones, actually. But there’s a broad consensus in favor of them. Having watched this from the beginning, this broad coalition is one of the most promising developments I’ve seen.
I’m excited to see the right go after Dodd-Frank. Is the argument that there’s too much accountability for consumers now, and we need to gut those regulators at the CFPB? Is it that derivatives regulations are too extensive, and we should build our future prosperity by letting a thousand AIGs bloom? Is it that there should be few, if any, consequences for firms that break the law or commit fraud? (As someone who is worried about over-policing, this is one area where I believe we are criminally under-policed.) Please, by all means, make these arguments.
But taking on capital requirements with this weak argument is a bad development. The financial market is not understudied: nobody has ever found anything like these results, and Holtz-Eakin’s analysis doesn’t even engage with that other research. Those who think the costs of capital requirements are low could be wrong, but proving that will require an analysis far better than the one provided here. Until then, the responsible thing is not to unleash the conservative movement against a reform that is doing good work and that should be advanced rather than dismantled.
I’m pretty sure that for “rL-C” in equation 11 he uses net income ($151.2bn) rather than EBIT ($218.7bn), though, per equation 9, “rL-C” should be pre-tax. Using the wrong number is the only way I can replicate his estimate. I’ll update this either way if they respond.

If I’m right, this decreases Holtz-Eakin’s growth costs of regulations by about 30%, meaning that the economy will probably be skyrocketing any second now.
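The roughly 30 percent figure follows directly from the two earnings numbers, on the assumption (mine, since the letter doesn't show its work) that the final cost estimate scales inversely with the earnings input:

```python
net_income = 151.2  # $bn, the figure the letter appears to plug in for "rL-C"
ebit = 218.7        # $bn, the pre-tax figure equation 9 seems to call for

# If the cost estimate scales inversely with this input, correcting it
# shrinks the estimate by:
reduction = 1 - net_income / ebit
print(f"{reduction:.1%}")  # about 31 percent, i.e. roughly "a third"
```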
A lot of people were surprised last month when the investment giant BlackRock flagged the rise in stock buybacks and dividend payments as a major economic concern. Its CEO argued that the “effects of the short-termist phenomenon are troubling both to those seeking to save for long-term goals such as retirement and for our broader economy,” and that this was being done at the expense of “innovation, skilled work forces or essential capital expenditures necessary to sustain long-term growth.”
They are right to be concerned. The cash handed back to shareholders in the form of buybacks and dividends reached 95 percent of corporate profits in 2014, up from 88 percent the year before and 72 percent in 2010, and it is expected to go even higher. These numbers are far above historical norms, but they are the culmination of a long process starting in the 1980s. Private investment remains a weak part of the recovery, and it is necessary to investigate the connection between corporate governance and those decisions.
With that in mind, Senator Tammy Baldwin (D-WI) has sent a letter to the SEC looking for answers on these issues. In particular, she flags whether the SEC’s mission to “foster capital formation and prevent fraud” is jeopardized by short-termism in the market. It will be good to see how the SEC responds, and which other senators and organizations join in with their concerns.
Personally, I’m happy that it quotes J.W. Mason’s work on profits and borrowing shifting from investment in a previous era to cash leaving the firm now. This issue is a major piece of our Financialization Project here at Roosevelt, and we will continue to develop it in the future.
I think there are two additional things of interest. One is that this relationship is attracting more academic and popular scrutiny. Recent, high-level research shows that as a result of short-termist pressures, “public firms invest substantially less and are less responsive to changes in investment opportunities, especially in industries in which stock prices are most sensitive to earnings news” compared to private firms before the Great Recession.
Second, this looks like a centerpiece agenda item for liberals going into 2016. Larry Summers’s Inclusive Prosperity report for the Center for American Progress discusses concerns over short-termism, noting, “it is essential that markets work in the public interest and for the long term rather than focusing only on short-term returns. Corporate governance issues, therefore, remain critical.”
The problem of short-termism was also in Senator Elizabeth Warren’s big speech on the future of the financial reform agenda, in which she noted we need to change the rules of the economy because we “too often reward short-term risk-taking instead of sustained, long-term growth” and allow CEOs to “manipulate prices in the short-term, rather than investing in the long-term health of their companies.”
And it will be central to work from the Roosevelt Institute about inequality coming next month. (Get excited!)
I’m not sure if the right has a response to this issue. One of their core policy goals, removing all taxes on capital, will certainly make the situation worse, as the Bush dividend tax cuts increased dividends payouts without encouraging any real investment or wage growth. If the Republicans want to have real answers about inequality and stagnation, it’s important that they tackle real questions. And short-termism is one of those essential questions.
Two bits of exciting news this week.
First, I’m starting a biweekly newsletter. It’ll have what I’m up to, including all the things I’ve been writing, collected into one place. It’ll also have my favorite stuff I’ve read, random personal stories, and more. (Blame Google Reader for this I suppose.) Oh, and pictures of my dog too. Given the rate at which I’m writing it’s probably more of an every other week update; we’ll see how it goes. But for now, sign up!
Second, I’m joining the masthead at Dissent as a contributing editor. Here is the announcement; I was happy to join even before I knew the excellent people they were bringing on board. Nothing much changes for me; I just get to formalize my relationship with the brilliant team they’ve built over there and help make it a standard for left thought going forward.
I also have a review of Naomi Murakawa’s new book on liberal punishment in the latest issue. This newest issue is excellent, but the piece on the assistant economy by Francesca Mari is one of the most bizarre and enlightening business pieces I’ve read recently. Also check out Sarah Jaffe on punk rock feminism and the left once it’s out from the paywall (or better, subscribe!).
News is breaking that GE will be spinning off most of its financing arm, GE Capital, over the next two years. Details are still unfolding, but, according to the initial coverage, “GE expects that by 2018 more than 90 percent of its earnings will be generated by its high-return industrial businesses, up from 58% in 2014.”
It’s good that our industrial businesses will be focusing more on innovating and services rather than financial shenanigans, but this also tells us two important things about Dodd-Frank: it confirms one of the stories about the Act and disproves the core conservative talking point about what the Act does.
A very influential theory of the financial crisis is that there were financial firms acting just like banks but without the normal safeguards that traditionally went with banks. There was no public source of liquidity or backstops through the FDIC or the Federal Reserve, a public good capable of ending self-fulfilling panics. There was no mechanism to wind down the firms and impose losses outside of the bankruptcy code. There weren’t the normal capital requirements or consumer protections that went with the traditional commercial banking sector.
Though we now call this regulatory arbitrage, at the time it was seen as innovation. GE Capital was explicitly brought up as a poster child for deregulation. You can see it in Bob Litan and Jonathan Rauch’s 1998 American Finance for the 21st Century, which lamented the “twentieth-century model of financial policy” that, using transportation as an analogy, “set a slow speed limit, specified a few basic models for cars, separated different kinds of cars into different lanes, and demanded that no one leave home without a full tank of gas and a tune-up.” GE Capital was explicitly an example of a firm that could thrive with a regulatory regime that “focuses less on preventing mishaps and more on ensuring that an accident at any one intersection will not paralyze traffic everywhere else.”
This was very apparent in the regulatory space. The fact that GE owned a Utah savings and loan allowed it to be regulated under the leniency of the Office of Thrift Supervision (OTS), so it was able to work in the banking space without the normal rules in place. It was also able to use its high-level industrial credit rating to gamble weaker positions in the financial markets, arbitraging the private-sector regulation of the credit ratings agencies in the process.
How did that work out? First off, there was massive fraud. As Michael Hudson found in a blockbuster report, one executive declared that “fraud pays” and that “it didn’t make sense to slow the gush of loans going through the company’s pipeline, because losses due to fraud were small compared to the money the lender was making from selling huge volumes of loans.” Then there were the bailouts. The government backstopped $139 billion worth of GE Capital’s debts as it was collapsing and essentially had to manipulate the regulatory space to allow it to qualify for traditional banking protections. So much for not paralyzing traffic, and so much for the old rules not being important.
Dodd-Frank looked to normalize these regulations across both the shadow and regular banking sectors. It eliminated the OTS and declared GE Capital a systemically risky firm that has to follow higher capital requirements and prepare for bankruptcy with living wills just like we expect a bank to do, regardless of what kind of legal hijinks it is using to call itself something else. And GE Capital, faced with the prospect of having to play in the same field as everyone else, decided it should go back to trying to bring better things to life rather than making financial weapons of mass destruction. That’s pretty good news, and a process that should be encouraged and continued.
The Collapse of the Conservative Argument
But there’s one ask GE has as it spins off GE Capital, one that actually disproves the core conservative argument on Dodd-Frank. In the coverage, GE Chairman and CEO Jeff Immelt states directly, “GE will work closely with [the regulators at the Financial Stability Oversight Council] to take the actions necessary to de-designate GE Capital as a Systemically Important Financial Institution (SIFI).”
Dodd-Frank designates certain financial institutions, mostly over $50 billion in size, as systemically important. Or as the lingo goes, they get designated SIFI status. Those firms have stronger capital requirements and stronger requirements to be able to declare themselves ready for bankruptcy or FDIC resolution if they fail.
Conservatives, from the beginning, have made this the centerpiece of their story about Dodd-Frank. They argue that SIFI status is a de facto permanent bailout and claim that firms will demand to be designated as SIFIs because it means they will have a favored status. This status gives them easy crony relationships with regulators and allows them to borrow cheaply in the credit markets.
This has become doctrine on the right; I can’t think of a single movement conservative who has said the opposite. Examples of the mantra range from Peter Wallison of AEI writing “[t]he designation of SIFIs is a statement by the government that the designated firms are too big to fail” to Reason’s Nick Gillespie repeating that “everyone agrees [Dodd-Frank] has simply reinscribed too big to fail as explicit law.” (I love an “everyone agrees” without any sourcing.)
It’s also the basis of proposed policy. The Ryan budget cancels out the FDIC’s ability to regulate SIFIs, stating that Dodd-Frank “actually intensifies the problem of too-big-to-fail by giving large, interconnected financial institutions advantages that small firms will not enjoy.”
If that’s the case, GE should be desperate to maintain its SIFI status even though it is spinning off its GE Capital line. After all, being a SIFI means it gets all kinds of favored protections, access, and credit relative to other firms.
But instead, GE is desperate to lose it. This is genuine; ask any financial press reporter or analyst, and they’ll tell you that GE is very sincere when it says it doesn’t want to be designated as risky anymore, and that it is willing to take appropriate measures to remove the designation.
If that’s the case, what’s left of the GOP argument?
In JPMorgan’s latest shareholder newsletter (p. 30-34), Jamie Dimon walks through a narrative of the next financial crisis and why we should be worried about it. But instead of worrying, I think it points to interesting details of what we’ve learned from the last crisis, what we evidently haven’t learned, and where we should go next.
Here’s Matt Levine’s summary. Dimon makes two arguments: First, the new capital requirements, especially the liquidity coverage ratio (LCR) that requires banks to fund themselves with enough liquidity to survive a 30-day crisis, will be procyclical. This means they will bind the financial sector more tightly in a crisis and prevent it from being a backstop. This is made even worse by his second argument, which is that there’s a safe asset shortage. Each individual bank is much safer than before the crisis, but using safe assets to meet the LCR means there will be fewer out there to provide stabilization when a crisis hits.
To use Dimon’s language, “there is a greatly reduced supply of Treasuries to go around – in effect, there may be a shortage of all forms of good collateral” in a crisis. Meanwhile, new capital requirements, especially the LCR, mean that in a crisis banks won’t want to lend, roll over credit, or purchase risky assets, because they would be violating the new capital rules. As such, “it will be harder for banks either as lenders or market-makers to ‘stand against the tide’” and to serve as “the ‘lender of last resort’ to their clients.”
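The LCR mechanics Dimon is objecting to are simple in outline: a bank must hold high-quality liquid assets (HQLA) at least equal to its projected net cash outflows over a 30-day stress window. A stylized check, ignoring the rule's actual haircut tiers and inflow caps, with invented balance-sheet numbers:

```python
def lcr(hqla, stressed_outflows_30d):
    """Liquidity coverage ratio: HQLA over projected 30-day net outflows.
    Basel III requires this to be at least 1.0 (100 percent)."""
    return hqla / stressed_outflows_30d

# A bank sitting just above the floor...
before = lcr(hqla=105.0, stressed_outflows_30d=100.0)  # 1.05: compliant
# ...cannot deploy much of its HQLA into a panic without breaching the rule,
# which is the core of Dimon's procyclicality complaint.
after = lcr(hqla=95.0, stressed_outflows_30d=100.0)    # 0.95: in breach
```

As discussed below, though, the final rule explicitly contemplates banks drawing down HQLA in genuine stress, so the binding-in-a-crisis story is weaker than it first appears.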
What should we make of the fact that Dimon’s target is the LCR, an important new requirement under constant assault by the banks? Four points jump out.
The first is that the idea that we should weaken capital requirements so banks can be the lender of last resort in a financial crisis is precisely what was disproven during the 2008 panic. One reason people use the term “shadow banking” to describe this system is that it has no actual means of providing liquidity and the backstops necessary to prevent self-fulfilling panics, and that was demonstrated during the recent crisis.
Rather than financial firms heroically standing against the tide of a financial panic, they all immediately ran for shelter, forcing the Federal Reserve to stand up instead and create a de facto lender-of-last-resort facility for shadow banks out of thin air.
It’s good to hear that Dimon feels JPMorgan can still fulfill this function in the next crisis, if only we weakened Basel. But we’ve tried before to let financial firms act as the ultimate backstop to the markets while the government got out of the way, and it was a disaster. Firms like AIG wrote systemic risk insurance they could never pay; even interbank lending collapsed in the crisis.
This is precisely why we need to continue regulating the shadow banking sector and reducing reliance on, and risks in, the wholesale short-term funding markets, and why the Federal Reserve should actually write the rules governing emergency liquidity services instead of ignoring what Congress has demanded of it. No doubt there needs to be a balance, but if anything we are counting too much on the shadow banking sector to be able to take care of itself, not too little.
As a quick, frustrating second point, it’s funny that regulators bent over backwards for the financial industry in addressing these issues with LCR, and yet the industry won’t give an inch in trying to dismantle it. That LCR is meant to adjust in a crisis and that the funds would be available for lending was emphasized when regulators weakened the rule under bank pressure, and it is explicitly stated in the final rule (“the Basel III Revised Liquidity Framework indicates that supervisory actions should not discourage or deter a banking organization from using its HQLA when necessary to meet unforeseen liquidity needs arising from financial stress that exceeds normal business fluctuations”).
If risk-weighting is too procyclical, which requires several logical leaps in Dimon’s arguments, the solution is to adjust those rules while raising the leverage ratio, not to pretend that the financial sector would be a sufficient ultimate backstop. Bank comments on tough rules like LCR are less give-and-take and more take-and-take.
But the third point is more interesting. Beyond whether or not the rules are too procyclical and unnecessarily restrictive in a crisis, there’s Dimon’s claim that there aren’t enough Treasuries to go around. If that’s the case, why don’t we simply make more Treasury debt? If the issue is a shortage of the Treasuries needed to keep the financial sector well-capitalized and safe, it’s quite easy for us to make more government debt. And right now, with low interest rates and a desperate need for public investment, strikes me as an excellent time to do just that. Dimon is implicitly correct on one point: the idea that financial markets, with enough financial engineering and private-market backstopping, can produce genuinely safe assets is a complete sham. This is a role for the government.
And for fun, a fourth point from Ben Walsh: Dimon says one of the biggest threats to the financial markets is that there isn’t enough U.S. debt. From January 2011: “Dimon Says Government Deficits, Spending Are New Global Risk.” Dimon warned then that trillion-dollar deficits risked a major rise in interest rates in the years following 2011. How did that turn out?
Imagine how much worse shape we’d be in if we’d listened to Dimon.
So just as a friendly reminder: not only would more federal debt issued at incredibly low rates do cool things like rebuild schools, fix bridges, and give money to poor people, it would also serve as an important element of reducing the risks of the next financial crisis. This federal debt seems like a pretty useful thing to have around.
John Oliver dedicated his main segment on last Sunday’s episode to the epidemic of municipal fees. He walks through several stories about tickets and citations that are overpriced and end up being more expensive for poor people because of a series of burdensome fees. This was one of the conclusions of the Justice Department’s report on Ferguson, which argued that “law enforcement practices are shaped by the City’s focus on revenue rather than by public safety needs.”
Oliver had a memorable phrase to describe how this system catches people and won’t let them go: he called it a “f*** barrel,” and started a NSFW hashtag on Twitter to draw attention to it.
But I had actually heard a similar (and safe-for-work) phrase for this years ago: the “sweat box.” Law professor Ronald Mann coined it in 2006 to describe how the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 (BAPCPA) would affect consumer debt, and it applies to the criminal justice system now. The problems with this system also sound like the problems in mortgage debt servicing, which has been a focus here. It turns out that these issues are generalizable, and they illustrate some of the real dilemmas with privatization and introducing the profit motive into the public realm.
The Sweat Box
First, the barrel/box. Credit card companies and other creditors really wanted BAPCPA to become law. But why? Mann argued that the act wouldn’t reduce risky borrowing, reduce the number of bankruptcies, or increase the recoveries these companies got in bankruptcy.
But what it would do is make it harder to start a bankruptcy, thanks to a wide variety of delaying tactics. The act did this “by raising filing fees, but also by lengthening the period between permitted filings and by imposing administrative hurdles related to credit counseling, debt relief agencies, and attorney certifications.” This kept distressed debtors in a period where they faced high fees and high interest payments, which would allow the credit card companies to collect additional revenue. Instead of trying to alter bankruptcy on the front or back ends, what it really did was give consumers fewer options and more confusion in the middle. It trapped them in a box (or over a barrel, if you will).
But this will also sound familiar to those who watched the servicer fraud scandals unfold during the foreclosure crisis of the past seven years. Servicers are the delegated, third-party managers of debts, particularly mortgage securitizations but also student debt. They sound disturbingly similar to the companies Oliver describes as managing municipal fees.
As Adam Levitin and Tara Twomey have argued, third-party servicing introduces three major agency problems. The first is that servicers are incentivized to pad costs, as costs are their revenues, even at the expense of everyone else. The second is that they will often pursue their own goals and objectives at the expense of other options, especially when they don’t ultimately care about the overall goals of those who hire them. And a third problem is that when problems do occur, they are often incentivized to drag them out rather than resolve them in the best way possible.
Among other heart-breaking stories, Oliver walks through the story of Harriet Cleveland, who had unpaid parking tickets with Montgomery, Alabama. Montgomery, however, outsourced the management of this debt to Judicial Correction Services (JCS). JCS followed this script perfectly.
JCS had every reason to increase its fees and keep them at a burdensome rate, as it was to be paid first. It was completely indifferent to the public goals of the city that hired it, such as proportional justice or the cost-benefit ratio of incarceration, and it threw Cleveland in jail once she couldn’t handle the box anymore. And it economically benefited from keeping Cleveland in the sweat box as long as possible, rather than trying to find some way to actually resolve the tickets.
For those watching the mortgage servicing industry during the foreclosure crisis, this is a very similar story. Mortgage servicers can pyramid nuisance fees knowing that, even if the loan goes into foreclosure when the debtor can’t handle the box, they will be paid first. They are ultimately indifferent to the private notion of maximizing the value of the loan for investors, so much so that, compared to traditional banks that hold loans directly, servicers are less likely to do modifications and do them in a way that will work out. And servicers will often refuse to make good modifications that would get the mortgage current, because doing so can reduce the principal that forms the basis of their fees.
The Perils of the Profit Motive
There are three elements to draw out here. The first is that these problems are significantly worse for vulnerable populations, particularly those whose exit options are limited by background economic institutions like bankruptcy or legal defense. The second is that many of our favorite buzzword policy goals, be they privatization of public services or the market-mediation of credit, involve piling on more and more of these third-party agents whose interests and powers aren’t necessarily aligned with what those who originally hired them expected. Assuming good faith for a second, privatization of these carceral services by municipalities requires a level of control over third-party agents that even the geniuses on Wall Street haven’t been able to pull off.
But we see the sweat box in purely public mechanisms too, as in Ferguson. So the third takeaway is that this is what happens when the profit motive is introduced in places where it normally doesn’t exist. Introducing the profit motive requires delegation and coordination, and it can often cause far more chaos than whatever efficiencies it is meant to produce. Traditional banking serviced mortgage debts as part of the everyday functions within the firm. Putting that function outside the firm, where the profit motive was meant to increase efficiency, also created profit-driven incentives to find ways to abuse that gap in accountability.
The same dynamics come into play when the profit motive is reintroduced at the municipal level. Our government ran under the profit motive through the 1800s, and it was a major political struggle to change that. Municipal fees are very much part of the reintroduction of the profit motive into city services. As libertarian scholar and Reason Foundation co-founder Robert Poole wrote in 1980 regarding municipal court costs, “Make the users (i.e., the criminals) pay the costs, wherever possible.” As Sarah Stillman found, this is what an “offender-funded” justice system, one that aims “to shift the financial burden of probation directly onto probationers,” looks like now as for-profit carceral service providers shift their businesses to probation and parole. Catherine Rampell reports this as a wholesale shift away from taxes and toward fees for public revenues, and the data backs it up.
This is the model of the state as a business providing services, one in which those who use or abuse its functions should fund it directly. And it’s a system that can’t shake the conflicts inherent whenever the profit motive appears.
Live at The Nation: Free Trade Isn’t about Trade. It’s About Bureaucrats—and Guns. Free trade agreements like the TPP have provisions that are designed less for trade and more for replacing public bureaucrats with private, corporate ones. I think there’s a lot out there about the corporate welfare elements of the TPP, which are definitely true, but I think this element of who has the final say over how our economies are regulated is equally important. Check it out!
About a month ago, I interviewed the great historian Eric Foner for The Nation about his Civil War and Reconstruction MOOC, the experience of teaching those time periods to students, and his work’s relationship to the left now. I forgot to post it here; I’m doing so now because the third and final part of the MOOC, The Unfinished Revolution: Reconstruction and After, 1865-1890, has just started and you can still sign up. (All the lectures will eventually end up on YouTube. Here are links to the first class on the buildup to the Civil War and the second class on the Civil War itself.)
Foner is a great lecturer, and the lectures are his class, the final time he teaches it at Columbia, recorded and broken up into segments. It’s especially awesome to get to sit in on Foner lecturing about Reconstruction, given that he wrote the book that still defines the period. I highly recommend checking it out.
Is Expanding Medicaid an Essential Part of Reducing Mass Incarceration? An Interview with Harold Pollack
Every policy lever available was pulled in order to create our system of mass incarceration over the past 40 years. Reformers will have to be equally clever and nimble in trying to challenge and dismantle this system. And one important lever that I hadn’t thought much about in this context is the Affordable Care Act’s (ACA, or Obamacare) expansion of Medicaid. This expansion is being blocked in 22 states, which is preventing 5.1 million Americans from getting health care.
This came up in an excellent interview between Connor Kilpatrick and the political scientist and incarceration scholar Marie Gottschalk over at Jacobin. Commenting on the limits of the current wave of bipartisan support against incarceration, Gottschalk notes that “If you care about reentry and about keeping people out of prison in the first place, there’s no public policy that you should support more strongly now than Medicaid expansion. Medicaid expansion gives states huge infusions of federal money to expand mental health services, substance abuse treatment, and medical care for many of the people who are most likely to end up in prison. It also allows states and localities to shift a significant portion of their correctional health care costs to the federal tab.” Similar concerns were raised by Elizabeth Stoker Bruenig at The New Republic.
I immediately got Gottschalk’s new book Caught, the subject of the Jacobin interview, and though I’ve just started it, I highly recommend it as a guide to where the prison state stands in 2015. But I wanted to know more about the relationship between Medicaid and deincarceration.
So I reached out to friend-of-the-blog Harold Pollack. Pollack is the Helen Ross Professor at the School of Social Service Administration at the University of Chicago. He is also Co-Director of the University of Chicago Crime Lab. He has published widely at the interface between poverty policy and public health, and he also writes for a wide variety of online and print publications. He is a thoughtful scholar on health care and crime policy and how they interact in communities.
Mike Konczal: How important is the Medicaid expansion for deincarceration?
Harold Pollack: I’m convinced that Medicaid expansion is essential for this problem. It’s essential for two different purposes. First, individuals in this population need health services, and there needs to be a clear way that individuals can get access to services from qualified providers. The Medicaid expansion does that.
Secondly, the entire ecosystem of care requires proper financing. And for historical reasons, mental health and substance abuse services have been put into their own silos. They are not properly financed, except through a patchwork of safety net funding streams that don’t particularly work well. They have also been poorly-integrated with standard medical care.
Let’s talk about individuals first. In what ways could Medicaid benefit people who are or are likely to get caught up in the criminal justice system?
Think about who was not eligible for Medicaid before health reform: a low-income male who is not a veteran or a custodial parent, and who doesn’t qualify for Ryan White HIV/AIDS benefits. They may have a serious substance abuse problem, but that wouldn’t qualify them for federal disability benefits. With the expansion, they can get access to Medicaid simply because they are poor.
The criminal justice population is quite varied, but there are a couple of key areas in which Medicaid expansion would be especially beneficial for them. With the expansion, Medicaid can now cover basic outpatient substance abuse treatment. This is true for both Medicaid and private insurance after health reform. And ACA provides these services in a way that is much more integrated with people’s regular medical care.
One basic challenge with drug and alcohol treatment is that these services are in a separate system that people don’t want to use, and don’t use. With the Medicaid expansion, you can go to a neighborhood clinic and they can help you get Methadone or Suboxone. They can also get you the psychiatric care you need within the same umbrella of your regular care. So it is much more likely that people will use it.
There’s very good evidence that alcohol and illicit drug treatment reduces criminal offending. [Editor note: Both this study and this study, obtained via follow-up email, show treatment reduces violent and property crime enough to easily pass a cost-benefit test.] It partly reduces criminal offending by reducing the need to commit property crimes to get the substances. It also reduces offending by allowing people to be more functional, and thus more likely to stay employed. Especially in the case of alcohol, people getting their substance abuse under control makes it less likely that they’ll be intoxicated, and thus less likely to commit crimes or be victims of crime.
What about those with mental illness?
When it comes to those with serious mental illness, we end up using local jails to try to manage them. It’s important that they can get access to help and mental health treatment outside of the criminal justice system itself. It’s ironic that when someone with psychiatric disorders is inside the jail, they do have access to some of these services. But those services are often unavailable or totally disconnected when they leave the jail.
We don’t really know whether, or by how much, these services can be expected to reduce offending among this group. This remains a hypothesis that depends on how well we actually implement programs. Much will depend on how effectively we can implement Medicaid expansion.
How does this element of Medicaid deal with the traditional criticisms of the program?
Medicaid has many shortcomings. It doesn’t pay a market rate for important services. But for all of its faults, Medicaid recipients are grateful to have it. The satisfaction they have is quite high compared to traditional health insurance. Medicaid gives people access to the basic health care that they need to stay healthy and improve their lives. It is also genuinely designed for people who have no money, which is really important for these indigent populations. Medicaid is inferior to private insurance in terms of reimbursement to providers, but it’s better for really poor people than any private insurance I’ve seen, because it’s been road tested for a long time in meeting the needs of indigent people.
And as I mentioned, ACA is especially important, because the ACA includes very specific components in the area of mental health and substance use.
One thing I’ve noticed is that for all the talk about ending mandatory minimums, most of the real energy is about giving judges flexibility to ignore mandatory minimums. But that puts a lot of pressure on keeping recidivism down, because judges, especially elected ones, won’t ignore long records.
Deincarceration requires the puzzle pieces to fit together to be sustainable and politically tenable. That requires that we deal with the real-life problems people face when they are released. It requires monitoring, and it requires that people have access to services, both to improve their quality of life and to reduce the probability that they will reoffend.
If we just release people without support services, my fear is that it will not go well. Then it will ultimately generate political backlash. I’m very heartened that we are reducing the mandatory minimums, in particular for older offenders who tend to be less violent. It’s essential that we address the excessive sentencing. But we also have to do what we need to do to make this effective.
Even if judges can reduce sentencing, they are ultimately dependent on the available resources to help and monitor the people that come before them. And if judges don’t see those services, then they aren’t going to use their discretion to release many of these people as early as they might.
And if property crimes are being committed by people under criminal justice supervision, and they have a history of violent offending, then they are much more likely to be sent back with pretty serious sanctions.
Tell me more about the second issue, how the ACA rationalizes the funding stream for these services.
We’ve had a messy system in the past, and we’ll ultimately rationalize it under Medicaid. Safety net providers for substance abuse and mental illness have always been paid for by a patchwork of public funding through obscure agencies and local governments. Access has always been inadequate, with long waiting lists, and the services provided were often quite forbidding. Given this separate funding, it’s very difficult to integrate these services with people’s overall health care. When you have these silos of places to go, with one for mental health, another silo for substance abuse, and another for safety net health care, that person isn’t going to get the integrated care they really need. The ACA is trying to bring those things together.
Many of these issues will still be in play going forward, but they will be in the context of a coherent system that at least addresses these issues within the context of broader health care.
I have a new Score column up: Why Are Liberals Resigned to Low Wages? It deals with the three key political institutions that are responsible for wages remaining low, both over the past generation and in the Great Recession. It also tries to understand “liberal nihilism”, or the weird glee that results when wages aren’t seen as also having an institutional component to them, and thus are no longer a political challenge.
I hope you check it out!
A blog post responding to a blog post responding to a blog post. Who says the blogosphere is dead?
Recently I wrote about Larry Summers demolishing an argument about robots and our weak recovery on a panel. Jim Tankersley called up Summers to further discuss the topic, and put his interview online as a response meant to correct and expand on my post. But I don’t think we disagree here, and if anything Summers’ interview shows how much the consensus has changed.
Before we continue, I should clarify what we are talking about. When people talk about “the robots,” they are really telling one of three stories:
1. Technology has played an important role in the economic malaise of the past 35 years, broadly defined as a mix of stagnating median wages, increased inequality, and weakening labor-force participation.
2. The Great Recession has led to such a weak and lackluster recovery in large part because of technology. In one version of this story, technology is simply taking all the jobs that would normally be found in a recovery. As the AP put it, “Five years after the start of the Great Recession, the toll is terrifyingly clear: Millions of middle-class jobs have been lost… They’re being obliterated by technology.” (President Obama himself often mentioned this story throughout the dark period when unemployment was much higher.)
Another, more popular, version is that workers simply don’t have the skills required for a high-technology labor force. A representative quote from the Atlanta Fed President Dennis Lockhart in 2010: “the skills people have don’t match the jobs available. Coming out of this recession there may be a more or less permanent change in the composition of jobs.”
3. We are moving to a post-work economy, one where robots substitute for human labor in massive numbers and fundamentally change society. Here’s an example. We may or may not be seeing the first hints of such a change now, depending on the story.
The story I said Summers (as well as David Autor) demolished is the second. There’s no evidence that we are having a technology renaissance right now, or that technology has contributed in a major way to the weak recovery, or that a skills gap or other educational factor is holding back employment, or that highly skilled workers are having a great time in the labor market. The arguments against this story from the original post are pretty damning, and Summers either reiterates them or doesn’t walk them back in the Washington Post column. (Let’s leave the third story to science fiction speculation for now, noting that the second story getting demolished means it isn’t happening now, and that it’s hard to imagine robot innovation when labor is so cheap and abundant.)
However, Summers does argue for the first story as well, the one in which technology has played a role in the malaise of the past 30 years. As he tells the Post, “In the 1960s, about 1 in 20 men between the age of 25 and 54 was not working. Today, the number is more like 1 in 6 or 1 in 7. So we have seen some troubling long-term trends, and they appear to be continuing trends.” Summers also notes, “to say that technology is important is not to say that technology is the only important factor, or even that it is the dominant factor.” He mentioned this at the conference as well; Brad DeLong and Marshall Steinbaum noted it in their posts.
Intellectual Portfolio Rebalancing
When we think of the economic malaise of the past 30 years, we should probably think of it as a combination of technology, globalization, sociology, and public policy. Tankersley wants to emphasize technology as a piece of this story, and I agree it should be there.
But here’s what I find interesting. Whenever we have a portfolio of ideas, some ideas get more weight than others. And what strikes me about this conversation is how much technology and skills have been deemphasized relative to other stories since the Great Recession, especially those of public policy.
This is a pretty quick and important change. Almost ten years ago, Greg Mankiw could write, “Policy choices […] have not been the main causes of increasing inequality. At least that is the consensus, as I understand it, of the professional labor economists who study the issue.” Brad DeLong also said in 2006 that he “can’t see the mechanism by which changes in government policies bring about such huge swings in pre-tax income distribution.” Skill-biased technical change (SBTC) and technology were assumed to cover the entire inequality story.
That consensus is weaker now than it was then. Certainly the argument for SBTC, while always shaky, has taken a hit. You can see it with Summers himself in the Washington Post, where he notes that “changing patterns of education is unlikely to have much to do with a rising share of the top 1 percent, which is probably the most important inequality phenomenon.”
Meanwhile, more and more inequality research is focused on institutional factors, ranging from marginal tax rates to the minimum wage to the inefficiency and growth of the financial sector to deunionization. And as the Mankiw quote hints, 10 years ago you’d have been less likely to hear it said with such confidence that, as Summers tells the Washington Post, a “combination of softer labor markets and the growing importance of economic rents” is an essential part of the inequality story. I read that as a major change in the consensus.
This is a major rebalancing of our intellectual portfolio of inequality stories, a change that I think is opening up a much richer and more accurate description of what has happened. I hope the research and conversation continue this way.
So excited to be launching our new Financialization Project. Check out the website here. Part of the goal of the project is to define financialization, and we’ve focused on the changes to savings, power, wealth, and society that have occurred over the past 35 years. We’ll have more there soon, but for now check out the general idea here.
We’re also releasing our first paper, “Disgorge the Cash: The Disconnect Between Corporate Borrowing and Investment,” by Roosevelt fellow J.W. Mason. There’s a great writeup of the paper by Lydia DePillis – “Why companies are rewarding shareholders instead of investing in the real economy” – at the Washington Post.
There’s a ton in there, from the key intellectual, ideological, legal, and institutional changes that brought about the shareholder revolution, to reasons to doubt a credit crunch has played any kind of role in the Great Recession. But the core of it is told in these two graphs, dug out from detailed Compustat data:
The first figure shows that a firm borrowing $1 would correlate with an additional 40 cents of investment before the 1980s. Since the 1980s that has collapsed. Today, there is a strong correlation between shareholder payouts and borrowing that did not exist before the mid-1980s. Since the 1980s, shareholder payouts have nearly doubled; in the second half of 2007, aggregate payouts actually exceeded aggregate investment.
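The era-by-era correlation described above can be illustrated with a stylized sketch on synthetic data. Everything below is hypothetical: the variable names, sample sizes, and noise levels are stand-ins, and the two slopes (0.40 before the 1980s, near zero after) are simply built into the fake data to mirror the relationship the paper documents in Compustat; this is not the paper’s actual data or code.

```python
import random

random.seed(0)

def era_data(beta, n=500):
    # Hypothetical firm-year observations: borrowing and investment,
    # both scaled by assets. `beta` is the built-in cents-of-investment
    # per dollar-of-borrowing relationship for that era.
    rows = []
    for _ in range(n):
        borrowing = random.gauss(0.05, 0.02)
        investment = 0.06 + beta * borrowing + random.gauss(0, 0.01)
        rows.append((borrowing, investment))
    return rows

def slope(rows):
    # OLS slope of investment on borrowing: cov(x, y) / var(x)
    n = len(rows)
    mx = sum(x for x, _ in rows) / n
    my = sum(y for _, y in rows) / n
    cov = sum((x - mx) * (y - my) for x, y in rows)
    var = sum((x - mx) ** 2 for x, _ in rows)
    return cov / var

pre_1980s = era_data(beta=0.40)   # $1 of borrowing -> ~40 cents of investment
post_1980s = era_data(beta=0.05)  # the relationship collapses

print(round(slope(pre_1980s), 2))
print(round(slope(post_1980s), 2))
```

The recovered slopes land near the values baked in, which is the shape of the paper’s first figure: a strong pre-1980s borrowing-investment link, and almost none afterward.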
I think there’s reason for some skepticism about how fast things are actually moving. There’s a lot of aggregate data that don’t support the idea that the labor market or the economy is changing as rapidly as this very dramatic story suggests. The premium to higher education has plateaued over the last 10 years. If anything, we see evidence that highly skilled workers have less rapid career trajectories and are moving into less-skilled occupations. Productivity is not growing very rapidly, and a lot of the employment growth of the past 15 years has been in relatively low-education, in-person service occupations.
The second point I want to make: when we think about how technology interacts with the labor market, we think of the substitution of labor with machinery. […] What is neglected is that it complements us as well. Many activities require a mixture of things: information processing and creativity, motor power and dexterity. If those things need to be done together, then making one cheaper and more productive increases the value of the other.
On the one hand, we have enormous anecdotal and visual evidence that points to technology having huge and pervasive effects. Whether it is complementing workers and making them much more productive in a happy way, or substituting for them and leaving them unemployed, can be debated. In either of those scenarios you would expect it to be producing a renaissance of higher productivity. So, on the one hand, we are convinced of the far greater pervasiveness of technology in the last few years, and, on the other hand, the productivity statistics over the last dozen years are dismal. Any fully satisfactory view has to reconcile those two observations, and I have not heard it satisfactorily reconciled.
If you believe technology happens with a big lag and it’s only going to happen in the future, that’s fine. But then you can’t believe it’s already caused a large amount of inequality and disruption of employment today. […] Let’s take retailing. You can imagine all kinds of spiffy technology so you no longer have to have people behind cash registers. The problem is you wouldn’t expect the people behind the cash registers to get fired before the people working on the new systems got those systems working. […] I understand why it might take years for it all to have an effect. What I have a harder time understanding is how there can be substantial disemployment ahead of the effect on productivity. That is, if you thought that it was just impossible to put in these systems and so forth, then you might think that in the short run there would be a big employment boom. You have to keep your old legacy system going, and you have to have a million guys running around figuring out how to put the new computer system in.
I think the [education] policies that Aneesh is talking about are largely whistling past the graveyard. The core problem is that there aren’t enough jobs. If you help some people, you could help them get the jobs, but then someone else won’t get the jobs. Unless you’re doing things that affect the demand for labor, you’re helping people win a race for a finite number of jobs. […] Folks, wage inflation in the United States is 2%. It has not gone up in five years. There is not 3% of the economy where there’s any evidence of the kind of hyper wage inflation that would go with worker shortages. The idea that you can just have better training and then there are all these jobs, all these places where there are shortages and we just need to train people, is fundamentally an evasion. […] I am concerned about allowing the idea to take hold that all we need to do is train people a bit, and then they’ll be able to get into all these jobs requiring skills, and the whole problem will go away. I think that is fundamentally an evasion of a profound social challenge.
What we need is more demand, and that goes to short-run cyclical policy, more generally to how we operate macroeconomic policy, and the enormous importance of having tighter labor markets, so that firms have an incentive to reach for workers, rather than workers having to reach for firms. […] I think the broad empowerment of labor matters: in a world where an increasing part of the economy is generating income that has a kind of rent aspect to it, the question of who’s going to share in it becomes very large. One of the puzzles of our economy today is that on the one hand, we have record low real interest rates, which are expected to be record low for 30 years if you look at the index bond market. And on the other hand, we have record high profits. You’d tend to think record high profits would mean record high returns to capital, which would also mean really high interest rates. And what we actually have is really low real interest rates. The way to think about that is there’s a lot of rents in what we’re calling profits that don’t really represent a return to investment, but represent a rent. The question then is who’s going to get those rents? That goes to the minimum wage, goes to the power of unions, goes to the presence of profit sharing, goes to the length of patents and a variety of other government policies that confer rents, and then, when those are received, goes to the question of how progressive the tax and transfer system is. That has got to be a very, very large part of the picture.
If we had the income distribution in the United States that we did in 1979, the top 1% would have $1 trillion less today, and the bottom 80% would have $1 trillion more. That works out to about $700,000 a family for the top 1%, and about $11,000 a year for a family in the bottom 80%. That’s a trillion dollars. I don’t know what the number is, but my guess is that the total cost of the Earned Income Tax Credit is $50 billion. Nobody’s got doubling the Earned Income Tax Credit on the policy agenda. The big, aggressive agendas are probably to increase it by a third or a half. So, I’m all for it, but we are talking about 2.5% of the redistribution that has taken place. So, you have to be looking for things, and there’s no one thing that is going to do it. My reading of the evidence, and it’s fairly general evidence, is that while there may be some elasticity, the elasticity around the current level of the minimum wage is very low.
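Summers’s arithmetic here can be roughly reproduced with a back-of-the-envelope sketch. This is my own check, not part of the talk; the 125 million family count is an assumption chosen to make the figures come out in the cited range:

```python
# Back-of-the-envelope check of the redistribution arithmetic above.
# The 125 million family count is an assumption, not a figure from the talk.
families = 125_000_000
shifted = 1_000_000_000_000  # $1 trillion moved relative to the 1979 distribution

per_top_family = shifted / (families * 0.01)     # spread over the top 1% of families
per_bottom_family = shifted / (families * 0.80)  # spread over the bottom 80%

eitc_cost = 50_000_000_000        # Summers's ~$50 billion guess for the EITC
aggressive_boost = eitc_cost / 2  # increasing it "by a third or a half"
share = aggressive_boost / shifted

print(per_top_family)     # 800000.0 -- same order of magnitude as the $700,000 cited
print(per_bottom_family)  # 10000.0 -- close to the $11,000 cited
print(share)              # 0.025 -- Summers's "2.5% of the redistribution"
```

The point of the exercise survives any reasonable choice of family count: even an aggressive EITC expansion is a rounding error next to the shift in the income distribution.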
We may need an increase in the earned income tax credit, not only for those who receive it at the present time but perhaps much further up the income scale. Measures that facilitate collective bargaining can result in a broader participation in the benefits of productivity and growth […] If we have ever more rapid technological development and it is labor displacing, at some point in the future — as I say, that may be some distant point in the future — should that lead to some basic change in our lifestyles, with less work, more leisure and a richer, more robust use of that leisure? […] In addition to everything that needs to be done to enhance growth, tighten labor markets and improve the position of middle- and lower-income workers, should there be increased redistribution to accomplish the broad objectives of our society?
According to a new study by Marcus Hagedorn, Iourii Manovskii and Kurt Mitman (HMM), Congress failing to reauthorize the extension of unemployment insurance (UI) resulted in 1.8 million additional people getting jobs. But wait, how does that happen when only 1.3 million people had their benefits expire?
The answer is by going off the normal path of these arguments in models, techniques and data. The paper has a nice write-up by Patrick Brennan here, but it’s one that doesn’t convey how different this paper is from the vast majority of the research. The authors made a much-criticized splash in 2013 by arguing that most of the rise in unemployment in the Great Recession was UI-driven; this new paper is a continuation of that approach.
Gold Standard Model. Before we go further, let’s understand what the general standard in UI research looks like. The model here is that UI makes it easier for workers to pass up job offers. As a result they’ll take a longer time to find a job, which creates a larger pool of unemployed people, raising unemployment. In order to test this, researchers use longitudinal data for individuals to compare the length of job searches for individuals who receive UI with those who do not.
This is the standard in the two biggest UI studies from the Great Recession. Both essentially use individuals not receiving UI as a control group to see what getting UI does for people’s job searches over time. Jesse Rothstein (2011) found that UI raised unemployment “by only about 0.1 to 0.5 percentage point.” Using a similar approach, Farber and Valletta (2013) later found “UI increased the overall unemployment rate by only about 0.4 percentage points.” These are generally accepted estimates.
And though small, they are real numbers. The question then becomes an analysis of the trade-offs between this higher unemployment and the positive effects of unemployment insurance, including income support, increased aggregate demand and the increased efficiency of people taking enough time to get the best job for them.
This is not what HMM do in their research, whether in terms of their data, which doesn’t look at any individuals; their model, which tells a much different story than the one we traditionally understand; or their techniques, which add additional problems. Let’s start with the model.
Model Problems. The results HMM get are radically higher than these other studies. They argue that this is because they look at the “macro” effects of unemployment insurance. Instead of just people searching for a job, they argue that labor-search models show that employers must boost the wages of workers and create fewer job openings as a result of unemployment insurance tightening the labor market.
But in their study HMM only look at aggregate employment. If these labor search dynamics were the mechanism, there should be something in the paper about actual wage data or job openings moving in response to this change. There is not. Indeed, their argument hinges entirely on the idea that the labor market was too tight, with workers having too much bargaining power, in 2010-2013. The end of UI finally relaxed this. If that’s the case, then where are the wage declines and corporate profit gains in 2014?
This isn’t an esoteric discussion. They are, in effect, taking a residual and calling it the “macro” effect of UI. But we shouldn’t take it for granted that search models can confirm these predictions without a lot of different types of evidence; as Marshall Steinbaum wrote in his appreciation of these models, when it comes to business cycles and wage predictions they are “an empirical disaster.”
Technique Problems. The model’s vagueness is amplified by the control issue. One of the nice things about the standard model is that people without UI make a nice control group for contrast. Here, HMM simply compare high-UI and low-UI duration states and then counties, without looking at individuals. They argue that since the expiration was done by Congress, it is essentially a random change.
But a quick glance shows their high benefits states group had an unemployment rate of 8.4 percent in 2012, while their low benefits states had an unemployment rate of 6.5 percent. Not random. As the economy recovers, we’d naturally expect to see the states with a higher initial unemployment rate recover faster. But that would just be “recovery”, not an argument about UI, much less workers’ bargaining power.
Data Problems. Their county-by-county analysis is meant to cover for this, but the data is problematic here. As Dean Baker notes in an excellent post, the local-area data they use is noisy, is muddled by whether workers are counted in the state where they work or where they live, and is largely model-driven. The fact that much of it is model-driven is problematic for their cross-state county comparisons.
Baker replaces their employment data with the more reliable CES employment data (the headline job-creation number you hear every month) and finds the opposite headline result.
It’s not encouraging that you can get the opposite result by changing from one data source to another. Baker isn’t the first to question the robustness of these results to even minor changes in the data. The Cleveland Fed, on an earlier version of their argument, found their results collapsed with a longer timeframe and excluding outliers. The fact that the paper doesn’t have robustness tests to a variety of data sources and measures also isn’t encouraging.
So: data problems, control problems, and the vague sense that this is just them finding a residual and attributing all of it to their “macro” element without enough supporting evidence. Rather than overturning the vast research already done, I think it’s best to conclude as Robert Hall of Stanford and the Hoover Institution did for their earlier paper with a similar argument: “This paper has attracted a huge amount of attention, much of it skeptical. I think it is an imaginative and potentially important contribution, but needs a lot of work to convince a fair-minded skeptic (like me).” This newest version is no different.
President Obama is going big on capital taxation in the State of the Union tonight, including a proposal to raise dividend taxes on the rich to 28 percent. The President is probably not going to frame this as a move away from the George W. Bush economy, but Bush’s radical cuts to capital taxes are part of his legacy that we are still living with. And it’s a part that the latest evidence tells us did a lot to help the rich without helping the overall economy at all.
In the response to Obama’s proposal, you are going to hear a lot about how lower dividend rates increase investment and help the real economy. Indeed, lowering capital tax rates has been a consistent goal of conservatives. As a result, one of the biggest capital taxation changes in history happened in 2003, when George W. Bush reduced the dividend tax rate from 38.6 percent to 15 percent as part of his rapid and expansive tax cut agenda.
There’s been a lot of research about the effect of this massive dividend tax cut on payouts to shareholders (kicked off by an important 2005 Chetty-Saez paper), but very little on its effect on the real economy. Did slashing the dividend tax rate boost corporate investment, perhaps because it made funding projects easier? We don’t know, and it’s not because economists aren’t interested; it’s because it’s very difficult to construct a control group with which to compare the results. Investment increased after 2003, but it likely would have to some degree independently of the dividend tax cut, as we were coming out of a recession. So did the tax cut make a difference?
This is where UC Berkeley economist Danny Yagan’s fantastic new paper, “Capital Tax Reform and the Real Economy: The Effects of the 2003 Dividend Tax Cut,” (pdf, slides) comes in. He uses a large amount of IRS data on corporate tax returns to compare S-corporations with C-corporations. Without getting deep into tax law, C-corporations are publicly-traded firms, while S-corporations are closely held ones without institutional investors. But they are largely comparable in the range Yagan looks at (between $1 million and $1 billion dollars in size), as they are competing in the same industries and locations.
Crucially, though, S-corporations don’t pay a dividend tax and thus didn’t benefit from the big 2003 dividend tax cut, while C-corporations do pay them and did benefit. So that allows Yagan to set up S-corporations as a control group and see what the effect of the massive dividend tax cut on C-corporations has been. Here’s what he finds:
The blue line is the C-corporations, which should diverge from the red line if the dividend tax cut caused a real change. But there’s no statistical difference between the two paths at all. (Note how their paths match before the cut, so the common movement reflects the business cycle.) There’s no difference in either investment or adjusted net investment. There’s also no difference when it comes to employee compensation. The firms that got a massive capital tax cut did not make any different choices about the things that boost the real economy. This is true across a crazy-robust number of controls, measures, and codings of outliers.
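The logic of the S-corp/C-corp comparison is a textbook difference-in-differences. A toy sketch of that logic (the investment rates below are invented for illustration; they are not Yagan’s IRS figures) shows how a null result like his would look:

```python
# Toy sketch of the difference-in-differences logic behind Yagan's design.
# S-corporations, unaffected by the dividend tax cut, serve as the control
# group; C-corporations are the treated group. Numbers are made up.
pre_c, post_c = 0.085, 0.092   # mean C-corp investment rate, before / after 2003
pre_s, post_s = 0.084, 0.091   # mean S-corp investment rate, before / after 2003

# Treatment effect = change among the treated minus change among the controls.
did = (post_c - pre_c) - (post_s - pre_s)
print(f"diff-in-diff estimate: {did:.4f}")  # essentially zero: no real-economy effect
```

Because both groups move together before and after the cut, the differencing nets out the business-cycle recovery that makes a naive before/after comparison of C-corps alone so misleading.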
The one thing that does increase for C-corporations, of course, is the disgorgement of cash to shareholders. Cutting dividend taxes leads to an increase in dividends and share buybacks. This shows that these corporations are in fact making decisions in response to the tax cut; they just happen to be decisions that benefit, well, probably not you. If right now you are worried that too much cash is leaving firms to benefit a handful of investors while the real economy stagnates, suddenly Clinton-era levels of dividend taxation don’t look so bad.
This is also interesting for people interested more specifically in corporate finance theory, because it is evidence against the theory that firms use the stock market to raise funding, and toward a “pecking order” theory in which internal funds and riskless debt sit far above equity in the hierarchy of corporate funding choices. In models like the latter, taxing dividends does very little to the cost of capital for firms, because equity isn’t the binding constraint on marginal investment options.
President Obama will likely focus his pitch for the dividend tax increase on the future, when, in his argument, globalization and technology will cause compensation to stagnate while investor payouts skyrocket and the economy becomes more focused on the top 1 percent. But it’s worth noting that while capital taxes are a solution to that problem, the radical slashing conservatives have brought to them are also partly responsible for our current malaise.
Is it useful to clarify data and claims in the economics blogosphere? Probably not, but I’ll give it a shot, as there are two sets of arguments that could use more light rather than heat.
What Happened in 2013? Sumner and Wren-Lewis
Scott Sumner wrote this about Simon Wren-Lewis:
“Simon Wren-Lewis also gets the GDP growth data wrong, in a way that makes austerity look worse. He claims that RGDP growth was 2.3% in 2012 and 2.2% in 2013 (the year of austerity in the US.) But that’s annual y-o-y data, and since the austerity began on January 1st 2013, you need Q4 over Q4 data. In fact, RGDP growth in 2012, Q4 over Q4, was only 1.67%, whereas growth in the austerity year of 2013 nearly doubled to 3.13%.”
There’s no getting it wrong here: there are simply two methods. Is it better to take the average annual rates and compare them (as Wren-Lewis does), or is it better to look at strict endpoints (as Sumner does)? An important thing about looking at Q4-over-Q4 data, as Sumner does, is to make sure that you haven’t accidentally set up your endpoints to amplify a trend that isn’t there. That technique is very sensitive to where you put the endpoints.
And sure enough, the quarters before and after that range featured negative or near-zero growth. What if you redo this, moving the quarters back and forth one period? Well, Q1-over-Q1 2014 data drops to 1.9%, while Q3-over-Q3 2012 rises to 2.7% (Q1-over-Q1 2012 was 2.1%). It’s not encouraging if your argument falls apart because you move the data one step. In fact, we can graph out the Q-over-Q data for every quarter; note that Sumner points to a single quarter that obviously sticks out. There’s a reason people might want to average the data in these situations, as Wren-Lewis does.
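The two conventions are easy to state precisely. A minimal sketch (the quarterly index values below are made up to show the mechanics, not actual BEA data) shows how the same series can yield quite different "annual growth" numbers:

```python
# Two conventions for annual growth from a quarterly series. The index
# values are invented for illustration; they are not actual BEA data.
year1 = [100.0, 100.5, 101.2, 101.4]  # four quarters of a real GDP index
year2 = [102.0, 102.9, 103.8, 104.6]

def annual_average_growth(y1, y2):
    """Growth of annual averages (the convention Wren-Lewis uses):
    uses all eight quarters, so no single quarter dominates."""
    return 100 * (sum(y2) / sum(y1) - 1)

def q4_over_q4_growth(y1, y2):
    """Growth between two endpoint quarters (the convention Sumner uses):
    sensitive to exactly which single quarters serve as endpoints."""
    return 100 * (y2[-1] / y1[-1] - 1)

print(round(annual_average_growth(year1, year2), 2))  # 2.53
print(round(q4_over_q4_growth(year1, year2), 2))      # 3.16
```

Shift the endpoint quarter by one and the Q-over-Q number can swing substantially, while the annual average barely moves; that is exactly the endpoint sensitivity described above.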
Simon Wren-Lewis, whose blog I really enjoy, already pointed out that austerity didn’t start on January 1st, 2013, of course. And it didn’t; note the more consistent growth in the graphic in late 2010. But even better, the fourth quarter of 2012 featured a massive decline in military spending. According to Alan Krueger for the White House, “A likely explanation for the sharp decline in Federal defense spending is uncertainty concerning the automatic spending cuts that were scheduled to take effect in January.” That’s an additional problem with setting up the issue this way.
What Did People Say Would Happen? Jeffrey Sachs
Jeffrey Sachs argues that people worried about additional austerity in 2013 were saying that it would cause another recession. Sachs: “Indeed, deficit cuts [especially in 2013] would court a reprise of 1937, when Franklin D. Roosevelt prematurely reduced the New Deal stimulus and thereby threw the United States back into recession.”
I paid a lot of attention to these debates, and saw three estimates of the impact of 2013 austerity on the recovery: Mark Zandi at Moody’s Economy, EPI, and the CBO. All three were close to each other in their estimates. None predicted that we’d go back into recession or have no growth.
What were they predicting? Zandi put it clearly: “Altogether, lower federal government spending and higher taxes are expected to reduce 2013 real GDP growth…With such a heavy fiscal weight on the economy, it is hard to see how growth could accelerate, at least in the first half of 2013.”
That’s consistent with what we’ve seen: a drag preventing growth from accelerating and delaying a takeoff in 2013 and into 2014. I don’t see how Sachs can obviously claim that these numbers aren’t consistent with the idea that the government has been a net drag since 2011, or point to a pickup in late 2014 as obviously disproving anything. Maybe on closer, empirical grounds you could (though the empirical literature is finding multipliers), but not at this high level.
In my original question about the Federal Reserve versus austerity in 2013, which seems to animate a lot of these debates, the issue I put forward was whether the Federal Reserve could hit the inflation target it announced with the Evans Rule shifting expectations and open-ended purchases to back that up, while government spending was a drag. It did not.
The Republicans have declared war against Dodd-Frank. But what kind of war is it, and on what fronts are they waging this war? I think there are at least three campaigns, each with its own strategic goals and tactics. Distinguishing these campaigns from each other will help us understand what Republicans are trying to do and how to keep them in check.
First, to understand the Republican campaigns, it’s useful to go over what Dodd-Frank does. Dodd-Frank can be analogized to the way we regulate driving. First, there are simple rules of the road, like speed limits and stop signs, designed as outright prevention against accidents. Then there are efforts to help with stabilization if the driver gets in trouble, such as anti-lock brakes or road design. And then there are regulations for the resolution of accidents that do occur, like seat belts and airbags, designed to avoid worst-case scenarios.
These three goals map onto Dodd-Frank pretty well. Dodd-Frank also puts rules upfront to prevent certain actions, requires additional regulations to create stability within large financial firms, and lays out plans to allow for a successful resolution of a firm once it fails. Let’s graph that out:
Prevention: Dodd-Frank created a Consumer Financial Protection Bureau (CFPB), reforming consumer protection from being an “orphan mission” spread across 11 agencies by establishing one dedicated agency for it. The act also requires that derivatives trade with clearinghouses and through exchanges or else face additional capital requirements, which brought price transparency and additional capital to the market that brought down AIG. Another piece is the Volcker Rule, which separates the proprietary trading that can cause rapid losses from our commercial banking and payment systems.
Stabilization: Dodd-Frank also provides for the expansion of capital requirements across the financial sector, including higher requirements for the biggest firms relative to smaller ones, as well as higher requirements for those who use short-term funding in the “shadow” banking sector relative to traditional banks. These firms are designated by the Financial Stability Oversight Council (FSOC).
Resolution: The firms FSOC designates as systemically risky have to prepare themselves for a rapid resolution for when they do fail. They have to prove that they can survive bankruptcy without bringing down the entire system (an effort currently being fought). The FDIC has prepared a second line of defense, a special “resolution authority” (OLA) to use if bankruptcy isn’t a viable option in a crisis.
These elements of the law all flow naturally from the financial crisis of 2008. It would have been very helpful in the crisis for there to be more clarity in the derivatives market, more capital in shadow banks, and a process to resolve Lehman Brothers. Maybe these are great ways to approach the problem or maybe they aren’t, but to suggest they have no basis in the crisis, as the American Enterprise Institute comically does, is pure ideology.
But ignore the more ridiculous arguments. The actual war against Dodd-Frank is much more sophisticated, and it’s being waged on numerous fronts. Let’s make a map of the battlefield:
There are three distinct campaigns being waged:
Guerrilla Deregulation. The goal here is to undermine as much of the efforts of derivatives regulation, the Volcker Rule, and the CFPB as possible through quick, surprise attacks. This, in turn, has a chilling effect across regulators and throws the regulatory process into chaos.
The main tactic, as in any good guerrilla campaign, is to do hit-and-run ambushes of key, important targets vulnerable to raids. David Dayen had a helpful list of some key bills that must pass in 2015, bills likely to be perfect targets for a good guerrilla raid. The guerrilla campaign had a major victory in weakening the Section 716 “push-out” rule in the Cromnibus bill. And that will probably be a model going forward for these tactics, replicated in the recent attacks we’ve seen, down to counting the small handful of Democrats signing on as some sort of concession of bipartisanship.
Another guerrilla element will be the focus on victory through attrition. It’s not like the House Republicans have their own theory of how to regulate the derivatives market, or that they are making the full case against the Volcker Rule or the CFPB, or even proposing anything of their own. They are winning simply by weakening both the rules and the resolve of reformers.
Administrative Siege: Aside from the guerrilla war of deregulation, the GOP is also waging war on another front, through a long-term siege of the regulatory agencies. This includes blockading them from resources like funding and personnel, consistent harassment, discrediting them in the eyes of the broader public, and weakening their power to act. This is a long-term battle, going back to the beginning of Dodd-Frank, and their terms are unconditional surrender.
The seriousness of this campaign became clear when the GOP first refused to appoint any director of the CFPB unless there was a complete overhaul to weaken it. This campaign has continued against the CFTC, and now extends to the FSOC trying to designate firms as systemically risky. The recent House bill to extend cost-benefit analysis to financial regulations, where it has little history, unclear analytical benefit, and could easily lead to worse rules, is also part of this siege.
One key argument the GOP is pushing is that the regulators are historically too powerful and out of control. As House Financial Services Committee chairman Jeb Hensarling said to the Wall Street Journal, the CFPB is “the single most unaccountable agency in the history of America.” This is just silly agitprop. The structure — an independent budget and a single director — looks exactly like that of its counterpart, the OCC. Not only is the CFPB subject to the same rule-making process as other regulators, but other regulators can in fact veto it, making it significantly more accountable than any other agency.
The same is being said about FSOC. Note that there’s always room for improvement; Americans for Financial Reform (AFR) have some ways to improve the transparency of FSOC here. But as AFR’s Marcus Stanley notes, the House’s recent FSOC bill “appears better calculated to hinder FSOC operations than to improve its transparency.” Indeed, as Better Markets notes, this FSOC battle is in large part over the regulation of money market funds, a crucial reform in fixing shadow banking. But making government work better isn’t the goal of the siege; this campaign’s goal is to break these agencies and their ability to regulate the financial system.
Reactionary Rhetoric: The goal of this ideological programming campaign is to push the argument that Dodd-Frank simply reinforces the worst part of the bailouts and does nothing productive toward reform. Instead of a series of methods to check and reduce Too Big To Fail, this campaign argues that Dodd-Frank does worse than fail. Following the rhetoric of reaction, reform simply makes the problem far worse. The point here is to remove the FDIC’s ability to put systemically risky firms into a receivership while also preparing the ground for a full repeal.
Advancing the argument that Dodd-Frank has made the bailouts of 2008-2009 permanent and serves only to benefit the biggest financial firms has become a marching order for the movement right. It was basically the entire GOP argument against Dodd-Frank in 2012 (Mitt Romney calling the act “the biggest kiss” to Wall Street), and it still dominates their talking points. If this were the case, the largest banks would receive a large Too Big To Fail subsidy, and we’d subsequently see a reduction in their borrowing costs.
Major studies tell us that the opposite is the case; since 2010, Too Big To Fail subsidies have fallen rather than stabilized or increased. This doesn’t mean the work is done – we could still see a major failure cause systemic risk, and just “avoiding catastrophic collapses” isn’t really a headline goal for a functioning financial system. There’s also little evidence that Dodd-Frank enriches the biggest banks; firms go out of their way to avoid a SIFI designation, which they wouldn’t do if there were a benefit to it, and Wall Street analysts take it for granted that capital requirements and other regulations are more binding for the largest firms.
There could be a productive discussion here about finding a way to reform the bankruptcy code to help combat Too Big To Fail while keeping resolution authority as a backup option. That backup option is key though. Unlike OLA, bankruptcy is slow and deliberate, isn’t designed to preserve ongoing firm business, doesn’t have guaranteed funding available, can’t prevent runs from short-term creditors, and has trouble internationally. But again, the point for Republicans isn’t to try to come up with the best regime; it’s to discredit the effort at reform entirely so the other campaigns, and the overall campaign for repeal, can be that much easier.
Why do Republicans want all this? The answer you will normally hear is that they are in the pocket of Wall Street or in the thrall of free-market fundamentalism. And there’s truth to that. But they’ve also created a whole institutionally enforced counter-narrative where there was no real crisis and Wall Street committed no bad behavior except for when ACORN made them. This narrative is, bluntly, dumb. But it is the narrative their movement has chosen, and movements have a way of forcing well-meaning people who’d otherwise want to find good solutions to fall in line.
2015 will require reformers to wage their own campaigns to push additional reform (here’s a start), push for stronger action from regulators, and make the public understand the progress that has been made. But first we need to understand that while conservatives may look like they are running a smash-and-grab operation when it comes to Dodd-Frank, it’s actually a quite sophisticated series of campaigns, and they are already winning battles.
Looks like a smart plan he announced yesterday. I wrote about it, and public options more generally, here at The Nation. I hope you check it out!
Mike here. This post is from my colleague Brad Miller on the important debate over the Antonio Weiss nomination. Brad is a former U.S. Representative who recently joined the Roosevelt Institute as a Senior Fellow, so he has firsthand knowledge of the internal negotiations around financial reform. Check it out below.
The opposition to the nomination of another investment banker, Antonio Weiss, to a top position in the U.S. Treasury is not just a demagogic appeal to anti-Wall Street prejudices, as his supporters argue.
Weiss’s actual experience appears to be a poor match for the specific duties of the position in question, and may be less laudable than his supporters claim. Weiss’s principal credential appears to be that he is a product of the same culture that produced our other recent economic policymakers.
And that is the real problem for opponents, who believe that economic policies should be subject to democratic debate and require the consent of the governed.
The response to the financial crisis was the most consequential economic policy in generations. Wall Street and Washington insiders alike argue that those policies, endlessly indulgent of banks and pitiless to homeowners, were technocratic decisions that required the recondite knowledge of Wall Street professionals.
To Weiss’s supporters, disregard for public opinion is a virtue. “Making economic policy isn’t a popularity contest,” David Ignatius wrote in The Washington Post, “especially when financial markets are in a panic.” “Our job was to fix it,“ former Treasury Secretary Timothy Geithner said, “not make people like us.”
Criticism did not just come from politicians pandering to the great unwashed, however.
Most economists argued that the lesson of past financial crises was to take economic pain quickly, recognize losses on distressed debt, and take insolvent banks through an orderly receivership. The “standard playbook” for financial crises since the 1870s was to “shut down insolvent institutions so executives and shareholders in the future do not think they will escape the consequences of the moral hazard they created.” “Zombie” banks, economists argued, only delay recovery.
The Bush and Obama Administrations instead helped too-big-to-fail banks pretend to be solvent and provided subsidies and other dispensations until the banks became profitable, an effort that continues.
Unfortunately, any effective effort to reduce foreclosures required banks to recognize losses on mortgages. Insiders regarded foreclosures as a lesser concern. Geithner said that even if the government used federal funds “to wipe out every dollar of negative equity in the U.S. housing market…it would have increased annual consumption by just 0.1 to 0.2 percent.”
According to Atif Mian and Amir Sufi, two prominent economists, “that is dead wrong.” Household wealth fell by $9 trillion after the housing bubble burst in 2006, which greatly reduced consumer demand. “The evidence,” Mian and Sufi said, “is pretty clear: an aggressive bold attack on household debt would have significantly reduced the horrible impact of the Great Recession on Americans.”
Wall Street’s critics did not lose that debate. There was no debate.
Senator Obama endorsed legislation in his presidential campaign to allow the judicial modification of mortgages in bankruptcy. President Obama never publicly wavered from that position, but Treasury officials privately lobbied against the legislation in the Senate, where the legislation died. The determination by economic policymakers to protect their immaculate policies from tawdry politics may extend to attempts by the President to intrude.
Meddling by Members of Congress was certainly unwelcome. In a private meeting between Administration officials and disgruntled House Democrats, I said that foreclosure relief efforts appeared designed to help banks absorb losses gradually, not to help homeowners. Geithner was offended—indignant—at the suggestion. Neil Barofsky, then the Special Inspector General for the Troubled Asset Relief Program, later confirmed that was exactly the purpose of the programs—in Geithner’s words, to “foam the runway” for the banks.
The success of policies that are unacknowledged or even denied is difficult to measure. Since the financial crisis, however, the financial sector, from whence our unguarded platonic guardians came and soon will return, has prospered. Others fared less well. Wealth and income inequality widened dramatically.
Weiss has not served in government, so opposition to his nomination may punish him for the sins of others, perhaps unfairly. There is nothing to indicate, however, that Weiss questions the assumption that the North Star of economic policy should be the prosperity of the financial sector, or that policies should be freely debated unless there’s money involved.
The presidential campaign in 2016 will undoubtedly be largely about economic policy. Voters may assume that the election of one candidate or another will result in the implementation of that candidate’s economic policies.
If the culture of economic policymakers remains unchanged, the public positions of candidates and the votes of citizens may not matter much.
Brad Miller is a Senior Fellow at the Roosevelt Institute. Previously, he served for a decade in the U.S. House of Representatives.
The mass resignation at the New Republic had several people joking about how the magazine wanted to become “vertically integrated.” What does that even mean here? But if anything was vertically integrated in 2014, it was the conservative movement. And you could see this clearly from the reaction to the Halbig decision in July.
I’m occasionally asked what conservative sites people should read. My answer is usually that people should read the blogs of the major think tanks, like AEIdeas, Daily Signal (Heritage), and Cato-at-Liberty. There are many writers who are conservative, or who cover conservatives, who are interesting to read, of course. But if you want to understand the conservative movement as the actual movement it is, you want to look upstream to where the ideas and arguments are first formulated.
From there, you can then watch them move downstream, first to the set of gatekeepers on the right who can give these arguments credibility or otherwise charge them. From there they move down to the front line right-wing writers who incorporate them into their various Hot Takes, as well as the TV and radio stations with their massive audiences.
And we have a real-time example this year. Since 2011, think tanks have been building their Halbig argument, which is that the ACA doesn’t allow exchanges created by the federal government on behalf of states to offer subsidies. They learned how to discuss it. Most of all they learned that they couldn’t call it a “glitch” but instead, given administrative law, had to argue it was a conscious decision. But the argument wasn’t part of the mainstream discussion.
Then, on July 22nd, the lawsuit succeeded: a panel of the U.S. Court of Appeals for the D.C. Circuit agreed with the Halbig plaintiffs. And you could watch the argument move down the river and become mainstream conservative logic almost immediately.
This is where gatekeepers were important. The editors of National Review immediately jumped on it (“States were expected to go along and establish their own exchanges. When it became clear that many states wouldn’t do so because the law was so unpopular, the IRS just rewrote the law”). One of the more important conservative gatekeepers, Ramesh Ponnuru, did the same at Bloomberg (“It’s wrong, then, to say that Congress obviously didn’t intend to include this restriction”).
With that, the low-level writers could write their takes and mass media personalities could speak as if this had always obviously been the case. Rush Limbaugh (“The Obamacare law specifically says […] the only people qualified for subsidies are those who acquire their insurance through state exchanges, exchanges established by the state”) is one of many examples. Those far away from the think tanks who are good at digging up embarrassing examples soon found numerous instances of Jonathan Gruber embarrassing himself once they knew what to dig for, which in turn boosted the upstream arguments. Vertical integration.
You can go back and see liberal writers trying to figure out in real time how Halbig became conservative common wisdom, when none of the conservative reporters covering the bill while this was all debated ever noted it, or that it wasn’t part of the extensive rollout strategy by ACA supporters. Brian Beutler’s Why Are Conservative Health Journalists Covering for Halbig Truthers? and Jonathan Chait’s The New Secret History of the Obamacare Deniers are good examples. It shows a genuine surprise at how vertically integrated the conservative movement can be, and how quickly a new logic became their reality once an opportunity presented itself.
An important thing I noticed from the outside is how there was no strong opposition at the gatekeeper level, only mild skepticism. Reihan Salam wrote “I’m not a Halbig guy […] I am (at best) agnostic on whether Halbig is correct.” Ross Douthat tweeted that point while describing his own “conflictedness.” But this was the extent of it. Neither they nor any other gatekeepers I could find leveled a strong charge, much less a sustained case, against Halbig from within the movement. In a movement, people know when to be quiet.
I noticed this dynamic quickly when I first started reading conservatives writing about the financial crisis. Virtually all the front-line writers were mimicking an odd argument about the GSEs that I didn’t recognize from my time in the industry. I quickly looked upriver and saw it all came from AEI’s Peter Wallison. Again, some crucial gatekeepers aired quiet skepticism, like his GOP colleagues on the FCIC, whose email trail shows how they tried to minimize his bad arguments. But that didn’t stop the movement writers from pushing just that narrative at all times.
Next time you read a random article from a conservative site, see if you can see how it’s just a rewritten form of some talking points created far upstream. And always remember that when a movement acts, it creates its own reality.
2014 is over, and good riddance. This year I wanted to start some formal projects, as well as write longer pieces, and I managed to do just that. Here’s the high-level stuff I did this past year.
Financialization. I started a project on financialization with the Roosevelt Institute, and I’m helping with a big inequality project helmed by Joe Stiglitz. All this will bear fruit next year, but I did a piece on financialization for Washington Monthly, Frenzied Financialization, that gives you some sense of what we’ll be doing.
(Seeing one’s name on a magazine stand is still the coolest.)
Voluntarism. I wrote a big article on The Voluntarism Fantasy (pdf) for Democracy Journal, which drew a lot of responses (collected here). One of my favorite pieces I’ve done; Philanthropy Daily, of all places, red-baited me, which was a neat enemy to make in 2014.
(Remember this cover?)
Piketty-Mania. There were races to see how quickly reviews of Piketty’s Capital in the 21st Century could get written. My review, Studying the Rich for Boston Review, was scheduled for late summer and had to be turned around in a week after the demand for readable summaries of the book exploded. I think my review holds up in describing the way the fault lines around the work would unfold.
The two points I wished I had included were Russell Jacoby’s fantastic comparison to the actual Marx’s Capital, on how economic critique has moved from Marx’s factories of production to Piketty’s spreadsheets of distribution, and that what constitutes a “wealth equality” agenda isn’t clear, which I later covered for The Nation. Speaking of…
The Score. I started as a columnist for The Nation with a new economics column, The Score, where I alternate with my former colleague Bryce Covert. It’s still starting up but I’m already happy with columns on socializing Uber and the growth of incarceration. It’s great to work with Bryce again and get to work with the talented editor Sarah Leonard, who is boosting the economic content of The Nation (she also helped launch The Curve with Kathy Grier since joining).
Issue Editing. I also got to help curate and edit an issue of The New Inquiry, The Money Issue. Rob Horning and I had wanted to do something with the weirdness and newness of the finance blogs circa 2007-2009, and hopefully part of that got through here with pieces by Izzy Kaminska and Steve Waldman. I’m very happy to have helped edit the excellent Disgorge the Cash by JW Mason, which will soon relate to the financialization project mentioned above.
Other Writing. I wrote less for the blog this year, but some notable pieces that caught people’s interest included how the “pragmatic” libertarian case for a basic income makes basic errors about the welfare state, explaining the end-of-year fight over financial reform, pieces on how neoconservatism and libertarians have helped get us to the policing situation in Ferguson and elsewhere, and on the limits of liberalism after the 2014 election.
Big pieces for other sites included a review of Playing the Whore for Wonkblog, explaining how we already have a public option for banking and we could expand it for Al-Jazeera America, and a group book review on the role of profit in the state for Boston Review.
What else? In other news I turned 35. My wife got me a Nick Cage in The Rock birthday cake, which is really the best present ever. I moved to Washington DC after a crazy final summer in New York. I’ve been reading a lot of history lately and want to keep incorporating that into my stuff, and I’m enjoying the Eric Foner MOOC of the Civil War era.
Anything you’d like to see different next year? Thanks for reading everyone, see you in 2015.
I’m pretty convinced that the term “crony capitalism,” as deployed by the right, is useless as a political or analytical tool. I keep a close eye on how conservatives talk about financial reform, and according to the right, Dodd-Frank is crony capitalism. Oh noes! But what does that mean, and how can we stop it? Here’s a fascinating case in point: two AEI scholars with different publications argue that we need to stop Dodd-Frank from enabling crony capitalism, and then proceed to describe two opposite, mutually exclusive sets of problems and solutions.
First, a good test question: The Federal Reserve recently required that the largest firms have a greater capital surcharge than had been originally proposed. Is that cronyism?
Here’s one story, from James Pethokoukis in “Fighting the Crony Capitalist Alliance”: “our highly concentrated and interconnected, Too Big to Fail financial system […] gives a competitive edge to megabanks.” How is that? Regulators create incentives for big banks to take on risks “such as investing in mortgage-backed securities and complex derivatives.” Banks are the size they are, and do the activities that they do, because of the actions of regulators.
So how do we combat this problem? According to Pethokoukis, we should “substantially raise the capital requirements for Too Big To Fail banks” to limit risk. Even more, “such capital requirements might well nudge the biggest banks into shrinking themselves or breaking up.”
Here’s another story, from Tim Carney’s “Anti-Cronyism Agenda for the 114th Congress”: Dodd-Frank is cronyism because “[e]xcessive regulation is often the most effective crony capitalism.” What’s worse is that Dodd-Frank designates the biggest firms as Systemically Important Financial Institutions (SIFIs), meaning that they pose a systemic risk to the economy. Those firms are put under more regulation, but it’s obviously a cover for a permanent set of protections.
So what should we do? According to Carney’s agenda, Congress should “open banking up to more competition by repealing regulations that give large incumbent banks advantages over smaller ones.” Well, which regulations are those? “Congress should repeal its authority to designate large financial firms as SIFIs.”
Note that though these come from the same institution and carry the same banner of fighting “cronyism,” these agendas are the exact opposite of each other. For Pethokoukis, the important goal is identifying the largest and riskiest institutions and putting aggressive regulations on them, with capital requirements set high enough that they could fundamentally shrink those banks. For Carney, it’s important that we not identify any firm as so large that it poses a risk to the economy, or raise its capital requirements accordingly, since doing so just encourages cronyism; indeed, it is the logical conclusion of cronyism. Don’t regulate the largest firms with more attention or care; just don’t do anything to them.
In the Pethokoukis version, the financial sector poses a real threat to the stability of the economy, and as such special efforts should be made to prevent failure and handle failure when it does occur. His answer is, essentially, to do more. In the Carney version, there’s no real danger outside the government’s interference, or at least not a danger that is worth a policy solution. His answer is to do nothing, except repeal what regulation already exists.
And, crucially, for Pethokoukis, the recent increase in capital surcharges for SIFIs is a good idea; for Carney, the surcharges enshrine the problem by working through the SIFI framework, and are a bad idea. How can a policy agenda be built around such a “cronyism” framework?
There are other problems with “cronyism” as described here. Pethokoukis blames cronyism for the concentration in the financial sector over the last few decades. However, the previous argument had been that the size and geographic restrictions that prevented this concentration before the 1990s were the real cronyism. Dodd-Frank blocks a single financial firm from having liabilities in excess of 10 percent of all liabilities, benefitting smaller firms at the expense of larger ones. Is that cronyism or the opposite? Cronyism can’t just be “things turned out in a terrible way when left to the markets.”
As Rich Yeselson notes in a fantastic essay on New Left historians in the recent issue of Democracy, the Gabriel Kolko-inspired stories about how regulation evolves (stories that influence Carney) are monomaniacally mono-causal. So just quoting CEOs’ statements to the press about Dodd-Frank constitutes analysis, as the regulations must obviously flow from elite desires through their captured lackeys in the state.
But Dodd-Frank is more complicated than that – look at the effort to stop the CFPB from starting, or the epic battles both between and within regulators, the state and consumers over derivatives. Carney’s top-down inescapable vision of how reform works leaves no room for the contingency of actual efforts to fix a broken system. In turn, this leaves us with no way to actually critique what Dodd-Frank does. Worse, it conflates fighting “cronyism” with an agenda of laissez-faire economics, liberty of contract, and hard money, sneaking in a three-legged stool of reactionary thought through our concerns about fairness.
Actual cronyism is a real problem, but I’ve seen no evidence that it adds up to a systemic criticism of our economy as a whole. Instead, we need a language of accountability, benefit and power in how markets are structured. Without this, we’ll have no working compass for reform.
I have a new Score column at The Nation: Socialize Uber. It’s about Uber and other sharing economy companies as worker cooperatives. Normally I eyeroll when people talk about cooperatives as an economic solution, but I think there’s compelling stuff here. Given that the workers already own all the capital in the form of their cars, why aren’t they collecting all the profits? I’m particularly interested in the comparisons to the Populist movement in this new economy, as back then workers were also amazed by new technologies but wanted fairness in the terms on which they could access them.
We’ve also revamped how the Score looks, particularly the online part of it, so I hope you check it out. There’s some commentary already from Will Wilkinson and Brian Dominick. It’s definitely a moment where people are thinking about this, as columns from Nathan Schneider and Trebor Scholz also came out at the same time making similar arguments about worker cooperatives.
Uber is also in the news because they turned on surge pricing during a terrorist hostage situation in Sydney, Australia. This has gotten people talking about surge pricing. I don’t mind surge pricing, but the moralizing way journalists talk about it is really off-putting. Matt Bruenig has a good response to an example of this by Olivia Nuzzi (“How does the world owe you a private car, priced as you deem acceptable, that didn’t exist five years ago? […] you might consider meandering over to a country with a different economic system”).
To expand on Matt, there are two reasons why people might want to avoid surge pricing that virtually never get discussed.
One is that people care about fairness. As Arin Dube wrote about the minimum wage, “the economists Colin F. Camerer and Ernst Fehr have documented in numerous experimental studies that the preference for fairness in transactions is strong: individuals are often willing to sacrifice their own payoffs to punish those who are seen as acting unfairly, and such punishments activate reward-related neural circuits.” This is why you see high support for the minimum wage among people who otherwise support right-wing economic ideas, as we just saw in the 2014 elections.
People care about fairness; it’s in their utility function if you prefer. It’s a funny economic argument where markets are meant to serve what people want, and producers are meant to meet those needs at the lowest possible cost, but if people want fairness built into the cost model then it’s all sneering all the time. It’s almost as if the moment is about conditioning people to serve market needs, rather than markets to serve people’s needs. If people demanded a cola beverage that, say, was less sweet, would we get Daily Beast articles about “how dare you, the world doesn’t owe you a less sweet cola, move to North Korea if you want to see your market demands turn into products”? And there’s a long history of using moral persuasion to try and limit price-gouging – check out Little House on the Prairie.
The first issue becomes more relevant with a second concern, though: the increasingly negative view of Uber’s tactics. People don’t have perfect information, and it’s reasonable that they might want to pool the risk that they’ll be targeted for price discrimination. The obvious comparison here is the early moment when Amazon turned out to be charging higher prices based on your browsing history, a practice it promptly shut down after public outcry. (Why don’t you meander over to a different country if you don’t want Amazon data-mining your browser to rip you off?)
Why were people offended? Because in that case the price discrimination just transferred the surplus from the customers to the producers – there wasn’t any allocative effect. And the same worry can carry over to surge pricing.
Without perfect information, customers don’t really know if they are getting price surged based on supply-and-demand fundamentals or on their own individual characteristics. Imagine if the algorithm increased the likelihood of price surging based on people’s past willingness to accept price surging. Or because a neighborhood is more likely to accept price surging. I assume we’d be mad, right? That wouldn’t have an allocative effect – it would just be ripping off those people because the code can tell they’d be willing to pay more.
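The distinction can be made concrete with a toy sketch. Everything here is hypothetical — the numbers, the function names, the formulas — and has nothing to do with Uber’s actual pricing code; it only illustrates the difference between a surge driven by market fundamentals and one driven by individual characteristics:

```python
# Toy model -- purely hypothetical, not Uber's actual pricing logic.
# A supply-and-demand surge raises the price for everyone when rides are
# scarce, rationing the scarce supply (an allocative effect). A surge keyed
# to a rider's own characteristics quotes different prices to different
# riders under identical market conditions -- a transfer of surplus from
# rider to platform, not rationing.

BASE_FARE = 10.0

def demand_surge(requests: int, drivers: int) -> float:
    """Multiplier driven by market fundamentals: identical for every rider."""
    return max(1.0, requests / max(drivers, 1))

def characteristic_surge(past_acceptance_rate: float) -> float:
    """Hypothetical multiplier driven by the rider's own history:
    riders who accepted surges before get quoted higher prices."""
    return 1.0 + past_acceptance_rate

# Scarce supply: every rider sees the same 2x price.
print(BASE_FARE * demand_surge(requests=100, drivers=50))   # 20.0

# Same market conditions, two riders, two prices -- based only on who they are.
print(BASE_FARE * characteristic_surge(0.5))  # 15.0
print(BASE_FARE * characteristic_surge(0.0))  # 10.0
```

The point of the sketch is that a customer seeing a single quoted price cannot tell which of the two functions produced it, which is exactly the information problem described above.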
Are they doing this now, or will they do this in the future? Normally trust is what would help mitigate both these worries, but with stories about “God mode” and their take-no-prisoners approach to everything, trust is in increasingly low supply.
Section 716 of Dodd-Frank says that institutions that receive federal insurance through the FDIC and the Federal Reserve can’t be dealers in the specialized derivatives market. Banks must instead “push out” this dealing activity into separate subsidiaries with their own capital that don’t benefit from the government backstop. They can still trade in many standardized derivatives and hedge their own risks, however. This was done because having banks act as exotic swap dealers put taxpayers at risk in the event of a sudden collapse. That’s it.
Why would you want a regulation like this? The first reason is that it acts as a complement to the Volcker Rule. As Americans for Financial Reform notes, the Volcker Rule allows banks to make markets in derivatives. What 716 does is regulate the most exotic and custom derivatives, like the custom credit default swaps that generated the financial crisis of 2008. These derivatives are the most difficult part for the Volcker Rule to manage, so 716 adds a crucial second layer of protection.
A second reason is that 716 will prevent exotic derivatives from being subsidized by the government’s safety net. As the Roosevelt Institute’s Rob Johnson notes, removing this language would “extend guarantees to complex derivatives within banks, which in turn will subsidize and encourage their overuse.” We should be finding a balance for the derivatives market, not expanding it.
The third reason is for the sake of financial stability. The major banks have been unable to produce credible living wills describing how they can go through bankruptcy without tearing down the system. There is no world in which these banks will be closer to achieving this crucial goal by cramming themselves full of even more exotic types of derivatives.
Pushing out these risky derivatives enables the financial sector to focus more on its core job. As Roosevelt’s Chief Economist Joseph Stiglitz wrote in favor of 716, “[b]y quarantining highly risky swaps trading from banking altogether, federally insured deposits (and our basic payments mechanism) will not be put at risk by toxic swaps transactions. Moreover, banks will be forced to behave like banks, focusing on extending credit in a manner that builds economic strength as opposed to fostering worldwide economic instability.”
Stiglitz reiterated this point today, saying “Section 716 facilitates the ability of markets to provide the kind of discipline without which a market economy cannot effectively function. I was concerned in 2010 that Congress would weaken 716, but what is proposed now is worse than anything contemplated back then.”
Now many on Wall Street would argue that this rule is unnecessary. However, their arguments are not persuasive.
They might argue that many people opposed this bill at the time it was proposed, and indeed it was the source of great controversy. But what they overlook is that there was already a wave of compromise on this provision during the drafting of Dodd-Frank. 716 focuses mainly on a subset of risky and exotic derivatives. Under the final law, banks can still hold most types of standardized and common derivatives, like those for interest rates. This is the vast majority of the market. Banks can also hold derivatives that they use to mitigate their own risk. There was significant debate in 2010 over how this regulation should play out, and the final language reflects this compromise.
They might also argue that the financial sector is taking care of this issue on its own. But instead of being moved out, derivatives are being moved into backstopped banks. As the former FDIC chairperson Sheila Bair notes, the “trend has been to move this activity from the investment banking affiliates, which do not use insured deposits, into the banks where the activity can be funded with cheap, FDIC backed deposits. Section 716 would at least keep certain credit default swaps outside of insured banks.”
The question of how we should regulate derivatives and the financial markets more broadly has not been settled. There’s still an ongoing debate over how derivatives will be regulated across borders. And as noted, banks are still unable to produce credible living wills to survive a bankruptcy court. It’s for reasons like this that a wide variety of people who didn’t support the initial language of 716 now oppose removing 716: Timothy Geithner, Jack Lew, Sheila Bair, Barney Frank, and more.
We should be strengthening, not weakening, financial reform. And removing this piece of the law will not benefit this project.
I’m very excited to share that AFR, EPI, and the Roosevelt Institute have teamed up to host a conference on monetary policy, the recovery and the financial sector today. The conference will feature keynotes from Paul Krugman and Senator Elizabeth Warren. It also features, among many other great panelists, friend of the blog and Roosevelt
There was a quiet revolution in the University of North Carolina higher education system in August, one that shows an important limit of current liberal thought. In the aftermath of the 2014 election, there’s been a significant amount of discussion over whether liberals have an economic agenda designed for the working and middle classes. This discussion has primarily been about wages in the middle of the income distribution, which are the first major limit of liberal thought; however, it is also tied to a second limit, which is the way that liberals want to provide public goods and services.
So what happened? The UNC System Board of Governors voted unanimously to cap the share of tuition revenue that may be used for need-based financial aid at 15 percent. With tuition going up rapidly at public universities as the result of public disinvestment, administrators have recently begun using general tuition to supplement their ability to provide aid. This cross-subsidization has been heralded as a solution to the problem of high college costs. Sticker price is high, but the net price for poorer students will be low.
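A bit of toy arithmetic shows how the cross-subsidy works and what a cap does to it. All figures here are hypothetical, chosen only for illustration, and do not reflect actual UNC tuition or enrollment:

```python
# Hypothetical figures for a stylized public university.
STICKER_TUITION = 10_000   # sticker price per student, per year
STUDENTS = 1_000           # total enrollment paying that tuition
AID_RECIPIENTS = 300       # students receiving need-based aid

def aid_per_recipient(aid_pct: int) -> float:
    """Tuition revenue set aside for need-based aid, per aid recipient,
    when aid_pct percent of tuition revenue funds the aid pool."""
    total_revenue = STICKER_TUITION * STUDENTS
    return total_revenue * aid_pct / 100 / AID_RECIPIENTS

# Uncapped: suppose 25 percent of tuition revenue funds aid.
print(aid_per_recipient(25))  # about 8333 per aid recipient

# Capped at 15 percent: the same pool shrinks by 40 percent.
print(aid_per_recipient(15))  # 5000.0 per aid recipient
```

Under a cap, either the net price for aid recipients rises or the aid money must come from somewhere else; nothing about the cap lowers anyone’s sticker price.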
This system works as long as there is sufficient middle-class buy-in, but it’s now capped at UNC. As a board member told the local press, the burden of providing need-based aid “has become unfairly apportioned to working North Carolinians,” and this new policy helps prevent that. Iowa implemented a similar approach back in 2013. And as Kevin Kiley has reported for IHE, similar proposals have been floated in Arizona and Virginia. This trend is likely to gain strength as states continue to disinvest.
The problem for liberals isn’t just that there’s no way for them to win this argument with middle-class wages stagnating, though that is a problem. The far bigger issue for liberals is that this is a false choice, a real class antagonism that has been created entirely by the process of state disinvestment, privatization, cost-shifting of tuitions away from general revenues to individuals, and the subsequent explosion in student debt. As long as liberals continue to play this game, they’ll be undermining their chances.
First Limit: Middle-Class Wages
There’s been a wave of commentary about how the Democrats don’t have a middle-class wage agenda. David Leonhardt wrote the core essay, “The Great Wage Slowdown, Looming Over Politics,” with its opening line: “How does the Democratic Party plan to lift stagnant middle-class incomes?” Josh Marshall made the same argument as well. The Democrats have many smart ideas on the essential agenda of reducing poverty, most of which derive from pegging the low-end wage at a higher level and then adding cash or cash-like transfers to fill in the rest. But what about the middle class?
One obvious answer is “full employment.” Running the economy at full steam is the most straightforward way of boosting overall wages and perhaps reversing the growth in the capital share of income. However, that approach hasn’t been adopted by the President, strategically or even rhetorically. Part of it might be that if the economy is terrible because of vague forces, like technological change and the necessary pain following a financial crisis, then the Democrats can’t really be blamed for stagnation. That strategy will not work out for them.
The Democrats (and even many liberals in general) also haven’t developed a story about why inequality matters so much for the middle class. There are such stories, of course: the collapse of high progressive taxation creates incentives to rent seek, financialization makes the economy focused less on innovation and more on disgorging the cash, and new platform monopolies are deploying forms of market power that are increasingly worrisome.
Second Limit: Public Provisioning
A similar dynamic is in play with social goods. The liberal strategy is increasingly to leave the provisioning of social goods to the market, while providing coupons for the poorest to afford those goods. By definition, means-testing this way puts high implicit taxes on poorer people in a way that decommodification does not. But beyond that simple point, this leaves middle-class people in a bind, as the ability of the state to provide access and contain costs efficiently through its scale doesn’t benefit them, and stagnating incomes put even more pressure on them.
As noted, antagonisms between the middle class and the poor in higher education are entirely a function of public disinvestment. The moment higher education is designed to put massive costs onto individual students, suddenly individuals are forced to look out only for themselves. If college tuition were largely free, paid for by all people and income sources, then there’d be no need for a working-class or middle-class student to view poorer students as a direct threat to their economic stability. And there’s no better way to prematurely destroy a broader liberal agenda than by designing a system that creates these conflicts.
These worries are real. The incomes of recent graduates are stagnating as well. The average length of time people are taking to pay off their student loans is up 80 percent, to over 13 years. Meanwhile, as Janet Yellen recently showed in the graphic below, student debt is rising as a percentage of income for everyone outside the top 5 percent. It’s not surprising that studies find student debt impacting family formation and small business creation, and that people are increasingly looking out for just themselves.
You could imagine committing to lowering costs broadly across the system, say through the proposal by Sara Goldrick-Rab and Nancy Kendall to make the first two years free. But Democrats aren’t doing this. Instead, President Obama’s solution is to try and make students better consumers on the front-end with more disclosures and outcome surveys for schools, and to make the lowest-income graduates better debtors on the back-end with caps on how burdensome student debt can be. These solutions by the President are not designed to contain the costs of higher education in a substantial way and, crucially, they don’t increase the public buy-in and interest in public higher education.
The Relevance for the ACA
I brought up higher education because I think it’s relevant in itself, but it can also help explain the lack of political payoff for the Affordable Care Act. The ACA is not only meeting expectations, it’s even exceeding them in major ways. Yet it still remains unpopular, even as millions of people are using the exchanges. There is no political payoff for the Democrats.
Liberals chalk this up to the right-wing noise machine, and no doubt that hurts. But part of the problem is that middle-class individuals still end up facing an individual product they are purchasing in a market, except without any subsidies. Though the insurance is better regulated, serious cost controls have so far not been part of the discussion. Polling shows half of the users of the exchanges are unsure if they can make their payments and are worried about being able to afford getting sick. This, in turn, blocks the formation of a broad-based coalition capable of defending, sustaining, and expanding the ACA in the way such coalitions formed for Social Security and Medicare.
Any serious populist agenda will need a broader agenda for wages, with full employment as the central idea. But it will also need to include social programs that are broader based and focused on cost controls; here, luckily, the public option is a perfect organizing metaphor.
Did you know that prosecutors in the 19th century were paid based on how many cases they tried? Or that Adam Smith argued in the Wealth of Nations for judges running on the profit motive? I have a new piece at Boston Review discussing the rise and fall of disinterested public service as a response to the abuses of the profit motive in government service: how we got away from that system, and how we are now going back to it. It's called Selling Fast: Public Goods, Profits, and State Legitimacy.
It's a review of Against the Profit Motive: The Salary Revolution in American Government, 1780–1940 by Yale legal historian Nicholas R. Parrillo, The Teacher Wars by Dana Goldstein, and Rise of the Warrior Cop by Radley Balko. There are a lot of interesting threads running through all three, and I really enjoyed working on this review. I hope you check it out.
In the aftermath of the electoral blowout, a reminder: the Great Recession isn’t over. In fact, GDP growth was slower in 2013 than in 2012. Let’s go to the FRED data:
There are dotted lines added at the end of 2012 to give you a sense that the economy didn't speed up throughout 2013. Even though we were another year into the "recovery," GDP growth slowed down a bit.
There are a lot of reasons people haven't discussed it this way. I saw a lot of people using year-over-year GDP growth for 2013 and proclaiming it a major success. A problem with using that method for a single point is that it's very sensitive to what is happening around the endpoints, and indeed the quarters before and after that data point featured negative or near-zero growth. Averaging it out (or doing year-over-year on a longer scale) shows a much worse story. Also, much of the celebrated convergence between the two years was really the BEA finding more austerity in 2012. (I added a line going back to 2011 to show that the overall growth rate has been lower since then. According to David Beckworth, this is the point when fiscal tightening began.)
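To see the endpoint sensitivity concretely, here's a toy calculation with invented quarterly GDP levels (not the actual BEA series): a strong final quarter flatters the Q4-over-Q4 number even when growth averaged over the whole year is weaker.

```python
# Toy illustration (made-up quarterly GDP levels, not BEA data) of how a
# single year-over-year comparison is hostage to its endpoints.
gdp = [100.0, 100.5, 101.0, 101.2,   # 2012, with a weak fourth quarter
       101.3, 101.9, 102.5, 103.8]   # 2013, with a strong fourth quarter

# Q4-over-Q4 growth for 2013 uses only the two endpoint quarters:
q4_over_q4 = (gdp[7] / gdp[3] - 1) * 100

# Comparing annual averages uses all eight quarters:
avg_2012 = sum(gdp[0:4]) / 4
avg_2013 = sum(gdp[4:8]) / 4
annual_avg = (avg_2013 / avg_2012 - 1) * 100

# The endpoint method reports roughly 2.6 percent; the average, 1.7 percent.
print(round(q4_over_q4, 2), round(annual_avg, 2))
```

The same series, two very different headline numbers, driven entirely by which quarters sit at the endpoints.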
Other people were hoping that the Evans Rule and open-ended purchases could stabilize "expectations" of inflation regardless of underlying changes in economic activity (I was one of them); that didn't happen. And yet others knew the sequestration was in place and unlikely to be moved, so they might as well make lemonade out of the austerity.
And that's overall growth. Wages are even uglier. (Note that in an election meant to repudiate liberalism, minimum wage hikes passed with flying colors.) The Federal Reserve's Survey of Consumer Finances is not a bomb-throwing document, but it's hard not to read class war into the latest one. From 2010, a year after the recession ended, to 2013, median incomes fell:
When 45 percent of the electorate puts the economy as the top issue in exit polls, and the economy performs like it does here, it’s no wonder we’re having wave election after wave election of discontentment.
Hey everyone, I have two new pieces out there I hope you check out.
The first is a piece about the financialization of the economy in the latest Washington Monthly. I’m heading up a new project at Roosevelt, more details to come soon, about the financialization of the economy, and this essay is the first product. And I’m happy to have it as part of a special issue on inequality and the economy headed up by the fine people at the Washington Center for Equitable Growth. There’s a ton of great stuff in there, including an intro by Heather Boushey, Ann O’Leary on early childhood programs, Alan Blinder on boosting wages, and a conclusion by Joe Stiglitz. It’s all really great stuff, and I hope it shows a deeper and wider understanding of an inequality agenda.
The second is the latest The Score column at The Nation, which focuses on the effect of high tax rates on inequality and on structuring markets. It's a writeup of the excellent Piketty, Saez, and Stantcheva "three elasticities" paper, and a continuation of a post here at this blog.
In the latest National Affairs, Jason Delisle and Jason Richwine make what they call "The Case for Fair-Value Accounting." This is the process of using the price of, say, student loans in the capital markets to budget and discount government student loans. (The issue also has articles walking back support for previously acceptable moderate-right ideas like Common Core and the EITC, showing the way conservative wonks are starting to line up for 2016.)
In the piece Delisle and Richwine make two basic mistakes in financial theory, mistakes that undermine their ultimate argument. Let’s dig into them, because it’s a wonderful opportunity to get some finance back into this blog (like it used to have back when it was cool).
Error 1: Their Definition of FVA Is Wrong
What is fair-value accounting (FVA)? According to the authors, FVA “factors in the cost of market risk,” meaning “the risk of a general downturn in the economy.” This market risk reflects the potential for defaults; it’s “the cost of the uncertainty surrounding future loan payments.”
These statements are false. There is a consensus that FVA incorporates significantly more than this definition of market risk.
Here’s the Financial Economists Roundtable, endorsing FVA: “Use of Treasury rates as discount factors, however, fails to account for the costs of the risks associated with government credit assistance — namely, market risk, prepayment risk, and liquidity risk.”
And the CBO specifically incorporates all these additional risks when it evaluates FVA: “Student loans also entail prepayment risk… investors… also assign a price to other types of risk, such as liquidity risk… CBO takes into account all of those risks in its fair-value estimates.”
This is a much broader set of concerns than what Delisle and Richwine bring up. For instance, FVA requires taxpayers to be treated as subject to the same liquidity and prepayment risks as the capital markets. Remember when the federal government stepped in to provide liquidity in late 2008, precisely because the capital markets couldn't provide it themselves? That gives us a clue that there might be some differences between public and private risks.
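To make the mechanics concrete, here's a minimal sketch (all rates and cash flows invented) of how the choice of discount rate alone flips a loan program from budgetary gain to budgetary loss. FCRA-style accounting discounts at the government's Treasury rate; FVA swaps in a higher market rate that folds in the premia discussed above.

```python
# Illustrative only: how the discount rate alone determines whether a loan
# program scores as a gain or a loss. All numbers are invented.
principal = 10_000.0   # amount lent
payment = 1_200.0      # fixed annual repayment
years = 10

def present_value(rate):
    # discounted value of the expected repayment stream
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

treasury_rate = 0.02    # FCRA-style: the government's own borrowing cost
fair_value_rate = 0.05  # FVA: adds premia for market, liquidity, prepayment risk

pv_fcra = present_value(treasury_rate)
pv_fva = present_value(fair_value_rate)

# Same expected cash flows; only the discount rate changed. The first
# figure is a budgetary surplus, the second a loss.
print(round(pv_fcra - principal, 2), round(pv_fva - principal, 2))
```

Nothing about the borrowers or their defaults changed between the two lines; the "cost" appears purely from the decision to price the loans as if taxpayers were private bondholders.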
Crucially, it’s not clear to me that taxpayers have the same prepayment risk as the capital markets. Private holders of student loans are terrified that their loans might be paid back too quickly, because they are likely to get paid back when interest rates are low and it will be tough to reinvest at the same rate. This is a particularly big risk with the negative convexity of student loan payments, which can be prepaid without penalty. Private actors need to be compensated generously for this risk.
Do taxpayers face the same risk? If student loans owed to the government were paid down faster than anyone expected, would taxpayers be furious? I wouldn't be. I certainly wouldn't ask "how are we going to continue to make the profit we were making?" as a citizen, though it would be an essential question for a private bondholder. Either way, it's as much a political question as an economic one. (I make the full argument for this in a blog post here.)
Error 2: Their Definition of Market Risk Is Wrong
The authors like FVA because it accounts for market risk. But what is market risk? According to Delisle and Richwine, market risk is “associated with expecting future loan repayments,” as “[s]tudents might pay back the expected principal and interest” but they also may not. It is also “the risk of a general downturn in the economy… market risk cannot be diversified away.”
So the first part is wrong: market risk is not credit risk, the risk of default or missed payments. The International Financial Reporting Standards (IFRS 7), for instance, require reporting market risk separately from credit risk, because they are obviously two different things. I've generally only heard market risk used in the context of bond portfolios to mean interest rate risk, which they also don't mention. So if market risk isn't credit risk or interest rate risk, what is it?
I’m not sure. What I think is going on is they are confusing the concept with the market risk of a stock, specifically its beta. A stock’s beta is its sensitivity to overall equity prices. (Pull up a random stock page and you’ll see the beta somewhere.) It’s very common phrasing to say this risk can’t be diversified away and is a proxy for the risk of general downturns in the economy, which is the same language used in this piece.
Market risk for stocks is the question of how much your portfolio will go down if the market as a whole goes down. But this has nothing to do with student loans, because students (aside from an enterprising few) don’t sell equity; they take out loans. If students paid for school with equity, in theory an economic downturn would lead to less revenue, since students would make less money overall. But even then it’s a shaky concept.
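For reference, here's the textbook beta calculation the concept comes from: the covariance of a stock's returns with the market's, divided by the variance of the market's returns. The return series are made up for illustration.

```python
# Textbook beta: covariance of the stock's returns with the market's,
# divided by the variance of the market's returns. Returns are made up.
market = [0.02, -0.01, 0.03, -0.02, 0.01]
stock  = [0.03, -0.02, 0.05, -0.03, 0.02]   # moves roughly 1.6x the market

n = len(market)
mean_m = sum(market) / n
mean_s = sum(stock) / n
cov = sum((m - mean_m) * (s - mean_s) for m, s in zip(market, stock)) / n
var = sum((m - mean_m) ** 2 for m in market) / n

beta = cov / var   # sensitivity to overall equity-market moves
print(round(beta, 2))
```

A beta above 1 means the stock amplifies market swings; that undiversifiable co-movement is the "market risk" of equity pricing, which is a different animal from whether a borrower misses a loan payment.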
This isn’t just academic. There’s a reason people don’t speak of a one-to-one relationship between a market downturn and the value of a bond portfolio, as the authors’ “market risk” definition does. If the economy tanks, credit risk increases, so bonds are worth less, but interest rates fall, meaning the same bonds are worth more. How this all balances is complicated, and strongly driven by the distribution of bond maturities. This is why financial risk management distinguishes between credit, liquidity, and interest rate risks, and doesn’t conflate those concepts as the authors do.
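A minimal sketch of those offsetting effects (bonds and rates invented): suppose a downturn drops the risk-free rate by 200 basis points while credit spreads rise 150 basis points, so every bond's discount rate falls 50 basis points on net. Even with an identical rate move, the price effect differs sharply by maturity.

```python
# Sketch (all bonds and rates invented) of offsetting downturn effects:
# the risk-free rate falls 200bp, the credit spread rises 150bp, so the
# net discount rate falls 50bp. Price impact still depends on maturity.
def bond_price(coupon, face, years, risk_free, spread):
    r = risk_free + spread
    return (sum(coupon / (1 + r) ** t for t in range(1, years + 1))
            + face / (1 + r) ** years)

long_before  = bond_price(40, 1000, years=10, risk_free=0.03, spread=0.010)
long_after   = bond_price(40, 1000, years=10, risk_free=0.01, spread=0.025)
short_before = bond_price(40, 1000, years=2,  risk_free=0.03, spread=0.010)
short_after  = bond_price(40, 1000, years=2,  risk_free=0.01, spread=0.025)

# The 10-year bond gains several times what the 2-year does.
print(round(long_after - long_before, 2), round(short_after - short_before, 2))
```

Same shock, same net rate change, very different outcomes by maturity, which is why no one collapses all of this into a single "market risk" number the way the authors do.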
(Though they are writing as experts, I think they are just copying and pasting from the CBO’s confusing and erroneous definition of “market risk.” If they are sourcing any kind of common financial industry practices or definitions, I don’t see it. I guess Jason Richwine didn’t get a chance to study finance while publishing his dissertation.)
Here again I’d want to understand more how the value of student loans to taxpayers moves with interest rates. Repayments are mentioned above. And for private lenders, higher interest rates mean that they can sell bonds for less and that they’re worth less as collateral. They need to be compensated for this risk. Do taxpayers have this problem to the same extent? If interest rates rise, do we worry we can’t sell the student loan portfolio for the same amount to another government, or that we can’t use it as collateral to fund another war? If not, why would we use this market rate?
Is This Just About Credit Risk?
Besides all the theoretical problems mentioned above, there’s also the practical problem that the CBO uses the already existing private market for student loans (“relied mainly on data about the interest rates charged to borrowers in the private student loan market”), even though there’s obviously a massive adverse selection problem there. Though not an error, it’s a third major problem for the argument. The authors don’t even touch this.
But for all the talk about FVA, the only real concern the authors bring up is credit risk. “What if taxpayers don’t get paid?” is the question raised over and over again in the piece. The authors don’t articulate any direct concerns about, say, a move in interest rates changing the value of a bond portfolio, aside from the possibility that it might mean more credit losses.
So dramatically scaling back consumer protections like bankruptcy and statutes of limitations for student debtors wasn't enough for the authors. Fair enough. But there's an easy fix: the government could buy credit protection for losses in excess of those expected on, say, $10 billion of its portfolio, and use that price as a supplemental discount. This would be quite low-cost and provide useful information. But it's a far cry from FVA, even if FVA's proponents don't quite understand that.
QE3 is over. Economists will debate the significance of it for some time to come. What sticks out to me now is that it might have been entirely backwards: what if the Fed had set the price instead of the quantity?
To put this in context for those who don’t know the background, let’s talk about carbon cooking the planet. Going back to Weitzman in the 1970s (nice summary by E. Glen Weyl), economists have focused on the relative tradeoff of price versus quantity regulations. We could regulate carbon by changing the price, say through carbon taxes. We could also regulate it by changing the quantity, say by capping the amount of carbon in the air. In theory, these two choices have identical outcomes. But, of course, they don’t. It depends on the risk involved in slight deviations from the goal. If carbon above a certain level is very costly to society, then it’s better to target the quantity rather than the price, hence setting a cap on carbon (and trading it) rather than just taxing it.
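Here's a toy simulation of that logic (all numbers invented, and it deliberately ignores the abatement-cost half of Weitzman's tradeoff). Firms emit until the marginal benefit of one more unit equals the price they face, so under a tax emissions move one-for-one with an uncertain shock; under a cap they don't move at all. With sharply convex damages above a threshold, the cap wins.

```python
import random

# Toy Weitzman-style comparison (all numbers invented). Under the tax,
# emissions e = theta - TAX track the uncertain shock theta; under the
# cap, emissions are pinned at CAP no matter what theta turns out to be.
random.seed(0)

THRESHOLD = 10.0
def damage(e):
    # damages are zero below the threshold and sharply convex above it
    return 0.0 if e <= THRESHOLD else 100.0 * (e - THRESHOLD) ** 2

TAX, CAP = 5.0, 10.0
trials = 10_000
tax_total = cap_total = 0.0
for _ in range(trials):
    theta = random.gauss(15.0, 2.0)   # unknown when policy is committed to
    e_tax = max(theta - TAX, 0.0)     # price instrument: emissions wander
    e_cap = CAP                       # quantity instrument: emissions fixed
    tax_total += damage(e_tax)
    cap_total += damage(e_cap)

# Average damage under the tax is large; under the cap it is exactly zero.
print(round(tax_total / trials, 1), cap_total / trials)
```

With the damage curve this steep, slight deviations past the threshold are what matter, so fixing the quantity dominates fixing the price; flip the assumption (flat damages, steep abatement costs) and the ranking reverses.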
This same debate on the tradeoff between price and quantity intervention is relevant for monetary policy, too. And here, I fear the Federal Reserve targeted the wrong one.
Starting in December 2012, the Federal Reserve started buying $45 billion a month of long-term Treasuries. Part of the reason was to push down the interest rates on those Treasuries and boost the economy.
But what if the Fed had done that backwards? What if it had picked a price for long-term securities, and then figured out how much it would have to buy to get there? Then it would have said, “we aim to set the 10-year Treasury rate at 1.5 percent for the rest of the year” instead of “we will buy $45 billion a month of long-term Treasuries.”
This is what the Fed does with short-term interest rates. Taking a random example from 2006, it doesn’t say, “we’ll sell an extra amount in order to raise the interest rate.” Instead, it just declares, “the Board of Governors unanimously approved a 25-basis-point increase in the discount rate to 5-1/2 percent.” It announces the price.
Remember, the Federal Reserve also did QE with mortgage-backed securities, buying $40 billion a month in order to bring down the mortgage rate. But what if it just set the mortgage rate? That’s what Joseph Gagnon of the Peterson Institute (who also helped execute the first QE), argued for in September 2012, when he wrote, “the Fed should promise to hold the prime mortgage rate below 3 percent for at least 12 months. It can do this by unlimited purchases of agency mortgage-backed securities.” (He reiterated that argument to me in 2013.) Set the price, and then commit to unlimited purchases. That’s good advice, and we could have done it with Treasuries as well.
What difference would this have made? The first is that it would be far easier to understand what the Federal Reserve was trying to do over time. What was the deal with the tapering? I’ve read a lot of commentary about it, but I still don’t really know. Do stocks matter, or flows? I’m reading a lot of guesswork. But if the Federal Reserve were to target specific long-term interest rates, it would be absolutely clear what they were communicating at each moment.
The second is that it might have been easier. People hear “trillions of dollars” and think of deficits instead of asset swaps; focusing on rates might have made it possible for people to be less worried about QE. The actual volume of purchases might also have been lower, because the markets are unlikely to go against the Fed on these issues.
And the third is that if low interest rates are the new normal, through secular stagnation or otherwise, these tools will need to be formalized. We should look to avoid the herky-jerky nature of Federal Reserve policy in the past several years, and we can do this by looking to the past.
Policy used to be conducted this way. Providing evidence that there’s been a great loss of knowledge in macroeconomics, JW Mason recently wrote up this great 1955 article by Alvin Hansen (of secular stagnation fame), in which Hansen takes it for granted that economists believe intervention along the entirety of the rate structure is appropriate action.
He even finds Keynes arguing along these lines in The General Theory: “Perhaps a complex offer by the central bank to buy and sell at stated prices gilt-edged bonds of all maturities, in place of the single bank rate for short-term bills, is the most important practical improvement which can be made in the technique of monetary management.”
The normal economic argument against this is that all the action can be done with the short-rate. But, of course, that is precisely the problem at the zero lower bound and in a period of persistent low interest rates.
Sadly for everyone who imagines a non-political Federal Reserve, the real argument is political. And it’s political in two ways. The first is that the Federal Reserve would be accused of planning the economy by setting long-term interest rates. So it essentially has to sneak around this argument by adjusting quantities. But, in a technical sense, they are the same policy. One is just opaque, which gives political cover but is harder for the market to understand.
And the second political dimension is that if the Federal Reserve acknowledges the power it has over interest rates, it also owns the recession in a very obvious way.
This has always been a tension. As Greta R. Krippner found in her excellent Capitalizing on Crisis, in 1982 Frank Morris of the Boston Fed argued against ending their disaster tour with monetarism by saying, “I think it would be a big mistake to acknowledge that we were willing to peg interest rates again. The presence of an [M1] target has sheltered the central bank from a direct sense of responsibility for interest rates.” His view was that the Fed could avoid ownership of the economy if it only just adjusted quantities.
But the Federal Reserve did have ownership then, as it does now. It has tools it can use, and will need to use again. It’s important for it to use the right tools going forward.
Janet Yellen gave a reasonable speech on inequality last week, and she barely managed to finish it before the right-wing went nuts.
It's attracted the standard set of criticisms, like people asserting that low rates give banks increasingly "wide spreads" on lending, a claim made with no evidence and without addressing that spreads might have fallen overall. Note that Bernanke also gave similar speeches on inequality (though the right went off the deep end when it came to Bernanke too), and Jonathan Chait notes how aggressively Greenspan discussed controversial policies to crickets on the right.
But I also just saw that Michael Strain has written a column arguing that "by focusing on income inequality [Yellen] has waded into politically choppy waters." Putting the specifics of the speech to the side, it's simply impossible to talk about the efficacy of monetary policy and full employment during the Great Recession without discussing inequality, or without discussing economic issues where inequality is in the background.
Here are five inequality-related issues, off the top of my head, that are important to monetary policy and full employment. The arguments may or may not be convincing (I'm not sure where I stand on some), but ruling these topics entirely out of bounds will just lead to a worse understanding of what the Federal Reserve needs to do.
The Not-Rich. The material conditions of the poorest and everyday Americans are an essential part of any story of inequality. If the poor are doing great, do we really care if the rich are doing even better? Yet in this recession everyday Americans are doing terribly, and it has macroeconomic consequences.
Between the end of the recession in 2009 and 2013, median wages fell an additional 5 percent. One channel of monetary policy is changing the relative attractiveness of saving, yet according to recent work by Zucman and Saez, 90 percent of Americans aren't able to save any money right now. If that is the case, it's that much harder to make monetary policy work.
Indeed, one effect of committing to low rates in the future is making it more attractive to invest where debt servicing is difficult, for example through subprime auto loans, which are booming (and unregulated under Dodd-Frank because of auto-dealership Republicans). Meanwhile, policy tools that we know flatten low-end inequality between the 10th and 50th percentiles, like the minimum wage, which has fallen in value, could potentially boost aggregate demand.
Expectations. The most influential theories about how monetary policy can work when we are at the zero lower bound, as we’ve been for the past several years, involve “expectations” of future inflation and wage growth.
One problem with changing people’s expectations of the future is that those expectations are closely linked to their experiences of the past. And if people’s strong expectations of the future are low or zero nominal growth in incomes because everything around them screams inequality, because income growth and inflation rates have been falling for decades, strongly worded statements and press releases from Janet Yellen are going to have less effect.
The Rich. The debate around secular stagnation is ongoing. Here’s the Vox explainer. Larry Summers recently argued that the term emphasizes “the difficulty of maintaining sufficient demand to permit normal levels of output.” Why is this so difficult? “[R]ising inequality, lower capital costs, slowing population growth, foreign reserve accumulation, and greater costs of financial intermediation.” There’s no sense in which you can try to understand the persistence of low interest rates and their effect on the recovery without considering growing inequality across the Western world.
Who Does the Economy Work For? To understand how well changes in the interest-sensitive components of investment might work, a major monetary channel, you need to have some idea of how the economy is evolving. And stories about how the economy works now are going to be tied to stories about inequality.
The Roosevelt Institute will have some exciting work by JW Mason on this soon, but if the economy is increasingly built around disgorging the cash to shareholders, we should question how this helps or impedes full output. What if low rates cause, say, the Olive Garden to focus less on building, investing, and hiring, and more on reworking its corporate structure so it can rent its buildings back from another corporate entity? Both are in theory interest-sensitive, but the first brings us closer to full output, and the second merely slices the pie a different way in order to give more to capital owners.
Alternatively, if you believe (dubious) stories about how the economy is experiencing trouble as a result of major shifts brought about by technology and low skills, then we have a different story about inequality and the weak recovery.
Inequality in Political and Market Power. We should also consider the political and economic power of industry, especially the financial sector. Regulation is an important component of keeping financial instability in check, but a powerful financial sector makes regulations useless.
But let's look at another issue: monetary policy's influence on underwater mortgage refinancing, a major demand booster in the wake of a housing collapse. As the Federal Reserve Bank of New York found, the spread between primary and secondary rates increased during the Great Recession, especially into 2012 as HARP was revamped and more aggressive zero-bound policies were adopted. The Fed is, obviously, cautious about claiming the banks have pricing power, but it does look like the market power of finance was able to capture lower rates and keep demand lower than it needed to be. The share of the top 0.1 percent of earners working in finance doubled over the past 30 years, and it's hard not to see that as related to displays of market and political power like this.
These ideas haven’t had their tires kicked. This is a blog, after all. (As I noted, I’m not even sure if I find them all convincing.) They need to be modeled, debated, given some empirical handles, and so forth. But they are all stories that need to be addressed, and it’s impossible to do any of that if there’s massive outrage at even the suggestion that inequality matters.
Guess what? I’m challenging you to a game of tennis in three days. Here’s an issue though: I don’t know anything about tennis and have never played it, and the same goes for you.
In order to prepare for the game, we are each going to do something very different. I’m going to practice playing with someone else who isn’t very good. You, meanwhile, are going to train with an expert. But you are only going to train by talking about tennis with the expert, and never actually play. The expert will tell you everything you need to know in order to win at tennis, but you won’t actually get any practice.
Chances are I’m going to win the game. Why? Because the task of playing tennis isn’t just reducible to learning a set of things to do in a certain order. There’s a level of knowledge and skills that become unconsciously incorporated into the body. As David Foster Wallace wrote about tennis, “The sort of thinking involved is the sort that can be done only by a living and highly conscious entity, and then it can really be done only unconsciously, i.e., by fusing talent with repetition to such an extent that the variables are combined and controlled without conscious thought.” Practicing doesn’t mean learning rules faster; it means your body knows instinctively where to put the tennis racket.
The same can be said of most skills, like learning how to play an instrument. Expert musicians instinctively know how the instrument works. And the same goes for driving. Drivers obviously learn certain rules (“stop at the stop sign”) and heuristics (“slow down during rain”), but much of driving is done unconsciously and reflexively. Indeed a driver who needs to think through procedurally how to deal with, say, a snowy off ramp will be more at risk of an accident than someone who instinctively knows what to do. A proficient driver is one who can spend their mental energy making more subtle and refined decisions based on determining what is salient about a specific situation, as past experiences unconsciously influence current experiences. Our bodies and minds aren’t just a series of logic statements but also a series of lived-through meanings.
This is my intro-level remembrance of Hubert Dreyfus' argument against artificial intelligence via Merleau-Ponty's phenomenology (more via Wikipedia). It's been a long time since I followed any of this, and I'm not able to keep up with the current debates. As I understand it, Dreyfus' arguments were hated by computer scientists in the 1970s, then appreciated in the 1990s, and now computer scientists assume cheap computing power can use brute force and some probability theory to work around them.
But my vague memory of these debates is why I imagine driverless cars are going to hit a much bigger obstacle than most people expect. I was reminded of all this by a recent article at Slate about Google's driverless cars, by Lee Gomes:
[T]he Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway […] But the maps have problems, starting with the fact that the car can’t travel a single inch without one. […]
Because it can’t tell the difference between a big rock and a crumbled-up piece of newspaper, it will try to drive around both if it encounters either sitting in the middle of the road. […] Computer scientists have various names for the ability to synthesize and respond to this barrage of unpredictable information: “generalized intelligence,” “situational awareness,” “everyday common sense.” It’s been the dream of artificial intelligence researchers since the advent of computers. And it remains just that.
Focus your attention on the issue that the car can't tell the difference between a dangerous rock to avoid and a newspaper to drive through. As John Dewey found when he demolished the notion of a reflex arc, reflexes become instinctual, so attention is paid only when something new breaks the habitual response. Put differently, experienced human drivers don't first see the rock and then decide to move; the moving is just as much what brings the rock into view as something seen. The functionalist breakdown, necessary to the propositional logic of computer programming, is just an ex post justification for a whole, organic action. This is the "everyday common sense" alluded to in the piece.
Or let's put it a different way. Imagine learning tennis by setting up one of those machines that shoots tennis balls at you the same repetitive way. There would be a strict limit to how much you could learn, or to how much that one motion would translate into being able to play an entire game. Teaching cars to drive by essentially having them follow a map means they are learning tennis by just repeating the same ball toss, over and over again.
Again, I'm willing to entertain the argument that the pure, brute force of computing power will be enough: stack enough processors on top of each other and they'll eventually bang out an answer on what to do. But if the current approach requires telling cars absolutely everything that will be around them, instead of some sort of computational ability to react to the road itself, including via experience, this will be a much harder problem. I hope it works, but maybe we can slow down the victory laps that are already calling for massive overhauls of our understanding of public policy (like the idea that public buses are obsolete) until these cars encounter a situation they don't know in advance.
There's a new argument about taxes: the United States is already far too progressive with taxation, it says, and if we want to build a better, egalitarian future we can't do it through a "soak the rich" agenda. It's the argument of this recent New York Times editorial by Edward D. Kleinbard, and of a longer piece by political scientists Cathie Jo Martin and Alexander Hertel-Fernandez at Vox. I'm going to focus on the Vox piece because it is clearer about what it is arguing.
There, the researchers note that the countries “that have made the biggest strides in reducing economic inequality do not fund their governments through soak-the-rich, steeply progressive taxes.” They put up this graphic, based on OECD data, to make this point:
You can quickly see that the concept of “progressivity” is doing all the work here, and I believe the way they use that word is problematic. What does it mean for Sweden to be one of the least progressive tax states, and the United States the most?
Let’s graph out two ways of soaking the rich. Here’s Rich Uncle Pennybags in America, and Rik Farbror Påse av Mynt in Sweden, as well as their respective tax bureaus:
When people talk about soaking the rich, they are usually talking about the marginal tax rates the highest income earners pay. And as we can see, in Sweden the rich pay a much higher marginal tax rate. As Matt Bruenig at Demos notes, Sweden definitely taxes its rich much more (he also notes that what they do with those taxes is different than what Vox argues).
At this point many people would argue that our taxes are more progressive because the middle-class in the United States is taxed less than the middle-class in Sweden. But that is not what Jo Martin and Hertel-Fernandez are arguing.
They are instead looking at the right-side of the above graphic. They are measuring how much of tax revenue comes from the top decile (or, alternatively, the concentration coefficient of tax revenue), and calling that the progressivity of taxation (“how much more (or less) of the tax burden falls on the wealthiest households”). The fact that the United States gets so much more of its tax revenue from the rich when compared to Sweden means we have a much more progressive tax policy, one of the most progressive in the world. Congratulations?
The problem is, of course, that we get so much of our tax revenue from the rich because we have one of the highest rates of inequality among peer nations. How unequal a country is will be just as much of a driver of the progressivity of taxation as its actual tax policies. To see how absurd this is: even a flat tax on a very unequal income distribution will register as “progressive,” since more of the revenue will come from the top of the income distribution, just because that’s where all the money is. Yet how would that be progressive taxation?
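To make the mechanical point concrete, here’s a toy calculation with entirely made-up numbers (not the OECD data): under a perfectly flat tax, the top decile’s share of tax revenue simply equals its share of income, so an unequal country scores as “progressive” on this metric without any progressive rate structure at all.

```python
# Toy illustration with hypothetical numbers: under a flat tax, the top
# decile's share of tax revenue equals its share of income, so the
# "share of taxes from the top" metric tracks inequality, not tax rates.

def top_decile_tax_share(incomes, flat_rate=0.25):
    """Share of flat-tax revenue paid by the top 10% of earners."""
    ranked = sorted(incomes, reverse=True)
    top = ranked[: max(1, len(ranked) // 10)]
    return (flat_rate * sum(top)) / (flat_rate * sum(ranked))

equal = [100] * 10                                    # everyone earns the same
unequal = [450, 100, 90, 80, 70, 60, 50, 40, 35, 25]  # top decile takes 45%

print(top_decile_tax_share(equal))    # 0.1  -> the same flat tax looks "flat"
print(top_decile_tax_share(unequal))  # 0.45 -> and here looks "progressive"
```

The tax rate is identical in both cases; only the income distribution changed.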
We can confirm this. Let’s take the OECD data that is likely where their metric of tax progressivity comes from, and plot it against the market distribution. This is the share of taxes that come from the top decile, versus how much market income the top decile takes home:
As you can see, they are related. (The same goes if you use gini coefficients.)
Beyond the obvious correlation, there’s a much deeper and more important relationship here. As Saez, Piketty, and Stantcheva find, the fall in top tax rates over the past 30 years is a major driver of the explosion of income inequality during that same period. Among other mechanisms, lower marginal tax rates give high-end management a greater incentive to bargain for higher wages, and corporate structures an incentive to pay them out. This is an important element in the creation of our recent inequality, and it shouldn’t get lost amid odd definitions of the word “progressive,” a word that always seems to create confusion.
I have a new column at The Score: Why Prisons Thrive Even When Budgets Shrink. Even as the era of big government was over, the incarceration rate quintupled over just 20 years. It had previously been stable for a century. Logically, three actors set the rate of incarceration: here’s how they made this radical transformation of the state.
(Wonkish, as they say.)
I wrote a piece in the aftermath of the Michael Brown shooting and subsequent protests in Ferguson noting that the police violence, rather than a federalized, militarized affair, should be understood as locally driven from the bottom-up. Others made similar points, including Jonathan Chait (“Why the Worst Governments in America Are Local Governments”) and Franklin Foer (“The Greatest Threat to Our Liberty Is Local Governments Run Amok”). Both are smart pieces.
The Foer piece came in for a backlash on a technical point that I want to dig into, in part because I think it is illuminating and helps prove his point. Foer argued that “If there’s a signature policy of this age of unimpeded state and local government, it’s civil-asset forfeiture.” Civil-asset forfeiture is where prosecutors press charges against property itself for being illicit, a legal tool that is prone to abuse. (I’m going to assume you know the basics. This Sarah Stillman piece is fantastic if you don’t, or even if you do.)
Two libertarian critics jumped at that line. Jonathan Blanks of the Cato Institute wrote “the rise of civil asset forfeiture is a direct result of federal involvement in local policing. In what are known as ‘equitable sharing’ agreements, federal law enforcement split forfeiture proceeds with state and local law authorities.”
Equitable sharing is a system where local prosecutors can choose to send their cases to the federal level and, if successful, up to 80 percent of the forfeited funds go back to local law enforcement. So even in states where the law lets law enforcement keep less than 80 percent of funds to try to prevent corruption (by handing the money to, say, roads or schools), “federal equitable sharing rules mandate those proceeds go directly to the law enforcement agencies, circumventing state laws to prevent ‘policing for profit.’”
Lucy Steigerwald at Vice addresses all three posts, and makes a similar point about Foer: “Foer mentions the importance of civil asset forfeiture while skirting around the fact that forfeiture laws incentivize making drug cases into federal ones, so as to get around states with higher burdens of proof for taking property…Include a DEA agent in your drug bust—making it a federal case—and suddenly you get up to 80 percent of the profits from the seized cash or goods. In short, it’s a hell of a lot easier for local police to steal your shit thanks to federal law.”
Equitable sharing, like all law in this realm, needs to be gutted yesterday, and I’m sure there’s major agreement on across-the-board reforms. But I think there are three serious problems with viewing federal equitable sharing as the main driver of state and local forfeitures.
Legibility, Abuse, Innovation
The first is that we are talking about equitable sharing partly because it’s the only part of the system we are capable of measuring. There’s a reason that virtually every story about civil asset forfeiture highlights equitable sharing: it’s one of the few places where there are good statistics on how civil asset forfeiture is carried out.
As the Institute for Justice found when it tried to create a summary of the extent of the use of civil asset forfeiture, only 29 states have any requirement to record the use of civil asset forfeiture at all, and most are under no obligation to share that information, much less make it accessible. It took two years of FOIA requests, and even then 8 of those 29 states didn’t bother responding and two provided unusable data. There is problematic double-counting, among other problems, in the data that is available. As they concluded, “Thus, in most states, we know very little about the use of asset forfeiture” at the county and state level.
We do know about it at the federal level however. You can look up the annual reports of the federal Department of Justice’s Assets Forfeiture Fund (AFF) and the Treasury Forfeiture Fund (TFF) of the U.S. Department of the Treasury. There you can see the expansion of the program over time.
You simply can’t do this in any way at the county or state level. You can’t see statistics on whether equitable sharing is a majority of forfeiture cases – though, importantly, equitable sharing was a minority of funds in the few states the Institute for Justice was able to measure, and local forfeitures were growing rapidly – or on the relationship between the two. It’s impossible to analyze the number of forfeiture cases (as opposed to the amount seized), which is what you’d want to measure to see increased aggressiveness in its use on small cases.
This goes to Foer’s point that federal abuses at least receive some daylight, compared to the black boxes of county prosecutor’s offices. This does, in turn, point the flashlight towards the Feds, and gives the overall procedure a Federal focus. But this is a function of how well locals have fought off accountability.
The second point is that the states already have laws that are more aggressive than the Fed’s. A simple graph will suffice (source). The Feds return 80 percent of forfeited assets to law enforcement. What do the states return?
Only 15 states have laws that are below the Fed’s return threshold. Far, far more states already have a more expansive “policing for profit” regime set at the state level than what is available at the federal level. It makes sense that for those 15 states equitable sharing changes the incentives, of course, and the logic extends to the criteria necessary to make a seizure. But the states, driven no doubt by police, prosecutors, and tough-on-crime lawmakers, have written very aggressive laws in this manner. They don’t need the Feds to police for profit; if anything the Feds would get in the way.
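The incentive here is simple arithmetic. As a sketch with hypothetical seizure amounts (the 80 percent federal return rate is from the text; the state rates are invented for illustration):

```python
# Hypothetical seizure arithmetic: equitable sharing only sweetens the
# deal for local law enforcement where state law returns less than the
# federal program's 80 percent.

FEDERAL_RETURN = 0.80  # share returned to locals via equitable sharing

def better_route(state_return, seizure=10_000):
    """Which route pays local law enforcement more for a given seizure?"""
    return "federal" if FEDERAL_RETURN * seizure > state_return * seizure else "state"

print(better_route(0.50))  # state keeps only 50% -> "federal"
print(better_route(1.00))  # state keeps 100%     -> "state"
```

For the many states that already return 80 percent or more, the federal route offers no financial upside at all.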
The third is that the innovative expansion of civil asset forfeiture is driven at the local level just as much as at the federal level, if only because equitable sharing can only go into effect if there’s a federal crime being committed. So aggressive forfeiture of the cars of drunk drivers or of those who hire sex workers (even if it is your wife’s car) is a local innovation, because there’s no federal law to advance it.
There’s a lot of overlap for reform across the political spectrum here, but seeing the states as merely the pawns of the federal government when it comes to forfeiture abuse is problematic. Ironically, we see this precisely because we can’t see what the states are doing, and the hints we do have point to awful abuses, driven by the profit motive from the bottom up. To take two prominent, excellent recent examples: Stillman at the New Yorker: “through a program called Equitable Sharing…At the Justice Department, proceeds from forfeiture soared from twenty-seven million dollars in 1985 to five hundred and fifty-six million in 1993.”
And Michael Sallah, Robert O’Harrow Jr., Steven Rich of the Washington Post: “There have been 61,998 cash seizures made on highways and elsewhere since 9/11 without search warrants or indictments through the Equitable Sharing Program, totaling more than $2.5 billion.”
If either wanted to get these numbers at the state and local levels, it would be impossible. I understand why one would want to put an empirical point on it, and the law needs to be changed no matter what, but the core empirical work relating payouts to equitable sharing isn’t as strong as you’d imagine. Most of the critical results aren’t significant at the 5% level, and even then you are talking about a 25% increase in just equitable sharing (as opposed to the overall amount forfeited by locals, which we can’t measure) relative to a 100% change in state law payouts.
Which makes sense – no prosecutor is going to be fired for bringing in too much money into the school district, if only because money is fungible on the back end.
In light of the increasingly good news about the launch of the Affordable Care Act, I wanted to write about what experts think should be next on the health care front. Particularly with the implosion of the right-wing argument that there would be something like a death spiral, I wanted to flesh out what the left’s critique would be at this point. Several people pointed me in the direction of the original bill that passed the House, the one that was abandoned after Scott Brown’s upset victory in early 2010 in favor of passing the Senate bill, as a way forward.
Here’s the piece. Hope you check it out.
Many people are pointing to the police violence unfolding in Ferguson, Missouri as part of a “libertarian moment.” Dave Weigel of Slate writes “Liberals are up in arms about police militarization. Libertarians are saying: What took you so long?” Tim Carney of the Washington Examiner notes that the events in Ferguson bolster the claim that we are experiencing a libertarian moment because “libertarianism’s warnings today ring truer than ever.”
It will be a great thing if the horror of what is going on builds a broader coalition for putting the excess of the carceral state in check. But I also think that Ferguson presents a problem for libertarian theory about this situation in particular and the state in general. Their argument is a public choice-like story in which the federal government is the main villain. But this will only tell a partial story, and probably not even the most important one. And, as the deeper story of the town is told, the disturbing economics of the city look similar to what the right thinks is the ideal state. Let’s take these in turn.
People on the right are telling a story where the problems of the police are primarily driven by the federal government. As Rand Paul said: “Not surprisingly, big government has been at the heart of the problem.” Big government here is strictly a federal phenomenon though, one where “Washington has incentivized the militarization of local police precincts.” Paul Ryan’s comment on Ferguson is telling: “But in all of these things, local control, local government, local authorities who have the jurisdiction, who have the expertise, who are actually there are the people who should be in the lead.” (h/t Digby) The culprits in these criticisms are usually programs, accelerated after the start of the War on Terror, that give military surplus to local police.
But rather than just a top-down phenomenon of centralized, federal bureaucrats, the police violence we see is just as much a bottom-up, locally-driven affair. “Militarized” police equipment didn’t shoot Michael Brown, or kill Eric Garner in a chokehold. And aggressive police reactions to protests haven’t required extensive military equipment over the past 40 years.
As Tamara Nopper and Mariame Kaba note in the pages of Jacobin, the idea that there is suddenly a “militarized” police force here betrays that the militarization began in the 1960s in response to the urban crisis. And even though militarized dollars have flowed to all parts of the country, it is in black urban areas where the equipment has been deployed in an aggressive manner by local authorities. And militarization isn’t just about equipment, but about the broader framework of mass incarceration and zero-tolerance, order-maintenance policing.
You can see the consequences of this through simple polls. As Dorian Warren notes, “Because for black Americans, what Sen. Paul disparages as ‘big government’ is actually the government we trust most…blacks are the least likely [racial and economic group] to trust their local governments.” Though these military equipment programs, which give away all kinds of odd things, are a serious problem and should be curtailed, they should be placed within the context of a criminal justice system that is punitive towards minorities and is among the most expansive in the world.
This has political consequences. Democrats have been weak on criminal justice issues, but for several years Blue Dog Democrats, led by Jim Webb, have pushed for reform. Yet Webb’s big bill to bring together non-binding suggestions for reform, the National Criminal Justice Commission Act, wasn’t blocked by centrist Democrats. It was blocked by libertarians and conservatives. Most Republicans, including Tom Coburn and Rand Paul, voted against it on the basis of “states’ rights.” Commentators on the right found those arguments dubious and scandalous, and this will become more and more of an issue if the problem is framed as one of the federal government alone.
The Right-Wing Dream City
If you are a libertarian, you probably have two core principles when it comes to how the government carries out its duties. The first is that people should pay taxes in direct proportion to how much they benefit from government services. The government is like another business, and to the extent it can provide public-like goods the market will not, people should pay only as much as they benefit from them. Taxes should essentially be the individual’s price of “purchasing” a government service.
You also probably want as much of what the government does to be privatized as possible. Government services provided by private firms use the profit motive to seek out efficiencies and innovation to provide the best service possible. But even if it doesn’t, the right’s public choice theory tells us that private agents will do a better job tending to services because of the essential impulse of the public state to corruption.
So what do we see in Ferguson? It’s becoming clear that there’s a deep connection between an out-of-control criminal justice system and debt peonage. As Vox reports, “court fees and fines are the second largest source of funds for the city; $2.6 million was collected in 2013 alone.”
These fines that come from small infractions will grow rapidly when people can’t afford to pay them immediately, much less hire lawyers to handle the complicated procedures. So you have a large population with warrants and debts living in a city that functions as a modern debtors’ prison. This leads to people functioning as second-class citizens in their own communities. And as Jelani Cobb notes in the New Yorker, this debtor status keeps many citizens of Ferguson off the streets, not protesting or acting as political agents.
How did we get here? As Sarah Stillman noted in a blockbuster New Yorker story, this is referred to as an “offender-funded” justice system, one that aims to “to shift the financial burden of probation directly onto probationers.” How? “Often, this means charging petty offenders—such as those with traffic debts—for a government service that was once provided for free.”
As Stillman notes, this process has grown alongside state-level efforts to privatize probation and other incarceration alternatives by replacing them with for-profit companies. (Missouri is one of many states that does this.) There are significant worries that this privatized probation industry has severe corruption and abuse problems. Crucially, their incentive is less rehabilitation or judging actual threats to the public, and more to keep people in a permanent debt peonage. The state, in turn, gets funded without having to raise any general taxes.
Having people who “use” the criminal justice system pay for it strikes me as pretty close to the libertarian vision of how taxes should function. And having state power executed by private, profit-seeking entities is the logical outcome of how they think services should function. I’m sure a libertarian would say that they are against this kind of outcome, though it’s not clear to me how taxation and services along these lines could do anything other than lead to punitive outcomes. (Perhaps people versed in public choice theory should apply it to what happens when you put public choice theories into practice.)
This is yet another way in which the growth of market society is wedded to the growth of a carceral state. But thinking through this issue can lead you to interesting places. If you think that this offender-funded system is unfair because the poor don’t have the ability to pay for it, you are basically 90% of the way to an argument for progressive taxation. And if you think private parties using coercive power invites abuse, abuses that should be checked by basic mechanisms of democratic accountability, you are also pretty close to an argument for the modern, professionalized, administrative state. Welcome to the team.
Before it was anything else, the neoconservative movement was a theory of the urban crisis. As a reaction to the urban riots of the 1960s, it put an ideological and social-scientific veneer on a doctrine that called for overwhelming force against minor infractions — a doctrine that is still with us today, as people are killed for walking down the street in Ferguson and allegedly selling single cigarettes in New York. But neoconservatives also sought, rather successfully, to position liberalism itself as the cause of the urban crisis, solvable only through the reassertion of order through the market and the police.
Edward Banfield was one of the first neoconservative thinkers who started writing in the 1960s and ’70s and was a prominent figure in the movement, though he isn’t remembered as well as his close friends Milton Friedman or Leo Strauss, or his star student James Q. Wilson. Banfield contributed to the beginning of neoconservative urban crisis thinking, the Summer 1969 “Focus on New York” issue of The Public Interest, which began to formalize neoconservatives’ framing of the urban crisis as the result of not just the Great Society in particular but the liberal project as a whole.
In his major book The Unheavenly City (pdf here), Banfield set the tone for much of what would come in the movement. Commentary described the book as “a political scientist’s version of Milton Friedman’s Capitalism and Freedom” at the time. It sold 100,000 copies, and gathered both extensive news coverage and academic interest.
The Unheavenly City’s most infamous chapter is “Rioting Mainly for Fun and Profit.” Fresh off televised riots in Watts, Detroit, and Newark, Banfield argued that it was “naive to think that efforts to end racial injustice and to eliminate poverty, slums, and unemployment will have an appreciable effect upon the amount of rioting that will be done in the next decade or two.” Absolute living standards had been rising rapidly. For Banfield, this was entirely the result of market and social forces rather than the state, and the poor, with their short time-horizons and desire for immediate gratification, would largely be left behind and always be prone to rioting. Today’s classic, if often implicit, repudiations of poor people’s humanity were clearly expressed here.
Rather than political protests or rebellions, Banfield argued that riots were largely opportunistic displays of violence and theft. He broke down four types of riots: (1) rampages, where young men are simply looking for trouble and act out violently; (2) pillaging, where theft is the main focus, and the riot serves as a solution for a type of collective action problem for thieves; (3) righteous indignation, where people act against an insult against their community; and (4) demonstrations, which are neither spontaneous nor violent but instead designed for a specific political purpose.
Banfield argued that the poor mainly engaged in the first two types of riots. Righteous indignation riots were a feature of the working class, because the “lower-class individual is too alienated to be capable of much indignation.” Demonstrations were largely the focus of the middle and upper classes, as they ran organizations and were able to make coherent claims on the state.
At this point Banfield’s text reads like a list of cranky, armchair reactionary observations about riots. It received considerable blowback at the time. What was innovative, for a neoconservative agenda, was where he put the blame. Young men will be young men, the text seems to suggest. The problem is what enables them to riot.
The initial perpetrators included the media, whose neutral (or even sensationalistic) coverage “recruited rampagers, pillagers, and others to the scene.” They also made the rioting more dangerous by expanding the knowledge base of the rioters. The larger academic community was also at fault since, to Banfield’s ear, “explaining the riots tended to justify them.” Upper-class demonstrators were also responsible for raising expectations of what the poor could demand from the state and from society writ large.
But according to Banfield, the core problem was modern liberalism, and in an interesting way. The big issue was the “professionalism” and bureaucratization of city services. The rioters had nothing to fear from the police, who were blocked from exercising their own judgement on the ground by an administrative layer of police administrators. In the logic that would form the basis of Broken Windows policing, the poor learning “through experience that an infraction can be done leads, by an illogic characteristic of childish thought, to the conclusion that it may be done.” And potential rioters were learning this because “the patrolman’s discretion in the use of force declined rapidly” with the growth of the modern liberal state.
Returning, therefore, to a “pre-professional” model of policing is one of the stated goals of Broken Windows. As James Q. Wilson explained in the 1982 Atlantic Monthly article that popularized the topic, “the police in this earlier period assisted in that reassertion of authority by acting, sometimes violently, on behalf of the community.”
Before the modern liberal state of accountability and due process, the police force wasn’t judged by “its compliance with appropriate procedures” but instead by its success in maintaining order. Since the 1960s, “the shift of police from order maintenance to law enforcement has brought them increasingly under the influence of legal restrictions… The order maintenance functions of the police are now governed by rules developed to control police relations with suspected criminals,” writes Wilson. According to this theory, order is preserved by the police out there, acting in the moment against minor infractions with a strong display of force, not by liberal notions of accountability and fairness.
This neoconservative vision that started in the 1960s and continues into today doesn’t just inform local arguments about policing, but rather the entire policy debate. So much of the debate over the (neo)conservative movement emphasizes suburban warriors, or evangelicals, or the Sun Belt, or the South. But as Alice O’Connor demonstrates in her paper “The Privatized City: The Manhattan Institute, the Urban Crisis, and the Conservative Counterrevolution in New York,” there was a distinct urban character to this thinking as well. Rather than a crisis of race relations, police violence, poverty, or anything else, rioting and the broader urban crisis were framed by the neoconservative movement as a crisis of values and culture precipitated by liberalism.
The broader urban crisis, in this story, hinges not on structural issues but on personal morality and behavior that can be restored by the extension of the market. Crime and urban “disorder” fit right next to social engineering and failing state institutions as a corrupt legacy of the liberal project and its bureaucratic, administrative governing state. Only the conservative agenda, as O’Connor puts it, of “zero-tolerance law enforcement, school ‘choice,’ hard-nosed implementation of welfare reform, and the large-scale privatization of municipal and social services” is capable of dismantling it. Only through the market, individual responsibility, and freedom from government “interference” can order result from the restoration of “political and cultural authority to a resolutely anti-liberal elite.” This legacy harnesses police excess to the triumph of the market. And as we see, it will be hard to dislodge one while the other reigns supreme.
I’m excited to tell you about The Score, a new monthly economics column page in The Nation magazine, that I’ll be writing with Bryce Covert. Each month will have a lead essay based on a chart, along with sidebars of information related to the economy as a whole. Check it out in print if you can, because the formatting of the page itself is very sharp.
It will also be online, and our first column on wealth inequality went up this week. Given so much interest in wealth inequality (it is the basis of the two best works from the left in 2014), what would a wealth equality agenda look like?
The Nation also recently launched The Curve, a great blog on the intersection of feminism and economics, spearheaded by Kathy Geier. It’s great to see The Nation moving in this direction, and I hope you check it all out!
Cato Unbound has a symposium on the “pragmatic libertarian case” for a Basic Income Guarantee (BIG), as argued by Matt Zwolinski. What makes it pragmatic? Because it would be a better alternative to the welfare state we now have. It would be a smaller, easier, cheaper (or at least no more expensive) version of what we already do, but have much better results.
Fair enough. But for the pragmatic case to work, it has to be founded on an accurate understanding of the current welfare state. And here I think Zwolinski is wrong in his description in three major ways.
He describes a welfare state where there are over a hundred programs, each with its own bureaucracy that overwhelms and suffocates the individual; a bureaucracy so large and wasteful that simply removing it and replacing it with a basic income would save a ton of money; and a BIG we could get simply by shuffling around the already existing welfare state. Each of these assertions is misleading, if not outright wrong.
Obviously, in an essay like this, it is normal to exaggerate various aspects of the reality in order to convince skeptics and make readers think in a new light. But these inaccuracies turn out to invalidate his argument. The case for a BIG will need to be built on a steadier footing.
Too Many Programs?
Zwolinski puts significant weight on the idea that there are, following a Cato report, 126 welfare programs spending nearly $660 billion. That’s a lot of programs! Is that accurate?
Well, no. The programs Zwolinski describes can be broken down into three groups. First you have Medicaid, where the feds pay around $228 billion. Then you have the six big programs that act as “outdoor relief” welfare, providing cash or cash-like compensation. These are the Earned Income Tax Credit, Temporary Assistance for Needy Families, Supplemental Security Income, the Supplemental Nutrition Assistance Program (food stamps), housing vouchers, and the Child Tax Credit. Ballpark figure, that’s around $212 billion.
So only 7 programs are what we properly think of as welfare, or cash payments for the poor. Perhaps we should condense those programs, but there aren’t as many as we originally thought. What about the remaining 119 programs?
These are largely small grants to local institutions of civil society to provide for the common good. Quick examples involve $2.5 billion to facilitate adoption assistance, $500 million to help with homeless shelters, $250 million to help provide food for food shelters (and whose recent cuts were felt by those trying to fight food insecurity), or $10 million for low income taxpayer clinics.
These grants go largely to nonprofits that carry out a public purpose. State funding and delegation of public purpose have always characterized this “third sector” of civil institutions in the United States. Our rich civil society has always been built alongside the state. Perhaps these are good programs or perhaps they are bad, but the sheer number of them has nothing to do with the state degrading the individual through deadening bureaucracy. If you are just going after the number of programs, you are as likely to bulldoze the nonprofit infrastructure that undergirds civil society as some sort of imagined totalitarian bureaucracy.
Inefficient, Out-of-Control Bureaucracy?
But even if there aren’t that many programs, certainly there are efficiencies to reducing the seven programs that do exist. Zwolinski writes that “[e]liminating a large chunk of the federal bureaucracy would obviously…reduce the size and scope of government” and that “the relatively low cost of a BIG comes from the reduction of bureaucracy.”
So are these programs characterized by out-of-control spending? No. Here are their administrative costs as calculated by Robert Greenstein and CBPP staff.
The major programs have administrative costs ranging between 1 percent (EITC) and 8.7 percent (housing vouchers), each proportionate to how much monitoring of recipients there is. Weighted, the average administrative cost is about 5 percent. To put this in perspective, compare it with private charity. According to estimates by GiveWell, its most favored charities spend 11 percent on administrative costs, significantly more than is spent on these programs.
More to the point, there isn’t a lot of fat here. If all the administrative costs were reduced to 1 percent, you’d save around $25 billion. That’s not going to add enough cash to create a floor under poverty, much less a BIG, by any means.
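To make the back-of-envelope logic concrete, here is a rough sketch of the calculation. The outlay and admin-share figures below are my own illustrative stand-ins, not the CBPP’s actual numbers; only the 1 percent (EITC) and 8.7 percent (housing vouchers) endpoints come from the text above.

```python
# Illustrative version of the administrative-cost math. All outlay
# figures are rough stand-ins for the sake of the arithmetic.
programs = {
    # name: (annual outlays, $ billions; admin cost as share of outlays)
    "EITC": (60, 0.010),
    "SNAP": (75, 0.050),
    "SSI": (50, 0.080),
    "TANF": (17, 0.070),
    "Housing vouchers": (18, 0.087),
}

total = sum(outlays for outlays, _ in programs.values())
admin_now = sum(outlays * share for outlays, share in programs.values())
weighted_share = admin_now / total  # works out to roughly 5 percent

# Hypothetical: run every program at the EITC's 1 percent admin cost.
savings = admin_now - total * 0.01
print(f"weighted admin share: {weighted_share:.1%}")
print(f"savings at a 1% floor: ${savings:.1f} billion")
```

The point survives any reasonable choice of inputs: because the weighted admin share starts near 5 percent, squeezing it to 1 percent frees up only single-digit billions per few hundred billion of outlays, nowhere near BIG money.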
Pays for Itself?
So there are relatively few programs and they are run at a decent administrative cost. In order to get a BIG, you’ll need some serious cash on the table. So how does Zwolinski argue that “a BIG could be considerably cheaper than the current welfare state, [or at least it] would not cost more than what we currently spend”?
Here we hit a wall with what we mean by the welfare state. Zwolinski quotes two example plans. The first is from Charles Murray. However, in addition to the seven welfare programs mentioned above, Murray also collapses Social Security, Medicare, unemployment insurance, and social insurance more broadly into his basic income. If I recall correctly, when he wrote the book in 2006 his basic income actually did cost more than existing spending, but he argued it was justified because Medicare spending was projected to skyrocket a decade out, much faster than the basic income.
His other example is a plan by Ed Dolan. Dolan doesn’t touch health care spending, and for our purposes doesn’t really touch Social Security. How does he get to his basic income? By wiping out tax expenditures without lowering tax rates. He zeros out tax expenditures like the mortgage interest deduction, charitable giving, and the personal exemption, and turns the increased revenue into a basic income.
We have three distinct things here. We have the seven programs above that are traditionally understood as welfare programs of outdoor relief, or cash assistance to the poor. We have social insurance, programs designed to combat the Four Horsemen of “accident, illness, old age, loss of a job” through society-wide insurance. And we have tax expenditures, the system that creates an individualized welfare state through the tax code.
Zwolinski is able to make it seem like we can get a BIG conflict-free by blurring these three things together. But social insurance isn’t outdoor relief. People getting Social Security don’t think that they are on welfare or a public form of charity. Voters definitely don’t like the idea of scrapping Medicare and replacing it with (a lot less) cash, understanding them as two different things. And social insurance, like all insurance, is able to get a lot of bang for the buck by having everyone contribute but only draw out when necessary, for example, when they are too old to work. Public social insurance, through its massive scale, has an efficiency that beats out private options. If Zwolinski wants to go this route, he needs to make the full case against the innovation of social insurance itself.
Removing tax expenditures, which tend to go to those at the top of the income distribution, certainly seems like a good way to fund a BIG. However, we’ll be raising taxes if we go this route. Now, of course, the idea that there is no distribution of income independent of the state is common sense, so the word “redistribution” is just a question-begging exercise. However, the top 20 percent of income earners will certainly believe their tax bill is going up and react accordingly.
Zwolinski is trying to make it seem like we can largely accomplish a BIG by shuffling around the things the state does, because the state does them poorly. But the numbers simply won’t add up, or his plan will hit a wall when social insurance is on the chopping block, or when the rich revolt as their taxes go up.
The case for the BIG needs to be made from firmer ground. Perhaps it is because the effects of poverty are like a poison. Or maybe it will provide real freedom for all by ensuring people can pursue their individual goals. Maybe it is because the economy won’t produce jobs in the capital-intensive robot age of the future, and a basic income will help ensure legitimacy for this creatively destructive economy. Heck, maybe it just compensates for the private appropriation of common, natural resources.
But what won’t make the case is the idea that the government already does this, just badly. When push comes to shove, the numbers won’t be there.
It’s been a week of whiplash when it comes to the issue of Too Big To Fail (TBTF). First the GAO released a report saying that it is difficult to find any bailout subsidy for the largest banks, implying that there’s been progress on ending TBTF. Then, late Tuesday, the FDIC and Federal Reserve released a small bombshell saying that the living wills submitted by the 11 largest banks “are not credible and do not facilitate an orderly resolution under the U.S. Bankruptcy Code.” These living wills were designed to make sure that banks could fail without causing chaos in the economy, and this report implies TBTF is still with us.
One of them has to be wrong, right? In order to understand this contradiction it’s important to map out where the actual disagreement is. Doing so will also help explain how the battle over TBTF will play out in the near future.
So look – a large, systemically risky financial firm is collapsing! Oh noes! What has happened and will happen?
There are two levels of defense when it comes to ending this firm. The first is through a bankruptcy court, and the second is through the FDIC taking over the firm, much like what it does to a failing regular bank. The next several paragraphs give some technical details (skip ahead if your eyes are already glazing over).
As you can see in the graphic above, before the failure, regulators will have failed to use “prompt corrective action” to guide the firm back to solvency. These are efforts regulators use to push a troubled firm to fix itself before a collapse. For example, if bank capital falls below a certain point, the bank can’t pay out bonuses or make capital distributions, in order to make it more secure.
Once a failure happens, there are two lines of defense. The default course of action is putting the firm in bankruptcy, similar to what happened with Lehman Brothers. Why might this be a problem for a major financial firm? The Bankruptcy Code is slow and deliberate, when financial firms often need to be resolved fast. It isn’t designed to preserve ongoing firm business, which is a problem when those businesses are essential to the economy as a whole. It can’t prevent runs by favoring short-term creditors. There is no guaranteed funding available to keep operations running and to help with the relaunch. And there are large problems handling the failure of a firm operating in many different countries.
With these concerns in mind, Dodd-Frank sets up a second line of defense. Regulators can direct the FDIC to take over the failed firm and do an emergency resolution (OLA), like they do with commercial banks. In order to activate the OLA, there’s a comically complicated procedure in which the Treasury Secretary, the Federal Reserve Board, and the FDIC all have to turn their metaphorical keys.
OLA, particularly with its new “single point of entry” (SPOE) framework, solves many of the problems mentioned above. OLA comes with a line of emergency funding from Treasury to facilitate resolution if private capital isn’t available, as it likely won’t be in a crisis. OLA would also be able to prioritize speed, as well as protect derivatives and short-term credit, stopping potential runs. SPOE, by focusing its energy at the bank’s holding company level, also helps to deal with coordinating the failure internationally. However, OLA would be executed by administrators instead of judges, and it could put taxpayer money at risk. (More on all of this here.)
So, what is the battle over? How are we making progress yet also making no progress?
All the innovation in the past 18 months in combating TBTF has taken place at the second line of defense. When Sheila Bair, for instance, says there’s been significant progress in ending taxpayer bailouts, or the Bipartisan Policy Center releases a statement saying adopting an SPOE approach has the potential to eliminate TBTF, they are referencing the progress that is taking place at this second line of defense.
But there’s no progress at the first line of defense. The living wills that regulators found insufficient are, by statute, part of the first line of defense. Dodd-Frank says that if the living will “would not facilitate an orderly resolution of the company under title 11, [Bankruptcy]” then the FDIC and the Fed “may jointly impose more stringent capital, leverage, or liquidity requirements, or restrictions on the growth, activities, or operations of the company.” They purposefully didn’t drop the hammer in their announcement, instead telling the banks to go back to the drawing board rather than enforcing stricter requirements. But they can get as aggressive as they want here.
So the FDIC and the Fed are drawing a line in the sand here – the first line of defense needs to work. The regulators call out the banks for their “failure to make, or even to identify, the kinds of changes in firm structure and practices that would be necessary to enhance the prospects for orderly resolution.” So making this line of defense work will not be a trivial endeavor.
If the first line of defense doesn’t work, why don’t we just rely on the second line? Thomas Hoenig, Vice Chairman of the FDIC and an aggressive opponent of TBTF, released a statement accompanying the regulators’ release, specifically saying that they would not find this argument convincing. It’s worth noting how clear he is about this:
“Some parties nurture the view that bankruptcy for the largest firms is impractical because current bankruptcy laws won’t work given the issues just noted. This view contends that rather than require that these most complicated firms make themselves bankruptcy compliant, the government should rely on other means to resolve systemically important firms that fail. This view serves us poorly by delaying changes needed to assert market discipline and reduce systemic risk, and it undermines bankruptcy as a viable option for resolving these firms. These alternative approaches only perpetuate ‘too big to fail.’”
That’s a strong statement that they are going to hold the first line.
Note here that the GAO results could still stand. The market’s lack of a subsidy could reflect the second line of defense. Or it could reflect that even if they both fail, Congress, which is gridlocked, would not pass a bailout. It’s not clear what would happen if a major bank failed, but the market is right not to assume the banks are permanently safe.
It will be interesting to see how this shakes out. Those who think reform didn’t go far enough like the idea of fighting on the first line, because there is significant leeway to push for more systemic changes to Wall Street. To get a sense of the stakes, Sheila Bair told Tom Braithwaite back in 2010 that she would break up an institution that couldn’t produce a credible living will.
This will also animate the Right, but in a different way. From the get-go, their preferred approach to TBTF was just to create a special new bankruptcy code Chapter, removing any type of independent regulatory administrative state like the FDIC from the issue. It’s not clear if they’ll support regulators pushing aggressively to restructure firms so they can go through the bankruptcy code as it is written right now.
The administration appears to be silent for now. It’s also not clear whether it will see this as a second bite at getting higher capital requirements, or whether it is happy enough with the second line of defense as it is. If the latter is true, that would be unfortunate. The banks remain undercapitalized and too complex for bankruptcy, and regulators have a responsibility to make sure each line of defense is capable of stopping a repeat of the panic of 2008.
To emphasize a point I made yesterday, we need to think of ending Too Big To Fail (TBTF) as a continuum rather than as a simple yes-no binary. The process of failing a large financial firm through the Orderly Liquidation Authority (OLA) can go very well, or it can go very poorly. It’s important to understand that the recent GAO report, arguing that the TBTF subsidy has largely diminished, is incapable of telling the difference.
What would make for a successful termination of a failed financial firm under OLA? To start, bankruptcy court would be a serious option as a first response. Assuming that didn’t work, capital in the firm is structured in such a way that facilitates a successful process. There’s sufficient loss-absorbing capital both to take losses and give regulators options in the resolution. There’s also sufficient liquidity, both within the firm due to strong new capital requirements and through accountable lender-of-last-resort lending, that prevents a panic from destroying whatever baseline solvency is in the firm. As a result, less public funding is necessary to achieve the goals.
Living wills actually work, and allow the firm to be resolved in a quick and timely manner. The recapitalization is sufficient to repay any public funding without having to assess the financial industry as a whole. There’s no problems with international coordination, and the ability of the FDIC to act as a receiver for derivatives contracts is standardized and clear in advance, reducing legal uncertainty.
That’s a lot! And it’s a story about what could go right or wrong that is becoming more and more prevalent in the reform community. Let’s chart it out, along with the opposite happening.
Again, from the point of view of the GAO report, these are identical scenarios. Both would impose credit losses on firms. Thus the GAO’s empirical model, scanning and predicting interest rate spreads to imply credit risk, picks up both scenarios the same way. Whether OLA goes smoothly or is a disaster doesn’t matter. But from the point of view of taxpayers, those trying to deal with the uncertainty and panic that would come with such a scenario, and the economy as a whole, the bad scenario is a major disaster. And we are nowhere near the point where success can be taken for granted. Tightening the regulations we have is necessary to making the successful scenario more likely, and the apparent lack of a subsidy should not distract us from this.
 Note the common similarities along these lines in the critical discussion of OLA from across the entire reform spectrum. You can see this story in different forms in Stephen Lubben’s “OLA After Single Point of Entry: Has Anything Changed?” for the Unfinished Mission project, the comment letter from the Systemic Risk Council, Too Big to Fail: The Path to a Solution from the Bipartisan Policy Center, and the “Failing to End Too Big to Fail” report from the Republican Staff of the House Committee on Financial Services.
The GAO just released its long-awaited report on whether Wall Street receives an implicit subsidy for still being seen as Too Big To Fail (TBTF). I’m still working through the report, but the headline conclusion is that “large bank holding companies had lower funding costs than smaller ones during the financial crisis” and that there is “mixed evidence of such advantages in recent years. However, most models suggest that such advantages may have declined or reversed.”
For a variety of reasons, whether this subsidy exists has become a major focal point in the discussion about financial reform. The Obama administration wants the headline that TBTF is over, and the President’s opponents want to argue that Dodd-Frank has institutionalized bailouts. Hopefully this GAO report puts that “permanent bailouts” talking point to rest.
More generally, however, I find that there are three problems with this emphasis on a possible Wall Street subsidy in the financial reform debate:
The first is that it makes it seem like the bailouts were the only problem with the financial sector. Let’s do a thought experiment: imagine that in September 2008, Lehman Brothers went crashing into bankruptcy and…nothing happened. There was no panic in interbank lending or the money market mutual funds. The Federal Reserve didn’t do emergency lending, and nobody suggested that Congress pass TARP. There was nothing but crickets out there in the financial press.
Even if that had happened, we’d still have needed a massive overhaul of the financial system. Think of all the other things that went wrong: Wall Street fueled a massive housing bubble that destroyed household wealth and generated bad debts that have choked the economy for half a decade. Neighborhoods were torn apart by more than 6 million foreclosures while bankers laughed all the way to the bank. A hidden derivatives market radically distorted the price of credit risk and led to the creation of instruments designed to rip off investors. Wall Street failed at its main job — to allocate capital to productive ends in the economy. Instead, it went on a rampage that did serious harm to investors, households, and ultimately our economy.
TBTF is the most egregious example of the out-of-control financial system, and it’s a major problem that needs to be checked. But if emphasized too much, it makes it seem as if the problem is only how much damage a firm can do to the economy when it fails. In fact, the problem is much broader than that, and solving it requires transparency in the derivatives market, consumer protections, accountability in the securities Wall Street makes and sells, a focus on actual business lines, and regulation of shadow banking as a whole, not just last rites for individual firms.
This is important because the second problem is that some will take this report as evidence that reform is just right, or has even gone too far. And scanning the coverage, I see that the commentators who are applauding the GAO’s conclusions are often the same people who have said that, for instance, liquidity rules in Dodd-Frank have gone too far, or that the Volcker Rule should be tossed out. This is even as the GAO points to these provisions as necessary reforms.
We can debate whether a subsidy for failing banks exists or how big it is, but the goal of regulation should not be to fine-tune that number. The subsidy is only a symptom of much larger problems with the financial system, and the point of regulation is to build a system that works.
Finally, the third issue is that emphasizing the subsidy makes us think of ending TBTF as a binary, check-yes-or-no, pass-fail kind of test. Again, there are political reasons for this emphasis, but TBTF isn’t a switch that can be flipped on or off. Addressing the problem is an ongoing process that will be carried out through the Orderly Liquidation Authority (OLA), and that process can be either more or less robust.
It’s good that the financial markets have confidence in the OLA, but the FDIC is still crafting the living wills and the details of how they will be implemented. Major questions and challenges still remain. For instance, a rule has not yet been written to determine how much unsecured debt firms are required to carry. And conservatives are already floating the idea that a successful OLA would be a “bailout” anyway.
The success of an orderly liquidation process will depend on many different factors, but we should think of it not as a binary, but as a continuum — a continuum on which one end has more capital and slimmer business lines to protect taxpayer dollars and keep the risks contained, and the other end has us crossing our fingers and hoping that the aggregate damage isn’t too bad. [UPDATE: See more on this point from me here.]
The GAO report is welcome news. We’ve made progress on the most outrageous problem with the financial sector. But that doesn’t mean the work is done by any means.
Paul Ryan released his anti-poverty plan yesterday, and lots of people have written about it. Bob Greenstein has a great overview of the block-granting portion of the plan. I’m still reading and thinking about it, but in the interest of answering the call for constructive criticism, a few points jump out that I haven’t seen
Live at The New Republic, I have a piece describing Year Four of Dodd-Frank, which celebrates its birthday this week. The news coverage of the past year has had a “stuck, spinning its wheels” argument to it. I argue that this past year saw some major and important advancements, directions on where to go next, and also made what will be the biggest challenges going forward very clear. I hope you check it out.
“Yes, but the whole point of a doomsday machine is lost if you keep it a secret! Why didn’t you tell the world, eh?” – Dr. Strangelove
So the DC Circuit Court ruled in Halbig v. Burwell that health care subsidies can only go to states that set up their own health care exchanges rather than use the federal ones. That means individuals from the 34 states who get subsidized health care from the federal exchange would no longer be able to get subsidies. (Another court ruled against this logic today.) I don’t normally do health care stuff, but I’ve read a lot about this case and something strikes me as very odd.
As I understand it, those on the right who are pushing the Halbig case argue that there’s a doomsday machine built into Obamacare. (Adam Serwer at MSNBC also caught this doomsday machine analogy.) If states don’t set up their health care exchanges then they don’t receive subsidies for health care from the federal government. According to this theory, the liberals who designed health care reform did this knowing that if subsidies were pulled, the system would collapse for states that didn’t set up their exchanges.
It’s important to note that those on the right are not arguing that this is a typo in the bill, because that wouldn’t necessarily be sufficient to overturn the subsidies. They are arguing that Congress intentionally put this language in there to compel, bribe, incentivize, and otherwise threaten states that didn’t set up their own exchanges. In the rightwing argument, liberals were saying “we are making the citizens of your state purchase health care, and if you don’t set up an exchange they won’t get the subsidies necessary to make the system work, so you’d better set up an exchange.”
The right’s argument hinges on the idea that since there’s no evidence that this isn’t the intent, it must be the intent. As the two authors of the legal challenge put it, Obamacare “supporters’ approval of this text reveals that their intent was indeed to enact a bill that restricts tax credits to state-run Exchanges. At no point have defenders of the rule identified anything in the legislative history that contradicts” their reading.
Here’s the thing, though: like Strangelove notes, a doomsday machine only works if you tell others about it. So, why weren’t the people in the vast network associated with Obamacare telling everyone about this threatening doomsday device after the bill passed?
If this was actually the intent, you’d expect that during the period where states were debating whether to set up exchanges, this would have been a major threat raised by somebody. Anyone, from President Obama to congressional leaders to health care experts and lawyers to activist groups on the ground in red states fighting for implementation, would have been saying, “if your state doesn’t set up a health care exchange, your citizens are screwed. At the very least, you’ll be leaving money on the table.” (“Leaving money on the table” is always a good point to bring up, and if this doomsday machine really were the intent of the law, it would be true.)
I know of no evidence of this being the case. Does anybody? Numerous people are arguing that the legislative intent was clearly to provide subsidies to federal exchange users. No wonder the dissent argued that the Halbig ruling was “a fiction, a post hoc narrative concocted to provide a colorable explanation for the otherwise risible notion that Congress would have wanted insurance markets to collapse in States that elected not to create their own Exchanges.”
That doesn’t change what the DC Circuit did, of course, but it should make a random person stop and wonder how much of a cynical ploy this whole thing is.
How have search models influenced the current economic debates? John Quiggin had an interesting post up at Crooked Timber about how poorly the branch of economics that falls under search theory has done in the age of the internet. (Noah Smith has follow-up.)
Search and matching models are fascinating to me because they are central both to the debate over mass unemployment during the Great Recession and to how economists understand the minimum wage. And here you can see search models being deployed for worse and for better.
The Great Recession
In his 2010 Annual Report, Federal Reserve Bank of Minneapolis President Narayana Kocherlakota gave a presentation using the popular Diamond-Mortensen-Pissarides (DMP) search model to explain what he thought was wrong with the labor markets. Given that Kocherlakota was dissenting against QE2 at the time, a lot of eyes were on his arguments. Many focused on his infamous argument, later reversed, that low rates cause disinflation, but I found this equally fascinating at the time.
The economy suffered from low job openings. But why were employers not creating job openings and hiring? Kocherlakota summarized the DMP model in this graphic:
Job openings and hires are a function of unemployment, productivity, and what is going on with the unemployed. He concluded that after-tax productivity was falling because of an increase in government debt, regulations, and the proposed repeal of some of the Bush-era tax cuts. Also, expansions of social insurance, including unemployment insurance and presumably the Medicaid expansion in Obamacare, were increasing the “utility” of not working. This, he believed, was the major reason why unemployment was so high and job openings so low. Using a back-of-the-envelope estimate with made-up numbers, he proposed that the natural rate of unemployment could be around 8.7 percent – very close to the then-current 9.0 percent unemployment.
Think about this model and the recession long enough, and two things should jump out. The first is that this model, going back to Robert Shimer’s (2005) seminal work on the topic, is terrible at explaining movements in unemployment. The volatility in vacancies and productivity aren’t anywhere near the magnitude necessary to cause unemployment movements that we see in recessions. Productivity would need to drop significantly to create the changes in unemployment we see in recessions, and it doesn’t move to anywhere near that extent.
The second is that, as Robert Hall and many others have pointed out, productivity actually increased during the recent recession. It moved in the wrong direction to drive unemployment the way the model requires during the Great Recession. The unemployment-to-vacancy ratio also increased – 2009 was a fantastic time to create job openings according to this model, yet openings collapsed. This is extra problematic given that economists have tried to respond to the initial problem by amplifying productivity movements in their models. But out here in the actual world, productivity is simply moving in the wrong direction to make any sense of the data.
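To get a feel for the magnitudes, here is a toy steady-state flow calculation in the spirit of these models. The function and all the monthly rates are my own illustrative inputs, not anything from Shimer’s paper or Kocherlakota’s presentation.

```python
# Toy flow accounting in the search/matching spirit. In steady state,
# flows into unemployment (s * employment) equal flows out
# (f * unemployment), giving u* = s / (s + f).
def steady_state_u(s, f):
    return s / (s + f)

s = 0.03           # share of jobs that end each month (made up)
f_normal = 0.45    # monthly job-finding rate in normal times (made up)
f_slump = 0.25     # job-finding rate roughly cut in half

u_normal = steady_state_u(s, f_normal)  # about 6.3% unemployment
u_slump = steady_state_u(s, f_slump)    # about 10.7% unemployment

print(f"{u_normal:.3f} -> {u_slump:.3f}")
```

The takeaway: to push unemployment from around 6 percent to above 10 percent, the job-finding rate has to fall by nearly half, and in the standard model generating a swing that large requires productivity movements far bigger than anything observed.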
Note that there’s no place for things like aggregate demand or the zero lower bound to plug into the equation. The only things that can matter are things like taxes, government uncertainty, and social insurance, and they all work in the negative direction. That search models have become so influential as background knowledge about unemployment helps explain why economists were so eager to find “structural” explanations for why unemployment was so high.
There’s some debate on this, but it looks like economists are softening on their opposition to raising the minimum wage, particularly if the question is phrased as whether or not a slightly higher minimum wage would pass a cost-benefit test. Search theory might be a reason why. If you are schooled in thinking of the labor markets in a search model, the idea that the minimum wage might not have an adverse employment effect makes more sense. A higher minimum wage means that low-wage workers will search harder for low-end jobs. They’ll be more likely to accept those jobs, and less likely to turn them over as well. These all would help raise the equilibrium employment level.
Even further, if you think that each job has a bit of a search friction surrounding it, then the idea that the employer has a little bit of monopoly power over the job makes sense. Employers might not raise wages to a market-clearing rate because that, in turn, would mean having to raise the wages for all their workers. A minimum wage pushes against that. Understanding the labor markets through this lens helps explain why any disemployment effects are minimal compared to the economics 101 story.
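A toy numeric version of that monopsony story can make the mechanism visible. Every parameter here is invented for illustration, not drawn from any study:

```python
# Toy monopsony model with invented numbers. Labor supply is
# L(w) = 10 * w, so attracting more workers means paying a higher
# wage to everyone. Each worker brings in $20/hour of revenue.
p = 20.0

def profit(w):
    # (revenue per worker - wage) * number of workers willing to work
    return (p - w) * 10 * w

# Grid search over whole-dollar wages: the profit-maximizing
# monopsonist pays well below what workers produce.
w_monopsony = max(range(1, 20), key=profit)
L_monopsony = 10 * w_monopsony

# A $13 minimum wage is still below the $20 each worker produces, so
# the employer keeps hiring everyone who shows up at $13.
w_min = 13
L_min_wage = 10 * w_min

print(w_monopsony, L_monopsony)   # monopsony wage and employment
print(w_min, L_min_wage)          # minimum-wage employment
```

In this setup the monopsonist chooses a $10 wage and employs 100 workers; a $13 minimum wage raises employment to 130, the opposite of the economics 101 prediction, precisely because the wage was being held below the market-clearing level to begin with.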
As I read it, much of this theory took hold in labor economics to help explain the data people were seeing. Why were there so many vacancies in fast food? Why didn’t minimum wage hikes obviously cause unemployment in the data? This should tell us something: theory, when built up out of observations and data, can be useful. But the same theory, moved over to the business cycle, where it ignores conflicting data and is propelled by partisan and ideological forces, can be an utter disaster.
With President Obama’s student loan announcement in the news this week, an argument over whether or not taxpayers make a profit from student loans is no doubt close behind. We do make a profit using the government’s accounting tools, but there’s an argument that we should instead use “fair value accounting,” or the rate at which the private market adjusts for risk (here’s Jared Bernstein with a recent piece). By that standard, we see a much smaller profit.
Most people reference the CBO on this, though its numbers are entirely opaque. For instance, it says that it “relied mainly on data about the interest rates charged to borrowers in the private student loan market,” but there are gigantic adverse selection problems right out of the gate. The private student loan market is where the worst credit risks go, so of course they have higher rates. Is the CBO able to control for this? Nobody knows.
But beyond that, there’s a simple finance logic reason for why I don’t buy the argument for fair value accounting. I don’t believe that taxpayers face prepayment risk, and to whatever extent they do, it’s a matter of politics, not economics, that determines this.
The concept of prepayment risk might not make sense for people without some financial background, so let’s walk through it. If you lend money to a person you know, whether it’s a questionable relative or a partner who asks to hold some money until they get their check next week, you just want to get paid back in full. And, here’s the kicker, you are really happy if you get paid back sooner than you had expected. You want the money back.
Is that how private capital markets work? No. Let’s say you manage a large portfolio of private student loans. And let’s say you get a note at the office that they are being paid back more quickly than you had expected. Are you happy? No. You are not, and you might even get fired.
Why wouldn’t you be happy about getting paid back earlier?
1. You have to physically do something with the money you get paid back to get it earning more money again. No matter what you end up doing — reinvesting it in student loans, putting it into a different set of assets, or just stuffing it in the equivalent of a mattress — it takes time, energy, and resources, all of which cost money.
2. Often you want to set a certain time frame for repayment. Say you really want to have a cash flow at a certain date far off into the future because you are funding a pension or insurance liability. Getting paid back earlier doesn’t meet that goal, and it confuses your expectations for cash flows.
3. Crucially, you are likely to get paid back exactly when you don’t want the money. Say you locked in private student loans at a high interest rate, but then interest rates decline dramatically. At this point students will pay back their loans more quickly, which leaves you with more cash on hand at a time when interest rates are low.
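The third point, reinvestment risk, is the big one, and a toy calculation makes it concrete. The numbers below are hypothetical: a 10-year loan locked in at 8 percent that either runs to term or is prepaid after year 3, just as market rates have fallen to 3 percent.

```python
# Hypothetical illustration of reinvestment risk (point 3 above):
# a lender locks in a 10-year loan at 8%, rates fall to 3%, and the
# borrower prepays after year 3, forcing reinvestment at the lower rate.

def final_value(principal, rate, years):
    """Grow principal at a fixed annual rate for a number of years."""
    return principal * (1 + rate) ** years

# Scenario 1: the loan runs its full 10 years at 8%
held_to_term = final_value(100_000, 0.08, 10)

# Scenario 2: prepaid after 3 years at 8%, then the proceeds sit in
# the new low-rate environment (3%) for the remaining 7 years
prepaid_then_reinvested = final_value(final_value(100_000, 0.08, 3), 0.03, 7)

print(f"Held to term:        {held_to_term:,.0f}")
print(f"Prepaid, reinvested: {prepaid_then_reinvested:,.0f}")
```

With these made-up numbers, prepayment costs the lender roughly $61,000 of terminal value, even though every dollar of principal came back early and in full. That gap is why private lenders price prepayment risk into the rates they charge.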
This isn’t some partisan ideological point; it’s just basic finance. You can see it described in a CFA study guide under call and prepayment risk and reinvestment risk. (People more baller than I who actually did the CFA can nitpick specifics, but the general layout is correct.)
So private investors in student loans are genuinely worried that they’ll get paid back too quickly, and as a result charge a higher interest rate which leads to a larger discount rate. Because, and follow this, their goal isn’t to “get paid back.” Their goal is to achieve a consistent rate of return given a risk profile, with predictable cash flows given other institutional constraints. Getting paid back quickly is a risk to all this.
So, here’s my question: do taxpayers face this prepayment risk? If you saw a headline that said “student debtors are paying off their public student loans faster than expected,” would you be happy as a citizen, or furious?
I’d say happy. As a citizen, I’m not interested in earning a certain amount of profit consistently and with certainty over time, especially with the money paid back by student debtors, though I am as a private investor. If citizens were paid back more quickly, we could return the money to taxpayers, or use it for different purposes, or whatever. I certainly wouldn’t say “how are we going to continue to make the profit we were making?” as a citizen, though that’s exactly what I’d say if I were an investor in a private student loan portfolio. We could debate this — perhaps you think the goal of the government here is to extract maximum financial profits no matter what. But it would be a political debate, divorced from the logic of financial market valuation.
This is not a trivial concern. Anyone with experience modeling a mortgage-backed security is very conscious of how greater prepayments impose massive risks and uncertainty. Normally the private sector goes to great lengths to impose penalties and limitations on paying back loans early, though the government doesn’t do this for student loans. And since citizens do not face prepayment risk the way the private sector does, the discount rate for public funds, by definition, must be less than for private funds when it comes to student loans. Hence private sector discount rates aren’t a valid benchmark.
Note that the Financial Economist Roundtable (cited approvingly by Jason Delisle here) brings up prepayment risk specifically as something that fair value accounting is meant to capture. But the prepayment risk they specify — which is “costly to lenders because prepayments are most likely to occur when market interest rates have decreased and loan values have appreciated,” reflecting the third issue noted above — exists primarily because private capital has to reinvest money in a worse environment. Do taxpayers have to reinvest money they get from student loans? No, unlike private direct lenders, they don’t. The financial logic has broken down. (And don’t even get me started on using the private sector’s liquidity risk as a measure of the state’s.)
I have yet to see an argument addressing this head-on, much less a convincing one, but perhaps this post will change that.
Several people are comparing Chris Giles’s piece in the Financial Times, which criticizes the data Thomas Piketty used in his book Capital in the 21st Century, to the Reinhart-Rogoff (R-R) incident from last year. That was when Carmen Reinhart and Kenneth Rogoff’s paper “Growth in a Time of Debt,” which found that growth went negative above a 90 percent debt-to-GDP threshold, was criticized by Thomas Herndon, Michael Ash, and Robert Pollin (HAP). HAP found data and methodology errors in R-R, and now Giles finds data and methodology errors in Piketty. (I wrote about Giles’s article here.)
So the critiques must be similar, right? No. They are quite different, and in fact there are at least four ways in which they are practically the opposite of each other: in their transparency; in the object of their criticism; in the severity of their critiques; and in their democratic implications.
Transparency and Accessibility of the Data
Piketty’s data is public. That is why we are debating it at all: open data is how Giles was able to mount his critique. R-R kept their data hidden for years while their results shaped the international debate over austerity.
R-R based their argument on post-war debt and growth, but their site had no spreadsheet saying “here are the countries and growth rates we used for the post-war period.” Instead they offered links to various other sites for growth data, without clarifying which ones they used. If you tried to replicate the data yourself, as many did, you’d find 110 high-debt data points, but R-R only used 96. Again, it wasn’t clear which were being used.
It’s a minor point, but one worth emphasizing. I can think of at least three sets of economists who stated publicly between 2010 and 2012 that R-R had not released their data. This was before Carmen Reinhart sent the raw data to an unassuming graduate student named Thomas Herndon, which formed the basis for HAP.
Attacking the Data Versus Attacking the Argument
Giles is questioning Piketty’s underlying, original data. HAP took the data R-R provided as given, even though it likely would have raised similar questions, and instead criticized what R-R did with that data.
A lot of people are pointing out that creating brand new data sets, especially using data that spans countries and centuries, will necessarily involve a lot of difficult calls around merging and splicing various sources. To put that a different way, it would be odd if someone went back into the raw, underlying data and didn’t find some difficult calls that could be questioned.
Critics took R-R’s underlying data for granted in the debate. Perhaps they shouldn’t have. As Bivens and Irons of EPI pointed out in their 2010 discussion, R-R use gross debt, which seems inappropriate compared to debt held by the public if the story they’re telling is about debt and economic outcomes. Yeva Nersisyan and Randy Wray argued that R-R also did a poor job of noting whether a debt was denominated in its own currency.
Those are good points, but they’re not what HAP focused on. They looked at the methodology and construction of results and took the R-R data as given instead of nitpicking the underlying data calls — calls which are always fraught with ambiguity. Critics generally didn’t try to undermine the data R-R presented in This Time is Different; they took on a supplemental argument tacked onto that data, and the problems they found were less subjective and much more devastating.
The Actual Problems Identified Were Far Different in Scale
Giles focused his analysis on the most speculative data chapter in the book. According to Piketty, inequality in the ownership of wealth is one of the two channels that can lead to greater income inequality, but it’s the less important and far more speculative one, developed at the end of the book and added with many, many caveats by the author himself. This chapter is also at the farthest edge of the research frontier, as evidenced by the fact that new research on this topic is still breaking. Even if the whole chapter collapses, there are still very open questions about the growth of capital stock, how much of the economic pie capital will take home, the rise in labor inequality, and many other topics that comprise a much bigger part of the book.
In contrast, within 72 hours of HAP, support for the idea that there was a debt “threshold” collapsed. John Taylor said that the G20, a far cry from a group of liberal bloggers, omitted specific deficit or debt-to-GDP targets as a result of HAP’s critique. What happened?
First, the actual methodological problem was more important in R-R. It became clear that the choices made in weighting and averaging radically overstated the effects of one year from New Zealand in which R-R recorded a negative 7.6 percent change in GDP. But more generally, HAP showed that the final results were very sensitive to minor data adjustments.
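The weighting problem can be made concrete with a toy version of the New Zealand issue. The numbers below are illustrative, not R-R’s actual series: one country contributes many high-debt years of steady growth, the other a single catastrophic year.

```python
# Illustrative numbers showing how "average the country averages" can
# swing on one country with few observations, versus weighting every
# country-year equally. (Made-up data, not R-R's actual series.)

growth = {
    "UK":          [2.4] * 19,   # 19 high-debt years of steady growth
    "New Zealand": [-7.6],       # a single high-debt year
}

# Method 1: each country's mean gets equal weight (R-R-style)
country_means = [sum(v) / len(v) for v in growth.values()]
by_country = sum(country_means) / len(country_means)

# Method 2: each country-year gets equal weight
all_years = [g for v in growth.values() for g in v]
by_year = sum(all_years) / len(all_years)

print(f"Equal-weight countries: {by_country:+.2f}%")  # -2.60%
print(f"Equal-weight years:     {by_year:+.2f}%")     # +1.90%
```

One lone year of data carries as much weight as nineteen under the first method, flipping the sign of the average. That sensitivity, not any single data-entry error, was the heart of HAP’s methodological critique.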
This gets confused in the subsequent narrative, but R-R largely accepted the numbers of HAP. In fact, they said that the smaller numbers HAP found were in line with their new research, which found a smaller decline and correlation between debt and GDP, implicitly abandoning their 2010 paper that had become the focus of world policy. But they still argued that a negative relationship was present.
Once the data was made available by HAP, it took only 24 hours before other researchers found major problems that R-R’s response did not address. Specifically, the economist Arin Dube showed that “simple exercises suggest that the raw correlation between debt-to-GDP ratio and GDP growth probably reflects a fair amount of reverse causality. We can’t simply use correlations like those used by R-R (or ones presented here) to identify causal estimates.” In other words, low growth led to a higher debt-to-GDP ratio, not the other way around.
There was no convincing answer forthcoming from R-R about this issue. A month later, the economist Miles Kimball and Yichuan Wang found that they “could not find even a shred of evidence in the Reinhart and Rogoff data for a negative effect of government debt on growth.”
That no other researchers have used Giles’s findings to immediately disprove, or at least cast doubt on, Piketty’s central arguments is telling. This could still happen, so it’s important to be critical. But the general work in Capital, leaving aside the question of inequality of the ownership of capital in Chapter 10, has evolved over decades and has had its tires kicked many, many times. The debt threshold of R-R never passed peer review, and it is unlikely it could have given the obvious reverse causality issues.
The Difference in Democratic Accountability
It seems like everyone who brings up Capital in the 21st Century has to immediately remark about how impossible it would be to do anything about Piketty’s findings given our current reality. Did you hear that Piketty’s solutions in Capital are impractical? They’re impractical, you know. A global wealth tax? Impractical!
But some people, when they act, create their own reality. Even though it had never been replicated, R-R’s paper was immediately moved to the center of elite discussion. It was one of the most cited pieces of evidence during the Great Recession. It became a justification for austerity, and it was one of the central economic arguments for the Ryan Plan, the budget that Mitt Romney would have tried to put into place had he won the 2012 election.
Some people want to argue that the R-R Excel error was no big deal. And in an econometric sense, they might have a point. But in a political sense, it mattered. It showed that hundreds of millions of people’s lives were being guided by a piece of research with an error that literally anyone would have found if R-R had let another set of human eyeballs look at it.
This is why democratic accountability is so important with economics. It’s good to see Piketty checked here, even if the concerns are overplayed; as Piketty says, the distribution of wealth “is too important an issue to be left to economists, sociologists, historians, and philosophers. It is of interest to everyone, and that is a good thing.” Indeed it is, just as austerity and government budgets are.

First, “Government Debt and Economic Growth.” Bivens and Irons of EPI, July 2010, footnote 5: “The actual data used in the [R-R] study have not been made available to the public by the authors.”
Second, “Not Following Professional Ethics Matters Also.” Dean Baker, July 2010: “Mr. Rogoff and Ms. Reinhart have declined to adhere to standard ethics within the economics profession and have refused to share the data on which they base their conclusion with other researchers.”
Third, “Is High Public Debt Always Harmful to Economic Growth?” Minea and Parent, February 2012, footnote 4: “Our efforts for obtaining the database used by RR were…unfortunately unsuccessful.”
Chris Giles at the FT just wrote a critique of the data in Thomas Piketty’s Capital. Many people will rightfully debate the empirics of what Giles has found, which he believes shows that inequality of the ownership of wealth – how much of wealth is held by the top 1% – isn’t increasing, but it’s important to understand how it fits into the larger argument.
Their Problem With the Theory
Giles writes: “The central theme of Prof Piketty’s work is that wealth inequalities are heading back up to levels last seen before the first world war.”
This is incorrect, or at least badly stated. Piketty’s central theme is not that inequality in the ownership of wealth is going to skyrocket. If you look at the text, he’s somewhat agnostic about this; it’s suggestive, not determinative. The central theme is that the 1% already owns a lot of the capital stock, and the capital stock is going to get gigantic relative to the rest of the economy.
Inequality expert Branko Milanovic also tweeted this point, but let’s go through it and break down the theory Piketty puts forward. I used three dominos in my Boston Review writeup, and I’m adding a fourth here to make Giles’s critique explicit. Let’s describe Piketty’s argument as four dominos falling into each other:
1. The return on capital is greater than the growth rate. The infamous “r > g” inequality. Meanwhile growth begins to slow, perhaps because of demographics.
2. The amount of capital, or private wealth, relative to the size of the economy will begin to grow rapidly as growth slows. This is the “past tends to devour the future” line. The size and role of wealth of the past will take on a greater relevance to the everyday economy.
3. If the rate of return doesn’t fall, or doesn’t fall that quickly, the capital share of income will increase. More of our economic pie will go to people who own capital.
4. The ownership of capital is very concentrated, historically and across a wide variety of countries. It is unlikely to fall quickly, much less spontaneously democratize itself, in response to these trends. So the income and power of capital owners will skyrocket.
So right away, rising inequality in the ownership of capital is not the necessary, major driver of the worries of the book. It isn’t that the 1% will own a larger share of capital going forward. It’s that the size and importance of capital itself is going to grow dramatically. If the 1% own a constant share of the capital stock, they have more income and power as the capital stock grows relative to the economy and takes home a larger slice. However, obviously, if inequality in wealth ownership goes up, it will make the situation worse. (It’s noteworthy that the numbers Giles is analyzing aren’t introduced until Chapter 10, after Piketty has gone through the growth of the capital stock and the returns to capital at length in previous chapters.)
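The mechanics of the first three dominos can be sketched with Piketty’s two accounting relations: the long-run capital/income ratio tends toward β = s/g, and capital’s share of income is α = r × β. The savings rate and rate of return below are illustrative round numbers, not Piketty’s estimates.

```python
# Sketch of dominos 1-3 using Piketty's two accounting relations:
# beta = s / g (long-run capital/income ratio) and alpha = r * beta
# (capital's share of income). Savings rate and r are assumed values.

s = 0.10  # net savings rate (illustrative)
r = 0.05  # return on capital, assumed not to fall (domino 3)

for g in (0.03, 0.02, 0.01):      # growth slows (domino 1)
    beta = s / g                  # capital stock swells relative to income (domino 2)
    alpha = r * beta              # capital's share of income rises
    print(f"g={g:.0%}: beta={beta:.1f}, capital share={alpha:.0%}")
```

With these numbers, halving growth from 2 percent to 1 percent doubles β from 5 to 10 and pushes capital’s share of income from 25 percent to 50 percent, all without any change in who owns the capital.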
The way that Giles could put a serious dent into Piketty’s theory through this analysis is by showing that inequality of wealth ownership is falling in the recent past. This is not what Giles finds. He mostly finds what Piketty finds, except in England, where it’s flat instead of slightly growing in the recent past.
From the four dominos, we can also see what flaws in the data would make people believe that Piketty’s argument is fundamentally unsound. Remember that Piketty has constructed data for each of these trends, not just the fourth one. Piketty and Zucman’s data on private wealth and national income, for instance, is here. But to really dent the theory you need to take down one of the dominos. Most have been fighting about the third one – that either the rate of return on wealth will fall quickly, or that it is determined by institutional factors that are politically created.
But the idea that the ownership of capital will become more concentrated isn’t an essential part of the theory, though obviously if concentration does grow, the problem is even greater.
Notes on the Empirical Arguments
I’m not blown away by the criticism so far, but I hope Piketty responds to the individual issues. Especially what’s going on in Britain, because this could be a good learning experience. A few quick points from me, will hopefully have more later. The two major criticisms outside Britain are:
Giles argues that when comparing Britain, France, and Sweden, Piketty should weight by population instead of equally. Why? Because weighting the countries equally “is questionable, as it gives every Swedish person roughly seven times the weight of every French or British person.”
But weighting here, as always, depends on what you are trying to examine. I’d say the unit of analysis is the system of laws and economies that produces a consistent output among a group defined by space over time, i.e. the nation-state. And, especially if you want the variable to be not size but different economic systems, you have a collector’s set of what Gøsta Esping-Andersen calls The Three Worlds of Welfare Capitalism in Britain (liberal), France (corporatist), and Sweden (social democratic). If none of them is producing a fall in wealth inequality, that’s a remarkable fact. Weighting them by economic system makes sense. I’d be happy to be convinced otherwise, but Giles makes no such deep argument.
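The two weighting schemes are easy to compare directly. The wealth shares and populations below are round hypothetical numbers chosen only to show the mechanics, not Giles’s or Piketty’s actual figures.

```python
# Illustrative comparison of equal-weighting countries versus weighting
# by population when averaging top-10% wealth shares. Shares and
# populations are hypothetical round numbers.

countries = {
    # name: (top-10% wealth share, population in millions)
    "Britain": (0.70, 63),
    "France":  (0.62, 66),
    "Sweden":  (0.58, 10),
}

# Equal weight per country: each economic system counts once
equal = sum(s for s, _ in countries.values()) / len(countries)

# Population weight: each person counts once
total_pop = sum(p for _, p in countries.values())
by_pop = sum(s * p for s, p in countries.values()) / total_pop

print(f"Equal weights:      {equal:.3f}")
print(f"Population weights: {by_pop:.3f}")
```

The two averages differ whenever the small country differs from the big ones, which is exactly the point: the choice of weights encodes a claim about what you are comparing, economic systems or people, and neither is automatically correct.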
Missing USA Data?
Giles states that “it is not possible to say anything much about the top 10 per cent share between 1870 and 1960, as the data for the US simply does not exist.” However, as Matt Bruenig points out, since Piketty’s book came out there has been significant new work by Emmanuel Saez and Gabriel Zucman telling us exactly that. Check out the slides; they are awesome. This well-respected work fills in the makeshift estimates Piketty had to rely on for US wealth inequality in this period. That’s a sign of good work: subsequent research is bearing out its results.
And this new work points to wealth inequality increasing in the United States. Dramatically. Go figure. Piketty’s conclusion from Chapter 10, which is where he introduces inequality in the ownership of wealth: “To sum up: the fact that wealth is noticeably less concentrated in Europe today than it was in the Belle Epoque is largely a consequence of accidental events…and specific institutions. If those institutions were ultimately destroyed, there would be a high risk of seeing inequalities of wealth close to those observed in the past….Nothing is certain: inequality can move in either direction….it is an illusion to think that something about the nature of modern growth or the laws of the market economy ensures that inequality of wealth will decrease and harmonious stability will be achieved.”
It’s fair to say that this isn’t the only worrisome sign he points out in the book.
There’s a certain liberal fascination with the idea of conservative “reformers” showing up and recalibrating the Republican Party toward policies that would benefit working Americans and lead to potential bipartisan solutions. This fascination is on display in the reaction to the new Room to Grow report, available for free online, by the YG Network. Already being covered by liberals, this volume features various reform conservative writers addressing a range of innovative economic policy ideas, with the hope that Republican lawmakers will pay attention.
But if this is the best the new wave of conservatives can do on financial reform, it’s probably not the biggest worry that elected Republicans aren’t listening. The chapter that focuses on Dodd-Frank and the regulation of the financial markets after the crisis is by American Enterprise Institute’s James Pethokoukis. It’s billed as “financial reforms to combat cronyism,” but it offers little in terms of reform. The reformers should, at the very least, explain what they would repeal or replace in Dodd-Frank (a tension that exists with Obamacare as well), and this is left unclear.
The problems start with Pethokoukis’s take on the story of what went wrong in the first place. But he also glosses over the key issues facing policymakers today. The general idea of attacking “cronyism” and promoting competition tells us nothing about what needs to be done, so this report is a poor guide to the actual ongoing debates in financial reform. And its silence on contentious matters is so deafening that it bodes poorly for any kind of genuine positive agenda for the right, or for bipartisan alignment with liberal reformers. Understanding where Pethokoukis goes wrong, however, can tell us why conservatives are going to have a hard time dealing with actual reform in the age of Dodd-Frank.
The Story of What Went Wrong
For Pethokoukis, a lack of competition in the financial markets led to the crisis of 2008. To whatever extent there were problems, those problems existed because of the government’s safety net and backstopping of deposits and commercial banks.
A quick glance at most accounts of the financial crisis argues otherwise. The whole point of deregulation in the financial markets was to increase competition. The book that made the case for repealing Glass-Steagall argued for “an enhanced role for competition.” Economists associated with the Clinton administration also believed deregulation would lead to more competition and fix the financial sector. There was an explicit assumption that private entities like the ratings agencies would act as better regulators because they faced competition, an assumption that helps explain how those agencies became so pivotal to the entire system.
So what went wrong? All these new types of “shadow” banks turned out to have the same problems as any other banking sector. They had massive conflicts of interest, were capable of generating panics and runs with no lender-of-last-resort to fall back on, and there were no regulatory tools to wind them down. The goal of Dodd-Frank, in this version of the story, is to extend the core, tried-and-tested methods of financial reform to this shadow banking sector. Under these new regulations, the FDIC can take down shadow banks, derivatives have to be traded in an exchange, the CFPB provides transparency and accountability for consumers, and so on. Perhaps this narrative is wrong, or perhaps these are terrible policy goals that follow from it, but it goes entirely undiscussed in Pethokoukis’s account.
No Conservative Answer to Too Big To Fail
The problems become more obvious when you consider two of the most debated parts of Dodd-Frank: the FDIC’s ability to create a death panel for failing banks, known as resolution authority; and the Federal Reserve’s power to act as a “lender of last resort” in periods of crisis. Pethokoukis only obliquely addresses these issues, though they go to the core of Too Big To Fail.
He argues that Dodd-Frank “explicitly permits bailouts through its resolution authority provision.” What he is referencing is sadly not cited, because Dodd-Frank in fact requires “that unsecured creditors bear losses in accordance with the priority of claim.” (If Pethokoukis would argue that the power to differentiate payments is a de facto bailout, then all of bankruptcy is a permanent bailout, as those powers look just like critical vendor orders or other parts of the bankruptcy process in the proposed FDIC rules.)
Pethokoukis also argues against any type of lender-of-last-resort functionality for the non-commercial banking sector. Awkwardly, this in turn functions as a defense of the 2007 status quo. Take an investment bank, allow it to be subject to market panics, and have no resolution process in place other than tossing it into bankruptcy. This is the exact experiment we did with Lehman Brothers.
In supporting materials, Pethokoukis argues that conservative reformers “have ideas to end Too Big To Fail once and for all,” but it’s not clear what they actually are, or even what they could look like. He doesn’t engage in the debate over resolution authority, and he doesn’t mention various conservative replacements to Dodd-Frank that involve a special bankruptcy code. Maybe that’s because the leading proposals make it purposely difficult to lend in a crisis by penalizing lenders, an approach that violates the wisdom of economists going back to Bagehot.
Not a Roadmap for Our Current Debates
Now granted, the report is about messaging and priorities rather than the intricacies of specific reforms. But even here Pethokoukis’s general guiding star of pro-competition and anti-cronyism doesn’t tell us anything about what we need to know to assess the problems on the ground.
Derivative reforms are notably missing from this paper. I’d argue that forcing price transparency in the derivatives market is pro-competition because it leads to better information and an even playing field, and that pushing for aggressive international enforcement of those rules is anti-cronyism, because Wall Street shouldn’t get to flout the rules by cleverly housing its operations somewhere. Would conservative reformers agree? Based on this report, I have no idea.
Is the fact that Wall Street has such an extensive presence in commodities like aluminum a cause for concern? Do we want to push back on the market-mediated complex credit chains that comprise shadow banking? Did Dodd-Frank not go far enough in restructuring the financial system, or did it already go too far with the Volcker Rule and concentration limits? That the conservative reformers’ framework is incapable of pointing us in any plausible direction on these major unfolding issues is a serious problem. It points to an absolute void in reform conservative policy on the practical regulatory challenges of the day.
The most promising thing in Pethokoukis’s piece is the call for higher capital requirements, perhaps on the order of 15 percent. Though a very good idea, this won’t end Too Big To Fail. And again this doesn’t engage with the current debates over capital, which involve how to balance multiple needs of capital. If you have a straight leverage requirement by itself, won’t that be gamed by firms taking on big risks? If you have a lot of capital but no liquidity, won’t you be subject to runs? Should banks hold long-term, unsecured debt, perhaps engineered to turn into equity during a failure? People often seek a silver bullet here, but one of the points of Basel is to try and balance all these needs against each other. Pethokoukis is correct that requirements should be higher, but unclear on this balancing act.
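One way to see the balancing act: a flat leverage requirement treats every asset alike, so two banks with identical leverage ratios can carry very different amounts of risk. The banks, numbers, and risk weights below are illustrative, loosely following the Basel convention of risk-weighting assets.

```python
# Two hypothetical banks, each funding $100 of assets with $15 of
# equity, so both show the same flat leverage ratio of 15%.

def leverage_ratio(equity, assets):
    """Flat leverage: equity over total assets, ignoring risk."""
    return equity / assets

def risk_weighted_ratio(equity, holdings):
    """holdings: list of (asset value, Basel-style risk weight) pairs."""
    rwa = sum(value * weight for value, weight in holdings)
    return equity / rwa

safe_bank  = [(100, 0.2)]   # low-risk assets, 20% risk weight
risky_bank = [(100, 1.0)]   # risky loans, 100% risk weight

print(leverage_ratio(15, 100))               # 0.15 for both banks
print(risk_weighted_ratio(15, safe_bank))    # 0.75
print(risk_weighted_ratio(15, risky_bank))   # 0.15
```

A leverage rule alone can’t see the difference between these two banks, while a risk-weighted rule alone invites gaming of the weights; that tension is why regimes like Basel layer several requirements rather than relying on one number.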
Mediating Institutions Require Regulations
Capital requirements aside, it’s surprising how unsurprised I am by the supposedly bold new thinking on financial reform contained in this report. The report is ideologically focused on using the government to build the spaces between the individual and state, the space of mediating institutions that include the market. But one of the best ways we can do that is by enforcing transparency and accountability among people participating in a market. Indeed, arguably the biggest blow to cronyism in 2014 has been the disclosure by the SEC of serious, widespread breaches in the private equity market – breaches that are reportable because of Dodd-Frank.
Here’s an example of a policy I’d love to see the right embrace: fiduciary requirements updated for a landscape of 401(k)s, IRAs, and all the other personal, private, tax-exempt savings accounts that people have to deal with. The Department of Labor is trying to do this right now, in fact, and the House Tea Party is trying to stop them.
One might expect that conservatives thinking in terms of civil society would support fiduciary requirements. They’ve existed since antiquity, going back to the Code of Hammurabi, Judeo-Christian traditions, Chinese law, and, a bonus for the right, centuries of common law. Using the state to set a guidepost for ethical norms that have existed across time and place, and thereby boosting people’s ability to take responsibility for their investments, is remarkably consistent with a richer civil society. But it’s not there in this report.
I hope these reformers succeed in checking the furthest right-wing elements of their party, although the rehabilitation of Bush-era “compassionate conservatism” (a term whose absence is conspicuous) is a far heavier lift given the libertarian focus of today’s conservatism. But if this vision is going to be centered on mediating institutions rather than direct state action, it will be essential for reformers to understand how the state creates the market, and how it sets the terms for enforcing consumers’ interests, for private agents to get access to information, and for trading, prices, and risk to move throughout the economy. The core balance of transparency, accountability, stability, and innovation is not something that can simply be waved away by appeals to a “free” market, as is done here.
When I wrote a long piece about the Voluntarism Fantasy at Democracy Journal, several people accused me of attacking a strawman. My argument was that there’s an influential, yet never clearly articulated, position on the conservative right that we jettison much of the federal government’s role in providing for economic security. In response, private charities, churches and “civil society” will rush in and do a better job. Who, complained conservatives, actually argues this?
Well, here’s McKay Coppins with a quite flattering 7,000 word piece on how Paul Ryan has a “newfound passion for the poor.” What is the animating core and idea of his new passion?
Ryan’s broad vision for curing American poverty is one that conservatives have been championing for the last half-century, more or less. He imagines a diverse network of local churches, charities, and service organizations doing much of the work the federal government took on in the 20th century. Rather than supplying jobless Americans with a never-ending stream of unemployment checks, for example, Ryan thinks the federal government should funnel resources toward community-based work programs like Pastor Webster’s.
I’m happy to have been part of the editing team on this piece by JW Mason for The New Inquiry’s money and finance issue, Disgorge the Cash. It summarizes some of the issues he’s been developing at his blog slackwire on the relationship between the financial sector and the real economy. As both an economic matter, concerning the relationship between corporate borrowing, investment, and dividends before and after the early 1980s, and as a socio-cultural matter of managers and their relationships to the firms they manage, it’s fascinating stuff. It also points to a question, one Piketty doesn’t touch in his new Capital book, of whether the supermanagers driving the runaway growth in 1% labor incomes should really be thought of as part of capital income.
Much of the rest of the finance and money issue is now online, though you should still subscribe.
From the piece:
In 1960, there was a strong link between borrowing and investment. A firm that was borrowing $1 million more than a typical firm of that size would usually be investing $750,000 more. […]

Before 1980, there was no statistical relationship between borrowing and payouts in the form of dividends and share repurchases at the firm level. But since then, a clear positive relationship emerged, especially at business-cycle peaks. Firms that borrow more have significantly higher payouts to shareholders. […]

It was a common trope in accounts of the housing bubble that greedy or shortsighted homeowners were extracting equity from their houses with second mortgages or cash-out refinancing to pay for extra consumption. What nobody mentioned was that the rentier class had been playing a similar game longer and on a much larger scale. […]

At the moment, finance seems to be doing its job well. The idea that corporations will spontaneously socialize themselves looks utopian and naïve. The evolution described by Keynes, Berle and Means, Galbraith, and other theorists of managerialism early in the 20th century had been halted or reversed by its end. But that doesn’t mean it wasn’t real. Just look at the scale of the financial apparatus required to keep productive enterprises focused on profit maximization, and the fear capitalists have of allowing managers discretion over corporate resources, even when their incentives have been arduously “aligned.” Isn’t it testimony to how tenuous and unnatural production for profit is? In these far from revolutionary times, radicals often fret about the difficulty of transforming the existing organization of production into socialism. But this project is nothing compared with the Sisyphean task faced by the other side, of constantly transforming the existing organization of production into capitalism.
He also finds that this effect is stronger for those who are unlikely to receive unemployment insurance.
One comment I had. There’s an argument that the long-term unemployed are the weakest employees, those who were fired during the first wave of layoffs that started in 2008. These workers were going to have a hard time finding jobs not because of the labor market but because, to be blunt, they weren’t good workers. (One manifestation: Tyler Cowen did a lot with this idea of zero marginal product workers, ignoring that the marginal product of labor is affected by demand, back in 2011.) Since long-term unemployed workers look a lot like the general unemployment pool on observable characteristics, this is thought to be driven by softer, non-quantifiable worker characteristics.
Leave aside for a moment the difficulty that the long-term unemployed, those who were unlucky and have been looking for a job for more than 52 weeks, have in finding a job. Even those who have been unemployed zero weeks are having trouble finding jobs in this economy. And this is important evidence against the idea that the labor market is doing better than people realize if you just ignore the long-term unemployed.
Here’s a data point that I’m particularly interested in: how often are employed people going straight to another job, rather than leaving their job and enduring a period of unemployment before finding new work?
Though most people think of the employed spending some time in unemployment before starting a new job (an idea that was central to the recent theory that quit rates predicted a healthy job market), a substantial number of people move directly from one job to another without ever counting as unemployed. Since our statistics (and most of the economic models) are set up to observe people who are looking for work but are unable or unwilling to accept a job, these steadily employed workers can go missing in the discussion. That’s a shame, because historically they comprise almost half of all those who accept a new job.
The Rortybomb blog has long been a fan of the job flows data, or the statistics that show who is moving between employment and unemployment and in and out of the labor force. However, the easiest way to access this data didn’t distinguish between those who stayed employed with a single employer and those who stayed employed but moved between different employers.
Luckily, someone pointed me in the direction of the Employer-to-Employer Flows in the U.S. Labor Market data, compiled by the Federal Reserve, which breaks out those who move from one employer to another without being unemployed (described as “EE transitions” for the rest of this post). This data is current through the end of 2013.
If the economy is heating up significantly and the long-term unemployed aren’t capable of taking jobs, then the EE transition rate should be increasing. So how is it doing?
This is the percentage of the employed who are in EE transition (the results are the same for EE transition as a percentage of the labor force). As we can see, it declined during the crisis and hasn’t recovered even as of 2013.
Let’s also look at this from a different point of view: what percentage of those taking new jobs are currently employed? If the economy were heating up and the unemployed or those out of the labor force couldn’t take jobs, we would expect this share to increase. Taking EE transitions as a percentage of all those transitioning into new jobs, we see the following:
New hires are increasingly coming from the ranks of the unemployed and those not in the labor force rather than the currently employed. Where the employed made up 40 percent of new hires in the 1990s and 35 percent in the pre-crisis 2000s, they’re down to 30 percent now.
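The two measures above are straightforward to compute from the flows data. Here’s a minimal sketch with hypothetical monthly flow counts — the variable names and figures are illustrative stand-ins, not the Fed’s actual series:

```python
# Hypothetical monthly labor-market flow counts, in thousands of workers.
# EE = employer-to-employer moves with no spell of unemployment in between;
# UE = unemployed-to-employed; NE = not-in-labor-force-to-employed.
flows = {"EE": 2600, "UE": 2000, "NE": 1900}
employed = 137_000  # total employed, thousands (illustrative)

# EE transitions as a share of the employed (the first chart's measure)
ee_rate = flows["EE"] / employed

# EE transitions as a share of all new hires (the second chart's measure)
new_hires = sum(flows.values())
ee_share_of_hires = flows["EE"] / new_hires

print(f"EE rate: {ee_rate:.1%} of the employed")
print(f"EE share of hires: {ee_share_of_hires:.1%}")
```

With these made-up numbers the employed account for 40 percent of new hires; the post’s point is that this share has been sliding toward 30 percent.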
Why does this matter? First off, each of these job-to-job moves also creates a new job opening, which the unemployed can take. There’s a significant labor economics literature arguing that job-to-job transitions are a major driver of wage growth for workers (starting here and continuing to this day, h/t Arin Dube). If the number of people moving directly from one job to another is in decline, that’s a bad sign for wage growth, as well as for inflation and monetary policy. This appears to be undertheorized and not discussed enough in academic or policy circles.
But why is this happening? The American Time Use Survey hasn’t been able to tell me whether the employed are spending more or less time searching for other jobs since the recession started; the sample size is too small to draw firm conclusions about changes. If potential wage gains are a primary motivation for job-to-job transitions, then weak wage growth, or even low inflation, could be contributing to less churn in the economy.
When it comes down to it, the problems of those who aren’t working and want a job are similar to the problems of those who are working but want a new job. As Alan Krueger found in this chart in his recent paper (also see Ben Casselman’s chart here), the rate of successful job searches is down not just for the long-term unemployed but also for the short-term unemployed, compared to 2007. It appears the same holds true for those with an unemployment duration of zero. (The data page indicates that it was last updated in 2004, or perhaps 2011, but the Excel document has data through the end of 2013. Sneaky.)
Follow or contact the Rortybomb blog:
Alan B. Krueger, Judd Cramer, and David Cho of Princeton recently released a Brookings paper on the state of the labor market titled “Are the Long-Term Unemployed on the Margins of the Labor Market?” Their big headline result is that the long-term unemployed are going to have trouble finding steady work, both as a historical matter and from what we’ve seen in the Great Recession. It’s fascinating work we’ll revisit here.
But what does that mean for the job market right now, with its mix of short-term and long-term unemployed? The second takeaway is that if we only look at short-term unemployment, the economy makes more sense than if we look at total unemployment. As Tim Harford wrote, this research shows that if “we replotted the Phillips curve[’s mix of inflation and unemployment]… using statistics on short-term unemployment… it turns out that the old statistical relationships would work just fine.” Some are arguing that we should just focus on short-term unemployment for the moment as an indicator of how the economy is doing.
Is that the case? Not really. We should be careful with this argument now, because this is really a matter of 2009-2012. Back then, the question was why inflation was as steady as it was given very high unemployment. In 2014 the question is very different: why is inflation so low given high unemployment and the relationship of the past several years? We need to explain a different problem.
Let’s look at a key chart from the Krueger paper (green boxes my addition):
This is the change in core inflation versus unemployment. (There’s a similar dynamic with wage inflation in a different chart.) The left graphic is the change in core inflation versus overall unemployment, and the right graphic is the change versus short-term unemployment. As the paper’s authors argue, it’s a much tighter relationship if you just look at short-term unemployment. But there are three things to note here.
First, as flagged in the green box in the left graphic, the outliers are the years 2009-2012. Looking at their wage inflation version of this in particular, the authors note that they get a higher R-squared and better predictive value using short-term unemployment. But replicating this chart (data), if you simply take out 2009-2011, you also end up with the higher R-squared and better predictive value.
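As a rough illustration of that replication point, here’s a sketch of how a few outlier years can drag down a regression’s R-squared, and how dropping them restores the tighter fit. The data here is entirely made up for illustration, not the paper’s actual series:

```python
import random

def r_squared(xs, ys):
    """R-squared of a simple least-squares line fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Illustrative data only: a tight unemployment/inflation relationship,
# plus three outlier points standing in for 2009-2011.
random.seed(0)
unemp = [4 + 0.2 * i for i in range(21)]                   # 4.0 .. 8.0
d_infl = [-0.5 * u + 3 + random.gauss(0, 0.1) for u in unemp]

outlier_unemp = [9.3, 9.6, 8.9]   # very high unemployment...
outlier_infl = [0.5, 0.8, 0.6]    # ...but inflation barely falls

r2_with = r_squared(unemp + outlier_unemp, d_infl + outlier_infl)
r2_without = r_squared(unemp, d_infl)

print(f"R-squared with 2009-2011-style outliers: {r2_with:.2f}")
print(f"R-squared without them:                  {r2_without:.2f}")
```

The point is symmetric: whether you drop the outlier years or switch to short-term unemployment, you can recover a high R-squared, so the improved fit alone doesn’t settle which adjustment is the right one.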
More importantly, as a second matter, look at where we are now via the 2013 data point. The total unemployment number for 2013 is right on the line in the left graph. However, as we can see from the green circle on the right, using short-term unemployment shows inflation much lower than anticipated. This is not surprising; one of the more important economic stories of 2013 was the collapse of inflation. Note that if the labor market were actually getting much tighter, inflation should have been increasing during this time period. More broadly, if the problem were the preponderance of long-term unemployed in the general labor market, we wouldn’t expect 2013 to go into freefall and hop over the trendline as it did.
I’m very interested in why we didn’t collapse into deflation from 2009 to 2011. I imagine the Fed has something to do with it. But as a third point I’d be a little cautious about using just short-term unemployment during that time as an important indicator about the labor market, as job separations collapsed during the crisis. A low short-term unemployment rate reflects people simply not leaving their jobs more than it reflects the idea that the economy was doing better than we’d expect.
But this question is also a historical one. Krueger and his co-authors acknowledge this, using phrasing like “since 2009” as the basis of their paper. But other people might not catch this, and assume that the short-term unemployment rate is crucial for right now. But that doesn’t reflect our current situation of low inflation, a falling rate of long-term unemployment, and an unemployment rate that is going to be stuck in the mid-6% range for some time. We shouldn’t use a way of adjusting data to examine what was going on in 2010 to argue there’s less slack than there actually is out here in 2014.
I helped edit (curate might be a better word) the latest New Inquiry issue on Money and Finance. Their editor Robert Horning wanted to get some of the vibe of the older financial blogs, when the thing was still a wild west, and so we got a ton of our favorite old-school finance writers like Steve Waldman, Izzy Kaminska, and the Epicurean Dealmaker to contribute. I also helped edit a good explainer of MMT from Rebecca Rojer, and a definitive “disgorge the cash” piece on the rentier takeover of the economy by JW Mason, both of which are definitely worth your time. I have my own piece in the issue, now also online, about buying the future.
These pieces will eventually be rolled out and available online over the next month, but for now you can read them by subscribing. Hope you check it out!
My recent Voluntarism Fantasy piece (pdf) for Democracy Journal has gotten a fair amount of coverage. So I’m going to use this post, which will be updated, to keep track of the links to other people engaging, if only so I can respond in the future.
The piece was also reprinted at The Atlantic Monthly.
Reddit thread with comments.
In favor of the piece:
Matt Bruenig notes that the way we discuss this reflects a deep status quo bias at The Week.
Elizabeth Stoker, channeling Niebuhr, makes the strong Christian case that charity and government social insurance go together at The Week.
Sally Steenland of the Center for American Progress also addresses the fantasy in this article.
Erik Loomis makes an excellent point that in addition to the rest of the 19th century state, the “federally subsidized westward expansion was also part of this welfare state, as Republicans especially explicitly saw the frontier as a social safety net that would alleviate poverty without directly giving charity to people.”
James Kwak agrees that there’s “No Substitute for the Government” here.
Jordan Weissmann argues that “Charity Can’t Replace the Safety Net” over at Slate.
Less in favor:
Marvin Olasky, author of the Tragedy of American Compassion (which is one focal point of the article), responds in World.
Philanthropy Daily ran two articles critical of the piece, both written from within the voluntarism fantasy’s worldview. The first is from Hans Zeiger and the second from Martin Morse Wooster, who breaks out the paralipsis “I could argue that Mike Konczal and the Roosevelt Institute has a hidden agenda: to force the U.S. to accept Soviet-style communism … I won’t make that argument because I know it isn’t true.”
Rich Tucker at Townhall says that I do “a better job than Barack Obama did explaining the president’s ‘You didn’t build that’ philosophy,” which I’ll take as a compliment.
Reihan Salam has a set of responses at The Agenda.
Howard Husock argues that charitably funded, non-governmental programs are better than government at helping individuals thrive at Forbes.
Don Watkins at the Ayn Rand Institute has a five part (!) critical response; you can work backwards from the fifth part here.
Anarchist Kevin Carson sees “the welfare state nevertheless as an evil necessitated by the state-enforced model of capitalism, and ultimately destined to wither away along with economic privilege and exploitation” in his response.
I’ll add any more as they happen. (Last updated April 11th.)
I have a piece on the Tea Party and Wall Street up at the New Republic. The Tea Party’s theory of the financial crisis has absolved Wall Street completely, and this has serious implications for how the policy framework will evolve if the Tea Party gains in power in 2014 and 2016. I also got a chance to reference two pieces explaining various theories of the crisis which I recommend: Dean Starkman on the falsehood that Everyone Is To Blame and Adam Levitin’s review of recent financial crisis books.
I hope you check out the new piece.
I had a piece at The New Republic last week that I haven’t shared here yet. It’s a response to Adolph Reed’s long Harper’s piece about liberalism at this moment. You should check it out.
I wanted to clarify one thing because several historically-minded people asked me about it. I have a general rule that after I write something I should immediately delete the most “clever” thing I included, or at least go back and carefully edit it. In this piece I didn’t do that or make my point clear, and the glib results caused some confusion.
I opened the piece with a New Republic essay from 1940 criticizing Franklin Roosevelt from the left for leaving the economic problem unresolved. I had just read the essay in the (excellent, recommended) collection of New Deal Thought by Howard Zinn from the 1960s, and really thought it would be clever to include it in a current New Republic piece. What I ideally wanted the piece to do was (i) point out that Reed’s golden period of the late 1930s, where he has things working out well for the left, was more problematic at the time than he lets on, and in ways similar to where we are now.
But also (ii) point out that historical shifts often happen even when presidents are floundering, as the “second New Deal” was formalizing an order that would reign for 40 years even though Franklin Roosevelt was making a mess of the late 1930s with his disastrous turn to austerity. As a result, we can’t answer the most important question about President Obama — is he the beginning of a longer-term shift, or someone who forecloses the potential of that longer-term shift? — by pointing to individual actions of his, which is the core of Reed’s argument. And (iii) reference the 1940 piece at the end of mine, with its smart observation about self-enforcing reform and the open question of whether Obamacare, etc. will ever have those dynamics. [And as a personal fun point, I also liked (iv) pointing out that the New Republic was pretty lefty back in the day, within its own pages.]
However, I botched the intro while cutting for space, and wrote it in a glib manner that read as dismissing valid criticism of both then and now. Some thought I was excusing the screwups of 1937; others thought I was comparing President Obama to FDR. And since it introduced the piece, it hung over the rest (which I’m pretty happy with). I’ll try to do better next time.
This is not an issue as to whether people shall go hungry or cold in the United States. It is solely a question of the best method by which hunger and cold shall be prevented. It is a question as to whether the American people on one hand will maintain the spirit of charity and mutual self-help through voluntary giving and the responsibility of local government as distinguished on the other hand from appropriations out of the Federal Treasury for such purposes. My own conviction is strongly that if we break down this sense of responsibility of individual generosity to individual and mutual self-help in the country in times of national difficulty and if we start appropriations of this character we have not only impaired something infinitely valuable in the life of the American people but have struck at the roots of self-government. Once this has happened it is not the cost of a few score millions, but we are faced with the abyss of reliance in future upon Government charity in some form or other. The money involved is indeed the least of the costs to American ideals and American institutions.
1931. This is when food riots were breaking out in American cities. Unemployment would be over 15 percent that year, climbing to 25 percent the next. In 1932 you’d also see Douglas MacArthur, Dwight Eisenhower and George Patton leading an army to destroy the encampments of thousands of people occupying Washington DC demanding relief.
The world was falling apart, liberal democracy was facing its worst challenge in decades, and Hoover wouldn’t budge on the idea that there was any role for the public or the federal state in meeting the challenges of mass economic insecurity. This is what many conservatives still believe, and it’s important to dissect why it fails and what we can do about it.
(Wonkish. Part One of Two. This part covers theory, part two some data.)
Can the number of people quitting their jobs tell us anything useful about slack in the labor market? No. At most, it tells us to focus even more on the stuff we already knew to be watching.
Evan Soltas has argued that the rate at which people quit their job tells us all we need to know about the unemployment rate, in particular how likely it is that the long-term unemployed and people who have left the labor force could be brought back into the labor force. There were several responses from