Analysis and Commentary by Roosevelt Fellow Mike Konczal

In the aftermath of the electoral blowout, a reminder: the Great Recession isn’t over. In fact, GDP growth was slower in 2013 than in 2012. Let’s go to the FRED data:

There are dotted lines added at the end of 2012 to give you a sense that the economy didn’t speed up throughout 2013. Even though we were another year into the “recovery,” GDP growth slowed down a bit.

There are a lot of reasons people haven’t discussed it this way. I saw a lot of people using year-over-year GDP growth for 2013, proclaiming it a major success. A problem with using that method for a single point is that it’s very sensitive to what is happening around the endpoints, and indeed the quarters just before and after that data point featured negative or near-zero growth. Averaging it out (or even doing year-over-year on a longer scale) shows a much worse story. Also, much of the celebrated convergence between the two years was really the BEA finding more austerity in 2012. (I added a line going back to 2011 to show that the overall growth rate has been lower since then. According to David Beckworth, this is the point when fiscal tightening began.)
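To see the endpoint problem concretely, here’s a toy sketch. The quarterly levels are invented, not BEA data; the point is only that a single Q4-over-Q4 reading swings on one soft endpoint quarter far more than an annual average does:

```python
# Toy illustration with invented quarterly GDP levels (indexed to 100):
# one weak quarter at the endpoint moves the Q4/Q4 reading far more
# than it moves annual-average growth.

def q4_over_q4(levels):
    """Growth of the final quarter over the same quarter a year earlier."""
    return levels[-1] / levels[-5] - 1

def annual_average(levels):
    """Growth of year 2's average level over year 1's average level."""
    return (sum(levels[4:8]) / 4) / (sum(levels[0:4]) / 4) - 1

steady   = [100, 100.6, 101.2, 101.8, 102.4, 103.0, 103.6, 104.2]
weak_end = [100, 100.6, 101.2, 101.8, 102.4, 103.0, 103.6, 103.2]

for name, path in [("steady", steady), ("weak endpoint", weak_end)]:
    print(f"{name:>13}: Q4/Q4 {q4_over_q4(path):+.2%}, "
          f"annual average {annual_average(path):+.2%}")
```

One soft quarter knocks about a point off the Q4/Q4 number while the annual average barely moves a quarter as much, which is why a single year-over-year reading can flatter or damn an entire year.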

Other people were hoping that the Evans Rule and open-ended purchases could stabilize “expectations” of inflation regardless of underlying changes in economic activity (I was one of them); that didn’t happen. And yet others knew the sequestration was in place and unlikely to be moved, so they might as well make lemonade out of the austerity.

And that’s overall growth. Wages are even uglier. (Note that in an election meant to repudiate liberalism, minimum wage hikes passed with flying colors.) The Federal Reserve’s Survey of Consumer Finances is not a bomb-throwing document, but it’s hard not to read class war into the latest one. From 2010 to 2013, from a year after the recession ended until last year, median incomes fell:

When 45 percent of the electorate ranks the economy as the top issue in exit polls, and the economy performs like this, it’s no wonder we’re having wave election after wave election of discontent.


Hey everyone, I have two new pieces out there I hope you check out.

The first is a piece about the financialization of the economy in the latest Washington Monthly. I’m heading up a new project at Roosevelt on that topic, with more details to come soon, and this essay is its first product. I’m happy to have it as part of a special issue on inequality and the economy headed up by the fine people at the Washington Center for Equitable Growth. There’s a ton of great stuff in there, including an intro by Heather Boushey, Ann O’Leary on early childhood programs, Alan Blinder on boosting wages, and a conclusion by Joe Stiglitz. It’s all really great stuff, and I hope it shows a deeper and wider understanding of an inequality agenda.

The second is the latest The Score column at The Nation, which focuses on the effect of high tax rates on inequality and on structuring markets. It’s a writeup of the excellent Saez, Piketty, and Stantcheva “Three Elasticities” paper, and a continuation of a post here at this blog.


In the latest National Affairs, Jason Delisle and Jason Richwine make what they call “The Case for Fair-Value Accounting.” This is the process of using the price of, say, student loans in the capital markets to budget and discount government student loans. (The issue also has articles walking back support for previously acceptable moderate-right ideas like Common Core and the EITC, showing the way conservative wonks are starting to line up for 2016.)

In the piece Delisle and Richwine make two basic mistakes in financial theory, mistakes that undermine their ultimate argument. Let’s dig into them, because it’s a wonderful opportunity to get some finance back into this blog (like it used to have back when it was cool).

Error 1: Their Definition of FVA Is Wrong

What is fair-value accounting (FVA)? According to the authors, FVA “factors in the cost of market risk,” meaning “the risk of a general downturn in the economy.” This market risk reflects the potential for defaults; it’s “the cost of the uncertainty surrounding future loan payments.”

These statements are false. There is a consensus that FVA incorporates significantly more than this definition of market risk.

Here’s the Financial Economists Roundtable, endorsing FVA: “Use of Treasury rates as discount factors, however, fails to account for the costs of the risks associated with government credit assistance — namely, market risk, prepayment risk, and liquidity risk.”

And the CBO specifically incorporates all these additional risks when it evaluates FVA: “Student loans also entail prepayment risk… investors… also assign a price to other types of risk, such as liquidity risk… CBO takes into account all of those risks in its fair-value estimates.”

This is a much broader set of concerns than what Delisle and Richwine bring up. For instance, FVA requires taxpayers to be subject to the same liquidity and prepayment risks as the capital markets. Remember late 2008, when the federal government stepped in to provide liquidity precisely because the capital markets couldn’t? That’s a clue that there might be some differences between public and private risks.

Crucially, it’s not clear to me that taxpayers have the same prepayment risk as the capital markets. Private holders of student loans are terrified that their loans might be paid back too quickly, because loans tend to get paid back when interest rates are low, making it tough to reinvest at the same rate. This risk is especially large given the negative convexity of student loans, which can be prepaid without penalty. Private actors need to be compensated generously for this risk.
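For concreteness, here’s a minimal sketch of that reinvestment problem, with invented numbers: a holder counting on 6 percent for ten years gets prepaid after two, just as rates have fallen to 3 percent.

```python
# Invented numbers: a holder expects 6% for 10 years, but the loan
# prepays after year 2, just as reinvestment rates have fallen to 3%.

principal = 100.0
hold_to_maturity = principal * 1.06 ** 10      # what the holder expected

two_years_at_6 = principal * 1.06 ** 2         # the loan pays off early...
prepaid_path = two_years_at_6 * 1.03 ** 8      # ...and is reinvested at 3%

print(f"expected, no prepayment: {hold_to_maturity:.1f}")   # ~179.1
print(f"prepaid and reinvested:  {prepaid_path:.1f}")       # ~142.3
# Prepayment tends to arrive exactly when rates are low, which is why
# private holders demand extra yield to bear it.
```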

Do taxpayers face the same risk? If student loans owed to the government were paid down faster than anyone expected, would taxpayers be furious? I wouldn’t. I certainly wouldn’t say “how are we going to continue to make the profit we were making?” as a citizen, though it would be an essential question as a private bondholder. Either way, it’s as much a political question as an economic one. (I make the full argument for this in a blog post here.)

Error 2: Their Definition of Market Risk Is Wrong

The authors like FVA because it accounts for market risk. But what is market risk? According to Delisle and Richwine, market risk is “associated with expecting future loan repayments,” as “[s]tudents might pay back the expected principal and interest” but they also may not. It is also “the risk of a general downturn in the economy… market risk cannot be diversified away.”

So the first part is wrong: market risk is not credit risk, the risk of default or missed payments. The International Financial Reporting Standards (IFRS 7), for instance, require reporting market risk separately from credit risk, because they are obviously two different things. I’ve generally only heard market risk used in the context of bond portfolios to mean interest rate risk, which they also don’t mention. So if market risk isn’t credit risk or interest rate risk, what is it?

I’m not sure. What I think is going on is that they are confusing the concept with the market risk of a stock, specifically its beta. A stock’s beta is its sensitivity to overall equity prices. (Pull up a random stock page and you’ll see the beta somewhere.) It’s very common phrasing to say this risk can’t be diversified away and is a proxy for the risk of general downturns in the economy, which is the same language used in this piece.
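For concreteness, beta is conventionally estimated as the covariance of the stock’s returns with the market’s returns, divided by the variance of the market’s returns. A quick sketch with invented return series:

```python
# Estimating a stock's beta from return series (the returns are invented).
import statistics

market = [0.02, -0.01, 0.03, -0.02, 0.01, 0.04, -0.03, 0.02]
stock  = [0.03, -0.02, 0.05, -0.03, 0.01, 0.06, -0.05, 0.03]

mean_m = statistics.mean(market)
mean_s = statistics.mean(stock)
cov = sum((m - mean_m) * (s - mean_s)
          for m, s in zip(market, stock)) / (len(market) - 1)
beta = cov / statistics.variance(market)

# beta > 1 means the stock amplifies market swings; that sensitivity is
# the undiversifiable "market risk" of an equity portfolio.
print(f"beta = {beta:.2f}")
```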

Market risk for stocks is the question of how much your portfolio will go down if the market as a whole goes down. But this has nothing to do with student loans, because students (aside from an enterprising few) don’t sell equity; they take out loans. If students paid for school with equity, in theory an economic downturn would lead to less revenue, since students would make less money overall. But even then it’s a shaky concept.

This isn’t just academic. There’s a reason people don’t speak of a one-to-one relationship between a market downturn and the value of a bond portfolio, as the authors’ “market risk” definition does. If the economy tanks, credit risk increases, so bonds are worth less, but interest rates fall, meaning the same bonds are worth more. How this all balances is complicated, and strongly driven by the distribution of bond maturities. This is why financial risk management distinguishes between credit, liquidity, and interest rate risks, and doesn’t conflate those concepts as the authors do.
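To make those offsetting forces concrete, here’s a toy pricing sketch with invented numbers: in the downturn the credit spread widens 3 points while the risk-free rate falls 2 points, and the net hit depends heavily on maturity.

```python
# Toy sketch (invented numbers): in a downturn, a bond's credit spread
# widens (pushing its price down) while risk-free rates fall (pushing
# it up). How that nets out depends heavily on maturity.

def bond_price(coupon, years, risk_free, spread, face=100.0):
    """Discount annual coupons and principal at risk_free + spread."""
    r = risk_free + spread
    return sum(coupon / (1 + r) ** t for t in range(1, years + 1)) \
        + face / (1 + r) ** years

for years in (2, 10, 30):
    normal = bond_price(5.0, years, risk_free=0.04, spread=0.01)
    # downturn: risk-free falls 2 points, credit spread widens 3 points
    downturn = bond_price(5.0, years, risk_free=0.02, spread=0.04)
    print(f"{years:>2}-year bond: {normal:6.1f} -> {downturn:6.1f}")
# The 2-year bond barely moves; the 30-year bond takes a large hit. If
# the rate fall had instead dominated the spread widening, prices would
# have risen. Nothing here moves one-to-one with "the market."
```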

(Though they are writing as experts, I think they are just copying and pasting from the CBO’s confusing and erroneous definition of “market risk.” If they are sourcing any kind of common financial industry practices or definitions, I don’t see it. I guess Jason Richwine didn’t get a chance to study finance while publishing his dissertation.)

Here again I’d want to understand more how the value of student loans to taxpayers moves with interest rates. Repayments are mentioned above. And for private lenders, higher interest rates mean that they can sell bonds for less and that they’re worth less as collateral. They need to be compensated for this risk. Do taxpayers have this problem to the same extent? If interest rates rise, do we worry we can’t sell the student loan portfolio for the same amount to another government, or that we can’t use it as collateral to fund another war? If not, why would we use this market rate?

Is This Just About Credit Risk?

Besides all the theoretical problems mentioned above, there’s also the practical problem that the CBO uses the already existing private market for student loans (“relied mainly on data about the interest rates charged to borrowers in the private student loan market”), even though there’s obviously a massive adverse selection problem there. Though not an error, it’s a third major problem for the argument. The authors don’t even touch this.

But for all the talk about FVA, the only real concern the authors bring up is credit risk. “What if taxpayers don’t get paid?” is the question raised over and over again in the piece. The authors don’t articulate any direct concerns about, say, a move in interest rates changing the value of a bond portfolio, aside from the possibility that it might mean more credit losses.

So dramatically scaling back consumer protections like bankruptcy and statutes of limitations for student debtors wasn’t enough for the authors. Fair enough. But there’s an easy fix: the government could buy some credit protection for losses in excess of those expected on, say, $10 billion of its portfolio, and use that price as a supplemental discount. This would be quite low-cost and would provide useful information. But it’s a far cry from FVA, even if FVA’s proponents don’t quite understand that.


(With conservatives looking to make big gains Tuesday, it’s important to understand how they understand the financial crisis. Luckily we have a guest post by David Fiderer, on a recent book about the crisis. For over 20 years, Fiderer has been a banker covering the energy industry. He is trained as a lawyer and is


QE3 is over. Economists will debate the significance of it for some time to come. What sticks out to me now is that it might have been entirely backwards: what if the Fed had set the price instead of the quantity?

To put this in context for those who don’t know the background, let’s talk about carbon cooking the planet. Going back to Weitzman in the 1970s (nice summary by E. Glen Weyl), economists have focused on the relative tradeoff of price versus quantity regulations. We could regulate carbon by changing the price, say through carbon taxes. We could also regulate it by changing the quantity, say by capping the amount of carbon in the air. In theory, these two choices have identical outcomes. In practice, of course, they don’t. Which is better depends on the risk involved in slight deviations from the goal: if carbon above a certain level is very costly to society, then it’s better to target the quantity rather than the price, hence setting a cap on carbon (and trading it) rather than just taxing it.
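A toy version of the Weitzman logic, with an invented convex damage function, shows why. A tax pins the price but leaves the quantity uncertain, and when damages rise steeply past a threshold, those quantity misses get expensive:

```python
# Toy Weitzman-style sketch (all numbers invented). Damages from carbon
# are convex: mild below a threshold, steep above it. A tax fixes the
# price but leaves quantity uncertain; a cap fixes the quantity.
import random

random.seed(0)
THRESHOLD = 100.0

def damage(q):
    """Convex damages: cheap below the threshold, very costly above it."""
    return 0.1 * q + 5.0 * max(q - THRESHOLD, 0.0)

def emissions_under_tax(tax, shock):
    """Firms emit until abatement cost equals the tax; shocks move quantity."""
    return 120.0 - 0.5 * tax + shock

CAP = 95.0
TAX = 50.0  # calibrated so *expected* emissions match the cap: 120 - 25 = 95

tax_damages = [damage(emissions_under_tax(TAX, random.gauss(0, 10)))
               for _ in range(10_000)]

print(f"cap at {CAP:.0f}: damages = {damage(CAP):.1f}")
print(f"tax at {TAX:.0f}: expected damages = {sum(tax_damages) / 10_000:.1f}")
# The tax occasionally lets emissions blow past the threshold, where
# damages explode. With a steep enough damage curve, quantity wins; with
# a flat one, the price instrument would.
```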

This same debate on the tradeoff between price and quantity intervention is relevant for monetary policy, too. And here, I fear the Federal Reserve targeted the wrong one.

In December 2012, the Federal Reserve began buying $45 billion a month of long-term Treasuries. Part of the reason was to push down the interest rates on those Treasuries and boost the economy.

But what if the Fed had done that backwards? What if it had picked a price for long-term securities, and then figured out how much it would have to buy to get there? Then it would have said, “we aim to set the 10-year Treasury rate at 1.5 percent for the rest of the year” instead of “we will buy $45 billion a month of long-term Treasuries.”

This is what the Fed does with short-term interest rates. Taking a random example from 2006, it doesn’t say, “we’ll sell an extra amount in order to raise the interest rate.” Instead, it just declares, “the Board of Governors unanimously approved a 25-basis-point increase in the discount rate to 5-1/2 percent.” It announces the price.

Remember, the Federal Reserve also did QE with mortgage-backed securities, buying $40 billion a month in order to bring down the mortgage rate. But what if it just set the mortgage rate? That’s what Joseph Gagnon of the Peterson Institute (who also helped execute the first QE) argued for in September 2012, when he wrote, “the Fed should promise to hold the prime mortgage rate below 3 percent for at least 12 months. It can do this by unlimited purchases of agency mortgage-backed securities.” (He reiterated that argument to me in 2013.) Set the price, and then commit to unlimited purchases. That’s good advice, and we could have done it with Treasuries as well.
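To make the contrast concrete, here’s a toy simulation. Everything in it is invented: the linear demand curve, the numbers, and especially the credibility assumption flagged at the end.

```python
# Toy contrast (invented numbers): quantity-targeting vs. price-targeting
# in a simple linear market for long-term Treasuries, where the yield
# falls as Fed purchases rise, plus month-to-month shocks.
import random

random.seed(1)
BASE_YIELD = 2.5    # percent, with zero Fed purchases
SENSITIVITY = 0.02  # points of yield per $1bn of monthly purchases

def market_yield(purchases, shock):
    return BASE_YIELD - SENSITIVITY * purchases + shock

shocks = [random.gauss(0, 0.3) for _ in range(12)]

# Quantity policy: buy a fixed $45bn a month and let the yield float.
q_yields = [market_yield(45, s) for s in shocks]

# Price policy: announce a 1.5% target and buy whatever it takes.
TARGET = 1.5
p_purchases = [(BASE_YIELD + s - TARGET) / SENSITIVITY for s in shocks]

print(f"quantity policy: yield wanders "
      f"{min(q_yields):.2f}%-{max(q_yields):.2f}%")
print(f"price policy: yield pinned at {TARGET:.2f}%, purchases range "
      f"{min(p_purchases):.0f}-{max(p_purchases):.0f} $bn/month")
# If the peg is credible, traders stop betting against it and the shocks
# shrink, so actual purchases could come in below $45bn. That credibility
# effect is assumed here, not demonstrated.
```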

What difference would this have made? The first is that it would be far easier to understand what the Federal Reserve was trying to do over time. What was the deal with the tapering? I’ve read a lot of commentary about it, but I still don’t really know. Do stocks matter, or flows? I’m reading a lot of guesswork. But if the Federal Reserve were to target specific long-term interest rates, it would be absolutely clear what it was communicating at each moment.

The second is that it might have been easier. People hear “trillions of dollars” and think of deficits instead of asset swaps; focusing on rates might have made it possible for people to be less worried about QE. The actual volume of purchases might also have been lower, because the markets are unlikely to go against the Fed on these issues.

And the third is that if low interest rates are the new normal, through secular stagnation or otherwise, these tools will need to be formalized. We should avoid the herky-jerky nature of Federal Reserve policy of the past several years, and we can do so by looking to the past.

Policy used to be conducted this way. As evidence that a great deal of knowledge has been lost in macroeconomics, JW Mason recently wrote up a great 1955 article by Alvin Hansen (of secular stagnation fame), in which Hansen takes it for granted that economists believe intervention along the entirety of the rate structure is appropriate action.

He even finds Keynes arguing along these lines in The General Theory: “Perhaps a complex offer by the central bank to buy and sell at stated prices gilt-edged bonds of all maturities, in place of the single bank rate for short-term bills, is the most important practical improvement which can be made in the technique of monetary management.”

The normal economic argument against this is that all the action can be done with the short rate. But, of course, that is precisely the problem at the zero lower bound and in a period of persistent low interest rates.

Sadly for everyone who imagines a non-political Federal Reserve, the real argument is political. And it’s political in two ways. The first is that the Federal Reserve would be accused of planning the economy by setting long-term interest rates. So it essentially has to sneak around this argument by adjusting quantities. But, in a technical sense, they are the same policy. One is just opaque, which gives political cover but is harder for the market to understand.

And the second political dimension is that if the Federal Reserve acknowledges the power it has over interest rates, it also owns the recession in a very obvious way.

This has always been a tension. As Greta R. Krippner found in her excellent Capitalizing on Crisis, in 1982 Frank Morris of the Boston Fed argued against ending their disaster tour with monetarism by saying, “I think it would be a big mistake to acknowledge that we were willing to peg interest rates again. The presence of an [M1] target has sheltered the central bank from a direct sense of responsibility for interest rates.” His view was that the Fed could avoid ownership of the economy if it only just adjusted quantities.

But the Federal Reserve did have ownership then, as it does now. It has tools it can use, and will need to use again. It’s important for it to use the right tools going forward.


Janet Yellen gave a reasonable speech on inequality last week, and she barely managed to finish it before the right wing went nuts.

It’s attracted the standard set of overall criticisms, like people asserting that low rates give banks increasingly “wide spreads” on lending, a claim made with no evidence and one that ignores that spreads might have fallen overall. Note that Bernanke also gave similar inequality speeches (though the right went off the deep end over Bernanke too), and Jonathan Chait notes how aggressive Greenspan was in discussing controversial policies, to crickets on the right.

But I also just saw that Michael Strain has written a column arguing that “by focusing on income inequality [Yellen] has waded into politically choppy waters.” Putting the specifics of the speech to the side, it’s simply impossible to talk about the efficacy of monetary policy and full employment during the Great Recession without discussing inequality, or without touching on economic issues where inequality sits in the background.

Here are five inequality-related issues, off the top of my head, that are important in monetary policy and full employment. The arguments may or may not be convincing (I’m not sure where I stand on some), but ruling these topics entirely out of bounds will just lead to a worse understanding of what the Federal Reserve needs to do.

The Not-Rich. The material conditions of the poorest and everyday Americans are an essential part of any story of inequality. If the poor are doing great, do we really care if the rich are doing even better? Yet in this recession everyday Americans are doing terribly, and it has macroeconomic consequences.

Between the end of the recession in 2009 and 2013, median wages fell an additional 5 percent. One element of monetary policy is changing the relative return to saving, yet according to recent work by Zucman and Saez, 90 percent of Americans aren’t able to save any money right now. If that is the case, it’s that much harder to make monetary policy work.

Indeed, one effect of committing to low rates in the future is making it more attractive to invest where debt servicing is difficult, for example through subprime auto loans, which are booming (and unregulated under Dodd-Frank because of auto-dealership Republicans). Meanwhile, policy tools that we know flatten low-end inequality between the 10th and 50th percentiles, like the minimum wage, which has fallen in value, could potentially boost aggregate demand.

Expectations. The most influential theories about how monetary policy can work when we are at the zero lower bound, as we’ve been for the past several years, involve “expectations” of future inflation and wage growth.

One problem with changing people’s expectations of the future is that those expectations are closely linked to their experiences of the past. And if people firmly expect low or zero nominal income growth, because everything around them screams inequality and because income growth and inflation rates have been falling for decades, strongly worded statements and press releases from Janet Yellen are going to have less effect.

The Rich. The debate around secular stagnation is ongoing. Here’s the Vox explainer. Larry Summers recently argued that the term emphasizes “the difficulty of maintaining sufficient demand to permit normal levels of output.” Why is this so difficult? “[R]ising inequality, lower capital costs, slowing population growth, foreign reserve accumulation, and greater costs of financial intermediation.” There’s no sense in which you can try to understand the persistence of low interest rates and their effect on the recovery without considering growing inequality across the Western world.

Who Does the Economy Work For? To understand how well changes in the interest-sensitive components of investment, a major monetary channel, might work, you need to have some idea of how the economy is evolving. And stories about how the economy works now are going to be tied to stories about inequality.

The Roosevelt Institute will have some exciting work by JW Mason on this soon, but if the economy is increasingly built around disgorging the cash to shareholders, we should question how this helps or impedes full output. What if low rates cause, say, the Olive Garden to focus less on building, investing, and hiring, and more on reworking its corporate structure so it can rent its buildings back from another corporate entity? Both are in theory interest-sensitive, but the first brings us closer to full output, and the second merely slices the pie a different way in order to give more to capital owners.

Alternatively, if you believe (dubious) stories about how the economy is experiencing trouble as a result of major shifts brought about by technology and low skills, then we have a different story about inequality and the weak recovery.

Inequality in Political and Market Power. We should also consider the political and economic power of industry, especially the financial sector. Regulations are an important component to keeping worries about financial instability in check, but a powerful financial sector makes regulations useless.

But let’s look at another issue: monetary policy’s influence on underwater mortgage refinancing, a major demand booster in the wake of a housing collapse. As the Federal Reserve Bank of New York found, the spread between primary and secondary rates increased during the Great Recession, especially into 2012 as HARP was revamped and more aggressive zero-bound policies were adopted. The Fed is, obviously, cautious about attributing pricing power to the banks, but it does look like the market power of finance was able to capture lower rates and keep demand lower than it needed to be. The share of the top 0.1 percent of earners working in finance doubled over the past 30 years, and it’s hard not to see that as related to displays of market and political power like this one.

These ideas haven’t had their tires kicked. This is a blog, after all. (As I noted, I’m not even sure if I find them all convincing.) They need to be modeled, debated, given some empirical handles, and so forth. But they are all stories that need to be addressed, and it’s impossible to do any of that if there’s massive outrage at even the suggestion that inequality matters.


(image via NYPL)

Guess what? I’m challenging you to a game of tennis in three days. Here’s an issue though: I don’t know anything about tennis and have never played it, and the same goes for you.

In order to prepare for the game, we are each going to do something very different. I’m going to practice playing with someone else who isn’t very good. You, meanwhile, are going to train with an expert. But you are only going to train by talking about tennis with the expert, and never actually play. The expert will tell you everything you need to know in order to win at tennis, but you won’t actually get any practice.

Chances are I’m going to win the game. Why? Because the task of playing tennis isn’t just reducible to learning a set of things to do in a certain order. There’s a level of knowledge and skills that become unconsciously incorporated into the body. As David Foster Wallace wrote about tennis, “The sort of thinking involved is the sort that can be done only by a living and highly conscious entity, and then it can really be done only unconsciously, i.e., by fusing talent with repetition to such an extent that the variables are combined and controlled without conscious thought.” Practicing doesn’t mean learning rules faster; it means your body knows instinctively where to put the tennis racket.

The same can be said of most skills, like learning how to play an instrument. Expert musicians instinctively know how the instrument works. And the same goes for driving. Drivers obviously learn certain rules (“stop at the stop sign”) and heuristics (“slow down during rain”), but much of driving is done unconsciously and reflexively. Indeed a driver who needs to think through procedurally how to deal with, say, a snowy off ramp will be more at risk of an accident than someone who instinctively knows what to do. A proficient driver is one who can spend their mental energy making more subtle and refined decisions based on determining what is salient about a specific situation, as past experiences unconsciously influence current experiences. Our bodies and minds aren’t just a series of logic statements but also a series of lived-through meanings.

This is my intro-level remembrance of Hubert Dreyfus’ argument against artificial intelligence via Merleau-Ponty’s phenomenology (more via Wikipedia). It’s been a long time since I followed any of this, and I’m not able to keep up with the current debates. As I understand it, Dreyfus’ arguments were hated by computer scientists in the 1970s, then appreciated in the 1990s, and now computer scientists assume cheap computing power can use brute force and some probability theory to work around them.

But my vague memory of these debates is why I imagine driverless cars are going to hit a much bigger obstacle than most people expect. I was reminded of all this by a recent article on Slate about Google’s driverless cars from Lee Gomes:

[T]he Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway […] But the maps have problems, starting with the fact that the car can’t travel a single inch without one. […]

Because it can’t tell the difference between a big rock and a crumbled-up piece of newspaper, it will try to drive around both if it encounters either sitting in the middle of the road. […] Computer scientists have various names for the ability to synthesize and respond to this barrage of unpredictable information: “generalized intelligence,” “situational awareness,” “everyday common sense.” It’s been the dream of artificial intelligence researchers since the advent of computers. And it remains just that.

Focus your attention on the issue that the car can’t tell the difference between a dangerous rock to avoid and a newspaper to drive through. As John Dewey found when he demolished the notion of a reflex arc, reflexes become instinctual, so attention is paid only when something new breaks the habitual response. Put differently, experienced human drivers don’t first see the rock and then decide to move; the deciding-to-move is just as much what forces them to see the rock. The functionalist breakdown, necessary to the propositional logic of computer programming, is just an ex post justification for a whole, organic action. This is the “everyday common sense” alluded to in the piece.

Or let’s put it a different way. Imagine learning tennis by setting up one of those machines that shoots tennis balls at you the same repetitive way. There would be a strict limit on how much you could learn, or on how much that one motion would translate into being able to play an entire game. Teaching cars to drive by essentially having them follow a map means they are playing tennis by repeating the same ball toss, over and over again.

Again, I’m willing to entertain the argument that the pure, brute force of computing power will be enough: stack enough processors on top of each other and they’ll eventually bang out an answer on what to do. But if the current approach requires telling cars absolutely everything that will be around them, instead of giving them some sort of computational ability to react to the road itself, including via experience, this will be a much harder problem. I hope it works, but maybe we can slow down the victory laps that are already calling for massive overhauls to our understanding of public policy (like the idea that public buses are obsolete) until these cars encounter a situation they don’t know in advance.


Does the USA Really Soak the Rich?

There’s a new argument about taxes: the United States is already far too progressive with taxation, it says, and if we want to build a better, egalitarian future we can’t do it through a “soak the rich” agenda. It’s the argument of this recent New York Times editorial by Edward D. Kleinbard, and a longer piece by political scientists Cathie Jo Martin and Alexander Hertel-Fernandez at Vox. I’m going to focus on the Vox piece because it is clearer about what they are arguing.

There, the researchers note that the countries “that have made the biggest strides in reducing economic inequality do not fund their governments through soak-the-rich, steeply progressive taxes.” They put up this graphic, based on OECD data, to make this point:

You can quickly see that the concept of “progressivity” is doing all the work here, and I believe the way they use that word is problematic. What does it mean for Sweden to be one of the least progressive tax states, and the United States the most progressive?

Let’s graph out two ways of soaking the rich. Here’s Rich Uncle Pennybags in America, and Rik Farbror Påse av Mynt in Sweden, as well as their respective tax bureaus:

When average people usually talk about soaking the rich, they are talking about the marginal tax rates the highest income earners pay. But as we can see, in Sweden the rich pay a much higher marginal tax rate. As Matt Bruenig at Demos notes, Sweden definitely taxes its rich much more (he also notes that what they do with those taxes is different than what Vox argues).

At this point many people would argue that our taxes are more progressive because the middle class in the United States is taxed less than the middle class in Sweden. But that is not what Jo Martin and Hertel-Fernandez are arguing.

They are instead looking at the right side of the above graphic. They are measuring how much of tax revenue comes from the top decile (or, alternatively, the concentration coefficient of tax revenue), and calling that the progressivity of taxation (“how much more (or less) of the tax burden falls on the wealthiest households”). The fact that the United States gets so much more of its tax revenue from the rich when compared to Sweden means we have a much more progressive tax policy, one of the most progressive in the world. Congratulations?

The problem is, of course, that we get so much of our tax revenue from the rich because we have one of the highest rates of inequality among peer nations. How unequal a country is will be just as much a driver of the progressivity of taxation as its actual tax policies. To see how absurd this is, consider that even a flat tax on a very unequal income distribution will register as “progressive,” since more revenue will come from the top of the income distribution, just because that’s where all the money is. Yet how would that be progressive taxation?
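Here’s a minimal sketch of that absurdity with a made-up income distribution: a purely flat tax, with zero progressivity in rates, still collects nearly half its revenue from the top decile.

```python
# Toy sketch: a flat 30% tax on an invented, very unequal income
# distribution. The rate structure has zero progressivity, yet the top
# decile's share of revenue simply mirrors its share of income.

# Ten deciles' shares of total market income (invented, US-flavored):
income_shares = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.14, 0.47]
FLAT_RATE = 0.30

revenue = [share * FLAT_RATE for share in income_shares]
top_decile_share = revenue[-1] / sum(revenue)

print(f"top decile pays {top_decile_share:.0%} of all revenue")
# Prints 47%, identical to the top decile's income share. By the
# "share of taxes paid by the rich" yardstick, this flat tax looks
# highly progressive, but only because that's where the money is.
```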

We can confirm this. Let’s take the OECD data their metric of tax progressivity likely comes from and plot it against the market income distribution. This is the share of taxes that come from the top decile versus how much market income the top decile takes home:

As you can see, they are related. (The same goes if you use Gini coefficients.)

Beyond the obvious one, there’s a much deeper and more important relationship here. As Saez, Piketty, and Stantcheva find, the fall in top tax rates over the past 30 years is a major driver of the explosion of income inequality during that same period. Among other channels, lower marginal tax rates give high-end management a greater incentive to bargain for higher wages, and give corporate structures an incentive to pay them out. This is an important element of the creation of our recent inequality, and it shouldn’t get lost amid odd definitions of the word “progressive,” a word that always seems to create confusion.


I have a new column at The Score: Why Prisons Thrive Even When Budgets Shrink. Even as the era of big government was declared over, the incarceration rate quintupled in just 20 years, after a century of stability. Three actors logically set the rate of incarceration: here’s how they drove this radical transformation of the state.


(Wonkish, as they say.)

I wrote a piece in the aftermath of the Michael Brown shooting and the subsequent protests in Ferguson noting that the police violence should be understood not as a federalized, militarized affair but as locally driven, from the bottom up. Others made similar points, including Jonathan Chait (“Why the Worst Governments in America Are Local Governments”) and Franklin Foer (“The Greatest Threat to Our Liberty Is Local Governments Run Amok”). Both are smart pieces.

The Foer piece came in for a backlash on a technical point that I want to dig into, in part because I think it is illuminating and actually helps prove his point. Foer argued that “If there’s a signature policy of this age of unimpeded state and local government, it’s civil-asset forfeiture.” Civil-asset forfeiture is where prosecutors press charges against property itself for being illicit, a legal tool that is prone to abuse. (I’m going to assume you know the basics. This Sarah Stillman piece is fantastic if you don’t, or even if you do.)

Two libertarian critics jumped at that line. Jonathan Blanks of the Cato Institute wrote “the rise of civil asset forfeiture is a direct result of federal involvement in local policing. In what are known as ‘equitable sharing’ agreements, federal law enforcement split forfeiture proceeds with state and local law authorities.”

Equitable sharing is a system where local prosecutors can choose to send their cases to the federal level and, if successful, up to 80 percent of the forfeited funds go back to local law enforcement. So even in states whose laws let law enforcement keep less than 80 percent of funds to try to prevent corruption (by handing the money to, say, roads or schools), “federal equitable sharing rules mandate those proceeds go directly to the law enforcement agencies, circumventing state laws to prevent ‘policing for profit.’”

Lucy Steigerwald at Vice addresses all three posts, and makes a similar point about Foer: “Foer mentions the importance of civil asset forfeiture while skirting around the fact that forfeiture laws incentivize making drug cases into federal ones, so as to get around states with higher burdens of proof for taking property…Include a DEA agent in your drug bust—making it a federal case—and suddenly you get up to 80 percent of the profits from the seized cash or goods. In short, it’s a hell of a lot easier for local police to steal your shit thanks to federal law.”

Equitable sharing, like all law in this realm, needs to be gutted yesterday, and I’m sure there’s major agreement on across-the-board reforms. But I think there are three serious problems with viewing federal equitable sharing as the main driver of state and local forfeitures.

Legibility, Abuse, Innovation

The first is that we are talking about equitable sharing in part because it’s the only part of the law we are capable of measuring. There’s a reason virtually every story about civil asset forfeiture highlights equitable sharing [1]: it’s one of the few places where there are good statistics on how civil asset forfeiture is carried out.

As the Institute for Justice found when it tried to create a summary of the extent of the use of civil asset forfeiture, only 29 states have a requirement to record the use of civil asset forfeiture at all, and most are under no obligation to share that information, much less make it accessible. It took two years of FOIA requests, and even then 8 of those 29 states didn’t bother responding and two provided unusable data. The data that is available suffers from double-counting and other problems. As the Institute concluded, “Thus, in most states, we know very little about the use of asset forfeiture” at the county and state level.

We do know about it at the federal level, however. You can look up the annual reports of the Department of Justice’s Assets Forfeiture Fund (AFF) and the Treasury Department’s Treasury Forfeiture Fund (TFF). There you can see the expansion of the program over time.

You simply can’t do this in any way at the county or state level. You can’t find statistics on whether equitable sharing makes up a majority of forfeiture cases (though, importantly, equitable sharing was a minority of funds in the few states the Institute for Justice was able to measure, and local forfeitures were growing rapidly), or on the relationship between the two. It’s impossible to analyze the number of forfeiture cases (as opposed to the amount seized), which is what you’d want in order to measure increased aggressiveness in its use on small cases.

This goes to Foer’s point that federal abuses at least receive some daylight, compared to the black boxes of county prosecutors’ offices. That does, in turn, point the flashlight toward the feds and give the overall procedure a federal focus. But this is a function of how well the locals have fought off accountability.

The second point is that the states already have laws that are more aggressive than the feds’. A simple graph will suffice (source). The feds return 80 percent of forfeited assets to law enforcement. What do the states return?

Only 15 states have laws that are below the feds’ 80 percent return threshold. Far, far more states already have a more expansive “policing for profit” regime at the state level than what is available at the federal level. It makes sense that equitable sharing changes the incentives for those 15 states [2], of course, and the logic extends to the criteria necessary to make a seizure. But the states, driven no doubt by police, prosecutors, and tough-on-crime lawmakers, have written very aggressive laws of their own. They don’t need the feds to police for profit; if anything, the feds would get in the way.

The third is that the innovative expansion of civil asset forfeiture is driven at the local level just as much as at the federal level, if only because equitable sharing can go into effect only when a federal crime is involved. So the aggressive forfeiture of the cars of drunk drivers, or of those who hire sex workers (even if it is your wife’s car), is a local innovation, because there’s no federal law to advance it.

There’s a lot of overlap for reform across the political spectrum here, but seeing the states as merely the pawns of the federal government when it comes to forfeiture abuse is problematic. Ironically, we see it that way precisely because we can’t see what the states are doing, but the hints we do have point to awful abuses, driven by the profit motive from the bottom up.

[1]  To take two prominent, excellent recent examples. Stillman at the New Yorker: “through a program called Equitable Sharing…At the Justice Department, proceeds from forfeiture soared from twenty-seven million dollars in 1985 to five hundred and fifty-six million in 1993.”

And Michael Sallah, Robert O’Harrow Jr., and Steven Rich of the Washington Post: “There have been 61,998 cash seizures made on highways and elsewhere since 9/11 without search warrants or indictments through the Equitable Sharing Program, totaling more than $2.5 billion.”

If either had wanted to get these numbers at the state and local level, it would have been impossible.

[2] I understand why one would want to put an empirical point on it, and the law needs to be changed no matter what, but the core empirical work relating payouts to equitable sharing isn’t as strong as you’d imagine. Most of the critical results aren’t significant at the 5 percent level, and even then you are talking about a 25 percent increase in equitable sharing alone (as opposed to the overall amount forfeited by locals, which we can’t measure) relative to a 100 percent change in state-law payouts.

Which makes sense: no prosecutor is going to be fired for bringing too much money into the school district, if only because money is fungible on the back end.
