Bond Rating Agency Decided To Try To Rate Some Bonds

What does a AA credit rating mean? The intuitive answer is something like "it means that the rating agency rating the thing thinks it has a probability of default no higher than X% and no lower than Y%," where X and Y are the boundaries with AA- and AA+ respectively, and sure, that's about right. But there's an important loophole there, which is that each rating agency can set X and Y to be whatever they want. I can make AA a ~1% probability of default, and you can make it ~20%, and no one can tell us who's right and who's wrong, because it's just some letters, y'know? There's no a priori relationship between those letters and any particular probability of default.
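To make the arbitrariness concrete, here is a minimal sketch in Python. The cutoffs, agency names, and default probabilities are all invented for illustration; the point is just that two agencies can look at the same estimated default probability, run it through their own privately chosen cutoffs, and attach different letters to it, and neither is "wrong."

```python
# Toy sketch (invented numbers): two hypothetical agencies map the same
# estimated one-year default probability to letters using different,
# privately chosen cutoffs.

def make_rater(cutoffs):
    """cutoffs: list of (max_default_probability, letter), ascending."""
    def rate(p_default):
        for max_p, letter in cutoffs:
            if p_default <= max_p:
                return letter
        return "C"
    return rate

# Agency One treats AA as roughly a 1% chance of default or better...
agency_one = make_rater([(0.001, "AAA"), (0.01, "AA"), (0.05, "A"), (0.20, "BBB")])
# ...while Agency Two is happy to call a ~20% chance of default AA.
agency_two = make_rater([(0.05, "AAA"), (0.20, "AA"), (0.40, "A"), (0.60, "BBB")])

same_bond = 0.15  # one bond, one estimated default probability
print(agency_one(same_bond))  # BBB
print(agency_two(same_bond))  # AA
```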

That seems sort of odd, so Congress in the Dodd-Frank Act directed the SEC to look into standardizing the relationship, and the SEC looked into it, and in 2012 they came back to Congress and said no dice. Because basically everyone - ratings agencies but also issuers of and investors in bonds - preferred the current non-standardized system where ratings agencies just rate bonds however they want.1

Not really however they want, though, since issuers hire ratings agencies to rate bonds, and if you rate everything CCC you won't get asked to rate anything, and if you rate everything AAA you also won't get asked to rate anything, because it turns out that a AAA rating from Joe's Optimistic Ratings Shack doesn't actually help sell bonds. The trick is to be roughly in line with peers, but with enough apparatus and fiddling and occasional divergence to make it look like you're engaged in an exercise with some intellectual integrity rather than just copying off someone else's test.

This becomes awkward when, as occasionally happens, the intellectual-integrity fiddling takes you too far away from everyone else, and then you lose business, so you have to fiddle your way back. And this awkwardness has a directionality, which is: if you're more lenient than your competition, and adjust to be in-line, everyone's more or less fine with that, but if you're stricter than your competition, and adjust to be in-line, everyone's all "you're lowering standards to win business because you're corrupt!," which is just a category mistake, that's not a corruption of the ratings agency business, that is the ratings agency business, there's not another part of it that's just pure intellectual truth-seeking, though some people in the business wish there was. If you build a fancy system to rate bonds and it tells you that all the bonds are junk then you are not a ratings agency, you are just a crank. To be a ratings agency you need to get paid by someone to rate bonds.

The directionality matters though. In the run-up to the financial crisis Moody's tended to rate RMBS CDOs more leniently than S&P, which was fine, since everyone can rate bonds however they want. But then S&P decided to get more lenient to match Moody's, which wasn't fine, because corruption or whatever, and now the government is suing them. They're not suing Moody's, nor would they sue Joe's Optimistic Ratings Shack. Any level of ratings is fine; the changes are the problem.

Anyway, there is a Times article today that is all "S&P is lowering standards to win business because it's corrupt," because S&P a while back got out of line on CMBS ratings and so didn't rate any deals, and then last September it adjusted its system to get back in line and now rates deals. It comes with this slightly hard-to-read chart showing that (1) Moody's is the strictly strictest2 rater of CMBS bonds and also has the highest market share and (2) Morningstar is the laxest rater of CMBS bonds and has the second-lowest market share.

That is good news! If you're all Strict Ratings Are Good. Strictness and market share seem to have some rough positive relationship; the agencies with the highest standards rate the most deals. But the correlation isn't perfect, and the bad news is that S&P is now the second-laxest rater and has the second-highest market share, whereas two years ago it was the second-strictest rater (behind Moody's!) and had the lowest market share.3

Matt Yglesias has this to say about the story:

Something interesting about this is that there's a fair amount of sophisticated (or "sophisticated") argumentation out there to the effect of how in an idealized marketplace this kind of certification-not-regulation should work just great. But if you look at a situation like kosher certification where you're not going to have the state step into a religious matter so you have to rely on market solutions, what you see are constant scandals over hot dogs and passover and everything in between.

That analogy strikes me as a fertile one, in that, in both kosher certification and credit ratings:

  • the sausage-maker wants the loosest possible standards consistent with market credibility,
  • some sausage-buyers have a piety-based demand for strict standards that are consistent with their own view of what is correct, but
  • lots of buyers, deep down, want the loosest possible standards consistent with looking themselves in the mirror.

I assert that the market for Rabbinically Certified Kosher Bacon would be robust. In any case, the market for AAA Rated Garbage was large, even among people who knew what was in it, and one suspects it will always be large. (If you're doing your own credit work,4 you should always want the thing you're buying to have the highest possible rating, which makes it easier to sell to others, improves its regulatory capital treatment, etc.5)

And those market forces call forth the responses that market forces call forth: the ratings agencies supply the ratings that the market demands. Even better, since the letters are utterly arbitrary, the market forces actually lead to a standardization of ratings, more or less as Congress wanted. It's just that the path to that standardization makes it a little too clear how responsive those ratings are to those market forces.

Banks Find S.&P. More Favorable in Bond Ratings [DealBook]

1. A sample from the SEC's report:

The six NRSROs that submitted comments did not favor standardization. For example, in the opinion of Moody’s, standardization would lead to less diversity of rating opinions and would increase the risk of “system-wide disruption.” Both S&P and Fitch commented that standardization would result in less competition in the ratings industry and might increase reliance on credit ratings. ... DBRS further commented that credit ratings are based, in part, on qualitative factors that would be difficult to standardize. Similarly, Morningstar commented that standardization would prohibit credit rating agencies from developing better rating procedures, eliminate innovation and competition, and increase costs. ...

Among the non-NRSRO commenters, a majority were not in favor of standardization for many of the same reasons cited by the NRSROs. Andrew Davidson & Co. commented that credit ratings are qualitative judgments of rating committees and, therefore, are not amenable to standardization. The American Securitization Forum commented that standardization would, in addition to depriving investors of a diversity of rating opinions, discourage competition and compromise the quality, accuracy, and usefulness of credit ratings in the securitization market. The Mortgage Bankers Association also commented that standardization might lower the quality of credit ratings and, further, that it would not improve investors’ understanding of securitizations.

The Institutional Money Market Funds Association commented that standardization “would negate the need for more than one [credit rating agency]” and that the “absence of a competitive market could then result in a subsequent lowering of standards and potentially market failure.” The Investment Company Institute commented that standardization could lead to less innovation and competition among rating agencies, which could result in fewer rating agencies, “less pressure to ensure the quality of ratings,” and “the commoditization of ratings and the transformation of credit rating agencies into government approved utilities.”

2. That is: in the Times's sample, there are zero cases where Moody's rated any tranche of any deal higher than any other agency rated that tranche.

3. My use of terms like "strictest" is flawed because of course not everyone rated every deal. So for instance S&P rated fewer deals than Moody's in part because it would give worse ratings to those deals so wasn't asked, meaning that it was actually stricter than Moody's on a lot of deals, just not on any of the ones it actually rated. So the text is kind of exaggerated here. Indeed you could build a model where Moody's is the laxest ratings agency on average, but where each deal hires only Moody's plus whichever other agency or agencies will, in that one unusual instance, give some tranches higher ratings. Still.
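For what it's worth, here is a toy simulation of that model in Python, with invented numbers and a deliberately cartoonish hiring rule (the issuer always hires "Moodys" plus, when possible, one agency that happens to view this particular deal more generously). Moodys has the laxest unconditional standards, yet in the observed sample it never out-rates a co-hired agency, so it looks like the strictest rater.

```python
import random

random.seed(0)

# Higher score = more generous view of a deal. "Moodys" has the laxest
# average standards (highest mean); the others are stricter on average.
# All numbers and the hiring rule are invented for illustration.
agency_means = {"Moodys": 0.6, "SandP": 0.4, "Fitch": 0.4, "Morningstar": 0.4}

# Views on the deals each agency was actually hired to rate.
observed = {name: [] for name in agency_means}

for _ in range(10_000):
    # Every agency forms a private view of this deal.
    views = {name: random.gauss(mu, 0.2) for name, mu in agency_means.items()}
    # Cartoon hiring rule: always hire Moodys, plus one other agency, but
    # only an agency whose view of *this* deal is more generous than Moodys'.
    more_generous = [n for n in views if n != "Moodys" and views[n] > views["Moodys"]]
    hired = ["Moodys"] + ([random.choice(more_generous)] if more_generous else [])
    for name in hired:
        observed[name].append(views[name])

for name, mu in agency_means.items():
    avg = sum(observed[name]) / len(observed[name])
    print(f"{name:12s} unconditional mean {mu:.2f}   "
          f"observed mean {avg:.2f}   deals rated {len(observed[name]):5d}")

# By construction, Moodys never views a co-rated deal more generously than
# the other hired agency, so in the observed sample it looks strictest,
# even though its unconditional standards are the laxest.
```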

4. Or of course outsourcing your credit work to anyone but the ratings agencies. If you're outsourcing your credit work to the ratings agencies, just stop doing that!

5. There's a too-neatness to this argument, in that an AAA rating on some crap gives the issuer an opportunity to argue that the pricing should be more AAA-like than you want, but you know what I mean. Holding price constant etc.
