Some S&P Analysts Weren't Happy With S&P's Whole "Trying To Rate Deals" Thing


Michael Lewis had this to say yesterday about Greg Smith's criticisms of Goldman Sachs:1

None of this exactly came as news. The news was that a living, breathing Goldman employee had said it. There was also, between the lines, a fresh hope: Goldman had employed an idealist! For a decade!

That was pretty much my reaction to the Department of Justice's curiously underwhelming complaint against S&P for misrating subprime-mortgage-backed securities in the run-up to the financial crisis. Wait: S&P got paid to rate deals, and wanted to rate more deals and get paid more, so it rated deals favorably? TELL ME MORE. But the complaint is enlivened by an occasional cameo from a quant truther whom you would not have expected to exist inside S&P. Like Executive H:

Or Senior Analyst C:

Haha what? You can tell that the DoJ is getting its analytical framework for this case from those quant truthers, but that framework is dumb. The world does not neatly divide between "superior analytics" and "sound empirical research," on one side, and craven fraud on the other. Oh you have sound empirical research on subprime default rates that allowed you to build a model that is consistent with history? That's great! How does your empirical research handle default rates that are much higher than the - limited - historical data would imply?2 You don't believe in reviewing your proposed criteria changes with investors? Why's that? Is it because you're so good at evaluating structured credit that investors have nothing to teach you? Then why do you work at S&P?
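The default-rate point can be made with a toy sketch (all numbers here are invented for illustration; nothing below is S&P's actual criteria or model): size a tranche's credit support off the worst default rate in your historical data, with a healthy cushion, and it looks bulletproof right up until realized defaults land outside the historical range entirely.

```python
# Illustrative only: a "sound empirical" support level calibrated to
# hypothetical historical default rates, stressed by a rate the history
# never contained.

historical_default_rates = [0.02, 0.03, 0.05, 0.04, 0.06]  # made-up vintages

# Rule of thumb: cover the worst historical rate with a 2.5x cushion.
worst_historical = max(historical_default_rates)
credit_support = 2.5 * worst_historical  # 15% subordination below the tranche

loss_severity = 0.5  # assume half of each defaulted loan's balance is lost

def senior_tranche_loss(realized_default_rate):
    """Loss hitting the rated tranche after subordination is exhausted."""
    pool_loss = realized_default_rate * loss_severity
    return max(0.0, pool_loss - credit_support)

print(senior_tranche_loss(0.06))  # worst rate in the history: support holds
print(senior_tranche_loss(0.40))  # a rate outside the history: support fails
```

The model is internally consistent and "consistent with history"; it just has no opinion about outcomes the history never produced, which was exactly the problem.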

This is a description of S&P's new approach written by S&P managers and outside consultants, and it reads like a description written by managers and consultants, but it's also right in a way the quant truthers aren't:

If there was one right answer that you could identify in advance, then S&P would probably do the thing that generates the right answer. But so would everyone else and there'd be no need for S&P. S&P's business was to make plausible, defensible ratings models that also allowed it to rate deals. The quants would presumably have been happy to mutter to themselves about the purity of their models without actually rating deals, but they'd be sad when their paychecks stopped coming.

I don't want to defend the S&P executives and consultants too much; there are answers that are better and answers that are worse and there's plenty of evidence that the executives were dumb or excessively commercially motivated or both.3 But that's pretty much the life of a quant, right? Like:

  • You build a beautiful model.
  • Your manager does dumb things with the model out of stupidity, malice, or the fact that he's commercial.
  • Your aesthetic sensibilities are wounded, so you sulk.
  • There's a decent chance you sulk via an awkwardly worded email.4
  • Sometimes you sulk your way all the way to a courthouse.

This is the story of Eric Ben-Artzi, among others: quants who couldn't stand the lack of intellectual rigor and truth-seeking that they found at ... at, essentially, sales organizations like banks and ratings agencies. If you are a quantitative finance researcher your choices in life are:

  • Seek truth, unsullied by commercial considerations, in the academy;
  • Seek truth, mostly, while trying to make money, at Bridgewater or whatever; or
  • Seek money at a bank, rating agency, what have you.

If you choose the last option you are sort of already irretrievably compromised. The banks, ratings agencies, etc. hire you for your truth-seeking ability, and they really want you to seek truth, within reason. But there's a reason why banks and ratings agencies are rarely run by working quants.5 The truth-seeking is just one means to an end; the end is sales, and the beauty and rigor of your models are always subordinate to that. Your job really is to exist in an adversarial relationship to soulless model-less salespeople, and to advocate for your soul and your model. That adversarial relationship helps preserve S&P's analytical credibility, such as it is. But that credibility comes mostly from having truth-seeking quants, and letting them vent occasionally, not from letting them actually win. Someone's gotta actually sell the ratings.

Ten pages of the DoJ's complaint - pages 28 to 38 - are to the effect of "S&P kept telling people that its ratings weren't influenced by commercial factors," and that's true, but it's such obviously meaningless boilerplate that if you believed it, you have no business investing in anything. Here is a quote from "an August 23, 2007 publication titled 'The Fundamentals of Structured Finance Ratings' that S&P posted on its website":

That's called "protesting too much."

Why is this complaint so lame? DoJ seems to have come to it with the mindset of disgruntled quants, not bloodthirsty plaintiffs' lawyers. This is a ratings agency whose employees IMed each other saying "we rate every deal ... it could be structured by cows and we would rate it." But that exchange doesn't come up here until page 80, after dozens of other analyst complaints, by which point it just seems like another bit of petty griping.6 Here, on the other hand, is the DoJ's smoking gun:

Aha! They had a newer data set but didn't update their model in a timely fashion to reflect it, and in hindsight the new data turned out to better predict defaults! CAUGHT RED-HANDED.7 Seriously, the world is complicated, and S&P obviously missed some things that better credit analysts might have caught, but just picking some individual things that they could have done differently doesn't even seem related to proving fraud. Go ahead, DoJ: tell me what historical data set will best predict future results. Explain how it was obvious that, say, the 1972-vintage loans in S&P's data set were relevant to predicting default in 2002. Everyone should have known that, right?

Speaking of, um, everyone, here is an email that the DoJ finds incriminating (paragraph 144):

We just lost a huge Mizuho RMBS deal to Moody's due to a huge difference in the required credit support level. What we found from the arranger was that our support level was at least 10% higher than Moody's. Losing one or even several deals due to criteria issues, but this is so significant that it could have an impact on future deals.

There's a lot of this; S&P seems to have done most of its shady things not to be more aggressive than Moody's, but just to get level with Moody's. Yesterday I was unimpressed by S&P's defense that Moody's gave many securities the same ratings as S&P did, but I'm coming around on it. According to the DoJ complaint itself, S&P was just joining the race to the bottom, not starting it. S&P's alleged billion-dollar-plus fraud consisted basically of moving its ratings so they were more in line with Moody's. Moody's, you'll note, has not been sued.

U.S. v. McGraw-Hill Companies, Inc. [Reuters, etc.]

1. Also he said this:

Twenty five years ago I quit a job on Wall Street to write a book about Wall Street. Since then, every year or so, UPS has delivered to me a book more or less like my own ...

That is how you start a book review.

2. "It handles them better than does using lower-than-historical default rates" is one answer, but it's arbitrary. How do you know that ex ante? The point is that the past did not predict the future. Quants who believed too strongly in the past predicting the future were analytically wrong, though they turned out to be more accurate than (totally non-analytical) business goons who just said "the future will be great, shut up."

3. Like this:

4. Incidentally this is the response that Executive H's email elicited:

LDL, chief. LDL.

5. And why hedge funds often are.

6. To be fair, the complaint does contain at least one masterpiece of an internal email, buried on page 99:

7. Another fun one is that S&P rated CDOs of RMBS based on S&P's own ratings of the underlying RMBS, and didn't take into account the likelihood that S&P would later downgrade those underlying RMBS unless that likelihood was reflected in a public CreditWatch Negative. (See paras. 93-94, 232.) This strikes me as a silly thing to complain about: S&P couldn't rely on S&P ratings? That said, by June 2007 S&P had actually authorized negative ratings actions on RMBS that it didn't take into account in rating CDOs containing those deals (paras. 242-244), which seems pretty shady.
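A toy sketch of that footnote-7 complaint (ratings, probabilities, and the downgrade assumptions are all invented for illustration, not taken from the complaint or from S&P's methodology): valuing CDO collateral at the RMBS's current rating looks fine until you haircut each bond for the chance that the rating agency itself later downgrades it.

```python
# Illustrative only: average default probability of a tiny collateral pool,
# with and without an assumed downgrade-risk adjustment.

# Hypothetical mapping from rating to assumed default probability.
default_prob = {"AAA": 0.001, "BBB": 0.02, "B": 0.15}

# Collateral: current ratings plus a made-up chance of being downgraded
# to the lower rating during the CDO's life.
collateral = [
    {"rating": "AAA", "downgrade_to": "BBB", "p_downgrade": 0.20},
    {"rating": "BBB", "downgrade_to": "B",   "p_downgrade": 0.30},
]

def pool_default_prob(use_downgrade_risk):
    """Average default probability across the pool."""
    total = 0.0
    for bond in collateral:
        if use_downgrade_risk:
            # Blend current-rating risk with post-downgrade risk.
            p = ((1 - bond["p_downgrade"]) * default_prob[bond["rating"]]
                 + bond["p_downgrade"] * default_prob[bond["downgrade_to"]])
        else:
            # Take the current rating at face value.
            p = default_prob[bond["rating"]]
        total += p
    return total / len(collateral)

print(pool_default_prob(False))  # at face value of current ratings
print(pool_default_prob(True))   # meaningfully worse once downgrade risk counts
```

Same pool, same ratings; the only difference is whether the model admits that today's rating might not be tomorrow's, which is roughly what the complaint says the CDO criteria ignored.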