A thing I sometimes enjoy is reading research papers examining questions like:
- if you are a bank, and you are likely to be bailed out, do you take more risks than a bank all on its lonesome, and
- once you’ve been bailed out, what then?
We’ve looked at a BIS paper on international banks, which on certain assumptions found that (1) banks that were in fact bailed out took more risks pre-bailout than banks that weren’t (unsurprising) and (2) after the bailouts they pretty much stayed riskier (maybe surprising). And then there was a Fed paper about TARP banks, which on certain different assumptions found sort of the same results.
Anyway, in the spirit of completism and also charts, here is a Bank of Canada paper:
“Supported” banks seem to have been a wee bit less risky than regular banks before the crisis, and quite a bit less risky afterwards, somewhat contradicting those other findings. Here is a stab at an explanation:
[D]uring normal times, non-supported banks compete more fiercely with supported banks since the latter benefit from lower refinancing costs, which pushes non-supported banks towards higher risk-taking. … In line with the charter value argument, we conjecture that supported banks may enjoy a funding advantage during the crisis and therefore exhibit lower risk compared to banks that are not supported. At the same time we cannot rule out other explanations, as our findings would also be consistent with the idea that supported banks during crisis times are subject to greater scrutiny by supervisors that is effective in reducing risk taking ….
All fair enough: if you don’t have implicit government backing on your funding, you need to gamble more with that funding to get the same returns, and nobody is looking at you all that closely to see that you don’t.
Except! This paper is interesting because it separates “supported” and “non-supported” banks not based on ex post measures like “did you get a bailout or not” but rather on an interesting ex ante measure: the fact that Canadian rating agency DBRS changed its rating scheme in October 2006 to indicate which banks were more likely to get government support. So the researchers were able to pick banks that were likely to be bailed out in 2006, rather than banks that were actually bailed out in 2007-2008, to measure what effect bailout expectations had on banks. Which is interesting.
The problem is that rating agencies are dumb. Here is a true (partial) list of banks that were likely to get government support according to DBRS:
- Commonwealth Bank of Australia
- Bank of Nova Scotia
- Royal Bank of Canada
- HBOS plc
Not, you might say, hugely objectionable. Now here is a true (partial) list of banks that were not likely to get government support according to DBRS:
- Laurentian Bank
- Bank of Hawaii
- Bank of America
- Bank of New York
- JPMorgan Chase
This list is more problematic, though I’ll spot them Laurentian Bank.* The authors recognize this, though they defend it by saying “DBRS’s [government support] ratings were in full agreement with Fitch’s Support Ratings, indicating that as of 2006, none of the US banks was likely to be bailed out.” That is even more problematic: apparently nobody at a rating agency could figure out that JPMorgan might be systemically important.**
I don’t really know what this tells you other than: why would you trust a rating agency to tell you who will get a bailout? DBRS decided that, for instance, the two banks in charge of the triparty repo system were not systemically important and “there is no expectation of any form of timely external support” for them. That was pretty wrong, though I guess if you asked Jamie Dimon (if you could find him) he’d tell you that JPMorgan never really got any external support. Still: Citi!
In theory, rating agencies are in the business of (1) looking at public information about an issuer, (2) also looking at private information provided to them by management (or, sometimes, not doing that), and (3) analyzing that information to draw some conclusions about the issuer’s creditworthiness. Sometimes the combination of specialization, credit analysis expertise and access to private information gives the agencies an advantage in drawing those conclusions. Sometimes it does not.
But as a theory that seems like an okay theory. It’s harder to see the agencies’ advantage in predicting political decisions in developed economies, whether that’s S&P’s guess at the US’s creditworthiness or DBRS’s guess at who will be bailed out when and by whom. And the more an industry or an economy is tied to political decisions, the less likely it is that ratings will tell you anything you can’t figure out for yourself.
The Ex-Ante Versus Ex-Post Effect of Public Guarantees [Bank of Canada, pdf]
* Is that wrong? Are they a powerhouse north of the border?
** To be fair, they also ran their analysis excluding the US banks and got similar results. But the sample is sort of uninteresting then: it's all Canadian banks, which pretty much all didn't need bailouts, plus a tiny grab bag of idiosyncratic Belgian and Irish banks, so it's hard to know how to feel.