One of the nice things about last year’s Fed bank stress tests was that, when they were released, everyone was like “OMG Citi failed!!” and then we all calmed down and realized that all that meant was that Citi’s capital return plans had failed, so it couldn’t launch a big share buyback, but it wasn’t going to be smashed into dust as a warning to its compatriots. That turned out to be cold comfort for Vikram Pandit but was soothing for the rest of us. This year, in part to avoid the Vikram thing, the rules have changed: today the straight-up stress test results were released, while the Fed will approve or reject capital return proposals next week, and there’ll be a lot of weird disclosure gamesmanship in the interim. Early signs point to Citi being out of the doghouse, and Goldman possibly being in it.
Also Ally Bank failed, sorry! Legit failed, not failed pro forma for capital return. So, smashed into dust.
Here is a chart you may or may not find amusing:
This chart is intended to answer the question: how many 1-in-100 terrible days would these banks need to have in order to get the Fed’s estimated trading losses?1
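The chart’s question is just division: the Fed’s estimated stress-scenario trading loss over a bank’s 1-in-100-day (99% one-day VaR) loss. A back-of-envelope version, with made-up numbers for both inputs, looks like this:

```python
# Back-of-envelope version of the chart's question, with hypothetical
# numbers: how many 1-in-100 bad days, stacked end to end, would add up
# to the Fed's estimated stress-scenario trading loss?
var_99_daily = 0.2               # $bn: hypothetical 99% one-day trading VaR
fed_stress_trading_loss = 12.0   # $bn: hypothetical Fed-estimated trading loss

days_needed = fed_stress_trading_loss / var_99_daily
print(f"{days_needed:.0f} consecutive 1-in-100 days")
```

The bigger that ratio, the further the Fed’s scenario is from anything the bank’s own daily risk model contemplates.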
Banks are opaque, or so I hear, and so the only way many people can stand to be around them is if they can have some sort of number to serve as a flashlight into all that opacity. One of the big numbers is Basel III risk-weighted assets, which are intended to, as the name says, measure how much stuff a bank holds, weighted by the riskiness of the stuff. RWAs are related to another popular number – they are in many cases calculated based on value-at-risk models – and they determine capital requirements: the more risky assets you have, the more long-term loss-absorbing funding you need to have, to insulate you from loss if the risks come true.
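The risk-weighting mechanics can be sketched in a few lines. The asset classes, weights, holdings, and the 8% minimum below are illustrative stand-ins, not any actual bank’s figures or its actual (model-based) weights:

```python
# Simplified, standardized-style risk weighting: weight each asset class
# by its assumed riskiness, sum to get RWAs, then hold capital against
# that total. All numbers are hypothetical.
risk_weights = {"treasuries": 0.0, "mortgages": 0.5, "corporate_loans": 1.0}
holdings = {"treasuries": 100.0, "mortgages": 200.0, "corporate_loans": 300.0}  # $bn

rwa = sum(holdings[asset] * risk_weights[asset] for asset in holdings)
capital_required = 0.08 * rwa  # Basel-style 8% minimum, against RWAs not total assets

print(f"Total assets:          ${sum(holdings.values()):.0f}bn")
print(f"Risk-weighted assets:  ${rwa:.0f}bn")
print(f"Minimum capital:       ${capital_required:.0f}bn")
```

Note that the capital requirement keys off the $400bn of RWAs, not the $600bn of total assets, which is why the weights, and the models behind them, matter so much.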
All numbers deceive, though, and many people have noted that RWAs vary wildly across banks, with some banks reporting much lower RWAs relative to total accounting assets than others, without any obvious decrease in risk. And since RWAs are based in large part on internal models, there is some suggestion that banks optimize their models to reduce RWAs. So risk-weighted assets don’t have any obvious correlation with (1) risk or (2) assets.
Which seems bad, as a first cut, but maybe isn’t: not having an obvious correlation doesn’t mean there’s no correlation. RWA critics are just looking at public information, and public information is, as noted, opaque. Maybe all the banks really are consistently and conscientiously measuring the riskiness of their assets, and just happen to hold different assets. Which is why it’s nice that the confab of bank regulators at the Bank for International Settlements went and issued a report on consistency of risk-weighted assets for market risk.1 The authors of this report, between them, regulate pretty much every big bank in the world. So they could go look at each bank’s trading book, see what assets they have, see how they risk weight them, and compare that to how other banks weight them, done.2
One kind of immediate thing to notice is that that didn’t happen. They’re as confused as you are:
A value-at-risk model basically works like this. You have some stuff, which is worth X today. Tomorrow it will be worth X + Y, where Y ranges from more or less negative infinity to positive infinity. Y is a function of a bunch of correlated random variables, rates and credit and stock prices and general whatnot. You look at a distribution of moves in those variables and take (usually) a 2-standard deviation daily move; if 95% of the time rates move by -10 to +10 basis points, your VaR model will assume a -10bp or +10bp move, whichever is bad for you. You take the 95%-worst-case, taking into account correlation etc., and tot up how much you’d lose in that case. Then you write that number down and feel a bit better, since you’ve sort of implicitly replaced “we have $X today and will have some number between negative and positive infinity tomorrow” with “we have $X today and will have some number between ($X – VaR) and positive infinity tomorrow,” though of course the first statement is true but unhelpful and the second is not true and also unhelpful.
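A minimal historical-simulation version of that procedure, with one risk factor, a fake P&L history, and no correlations to worry about, might look like this (all numbers invented):

```python
# Minimal historical-simulation 95% VaR sketch. One risk factor, made-up
# daily P&L history, no correlation structure -- real models are messier.
import random

random.seed(0)
portfolio_value = 100.0  # $X today, hypothetical

# Pretend history: 250 daily P&L observations for the portfolio ($)
daily_pnl = [random.gauss(0.0, 1.0) for _ in range(250)]

# Convert to losses (positive = bad) and take the 95th-percentile loss:
# the loss exceeded on only the worst 5% of historical days.
losses = sorted(-p for p in daily_pnl)
var_95 = losses[int(0.95 * len(losses))]

print(f"95% one-day VaR: ${var_95:.2f}")
# The claim this licenses: "we have $100.00 today and, 95% of the time,
# will have at least $100.00 minus VaR tomorrow." As the text says: not
# true, and also unhelpful.
```

Everything interesting is hidden in `daily_pnl`: where that history comes from is the subject of the next two paragraphs.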
But that aside! You get your VaR from a distribution of your variables, but the obvious question is what distribution. A good answer would be like “the distribution of those variables over the next three months,” say, for quarterly reporting, but of course that is only a good answer because it begs the question; if you knew what would happen over the next three months you would, one assumes, always end those three months with more than $X and this VaR thing would be moot or moot-ish.1
So instead you look at things that you think will allow you to predict that future distribution as accurately as possible, which is epistemically troubling since VaR is a measure of how inaccurate your predictions might turn out to be. Anyway! You pick a distribution of variables based on the sort of stuff that you always use to estimate future distributions in your future-distribution-estimating business, which could mean distributions implied by market prices (e.g. option implied vol) but which seems to mostly mean historical distributions. You look at the last N days of data and assume that the world will be similarly distributed in the following M days, because really what else is there to do.
Picking the number of days to use is hard because, one, this is in some strict sense a nonsense endeavor, but also, two, the world changes over time, so looking back one year is, for instance, rather different from looking back four years. Here is how different:
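The window effect can be sketched with made-up data: compute the same historical VaR over a calm last year versus over four years that include an earlier stressed period, and you get two quite different risk numbers for the same portfolio.

```python
# Illustration of the lookback-window choice, with invented data:
# a calm recent year preceded by three stressed, high-volatility years.
import random

random.seed(1)
calm = [random.gauss(0.0, 0.5) for _ in range(250)]      # last year: low vol
stressed = [random.gauss(0.0, 2.0) for _ in range(750)]  # 3 prior years: high vol

def var_95(pnl):
    """95th-percentile historical loss (positive = loss)."""
    losses = sorted(-p for p in pnl)
    return losses[int(0.95 * len(losses))]

one_year = var_95(calm)               # 1-year lookback: only the calm data
four_year = var_95(stressed + calm)   # 4-year lookback: stress included

print(f"1-year lookback VaR: {one_year:.2f}")
print(f"4-year lookback VaR: {four_year:.2f}")
# Same portfolio, same model, very different number -- purely from
# the choice of how far back to look.
```

Neither number is “right”; the choice is a judgment about whether the stressed years are representative of tomorrow, which is exactly the nonsense-endeavor problem again.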