The Quant Quagmire: Goldman’s Quant Memo Points To Bigger Problems For The Quants


Can the quants fix their problems? After last week’s quant bloodbath, this is a question that hedge fund investors and the quant fund managers have been asking. But one popular solution—reducing the risks from too many quant funds following the same factors by bringing in new factors—may create even more problems for at least some of the funds.
In one sense, the solution is obvious: if the problem is too many ducks in a row, the solution is to have the ducks take different paths. But do all paths lead to profit? Can the ducks even find new paths?
But let’s back up for a minute and take a look at how we got here. More after the jump.

The big quant firms’ favorite theory to explain last week’s market meltdown is that trouble in the credit markets forced a large multi-strategy hedge fund to liquidate its equity positions. That liquidation caused the prices of many stocks to diverge from those predicted by the models, triggering large losses at several quant funds that had been operating according to what appear to be very similar models. The theory was first formulated in talk among hedge fund managers trying to figure out what was going on, but got a real boost when it was adopted by Lehman Brothers economist Matthew Rothman. If you’ve been reading DealBreaker regularly, you’ve been familiar with the theory since last week, and we’ve explained it time and time again.
Because the giant quantitative hedge funds—such as Goldman Sachs’ Global Alpha, AQR Capital Management or James Simons’ Renaissance Technologies—use huge amounts of leverage to inflate their trading positions, they often account for more than half of all daily activity on the New York Stock Exchange. Over the last few years, many of these funds have been able to use this leverage to deliver enormous returns to their investors, and they haven’t been shy about crediting their success to the brilliance of their “proprietary” computer models. But when the market started to “misbehave”—which is quant-speak for when prices stubbornly refused to conform to the predictive models the funds use to determine trading strategies—we quickly learned that their much-prized proprietary trading programs were built to behave very similarly.
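The leverage arithmetic here is worth spelling out. A minimal sketch, with illustrative figures (6x leverage, 2% moves) that are ours and not from any fund’s actual books:

```python
# Illustrative sketch of how leverage magnifies returns -- and losses.
# The numbers (6x leverage, +/-2% moves) are hypothetical, not from the article.

def levered_return(asset_return, leverage, borrow_cost=0.0):
    """Return on equity for a position financed at `leverage` times equity.

    Gains on the whole levered position accrue to the equity slice,
    minus the cost of borrowing the other (leverage - 1) slices.
    """
    return leverage * asset_return - (leverage - 1) * borrow_cost

# A small 2% market-neutral edge, levered 6x, looks like a 12% return...
print(round(levered_return(0.02, 6), 4))
# ...and the same 2% move in the wrong direction is a 12% loss.
print(round(levered_return(-0.02, 6), 4))
```

The symmetry is the point: the same multiplier that turned modest model edges into “enormous returns” turned last week’s adverse moves into outsized losses.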
In a publication from Goldman Sachs Asset Management, which more or less adopts the Rothman Theory of last week’s troubles, Goldman takes note of the dangers of too many quant funds following too narrow a strategy. “A key lesson from this episode is that too many quant managers were using the same quantitative factors,” the report states. “This has a couple of implications for us. First, going forward we need to develop better measures to assess the popularity of certain quant factors. Second, to protect our investors, we will need to make more of an effort to make sure that our proprietary factors remain proprietary.”
The “popularity of certain quant factors” is something that has been obvious to some of those who have been watching their stellar performances in recent years, if only because so many of them did so well, and so similarly well, under the same market conditions. But now that the risks of Factor Popularity have become obvious, it’s not clear that the escape from it will be as easy as the quant managers have been claiming.
That’s because there’s a good reason so many funds adopted the same factors into their models: they were working. By giving other factors additional weight, or adding new factors, the fund managers will attempt to maintain their ability to outperform while diverging from other funds. But are there enough successful quantitative strategies to go around? Or will the funds simply end up adopting the same new factors, and wind up all trading together once again?
One way to tell whether the quants have successfully adopted divergent strategies will be whether their performances diverge. If the funds add different and unique factors to their models, and give those factors enough weight to make a difference, their results should separate. If they don’t, it’s a good bet that the new factors are either (a) the same factors everyone else came up with or (b) weighted so lightly they don’t make a difference.
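The divergence test above can be sketched numerically. A minimal illustration, using made-up daily return series and a hypothetical `pairwise_return_correlation` helper (none of this is from any actual fund data):

```python
import numpy as np

def pairwise_return_correlation(returns):
    """Mean pairwise correlation of daily return series.

    returns: 2-D array of shape (days, funds). A value near 1.0 suggests
    the funds are effectively running the same factors; a value near 0
    suggests genuinely divergent models.
    """
    corr = np.corrcoef(returns, rowvar=False)  # funds x funds matrix
    n = corr.shape[0]
    # Average only the off-diagonal entries (each fund correlates 1.0 with itself).
    return corr[~np.eye(n, dtype=bool)].mean()

# Made-up daily returns for three hypothetical funds over ~one trading year.
rng = np.random.default_rng(0)
common = rng.normal(0, 0.01, 250)  # a shared "crowded" factor
crowded = np.column_stack(
    [common + rng.normal(0, 0.002, 250) for _ in range(3)]
)
diverse = rng.normal(0, 0.01, (250, 3))  # three independent strategies

print(round(pairwise_return_correlation(crowded), 2))  # high, near 1
print(round(pairwise_return_correlation(diverse), 2))  # low, near 0
```

If the funds really do bolt on unique, meaningfully weighted factors, their return correlations should drift toward the second case; if the number stays pinned near 1, the “new” factors are cosmetic.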
The divergence prospect is interesting because presumably there is a limited pool of people who can design truly unique and successful models, which means there should be even more bloodletting and more losses (or, at least, sub-par performance) at some quant firms. In short, they shouldn’t all lose money at once again; instead, some should suffer while others do well. The risk of Factor Popularity will be removed, but new risks from new factors—untested except through computer modeling—will be taken on. Factor Diversity may allow the funds to avoid the risks that crushed them last week, but it might bring new risks of its own.
All this reminds us of something that happened on a recent trip we took to the Midwest. We were driving through lower Minnesota, somewhere near Walnut Grove, when a family of quail came across the road. Most of the baby quail followed their mother into the left-hand lane, where things were safer for the simple reason that no car was coming in the other direction. But a few didn’t follow their mother and stayed in our lane. One of those ended up under our left front tire, becoming instant road meat. It seemed mighty dumb to stay in our lane. But then again, those that followed the mother into the next lane might also have been crushed in the rare event that two cars passed each other at that spot on a road through corn and bean country.