Ed Thorp is by all accounts a pretty bright guy, having invented among other things1 card-counting for blackjack and the Black-Scholes formula for option pricing. Here is a short story2
… about Ed Thorp and his discovery of the formulae and their use for making money (rather than for publication and a Nobel Prize!) … Fischer Black asked Ed out to dinner to ask him how to value American options. By the side of his chair Ed had his briefcase in which there was an algorithm for valuation and optimal exercise but he decided not to share the information with Black since it was not in the interests of Ed’s investors!
The rest is some sort of history; Thorp made lots of money trading options
(and then lost it, as one does)2a; Black figured out how to value options, published the formula, got his name attached to it, and won a Nobel Prize. Nobel Prizes are harder to lose, so there’s that, but otherwise I think a lot of people would take the money.
A while back we talked about a sort of astonishing paper that showed that stocks go up predictably before every FOMC policy announcement, earning 3.89% excess return on average, and I said “One thing you might ask about any economics paper that identifies a colossal and tradeable market inefficiency is: why on earth would you publish this? … I guess it’s safe to assume that the tombstone for ‘the pre-FOMC announcement drift’ will read ‘1994-2011’: if, going forward, you can make a free 3.89% excess return by buying on pre-Fed days, you will, so you can’t.”
Today Dan Primack reports on some academics working on a question not dissimilar to that suggestion, and it turns out I was only about 1/3rd right. The paper is by R. David McLean of Alberta and MIT/Sloan and Jeffrey Pontiff of BC, and it is called “Does Academic Research Destroy Stock Return Predictability?” and, read it, it is rich with things to ponder. They look at 66 academic studies that identify 82 “anomalies,” meaning looooooosely speaking examples of weak-form inefficient markets – things where publicly available historical data can be used to predict future performance. Here is the main conclusion:
We estimate the average anomaly’s post-publication return decays by about 35%. Thus, an in-sample alpha of 5% is expected to decay to 3.25% post-publication. We attribute this effect to both statistical biases3 and to the activity of sophisticated traders who observe the finding. We can reject the hypothesis that post-publication return-predictability does not change, and we can also reject the hypothesis that there is no post-publication alpha.
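The headline arithmetic is simple enough to check on the back of an envelope; here is a minimal sketch (my own toy calculation, not the paper’s code), using the 35% average decay and the bias-vs-dissemination split described in footnote 3:

```python
# Toy arithmetic for the paper's headline numbers (my sketch, not the authors' code)
in_sample_alpha = 0.05   # a 5% alpha found in the original study
total_decay = 0.35       # average post-publication decay of the anomaly's return

post_pub_alpha = in_sample_alpha * (1 - total_decay)
print(f"post-publication alpha: {post_pub_alpha:.2%}")  # 3.25%

# The paper splits the 35% decay (see footnote 3): at most ~10 points from
# statistical biases, leaving ~25 points attributable to copycat traders
statistical_bias = 0.10
dissemination = total_decay - statistical_bias
print(f"bias: {statistical_bias:.0%}, dissemination: {dissemination:.0%}")
```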
There are a bunch of confounding things here; for one thing, the authors do not discriminate “things that look inefficient” – anomalies that their authors identify as examples of predictable risk-adjusted outperformance – from “things that look efficient” – predictable performance relationships that have “strong theoretical motivation” in real risk preferences and such and thus shouldn’t go away even if everybody knows about them.4 For another, arbitrage is hard etc. etc., so in fact “there are significantly larger post-publication declines for anomaly portfolios that are subject to higher arbitrage costs” like transaction costs or idiosyncratic risks. In some ways this paper just smooshes together mathematically two separate things you should have known intuitively: when academics identify a systematic effect in the markets, there’s either a good reason for it (which the academics may or may not recognize), in which case the academic paper shouldn’t change it,5 or there’s not, in which case everyone will read the paper and arb the arbitrage out of existence.
Everyone, that is, who only found out about it by reading the paper. Here is my favorite part:
We find that after a specific anomaly is published, the correlation of the anomaly portfolio with other yet-to-be-published anomaly portfolios decreases, while the correlation of the anomaly with other already-published anomaly portfolios increases. One interpretation of this finding is that anomalies are the result of mispricing, and mispricing has a common source; this is why in-sample anomaly portfolios are highly correlated. Publication causes more arbitrageurs to trade on the anomaly, which causes anomaly portfolios to become more correlated with already-published anomaly portfolios that are also being pursued by arbitrageurs, and less correlated with yet-to-be-published anomaly portfolios. …
If anomalies reflect mispricing, and if mispricing has common causes, then we might expect in-sample anomaly portfolios to be correlated with other in-sample anomaly portfolios. This might also occur if some investors specialize in trading in anomalies that have not been uncovered by academics. If publication causes arbitrageurs to trade in an anomaly, then it could cause an anomaly portfolio to become more highly correlated with other published anomalies and less correlated with unpublished anomalies.
Why would “mispricing have a common source”? I dunno. I prefer the answer of “some investors specialize in trading anomalies that have not been uncovered by academics.” Everyone worries about hedge funds all doing the same trades and being correlated with each other and this paper is sort of loose evidence of that: there are anomalies in market efficiency, and they are not “publicly known” to the academic literature, but they are correlated with each other, perhaps suggesting that enough people know about them to all be trading them together. With enough people trading on these anomalies, eventually they’re going to come to the attention of academics who are silly enough to publish instead of joining the people profiting from them. And then the copycats who read the academics’ papers pile in, destroying (1/3rd of!) the anomaly, and it’s on to the next unpublished one for the people who are actually making money.
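The correlation mechanism above can be illustrated with a toy simulation (my own sketch, not the authors’ methodology): give every anomaly portfolio a loading on a common “mispricing” factor, then model publication as shrinking that loading (the arbitrageurs trade it away) while adding a shared “arbitrage flow” factor. Published anomalies end up correlated with each other and much less correlated with the still-unpublished ones; the factor loadings here are made-up numbers chosen only to show the direction of the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
mispricing = rng.normal(size=T)  # common source of mispricing
arb_flow = rng.normal(size=T)    # common arbitrage pressure on published anomalies

def anomaly(published):
    # Publication shrinks the mispricing component (it gets arbed away)
    # and adds a shared arbitrage-flow component; loadings are illustrative.
    m = 0.3 if published else 1.0
    a = 1.0 if published else 0.0
    return m * mispricing + a * arb_flow + rng.normal(size=T)

unpub = [anomaly(False) for _ in range(5)]
pub = [anomaly(True) for _ in range(5)]

def avg_corr(xs, ys):
    # average pairwise correlation, skipping self-pairs
    return np.mean([np.corrcoef(x, y)[0, 1] for x in xs for y in ys if x is not y])

print(f"published vs published:     {avg_corr(pub, pub):.2f}")
print(f"published vs unpublished:   {avg_corr(pub, unpub):.2f}")
print(f"unpublished vs unpublished: {avg_corr(unpub, unpub):.2f}")
```

The published portfolios cluster with each other (the shared arbitrage flow) and drift away from the unpublished ones (the shrunken mispricing loading), which is the pattern the paper finds in the data.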
Academics: Academia destroys stock picking strategies [Term Sheet]
R. David McLean & Jeffrey Pontiff: Does Academic Research Destroy Stock Return Predictability? [SSRN]
1. Wikipedia tells me he is “best known as the ‘father of wearable computer’ after inventing the world’s first wearable computer in 1961”; I submit to you that whoever wrote that entry – “BEST KNOWN”! – was wearing the computer he typed it on.
2. I’m quoting Wilmott a lot today; I’ve been reading his weird little book on and off for a while. It’s good.
2a. Or not. A reader points out “Princeton Newport shut down due to the turmoil the RICO charges brought, not due to losses,” which is kind of a funny thing to say? But the charges were dropped. Anyway, yeah, apologies, my main point was: Ed Thorp, pretty cool.
3. By the neat trick of looking at post-study but pre-publication decay of anomalies, they figure that at most 10 percentage points of the 35% decay is due to statistical biases, with ~25 points due to the actual dissemination of the information.
4. “We include all variables that relate to cross-sectional returns, including those with strong theoretical motivation such as Fama and MacBeth’s landmark 1973 study of market beta in the Journal of Political Economy and Amihud’s 2002 study of a liquidity measure in the Journal of Financial Markets. These studies argue that expected cross-sectional returns are a rational response to risk and costs, and thus sophisticated traders that observe the findings of these papers should not be induced to trade.”
5. As they tartly put it,
Cochrane (1999) contends that predictability is the outcome of rational asset pricing. Publication does not affect the information used by rational agents, so return predictability should be unaffected by publication.