Donald Trump is going to be the next President. That outcome shocked Wall Street, if the overnight pre-market move was any indication. The result was equally shocking to political scientists, election forecasters, and pollsters. The polls consistently had Trump trailing Clinton by an average of 2-4 points, yet Trump won the Electoral College handily.
Without getting mixed up in messy arguments over politics, it is worth making a key point here – this election was characterized by a regime change. Not a regime in the sense of a ruling body or individual, but a regime in the sense of the broader environment. There has never been a U.S. president, nor even a candidate, like Trump. He will enter the Oval Office with substantial business experience, but no military or political experience. Maybe that’s good and maybe it’s bad – but that’s not the point. The point is that Trump is different and that makes forecasts and empirical modeling harder.
There is a solution that I will talk about in a moment, but the first point to realize here is that traditional data modeling doesn’t work if the model doesn’t have appropriate data or parameters.
The primary purpose of Big Data is to create better forecasts and performance evaluation. Effective forecasts, whether in finance or any other area, require a good model that represents the world accurately. If our model is too detailed, it becomes impossible to use; if it is wrong or too vague, it is equally useless. To stretch an analogy, a map is simply a model of the real world: a satellite photo is too detailed and a cocktail-napkin sketch too vague, and both are largely useless if I need directions somewhere. The same thing holds true in finance.
In the election, the forecast models were all wrong – they did not have the appropriate set of variables in place to capture Trump’s actual popularity. There was no way of knowing that the models were wrong until they were tested – in this case in the election.
Our current models of what stock and economic returns will look like under President Trump are equally likely to be wrong. The country has no experience with a president whose background is chiefly in business, or with many of the policies Trump is promoting. Free trade is unambiguously good for society as a whole (though bad for some subsets of society), but we have no idea what short-term effects, if any, pulling out of trade agreements like NAFTA will create.
Fun fact for the next time you want to strike up a conversation with someone at a bar – the last time the U.S. pulled out of a trade agreement was in 1866, with the unilateral withdrawal from the Elgin-Marcy Treaty. If the U.S. really does withdraw from NAFTA, to name one possibility among many, we really have no idea what impact that will have on financial markets.
Does all of this mean that quantitative modeling is useless in situations like the one investors face now? The answer is no. But it does mean that the modeling is more complex. In these types of situations, the solution is to use what are called synthetic observations. We have no data on the impacts from a President like Donald Trump. As a result, statements like “historically, stocks have returned roughly 3% in the 60 days after a Presidential election” are useless. We’re in uncharted territory.
With synthetic modeling we can correct for this issue. Donald Trump may be a novel president, but we can think of a Trump presidency as an amalgam of different factors. For example, we might do some analysis and conclude that Trump's policies are part isolationism, part lower taxes, part higher infrastructure spending, and part trade protectionism. Perhaps Trump is part Silvio Berlusconi, part Margaret Thatcher, part Ronald Reagan, and part Shinzo Abe. That's an ad hoc characterization, but you get the point. No single person is all that similar to a President Trump, but put them all together and you may get a similar amalgam.
Having isolated the factors that underlie Trump's policies, we can then decompose past presidents, American and otherwise, into their own sets of characteristics and estimate the impact of each component. We need good data to do it, and it's not easy, but it is possible. This type of technique is common in sophisticated economic analysis.
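The decomposition described above can be sketched in a few lines of Python. To be clear, everything in this sketch is hypothetical: the policy-factor scores, the analogue leaders, the target profile, and the return figures are illustrative placeholders, and the fit is a deliberately simplified weighting scheme (least squares, clipped and renormalized), not a full econometric model.

```python
import numpy as np

# Hypothetical policy-factor scores for past leaders (all numbers invented
# for illustration). Columns of A are leaders; rows are the four factors:
# isolationism, tax cuts, infrastructure spending, trade protectionism.
analogues = {
    "Berlusconi": [0.2, 0.6, 0.3, 0.1],
    "Thatcher":   [0.1, 0.9, 0.1, 0.0],
    "Reagan":     [0.0, 0.8, 0.4, 0.2],
    "Abe":        [0.1, 0.3, 0.8, 0.3],
}
A = np.array(list(analogues.values())).T        # factors x leaders

# Hypothesized factor profile of the new, never-before-seen regime.
target = np.array([0.6, 0.7, 0.7, 0.8])

# Find analogue weights that best reproduce the target profile, then
# clip negatives and renormalize so the synthetic observation is a
# convex-style blend of regimes we actually have data for.
w, *_ = np.linalg.lstsq(A, target, rcond=None)
w = np.clip(w, 0.0, None)
w /= w.sum()

# Hypothetical equity returns observed under each analogue regime.
returns = np.array([0.01, 0.04, 0.03, 0.02])

# The synthetic observation: a weighted average of real outcomes,
# standing in for data we cannot possibly have.
synthetic_return = float(w @ returns)
print(dict(zip(analogues, np.round(w, 3))), round(synthetic_return, 4))
```

Because the weights are non-negative and sum to one, the synthetic return is guaranteed to lie between the best and worst analogue outcomes; the real work, as the text notes, is in measuring the factor scores well rather than in the arithmetic.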
Beyond the election itself, financial market participants building models should always consider the possibility of regime change. In those circumstances, conventional models and data may not be effective. Instead, synthetic observations and related techniques used by economists provide a map forward even when we face what seem like novel situations.
Mike McDonald is a PhD in finance and a university professor in the subject. He also runs a consulting company doing work on quantitative investing, big data, and machine learning for a variety of financial firms, asset managers, institutional investors, and government regulators. Prior to getting his PhD, Mike worked for a major Wall Street bank and one of the top hedge funds. Comments, questions, and concerns are always welcome – email Mike at M.McDonald@MorningInvestmentsCT.com or visit his firm’s website at www.MorningInvestmentsCT.com.