New Evidence: Simple Forecasts Beat Complex Forecasts

September 22, 2015 Behavioral Finance
(Last Updated On: January 18, 2017)

Always seek to simplify.

Occam’s razor teaches us we should cut away any extraneous factors that are unnecessary to explain something. Stated another way, we should avoid adding predictive elements unless they are absolutely necessary and strongly enhance our prediction.

But why should we believe this is necessarily true? Where’s the evidence? Forecasting is a subtle art. Perhaps more complex models can capture more of the factors that drive outcomes and offer more granular forecasts. Thus, maybe in the realm of predictions, we should favor complexity over simplicity?

Wharton Professor of Marketing J. Scott Armstrong has set about trying to measure the effectiveness of simple versus complex approaches to forecasting. He recently had a great post on the Wharton Blog Network relating to some research he’s done on this question.

In a new article in the Journal of Business Research, Armstrong discussed a recent meta study, or study of studies, he undertook involving 32 papers covering a range of forecasting methods (judgmental, causal, etc.), which included 97 independent comparisons of the accuracy of simple versus complex approaches.

What did the evidence say?

In a slam dunk, Armstrong found that in an extraordinary eighty-one percent of these independent comparisons, the simple forecasts beat the complex ones. Moreover, the errors from complex forecasts were 27 percent greater than those from simple forecasts. So complexity didn’t just fail to add value; it actively reduced forecast accuracy. In fact, in his survey of the research, Armstrong couldn’t find a single paper arguing that complex predictive models beat simple ones.
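Armstrong’s finding has a familiar statistical cousin: complex models tend to overfit the noise in the data they were built on, and then forecast poorly out of sample. Here is a minimal toy sketch in Python/NumPy (my own illustration, not Armstrong’s data or method): fit a simple straight line and a flexible degree-9 polynomial to the same noisy trend, then compare their forecast errors on a holdout window.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Toy series: a linear trend plus noise.
t = np.arange(40, dtype=float)
y = 2.0 + 0.5 * t + rng.normal(scale=2.0, size=t.size)

train, holdout = slice(0, 30), slice(30, 40)

def poly_forecast(degree):
    # Fit a polynomial on the first 30 points, forecast the last 10.
    model = Polynomial.fit(t[train], y[train], degree)
    return model(t[holdout])

simple_mae = np.mean(np.abs(y[holdout] - poly_forecast(1)))   # simple: straight line
complex_mae = np.mean(np.abs(y[holdout] - poly_forecast(9)))  # complex: degree-9 curve

print(f"simple MAE:  {simple_mae:.2f}")
print(f"complex MAE: {complex_mae:.2f}")
```

The degree-9 fit hugs the training noise and extrapolates wildly, so its holdout error comes out far larger than the straight line’s. The exact numbers are artifacts of this toy setup, but the direction of the result echoes Armstrong’s evidence.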

Despite the seemingly conclusive nature of this finding, there’s also plenty of evidence that people still prefer complexity over simplicity. Why? It may be that there’s something persuasive about complexity itself.

For instance, Armstrong describes the “Dr. Fox effect,” named for a famous 1970 experiment.

In the experiment, researchers had an actor, whom they called “Dr. Fox,” and described as a legitimate and esteemed “expert,” deliver a lecture that was intentionally engineered to consist of contradictions, meaningless references, non sequiturs and unintelligible mumbo jumbo. Yet despite the nonsensical nature of the talk, subjects gave Dr. Fox high satisfaction ratings.

What’s going on here?

Subjects were told Dr. Fox was a noted expert. He was lively and charismatic. He was even funny! He appeared to have a deep command of the material, and he acted as if he understood it. Yet…the material was very complicated. Very dense. And so, although it was difficult to understand him, the subjects still thought he was competent. They might have been saying to themselves, “well, if I can’t understand it, then it must be really high quality material indeed!” In a sense, it was complexity itself that contributed to making Dr. Fox more credible.

Armstrong points out this effect may hold in academia, where writing in the highly regarded journals tends to be more complex. Want to get published? Perhaps you should make your papers dense and impenetrable to give them the best shot at the big name journals. If the writing is dense, then you must really know what you’re talking about; after all, it’s harder to understand really smart people. By contrast, if your research is easy to read and accessible, that might suggest you lack sophistication or your insights are too obvious. So as with Dr. Fox, it is complexity itself that can contribute to credibility and positive assessments of competence.

Yet Armstrong’s meta study suggests this is the wrong intuition. We are falling victim to the siren song of complexity. We should do the opposite of what our gut tells us!

Instead, we should be suspicious of complexity, since it tends not to add predictive value, but rather to detract from the accuracy of forecasts. If we conclude the basis for a forecast is too complex, we should reject it on that basis alone. How might you go about doing this?

Armstrong suggests using a “simplicity checklist,” (a copy is here) as a way to assess whether an approach really is simple. The checklist focuses on prior knowledge used, relationship of model elements, and whether users can explain the forecasting process. If a model is not simple, and you can’t explain it, then you shouldn’t trust it.
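For illustration only, here is how such a screen might look in code. The questions below paraphrase the three themes named above, and the all-or-nothing pass rule is my assumption, not Armstrong’s actual checklist:

```python
# Illustrative only: a toy yes/no screen inspired by Armstrong's simplicity
# themes (prior knowledge, model relationships, explainability). The question
# wording and the pass rule are assumptions, not his published checklist.
CHECKLIST = [
    "Does the method use well-established prior knowledge about the problem?",
    "Can you state how each element of the model relates to the forecast?",
    "Could an intended user explain the forecasting process to someone else?",
]

def is_simple_enough(answers):
    """Treat a forecasting method as 'simple' only if every answer is yes."""
    return all(answers)

print(is_simple_enough([True, True, True]))   # True
print(is_simple_enough([True, False, True]))  # False
```

The point of the exercise is the last line of the paragraph above: if any answer is “no” — in particular, if you can’t explain the process — you shouldn’t trust the forecast.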

I think this is useful thinking to apply to the asset management industry, where there is always a tug of war between simplicity and complexity. The Dr. Fox study reminds me of some industry dynamics I’ve witnessed.

In asset management, there are plenty of reasons to avoid simplicity and add complexity. After all, if it’s simple, then there’s nothing special about it. A client might say, “if it’s so easy for me to understand, and I can explain it to you, then maybe I know as much as you do, and maybe you’re not adding any value!”

Meanwhile, complexity sells. The “wow” effect is similar to the effect in academic journals. “If I can’t understand it, then it must be really good!”

Complexity is also a great substitute for a lack of substance. If a client wants a justification for something, you can devise a complex, heavily data-mined solution that supports the decision, no matter what it is. Armstrong mentions the old saying, “if you can’t convince them, confuse them.”

So don’t be fooled by Dr. Fox! When in doubt, simplify, and avoid harmful complexity at all costs.

If you’d like to learn more about this in the context of finance, here is a post we wrote entitled, “Are you trying too hard?” The essence of the argument is to focus on simple robust processes and avoid complexity.


Note: This site provides no information on our value investing ETFs or our momentum investing ETFs. Please refer to this site.




Please remember that past performance is not an indicator of future results. Please read our full disclaimer. The views and opinions expressed herein are those of the author and do not necessarily reflect the views of Alpha Architect, its affiliates or its employees. This material has been provided to you solely for information and educational purposes and does not constitute an offer or solicitation of an offer or any advice or recommendation to purchase any securities or other financial instruments and may not be construed as such. The factual information set forth herein has been obtained or derived from sources believed by the author and Alpha Architect to be reliable but it is not necessarily all-inclusive and is not guaranteed as to its accuracy and is not to be regarded as a representation or warranty, express or implied, as to the information’s accuracy or completeness, nor should the attached information serve as the basis of any investment decision. No part of this material may be reproduced in any form, or referred to in any other publication, without express written permission from Alpha Architect.


Definitions of common statistics used in our analysis are available here (towards the bottom)




About the Author

David Foulke

Mr. Foulke is currently an owner/manager at Tradingfront, Inc., a white-label robo advisor platform. Previously he was a Managing Member of Alpha Architect, a quantitative asset manager. Prior to joining Alpha Architect, he was a Senior Vice President at Pardee Resources Company, a manager of natural resource assets, including investments in mineral rights, timber and renewables. He has also worked in investment banking and capital markets roles within the financial services industry, including at Houlihan Lokey, GE Capital, and Burnham Financial. He also founded two technology companies: E-lingo.com, an internet-based provider of automated translation services, and Stonelocator.com, an online wholesaler of stone and tile. Mr. Foulke received an M.B.A. from The Wharton School of the University of Pennsylvania, and an A.B. from Dartmouth College.