Models vs. Experts # 11: Simple models beating the experts


September 3, 2013 Behavioral Finance
(Last Updated On: January 18, 2017)

The search for configural relationships in personality assessment: The diagnosis of psychosis vs. neurosis from the MMPI

  • Goldberg, L. R.
  • Multivariate Behavioral Research, 4, 523-536
  • An online version of the paper can be found here
  • Want a summary of academic papers with alpha? Check out our free Academic Alpha Database!

Abstract:

In 1956 Meehl predicted that the relationships between MMPI scores and the psychosis-neurosis diagnostic classification should be highly configural in character, and therefore that no linear combination of MMPI scores should be able to differentiate neurotic from psychotic patients as accurately as either experienced clinical psychologists or configural actuarial techniques. The present paper summarizes the findings from ten years of research on this question. While the search for configural actuarial procedures has led to a moderator variable, neither clinical experts, moderated regression analyses, profile typologies, the Perceptron algorithm, density estimation procedures, Bayesian techniques, nor sequential analyses, when cross-validated, have been able to improve on a simple linear function. The implications of these negative findings for investigations of configural relationships with other problems are discussed.

Prediction:

The authors compare diagnostic accuracy for psychosis versus neurosis across human experts, complex models, and simple models.

Here are the results from the test, with a few highlights for examples of 1) the human experts, 2) a complex model, and 3) the simple model.

[Figure: Goldberg (1969) results comparing the diagnostic accuracy of human experts, complex models, and the simple linear model]

Anyone else catching on to a pattern here? Simple beats complex and machines beat humans.
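To make the "simple linear function" concrete, here is a minimal sketch of the kind of rule this literature examined: Goldberg's linear index over five MMPI scale T-scores, commonly reported as (L + Pa + Sc) − (Hy + Pt) with a cutoff of 45 (classify as psychotic at or above the cutoff). The profiles below are invented for illustration, not real patient data, and the exact cutoff convention is an assumption on our part.

```python
# A hedged sketch of Goldberg's simple linear rule for the
# psychosis-vs-neurosis classification problem discussed above.
# Rule (as commonly reported): (L + Pa + Sc) - (Hy + Pt) >= 45 -> "psychotic".
# All T-scores below are made-up examples, not real patient data.

def goldberg_index(l, pa, sc, hy, pt):
    """Simple linear combination of five MMPI scale T-scores."""
    return (l + pa + sc) - (hy + pt)

def classify(l, pa, sc, hy, pt, cutoff=45):
    """Classify a profile as 'psychotic' if the index meets the cutoff."""
    return "psychotic" if goldberg_index(l, pa, sc, hy, pt) >= cutoff else "neurotic"

# Two hypothetical profiles, given as (L, Pa, Sc, Hy, Pt):
print(classify(50, 75, 80, 55, 60))  # elevated Pa/Sc: index 90 -> "psychotic"
print(classify(45, 55, 60, 70, 75))  # elevated Hy/Pt: index 15 -> "neurotic"
```

The point of the paper is that nothing more elaborate than this five-term sum reliably beat it out of sample, neither expert clinicians nor the configural and machine-learning-style procedures of the day.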






Please remember that past performance is not an indicator of future results. Please read our full disclosures. The views and opinions expressed herein are those of the author and do not necessarily reflect the views of Alpha Architect, its affiliates or its employees. This material has been provided to you solely for information and educational purposes and does not constitute an offer or solicitation of an offer or any advice or recommendation to purchase any securities or other financial instruments and may not be construed as such. The factual information set forth herein has been obtained or derived from sources believed by the author and Alpha Architect to be reliable but it is not necessarily all-inclusive and is not guaranteed as to its accuracy and is not to be regarded as a representation or warranty, express or implied, as to the information’s accuracy or completeness, nor should the attached information serve as the basis of any investment decision. No part of this material may be reproduced in any form, or referred to in any other publication, without express written permission from Alpha Architect.






About the Author

Wesley R. Gray, Ph.D.

After serving as a Captain in the United States Marine Corps, Dr. Gray earned a PhD and worked as a finance professor at Drexel University. His interest in bridging the research gap between academia and industry led him to found Alpha Architect, an asset management firm that delivers affordable active exposures for tax-sensitive investors. Dr. Gray has published four books and a number of academic articles. Wes is a regular contributor to multiple industry outlets, including the Wall Street Journal, Forbes, ETF.com, and the CFA Institute. Dr. Gray earned an MBA and a PhD in finance from the University of Chicago and graduated magna cum laude with a BS from The Wharton School of the University of Pennsylvania.


  • This is a frequently mentioned result, but it overlooks that, in this case, the outcomes being reported were themselves highly subjective constructs, making it questionable whether any particular decision can be independently verified as “correct.” The prevailing thinking at that time about those two psychological categories doesn’t correspond to any measurable facts in the physical world, and it would not necessarily mean the same thing to psychiatrists or clinical psychologists practicing today.

    I think it’s true that decision rules can often outperform individuals, but there’s a hidden pitfall when the “correctness” of either approach is determined by the output of similar rules, rather than by totally objective real-world outcomes that can be observed independently.

  • makes sense–great point!