Models vs. Experts # 11: Simple models beating the experts


September 3rd, 2013 | Behavioral Finance | Last updated: January 18, 2017

The search for configural relationships in personality assessment: The diagnosis of psychosis vs. neurosis from the MMPI

  • Goldberg, L. R.
  • Multivariate Behavioral Research, 4, 523-536
  • An online version of the paper can be found here
  • Want a summary of academic papers with alpha? Check out our free Academic Alpha Database!

Abstract:

In 1956 Meehl predicted that the relationships between MMPI scores and the psychosis-neurosis diagnostic classification should be highly configural in character, and therefore that no linear combination of MMPI scores should be able to differentiate neurotic from psychotic patients as accurately as either experienced clinical psychologists or configural actuarial techniques. The present paper summarizes the findings from ten years of research on this question. While the search for configural actuarial procedures has led to a moderator variable, neither clinical experts, moderated regression analyses, profile typologies, the Perceptron algorithm, density estimation procedures, Bayesian techniques, nor sequential analyses, when cross-validated, have been able to improve on a simple linear function. The implications of these negative findings for investigations of configural relationships with other problems are discussed.
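To make the abstract's "simple linear function" concrete, here is a minimal Python sketch of a linear decision rule on MMPI scale scores. The composite (L + Pa + Sc - Hy - Pt) and the cutoff of 45 are the ones commonly attributed to Goldberg's MMPI work, but the profile values below are made up for illustration, so treat this as a sketch of the idea rather than a validated diagnostic tool.

```python
# Sketch of a simple linear decision rule on MMPI T-scores.
# The composite (L + Pa + Sc - Hy - Pt) and the cutoff of 45 follow the
# linear rule commonly attributed to Goldberg's MMPI research; the profile
# below is made-up example data, not taken from the paper.

def goldberg_linear_index(profile: dict) -> float:
    """Return the linear composite L + Pa + Sc - Hy - Pt."""
    return (profile["L"] + profile["Pa"] + profile["Sc"]
            - profile["Hy"] - profile["Pt"])

def classify(profile: dict, cutoff: float = 45.0) -> str:
    """Classify a profile as 'psychotic' or 'neurotic' using a single cutoff."""
    return "psychotic" if goldberg_linear_index(profile) >= cutoff else "neurotic"

# Hypothetical MMPI T-scores for one patient (example values only).
example_profile = {"L": 50, "Pa": 70, "Sc": 75, "Hy": 60, "Pt": 65}

print(goldberg_linear_index(example_profile))  # 70
print(classify(example_profile))               # psychotic
```

That is the entire "model": an unweighted sum of five scales and one cutting score, applied the same way to every profile.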

Prediction:

The author compares diagnostic accuracy for psychosis vs. neurosis across human experts, complex models, and simple models.

Here are the results from the test, with a few highlights marking examples of 1) the human experts, 2) a complex model, and 3) a simple model.

[Figure: results from Goldberg (1969)]

Anyone else catching on to a pattern here? Simple beats complex and machines beat humans.
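To see why "complex" so often loses to "simple" once you score out of sample, here is a self-contained Python sketch (synthetic data, nothing from the paper) that pits a plain linear classifier against a much more flexible model on data with a weak linear signal and plenty of noise. The scikit-learn models, the feature setup, and the sample size are all stand-ins chosen for illustration.

```python
# Illustrative comparison on synthetic data (not from Goldberg 1969):
# a simple linear model vs. a flexible nonlinear model, judged out of sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n, p = 300, 11                      # ~11 "scales", a modest sample size
X = rng.normal(size=(n, p))         # fake standardized scale scores
true_w = np.zeros(p)
true_w[:5] = [1, 1, 1, -1, -1]      # weak linear signal on five scales
noisy_score = X @ true_w + rng.normal(scale=2.0, size=n)
y = (noisy_score > 0).astype(int)   # 1 = "psychotic", 0 = "neurotic"

models = {
    "simple linear": LogisticRegression(max_iter=1000),
    "complex nonlinear": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:<18} cross-validated accuracy: {acc:.3f}")
```

When the underlying relationship really is close to linear and the data are noisy, the flexible model's extra capacity mostly buys overfitting, and cross-validation exposes that, which mirrors the pattern the paper reports for the MMPI diagnosis problem.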


  • The views and opinions expressed herein are those of the author and do not necessarily reflect the views of Alpha Architect, its affiliates or its employees. Our full disclosures are available here. Definitions of common statistics used in our analysis are available here (towards the bottom).

About the Author:

After serving as a Captain in the United States Marine Corps, Dr. Gray earned a PhD and worked as a finance professor at Drexel University. Dr. Gray’s interest in bridging the research gap between academia and industry led him to found Alpha Architect, an asset management firm that delivers affordable active exposures for tax-sensitive investors. Dr. Gray has published four books and a number of academic articles. Wes is a regular contributor to multiple industry outlets, including the Wall Street Journal, Forbes, ETF.com, and the CFA Institute. Dr. Gray earned an MBA and a PhD in finance from the University of Chicago and graduated magna cum laude with a BS from The Wharton School of the University of Pennsylvania.
  • This is a frequently cited result, but it overlooks that the outcomes being reported were themselves highly subjective constructs, making it questionable whether any particular decision can be independently verified as “correct.” The prevailing thinking at that time about those two psychological categories doesn’t correspond to any measurable facts in the physical world, and would not necessarily mean the same thing to psychiatrists or clinical psychologists practicing today.

    I think it’s true that decision rules can often outperform individuals, but there’s a hidden pitfall when the “correctness” of either approach is judged by the output of similar rules rather than by objective real-world outcomes that can be observed independently.

  • Makes sense, great point!
