Models vs. Experts #8: Graduate School Admissions

May 22, 2013 Behavioral Finance

A case study of graduate admissions: Application of three principles of human decision making

  • Dawes, R. M. (1971)
  • American Psychologist, 26, 180-188
  • An online version of the paper can be found here

Abstract:

The problem is that the admissions committee does not know what they are (except perhaps on a vague verbal level). And it has no way of assessing them. Since the clinical judgment of the admissions committee is not even as good as two of the conventional variables considered singly, it can only be concluded that the attempt of the admissions committee to assess these other presumably important variables decreases rather than increases the validity of its judgments. What is needed is research concerning the determinants of graduate success.

Prediction:

This paper involves a fair amount of literature review and discussion of simple models versus experts.

A fascinating quote:

How can a model (linear or any other sort) based on an individual’s behavior do a better job of what the individual is trying to do than does the individual himself? The answer is that a mathematical model, by its very nature, is an abstraction of the process it models; hence, if the decision maker’s behavior involves following valid principles but following them poorly, these valid principles will be abstracted by the model—as long as the deviations from these principles are not systematically related to the variables the decision maker is considering.

Another quote:

For example, a decision maker may be weighting aptitude, past performance, and motivation correctly in predicting performance in graduate school and beyond, yet he may be influenced by such things as fatigue, headaches, boredom, and so on; in addition, he will be influenced by whether the most recent applications he has seen are particularly strong or weak.
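To make that mechanism concrete, here is a toy simulation (not from the paper; the weights, noise levels, and sample size are all made-up assumptions): a "judge" applies valid weights to three cues, but unsystematic noise corrupts each rating. Because that noise is unrelated to the cues, a linear model fit to the judge's own ratings averages it out, recovers the valid weights, and out-predicts the judge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Three standardized cues the judge sees (e.g., GPA, GRE, school quality).
cues = rng.standard_normal((n, 3))
true_weights = np.array([0.5, 0.3, 0.2])   # the "valid principles"

# Realized outcome: the valid weights plus irreducible noise.
outcome = cues @ true_weights + 0.5 * rng.standard_normal(n)

# The judge applies the same valid weights, but unsystematic noise
# (fatigue, boredom, contrast with recent applicants) corrupts each rating.
judge = cues @ true_weights + 0.8 * rng.standard_normal(n)

# Regress the judge's own ratings on the cues: since the judge's errors
# are unrelated to the cues, the fit recovers the valid weights.
recovered, *_ = np.linalg.lstsq(cues, judge, rcond=None)
model_of_judge = cues @ recovered

print("recovered weights :", recovered.round(2))   # ~ [0.5, 0.3, 0.2]
print("judge vs. outcome :", round(np.corrcoef(judge, outcome)[0, 1], 2))
print("model vs. outcome :", round(np.corrcoef(model_of_judge, outcome)[0, 1], 2))
```

The model of the judge correlates more strongly with the outcome than the judge does, even though it contains no information the judge did not supply.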

Here is how the tests go down:

  1. Have the admissions committee rank prospective PhD students based on its assessment of a variety of characteristics (GPA, GRE, transcript, recommendations, etc.). All of this is done from 1964 to 1967.
  2. Let a computer pipe in GPA, undergraduate institution quality, and GRE score.
  3. Collect performance ratings on the students in 1969: the faculty rank students on a 5-point scale based on their realized performance in graduate school.
  4. Compare the performance of the admissions committee rankings against the performance of the computer prediction (see the sketch after this list).
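Here is a minimal sketch of that comparison in Python. The file name and column names (committee_rating, gpa, gre, school_quality, faculty_rating) are hypothetical stand-ins for the study's variables; the original data are not distributed with the paper.

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names, one row per admitted student.
df = pd.read_csv("admissions_1964_1967.csv")
y = df["faculty_rating"].to_numpy()   # the 1969 outcome measure

def corr(a, b):
    """Pearson correlation between two 1-D arrays."""
    return np.corrcoef(a, b)[0, 1]

# (1) Committee judgment vs. realized performance.
print("committee :", corr(df["committee_rating"], y))

# (2) One conventional variable considered singly.
print("gpa alone :", corr(df["gpa"], y))

# (3) A simple linear combination of the three conventional variables.
X = np.column_stack([np.ones(len(df)), df[["gpa", "gre", "school_quality"]]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("regression:", corr(X @ beta, y))
```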

Alpha Highlight:

Here are the results:

  • The average rating of the admissions committee is only 19% correlated with the outcome measure.
  • Using GPA alone does a better job than the admissions committee (21% correlation).
  • A simple multiple regression on GPA, GRE, and institution quality achieves a 40% correlation.

==> A simple linear combination of the variables identified in step 2 above outperforms the admissions committee rankings.

Next, the author uses multiple regression to “quantify” how the admissions committee makes its decisions. He then uses this information to predict future performance (the paper calls this a paramorphic representation; in other words, a computer model of how the experts will act, built from data on their past decision making).

Paradoxically, realized performance is 25% correlated with the computer’s prediction of the experts’ behavior, whereas it is only 19% correlated with the experts’ actual decisions.

==> A computer predicting how the experts will act, based on their historical actions, does a better job of predicting outcomes than the experts themselves.
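Here is a minimal sketch of that paramorphic step, continuing the hypothetical DataFrame from the earlier sketch: regress the committee's own ratings (not the outcome) on the cues, then correlate the resulting imitation of the committee with realized performance.

```python
import numpy as np

X = np.column_stack([np.ones(len(df)), df[["gpa", "gre", "school_quality"]]])

# Regress the committee's own ratings (not the outcome) on the cues.
w, *_ = np.linalg.lstsq(X, df["committee_rating"].to_numpy(), rcond=None)
committee_model = X @ w   # the paramorphic representation of the committee

# The model of the committee can out-predict the committee it was built from,
# because the regression strips out the committee's unsystematic noise.
y = df["faculty_rating"].to_numpy()
print("committee itself  :", np.corrcoef(df["committee_rating"], y)[0, 1])
print("model of committee:", np.corrcoef(committee_model, y)[0, 1])
```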

Chew on that one for a while…


Thoughts on the paper?


About the Author

Wesley R. Gray, Ph.D.

After serving as a Captain in the United States Marine Corps, Dr. Gray earned a PhD and worked as a finance professor at Drexel University. Dr. Gray’s interest in bridging the research gap between academia and industry led him to found Alpha Architect, an asset management firm that delivers affordable active exposures for tax-sensitive investors. Dr. Gray has published four books and a number of academic articles. Wes is a regular contributor to multiple industry outlets, including the Wall Street Journal, Forbes, ETF.com, and the CFA Institute. Dr. Gray earned an MBA and a PhD in finance from the University of Chicago and graduated magna cum laude with a BS from The Wharton School of the University of Pennsylvania.