I recently read an article by Jason Zweig that referenced Lewis Goldberg’s paper “Man Versus Model of Man,” published in the 1970 Psychological Bulletin. There are hundreds of published studies with a similar theme. Give an expert any and all available data they want and ask them to make a judgment germane to their field of expertise (an oncologist estimating how long a patient will live, a parole board predicting who is most likely to recidivate, a wine expert forecasting the price a wine will fetch at auction, etc.).
The experts tell the scientist which variables matter most in their decisions; the scientist then builds a model from those variables and compares the model’s results to the forecasts of the “experts.” Over the past 60 years, hundreds of such expert studies have been performed, and they show that the model beats or ties the expert 94% of the time (1).
One of Goldberg’s remarks about the use of models versus clinical decision making made me laugh:
Such an enterprise, originally viewed with considerable disdain by clinical psychologists, has recently weathered a period of intense controversy (Gough, 1962; Meehl, 1954; Sawyer, 1966), and may soon become a reasonably well accepted procedure in psychology—if not in medicine, stock forecasting, and other professional endeavors.
Consequently, it now seems safe to assert rather dogmatically that when acceptable criterion information is available, the proper role of the human in the decision-making process is that of a scientist: (a) discovering or identifying new cues which will improve predictive accuracy, and (b) constructing new sorts of systematic procedures for combining predictors in increasingly more optimal ways.
That was written 46 years ago, yet clinical judgment still dominates psychology, medicine, and stock forecasting. Given the evidence, it is hard to argue against model-based decision making, or at least man + model, yet expert judgment remains the norm.
The experts who will dominate the future (and are already beginning to do so) are the ones who embrace models as an extension of their own expertise. Models do not replace human judgment: experts determine the parameters a model is built on, and experts are still needed to recognize when an exception to the model is warranted.
My belief is that Lewis Goldberg’s prediction will come true in the next decade, as computing power, statistical techniques, software, and the zeitgeist have matured to the point where Man + Machine becomes the rule instead of the exception.
Here are a few other great quotes from Lewis Goldberg’s article:
- Mathematical representations of such clinical judges can often be constructed to capture critical aspects of their judgmental strategies.
- The results of these analyses indicate that for this diagnostic task models of the men are generally more valid than the men themselves. Moreover, the finding occurred even when the models were constructed on a small set of cases, and then man and model competed on a completely new set.
- Ten years of research on the clinical judgment process have demonstrated that for many types of common clinical decisions and for many sorts of clinical judges, a simple linear regression equation can be constructed which will predict the responses of a judge at approximately the level of his own reliability. For documentation of this assertion and for details of the methodology, see Hoffman (1960), Hammond, Hursch, and Todd (1964), Naylor and Wherry (1965), and Goldberg (1968). While such regression models have been utilized (probably somewhat inappropriately) to explain the manner in which clinicians combine cues in making their diagnostic and prognostic decisions (see Green, 1968; Hoffman, 1968), there is little controversy about their power as predictors of the clinical judgments.
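The procedure Goldberg describes, fitting a simple linear regression to a judge’s own past ratings and then letting the fitted model score new cases, can be sketched in a few lines. The data below are entirely hypothetical (the cue values, the judge’s implicit weights, and the noise level are all invented for illustration), and the least-squares fit stands in for whatever method the original studies used:

```python
import numpy as np

# Hypothetical "model of the man" sketch: each row of cues_train holds
# the cues a judge saw on a past case (e.g. test-scale scores), and
# judge_train holds the judge's own ratings on those cases.
rng = np.random.default_rng(0)
cues_train = rng.normal(size=(40, 3))          # 40 past cases, 3 cues
implicit_policy = np.array([0.6, -0.3, 0.1])   # invented judge weights
judge_train = cues_train @ implicit_policy + rng.normal(scale=0.4, size=40)

# Fit a simple linear model of the judge via ordinary least squares.
X = np.column_stack([np.ones(len(cues_train)), cues_train])
coef, *_ = np.linalg.lstsq(X, judge_train, rcond=None)

# Apply the fitted model to new cases the judge has never seen --
# the "completely new set" on which man and model then compete.
cues_new = rng.normal(size=(5, 3))
model_predictions = np.column_stack([np.ones(5), cues_new]) @ coef
print(model_predictions)
```

The recovered weights track the judge’s implicit policy because the regression averages away the case-to-case noise in the ratings, which is one intuition for why the model can out-predict the judge it was built from.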
(1) “Comparative Efficiency of Informal and Formal Prediction Procedures” – William Grove and Paul Meehl, published in Psychology, Public Policy, and Law (1996)