Philip Tetlock's Expert Political Judgment: How Good Is It? How Can We Know? is yet another reminder that my practice of periodically sweeping back through earlier material, mine and others', to see how a forecast played out, or whether its tone was too shrill, is exceedingly uncommon:
It is my wont to revisit projections and forecasts, mine and others', to ask: were they accurate in both substance and timing; are the assumptions still valid and, if not, why not; what new players and tools have entered the market; and what has shifted? The assumptions and the development process are more interesting than the answer itself, for too many people treat a situation in time as something fixed, instead of seeing it as a still frame in a motion picture (where the trick is to predict the next scene).
What I have found from many engagements is how rare such introspection is, and rarer still a review of one's predictions made public. Yet I am pulled up short, for against Tetlock's findings on predictive failure I may not measure up either. He has certainly put me on guard, both for what I do and for those whom I read. Expert Political Judgment nicely integrates Tetlock's four research areas:
- Accountability, "strategies people use to cope with social pressures to justify their views or conduct to others"
- Value conflict/taboo trade-offs/protecting the sacred, "the boundaries people often place on the range of the 'thinkable'"
- The concept of good judgment, defined as "styles of reasoning in individuals and groups"
- Political versus politicized psychology, "criteria [to] gauge the impact of moral and political objectives [on works] ostensibly dedicated [to] truth"
While too rigorous to be dismissed as a crossover work, Expert Political Judgment puts front and center in the public eye that:
people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either.
(Many readers may end their knowledge of Tetlock with Louis Menand's hagiographic Everybody's an Expert in the New Yorker, but they should read on.)
I believe that Tetlock's claim of an utter lack of accountability in political prediction applies to virtually all areas of forecasting, notably its handmaidens, economic and strategic forecasting. I agree that "Our system of expertise is completely inside out: it rewards bad judgments over good ones." We regularly have to put aside the 'received wisdom' of earlier advisors in order to effect a solution.
Tetlock steps out (with his own great sound bite) to divide his pundits into two groups, hedgehogs and foxes:
- Hedgehogs "'know one big thing,' aggressively [extend] the explanatory reach of that one big thing into new domains, display bristly impatience with those who 'do not get it,' and express considerable confidence that they are already pretty proficient forecasters, at least in the long term"
- Foxes "know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible "ad hocery" that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess."
Foxes score better than hedgehogs (although hedgehogs do very well on the chance hole-in-one), yet our "primitive attraction" is to "deterministic, overconfident hedgehogs":
A hedgehog is a person who sees international affairs to be ultimately determined by a single bottom-line force: balance-of-power considerations, or the clash of civilizations, or globalization and the spread of free markets. A hedgehog is the kind of person who holds a great-man theory of history [or] he or she might adhere to the "actor-dispensability thesis"… Whatever it is, the big idea, and that idea alone, dictates the probable outcome of events. For the hedgehog, therefore, predictions that fail are only "off on timing," or are "almost right," derailed by an unforeseeable accident. There are always little swerves in the short run, but the long run irons them out.
Foxes… don’t see a single determining explanation in history. They tend to "see the world as a shifting mixture of self-fulfilling and self-negating prophecies: self-fulfilling ones in which success breeds success, and failure, failure but only up to a point, and then self-negating prophecies kick in as people recognize that things have gone too far."
Disclaimer: I place myself among the foxes.
Even a work as potent as Expert Political Judgment may not penetrate the public's assumption of expertise among pundits (especially one's own favorite pundits). Yet in the field of the psychology of expertise, Tetlock's work brings no great surprise, as it "is just one of more than a hundred studies that have pitted experts against statistical or actuarial formulas, and in almost all of those studies the people either do no better than the formulas or do worse." Relatively common findings in the field:
- People, experts included, "fall in love with our hunches [and] really, really hate to be wrong"
- Experts are no different from ordinary people in tending to "dismiss new information that doesn’t fit with what they already believe"
- The future is seen "as indeterminate and the past as inevitable"
- Double standards abound, "tougher in assessing the validity of information that undercut their theory than [in] crediting information that supported it"
- Intellectuals too often deal in "solidarity goods" rather than "credence goods," tailoring predictions to fit those of their ideological brethren
- Most, experts included, tend to "find scenarios with more variables more likely," thus building forecasts that require several separate events to occur, even though each added condition can only lower the joint probability
- "Plausible detail" clouds decision making and promotes selection of complex outcomes
- Hyperspecialization robs people, experts included, of a rounded basis for decision making
- The more "ingenious" and "arresting" forecasts achieve greater cachet and fit neatly into the sound-bite window of public attention
Following are the bibliography citations for parts 1, 2, and 3:
- "Think you can beat the analysts in predicting 2006? Readers have topped experts in last 5 years," David Lieberman, January 11, 2006
- "Evaluating Political Pundits," Carl Bialik (The Numbers Guy), Wall Street Journal, January 6, 2006
- "#19 - Philip Tetlock, Ph.D. on the Predictive Errors of Political Experts," Shrink Rap Radio (a psychology talk and interview show), podcast, December 26, 2005
- "The Guessing Game," On the Media, December 9, 2005
- "Foxes, hedgehogs, and the study of international relations," Daniel W. Drezner, November 30, 2005
- "Who needs experts?" Daniel W. Drezner, November 29, 2005
- "Everybody's an Expert," Louis Menand, The New Yorker, issue of December 5, 2005
- "China - Thunder From The Silent Zone," Paul Monk and Rowan Callick, Background Briefing, ABC Radio National (Australia), September 18, 2005
- "The New Neuromorality," W. H. Brady Program in Culture and Freedom Conference, AEI, Washington, D.C., June 1, 2005
- "The Implicit Prejudice Exchange: Islands of Consensus in a Sea of Controversy," Philip E. Tetlock and Hal R. Arkes, 2004, Vol. 15, No. 4, 311-321
- "Making Unconscious Decisions Properly," Anderson Cooper 360 Degrees, aired May 6, 2005
- Expert Political Judgment: How Good Is It? How Can We Know?, Philip E. Tetlock, Chapter 1, "Quantifying the Unquantifiable"
- "Blink and The Wisdom of Crowds," book review in the form of an exchange between James Surowiecki and Malcolm Gladwell, January 10, 2005
- Blink: The Power of Thinking without Thinking, Malcolm Gladwell
- "If you want good information, ask around - a lot: Large groups are more accurate than any expert," John Freeman, May 25, 2004
- The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, James Surowiecki