Published: May 25, 2010 / Summer 2010 / Issue 59

 
 

Cleaning the Crystal Ball

At the end of the fourth quarter of 1998, the delinquency rate for U.S. subprime adjustable-rate mortgages stood at just over 13 percent. By the end of the fourth quarter of 2008, that rate had nearly doubled, to an astonishing 24 percent, a surge that contributed to the US$180 billion bailout of AIG. Although a 24 percent default rate seemed unprecedented to most bankers, a look back beyond their own lifetimes would have suggested the possibility: in 1934, at the height of the Great Depression, approximately 50 percent of all urban home mortgages were in default.

That is why looking back at past forecasts and their realizations can prove so valuable: it helps prevent overconfidence and suggests where unexpected factors may emerge. Recently, researchers Victor Jose, Bob Nau, and Bob Winkler at Duke University proposed new rules for scoring and rewarding good forecasts. An effective “scoring rule” provides incentives that discourage forecasters from sandbagging, a proverbial problem in corporate life. Gap Inc., for example, measures the performance of store managers on the difference between actual and forecast sales, as well as on overall sales. Because the measure rewards forecasting accuracy, it penalizes sales that come in above the forecast number as well as sales shortfalls. Unfortunately, Gap is an exception. To date, few firms have drawn on the research into incentive mechanisms and scoring rules to improve their forecasts, despite the proven success of these methods in fields such as meteorology.
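To make the mechanics concrete, here is a minimal sketch in Python of the Brier score, a classic “proper” scoring rule. It is an illustration only, not the specific rule proposed by the Duke researchers or the measure Gap uses; the point is that under any such rule, a forecaster minimizes the expected penalty only by reporting what he or she genuinely believes, which removes the payoff from sandbagging.

    def brier_score(forecast_prob, outcome):
        """Quadratic (Brier) penalty: 0 is a perfect forecast, 1 is the worst possible."""
        # forecast_prob: reported probability that the event occurs (0 to 1)
        # outcome: 1 if the event occurred, 0 if it did not
        return (forecast_prob - outcome) ** 2

    # A manager who truly believes there is a 70 percent chance of hitting the
    # sales target minimizes the expected penalty by reporting 0.7 rather than
    # a sandbagged 0.5.
    honest = 0.7 * brier_score(0.7, 1) + 0.3 * brier_score(0.7, 0)      # 0.21
    sandbagged = 0.7 * brier_score(0.5, 1) + 0.3 * brier_score(0.5, 0)  # 0.25
    print(f"expected penalty for an honest report:    {honest:.2f}")
    print(f"expected penalty for a sandbagged report: {sandbagged:.2f}")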

It may seem like an obvious thing to do, but most companies do not revisit their forecasts and track actual results against them. A recent survey by decision analysis consultant Douglas Hubbard found that only one of 35 companies with experienced modelers had ever attempted to check actual outcomes against original forecasts, and even that company could not present any evidence to back up the claim. Airbus and Boeing devote considerable resources to their “Global Market Forecast” and “Current Market Outlook” reports, but neither reports on the accuracy of its previous forecasts. Eli Lilly, by contrast, has developed a systematic process of tracking every drug forecast to understand its predictive accuracy.

Wisdom of Crowds

Increasingly, conventional wisdom also challenges the logic of relying on expert forecasters, even when they have been trained to rein in their overconfidence through continuous feedback on actual results. Journalist James Surowiecki made the case in his bestseller, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations (Doubleday, 2004). Research into forecasting in a wide range of fields by Wharton professor J. Scott Armstrong has likewise shown no important advantage for expertise. Indeed, research by James Shanteau, distinguished professor of psychology at Kansas State University, has shown that expert judgments are often logically inconsistent: medical pathologists presented with the same evidence twice reached different conclusions 50 percent of the time.

The old game of estimating the number of jelly beans in a jar illustrates the innate wisdom of the crowd. In a class of 50 to 60 students, the average of the individual guesses will typically be better than all but one or two of the individual guesses. Of course, that result raises the question of why you shouldn’t use the best single guesser as your expert forecaster. The problem is that we have no good way to identify that person in advance — and worse yet, that “expert” may not be the best individual for the next jar because the first result likely reflected a bit of random luck and not a truly superior methodology.
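A rough simulation sketch in Python (not from the article; it assumes a jar of 850 beans and models each guess as the true count plus independent error) shows the pattern: the class average typically lands closer to the truth than all but a few of the individual guesses.

    import random

    random.seed(7)
    TRUE_COUNT = 850        # assumed number of jelly beans in the jar
    CLASS_SIZE = 55         # a class of 50 to 60 students

    # Model each guess as the true count distorted by independent, roughly unbiased error.
    guesses = [max(1, int(random.gauss(TRUE_COUNT, 200))) for _ in range(CLASS_SIZE)]

    crowd_average = sum(guesses) / len(guesses)
    crowd_error = abs(crowd_average - TRUE_COUNT)

    # Count how many individual guesses beat the crowd average.
    closer = sum(abs(g - TRUE_COUNT) < crowd_error for g in guesses)
    print(f"crowd average: {crowd_average:.0f} (off by {crowd_error:.0f} beans)")
    print(f"guesses closer to the truth than the average: {closer} of {CLASS_SIZE}")

Rerunning the simulation with a different seed changes which student comes out on top, which is precisely the point: the best individual guesser is rarely the same person twice.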

For this reason, teams of forecasters often generate better results (and decisions) than individuals, provided the team includes sufficient diversity of information and perspectives. A naive forecaster often frames the question differently and thinks more deeply about the fundamental drivers of the forecast than an expert who has developed an intuitive, but often overconfident, sense of what the future holds.

 
 
 