
Cleaning the Crystal Ball

How intelligent forecasting can lead to better decision making.

(originally published by Booz & Company)

 Peter Drucker once commented that “trying to predict the future is like trying to drive down a country road at night with no lights while looking out the back window.” Though we agree with Drucker that forecasting is hard, managers are constantly asked to predict the future — be it to project future product sales, anticipate company profits, or plan for investment returns. Good forecasts hold the key to good plans. Simply complaining about the difficulty does not help.

Nonetheless, few forecasters receive any formal training, or even expert apprenticeship. Too many companies treat the forecasting process like a carnival game of guessing someone’s weight. And given the frequency of sandbagged (deliberately underestimated) sales forecasts and managed earnings, we even wonder how often the scale is rigged. This lack of attention to the quality of forecasting is a shame, because an effective vehicle for looking ahead can make all the difference in the success of a long-term investment or strategic decision.

Competence in forecasting does not mean being able to predict the future with certainty. It means accepting the role that uncertainty plays in the world, continuously improving your firm's forecasting capability, and thereby paving the way for corporate success. A good forecast leads, through either direct recommendations or informal conversation, to robust actions: actions that will be worth taking no matter how the realities of the future unfold. In many cases, good forecasting involves recognizing, and sometimes shouting from the rooftops about, the inherent uncertainty of the estimates and the fact that things can go very wrong very quickly. Such shouts should not invoke the paranoia of Chicken Little's falling sky; instead, they should promote the development of contingency plans both to manage risks and to take rapid advantage of unexpected opportunities.

Fortunately, better forecasting can be accomplished almost as simply as improving Drucker’s driving challenge. Turn on the headlights, focus on the road ahead, know the limits of both the car and the driver, and, if the road is particularly challenging, get a map — or even ask others for directions. By using the language of probability, a well-designed forecast helps managers understand future uncertainty so they can make better plans that inform ongoing decision making. We will explore the many approaches that forecasters can take to make their recommendations robust, even as they embrace the uncertainty of the real world.

The Flaw of Averages

In forecasting the future, most companies focus on single-point estimates: They propose a number for the market size or the company’s unit sales in the coming year, typically based on an average of expected data. Though companies generally manage against a specific target like revenue or profit, and also share that information with outside analysts, we often forget that a point forecast is almost certainly wrong; an exact realization of a specific number is nearly impossible.

This problem is described at length by Sam Savage, an academic and consultant based at Stanford University, in The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty (Wiley, 2009). He notes how focusing on an average without understanding the impact of the range can lead to flawed estimates. Better decisions result from taking the time to anticipate the likelihood of overshooting or undershooting the point, and then considering what to do today, given the range of possibilities in the future.

Savage highlights the simple example of a manager who estimates average demand for a product at 100,000 units, based on a range of possible market conditions, and then plugs that average into a profit calculation. But plausible demand could run as much as 50 percent above or below the average, with potentially dangerous consequences. If demand runs 50 percent above the average, the plant will miss some sales because it cannot increase capacity that much in the time period. Conversely, if demand runs 50 percent below the average, the profit per unit will be dramatically lower, since the plant has to spread its fixed cost over fewer units. As a result, the profit at the average demand level will differ substantially from the average of the profits across the range of possibilities. Rather than a simple average, a better forecast would present a wide range of scenarios coupled with a set of potential actions to influence demand and profitability. Such a forecast would encourage management to heed early signals of consumer interest: to accelerate marketing and/or cut fixed costs if sales fall short, or to ramp up production quickly if sales appear to be headed for the high end of the forecast.
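
The arithmetic behind Savage's example is easy to verify with a short simulation. The sketch below, in Python, is a minimal Monte Carlo version of it; the price, cost, and capacity figures are invented placeholders (the example itself specifies only the demand range), so treat the output as illustrative rather than as Savage's own numbers.

    import random

    random.seed(42)

    CAPACITY = 120_000      # units the plant can produce (assumed)
    PRICE = 10.0            # revenue per unit (assumed)
    VARIABLE_COST = 4.0     # cost per unit produced (assumed)
    FIXED_COST = 500_000.0  # incurred regardless of volume (assumed)

    def profit(demand: float) -> float:
        units_sold = min(demand, CAPACITY)  # demand above capacity is lost
        return units_sold * (PRICE - VARIABLE_COST) - FIXED_COST

    # Demand averages 100,000 units but plausibly runs 50% above or below.
    demands = [random.triangular(50_000, 150_000, 100_000) for _ in range(100_000)]

    profit_at_average = profit(100_000)  # plug the average into the model
    average_profit = sum(profit(d) for d in demands) / len(demands)

    print(f"Profit at average demand: {profit_at_average:,.0f}")
    print(f"Average of the profits:   {average_profit:,.0f}")  # noticeably lower

Because lost sales cap the upside while fixed costs weigh on the downside, the average of the profits comes out below the profit computed at average demand; that gap is the flaw of averages in miniature.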

Reflecting risk in forecasts is a simple concept and one that may seem easy to put into practice, but managers commonly ignore the uncertainties and simply collapse their forecasts into averages instead. We often see this in predictions of project completion timelines. Consider a project with 10 parallel tasks. Each task should take between three and nine months, with an average completion time of six months. If the 10 tasks are independent and each duration follows a symmetric triangular distribution, then every task has only an even chance of finishing within six months, and the chance that all 10 do, and hence that the project is completed in six months, is one-half raised to the 10th power: less than one in 1,000. The expected project duration is instead close to eight months. But using the six-month figure offers an almost irresistible temptation; after all, that's the average input.
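
Both figures can be checked with a quick Monte Carlo sketch like the one below (in Python; the symmetric triangular distribution peaked at six months is our reading of the setup). It simulates the project many times, taking the slowest of the 10 parallel tasks as the completion time.

    import random

    random.seed(1)
    TRIALS = 100_000

    done_in_six = 0
    total = 0.0
    for _ in range(TRIALS):
        durations = [random.triangular(3, 9, 6) for _ in range(10)]
        project = max(durations)  # parallel tasks: the slowest one governs
        total += project
        done_in_six += project <= 6

    print(f"P(done in 6 months): {done_in_six / TRIALS:.4f}")   # about 0.001
    print(f"Average duration:    {total / TRIALS:.2f} months")  # close to 8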

Despite the potential that point estimates carry for misleading decision makers, many firms default to them in forecasts. For example, Airbus and Boeing present passenger traffic and freight traffic annual growth rates over a 20-year horizon as point estimates in their respective biannual “Global Market Forecast” and “Current Market Outlook” reports. Although a close reading of the reports suggests that the forecasters considered ranges when generating the forecasts — and even conducted sensitivity analyses to understand the implications of different assumptions — such scenarios are not reported. A forecast showing the range and not just the average would be more valuable in making plans, and would help the industry avoid overconfidence.

In short, forecasting should not be treated as a game of chance, in which we win by getting closest to the eventual outcome. Occasionally being “right” with a particular prediction creates no real benefit and can in fact lead to a false sense of security. No one can produce correct point forecasts time and time again. Instead, it’s better to use the range of possible outcomes as a learning tool: a way to explore scenarios and to prepare for an inherently uncertain future.

Drivers of Uncertainty

The most useful forecasts do not merely document the range of uncertainties; they explain why the future may turn in different directions. They do this by “decomposing” the future into its component parts — the driving forces that determine the behavior of the system. Just asking “Why might this happen?” and “What would happen as a result?” helps to uncover possible outcomes that were previously unknown. Recasting the driving forces as metrics, in turn, leads to better forecasts.

For example, the general business cycle is a driving force that determines much of the demand in the appliance industry. Key economic metrics, such as housing starts, affect the sales of new units, but a consumer’s decision to replace or repair a broken dishwasher also depends on other factors related to the business cycle, such as levels of unemployment and consumer confidence. With metrics estimating these factors in hand, companies in that industry — including the Whirlpool Corporation in the U.S. and its leading European competitor, AB Electrolux — use sophisticated macroeconomic models to predict overall industry sales and, ultimately, their share of the sales.

Here, too, the effective use of metrics requires an embrace of uncertainty. Simply focusing on the output of the model (the projected sales figures) rather than the inputs (such as unemployment and consumer confidence) can actually do more harm than good. Whirlpool's planners use their industry forecast models to focus executive attention, not replace it. The planners present the model for the upcoming year or quarter, describing the logic that has led them to choose these particular levels of demand and the reason the outcomes are meaningful. Executives can set plans that disagree with the forecasters' predictions, but everyone has to agree on which input variables reflect an overly optimistic or pessimistic future. Even more important, managers can begin influencing some of the driving forces: For example, they can work with retail partners to encourage remodeling-driven demand to offset a drop in housing starts.

Black Boxes and Intuition

As the Whirlpool example demonstrates, mathematical models can help focus discussions and serve as a foundation for effective decision making. Thanks to the increasing power of personal computers and the Internet, we have a host of advanced mathematical tools and readily available data at our disposal for developing sophisticated models.

Unfortunately, such models can quickly prove to be a “black box,” whose core relationships and key assumptions cannot be understood by even a sophisticated user. Black-box models obfuscate the underlying drivers and accordingly can lead to poor decision making. Without a clear understanding of the drivers of the model, executives will not be attuned to the changes in the environment that influence the actual results. Executives who blindly trust a black-box model rather than looking for leading indicators inevitably find themselves captive to the “too little, too late” syndrome.

A lack of understanding of the black boxes tempts many managers to dismiss the planners’ models and simply “go with the gut” in predicting possible challenges and opportunities. But that approach poses equally daunting problems. Back in the early 1970s, Nobel laureate Daniel Kahneman and his longtime collaborator Amos Tversky began a research stream employing cognitive psychology techniques to examine individual decision making under uncertainty. Their work helped popularize the field of behavioral economics and finance. (See “Daniel Kahneman: The Thought Leader Interview,” by Michael Schrage, s+b, Winter 2003.) Work in this field has demonstrated that real-life decision makers don’t behave like the purely rational person assumed in classic decision theory and in most mathematical models.

As illustrated by a variety of optical illusions, our brains seek out patterns. The ability to fill in the blanks in an obscured scene helped early man see predators and game in the savannas and forests. Though critical in evolutionary survival, this skill can also lead us to see patterns where they do not exist. For example, when asked to create a random sequence of heads and tails as if they were flipping a fair coin 100 times, students inevitably produce a pattern that is easily discernible. The counterintuitive reality is that a random sequence of 100 coin flips has a 97 percent chance of including one or more runs of at least five heads or five tails in a row. Virtually no one assumes that will happen in an invented “random” sequence. (Any gambler’s perceived “lucky streak” offers a similar example of the typical human being’s pattern-making compulsion.)
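
The 97 percent figure is easy to confirm by simulation. The short Python sketch below (the trial count is an arbitrary choice of ours) flips 100 virtual coins repeatedly and counts how often a run of five or more identical outcomes appears.

    import random

    random.seed(7)
    TRIALS = 100_000

    def has_run_of_five(n_flips: int = 100) -> bool:
        run, longest, prev = 0, 0, None
        for _ in range(n_flips):
            flip = random.randint(0, 1)  # 0 = tails, 1 = heads
            run = run + 1 if flip == prev else 1
            longest = max(longest, run)
            prev = flip
        return longest >= 5

    hits = sum(has_run_of_five() for _ in range(TRIALS))
    print(f"Share with a run of 5+: {hits / TRIALS:.3f}")  # roughly 0.97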

Our tendency to see patterns even in random data contributes to a key problem in forecasting: overconfidence. Intuition leads people to consistently put too much confidence in their ability to predict the future. As professors, we demonstrate this bias to our MBA students with another simple class exercise. We challenge the students to predict, with a 90 percent confidence level, a range of values for a set of key indicators such as the S&P 500, the box office revenues for a new movie, or the local temperature on a certain day. If the students were well calibrated, only about one out of 10 outcomes would fall outside the predicted ranges. Inevitably, however, the forecasts fail to capture the actual outcome much more frequently than most of the students expect. Fortunately, the bias toward overconfidence diminishes over time as students learn to control their self-assurance.
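
Scoring the exercise amounts to counting escapes, as in the sketch below (the ranges and outcomes are made-up numbers for illustration): a well-calibrated student's 90 percent ranges should miss only about one value in 10, whereas overconfident students typically miss several.

    # (low, high) 90% ranges a student might give, with hypothetical outcomes.
    intervals = [(1000, 1300), (40, 80), (10, 25), (2.0, 4.5), (100, 220),
                 (5, 9), (30, 90), (0.5, 1.5), (15, 45), (200, 600)]
    actuals = [1350, 65, 8, 3.1, 150, 11, 50, 1.1, 60, 480]

    misses = sum(not (lo <= a <= hi)
                 for (lo, hi), a in zip(intervals, actuals))
    print(f"Missed {misses} of {len(intervals)} ranges; "
          f"a calibrated forecaster would miss about 1")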

History Matters

Although Peter Drucker fretted about looking out the rear window of the car, in reality too many forecasters fail to examine history adequately. Consider the subprime mortgage crisis. In 1998, AIG began selling credit default swaps to insure counterparties against the risk of losing principal and interest on residential mortgage-backed securities. AIG’s customers eventually included some of the largest banking institutions in the world, such as Goldman Sachs, Société Générale, and Deutsche Bank.

At the end of the fourth quarter of 1998, the delinquency rate for U.S. subprime adjustable-rate mortgages stood at just over 13 percent. By the end of the fourth quarter of 2008, this rate had almost doubled, to an astonishing 24 percent. This in turn led to the US$180 billion bailout of AIG. Although a 24 percent default rate seemed unprecedented to most bankers, a look back beyond their own lifetimes would have indicated the possibility. In 1934, at the height of the Great Depression, approximately 50 percent of all urban house mortgages were in default.

That is why looking back at past forecasts and their realizations can prove so valuable; it can help prevent overconfidence and suggest places where unexpected factors may emerge. Recently, researchers Victor Jose, Bob Nau, and Bob Winkler at Duke University proposed new rules to score and reward good forecasts. An effective “scoring rule” provides incentives that discourage the forecaster from sandbagging, a proverbial problem in corporate life. For example, Gap Inc. measures the performance of store managers on the difference between actual sales and forecast sales, as well as on overall sales. By assessing forecasting accuracy this way, Gap penalizes sales that come in above the forecast number as well as sales shortfalls. Unfortunately, Gap is an exception. To date, few firms have picked up on the research into incentive mechanisms and scoring rules to improve forecasts, despite their proven success in fields such as meteorology.
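
The details of Gap's formula are not public, so the sketch below is only a plausible reconstruction of the idea, with arbitrary weights of our own: reward the manager for sales, but subtract a penalty that grows with the forecast miss in either direction.

    def manager_score(forecast: float, actual: float,
                      sales_weight: float = 1.0,
                      accuracy_weight: float = 2.0) -> float:
        # Symmetric penalty: overshooting a lowballed forecast hurts
        # just as missing an inflated one does.
        return sales_weight * actual - accuracy_weight * abs(actual - forecast)

    print(manager_score(forecast=100, actual=130))  # sandbagged: 130 - 60 = 70
    print(manager_score(forecast=130, actual=130))  # honest:     130 -  0 = 130

Under a rule like this, beating a deliberately lowballed number scores worse than hitting an honest one, which removes the incentive to sandbag.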

It may seem like an obvious thing to do, but most companies do not revisit their forecasts and track the actual results. A recent survey by decision analysis consultant Douglas Hubbard found that only one out of 35 companies with experienced modelers had ever attempted to check actual outcomes against original forecasts — and that company could not present any evidence to back up the claim. Airbus and Boeing spend resources generating their “Global Market Forecast” and “Current Market Outlook” reports, but they do not report on the accuracy of their previous forecasts. Eli Lilly, by contrast, has developed a systematic process of tracking every drug forecast to understand its predictive accuracy.

Wisdom of Crowds

Increasingly, the wisdom of relying on expert forecasters is being challenged, even when those experts have been trained to rein in their overconfidence through continuous feedback on actual results. Journalist James Surowiecki presented the case in his bestseller, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations (Doubleday, 2004). Furthermore, research into forecasting in a wide range of fields by Wharton professor J. Scott Armstrong showed no important advantage for expertise. In fact, research by James Shanteau, distinguished professor of psychology at Kansas State University, has shown that expert judgments are often logically inconsistent. For example, medical pathologists presented with the same evidence twice would reach a different conclusion 50 percent of the time.

The old game of estimating the number of jelly beans in a jar illustrates the innate wisdom of the crowd. In a class of 50 to 60 students, the average of the individual guesses will typically be better than all but one or two of the individual guesses. Of course, that result raises the question of why you shouldn’t use the best single guesser as your expert forecaster. The problem is that we have no good way to identify that person in advance — and worse yet, that “expert” may not be the best individual for the next jar because the first result likely reflected a bit of random luck and not a truly superior methodology.
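
A toy simulation shows why the averaging works; the bean count and the unbiased, normally distributed guessing error below are assumptions (real guesses often share a bias, which averaging cannot remove).

    import random

    random.seed(3)
    TRUE_COUNT = 850  # hypothetical number of beans in the jar
    guesses = [random.gauss(TRUE_COUNT, 200) for _ in range(55)]  # class of 55

    crowd_error = abs(sum(guesses) / len(guesses) - TRUE_COUNT)
    better = sum(abs(g - TRUE_COUNT) < crowd_error for g in guesses)

    print(f"Crowd average off by {crowd_error:.0f} beans; "
          f"only {better} of {len(guesses)} students did better")

Because independent errors partly cancel, the error of the mean shrinks roughly with the square root of the class size, landing the average near the top of the class.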

For this reason, teams of forecasters often generate better results (and decisions) than individuals, but the teams need to include a sufficient degree of diversity of information and perspectives. A naive forecaster often frames the question a different way and thinks more deeply about the fundamental driver of the forecast than an expert who has developed an intuitive, but often overconfident, sense of what the future holds.

Group dynamics can produce a different sort of challenge in bringing together a team; people vary in their styles and assertiveness. The most vocal or most senior person, rather than the person with the keenest sense of possibilities, might dominate the discussion and overly influence the consensus. This has been the case in a host of classroom simulations based on wildfires, plane crashes, and boat wrecks, all of which place teams in a simulated high-pressure situation where collective insight should help. Typically, a dominant personality steps forth and drives the process toward his or her predetermined view, making little or no use of the wisdom of the crowd. In The Drunkard's Walk: How Randomness Rules Our Lives (Pantheon, 2008), physicist and writer Leonard Mlodinow describes a number of research studies showing that most people put too much confidence in the most senior or highest-paid person. Does that sound like your executive team?

Culture and Capability

To become proficient at forecasting, a company must develop capabilities for both achieving insight and converting that insight into effective decision making. The firm need not seek out a star forecaster; instead, it should invest in cultivating an open atmosphere of dialogue and scrutiny about uncertainty, one that brings to the fore a more complete picture of the expert knowledge that already resides in many of its existing employees.

The resulting culture will be one in which managers recognize and deal with uncertainty more easily; they won’t feel they have to resort to the extreme of either throwing up their hands in despair or pretending that they have all the answers.

In the end, overcoming the problems and traps in forecasting probably requires the use of all of these approaches together, within a supportive culture. An example of how difficult this is comes from the U.S. National Aeronautics and Space Administration (NASA), which probably employs as analytically rigorous a set of people as can be found in any single organization.

The disintegration of the space shuttle Columbia on reentry in 2003, during its 28th mission, demonstrates how culture can overrule capability. After problems during the shuttle's launch, NASA engineers developed extensive models for a wide range of scenarios, including the possibility that foam pieces had struck the wing, the event ultimately deemed responsible for the accident. But rather than focus on contingency plans for dealing with an issue that was known even though its impact was not, NASA officials placed too much faith in their mathematical models, which suggested that the wing had not sustained a dangerous degree of damage. The results were catastrophic.

Less than a month after the Columbia disaster, this pervasive cultural problem at NASA was described in an article in the New York Times that quoted Carnegie Mellon University professor Paul Fischbeck. (Fischbeck, an expert on decision making and public policy, had also been the coauthor of a 1990 NASA study on the 1986 Challenger explosion caused by an O-ring failure at cold temperatures.) “They had a model that predicted how much damage would be done,” he said, “but they discounted it, so they didn’t look beyond it. They didn’t seriously consider any of the outcomes beyond minor tile damage.” In other words, even NASA’s brilliant rocket scientists couldn’t outsmart their own inherent biases. They needed processes and practices to force them to do so.

And so, probably, does your company. Too many managers dismiss the inherent uncertainty in the world and therefore fail to consider improbable outcomes or invest sufficient effort in contingency plans. The world is full of unknowns, even rare and difficult-to-predict “black swan” events, to use the term coined by trader, professor, and best-selling writer Nassim Nicholas Taleb. Overreliant on either their intuition or their mathematical models, companies can become complacent about the future.

Consider, for example, the 2002 dock strike on the West Coast of the U.S., which disrupted normal shipping in ports from San Diego to the border with Canada for a couple of weeks. A survey conducted by the Institute for Supply Management shortly afterward found that 41 percent of the respondents had experienced supply chain problems because of the strike — but only 25 percent were developing contingency plans to deal with future dock strikes.

We can train our intuition to be a better guide in decision making. To do so, we must be aware of our biases and remember that all models start with assumptions. Engaging a diverse set of parties, including relatively naive ones, forces us to articulate and challenge those assumptions by seeking empirical data. No model is objective; none reflects some universal truth. Instead, business models represent highly subjective views of an uncertain world. Rather than seeking the ultimate model or expert, managers should adopt the axiom cited by General Dwight D. Eisenhower regarding the successful but highly uncertain D-day invasion in World War II: “Plans are nothing; planning is everything.” A good forecast informs decisions today, but, equally important, it forces us to consider and plan for other possibilities.

Reprint No. 10202

Author profiles:

  • Tim Laseter holds teaching appointments at an evolving mix of leading business schools, currently including the Darden School at the University of Virginia and the Tuck School at Dartmouth College. He is the author of Balanced Sourcing (Jossey-Bass, 1998) and Strategic Product Creation (with Ronald Kerber; McGraw-Hill, 2007), and is an author of the newest edition of The Portable MBA (Wiley, 2010). Formerly a partner with Booz & Company, he has more than 20 years of experience in operations strategy.
  • Casey Lichtendahl is an assistant professor of business administration at the University of Virginia’s Darden Graduate School of Business. His research focuses on forecasting and decision analysis.
  • Yael Grushka-Cockayne is an assistant professor of business administration at the University of Virginia’s Darden Graduate School of Business. Her research focuses on project management, strategic and behavioral decision making, and new product development.