Black Boxes and Intuition
As the Whirlpool example demonstrates, mathematical models can help focus discussions and serve as a foundation for effective decision making. Thanks to the increasing power of personal computers and the Internet, we have a host of advanced mathematical tools and readily available data at our disposal for developing sophisticated models.
Unfortunately, such models can quickly prove to be a “black box,” whose core relationships and key assumptions cannot be understood by even a sophisticated user. Black-box models obscure the underlying drivers of a business and can therefore lead to poor decisions. Executives who do not understand what drives a model will not be attuned to the changes in the environment that influence actual results. Executives who blindly trust a black-box model rather than looking for leading indicators inevitably find themselves captive to the “too little, too late” syndrome.
A lack of understanding of the black boxes tempts many managers to dismiss the planners’ models and simply “go with the gut” in predicting possible challenges and opportunities. But that approach poses equally daunting problems. Back in the early 1970s, Nobel laureate Daniel Kahneman and his longtime collaborator Amos Tversky began a research stream employing cognitive psychology techniques to examine individual decision making under uncertainty. Their work helped popularize the field of behavioral economics and finance. (See “Daniel Kahneman: The Thought Leader Interview,” by Michael Schrage, s+b, Winter 2003.) Work in this field has demonstrated that real-life decision makers don’t behave like the purely rational person assumed in classic decision theory and in most mathematical models.
As illustrated by a variety of optical illusions, our brains seek out patterns. The ability to fill in the blanks in an obscured scene helped early man see predators and game in the savannas and forests. Though critical in evolutionary survival, this skill can also lead us to see patterns where they do not exist. For example, when asked to create a random sequence of heads and tails as if they were flipping a fair coin 100 times, students inevitably produce a pattern that is easily discernible. The counterintuitive reality is that a random sequence of 100 coin flips has a 97 percent chance of including one or more runs of at least five heads or five tails in a row. Virtually no one assumes that will happen in an invented “random” sequence. (Any gambler’s perceived “lucky streak” offers a similar example of the typical human being’s pattern-making compulsion.)
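The 97 percent figure is easy to check with a short Monte Carlo simulation (a minimal sketch; the function names and trial counts are our own, not from any particular source):

```python
import random

def has_run(flips, length=5):
    """Return True if the sequence contains a run of `length`
    identical outcomes (all heads or all tails) in a row."""
    run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        if run >= length:
            return True
    return False

def estimate_run_probability(n_flips=100, run_length=5,
                             trials=100_000, seed=42):
    """Monte Carlo estimate of the chance that `n_flips` fair coin
    flips include at least one run of `run_length` heads or tails."""
    rng = random.Random(seed)
    hits = sum(
        has_run([rng.random() < 0.5 for _ in range(n_flips)], run_length)
        for _ in range(trials)
    )
    return hits / trials

if __name__ == "__main__":
    # The estimate lands close to 0.97, matching the claim above.
    print(f"Estimated probability: {estimate_run_probability():.3f}")
```

A student-invented “random” sequence, by contrast, rarely contains any run longer than three or four.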
Our tendency to see patterns even in random data contributes to a key problem in forecasting: overconfidence. Intuition leads people to put far too much confidence in their ability to predict the future. As professors, we demonstrate this bias to our MBA students with another simple class exercise. We challenge the students to predict, with 90 percent confidence, a range of values for a set of key indicators such as the S&P 500, the box office revenues for a new movie, or the local temperature on a certain day. If the students were well calibrated, only about one actual outcome in 10 would fall outside its predicted range. Inevitably, however, the ranges miss the actual outcomes much more frequently than most of the students expect. Fortunately, the bias toward overconfidence diminishes over time as students learn to temper their self-assurance.
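Scoring the exercise amounts to counting how often the actual outcome lands inside each predicted range (a minimal sketch; the indicator values below are hypothetical, invented only to illustrate the arithmetic):

```python
def interval_hit_rate(forecasts, actuals):
    """Fraction of actual outcomes that fall inside the forecaster's
    [low, high] ranges; a well-calibrated 90 percent forecaster
    should score roughly 0.9."""
    hits = sum(low <= actual <= high
               for (low, high), actual in zip(forecasts, actuals))
    return hits / len(forecasts)

if __name__ == "__main__":
    # Hypothetical 90 percent ranges for ten indicators, paired with
    # the (also hypothetical) values that actually occurred.
    forecasts = [(1200, 1400), (50, 120), (60, 75), (900, 1100),
                 (10, 30), (2.5, 4.0), (100, 300), (0, 15),
                 (40, 55), (1.0, 2.0)]
    actuals = [1350, 180, 68, 1150, 22, 3.1, 450, 9, 41, 2.4]
    print(f"Hit rate: {interval_hit_rate(forecasts, actuals):.0%}")
    # prints "Hit rate: 60%" -- well short of the 90 percent a
    # calibrated forecaster would achieve
```

A hit rate well below 90 percent, as in this invented example, is the signature of overconfidence: the predicted ranges were too narrow.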
Although Peter Drucker fretted about looking out the rear window of the car, in reality too many forecasters fail to examine history adequately. Consider the subprime mortgage crisis. In 1998, AIG began selling credit default swaps to insure counterparties against the risk of losing principal and interest on residential mortgage-backed securities. AIG’s customers eventually included some of the largest banking institutions in the world, such as Goldman Sachs, Société Générale, and Deutsche Bank.