
The Power of Plausibility Theory

A new form of decision analysis is helping executives reevaluate risk management.

(originally published by Booz & Company)

Investors and executives are spending an awful lot of time these days analyzing the “bet the company” decisions that corporate leadership teams increasingly are called upon to make. Some are obviously, colossally bad — for example, the late dot-com Webvan’s decision to build facilities three times larger than market demand required, which ran up an $830 million cumulative loss and bankrupted the company. Other big bets turn out spectacularly well. In the mid-1980s, theorizing that Japanese competition would commoditize his main business, Andy Grove, then president of Intel, decided to fundamentally shift the company from memory chips to microprocessors, a risky strategy that, as it turned out, positioned Intel to become the dominant player it is today.

But a company’s fate doesn’t hinge only on the big strategic bets of the top brass. More commonly, it depends upon the myriad day-to-day decisions of managers at multiple levels. Do we invest in a new manufacturing technology? What price should we accept for a long-range contract with a major customer? Do we introduce a new product in a new market segment? If enough of these routine decisions go awry — and they easily can — a company will eventually falter. Managers, although rational, still possess the human biases, frailties, and emotions that can cloud effective decision making.

To counteract the hazard of human error in risk assessment and decision making, businesses for decades have employed rigorous analytical techniques (such as decision trees, simulation models, and probabilistic reasoning) drawn from a discipline known as decision analysis. Yet, despite several decades of exposure to these techniques, human intuition and emotion still upend the best-laid plans of CEOs.

Defenders of existing methods of decision analysis argue for better training to overcome these weaknesses. But rather than fight human behavior, decision analysis can embrace intuition. Plausibility Theory is a promising new approach that accepts the rationality of intuitive decision making and offers business leaders a path forward.

The analytic underpinnings — as well as the weaknesses — of conventional decision analysis lie in Bayesian statistics, named for Thomas Bayes, an 18th-century English Presbyterian minister who developed rules for weighing the likelihood of different events and their expected outcomes. In the 1960s, Harvard Business School Professor Howard Raiffa popularized the application of Bayesian analysis in a business context. Managers influenced by Bayesian theory make decisions based on a rigorous calculation of the probabilities of all the possible outcomes. By weighting the value of each outcome by its probability and summing the totals, Bayesian analysis calculates an “expected value” for any given decision. The technique teaches managers to accept decisions with positive expected values and avoid those with negative ones.
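In computational terms, the rule is nothing more than a probability-weighted sum. The short Python sketch below illustrates it; the payoffs and probabilities are hypothetical numbers of our own invention, not drawn from any real decision:

    # A minimal sketch of the Bayesian expected-value rule: weight each
    # payoff by its probability and sum the totals.

    def expected_value(outcomes):
        """Outcomes are (probability, payoff) pairs."""
        return sum(p * payoff for p, payoff in outcomes)

    # Hypothetical decision: a 60% chance of gaining $200,000 against
    # a 40% chance of losing $150,000.
    decision = [(0.60, 200_000), (0.40, -150_000)]

    ev = expected_value(decision)
    print(f"Expected value: ${ev:,.0f}")  # $60,000 -> accept under the Bayesian rule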

The Gambling Instinct
Unfortunately, making decisions on the basis of an expected value is not very intuitive for most people. Consider a coin toss. You are offered a bet in which you’ll receive $100,000 if the coin lands on heads, but you must pay $50,000 if it lands on tails. Although the expected value of this bet is a positive $25,000 ([50% x $100,000] – [50% x $50,000]), few people would rush to take the wager. The potential downside — losing $50,000 — is simply too great.

However, many decision makers who would reject the high-stakes gamble on a single flip of a coin might accept a redefined version of the gamble: the same stakes wagered on each of 100 flips of the same coin. Although the expected value of each individual flip remains $25,000, the chance of a major loss is now extremely low.
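Readers who want to check that claim can do so directly. The short Python sketch below uses the payoffs stated above ($100,000 per head, $50,000 lost per tail) and counts the outcomes of 100 fair flips that end in a net loss:

    from math import comb

    FLIPS = 100
    WIN, LOSS = 100_000, 50_000

    # A net loss requires WIN*h - LOSS*(FLIPS - h) < 0, i.e. 33 or
    # fewer heads out of 100 flips.
    p_loss = sum(comb(FLIPS, h) for h in range(34)) / 2**FLIPS

    print(f"Chance of losing money over 100 flips: {p_loss:.4%}")  # about 0.04%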

Even those with limited mathematical training recognize that the acceptance of 100 independent coin tosses lowers risk; it’s the logic that underlies diversified stock portfolios. But despite the appeal of portfolio betting, Bayesian decision analysis faults the intuition behind it as the “fallacy of large numbers,” since the single bet always has the same expected value of $25,000, whether it is part of a portfolio or not. Paul Samuelson, the 1970 Nobel laureate in economics, showed that even though our intuition tells us to reject the one-shot bet and to accept the portfolio of bets, it is logically inconsistent to do so.

Despite the mathematical proof defending the logic of expected value, in the real world, when hearts and minds do battle, the heart — one’s fears and hopes — often prevails. Our gut instinct knows that focusing on the average result may work in the long run, but as individuals we are more concerned about the specific case: How much can we lose? What’s the likelihood of a bad result occurring?

Plausibility Theory replaces the Bayesian expected-value calculation with a risk threshold that is more comfortable for most people. Although developed only in the last five years, it shows great promise as a way to drive rigorous decision analysis while focusing on the real priority of most decision makers: downside risk. This new theory still examines the range of possible outcomes but focuses on the probability of hitting a threshold point — such as a net loss — relative to an acceptable risk.

For example, using Plausibility Theory to analyze the coin-tossing bet would yield different conclusions about the appropriateness of the one-time bet versus the portfolio of 100 bets. A conservative decision maker might set as a risk threshold no more than a 1 percent chance of losing money. Using the calculus of Plausibility Theory, the gamble on a single coin toss — which presents a 50 percent chance of losing $50,000 — would be rejected. But the gamble of flipping the coin 100 times would be acceptable because the probability of a loss would be well under the risk threshold.
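The decision rule itself can be stated in a few lines of code. This sketch, assuming as above a fair coin and a 1 percent risk threshold, rejects the single flip and accepts the portfolio of 100:

    # A sketch of the Plausibility Theory rule described above: accept a
    # gamble only if the chance of crossing the loss threshold stays
    # below the decision maker's acceptable risk.

    from math import comb

    RISK_THRESHOLD = 0.01  # no more than a 1% chance of losing money

    def p_loss_coin_bets(n_flips, win=100_000, loss=50_000):
        """Probability of a net loss over n fair coin flips."""
        # A net loss occurs whenever win*h - loss*(n - h) < 0.
        return sum(comb(n_flips, h) for h in range(n_flips + 1)
                   if win * h - loss * (n_flips - h) < 0) / 2**n_flips

    for n in (1, 100):
        p = p_loss_coin_bets(n)
        verdict = "accept" if p < RISK_THRESHOLD else "reject"
        print(f"{n:3d} flip(s): P(loss) = {p:.4%} -> {verdict}")
    # 1 flip: a 50% chance of loss -> reject
    # 100 flips: roughly 0.04% -> accept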

Unknowable Risks
The use of a risk threshold also resolves another conundrum associated with Bayesian statistics: the problem of unknowable risk. Most business decisions involve a mix of knowable and unknowable risks. Knowable risks involve predictable odds. For example, Capital One Financial Corporation in Richmond, Va., amasses data on millions of customers, which allows the company to predict precisely the probability that a customer with a certain demographic profile will default on his or her credit card debt. Uncertainty over whether a particular customer will default remains, but the odds of default are understood well enough that the company can set interest rates high enough to profit. With enough data, such decisions are like the roulette wheel at a casino. Any one customer may win or lose, but “the house” will definitely come out ahead in the long run.
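A stylized sketch shows how a knowable default probability translates into pricing. The figures below (default rate, funding cost, balance) are hypothetical, and real card economics involve recoveries, fees, and much else:

    # A stylized sketch of pricing against a knowable risk, with
    # hypothetical numbers: if a segment's default probability is known,
    # the lender can set a rate at which the portfolio profits on
    # average, like the casino's house edge.

    P_DEFAULT = 0.04       # assumed default probability for this segment
    COST_OF_FUNDS = 0.05   # assumed funding cost for the lender
    BALANCE = 1_000        # average balance per customer

    def expected_profit(rate):
        # Paying customers yield interest; defaulters cost the balance.
        return ((1 - P_DEFAULT) * rate * BALANCE
                - P_DEFAULT * BALANCE - COST_OF_FUNDS * BALANCE)

    # Break-even: (1 - p)*r = p + cost  ->  r = (p + cost) / (1 - p)
    breakeven = (P_DEFAULT + COST_OF_FUNDS) / (1 - P_DEFAULT)
    print(f"Break-even rate: {breakeven:.2%}")                     # ~9.4%
    print(f"Profit per customer at 12%: ${expected_profit(0.12):,.2f}")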

In contrast, unknowable risks cannot be defined with predictable odds. When Capital One first experimented with an auto loan business, it had no historical data to predict the behavior of this new type of customer. Bayesian decision analysis defines a probability for such unknowable risks by inference from the choices made by the decision maker. This approach, however, can also lead to nonintuitive results.

Consider another hypothetical gamble (a bit more complicated than a coin toss, but necessary to illustrate the point, so please bear with us). It’s based on randomly drawing a ball from an urn containing three balls. You have been assured the urn contains one red ball. All you know about the other two balls is that they are either blue or yellow: The urn could contain one red plus two blue balls; or it could have one red plus two yellow balls; or it could contain one red, one blue, and one yellow ball. The knowledge that there is one red ball provides an example of “knowable risk.” The uncertain mix of blue and yellow balls represents “unknowable risk.”

You are given the option to receive a payout of $1,500 based upon the color of one ball drawn randomly from the urn. You can pick red or blue — not yellow — as your winning color. A strict Bayesian view treats the two choices as equal: With no information favoring blue over yellow, the expected number of blue balls is one, giving blue the same one-in-three chance as red. But, since it is possible that the urn contains no blue balls, most people will choose red, for it offers the known probability of one chance in three of winning.

If you are then offered another gamble from an identical urn, your choices can easily appear, to a Bayesian, even more irrational. Suppose you are offered $750 if either one of a pair of selected colors — blue/yellow or red/yellow — is drawn. In this scenario, the choices are again between a known and unknown risk. Although you don’t know the mix of blue and yellow, you do know that only one ball is red. So the first option of selecting the pair of blue and yellow as your winning colors produces the “known” probability, a two-thirds chance of winning. The second option, choosing red and yellow, returns us to the “unknown,” because we don’t know how many yellow balls are in the urn: There could be zero, one, or two, and each scenario would produce a very different probability of winning.

So, most people choose the first option because it offers a precisely quantifiable, known probability of two-thirds versus an unknown probability.

Bayesians are troubled by this behavior. If you chose red in the first gamble, it suggests you believe that it is more likely that the urn contains two yellow balls than two blue balls. But, if this is true, you should then prefer the combination of red and yellow in the second bet. From a Bayesian point of view, you are behaving inconsistently based on the contradictory probabilities implicit in your decisions.
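Enumerating the three possible compositions of the urn makes the asymmetry, and the apparent inconsistency, easy to see. A short Python sketch tabulates each choice’s chance of winning under every composition:

    # The urn holds three balls: one known red, the other two each blue
    # or yellow in an unknown mix. Compute the win probability of each
    # choice under every possible composition.

    from fractions import Fraction

    compositions = [("red", "blue", "blue"),
                    ("red", "blue", "yellow"),
                    ("red", "yellow", "yellow")]

    def p_win(urn, winning_colors):
        return Fraction(sum(ball in winning_colors for ball in urn), len(urn))

    for choice in [{"red"}, {"blue"},                      # first gamble
                   {"blue", "yellow"}, {"red", "yellow"}]: # second gamble
        probs = [str(p_win(urn, choice)) for urn in compositions]
        print(f"{sorted(choice)}: {probs}")

    # red          -> 1/3 in every case (knowable risk)
    # blue         -> 2/3, 1/3, or 0, depending on the unknown mix
    # blue+yellow  -> 2/3 in every case (knowable risk)
    # red+yellow   -> 1/3, 2/3, or 1, depending on the unknown mix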

Plausibility Theory finds no fault with these intuitive choices. We are rationally choosing knowable risks over unknowable risks because they allow us to examine our decision against a risk threshold.

Back to Business
Analogous gambles occur regularly in companies whenever unknowable risks with no historical precedents drive the profitability of a business strategy. Go back to the Webvan story. According to a February 2001 report in the Wall Street Journal, a venture capitalist told Webvan’s founder, retailing entrepreneur Louis Borders, “Louis, I think this is going to be a billion-dollar company.” Mr. Borders replied, “Naw, it’s going to be $10 billion. Or zero.”

In a sense, the colloquy is like the urn example, for it underscores the limited value of treating all decisions with a common metric of “expected value” presumably equally relevant to any “rational player.” Mr. Borders and the venture capitalist each had different risk thresholds that made them willing to take the bet on the unknown. A successful and wealthy entrepreneur, Mr. Borders implicitly understood that he was making a one-off gamble on an unknowable risk with potentially extreme outcomes. The venture capitalist’s willingness to wager was based on his ownership of a portfolio of risky businesses, wherein a few winners more than justify the majority of startups that bomb. Unfortunately, the thousands of individual investors caught up in the hype surrounding Webvan’s public offering clearly had far lower risk thresholds than either Mr. Borders or the venture capitalist, but most failed to appreciate the unknowable nature of the risks in Webvan. A more explicit recognition of their individual risk thresholds could have saved many naive investors from squandering their nest eggs.

Establishing a risk threshold helps to define downside risks. The financial-services industry, for instance, has begun to embrace a rigorous analysis of downside risk rather than a simple examination of expected value. Using historical data, regulators can assess the amount of money that a bank stands to lose with a probability of some threshold percentage over a specified period of time. The Basel Committee on Banking Supervision recently set forth detailed guidelines for the calculation of a risk-threshold limit called “value-at-risk,” to determine a bank’s required minimum capital holdings (www.bis.org). These capital adequacy rules are proposed for the Basel II Accord.
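The Basel guidelines are far more elaborate, but the core idea fits in a few lines. The sketch below computes the simplest historical-simulation form of value-at-risk from a simulated series of daily profit-and-loss figures; the distribution and its parameters are arbitrary stand-ins for a real trading history:

    # An illustrative sketch only: historical-simulation value-at-risk
    # from a simulated daily profit-and-loss series.

    import numpy as np

    rng = np.random.default_rng(42)
    daily_pnl = rng.normal(loc=10_000, scale=250_000, size=1_000)

    def value_at_risk(pnl, confidence=0.99):
        """Loss not exceeded with the given confidence over the period."""
        return -np.percentile(pnl, 100 * (1 - confidence))

    print(f"99% one-day VaR: ${value_at_risk(daily_pnl):,.0f}")
    # i.e., on 99% of days the bank should not lose more than this amount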

Although an explicit calculation of downside risk is still rare outside the field of financial services, the concept could clearly be applied more broadly. Consider a business looking to build a plant in China. The managers might analyze the decision by comparing the cost of the investment to the risk of complete failure. The analysis would examine a range of scenarios — for example, a rapid growth in consumer affluence coupled with favorable exchange rates, versus continued poverty-level existence for the vast majority of citizens coupled with protectionist government tariffs. Although the “expected value” across all of the scenarios may be large because of some very high returns in the most favorable scenarios, the “downside risk” of the worst scenarios may be beyond the risk threshold for the company.
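In code, such an analysis amounts to comparing two summary numbers: the probability-weighted average across scenarios, and the total probability of the loss-making ones. The scenario payoffs and probabilities below are invented purely for illustration:

    # A sketch of the plant decision described above, with hypothetical
    # scenario payoffs and probabilities. Expected value says "go"; the
    # downside check against the company's risk threshold says otherwise.

    scenarios = {                              # (probability, payoff $M)
        "rapid affluence, favorable FX":       (0.20,  900),
        "moderate growth":                     (0.50,  150),
        "stagnation plus protectionist tariffs": (0.30, -400),
    }

    RISK_THRESHOLD = 0.10  # accept at most a 10% chance of losing money

    expected = sum(p * v for p, v in scenarios.values())
    p_loss = sum(p for p, v in scenarios.values() if v < 0)

    print(f"Expected value: ${expected:,.0f}M")        # positive: $135M
    print(f"P(loss) = {p_loss:.0%} vs threshold {RISK_THRESHOLD:.0%}")
    print("Accept" if p_loss <= RISK_THRESHOLD
          else "Reject: downside beyond the risk threshold")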

Changing Paradigms
Rigorous application of Plausibility Theory’s new math could change the way many strategic decisions are made. No longer forced to choose between their gut instincts and “rational” analysis, managers can now apply rigorous analysis in a far more instinctive way. Plausibility Theory embraces rather than challenges the rationality of intuitive decision making. Its use of risk thresholds offers an approach to decision analysis that is much easier for managers to accept than the Bayesian expected value. Plausibility Theory offers a comprehensive set of consistent rules for decision making. It draws upon the hypothesis-testing logic of classical statistical methodology while avoiding some of the “paradoxes” created by the Bayesian method.

Further work remains to be done, of course, before the new theory can be established in the world of statistical analysis. The current Bayesian paradigms draw upon more than a century of testing and refinement by several generations of mathematicians, whereas the basic logic of Plausibility Theory has emerged only in the last five years. Nonetheless, many signs within the world of business suggest that the time is ripe for a fundamental rethinking of our definitions of “rational” thought.

The greatest resistance to this new theory as a method for strategic decision making may come from within the community of academics, economists, and statisticians committed to the Bayesian view. As one senior scholar commented, “I hope I die before this takes over. I’ve invested too much effort learning the traditional model to switch at this point.” But businesspeople tend to follow a more practical approach: If it works, use it.

Reprint No. 04204

Author profiles:


Tim Laseter (lasetert@darden.virginia.edu) is the author of Balanced Sourcing: Cooperation and Competition in Supplier Relationships (Jossey-Bass, 1998) and serves on the operations faculty at the Darden Graduate School of Business Administration at the University of Virginia. Formerly a vice president with Booz Allen Hamilton, he has 20 years of experience in supply chain management and operations strategy.

Matthias Hild (hildm@darden.virginia.edu) is an assistant professor of business with the Darden Graduate School of Business Administration at the University of Virginia. His forthcoming book, The Inference Machine: On the First Principles of Inductive Reasoning (Cambridge University Press, 2004) synthesizes his latest research on decision making, statistics, and risk management.
 