The Myth of Cost-Benefit Analysis
The U.S. government’s method for evaluating risk isn’t as objective as it’s made out to be.
(originally published by Booz & Company)
But a Harvard Center for Risk Analysis cost-benefit study of the same regulation, paid for by the EPA and coauthored and peer-reviewed by three EPA scientists, had reached a dramatically different conclusion. It showed that the same $750 million the coal industry would give up would allow the public to reap a benefit of nearly $5 billion per year, 100 times the EPA’s public estimate, by decreasing the neurological and cardiac damage attributed to mercury poisoning.
What was the source of the discrepancy? The EPA’s cost-benefit analysis had focused on the effects of reducing mercury levels in freshwater fish only, ignoring the possibility that ocean fish might also be affected by coal plant emissions. But the Harvard report had asserted, with at least enough credibility to merit investigation, that coal emissions could affect mercury levels in tuna and other ocean fish. And, to be sure, most of the fish Americans eat — including tuna, which is responsible for much of the mercury exposure in the U.S. — comes from oceans. The EPA had also greatly reduced the estimated cost of cardiac damage in its analysis, declaring that although mercury could indeed damage the heart, the harm might be offset by the cardiac benefits of eating fish.
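To make the arithmetic of that discrepancy concrete, here is a back-of-the-envelope sketch in Python. The $750 million cost and the nearly $5 billion Harvard benefit come from the account above; the EPA’s own benefit figure is not stated here, so the sketch infers it from the “100 times” comparison. Treat it as an illustration of the scoping effect, not a reconstruction of either agency’s model.

```python
# A back-of-the-envelope sketch of the mercury-rule discrepancy described above.
ANNUAL_COST = 750e6                    # industry compliance cost, from the text
HARVARD_BENEFIT = 5e9                  # annual public benefit, from the text
EPA_BENEFIT = HARVARD_BENEFIT / 100    # inferred from the "100 times" comparison

print(f"EPA framing (freshwater fish only):    benefit/cost = {EPA_BENEFIT / ANNUAL_COST:.2f}")
print(f"Harvard framing (ocean fish included): benefit/cost = {HARVARD_BENEFIT / ANNUAL_COST:.2f}")
# Same rule, same cost: one scoping choice moves the ratio from about 0.07
# (the rule "fails" the test) to about 6.7 (it "passes" handily).
```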
This example, along with scores like it over the past decades, provides ample evidence that using cost-benefit analysis to determine the value of new regulations isn’t working, and that it’s time to find a better approach. Businesspeople should be particularly eager for such a change, because although many regard cost-benefit analysis almost as a game that can be finessed, “customizing” the numbers can have dangerous consequences. Beyond exposing businesses and the public to undue risk, it can effectively quash new ventures or arbitrarily favor some technologies over others.
Cost-benefit analysis has long been extolled as the best method for stripping regulatory decisions of bias and anchoring them in objective, real-world economic consequences. To that end, President Bill Clinton signed Executive Order 12866 in 1993, requiring that every significant regulatory proposal — even those mandated by Congress — undergo at least one cost-benefit analysis before being submitted for approval. This policy assumes that cost-benefit analysis is unbiased, but that is not the case in practice. As the method has gained momentum over the past 40 years, its flaws have become more apparent. Sometimes it gives government agencies or corporations a disproportionate influence over what goes into the analysis and therefore what comes out of it. Other times, it skews the results in unexpected ways simply because of hidden biases or unintentional misapplication of the data. Even when conducted with the best of intentions, the method is still problematic, because it substitutes calculation for informed and considered judgment. Although we need not abandon such analysis altogether, we must recognize how and why it is subject to misuse and abuse.
The main problem with cost-benefit analysis is that it requires translating all the value in a given proposal into economic terms. To proponents, this is its chief asset. Because the cost-benefit approach uses economic value as a universal metric, they say, it is a neutral tool; monetizing risk and benefit is the least biased way to judge the impact of regulatory decisions.
But quantitative analyses are never neutral. To be useful, any data, including economic data, must be considered in the context of the decision that is being made. Also, no matter how clever the mathematics, certain key inputs in a cost-benefit analysis cannot be translated into economic value. Security and safety, the preservation of wildlife and open spaces, the reduction of fear in a community, and scientific uncertainty in fields that spawn technological innovation are all economic intangibles — and omitting them when they are clearly important factors should invalidate the analysis. But it never does.
Perhaps most important, analysis is an act performed by human beings. Experts in complex systems and modeling have long asserted that human beings operating in isolation, or within the mental models of their professional training, simply cannot be objective enough to come up with the right answers to the kinds of problems that cost-benefit analysis addresses. Certainly, within tightly defined boundaries — when they practice the scientific method, for instance — people can approximate objectivity. Yet even then, they cannot help making value judgments at every step: choosing what assumptions to build into their models, what data to include in and leave out of their calculations, what rules to use to compute critical values such as the cost of a human life, what populations to attribute costs and benefits to, and how to adjust for imperfections in market prices.
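To ground that list of judgment calls, here is a deliberately toy sketch in Python of how just two of them, the discount rate and the value assigned to a statistical life, can swing a single rule’s monetized benefits by roughly a factor of three. Every figure in it is hypothetical; it is not any agency’s actual model.

```python
# A toy illustration of how two routine value judgments, the discount rate
# and the "value of a statistical life" (VSL), swing the monetized benefit
# of the same hypothetical rule. Every number below is invented.

def discounted_benefits(lives_per_year, vsl, rate, years):
    """Present value of the monetized lives saved over the analysis horizon."""
    return sum(lives_per_year * vsl / (1 + rate) ** t for t in range(1, years + 1))

LIVES_PER_YEAR = 100    # hypothetical annual lives saved by the rule
HORIZON = 30            # hypothetical analysis horizon, in years

for vsl in (5e6, 9e6):          # two defensible VSL choices
    for rate in (0.03, 0.07):   # two commonly used discount rates
        pv = discounted_benefits(LIVES_PER_YEAR, vsl, rate, HORIZON)
        print(f"VSL ${vsl / 1e6:.0f}M, discount {rate:.0%}: benefits ${pv / 1e9:.1f}B")
# The identical rule is "worth" anywhere from roughly $6B to $18B, depending
# on which defensible pair of judgments the analyst happens to make.
```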
Some of these concerns have been voiced in studies of cost-benefit analysis itself, including those conducted by some of the most highly respected anti-regulation scholars. Although it has come to be seen as the methodology of choice by people who balk at government intervention, studies by members of that group show its effectiveness to be questionable at best. A 2007 study by Robert W. Hahn of the AEI-Brookings Joint Center for Regulatory Studies and Paul C. Tetlock of the Yale School of Management and the Red McCombs School of Business at the University of Texas suggests that although economic analyses have probably influenced the outcome of particular regulations, “there is little evidence that such analysis has had a large overall impact” on the total cost or volume of regulation.
On the pro-regulation side, a February 2004 analysis by Ruth Ruttenberg & Associates for the Public Citizen Foundation concluded that in 30 years of federal regulatory activity, the U.S. government had consistently inflated cost estimates for health, safety, and environmental protections. Rarely, if ever, did actual compliance costs reach the estimates provided by the regulating agency — and costs never reached the levels estimated by the private sector.
“Cost benefit studies still may provide useful information to policymakers, but [their] practical application...involves a significant number of controversial value judgments...that have become embedded in the practice of economics as we know it,” wrote Tyler Cowen, an economist at George Mason University and at the Center for Study of Public Choice, in 1998.
As any social scientist can tell you, this tension between data and values is deep and wide, and perpetuating the divide is a mainstay of many disciplines. But I believe that the gap itself is at the heart of the problem. As long as regulators refuse to acknowledge the need for a methodological bridge between data and values, cost-benefit analysis is the perfect cover for a biased assessment.
Realistic Data Is Elusive
“Most cost-benefit analysis is hokum,” says Alan Roberts, vice president of the Dangerous Goods Advisory Council, a nonprofit organization that works with state and federal regulators. From 1975 until his retirement in 1999 as the manager of the U.S. Department of Transportation’s hazardous materials program, Roberts handled more than 100 rule-making projects and, he says, “For every one of them, I had to make a declaration of cost and impact” — even though the most relevant data for calculating those costs and impacts was often in short supply.
One reason that good data is scarce is that the government office that collects and evaluates regulators’ cost-benefit analyses — the Office of Management and Budget (OMB), whose administrators are appointed by the White House — is also the office that decides what data regulators can gather to support their analyses.
“You’re only allowed to collect the data you need with approval from OMB,” says Roberts. If you need data the office hasn’t approved for collection, “you have to guess” at what the right numbers might be. He recalls a time several years ago when his office director was having tremendous difficulty finding what Roberts terms “realistic data” from industry about the volume of shipments of hazardous materials. “The staff finally came up with 800,000 new shipments a day, with 1.2 million shipments in transit on any given day, which is probably pretty accurate,” said Roberts. “But if someone wanted to see how we got that number, it would be very difficult to support.”
Roberts’s experience underlines another troubling issue: Even if OMB grants an agency the right to request data, cost-benefit studies rely primarily on information provided by private industry — most often, by the very companies that will be governed by the regulations being formulated. Companies often don’t respond to requests in time, leaving agencies like Roberts’s to extrapolate from small samples. And studies show that when companies do respond, they generally overestimate costs and underestimate benefits. Most industries also insist on data confidentiality, making it impossible to verify the information or hold sources accountable for accuracy.
Once the data is collected, there’s another concern: how it is shaped into the basis for decision making. Today the data is often framed to protect existing industries and technologies and to discourage innovation, as demonstrated by two studies of the cost of complying with a proposed noise standard. The Ruttenberg report noted that in 1974, industry presented to the Occupational Safety and Health Administration (OSHA) an analysis by defense contractor Bolt Beranek and Newman (BBN) that estimated the cost of an 85-decibel noise standard at $31.6 billion. Another study, submitted to OSHA by independent industrial engineer and noise expert Glenn Warnaka, estimated the cost of complying with the same standard at $11.7 billion.
Why are the two figures so different? The BBN study “ignored new technology being developed in the noise abatement field — in sharp contrast to the Warnaka study, which made newly developing technology a key element in its costs of noise control compliance,” the Ruttenberg report said. BBN’s authors even admitted that they had relied on some of the most expensive procedures available to make their estimates, “whereas Warnaka considered opportunities for redesign or substitution of noisy components of existing equipment.”
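The mechanism behind such gaps is easy to demonstrate. The short Python sketch below contrasts a static-technology cost estimate with one that assumes costs fall along a standard experience curve. All of its numbers are invented, and none are drawn from the BBN or Warnaka studies; the point is the mechanism, not the magnitude.

```python
# A hedged sketch of why "static technology" and "developing technology"
# assumptions yield such different compliance-cost estimates. It applies a
# standard experience curve, under which unit costs fall a fixed percentage
# with each doubling of cumulative output. All figures are hypothetical.
import math

INITIAL_UNIT_COST = 100_000   # hypothetical cost to quiet one machine today
UNITS_NEEDED = 200_000        # hypothetical number of machines industry-wide
LEARNING_RATE = 0.15          # hypothetical: 15% cost drop per doubling

def unit_cost(n):
    """Cost of the nth unit under the experience curve."""
    b = math.log2(1 - LEARNING_RATE)   # progress exponent (negative)
    return INITIAL_UNIT_COST * n ** b

static_estimate = INITIAL_UNIT_COST * UNITS_NEEDED
learning_estimate = sum(unit_cost(n) for n in range(1, UNITS_NEEDED + 1))
print(f"Static-technology estimate:            ${static_estimate / 1e9:.1f}B")
print(f"With cost-reducing innovation assumed: ${learning_estimate / 1e9:.1f}B")
```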
Similarly, in the early 1980s, when the National Highway Traffic Safety Administration was considering regulations for fuel economy, U.S. car manufacturers objected, claiming the new rules would be impossibly expensive because the necessary technology did not exist. But foreign car manufacturers, including Volvo, Toyota, and Volkswagen, were already using U.S.-patented products to comply with U.S. fuel economy regulations.
All this dissembling and finagling leads to inconclusive or misleading analyses that serve no one well, including the industries being overseen. This is not a new problem, just one that remains determinedly unaddressed. In 1995, when the Office of Technology Assessment published a retrospective study of OSHA’s analytic techniques, it concluded that “a lack of continuing insights on the potential of leading-edge technology hinders the agency in performing its mission.” And in 1997, in testimony before the House of Representatives, a director in the U.S. General Accounting Office criticized the EPA’s traditional approach to environmental regulation as “precluding innovation.”
Economic analyses that downplay innovation do more harm than simply presenting a false, frozen-in-time reality in which business costs are unaffected by new technology and processes. Because regulations themselves have often driven innovations that benefit the economy as a whole, such analyses smack of anti-competitiveness as well. “Once there is a rule, or threat of a rule, the incentives change,” the Ruttenberg study notes. “Regulatory cost analyses do not offset the economic benefits from vibrant new businesses and jobs that emerge...[involving products] from safety shoes to catalytic converters, from wastewater treatment chemicals to process safety management software.” In the context of one of its own regulatory decisions, OSHA even admitted that “this tendency toward overestimation of costs and underestimation of benefits allows decisions to be biased on the side of the current economic situation.”
And although their stance is antiregulatory, Hahn and Tetlock voice concerns similar to Ruttenberg’s when they note the “frequent failure” of analyses to quantify alternatives as well as benefits. They cite a 2004 study that examined whether the EPA used all available information to develop its cost-benefit analyses. “Of the 60 [regulatory impact analyses] that monetized at least some costs and considered at least one alternative, 11 did not monetize at least some costs of alternatives,” the authors write.
Another recent example of this issue is proposed legislation on fuel economy standards. In 2007, the Senate’s Ten-in-Ten Fuel Economy Act introduced mandatory cost-benefit analysis into the National Highway Traffic Safety Administration’s standard-setting process. The agency would be compelled to consider a variety of factors in determining a rule’s cost, including national security and greenhouse gas emissions. But the act provides no indication of or guidelines about how an intangible such as national security should be quantified for inclusion in the analysis. Costs to human health, the economy, and the environment resulting from greenhouse gas emissions are likewise left to the imagination of the analyst, who may choose to ignore them entirely.
There are also general economic benefits that most cost-benefit analyses ignore altogether, because the analytical gauge is too narrow. “Government economists aren’t concerned with what really happens in the economy when regulations are estimated,” says Adam Finkel, a former OSHA administrator who is now a professor of environmental and occupational health at the University of Medicine and Dentistry of New Jersey’s School of Public Health. “They care about the first-order effects — that the polluters will have to spend money. But this isn’t the right question. Money that’s spent to comply with an environmental regulation employs people and produces revenue for other firms. The effect on the economy is much more complicated.”
Appropriate Evaluation
Can cost-benefit analysis ever provide the basis for credible regulatory oversight? Of course it can, provided that we redesign the way it is managed and the context in which it is used. Cost-benefit analysis can be an effective tool to analyze simple, one-dimensional problems, such as whether to install dividers on dangerous stretches of highway, where relatively unambiguous data is in abundant supply and there is little controversy. It also is a good way to elucidate the trade-offs for a given policy or regulation, or to produce a summary statistic about its economic efficiency.
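For contrast, here is what such a simple, one-dimensional analysis looks like in practice: a minimal Python sketch for the highway-divider case, with hypothetical figures standing in for the kind of unambiguous data (installation bids, crash records) such a decision would actually draw on.

```python
# A minimal sketch of the one-dimensional case described above: deciding
# whether to install a median divider on a dangerous stretch of highway.
# All figures below are hypothetical stand-ins.

DIVIDER_COST = 2_000_000         # hypothetical installation cost
ANNUAL_MAINTENANCE = 50_000      # hypothetical upkeep per year
CRASHES_AVOIDED_PER_YEAR = 3     # hypothetical, from crash records
COST_PER_CRASH = 400_000         # hypothetical average cost of a crash
YEARS = 20                       # hypothetical service life
DISCOUNT = 0.05                  # hypothetical discount rate

def pv(amount, t):
    """Present value of an amount realized t years from now."""
    return amount / (1 + DISCOUNT) ** t

benefits = sum(pv(CRASHES_AVOIDED_PER_YEAR * COST_PER_CRASH, t) for t in range(1, YEARS + 1))
costs = DIVIDER_COST + sum(pv(ANNUAL_MAINTENANCE, t) for t in range(1, YEARS + 1))
print(f"Benefit-cost ratio: {benefits / costs:.1f}")   # a ratio above 1 favors installing
```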
But the cost-benefit method loses its authority when it’s used to assess more complex decisions. It is inadequate for evaluations of interventions that will affect many different dimensions, such as markets, economies, health, the environment, and endangered species. Cost-benefit analysis is also inappropriate for products or processes over which there are disagreements about benefits or about which outcomes are important, such as new medical technologies like genetic testing. And it should never be used as the basis for regulation in the presence of scientific uncertainty or value conflicts, or in an area where there are no authorities one can trust to know all the answers, as is the case with biotechnologies such as genetic engineering and stem cell research. Decisions like these require a more expansive methodology — one that isn’t dependent on the affectation of translating all value into economic terms, that is more transparent and responsive to outside criticism, and that pragmatically represents the interests of everyone involved: industry, government, and the public.
Luckily, this more expansive methodology does exist, and it was developed specifically for regulators and lawmakers more than a decade ago. The National Research Council’s Committee on Risk Characterization, convened under its Commission on Behavioral and Social Sciences and Education, was charged with addressing the bias, scientific uncertainty, lack of transparency, and data–values dichotomy that derail cost-benefit analysis. Its work culminated in the 1996 report Understanding Risk: Informing Decisions in a Democratic Society.
The committee’s original goal was to find a way to translate the output of a cost-benefit analysis into a document that nontechnical decision makers could understand, but the group quickly expanded its charter. “We asked, How do you find out what the relevant information is — for all the interested and affected parties, not just the [regulatory] agencies?” says Paul Stern, who directed the study. “Quantification is one method, but what are the others?”
In response to this larger question, the committee, chaired by Harvey Fineberg, now president of the Institute of Medicine of the National Academies, described and sanctioned an approach it called the “analytic deliberative process,” which elevates the “judgment” side of the equation — call it qualitative logic, or human deliberation — to a role in which it is just as important to a defensible analysis as are technical data and calculations. Longer and more complex than a simplistic cost-benefit analysis, the process is in essence a collaborative, multidimensional cost and benefit assessment, developed by an expanded group of participants who are demonstrably interested in, or affected by, the decision to be made.
And it actually succeeds at what the cost-benefit approach has failed to accomplish. Through an iterative process, participants question value judgments and assumptions from a fresh perspective. They challenge one another’s biases and data. They see many dimensions of costs and benefits that were previously invisible to or ignored by specialists. They use values and judgment as a positive force to give context and authority to traditional analysis.
Transparency is the other critical benefit of the process, and one that seems particularly important in light of the nearly irresistible urge to game the analysis that the cost-benefit approach invites. “If you have a regulatory system that enshrines a collaborative, analytic, deliberative process like the one we proposed, you’ve created an institutional structure that works directly against outside influence,” says Stern. “It’s designed to figure out the information that the group needs and provide the checks and balances to prove the information is trustworthy, since all the stakeholders have the ongoing opportunity to question each other and resolve disputes.”
One example is the resolution brokered between the citizens of Valdez, Alaska, and the marine oil trade after the 1989 Exxon Valdez grounding and oil spill off the Alaskan coast. The two sides had been engaged in a years-long dispute about what kinds of tug vessels should be deployed in Prince William Sound to help prevent oil spills. Instead of funding the usual competing risk assessments in an effort to influence the decision, the Regional Citizens’ Advisory Council, the oil industry, and the government agencies involved in the decision agreed to jointly sponsor, fund, and support a single deliberative assessment.
A steering committee with representatives from all three groups assembled a research team of technical experts and industry and citizen advisors. Over the course of the proceedings, everyone involved learned about the technical intricacies of maritime risk assessment. The process “increased our understanding of the problem domain,” one member of the research team told a researcher studying the deliberation. “The assumptions were brought out in painful detail and explained.” The team decided that the existing records didn’t provide enough data for a proper risk assessment, so the steering committee helped them find the data they needed. As a result, one of the new tug vessels was deployed in 1997 — with the final risk assessment accepted as authoritative by stakeholders who had formerly been at war.
The process also helped resolve a thorny risk question for a 1993 EPA water regulation in a way that satisfied regulators and technical experts from various fields, as well as the people who were to be affected by the decision. As detailed in Understanding Risk, the EPA turned to negotiation because its water experts already knew that the ordinary rule-making process would be contentious and therefore probably futile.
Two noteworthy components were part of the deliberation process. First, the EPA hired an outside firm to determine who should be at the table, taking into account how their biases might affect the ability of the rest of the group to deliberate. And second, by the end of the process the committee had decided it still didn’t have enough data or evidence to make some of the most important decisions needed for a hard-and-fast regulation. So it proposed an “information collection rule,” which required large public water suppliers to regularly test source water and to take specific actions based on the results of these tests. This breakthrough compromise didn’t just create a legislated means by which to reduce scientific uncertainty over time. It also required uncertainty to be addressed as part of the process of protecting public health, instead of pretending that the uncertainties didn’t exist.
Meanwhile, all is not lost for conventional cost-benefit analysis. The newly formed Society for Benefit-Cost Analysis aims to improve the tool by openly recognizing its limitations and expanding its use where appropriate. “One of our main goals is to help create a uniform set of best practices and standards,” says Richard Zerbe, a professor of public affairs and an adjunct law professor at the University of Washington. Funded by the MacArthur Foundation and the School of Public Affairs at the University of Washington, the society will hold its first formal meeting in June 2008.
Let’s hope the society moves quickly, and that regulators prove amenable to its suggestions. The long-standing distrust of regulation in the U.S. has started to shift. Global competition, a growing fear of liability lawsuits, and tough state and local laws have inspired American businesses, for the first time in 15 years, to push for new federal regulations to address health, safety, and environmental concerns. If the worthiness of each of those new regulations is going to be judged by the same tainted cost-benefit analyses as have come before, competition and innovation will continue to suffer. Deploying a more expansive cost and benefit methodology — before laws are passed and before products are put on the market — would not only restore integrity to the process of regulation but also have a salutary economic effect.
Reprint No. 08103
Author profile:
Denise Caruso (caruso@hybridvigor.org) is the executive director and chair of the Hybrid Vigor Institute, which supports cross-disciplinary inquiry and collaboration on science, technology, and social issues. The author of Intervention: Confronting the Real Risks of Genetic Engineering and Life on a Biotech Planet (Hybrid Vigor Press, 2006), she also writes regularly for the Sunday business section of the New York Times.