strategy+business is published by PwC Strategy& LLC.
strategy+business / Summer 2004 / Issue 35 (originally published by Booz & Company)


Health Care’s Technology Cost Crisis

Spiraling drug prices aren’t the only challenge facing the U.S. health-care system. Medical technology costs must be controlled, without sacrificing innovation.

In the ongoing national debate in the U.S. about the spiraling price tag for health care, drug costs have received the lion’s share of attention. But there’s another culprit lurking that will soon attract government and business scrutiny as pressure builds to control spending: the cost of medical technology.

If manufacturers and health-care providers want to play a pivotal role in the coming debate over technology spending, it is time for them to act together to develop a better process for assessing the value of medical technologies and balancing costs against outcomes. At the least, they need to brace themselves for future pressure to reduce spending.

Although the benefits of innovative medical technologies are undeniable (for example, advances have cut the death rate from cardiovascular disease by 25 percent over the last 20 years), innovation comes with a price. Overall health-care costs have outpaced GNP growth by more than four percentage points, on average, in the last five years and now total $1.5 trillion per year. Spending on medical technology has accounted for about 20 percent of that growth, and now exceeds $200 billion per year. This spending surge presents a challenge to the U.S. economy and society: How can we control cost increases without sacrificing the benefits of innovation?

There is substantial evidence that overutilization and misuse of technology lead to spending that exceeds its value for patients. In the diagnostic imaging technology category — which has grown to nearly a $100 billion business — spending increases are driven to a large extent by the growth in the number of machines installed in hospitals, doctors’ offices, and imaging centers. This has led in turn to overcapacity in many areas and has created incentives for doctors to prescribe unnecessary procedures. Duplication of procedures (e.g., a patient receives an MRI and then a PET scan, even though performing both does not bring doctors closer to a diagnosis) and overuse of high-end procedures in situations where they add little value have also driven up technology spending unnecessarily.

We have identified three important reasons medical technology is not being used cost-effectively. First, patients do not pay directly for the health care they receive, so they sometimes make unreasonable demands on physicians for diagnosis and treatment. Second, a new technology may be adopted because of its clinical superiority to existing technologies, but there is no market mechanism to ensure that it will be used where it is clinically most appropriate or where it offers highest value for a patient compared with other treatments. Third, because there is no market mechanism for determining the value of medical technology, there is currently no generally accepted screening process to assess its value; cost-effectiveness is not a criterion for regulatory approval of procedures, and manufacturers do not consistently perform studies of the economic benefits of new procedures.

Private medical insurers and companies that pay for health-care plans have started to realize the significant impact of medical technology on health-care costs, and are likely to look for ways to reduce costs without hindering innovation. In principle, given continued third-party payment for health care, there are two options: One is to force a national debate about ways to introduce consistent and generally accepted value calculations into the evaluation of new technologies; the other is to look for targeted strategies that reduce costs in specific areas of the health-care system.

The most important question in establishing a value paradigm is the level at which value assessments would be made. One possibility is to use a federal agency, such as the Food and Drug Administration, akin to the national authorities that evaluate new technologies in Canada, France, the United Kingdom, and other countries. Another possibility would be to create a public/private partnership between existing government entities and private health-care groups. Both options have advantages and disadvantages; it seems more likely in the short term, however, that the United States will opt for a smaller-scale approach — for example, contracting the work to several small private entities, similar to the privately run centers that currently perform technology assessments.
