Health Care Special Issue: Creative Destruction

Posted on November 12, 2007

The development of DBS was one part basic knowledge—an understanding of how Parkinson’s works and how the brain responds to electrical stimulation—and one part sheer luck. Profits, on the other hand, had relatively little to do with it. According to Robert Gross, an Emory University neurosurgeon and expert in the field, Benabid had actually approached the companies that already made electrodes for use in treating chronic pain, suggesting they develop a device specifically for Parkinson’s. But they declined initially, so Benabid had to use the existing devices and adapt them on his own. “The companies did not lead those advances,” Gross says. “They followed them.”

In this sense, DBS offers an important window into the way medical innovation actually happens. The great breakthroughs in the history of medicine, from the development of the polio vaccine to the identification of cancer-killing agents, did not take place because a for-profit company saw an opportunity and invested heavily in research. They happened because of scientists toiling in academic settings. “The nice thing about people like me in universities is that the great majority are not motivated by profit,” says Cynthia Kenyon, a renowned cancer researcher at the University of California at San Francisco. “If we were, we wouldn’t be here.” And, while the United States may be the world leader in this sort of research, that’s probably not—as critics of universal coverage frequently claim—because of our private insurance system. If anything, it’s because of the federal government.

The single biggest source of medical research funding, not just in the United States but in the entire world, is the National Institutes of Health (NIH): Last year, it spent more than $28 billion on research, accounting for about one-third of the total dollars spent on medical research and development in this country (and half the money spent at universities). The majority of that money pays for the kind of basic research that might someday unlock cures for killer diseases like Alzheimer’s, AIDS, and cancer. No other country has an institution that matches the NIH in scale. And that is probably the primary explanation for why so many of the intellectual breakthroughs in medical science happen here.

There’s no reason why this has to change under universal health insurance. NIH has its own independent funding stream. And, during the late 1990s, thanks to bipartisan agreement between President Clinton and the Republican Congress, its funding actually increased substantially—giving a tremendous boost to research. With or without universal coverage, subsequent presidents and Congress could ramp up funding again—although, if they did so, they would be breaking with the present course. It so happens that, starting in 2003, President Bush and his congressional allies let NIH funding stagnate, even though the cost of medical research (like the cost of medicine overall) was increasing faster than inflation. The reason? They needed room in the budget for other priorities, like tax cuts for the wealthy. In this sense, the greatest threat to future medical breakthroughs may not be universal health care but the people who are trying so hard to fight it.

So is that the end of the story? No. Somebody still has to turn scientific knowledge into practical treatments. Somebody has to apply the understanding of how, say, a cancer cell reacts in the presence of a chemical in order to produce an actual cancer drug. It’s a laborious, frustrating, and risky process—one for which, traditionally, the private sector has taken primary responsibility. And, yet, that doesn’t mean the private sector always performs this function particularly well. Unlike the NIH, whose support for medical research seems to represent a virtually unambiguous good, the private sector’s efforts to translate science into medicine are much more of a mixed bag.

As books like Marcia Angell’s The Truth About the Drug Companies and Merrill Goozner’s The $800 Million Pill point out, a lot of the alleged innovation we get from private industry just isn’t all that innovative. Rather than concentrating on developing true therapeutic breakthroughs, for the last decade or so the pharmaceutical industry has poured the lion’s share of its efforts into a parade of “me-too” drugs—close replicas of existing treatments that offer little in the way of new therapeutic advantages but generate enormous profits because they are patented and because companies have become exceedingly good at promoting their sales directly to consumers.

The most well-known example of this is Nexium, which AstraZeneca introduced several years ago as the successor to Prilosec, its wildly successful drug for treating acid reflux. AstraZeneca promoted Nexium heavily through advertising—you may remember the ads for the new “purple pill”—and, as a result, millions of patients went to their doctors asking for it. Trouble was, the evidence suggested that Nexium’s results were not much better than Prilosec’s—if, indeed, they were better at all. And, since Prilosec was going off patent, competition from generic-brand copies was about to make it a much cheaper alternative. (The fact that Prilosec’s price was about to plummet, needless to say, is precisely why AstraZeneca was so eager to roll out a new, patented drug for which it could charge a great deal more money.)

The Nexium story highlights yet another problem with the private sector’s approach to innovation. Because the financial incentives reward new treatments—the kind that can win patents—drug- and device-makers generally show little interest in treatments that involve existing products. Yet sometimes finding a new way to use an old remedy is the best way to innovate. As Goozner notes in his book, even as Prilosec and its competitors (like Tagamet) were flying off the drugstore shelves, academic scientists were arguing that it made more sense to treat some patients with a regimen of older drugs—antibiotics—that could cure ulcers rather than combat their effects. But no drug company was going to make a fortune repackaging old antibiotics. So the industry, having already invested heavily in products like Nexium, basically ignored this possibility.

Just to be clear, this doesn’t mean that private industry plays no constructive role in medical innovation. Computed Tomography (CT)—which a survey of internal medicine doctors recently ranked the top medical innovation in recent history—owes its existence to basic scientific discoveries about physics. But it’s the steady involvement of companies like General Electric, which have poured untold sums into research and development of CT scanners, that produced the technology we have today—and will produce even better technology tomorrow.

Yet even this story has a downside, as Shannon Brownlee chronicles in her new book, Overtreated. It’s the potential to sell many more such devices, at a very high cost, that has enticed companies like GE to invest so much money in them. In fact, compared to the rest of the developed world, the United States has a relatively high number of CT machines (although Japan has more). But experts have been warning for years of CT overuse, with physicians ordering up scans when old-fashioned examinations would do just fine. (Some experts even worry that over-reliance on scans may be leading to atrophied general exam skills among physicians.) Studies have shown that the mere presence of more CT scanners in a community tends to encourage more use of them—in part because the machine owners need to justify the cost of having invested in them. The more CT devices we buy, the less money we have for other kinds of medical care—including ones that would offer a lot more bang for the buck.

And don’t forget one other thing: At least performing too many CT scans doesn’t tend to result in injury. The same can’t be said of other overused medical interventions, which can carry serious side effects.

The ideal would be to come up with some way of achieving the best of both worlds—paying for innovation when it yields actual benefits, but without neglecting less glitzy, potentially more beneficial forms of health care. And that is precisely what the leading proposals for universal health care seek to do. All of them would establish independent advisory boards, staffed by leading medical experts, to help decide whether proposed new treatments actually provide clinical value. The fact that Barack Obama’s plan includes such a provision is particularly telling, since one of the plan’s architects is David Cutler—the economist constantly promoting the value of innovation.

Of course, the idea of involving the government in these decisions is anathema to many conservatives—since, they argue, the private sector is bound to make better decisions than a bunch of bureaucrats in Washington. But, while that’s frequently true elsewhere in the economy, health care may be an exception. One feature of the U.S. insurance system is its relentless focus on the short term. Private insurers have little incentive to pay for interventions that don’t yield immediate benefits, because they are gaining and losing members all the time. As a result, money invested in patient health may very well end up helping a competitor’s bottom line. What’s more, the for-profit insurance industry—like the pharmaceutical and device industries—responds to Wall Street, which cares more about quarterly earnings than long-term financial health. So there’s relatively little incentive to spend money on the kinds of innovations that yield long-term, diffuse benefits—such as the creation of a better information infrastructure that would help both doctors and consumers judge what treatments are necessary when.

The government, by contrast, has plenty of incentive to prioritize these sorts of investments. And, in more centralized systems, it can do just that. Several European countries are way ahead of us when it comes to establishing electronic medical records. When fully implemented, these systems will allow any doctor, nurse, or hospital seeing a patient for the first time to discover instantly what drugs that person has taken. It’s the single easiest way to prevent medication errors—a true innovation. Thousands of Americans die because of such errors every year, yet the private sector has neither the will nor, really, the way to fix this problem.

Another virtue of more centralized health care is its ability to generate savings by reducing administrative waste. A universal coverage system that significantly streamlined billing (either by creating one common form or simply replacing basic insurance with a single, Medicare-like program) and cut down on the need for so many insurance middlemen would leave more resources for actual medical care—and real medical innovation.

None of this is to say a universal coverage system couldn’t have a chilling effect on innovation while severely pinching access to medical care that is expensive but, arguably, worth it. All it would take is a system with both a rigid budget and very low funding. The British have such a system, or something approximating it. Even after some recent spending increases, they still devote just 9 percent of gross domestic product to health care, less than many European nations and a little more than half of what the United States spends. And that shows up in the availability of cutting-edge care. Relative to other highly developed countries, Britain is one of the last to get the latest cancer drugs to its patients. And that probably helps explain why British cancer survival rates generally lag, too.

But few of the plans under discussion in this country would create such a strict budget. And nobody in this country seriously proposes reducing U.S. spending to British levels. Rather, the goal is to reduce our spending moderately and carefully; the savings, most likely, would materialize over time. In the end, we would probably still spend more than what even the higher-spending countries in Europe pay. And that should be enough, given that the citizens of those countries are not exactly missing out on cutting-edge medical treatments. France and Switzerland—traditionally the two highest spenders—get the newest cancer drugs to their patients with virtually the same speed as the United States does. And, when it comes to cancer radiation equipment, France actually has more per person than we do.

So what, then, would have happened to my friend Mike Kinsley if such a system had been in place here? From the looks of things, exactly what has happened already: He would have gotten the DBS treatment. Nearly every country in Europe covers DBS under its national health insurance system, even England with its famously low spending and scrutiny of new treatments. People over 70 can’t always get the treatment in those countries, but, in part, that’s because many physicians believe it’s not usually worth the risks at that age. (And they may be right, depending on which studies you believe.) Medicare, meanwhile, also covers it, making it available to all of this country’s elderly. Working-age Americans, on the other hand, may face some obstacles: According to Medtronic, private insurers occasionally deny coverage—to say nothing of those people who don’t have insurance at all. DBS is just one example, to be sure, but it seems to be emblematic of the truth about universal health insurance: You don’t have to choose between universal access and innovation. It’s possible to have both—as long as you do it right.