A Guest Post by Dan Sarewitz of ASU, cross-posted with the CSPO Soapbox
“Faith based research is okay, shoddy research is common, but the two interact and end up ... in PNAS?”1
Now that the debate over health care reform is beginning to heat up, expect to hear a growing chorus of voices insisting that the key to the future health of Americans is more research funding for the National Institutes of Health. An early salvo in this direction was published recently in PNAS, the flagship journal of the National Academy of Sciences.
The article2 is an extraordinary exercise in statistical distortion. Its basic points are these: (1) rising expenditures on NIH research correlate with rising indicators of health in America; (2) as Americans get (on average) older, the economic well-being of the nation will increasingly depend on their ability to lead economically productive lives; (3) this, in turn, will demand better health interventions for an aging population; therefore, (4) NIH budgets need to keep up with this economic imperative. The paper concludes: “the size of NIH expenditures relative to GDP should quadruple to about 1% (≈$120 billion) and be done sufficiently rapidly (10 years) to compensate for the slowing growth of the […]”
A variant of this hypothetical case is on display in countries that actually do make an effort to provide health care access for all citizens. As recently summarized in an article in the June 25th issue of The Economist:3 “Comparisons with other rich countries and within the […]”
The authors state that the fit between total NIH funding and death-rate curves (their Figure 4) explains “98% of the variation of age-adjusted mortality rates. Although [this] does not prove causation it makes the search for alternate explanatory variables of equal power difficult.” Nonsense. Given that the authors don’t look at any other variables, they cannot test the real-world validity of their correlation. This is an act of faith, not science; it is a classic formula for generating spurious correlations. For example, given that budgets for pretty much everything have gone up during the last fifty years, and that budget trends across government programs tend to track one another, there are no doubt many other budget curves that could be matched just as nicely to the death-rate curve. And in any case we have already seen that other countries can deliver better health to their citizens for less money and with less research—so even if the correlation had some validity, it would merely underscore the inequity and inefficiency of the U.S. system.
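To see how little that 98% figure proves, here is a minimal sketch (in Python, with made-up numbers that merely mimic a smoothly rising budget and a smoothly falling death rate; none of it is the paper’s data). Any two series dominated by opposite monotonic trends over the same half-century will “explain” nearly all of each other’s variance:

```python
# Illustrative sketch only: two synthetic, causally unrelated series --
# a steadily rising "budget" and a steadily falling "mortality" rate --
# over a 55-year span (1950-2004). Because both are dominated by smooth
# trends, their squared correlation lands in the high .90s, rivaling the
# paper's 98% with no causal connection whatsoever.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(55)  # years since 1950

budget = 0.5 + 0.55 * t + rng.normal(0, 0.5, t.size)       # rising trend + modest noise
mortality = 1400.0 - 10.0 * t + rng.normal(0, 15, t.size)  # falling trend + modest noise

r = np.corrcoef(budget, mortality)[0, 1]
print(f"correlation = {r:.3f}, R^2 = {r**2:.3f}")  # R^2 comes out around 0.98-0.99
```

Swap in the budget line of nearly any program that has grown steadily since 1950 and the arithmetic comes out much the same, which is exactly why a high R² between two trends, by itself, carries no evidential weight.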
The paper further errs by attributing to NIH (and the NIH budget) activities and outcomes that in fact had little to do with NIH. For example, the authors state that, in addition to medical interventions, “public health initiatives against smoking, and promoting screening for breast and colon cancers, led to the initiation of […]” Anti-smoking campaigns, of course, have a history that long predates the postwar rise of NIH.4
Obviously my point is not that NIH does not contribute to the nation’s health in important ways, but that the contribution—one of many, many variables—cannot, in theory or practice, be teased out by discovering correlations between budget trends and health trends. This sort of analysis contributes to the notion that funding policy for NIH amounts to health policy for the nation. We’ve already tried that trick. After the failure of health care reform during the Clinton Administration, the government’s fall-back policy was to double NIH’s budget between 1998 and 2003. Surprise: health care costs continued to skyrocket, millions more people were shut out of an ever-more-unaffordable health care system, and more and more municipalities and corporations began to sink under the mounting obligation of providing unaffordable health care for their employees and pensioners. How much healthier might the nation have been if these trends had been reversed (even if NIH funding had stayed flat!)?
One final point: Imagine a publication in a prestigious journal claiming that pharmaceutical company revenues were strongly correlated with positive public health outcomes—that the more drugs the companies sold, the healthier the nation became. And imagine that the authors concluded, based on their analysis, that government policies should therefore encourage pharmaceutical profits, e.g., by extending patent lives or providing tax credits to the industry. And now finally imagine that the authors of the paper acknowledged that their research had been supported by millions of dollars of research funding from the pharmaceutical industry. Would this paper have any credibility? Could it even be published?
The PNAS article recommends a ridiculous four-fold NIH budget increase over the next decade. The article also includes, on the bottom of the first page, in small print, this statement: “The authors declare no conflict of interest.” Yet the first author of the paper was described in an August 21, 2002 New York Times article5 as “among the 10 biggest recipients of National Institutes of Health grants,” and the research reported in the PNAS article was also NIH supported. What’s the difference between the hypothetical case and the real one?
About the Author: Daniel Sarewitz is the co-director of the Consortium for Science, Policy & Outcomes (CSPO) at Arizona State University.
1 Comment offered by a colleague who, having yet to achieve tenure, prefers to remain anonymous (which in itself raises the obvious question of how the tenure process is protecting freedom of expression—but that’s another post). I thank this same invisible person for help with this Soapbox post.
2 Manton, K., Gu, X-L, Lowrimore, G., Ullian, A., and Tolley, H.D., 2009, “NIH funding trajectories and their correlations with U.S. health dynamics from 1950 to 2004,” PNAS 106(27): 10981-10986.
3 “Heading for the Emergency Room”, 2009, The Economist, June 25, p. 75.
4 Proctor, R., 2000, The Nazi War on Cancer, Princeton University Press.