
When the health care industry has tried to use Nate Silver-like methods, it has resulted in unreliable studies that suggest, for example, that a gene called Pol-9 turns people into Republicans. (Charles Dharapak/AP)

Just over a decade ago, in a fit of hubris that shook the global economy, financial managers came to believe they could take groups of questionable sub-prime mortgages and transmute them into reliable AAA-rated assets. Investment banks gathered high-risk mortgages, mixed and matched them into collateralized debt obligations, and convinced rating agencies to certify the consolidated crap as shiny new gold. Only later did we realize these products were actually financial weapons of mass destruction.

Yet the basic lessons from this disaster have not been learned by some doctors and researchers, who, in a process eerily similar to that of the investment managers, have in some cases made medical weapons of mass destruction.


In health care, the problem is a technique called “meta-analysis.” It works like this: Suppose a variety of medical studies come to conflicting conclusions on a clinical question, such as whether hormone replacement therapy prevents heart disease in older women. Doctors are confused. The evidence is unclear. But patients want answers.

Enter meta-analysis. Just as sub-prime mortgages often are lumped together without clear regard for credit-worthiness, various clinical studies are lumped together in meta-analysis without regard for their worthiness. The clinical trials may have included people on different doses of medications or with slightly different forms of a disease. You take a few dozen patients from a study in Japan, merge them with a few hundred similar patients from Denmark or Texas, and voila! You now have a presumably larger, statistically stronger sample with which to answer your clinical question.
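To make the mechanics concrete, here is a minimal sketch, in Python with invented numbers, of the most common pooling method, a fixed-effect inverse-variance meta-analysis: each study’s estimate is weighted by the inverse of its variance and averaged into one pooled result.

    import math

    # Hypothetical numbers for illustration -- not from any real trial.
    # Each study reports a treatment effect (say, a log relative risk)
    # and the standard error of that estimate.
    studies = [
        {"name": "Japan",   "effect": -0.30, "se": 0.25},  # a few dozen patients
        {"name": "Denmark", "effect": -0.10, "se": 0.12},  # a few hundred patients
        {"name": "Texas",   "effect":  0.05, "se": 0.15},
    ]

    # Fixed-effect pooling: weight each study by the inverse of its variance.
    weights = [1.0 / s["se"] ** 2 for s in studies]
    pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"Pooled effect: {pooled:.3f}, SE: {pooled_se:.3f}")
    # The pooled standard error is smaller than any single study's,
    # but the arithmetic never asks whether the studies were comparable
    # or well conducted.

Note that the formula happily shrinks the standard error whether or not the studies actually measured the same thing.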

But the quality of randomized clinical trials varies widely. Some are fastidiously conceived and executed; others are helter-skelter. In the 1990s, for example, meta-analysis convinced many people that hormone replacement in post-menopausal women could cut heart attacks by one-third to one-half. In 1995, the New England Journal of Medicine cited a meta-analysis that combined 31 mediocre studies to “strongly suggest” hormone replacement stopped heart attacks. But in 2002, when a large, randomized trial of over 16,000 women was finally done, it showed that heart attacks actually increased with hormone replacement.

When the underlying data are very strong, meta-analysis can be powerful. When New York Times blogger Nate Silver predicts elections by combining different polls to increase the sample size, for example, he performs a kind of meta-analysis. The secret to his success, as explained by Slate’s Dan Engber, is that the polls are well done in the first place, so the fundamental data are sound. The same can’t be said of most clinical trials.
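Here is a rough sketch of why poll pooling works when the inputs are sound. The numbers are invented, and this is not Silver’s actual model, which also weights pollsters by track record and recency; it only shows how combining samples shrinks the margin of error.

    import math

    # Invented polls for illustration: a candidate's share and sample size.
    polls = [
        {"share": 0.52, "n": 800},
        {"share": 0.49, "n": 1200},
        {"share": 0.51, "n": 600},
    ]

    # Weight each poll by its sample size -- roughly equivalent to
    # pooling all respondents into one big poll.
    total_n = sum(p["n"] for p in polls)
    pooled_share = sum(p["share"] * p["n"] for p in polls) / total_n

    # The 95% margin of error shrinks with the combined sample, but only
    # because each poll was a sound random sample to begin with.
    moe = 1.96 * math.sqrt(pooled_share * (1 - pooled_share) / total_n)
    print(f"Pooled share: {pooled_share:.1%} +/- {moe:.1%}")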

Unfortunately, even widely read medical journals like American Family Physician promote meta-analysis as the best possible source (rated “Level A”) of evidence to guide doctors. The problem is that someone can simply take a bunch of “Level B” evidence, like “lower quality randomized controlled trials,” batch them together using meta-analysis, and suddenly upgrade them to an AFP-approved “Level A.”


This has occurred repeatedly, and the results haven’t been pretty. In 1997, a group of Canadian researchers tried to put the brakes on meta-analysis in a startling paper. They pulled 19 meta-analyses from top-tier medical journals, on topics ranging from the effect of drugs on heart attacks to chemotherapy for breast cancer. (Keep in mind that such studies would be considered the highest form of medical evidence.) The researchers then located large, high-quality randomized clinical trials on the same issues, performed years later, to see how the meta-analyses performed.

The researchers found the meta-analyses would have led doctors to adopt useless treatments one-third of the time, and to reject helpful therapies another one-third of the time. After this debacle, why would anyone take the findings of meta-analyses very seriously?

Unfortunately, people do. In the past month alone, various meta-analyses have been published in all manner of medical journals, arguing that magnesium can cut colon cancer rates, that new drugs stop clots in heart rhythm problems, that cholesterol drugs reduce cancer, and, my recent favorite, that a gene called Pol-9 turns people into Republicans.

After the vindication of Nate Silver’s meta-analyses of poll data, sales of his book soared by over 850 percent. One can only hope that the number of meta-analyses published in the medical literature won’t enjoy a similar boost. They might work in political campaigns, but they don’t necessarily work in the medical field.

The views and opinions expressed in this piece are solely those of the writer and do not in any way reflect the views of WBUR management or its employees.


  • http://twitter.com/darshaksanghavi Darshak Sanghavi

    I must admit, I was fooled by a satirical site and didn’t quite get a joke…in my brief piece above, I point to a meta-analysis claiming a link between a gene and political party. In fact, the piece to which I link is a satirical post about faulty meta-analysis and bad science. The point still stands, but I apologize for missing the joke on that particular link. (Hat tip to the Genotopia blog.)

    Also, for those who may be interested, here’s one other interesting detail about meta-analysis: The first large meta-analysis of scientific studies was produced in 1940, when researchers at Duke University analyzed dozens of studies about parapsychology—whether people could telepathically send symbols on a printed card—and concluded that ESP was real. The problem wasn’t that the meta-analysis was done wrong; it was that the underlying studies were fraudulent, and no amount of slicing-and-dicing the data can change that.
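    Here’s a toy simulation of that point (invented numbers, not the actual Duke data): if every underlying study carries the same bias, pooling them only shrinks the error bars around the wrong answer.

        import math
        import random

        random.seed(0)
        TRUE_EFFECT = 0.0  # in reality, ESP (or the drug) does nothing
        BIAS = 0.5         # but every flawed study is distorted the same way

        def biased_study(n):
            """One flawed study: the mean of n noisy, biased observations."""
            samples = [random.gauss(TRUE_EFFECT + BIAS, 1.0) for _ in range(n)]
            return sum(samples) / n, 1.0 / math.sqrt(n)

        # Pool 30 such studies with inverse-variance weights.
        results = [biased_study(50) for _ in range(30)]
        weights = [1.0 / se ** 2 for _, se in results]
        pooled = sum(w * m for w, (m, _) in zip(weights, results)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))

        print(f"Pooled: {pooled:.2f} (SE {pooled_se:.3f}); truth: {TRUE_EFFECT}")
        # The pooled estimate converges confidently on ~0.5 -- the shared
        # bias -- no matter how many studies are combined.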

  • http://everydayscholar.tumblr.com/ Adam Mandeville

    As a psych major, I found similar problems with meta-analyses. No two studies are conducted in exactly the same way, so trying to draw parallels is extremely difficult. At least Nate Silver gives weight to polls based on their previous success and how recently they were conducted, but this hardly seems like a solution for the medical industry.

  • http://twitter.com/nccomfort Nathaniel Comfort

    Glad you caught the joke–and kudos for the gracious acknowledgment. In fact, we’re making similar points. Comparison of GWAS studies reveals a correlation between humility and humor that preliminary studies suggest may be associated with thousands of blog page views!
