Rogue Reporting

According to an article just published in the Journal of General Internal Medicine, results of drug studies published in medical journals may be misleading.

The UCLA-Harvard study found that drug trials published in the most influential medical journals, including the New England Journal of Medicine, the Journal of the American Medical Association, The Lancet, the Annals of Internal Medicine, the British Medical Journal, and the Archives of Internal Medicine, are frequently designed in ways that yield misleading or confusing results.

Investigators analyzed all the randomized drug trials published in the above journals between June 1, 2008, and Sept. 30, 2010, to determine the prevalence of outcome measures that make data interpretation difficult.  In addition, they reviewed each study’s abstract to determine the percentage that reported results using relative rather than absolute numbers, which can also be misleading.

They specifically looked at three outcome measures that have received increasing criticism from scientific experts: surrogate outcomes, composite outcomes, and disease-specific mortality. They found that:

  • 37% of the studies analyzed used surrogate outcomes: intermediate markers, such as a heart medication’s ability to lower blood pressure, that may not be good indicators of the medication’s impact on more important clinical outcomes, like heart attacks
  • 34% used composite outcomes, which lump together multiple individual outcomes of unequal importance, such as hospitalizations and mortality, making it difficult to understand the treatment’s effect on each outcome individually
  • 27% used disease-specific mortality, which measures deaths from a specific cause rather than from any cause. This can be misleading because, even if a treatment reduces one type of death, it may increase the risk of dying from another cause to an equal or greater extent (see the sketch after this list)
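To see why that last point matters, consider a minimal sketch with entirely hypothetical numbers (none of these figures come from the study): a treatment that halves cardiac deaths while deaths from other causes rise leaves all-cause mortality unchanged.

```python
# Hypothetical trial arms of 1,000 patients each, invented purely for illustration.
def mortality_rates(cardiac_deaths, other_deaths, n):
    """Return (disease-specific, all-cause) mortality as fractions of n."""
    return cardiac_deaths / n, (cardiac_deaths + other_deaths) / n

# Control arm: 30 cardiac deaths, 10 deaths from other causes.
control_cardiac, control_all = mortality_rates(30, 10, 1000)
# Treatment arm: cardiac deaths halved, but deaths from other causes rise.
treated_cardiac, treated_all = mortality_rates(15, 25, 1000)

print(f"Cardiac mortality:   {control_cardiac:.1%} -> {treated_cardiac:.1%}")  # 3.0% -> 1.5%, looks like a win
print(f"All-cause mortality: {control_all:.1%} -> {treated_all:.1%}")          # 4.0% -> 4.0%, no benefit at all
```

A reader shown only the cardiac figure would see a benefit that the all-cause figure reveals does not exist.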

“Patients and doctors care less about whether a medication lowers blood pressure than they do about whether it prevents heart attacks and strokes or decreases the risk of premature death,” said the study’s lead author, Dr. Michael Hochman, a fellow in the Robert Wood Johnson Foundation Clinical Scholars Program at the David Geffen School of Medicine at UCLA’s division of general internal medicine and health services research, and at the U.S. Department of Veterans Affairs’ Los Angeles Medical Center.

Dr. Danny McCormick, the study’s senior author and a physician at the Cambridge Health Alliance and Harvard Medical School, added: “Patients also want to know, in as much detail as possible, what the effects of a treatment are, and this can be difficult when multiple outcomes of unequal importance are lumped together.”

The authors also found that 45% of exclusively commercially funded trials used surrogate endpoints, whereas only 29% of trials receiving non-commercial funding did. Furthermore, while 39% of exclusively commercially funded trials used disease-specific mortality, only 16% of trials receiving non-commercial funding did.

The study also showed that 44% of abstracts reported results in relative rather than absolute numbers, which can be misleading.  “The way in which study results are presented is critical,” McCormick said. “It’s one thing to say a medication lowers your risk of heart attacks from two-in-a-million to one-in-a-million, and something completely different to say a medication lowers your risk of heart attacks by 50 percent. Both ways of presenting the data are technically correct, but the second way, using relative numbers, could be misleading.”
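To make the distinction concrete, here is a quick sketch using McCormick’s own hypothetical numbers; the number needed to treat, a standard companion figure that the article does not mention, is included for context.

```python
# McCormick's example: risk falls from two-in-a-million to one-in-a-million.
baseline_risk = 2 / 1_000_000
treated_risk = 1 / 1_000_000

absolute_reduction = baseline_risk - treated_risk        # 1 in a million
relative_reduction = absolute_reduction / baseline_risk  # 0.5, i.e. "50 percent"
number_needed_to_treat = 1 / absolute_reduction          # patients treated per heart attack avoided

print(f"Absolute risk reduction: {absolute_reduction * 1_000_000:.0f} in a million")
print(f"Relative risk reduction: {relative_reduction:.0%}")
print(f"Number needed to treat:  {number_needed_to_treat:,.0f}")
```

Both figures describe exactly the same trial; only the absolute number conveys how rare the event was to begin with.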

To remedy the problems identified by their analysis, Hochman and McCormick believe that studies should report results in absolute numbers, either instead of or in addition to relative numbers, and that committees overseeing research studies should closely scrutinize study outcomes to ensure that lower-quality outcomes, like surrogate markers, are used only in appropriate circumstances.

So who’s to blame? The pharma companies for choosing outcomes most likely to indicate favorable results for their products, the study authors for writing them up that way, or the journals for accepting the manuscripts? Let us know what you think.