Rogue Reporting

According to an article just published in the Journal of General Internal Medicine, results of drug studies published in medical journals may be misleading.

The UCLA-Harvard study says that drug trials published in the most influential medical journals, including the New England Journal of Medicine, the Journal of the American Medical Association, The Lancet, the Annals of Internal Medicine, the British Medical Journal and the Archives of Internal Medicine, are frequently designed in a way that yields misleading or confusing results.

Investigators analyzed all the randomized drug trials published in the above journals between June 1, 2008, and Sept. 30, 2010, to determine the prevalence of outcome measures that make data interpretation difficult.  In addition, they reviewed each study’s abstract to determine the percentage that reported results using relative rather than absolute numbers, which can also be misleading.

They specifically looked at three outcome measures that have received increasing criticism from scientific experts: surrogate outcomes, composite outcomes and disease-specific mortality. They found that:

  • 37% of the studies analyzed used surrogate outcomes: intermediate markers, such as a heart medication’s ability to lower blood pressure, which may not be a good indicator of the medication’s impact on more important clinical outcomes, like heart attacks
  • 34% used composite outcomes, which lump together multiple individual outcomes of unequal importance, such as hospitalizations and mortality, making it difficult to understand the effect on each outcome individually
  • 27% used disease-specific mortality, which measures deaths from a specific cause rather than from any cause. This can be misleading because, even if a given treatment reduces one type of death, it could increase the risk of dying from another cause to an equal or greater extent

“Patients and doctors care less about whether a medication lowers blood pressure than they do about whether it prevents heart attacks and strokes or decreases the risk of premature death,” said the study’s lead author, Dr. Michael Hochman, a fellow in the Robert Wood Johnson Foundation Clinical Scholars Program in the division of general internal medicine and health services research at the David Geffen School of Medicine at UCLA, and at the U.S. Department of Veterans Affairs’ Los Angeles Medical Center.

Dr. Danny McCormick, the study’s senior author and a physician at the Cambridge Health Alliance and Harvard Medical School, added: “Patients also want to know, in as much detail as possible, what the effects of a treatment are, and this can be difficult when multiple outcomes of unequal importance are lumped together.”

The authors also found that 45% of exclusively commercially funded trials used surrogate endpoints, whereas only 29% of trials receiving non-commercial funding did. Furthermore, while 39% of exclusively commercially funded trials used disease-specific mortality, only 16% of trials receiving non-commercial funding did.

The study also showed that 44% of abstracts reported results in relative rather than absolute numbers, which can be misleading.  “The way in which study results are presented is critical,” McCormick said. “It’s one thing to say a medication lowers your risk of heart attacks from two-in-a-million to one-in-a-million, and something completely different to say a medication lowers your risk of heart attacks by 50 percent. Both ways of presenting the data are technically correct, but the second way, using relative numbers, could be misleading.”
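To make the arithmetic in that example concrete, here is a minimal illustrative sketch in Python. The two-in-a-million and one-in-a-million rates are the hypothetical figures from the quote above, not data from the study; the sketch simply shows how the same result yields both an absolute and a relative risk reduction:

```python
# Illustrative only: hypothetical event rates taken from the quote above.
control_risk = 2 / 1_000_000   # heart attack risk without the medication
treated_risk = 1 / 1_000_000   # heart attack risk with the medication

# Absolute risk reduction: how much the risk actually drops.
arr = control_risk - treated_risk          # 0.000001, i.e. one in a million

# Relative risk reduction: the same drop expressed as a fraction of baseline risk.
rrr = arr / control_risk                   # 0.5, i.e. "50% lower risk"

# Number needed to treat: how many patients must be treated to prevent one event.
nnt = 1 / arr                              # 1,000,000

print(f"Absolute risk reduction: {arr * 1_000_000:.0f} per million")
print(f"Relative risk reduction: {rrr:.0%}")
print(f"Number needed to treat:  {nnt:,.0f}")
```

Both figures describe exactly the same trial result; only the framing changes, which is why the way an abstract presents the numbers matters.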

To remedy the problems identified by their analysis, Hochman and McCormick believe that studies should report results in absolute numbers, either instead of or in addition to relative numbers, and that committees overseeing research studies should closely scrutinize study outcomes to ensure that lower-quality outcomes, like surrogate markers, are used only in appropriate circumstances.

So who’s to blame? The pharma companies for using outcomes that are most likely to indicate favorable results for their products, the study authors for writing them up that way, or the journals for accepting the manuscripts? Let us know what you think.

Clinical Research under scrutiny?

If you watched the news at all over the past week, you probably saw CNN’s Sanjay Gupta’s confrontation with disgraced doctor Andrew Wakefield. He, as you may recall, was the author of the 1998 study that linked autism to some childhood vaccines and set off a worldwide scare for parents.

In the intervening years there have been countless lawsuits against vaccine manufacturers, and millions of children have, perhaps needlessly, gone unvaccinated. Recently, an investigative report published in the British Medical Journal called the original study an elaborate fraud.

So, is Dr. Wakefield alone in manipulating clinical trial data? Can we rely on other clinical studies to provide us with the truth?

No, not according to researchers at Johns Hopkins. In a report published January 4th in the Annals of Internal Medicine, the authors concluded that the vast majority of published clinical trials of a given drug, device or procedure are routinely ignored by scientists conducting new research on the same topic.

Trials being done may not be justified, because researchers are not looking at, or at least not reporting, what is already known. In some cases, patients who volunteer for clinical trials may be given a placebo when a previous researcher has already determined that the medication works, or may be given a treatment that another researcher has shown to be of no value. In rare instances, patients have suffered severe side effects and even died in studies because researchers were not aware of previous studies documenting a treatment’s dangers.

Not surprising, then, that they go on to say, “the failure to consider existing evidence is both unscientific and unethical.”

The report argues that these omissions potentially skew scientific results, waste taxpayer money on redundant studies and involve patients in unnecessary research.

Conducting an analysis of published studies, the Johns Hopkins team concludes that researchers, on average, cited less than 21% of previously published, relevant studies in their papers. For papers with at least five prior publications available for citation, one-quarter cited only one previous trial, while another quarter cited no other previous trials on the topic. Those statistics stayed roughly the same even as the number of papers available for citation increased. Larger studies were no more likely to be cited than smaller ones.

“The extent of the discrepancy between the existing evidence and what was cited is pretty large and pretty striking,” said Karen Robinson, Ph.D., co-director of the Evidence Based Practice Center (EPIC) at the Johns Hopkins University School of Medicine and co-author of the research. “It’s like listening to one witness as opposed to the other 12 witnesses in a criminal trial and making a decision without all the evidence. Clinical trials should not be started — and cannot be interpreted — without a full accounting of the existing evidence.”

The Hopkins researchers could not say why prior trials failed to be cited, but Robinson says one reason for the omissions could be the self-interest of researchers trying to get ahead.

Want to make sure that your clinical trials stay on track and that your publications are evidence-based?

Contact SRxA for more details.