Published reports on the new drug Fanapt gave it high marks.
One medical journal emphasized its “comparable efficacy” to other drugs used to treat schizophrenia, and prominently noted a lower risk of certain side effects.
There was no mention that competing drugs outperformed it in three of the first clinical trials, or that in one trial a placebo worked just as well. The Journal of Clinical Psychopharmacology’s summary also made no mention of Fanapt’s tendency to disturb the heart’s electrical activity and increase risk of cardiac arrest.
The one-sided reporting, uncovered by researchers in Oregon, adds to growing evidence that medical journals paint an overly rosy picture of new drugs. The analysis, published this week in PLoS Medicine, found that many unfavorable results on psychiatric drugs never appeared in the articles doctors rely on to learn about trial results.
“It’s unsettling,” says lead author Dr. Erick Turner, a former drug reviewer for the federal Food and Drug Administration and now at the Portland Veterans Affairs Medical Center and Oregon Health & Science University.
The findings do not imply the drugs don’t work. Rather, they show that doctors and consumers don’t get a full, nuanced picture of these drugs and can’t make the best decisions without all the facts about safety and effectiveness.
The bias toward publishing positive results is a widespread problem for drug treatments of all kinds.
Trials with favorable outcomes were nearly five times more likely to be published than those without, researchers at the University of California San Francisco found in 2008 when they examined all the new drugs approved by the U.S. Food and Drug Administration in a two-year period. Medical journals frequently reported conclusions more favorable than those in reviews by the FDA.
“And that’s only focusing on half the picture,” says Lisa Bero, the UC professor who led the study. “The other half of that equation is safety. How much information about harm remains unpublished?”
Turner and co-authors obtained 24 clinical trials submitted to the FDA by drug companies seeking approval for eight second-generation anti-psychotic drugs: Abilify, Fanapt, Zyprexa, Invega, Seroquel, Risperdal, Risperdal Consta and Geodon. The Oregon researchers compared the FDA data – some obtained only after a request under the Freedom of Information Act – with medical journal articles.
Four studies submitted to the FDA were never published. All yielded negative results. In three, the newer anti-psychotic drug worked no better than an inactive placebo. In one, the new drug proved no more effective than an older, cheaper competing drug.
Some journal articles selectively left out unflattering results. Studies showing Fanapt statistically inferior to three competing drugs were never mentioned.
The lead author of the Fanapt paper, Dr. Steven Potkin of the University of California at Irvine, was traveling this week, an assistant said, and “unable to respond.”
Study authors who conduct clinical trials with funding from drug companies don’t necessarily have access to all of the data collected or the freedom to independently analyze findings, experts say. Bero, in her 2008 study, did not find any cases in which drug companies prohibited doctors from publishing trial results, but some researchers complained about foot-dragging. “It is clearly important that this should be published,” one clinical trial researcher said. “I have been and continue to be in contact with [the drug company] to see how this can be published.”
Novartis, the maker of Fanapt, in a written statement said it “is committed to transparently disclosing the results of all clinical trials, whatever their outcome, so that healthcare providers can make fully informed treatment decisions for their patients.” The company said safety and efficacy outcomes from all Novartis-sponsored Fanapt trials have been published in peer-reviewed journals or are on the FDA’s website. Novartis said studies finding Fanapt inferior to competing drugs “were not designed as head to head comparisons.”
The extent of bias in anti-psychotic studies was not as severe as what Turner and colleagues found for antidepressants in a 2008 study. Nearly a third of the drug companies’ clinical trials of antidepressants produced questionable or negative results that never appeared publicly in print.
Part of the problem rests with journal editors, who have a long history of favoring studies with positive results and rejecting those showing a treatment doesn’t work. Journals in recent years have tried to correct the bias.
To allow a more complete view of drug trial results, Congress in 2007 mandated a clinical trial database run by the National Institutes of Health. FDA spokeswoman Sandy Walsh said the agency “has initiated a number of transparency programs over the past few years to help inform the public of the agency’s activities while also preserving confidential information.”
Turner, Bero and others who have studied publication bias insist the FDA urgently needs to disclose more information — and make it easier for doctors and consumers to interpret.
“We need access to data that shows all of the outcomes,” Bero says. “Right now, the best place to get that is through regulatory agencies.”