Study finds including unpublished FDA data alters estimates of drug effectiveness
By David Jacobson / Fri Jan 20, 2012
Every year U.S. drug regulators approve dozens of new medicines as “safe and effective,” but just how effective are they? How well do they alleviate specific aspects of illness, whether light sensitivity from migraine headaches or itching from eczema?
For answers, many physicians and other health care providers turn to systematic reviews, which combine research about a drug’s efficacy at achieving health outcomes to come up with cumulative estimates. A key tool of these reviews is the meta-analysis, which statistically combines the data (e.g., blood pressure measurements, psychiatric test scores, rash cure rates) from multiple studies to seek more accurate answers.
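To make that concrete, here is a minimal sketch of how a fixed-effect meta-analysis pools trial results; the trial names and numbers below are hypothetical, not from the study:

```python
import math

# Hypothetical per-trial results for illustration only: each tuple is
# (trial name, effect estimate, standard error), e.g., mean reduction
# in systolic blood pressure (mmHg) versus placebo.
trials = [
    ("trial_A", -4.2, 1.1),
    ("trial_B", -5.0, 0.9),
    ("trial_C", -3.1, 1.4),
]

# Fixed-effect (inverse-variance) pooling: weight each trial by 1 / SE^2,
# so larger, more precise trials count for more.
weights = [1.0 / se**2 for _, _, se in trials]
pooled = sum(w * eff for (_, eff, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.2f} mmHg (SE {pooled_se:.2f})")
```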
But there is a catch to this effort at making medication decisions more evidence-based. Typically, such reviews rely solely on data published in scientific journals. UCSF School of Pharmacy faculty member and health policy expert Lisa Bero, PhD, and colleagues found that for nine drugs that were the subjects of such reviews over the past decade, adding unpublished data to their meta-analyses changed the estimates of their efficacy more than 90% of the time.
Where does this unpublished data come from? Drug makers seeking U.S. approval to sell a new medication must submit the results of all of their clinical trials (which compare the new drug’s efficacy to that of older drugs and/or a placebo) to the Food and Drug Administration (FDA), but not all of those findings are subsequently published in scientific journals.
Bero et al. re-crunched the meta-analysis numbers in a paper in the BMJ (formerly the British Medical Journal). Their results suggest that especially in the case of new drugs, for which information is limited and often favorably biased, excluding unpublished trial data from meta-analyses could make it harder for clinicians to accurately predict the drugs’ effects.
About this Article
UCSF Researchers: Lisa Bero, PhD, is a faculty member and vice chair for research in the Department of Clinical Pharmacy, UCSF School of Pharmacy, and a faculty member of the Philip R. Lee Institute for Health Policy Studies, UCSF School of Medicine. The study was co-authored by Beth Hart, a recent Doris Duke clinical research fellow at UCSF, and PhD student Andreas Lundh of the Nordic Cochrane Center in Denmark.
Journal citation: Hart B, Lundh A, Bero L, “Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses,” BMJ, Jan. 3, 2012. doi: 10.1136/bmj.d7202
Funding: Doris Duke Charitable Foundation.
The challenge
Previous studies by Bero and other researchers have documented reporting bias in published drug studies. This includes:
- Publication bias in which clinical trial results that show new drugs to be effective are more likely to be published in scientific journals than those that do not.
- Outcome reporting bias in which only some treatment outcome data (for instance, improvements in specific symptoms, at specific doses or time points, measured in certain ways) from clinical trials is published.
A 2008 study by Bero and UCSF colleagues found that nearly half of the unfavorable treatment outcomes in trial data submitted to the FDA for the approval of entirely new drugs in 2001 and 2002 were omitted from papers published over the next five years.
As Bero et al. note in their current BMJ article: “When unfavorable drug trials are not published, meta-analyses and systematic reviews that are based only on published data may overestimate the efficacy of the drugs.”
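A toy illustration of that mechanism, again with hypothetical numbers rather than data from the study: pooling only favorable published trials yields a larger apparent benefit than pooling the full evidence, including a near-null trial submitted to the FDA but never published in a journal.

```python
def pool(results):
    """Fixed-effect inverse-variance pooled estimate from (effect, SE) pairs."""
    weights = [1.0 / se**2 for _, se in results]
    return sum(w * eff for (eff, _), w in zip(results, weights)) / sum(weights)

# Hypothetical trials: three favorable published results plus one
# near-null trial that exists only in FDA records.
published = [(-4.2, 1.1), (-5.0, 0.9), (-3.1, 1.4)]
fda_only = [(-0.4, 1.0)]

print("published data only: ", round(pool(published), 2))             # -4.37
print("published + FDA data:", round(pool(published + fda_only), 2))  # -3.26
```

In this sketch the apparent benefit shrinks by roughly a quarter once the FDA-only trial is included, which is the kind of drug-by-drug, outcome-by-outcome shift the BMJ reanalysis set out to measure.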
Previous studies that revised meta-analyses of the effectiveness of a dozen antidepressants by including unpublished data found decreased efficacy and increased harm from the drugs. But the effect of reporting bias on meta-analyses had not previously been studied across a variety of drug classes.
The research
Bero and her co-authors screened 296 systematic reviews of drugs approved by the FDA in 2001 and 2002. They focused on drugs approved a decade earlier to allow time for studies of the drugs’ efficacy at achieving specific outcomes to accumulate in the literature and for meta-analyses to be built on those publications.
The published results were then supplemented with unpublished data in FDA records from the new drug approval process. The unpublished data is publicly available online but not in a standard format; it is sometimes incomplete, redacted, and hard to read, making data extraction difficult and time-consuming.
The researchers eventually identified nine drugs in six different classes—treatments for conditions ranging from migraine headaches to schizophrenia—with both published meta-analyses of their efficacy on certain health outcomes and unpublished FDA data about their effectiveness on the same outcomes.
The unpublished data covered 42 different health outcomes (e.g., migraine pain relief after one hour, improvement in psychiatric symptoms) for which meta-analyses of published data had estimated the extent of a drug’s effect.
Given previous findings of reporting biases favoring new drugs, the authors hypothesized that adding previously unpublished data to revised meta-analyses would reduce the drugs’ estimated efficacy.
The results
In more than 90% of the 42 outcomes, the inclusion of the FDA’s unpublished trial data in revised meta-analyses changed the estimates of a given drug’s efficacy.
But, contrary to the researchers’ hypothesis, those changes were decidedly mixed: estimates of drug efficacy increased as often as they decreased (for 46% of outcomes each), while the remaining 7% of outcomes showed no change in estimated effectiveness.
The size and significance of the changes in the drugs’ efficacy were also unpredictable and varied by drug—and even by outcomes for the same drug. For example, in the revised meta-analyses with unpublished data included:
- The migraine headache medication Relpax (eletriptan) showed a 37% increase in “pain relief at 30 minutes” but a 25% decrease in “pain-free at one hour.”
- Benicar (olmesartan medoxomil), a treatment for high blood pressure, showed a 37% increase in the effect of a 10-milligram dose on systolic pressure but a 24% decrease in the effect of a 5-milligram dose on diastolic pressure.
- Abilify (aripiprazole), a drug used to treat schizophrenia and other psychiatric disorders, yielded a 166% increase at improving scores on one scale used for measuring symptom severity (Positive and Negative Syndrome Scale, PANSS) but a 53% decrease in improving scores on another scale (Brief Psychiatric Rating Scale, BPRS).
In the one re-analysis of an outcome involving harm, adding unpublished data increased the negative effects (i.e., adverse events) from the topical eczema cream Elidel (pimecrolimus) by 49%.
The implications
The revised meta-analyses did not find that any of the drugs were ineffective (i.e., did not have statistically significant outcomes vs. placebo). But Bero and co-authors note that “changes in effect sizes may be more meaningful to clinicians and patients.” Indeed, given competing drug choices, knowing how well a medication achieves a particular outcome can be especially important.
The unpredictability of their revised meta-analyses’ results would seem to support the researchers’ case for increasing access to all FDA clinical trial data: “The effect of including unpublished data must be measured for each drug and each outcome as the important differences may be found for some outcomes but not others.”
And more unpublished data appears to mean less accurate predictions of drug effects. In a third of the revised meta-analyses, unpublished FDA trial data comprised more than half of all the combined statistics. In those cases, the median change in estimates of drug efficacy was 19%. In the other revised meta-analyses, with less unpublished data included, the median change in drug effect estimates was just 7%.
The study authors add that the FDA reviews of new drug applications that they mined for unpublished data were inadequate in many cases: “We excluded some meta-analyses from recalculation because we could not find usable data for the unpublished outcomes.”
The European Medicines Agency has recently agreed to make complete clinical study reports available to researchers. Bero and her co-authors call for the FDA to do the same, making all protocols and raw unpublished trial data readily accessible.
About the School: The UCSF School of Pharmacy aims to solve the most pressing health care problems and strives to ensure that each patient receives the safest, most effective treatments. Our discoveries seed the development of novel therapies, and our researchers consistently lead the nation in NIH funding. The School’s doctor of pharmacy (PharmD) degree program, with its unique emphasis on scientific thinking, prepares students to be critical thinkers and leaders in their field.