About this item:

Author Notes:

Dr Anisa Rowhani-Farid, Department of Practice, Sciences, and Health Outcomes Research, University of Maryland Baltimore, Baltimore, MD 21201, USA. Email: anisarowhani@gmail.com

AR-F designed and led the study, extracted and verified data, conducted the statistical analyses, wrote the first draft of the manuscript, edited the manuscript and is the guarantor of the study; KH extracted and verified data and edited the manuscript; MG extracted and verified data and edited the manuscript; JR provided statistical consulting for the study and edited the manuscript; ADZ verified data and edited the manuscript; JDW provided statistical consulting and edited the manuscript; and JSR designed the study, provided statistical consulting, edited the manuscript and provided mentorship for AR-F’s postdoctoral fellowship throughout the study.

When this work was conducted, the salaries of AR-F and KH were supported by the RIAT Support Center at the University of Maryland. The RIAT Support Center was supported by the Laura and John Arnold Foundation. KH was supported by the Food and Drug Administration (FDA) of the US Department of Health and Human Services (HHS) as part of a financial assistance award U01FD005946, unrelated to this manuscript, totalling US$5000 with 100% funded by FDA/HHS. The statistical support for this publication, provided by JR, was made possible by CTSA Grant Number UL1 TR001863 from the National Center for Advancing Translational Science (NCATS), a component of the National Institutes of Health (NIH). ADZ currently receives research support from the National Institute on Aging through the Duke Creating ADRD Researchers for the Next Generation—Stimulating Access to Research in Residency (CARiNG-StARR) programme (R38AG065762). JDW reported receiving grant support from the US Food and Drug Administration, Arnold Ventures, Johnson & Johnson through Yale University, and the National Institute on Alcohol Abuse and Alcoholism of the National Institutes of Health under award No. 1K01AA028258; he reported serving as a consultant for Hagens Berman Sobol Shapiro LLP and Dugan Law Firm APLC. JSR currently receives research support through Yale University from Johnson and Johnson to develop methods of clinical trial data sharing, from the Medical Device Innovation Consortium as part of the National Evaluation System for Health Technology (NEST), from the Food and Drug Administration for the Yale-Mayo Clinic Center for Excellence in Regulatory Science and Innovation (CERSI) programme (U01FD005938), from the Agency for Healthcare Research and Quality (R01HS022882), from the National Heart, Lung and Blood Institute of the National Institutes of Health (NIH) (R01HS025164, R01HL144644) and from the Laura and John Arnold Foundation to establish the Good Pharma Scorecard at Bioethics International; in addition, JSR is an expert witness at the request of Relator’s attorneys, the Greene Law Firm, in a qui tam suit alleging violations of the False Claims Act and Anti-Kickback Statute against Biogen. MG has no conflicts of interest to disclose.

Research Funding:

The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Keywords:

  • evidence-based practice
  • health services research
  • policy
  • Humans
  • Cross-Sectional Studies
  • Logistic Models
  • Research Design
  • Research Report
  • Sample Size
  • Health Services Research
  • Evidence-Based Practice

Consistency between trials presented at conferences, their subsequent publications and press releases

Journal Title:

BMJ Evidence-Based Medicine

Volume:

Volume 28, Number 2

Publisher:

BMJ Publishing Group

Pages:

95-102

Type of Work:

Article | Final Publisher PDF

Abstract:

Objective: This study examined the extent to which trials presented at major international medical conferences in 2016 consistently reported their study design, endpoints and results across conference abstracts, published article abstracts and press releases.

Design: Cross-sectional analysis of clinical trials presented at 12 major medical conferences in the USA in 2016. Conferences were identified from a list of the largest clinical research meetings aggregated by the Healthcare Convention and Exhibitors Association and were included if their abstracts were publicly available. From these conferences, all late-breaker clinical trials were included, as well as a random selection of all other clinical trials, such that the total sample included up to 25 trial abstracts per conference.

Main outcome measures: First, it was determined whether trials were registered and reported results in an International Committee of Medical Journal Editors-approved clinical trial registry. Second, it was determined whether trial results were published in a peer-reviewed journal. Finally, information on trial media coverage and press releases was collected using LexisNexis. For all published trials, the consistency of reporting of the following characteristics was examined by comparing the trials' conference and publication abstracts: primary efficacy endpoint definition, safety endpoint identification, sample size, follow-up period, primary endpoint effect size and characterisation of trial results. For all published abstracts with press releases, the characterisation of trial results across conference abstracts, press releases and publications was compared. Reporting was judged consistent when identical information was presented across abstracts and press releases. Primary analyses were descriptive; secondary analyses included χ² tests and multiple logistic regression.

Results: Among 240 clinical trials presented at 12 major medical conferences, 208 (86.7%) were registered, 95 (39.6%) reported summary results in a registry and 177 (73.8%) were published; 82 (34.2%) were covered by the media and 68 (28.3%) had press releases. Among the 177 published trials, 171 (96.6%) reported the definition of primary efficacy endpoints consistently across conference and publication abstracts, whereas 96/128 (75.0%) consistently identified safety endpoints. There were 107/172 (62.2%) trials with consistent sample sizes across conference and publication abstracts, 101/137 (73.7%) that reported their follow-up periods consistently, 92/175 (52.6%) that described their effect sizes consistently and 157/175 (89.7%) that characterised their results consistently. Among the trials that were published and had press releases, 32/32 (100%) characterised their results consistently across conference abstracts, press releases and publication abstracts. No trial characteristics were associated with reporting primary efficacy endpoints consistently.

Conclusions: For clinical trials presented at major medical conferences, primary efficacy endpoint definitions were consistently reported and results were consistently characterised across conference abstracts, registry entries and publication abstracts; consistency rates were lower for sample sizes, follow-up periods and effect size estimates.

Registration: This study was registered at the Open Science Framework (https://doi.org/10.17605/OSF.IO/VGXZY).
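The abstract notes that secondary analyses used χ² tests and multiple logistic regression to test whether trial characteristics were associated with consistent reporting. The sketch below, which is not the authors' code, illustrates that kind of analysis in Python under assumed, hypothetical variable names (late_breaker, registered, had_press_release, consistent_primary_endpoint) and simulated data sized to the 240 sampled trials.

```python
# Minimal sketch of a chi-square test and multiple logistic regression of the
# kind described in the abstract's secondary analyses. All column names and
# data are hypothetical; replace with the real per-trial dataset.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 240  # number of sampled conference trials in the abstract

# Simulated per-trial characteristics (binary indicators)
df = pd.DataFrame({
    "late_breaker": rng.integers(0, 2, n),
    "registered": rng.integers(0, 2, n),
    "had_press_release": rng.integers(0, 2, n),
})

# Simulated outcome: consistent reporting of the primary efficacy endpoint
linear_predictor = -0.2 + 0.3 * df["late_breaker"] + 0.5 * df["registered"]
prob = 1 / (1 + np.exp(-linear_predictor))
df["consistent_primary_endpoint"] = rng.binomial(1, prob.to_numpy())

# Chi-square test: consistency of reporting by late-breaker status
table = pd.crosstab(df["late_breaker"], df["consistent_primary_endpoint"])
chi2, pval, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {pval:.3f}")

# Multiple logistic regression: trial characteristics vs consistent reporting
model = smf.logit(
    "consistent_primary_endpoint ~ late_breaker + registered + had_press_release",
    data=df,
).fit(disp=False)
print(model.params)          # log-odds coefficients
print(np.exp(model.params))  # odds ratios
```

With simulated data the estimates are meaningless; the point is only the shape of the analysis, which in the study found no trial characteristics associated with consistent reporting of primary efficacy endpoints.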

Copyright information:

This is an Open Access work distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/).