About this item:


Author Notes:

Email: cecile.janssens@emory.edu

Author Contributions: Conceptualization: A. Cecile J. W. Janssens.

Investigation: A. Cecile J. W. Janssens, Kimberly R. Powell.

Methodology: A. Cecile J. W. Janssens, Kimberly R. Powell.

Resources: A. Cecile J. W. Janssens.

Supervision: A. Cecile J. W. Janssens.

Visualization: A. Cecile J. W. Janssens.

Writing – original draft: A. Cecile J. W. Janssens.

Writing – review & editing: A. Cecile J. W. Janssens, Michael Goodman, Kimberly R. Powell, Marta Gwinn.

ACJWJ has filed a patent application for the method used to perform the co-citation ranking.

The other authors declare no conflict of interest.

Research Funding:

The author(s) received no specific funding for this work.

Keywords:

  • Citation analysis
  • Bibliometrics
  • Scientific publishing
  • Algorithms
  • Meta-analysis
  • Article-level metrics
  • Health care policy
  • Peer review

A critical evaluation of the algorithm behind the Relative Citation Ratio (RCR).

Journal Title:

PLoS Biology

Volume:

Volume 15, Number 10

Publisher:

Public Library of Science

Pages:

e2002536

Type of Work:

Article | Final Publisher PDF

Abstract:

The influence of scientific publications is increasingly assessed using quantitative approaches, but most available metrics have limitations that hamper their utility [1]. Hutchins and colleagues recently proposed the Relative Citation Ratio (RCR) [2], which compares the citation rate of an article against the citation rate expected for its field. The metric is an attractive and intuitive way to indicate whether an article is cited more or less frequently than its “peer” publications. As a ratio of rates, the RCR is an article-level metric that is field- and time-independent and strongly correlated with peer review evaluations [2]. The metric was central to the proposed (and withdrawn) grant management policy of the National Institutes of Health (NIH) [3], even though the RCR has been criticized for lacking a theoretical model, having insufficient transparency, and correlating poorly with other peer evaluations [4,5]. We analyzed the algorithm behind the RCR and report several concerns about the calculation of the metric that may limit its utility.
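The abstract describes the RCR as a ratio of an article's citation rate to the citation rate expected for its field. A minimal sketch of that core idea follows; this is a deliberate simplification for illustration only, not the NIH's full algorithm (which derives the field expectation from a co-citation network), and the function names and example numbers are hypothetical.

```python
def citation_rate(citations: int, years_since_publication: int) -> float:
    """Citations accrued per year since publication."""
    if years_since_publication <= 0:
        raise ValueError("article must be at least one year old")
    return citations / years_since_publication


def relative_citation_ratio(article_rate: float, expected_field_rate: float) -> float:
    """Ratio of rates: RCR > 1 means the article is cited more
    frequently than the expectation for its field; RCR < 1 means less."""
    if expected_field_rate <= 0:
        raise ValueError("expected field rate must be positive")
    return article_rate / expected_field_rate


# Hypothetical example: an article with 30 citations over 5 years,
# in a field whose expected rate is 4 citations per year.
rate = citation_rate(30, 5)               # 6.0 citations/year
rcr = relative_citation_ratio(rate, 4.0)  # 1.5: cited 1.5x the field expectation
```

Because both numerator and denominator are rates over the same time window, the ratio is nominally independent of an article's age and field, which is what makes the metric intuitively attractive.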

Copyright information:

© 2017 Janssens et al.

This is an Open Access work distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).