About this item:

Author Notes:

Address for Correspondence: Joshua Wallenstein, MD, Emory University, 68 Armstrong St., Atlanta, GA 30303. Email: jwalle2@emory.edu.

The authors acknowledge the assistance of the Emory University School of Medicine Clinical Skills Center.

By the WestJEM article submission agreement, all authors are required to disclose all affiliations, funding sources and financial or management relationships that could be perceived as potential sources of bias. The authors disclosed none.

Title:

Objective Structured Clinical Examinations Provide Valid Clinical Skills Assessment in Emergency Medicine Education

Journal Title:

Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health

Volume:

Volume 16, Number 1

Pages:

121-126

Type of Work:

Article | Final Publisher PDF

Abstract:

Introduction: Evaluation of emergency medicine (EM) learners based on observed performance in the emergency department (ED) is limited by factors such as reproducibility and patient safety. EM educators depend on standardized and reproducible assessments such as the objective structured clinical examination (OSCE). The validity of the OSCE as an evaluation tool in EM education has not been previously studied. The objective was to assess the validity of a novel management-focused OSCE as an evaluation instrument in EM education through demonstration of performance correlation with established assessment methods and case item analysis. Methods: We conducted a prospective cohort study of fourth-year medical students enrolled in a required EM clerkship. Students completed a five-station EM OSCE. We used Pearson's correlation coefficient to correlate OSCE performance with performance in the ED based on completed faculty evaluations. Indices of difficulty and discrimination were computed for each scoring item. Results: We found a moderate and statistically significant correlation between OSCE score and ED performance score [r(239) = 0.40, p < 0.001]. Of the 34 OSCE testing items, the mean index of difficulty was 63.0 (SD = 23.0) and the mean index of discrimination was 0.52 (SD = 0.21). Conclusion: Student performance on the OSCE correlated with their observed performance in the ED, and indices of difficulty and discrimination demonstrated alignment with published best-practice testing standards. This evidence, along with other attributes of the OSCE, attests to its validity. Our OSCE can be further improved by modifying testing items that performed poorly and by examining and maximizing the inter-rater reliability of our evaluation instrument.
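For readers who want to see how the statistics named in the abstract are typically computed, a minimal Python sketch follows. It is not the authors' code: the data are simulated, and the variable names and the upper/lower 27% grouping convention for the discrimination index are illustrative assumptions rather than details from the paper.

# Minimal sketch (simulated data, not the authors' code): Pearson correlation
# between OSCE and ED scores, plus per-item difficulty and discrimination.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical cohort: 241 students (so df = 239), 34 dichotomous OSCE items.
items = (rng.random((241, 34)) < 0.63).astype(int)   # 1 = item passed
osce_scores = items.sum(axis=1)                      # total OSCE score
ed_scores = osce_scores + rng.normal(0, 5, 241)      # stand-in ED evaluations

# Pearson's r; the paper reports r(239) = 0.40, p < 0.001.
r, p = pearsonr(osce_scores, ed_scores)
print(f"r({len(osce_scores) - 2}) = {r:.2f}, p = {p:.3g}")

# Difficulty index: percentage of examinees who passed each item (0-100).
difficulty = items.mean(axis=0) * 100

# Discrimination index: proportion passing among the top 27% of total scorers
# minus the proportion passing among the bottom 27% (a common convention).
order = np.argsort(osce_scores)
k = round(0.27 * len(osce_scores))
low, high = items[order[:k]], items[order[-k:]]
discrimination = high.mean(axis=0) - low.mean(axis=0)

print(f"difficulty: mean {difficulty.mean():.1f} (SD {difficulty.std(ddof=1):.1f})")
print(f"discrimination: mean {discrimination.mean():.2f} (SD {discrimination.std(ddof=1):.2f})")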

Copyright information:

© 2015 the authors.

This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License (http://creativecommons.org/licenses/by-nc-nd/3.0/), which permits distribution, public display, public performance, and the making of multiple copies, provided the original work is properly cited. This license requires that copyright and license notices be kept intact and that credit be given to the copyright holder and/or author. This license prohibits exercising rights for commercial purposes.
