Introduction: Interprofessional communication failures are estimated to be a factor in two-thirds of serious health care-related accidents. Using a standardized communication protocol during transfer of patient information between providers improves patient safety. An interprofessional education (IPE) event for first-year health professions students was designed using the Situation, Background, Assessment, Recommendation (SBAR) tool as a structured communication framework. IPE literature, including a validated measurement tool specifically tailored to SBAR, was used to design the Interprofessional Team Training Day (ITTD) and evaluate learner gains in SBAR skills. Methods: Learners from six educational programs participated in ITTD, which consisted of didactics, small-group discussion, and role-play using the SBAR protocol. Individual learners' SBAR communication skills were assessed using the SBAR Brief Assessment Rubric for Learner Assessment (SBAR-LA) before and after the ITTD event. Learners received a written clinical vignette and submitted video recordings of themselves simulating the use of SBAR to communicate with another health care professional. Pre- and post-event recordings were scored using the SBAR-LA rubric. Normalized gain scores were calculated to estimate the improvement attributable to ITTD. Results: SBAR-LA scores increased for 60% of participants. For skills not demonstrated before the event, the average learner acquired 44% of those skills through ITTD. Learners demonstrated statistically significant increases on five of 10 SBAR-LA skills. Discussion: Structured communication between health care providers has established value for patient safety; however, evaluating how effectively IPE teaches communication skills is challenging. Using the SBAR-LA, communication skills were shown to improve following ITTD.
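The normalized gain mentioned in the Results (the average learner acquired 44% of the skills not demonstrated beforehand) is conventionally computed as the fraction of initially unattained points that were gained. A minimal sketch of that standard formula, using illustrative numbers rather than the study's data:

```python
def normalized_gain(pre, post, max_score):
    """Normalized gain: share of initially unattained points that were gained."""
    if max_score == pre:
        return 0.0  # already at ceiling; no room for improvement
    return (post - pre) / (max_score - pre)

# Illustrative only: a learner scoring 4/10 before and 8/10 after
# gains 4 of the 6 available points, a normalized gain of ~0.67.
g = normalized_gain(pre=4, post=8, max_score=10)
```

Averaging this quantity across learners yields the "percentage of available skills acquired" interpretation used in the abstract.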
Introduction Recent findings suggest that process- and outcome-based efficacy beliefs are factorially distinct, with differential effects on team performance. This study extends this work by examining team process and outcome efficacy (TPE, TOE) of interprofessional (IP) care teams over time. Methods A within-team, repeated measures design with survey methodology was implemented in a sample of prelicensure IP care teams performing over three consecutive clinical simulation scenarios. TPE and TOE were assessed before and after each performance episode. Results Initial baseline results replicated the discriminant validity of TPE and TOE as separate factors. Further findings from multilevel modelling indicated significant time effects for TPE convergence, but not TOE convergence. However, a cross-level interaction effect of 'TOE (Start-Mean) × Time' strengthened TOE convergence over time. A final follow-up analysis of team agreement's substantive impact was conducted using independent faculty-observer ratings of teams' final simulation. Conclusion Independent-sample t-tests of high/low-agreement teams supported agreement's substantive impact: high-agreement teams were rated as significantly better performers than low-agreement teams during the final simulation training. We discuss the substantive merit of within-team agreement as a methodological indicator of team functionality within IP and healthcare simulation training at large.
Introduction The setting demands imposed by performing in new, interdisciplinary cultures are common for modern healthcare workers. Both health science students and practising workers are required to operate in professional cultures that differ from their own. As health organisations have placed increasing value on mindfulness for improving performance outcomes, so too have educational administrators embraced common mindful competencies in training aimed at improved patient outcomes. The training of future clinicians for diversified care teams and patient populations has become known as interprofessional education (IPE). Although the goals of IPE suggest that individual differences in trait mindfulness may serve as an important determinant of training effectiveness, trait mindfulness has gone unstudied in extant simulation training research. Methods To fill this gap, we examine trait mindfulness' predictive power for training outcomes across two IPE cohort samples using two prospective observational designs. Results Study 1's findings supported trait mindfulness' prediction of perceived teamwork behaviours in training simulations between medical and nursing students (n=136). In study 2's expanded sample of five health professions (n=232), findings extended trait mindfulness' prediction to team efficacy and skill transfer, assessed 1 month after training. Conclusion A final follow-up assessment 16 months later extended mindfulness' predictive validity to knowledge retention and teamwork attitudes. We discuss the theoretical and practical implications of our findings for advancing mindfulness research and IPE effectiveness assessment.
Introduction
Structured communication tools are associated with improvement in information transfer and lead to improved patient safety. Situation, Background, Assessment, Recommendation (SBAR) is one such tool. Because there is a paucity of instruments to measure SBAR effectiveness, we developed and validated an assessment tool for use with prepractice health professions students.
Methods
We developed the SBAR Brief Assessment Rubric for Learner Assessment (SBAR-LA) by starting with a preliminary list of items based on the SBAR framework. During an interprofessional team training event, students were trained in the use of SBAR. Subsequently, they were assigned to perform a simulated communication scenario demonstrating use of SBAR principles. We used 10 videos from these scenarios to refine the items and scales over two rounds. Finally, we applied the instrument on another subset of 10 students to conduct rater calibration and measure interrater reliability.
Results
We used a total of 20 out of 225 videos of student performance to create the 10-item instrument. Overall interrater reliability was .672, and Fleiss' kappa was rated good or fair for eight of the 10 items.
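Fleiss' kappa, the multi-rater agreement statistic reported above, can be computed from a subjects-by-categories matrix of rating counts. A minimal sketch with toy data (the matrices shown are illustrative, not the study's ratings):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for agreement among multiple raters.

    counts[i][j] = number of raters assigning category j to subject i;
    every row must sum to the same number of raters n.
    """
    N = len(counts)        # number of subjects rated
    n = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of rating categories
    # mean observed per-subject agreement
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts) / N
    # expected chance agreement from marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Toy data: three raters in perfect agreement on three subjects -> kappa = 1.0
kappa = fleiss_kappa([[3, 0], [0, 3], [3, 0]])
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate chance-level agreement.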
Discussion
We developed a scoring rubric for teaching SBAR communication that met criteria for validity and demonstrated adequate interrater reliability. Our development process provided evidence of validity for the content, construct, and response process used. Additional evidence from the use of SBAR-LA in settings where communication skills can be directly observed, such as simulation and clinical environments, may further enhance the instrument's accuracy. The SBAR-LA is a valid and reliable instrument to assess student performance.
Introduction: Creating a racially and ethnically diverse workforce remains a challenge for medical specialties, including emergency medicine (EM). One area to examine is a partnership between a predominantly white institution (PWI) and a historically black college and university (HBCU) to determine whether such a partnership would increase the number of underrepresented in medicine (URiM) students in EM who come from an HBCU. Methods: Twenty years ago the Emory Department of Emergency Medicine began its collaboration with Morehouse School of Medicine (MSM) to provide guidance to MSM students who were interested in EM. Since its inception, our engagement and intervention have evolved over time to include mentorship and guidance from the EM clerkship director, program director, and key faculty. Results: Since the beginning of the MSM-Emory EM partnership, 115 MSM students have completed an EM clerkship at Emory. Seventy-two of those students (62.6%) have successfully matched into an EM residency program. Of those who matched into EM, 22 (32%) joined the Emory EM residency program, with the remaining 50 students matching at 40 other EM programs across the nation. Conclusion: Based on our experience and outcomes with the Emory-MSM partnership, we are confident that a partnership with an HBCU without an EM residency should be considered by residency programs seeking to increase the number of URiM students in EM, an approach that could perhaps translate to other specialties.
INTRODUCTION: The emergency medicine clerkship director serves an important role in the education of medical students. The authors sought to update the demographic and academic profile of the emergency medicine clerkship director. METHODS: We developed and implemented a comprehensive questionnaire and used it to survey all emergency medicine clerkship directors at United States allopathic medical schools accredited by the Liaison Committee on Medical Education. We analyzed and interpreted data using descriptive statistics. RESULTS: One hundred seven of 133 (80.4%) emergency medicine clerkship directors completed the survey. Clerkship directors' mean age was 39.7 years (SD = 7.2); they were more commonly male (68.2%), of Caucasian racial background, and at the instructor or assistant professor level (71.3%). The mean number of years of experience as clerkship director was 5.5 (SD = 4.5). The mean amount of protected time for clerkship administration reported by respondents was 7.3 hours weekly (SD = 5.1), with the majority (53.8%) reporting 6 or more hours of protected time per week. However, 32.7% of emergency medicine clerkship directors reported not having any protected time for clerkship administration. Most clerkship directors (91.6%) held additional teaching responsibilities beyond their clerkship, and many were involved in educational research (49.5%). The majority (79.8%) reported being somewhat or very satisfied with their job as clerkship director. CONCLUSION: Most clerkship directors were junior faculty at the instructor or assistant professor rank and were involved with a variety of educational endeavors beyond the clerkship.
OBJECTIVE: To use 360-degree evaluations within an Objective Structured Clinical Examination (OSCE) to assess medical student comfort level and communication skills with intimate partner violence (IPV) patients. METHODS: We assessed a cohort of fourth-year medical students' performance using an IPV standardized patient (SP) encounter in an OSCE. Blinded pre- and post-tests determined the students' knowledge of and comfort level with core IPV assessment. Students, SPs, and investigators completed a 360-degree evaluation that focused on each student's communication and competency skills. We computed frequencies, means, and correlations. RESULTS: Forty-one students participated in the SP exercise during three separate evaluation periods. Results showed a nonsignificant increase in students' comfort level from pre-test (2.7) to post-test (2.9). Although 88% of students screened for IPV and 98% asked about the injury, only 39% asked about verbal abuse, 17% asked if the patient had a safety plan, and 13% communicated to the patient that IPV is illegal. Using Likert scoring on the competency and overall evaluation (1, very poor, to 5, very good), the mean score for each evaluator was 4.1 (competency) and 3.7 (overall). The correlations between trainee comfort level and the specific competencies of patient care, communication skill, and professionalism were positive and significant (p<0.05). CONCLUSION: Students felt somewhat comfortable caring for patients with IPV. OSCEs with SPs can be used to assess student competencies in caring for patients with IPV.
Introduction: Evaluation of emergency medicine (EM) learners based on observed performance in the emergency department (ED) is limited by factors such as reproducibility and patient safety. EM educators depend on standardized and reproducible assessments such as the objective structured clinical examination (OSCE). The validity of the OSCE as an evaluation tool in EM education has not been previously studied. The objective was to assess the validity of a novel management-focused OSCE as an evaluation instrument in EM education through demonstration of performance correlation with established assessment methods and case item analysis.
Methods: We conducted a prospective cohort study of fourth-year medical students enrolled in a required EM clerkship. Students completed a five-station EM OSCE. We used Pearson's coefficient to correlate OSCE performance with performance in the ED based on completed faculty evaluations. Indices of difficulty and discrimination were computed for each scoring item.
Results: We found a moderate and statistically significant correlation between OSCE score and ED performance score [r(239) = 0.40, p < 0.001]. Of the 34 OSCE testing items, the mean index of difficulty was 63.0 (SD = 23.0) and the mean index of discrimination was 0.52 (SD = 0.21).
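The correlation reported in the Results is the sample Pearson coefficient; a minimal self-contained sketch (the paired scores below are made up for illustration, not the study's data):

```python
def pearson_r(x, y):
    """Sample Pearson correlation between paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical OSCE scores vs. ED evaluation scores for five students
osce = [61, 72, 55, 80, 68]
ed = [3.1, 3.8, 2.9, 4.2, 3.5]
r = pearson_r(osce, ed)
```

Values near 0.40, as in the study, indicate a moderate positive linear association.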
Conclusion: Student performance on the OSCE correlated with their observed performance in the ED, and indices of difficulty and discrimination demonstrated alignment with published best-practice testing standards. This evidence, along with other attributes of the OSCE, attests to its validity. Our OSCE can be further improved by modifying testing items that performed poorly and by examining and maximizing the interrater reliability of our evaluation instrument.
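The item statistics in this abstract are commonly defined as follows: difficulty is the percentage of examinees scoring an item correct (hence the 0-100 scale), and discrimination is the proportion correct in the top-scoring group minus that in the bottom group. A sketch under those standard definitions (the 27% grouping fraction and all data are illustrative assumptions, not necessarily the study's method or data):

```python
def item_difficulty(item_scores):
    """Index of difficulty: percentage of examinees who scored the item correct."""
    return 100 * sum(item_scores) / len(item_scores)

def item_discrimination(item_scores, total_scores, frac=0.27):
    """Upper-lower index of discrimination: proportion correct among the
    top-scoring examinees minus proportion correct among the bottom group."""
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    g = max(1, int(frac * len(total_scores)))
    low = sum(item_scores[i] for i in order[:g]) / g
    high = sum(item_scores[i] for i in order[-g:]) / g
    return high - low

# Illustrative: 10 examinees' 0/1 scores on one item, and their exam totals
item = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1]
totals = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

An item answered correctly by most examinees has high difficulty-index values (easier item); an item that high scorers get right and low scorers get wrong has strong positive discrimination.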
Medical education faces a growing number of challenges. The playing field that most of us know and recognize has been evolving over the past decade. Many of the truths we knew as educators are no longer accurate, and we are faced with educating our learners in this new environment. Accreditation standards through national organizations are more rigorous and based on attainment of competency; therefore, outcome-based education has become a key factor. The Accreditation Council for Graduate Medical Education (ACGME) introduced the six domains of clinical competency to the profession, and in 2009 it began a multiyear process of restructuring its accreditation system to be based on educational outcomes in these competencies.1 The Liaison Committee on Medical Education, in standard 6.1 of its Functions and Structure of a Medical School, states that "the faculty of a medical school define its medical education program objectives in outcome-based terms that allow the assessment of medical students' progress in developing the competencies that the profession and the public expect of a physician."2 Both undergraduate and graduate medical education accreditation agencies are focusing on educational outcomes. It is no longer enough to demonstrate that your learners performed the skills; now you must document achievement of those competencies. Our clinical environment is less conducive to concentrating on education because of documentation, billing requirements, and the sheer patient volume in our emergency departments.3-4 Evolving educational pedagogy focuses more on small groups and simulation and less on large-group formats. These challenges are opportunities for educators, but they require new strategies, and research to determine the best approach.