Yeates, P, Cope, N, Hawarden, A, Bradshaw, H, McCray, G and Homer, M (2019) Developing a video-based method to compare and adjust examiner effects in fully nested OSCEs. Medical Education, 53 (3). pp. 250-263. ISSN 0308-0110

Text
Accepted full manuscript clean.docx - Accepted Version
Available under License Creative Commons Attribution Non-commercial.

Download (443kB)

Abstract

Background:
Whilst averaging across multiple examiners' judgements reduces unwanted overall score variability in Objective Structured Clinical Examinations (OSCEs), designs involving several parallel circuits of the OSCE require that different examiner-cohorts collectively judge performances to the same standard in order to avoid bias. Prior research suggests the potential for important examiner-cohort effects in distributed or national exams, which could compromise fairness or patient safety. Despite their importance, these effects are rarely investigated, as fully nested assessment designs make them very difficult to study. We describe the initial use of a new method to measure and adjust for examiner-cohort effects on students' scores.

Methods:
We developed Video-based Examiner Score Comparison and Adjustment (VESCA): volunteer students were filmed "live" on 10 out of 12 OSCE stations. Following the examination, examiners additionally scored station-specific common-comparator videos, producing partial crossing between examiner-cohorts. Many-Facet Rasch Modelling and Linear Mixed Modelling were used to estimate and adjust for examiner-cohort effects on students' scores.
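
To illustrate the adjustment step, the following is a minimal sketch (not the authors' analysis code) of how a linear mixed model could estimate examiner-cohort effects from such partially crossed data, here using Python's statsmodels. The data file and column names (score, cohort, student) are hypothetical.

    # Minimal sketch, not the authors' code: estimate examiner-cohort
    # effects with a linear mixed model. Assumes a hypothetical
    # long-format table with columns 'score', 'cohort' and 'student'.
    # The common-comparator video scores provide the partial crossing
    # that makes the cohort effect estimable.
    import pandas as pd
    import statsmodels.formula.api as smf

    data = pd.read_csv("osce_scores.csv")  # hypothetical file

    # Fixed effect of examiner-cohort; random intercept per student
    # to account for student ability.
    model = smf.mixedlm("score ~ C(cohort)", data, groups=data["student"])
    result = model.fit()
    print(result.summary())

    # Adjusted scores: subtract each cohort's estimated effect so all
    # students are expressed against a common examiner standard.
    effects = {name.split("T.")[1].rstrip("]"): coef
               for name, coef in result.fe_params.items()
               if name.startswith("C(cohort)")}
    data["adjusted"] = data.apply(
        lambda r: r["score"] - effects.get(str(r["cohort"]), 0.0), axis=1)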

Results:
After accounting for students' ability, examiner-cohorts differed substantially in their stringency/leniency (maximal global score difference of 0.47 out of 7.0 (Cohen's d = 0.96); maximal total percentage score difference of 5.7% (Cohen's d = 1.06) for the same student ability judged by different examiner-cohorts). Corresponding adjustment of students' global and total percentage scores altered the theoretical classification of 6.0% of students on both measures (either pass to fail or fail to pass), whilst 8.6-9.5% of students' scores were altered by at least 0.5 standard deviations of student ability.
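
As a worked illustration of the reported effect sizes (the pooled standard deviations below are back-calculated from the figures above, not taken from the paper): Cohen's d is the mean difference divided by the pooled standard deviation, so a maximal cohort difference of 0.47 on the 7-point global scale with d = 0.96 implies a pooled SD of roughly 0.49.

    # Worked illustration; SD values are back-calculated, not reported.
    # Cohen's d = mean difference / pooled standard deviation.
    diff_global = 0.47                    # maximal difference, 7-point scale
    d_global = 0.96
    sd_global = diff_global / d_global    # implied pooled SD ~ 0.49

    diff_pct = 5.7                        # maximal difference, % score
    d_pct = 1.06
    sd_pct = diff_pct / d_pct             # implied pooled SD ~ 5.4%

    print(f"implied SD (global score): {sd_global:.2f}")
    print(f"implied SD (percentage score): {sd_pct:.1f}%")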

Conclusions:
Despite typical reliability, the examiner-cohort which students encountered had a potentially important influence on their score, emphasising the need for adequate sampling and examiner training. Development and validation of VESCA may offer a means to measure and/or adjust for potential systematic differences in scoring patterns which could exist between locations in distributed or national OSCE exams, thereby ensuring equivalence and fairness.

Item Type: Article
Additional Information: This is the accepted author manuscript (AAM). The final published version (version of record) is available online via Wiley at https://doi.org/10.1111/medu.13783 - please refer to any applicable terms of use of the publisher.
Uncontrolled Keywords: assessment, OSCEs, assessor variability, psychometrics
Subjects: R Medicine > R Medicine (General) > R735 Medical education. Medical schools. Research
Divisions: Faculty of Medicine and Health Sciences > School of Medicine
Depositing User: Symplectic
Date Deposited: 09 Nov 2018 08:32
Last Modified: 21 Dec 2019 01:30
URI: https://eprints.keele.ac.uk/id/eprint/5493
