Determining the influence of different linking patterns on the stability of students' score adjustments produced using Video-based Examiner Score Comparison and Adjustment (VESCA).

Yeates, Peter; McCray, Gareth; Moult, Alice; Cope, Natalie; Fuller, Richard; McKinley, Robert

Abstract

BACKGROUND: Ensuring equivalence of examiners' judgements across different groups of examiners is a priority for large-scale performance assessments in clinical education, both to enhance fairness and to reassure the public. This study extends insight into an innovation called Video-based Examiner Score Comparison and Adjustment (VESCA), which uses video scoring to link otherwise unlinked groups of examiners. This linkage enables comparison of the influence of different examiner groups within a common frame of reference and provision of adjusted "fair" scores to students. Whilst this innovation promises substantial benefit to quality assurance of distributed Objective Structured Clinical Exams (OSCEs), questions remain about how the resulting score adjustments might be influenced by the specific parameters used to operationalise VESCA. Research questions: How similar are estimates of students' score adjustments when the model is run with either: 1/ fewer comparison videos per participating examiner, or 2/ reduced numbers of participating examiners?

METHODS: Using secondary analysis of recent research which used VESCA to compare scoring tendencies of different examiner groups, we made numerous copies of the original data then selectively deleted video scores to reduce either 1/ the number of linking videos per examiner (4 versus several permutations of 3, 2, or 1 videos) or 2/ the examiner participation rate (all participating examiners (76%) versus several permutations of 70%, 60%, or 50% participation). After analysing all resulting datasets with Many Facet Rasch Modelling (MFRM), we calculated students' score adjustments for each dataset and compared these with the score adjustments in the original data using Spearman's correlations.

RESULTS: Students' score adjustments derived from 3 videos per examiner correlated highly with score adjustments derived from 4 linking videos (median Rho = 0.93, IQR 0.90-0.95, p < 0.001), with 2 linking videos (median Rho = 0.85, IQR 0.81-0.87, p < 0.001) and 1 linking video (median Rho = 0.52, IQR 0.46-0.64, p < 0.001) producing progressively smaller correlations. Score adjustments were similar for 76% examiner participation versus 70% (median Rho = 0.97, IQR 0.95-0.98, p < 0.001) and 60% (median Rho = 0.95, IQR 0.94-0.98, p < 0.001) participation, but were lower and more variable for 50% examiner participation (median Rho = 0.78, IQR 0.65-0.83, some correlations non-significant).

CONCLUSIONS: Whilst VESCA showed some sensitivity to the examined parameters, modest reductions in examiner participation rates or video numbers produced highly similar results. Employing VESCA in distributed or national exams could enhance quality assurance or exam fairness.
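The deletion-and-comparison step of the Methods can be sketched in outline. This is a minimal illustration, not the authors' code: the MFRM re-estimation is replaced here by a placeholder noise model (assumed for illustration only), and all names and values are hypothetical. Only the final comparison, Spearman's correlation between full-data and reduced-data adjustments, mirrors the study's analysis.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical score adjustments for 100 students, as produced by the
# full 4-video linkage (placeholder values; the real ones come from MFRM).
n_students = 100
full_adjustments = rng.normal(loc=0.0, scale=1.0, size=n_students)

def reduced_adjustments(full, noise_sd, rng):
    """Stand-in for re-estimating adjustments from a dataset with linking
    videos deleted. The study re-ran the Many Facet Rasch Model on each
    reduced copy; here we simply add noise so the comparison step can run."""
    return full + rng.normal(loc=0.0, scale=noise_sd, size=full.shape)

# Fewer linking videos -> noisier adjustments -> weaker rank agreement.
for n_videos, noise_sd in [(3, 0.3), (2, 0.6), (1, 1.5)]:
    rho, p = spearmanr(full_adjustments,
                       reduced_adjustments(full_adjustments, noise_sd, rng))
    print(f"{n_videos} linking video(s): Spearman rho = {rho:.2f}, p = {p:.2g}")
```

In the actual study, each reduced dataset would be re-fitted with MFRM before the correlation is computed; the noise model above only stands in for that step so the comparison logic is runnable.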

Journal Article Type: Article
Acceptance Date: Jan 5, 2022
Online Publication Date: Jan 17, 2022
Publication Date: Jan 17, 2022
Journal: BMC Medical Education
Publisher: Springer Verlag
Peer Reviewed: Yes
Volume: 22
Article Number: 41
DOI: https://doi.org/10.1186/s12909-022-03115-1
Keywords: Assessment; Objective Structured Clinical Exams; Assessor variability; Distributed assessments; Test Equating; Many Facet Rasch Modelling; Psychometrics
Publisher URL: https://bmcmededuc.biomedcentral.com/articles/10.1186/s12909-022-03115-1
