Dhiman, P, Ma, J, Andaur Navarro, CL, Speich, B, Bullock, G, Damen, JAA, Hooft, L, Kirtley, S, Riley, RD, Van Calster, B, Moons, KGM and Collins, GS (2022) Risk of bias of prognostic models developed using machine learning: a systematic review in oncology. Diagnostic and Prognostic Research, 6 (1). 13. ISSN 2397-7523

s41512-022-00126-w.pdf - Published Version
Available under License Creative Commons Attribution.



BACKGROUND: Prognostic models are widely used in oncology to guide medical decision-making. Little is known about the risk of bias of prognostic models developed using machine learning, or about the barriers to their clinical uptake in oncology.

METHODS: We conducted a systematic review, searching the MEDLINE and EMBASE databases for oncology-related studies that developed a prognostic model using machine learning methods, published between 01/01/2019 and 05/09/2019. The primary outcome was risk of bias, judged using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). We described risk of bias overall and for each domain, separately for development and validation analyses.

RESULTS: We included 62 publications (48 development-only; 14 development with validation). Across all publications, 152 models were developed and 37 models were validated. 84% (95% CI: 77 to 89) of developed models and 51% (95% CI: 35 to 67) of validated models were at overall high risk of bias. Bias introduced in the analysis was the largest contributor to the overall risk of bias judgement for both model development and validation: 123 developed models (81%, 95% CI: 73.8 to 86.4) and 19 validated models (51%, 95% CI: 35.1 to 67.3) were at high risk of bias in the analysis domain, mostly owing to shortcomings such as insufficient sample size and reliance on split-sample internal validation.

CONCLUSIONS: The quality of machine learning-based prognostic models in oncology is poor, and most models have a high risk of bias, contraindicating their use in clinical practice. Adherence to better standards is urgently needed, with a focus on sample size estimation and analysis methods, to improve the quality of these models.
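The abstract does not state which binomial confidence-interval method was used for the reported proportions. As a rough check, a Wilson score interval (a common default for proportions) reproduces the reported intervals to within about a percentage point. A minimal sketch, assuming a 95% Wilson interval:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n (z=1.96 for 95%)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 123 of 152 developed models at high risk of bias in the analysis domain
lo, hi = wilson_ci(123, 152)
print(f"developed: {100 * 123 / 152:.0f}% (95% CI {100 * lo:.1f} to {100 * hi:.1f})")

# 19 of 37 validated models at high risk of bias in the analysis domain
lo, hi = wilson_ci(19, 37)
print(f"validated: {100 * 19 / 37:.0f}% (95% CI {100 * lo:.1f} to {100 * hi:.1f})")
```

The point estimates (81% and 51%) match the abstract exactly; the interval bounds agree to within rounding and method choice (an exact Clopper-Pearson interval, for instance, would be slightly wider).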

Item Type: Article
Additional Information: Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Uncontrolled Keywords: Prediction modelling; Machine learning; Systematic review; Risk of bias
Subjects: R Medicine > R Medicine (General)
Divisions: Faculty of Medicine and Health Sciences > School of Medicine
Related URLs:
Depositing User: Symplectic
Date Deposited: 28 Jul 2022 08:10
Last Modified: 28 Jul 2022 08:10
URI: https://eprints.keele.ac.uk/id/eprint/11156
