
Reporting Statistical Validity and Model Complexity in Machine Learning based Computational Studies



Abstract

Background: Statistical validity and model complexity are both important concepts for enhancing the understanding and correctness assessment of computational models. However, this information is often missing from publications that apply machine learning.

Aim: The aim of this study is to show the importance of providing details that indicate the statistical validity and complexity of models in publications. This is explored in the context of citation screening automation using machine learning techniques.

Method: We built 15 Support Vector Machine (SVM) models, each developed using word2vec (average word) features and data for one of 15 review topics from the Drug Evaluation Review Program (DERP) of the Agency for Healthcare Research and Quality (AHRQ).
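
The following is a minimal sketch, not the authors' code, of the modelling step described above: each document's tokens are mapped to the mean of their word2vec vectors and a linear-kernel SVM is fitted. The use of gensim and scikit-learn, and the two-document placeholder corpus standing in for the DERP data, are assumptions for illustration only.

    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.svm import SVC

    # Placeholder tokenised abstracts and include/exclude labels (not DERP data).
    docs = [["randomised", "trial", "of", "statins"],
            ["cohort", "study", "of", "beta", "blockers"]]
    labels = np.array([1, 0])  # 1 = include, 0 = exclude

    # Train word embeddings on the (placeholder) corpus.
    w2v = Word2Vec(docs, vector_size=100, min_count=1, seed=0)

    def average_vector(tokens, model, dim=100):
        # Average-word feature: mean of the word2vec vectors of in-vocabulary tokens.
        vecs = [model.wv[t] for t in tokens if t in model.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    X = np.vstack([average_vector(d, w2v) for d in docs])
    clf = SVC(kernel="linear").fit(X, labels)  # linear kernel, as reported in the paper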

Results: The word2vec features were found to be sufficiently linearly separable by the SVM, so we used the linear kernel. In 11 of the 15 models, over 80% of the negative (majority) class training data and approximately 45% of the positive class training data were retained as support vectors (SVs).

Conclusions: In this context, examining the SVs revealed that the models are overly complex relative to the ideal expectation that no more than 2%-5% (and preferably far fewer) of the training vectors become support vectors.
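
As an illustration of the complexity check discussed in the Results and Conclusions, the sketch below continues the hypothetical example above and computes the fraction of each class's training vectors that end up as support vectors. The fitted model, features, and labels are the placeholder objects from the previous sketch; the per-class figures reported in the paper come from the actual DERP models, not this toy data.

    # Fraction of each class's training vectors retained as support vectors.
    sv_labels = labels[clf.support_]  # class label of each support vector
    for cls in np.unique(labels):
        frac = np.sum(sv_labels == cls) / np.sum(labels == cls)
        print(f"class {cls}: {frac:.1%} of training vectors are support vectors")
    # The paper reports >80% for the negative class and ~45% for the positive class,
    # well above the 2%-5% (or less) expected of a model that generalises well.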

Acceptance Date Apr 21, 2017
Publication Date Jun 15, 2017
Pages 128-133
Series Title International Conference on Evaluation and Assessment in Software Engineering
Book Title EASE '17: Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering (EASE'17)
ISBN 9781450348041
DOI https://doi.org/10.1145/3084226.3084283
Keywords computer science
Publisher URL http://doi.org/10.1145/3084226.3084283
