
Testing small study effects in multivariate meta-analysis.


Abstract

Small study effects occur when smaller studies show different, often larger, treatment effects than larger ones, which may threaten the validity of systematic reviews and meta-analyses. The best-known causes of small study effects include publication bias, outcome reporting bias and clinical heterogeneity. Methods to account for small study effects in univariate meta-analysis have been extensively studied. However, detecting small study effects in a multivariate meta-analysis setting remains an unexplored area of research. One complication is that different types of selection processes can be involved in the reporting of multivariate outcomes. For example, some studies may be completely unpublished while others may selectively report among multiple outcomes. In this paper, we propose a score test as an overall test of small study effects in multivariate meta-analysis. Two detailed case studies are given to demonstrate the advantage of the proposed test over various naive applications of univariate tests in practice. Through simulation studies, the proposed test is found to maintain the nominal Type I error rate with considerable power in moderate sample size settings. Finally, we also evaluate the concordance between the proposed test and the naive application of univariate tests, using 44 systematic reviews with multiple outcomes from the Cochrane Database.
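For context, the classic univariate check that naive applications typically build on is Egger's regression test, which regresses each study's standardized effect on its precision and flags funnel-plot asymmetry via a nonzero intercept. The sketch below is illustrative background only, not the multivariate score test proposed in the paper; the study effects and standard errors are hypothetical.

```python
# Minimal sketch of Egger's regression test for small study effects
# in a *univariate* meta-analysis. Illustrative only; not the authors'
# multivariate score test. Inputs are hypothetical per-study effect
# estimates and their standard errors.
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Regress standardized effects (y/s) on precisions (1/s);
    a nonzero intercept suggests funnel-plot asymmetry."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses              # standardized effects
    x = 1.0 / ses                  # precisions
    X = np.column_stack([np.ones_like(x), x])
    beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
    n, p = X.shape
    resid = z - X @ beta
    sigma2 = resid @ resid / (n - p)           # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)      # coefficient covariance
    t_stat = beta[0] / np.sqrt(cov[0, 0])      # test the intercept
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - p)
    return t_stat, p_value

# Made-up example: 8 studies where larger standard errors pair with
# larger effects, the pattern a small study effect would produce.
effects = [0.90, 0.70, 0.60, 0.50, 0.40, 0.35, 0.30, 0.28]
ses     = [0.40, 0.32, 0.28, 0.22, 0.18, 0.15, 0.12, 0.10]
t_stat, p = egger_test(effects, ses)
print(f"Egger intercept t = {t_stat:.2f}, p = {p:.3f}")
```

Applying one such univariate test per outcome, as in the naive approach the paper compares against, ignores the correlation between outcomes and the possibility that studies selectively report only some of them, which is the gap the proposed overall score test addresses.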

Acceptance Date Sep 10, 2019
Publication Date Jul 28, 2020
Journal Biometrics
Print ISSN 0006-341X
Publisher Wiley
DOI https://doi.org/10.1111/biom.13342
Keywords comparative effectiveness research, composite likelihood, outcome reporting bias, publication bias, small study effect, systematic review
Publisher URL https://doi.org/10.1111/biom.13342
