
Statistical analysis of longitudinal randomized clinical trials with missing data: a comparison of approaches

Joseph, Royes

Authors

Royes Joseph



Abstract

Objectives
Missing data represent a source of bias in randomized clinical trials (RCTs). This thesis focuses on pragmatic RCTs with missing continuous outcome data and evaluates the use and appropriateness of current methods of analysis.

Methods
This thesis consists of three parts. First, a systematic review examined practices relating to missing data in published RCTs. Second, a simulation study compared the performance of various methods for handling missing data across a number of plausible trial scenarios. Finally, an empirical evaluation of two pragmatic RCTs investigated the use of a reminder process to assess whether missingness is likely to be non-ignorable.

Results
The majority of the 91 trials in the systematic review handled missing data with a form of single imputation, such as last observation carried forward (LOCF). The use of mixed-effects models for repeated measures (MMRM) and/or multiple imputation (MI) was limited to eight trials. Sensitivity analyses were used infrequently and inappropriately, and were insufficiently reported.

In the simulation study, LOCF yielded biased estimates of the treatment effect in most scenarios, irrespective of the missing data mechanism. All methods except LOCF yielded unbiased estimates in scenarios with equal dropout rates and the same direction of dropout in both treatment groups. MMRM and MI were more robust to bias than complete-case analysis (CCA) and LOCF-based analyses.
In the empirical study, the evaluation using reminder responses indicated the possibility of biased MMRM estimation in one trial and unbiased MMRM estimation in the other.
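To make the contrast between LOCF and a likelihood-based analysis concrete, the sketch below is a minimal illustration (not code from the thesis), assuming a long-format data set with hypothetical columns subject, group, visit and y; a simple random-intercept model is used here as a stand-in for a full MMRM with unstructured covariance.

```python
# Illustrative sketch only, not taken from the thesis. Assumes a long-format
# pandas DataFrame with hypothetical columns: subject, group, visit, y.
import pandas as pd
import statsmodels.formula.api as smf

def locf_endpoint_analysis(df):
    """Single imputation by LOCF, followed by an endpoint comparison of groups."""
    df = df.sort_values(["subject", "visit"]).copy()
    # Carry each subject's last observed outcome forward over missing visits.
    df["y_locf"] = df.groupby("subject")["y"].ffill()
    last_visit = df.groupby("subject").tail(1)
    return smf.ols("y_locf ~ group", data=last_visit).fit()

def likelihood_based_analysis(df):
    """Longitudinal model fitted to all observed data, valid under MAR.
    A random-intercept model stands in for an MMRM with an unstructured
    covariance matrix."""
    observed = df.dropna(subset=["y"])
    model = smf.mixedlm("y ~ group * C(visit)", data=observed,
                        groups=observed["subject"])
    return model.fit()
```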

Conclusion
CCA and LOCF-based analyses should be disregarded in favour of methods such as MMRM and MI-based analysis. The proposed reminder approach can be used to assess the robustness of the missing at random (MAR) assumption by checking the expected consistency of MAR-based estimates. If the results deviate, analyses incorporating a range of plausible missing not at random (MNAR) assumptions are advisable, at least as sensitivity analyses for the evaluation of the treatment effect.
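One common way to carry out such an MNAR sensitivity analysis is a delta adjustment; the sketch below is a hedged illustration of that idea (not code from the thesis), in which MAR-based imputations in the treatment arm are shifted by a range of offsets and the treatment effect is re-estimated for each offset. The function impute_under_mar and the column names are hypothetical.

```python
# Illustrative delta-adjustment sketch, not taken from the thesis.
# `impute_under_mar` is a hypothetical function that returns a completed copy
# of the data; columns subject, group, visit, y are also hypothetical.
import statsmodels.formula.api as smf

def delta_sensitivity(df, impute_under_mar, deltas=(0.0, -2.5, -5.0)):
    """Shift imputed outcomes in the 'treatment' arm by each delta and
    re-estimate the group effect at the final visit. Estimates that are
    stable across deltas suggest robustness to this class of MNAR departure."""
    originally_missing = df["y"].isna()
    estimates = {}
    for delta in deltas:
        completed = impute_under_mar(df)  # MAR-based completed data set
        shift = originally_missing & (completed["group"] == "treatment")
        completed.loc[shift, "y"] = completed.loc[shift, "y"] + delta
        final = completed[completed["visit"] == completed["visit"].max()]
        fit = smf.ols("y ~ group", data=final).fit()
        estimates[delta] = fit.params.filter(like="group")
    return estimates
```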

Publicly Available Date: Mar 28, 2024
