181 Addressing continuous data for participants excluded from trial analysis: a guide for systematic reviewers
  1. S E Ebrahim1,
  2. Akl2,
  3. Mustafa1,
  4. Sun3,
  5. Walter1,
  6. Heels-Ansdell1,
  7. Alonso-Coello4,
  8. Johnston5,
  9. Guyatt1
  1. 1McMaster University, Hamilton, Canada
  2. 2American University of Beirut, Beirut, Lebanon
  3. 3Kaiser Permanente Northwest, Portland, United States of America
  4. 4CIBERESP-IIB Sant Pau, Barcelona, Spain
  5. 5The Hospital for Sick Children, Toronto, Canada


Objectives To develop a framework for handling missing participant data for continuous outcomes in systematic reviews and assess its impact on risk of bias.

Methods We conducted a consultative, iterative process. We considered sources of data that reflect real outcomes observed in participants followed up in the individual trials included in a systematic review, and developed a range of plausible imputation strategies that are progressively more stringent in challenging the robustness of the pooled estimates. We applied our approach to two example systematic reviews.

Results We used five sources of data for imputing the means for participants with missing data: [A] the best mean score among the intervention arms of included trials, [B] the best mean score among the control arms of included trials, [C] the mean score from the control arm of the same trial, [D] the worst mean score among the intervention arms of included trials, [E] the worst mean score among the control arms of included trials. To impute the standard deviation (SD), we used the median SD from the control arms of all included trials. Using these sources of data, we developed four progressively more stringent imputation strategies.
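As an illustration of the arithmetic behind these strategies, the sketch below imputes a mean for participants lost to follow-up in one arm and combines it with the observed mean, using source [C] (the same trial's control-arm mean) and the median control-arm SD. The trial numbers and function names are hypothetical, invented for this example; they are not the authors' software.

```python
from statistics import median

def combined_arm_mean(mean_followed, n_followed, n_missing, imputed_mean):
    """Weighted mean for a trial arm after assigning an imputed mean
    score to the participants with missing data."""
    n_total = n_followed + n_missing
    return (n_followed * mean_followed + n_missing * imputed_mean) / n_total

# Hypothetical control arms: (mean, SD, n followed up, n missing)
control_arms = [(12.0, 4.0, 95, 5), (10.5, 5.0, 80, 20)]

# SD imputed for missing participants: median SD across control arms.
sd_for_missing = median(sd for _, sd, _, _ in control_arms)

# Source [C]: impute the same trial's control-arm mean (12.0) for the
# 5 missing participants in a hypothetical intervention arm that had
# an observed mean of 10.0 among 95 followed-up participants.
arm_mean = combined_arm_mean(10.0, 95, 5, 12.0)
print(arm_mean, sd_for_missing)
```

The same weighted-mean calculation is repeated with sources [A], [B], [D], and [E] to form the progressively more stringent strategies; the pooled effect estimate is then recomputed from the adjusted arm means.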

In the first example review, effect estimates diminished and lost statistical significance as the strategies became more stringent, suggesting the need to rate down confidence in the estimates of effect for risk of bias. In the second review, effect estimates maintained statistical significance even under the most stringent strategy, suggesting that missing data do not undermine confidence in the results. The difference between the reviews is attributable to: [1] the size of the effect and its precision, and [2] the percentage of missing participant data.

Conclusions Our approach provides rigorous yet relatively simple quantitative guidance for judging the impact of missing participant data on risk of bias in systematic reviews of continuous outcomes.
