Meta-analysis is a statistical method which synthesizes results from a set of individual studies to estimate an overall effect. If the studies for a meta-analysis are chosen through a literature review, an inherent selection bias may arise, since for example, studies may tend to be published more readily if they are statistically significant, or deemed to be more ‘interesting’ in terms of the impact of their outcomes. This phenomenon, known as ‘publication bias’, may distort the results of a meta-analysis due to the use of a non-representative set of significant results.; We develop a computationally straightforward rank-based data augmentation technique, formalizing the use of funnel plots, to estimate and adjust for the numbers and outcomes of missing studies. Several non-parametric estimators are proposed for the number of missing studies, and their properties are developed analytically and through simulations. We apply the method to a number of existing meta-analyses, including epidemiological and psychometric datasets, and to a subset of the Cochrane Database of Systematic Reviews. We illustrate how appropriately bias-adjusted estimates play a role in sensitivity analyses in calculating attributable risks.; Even in the situation where the true underlying effect size is unknown, our new method performs much better than those which currently exist. In simulations, the true number of missing values is well-estimated, and after adjusting for these missing studies using a ‘Trim and Fill’ algorithm, we find that the point estimate of the overall effect size is approximately correct and coverage of the effect size confidence intervals is substantially improved, in many cases recovering the true confidence levels entirely. We show that the ‘Trim and Fill’ method is both effective and consistent with other criteria in the literature.; The risk of adopting ineffective and even harmful medical practices is greater in the presence of bias-affected conclusions from meta-analyses, and journals which influence medical practice cannot afford to ignore this problem. This method should encourage researchers to routinely check whether the conclusions drawn from systematic reviews are robust to possible non-random selection mechanisms. |