
Simulation Studies On Multiple Treatments Meta-analysis And Indirect Comparisons

Posted on: 2012-09-29    Degree: Master    Type: Thesis
Country: China    Candidate: W Q Liao    Full Text: PDF
GTID: 2214330374954168    Subject: Epidemiology and Health Statistics
Abstract/Summary:
With the spread and popularity of evidence-based medicine, more and more clinicians refer to valid, certified clinical evidence when making medical decisions for patients. Well-designed multicenter randomized controlled clinical trials, systematic reviews, and meta-analyses sit at the highest level of the evidence hierarchy in evidence-based medicine. The conventional methodology of systematic review and meta-analysis, however, assesses only two interventions for the same disease: after searching for evidence, researchers perform a quantitative statistical analysis and conclude which of the two interventions is better.

With the continuous development of the life sciences and the biopharmaceutical industry, there are usually more than two drugs or treatments for the same disease. The more common situation is that, owing to severe competition, existing first-line treatments or new drugs were compared in pre-marketing clinical trials only with placebo or with a single active drug. When practicing medicine or choosing a medication, clinicians may therefore be confused and wonder which of the existing first-line treatments is best for their patient. Because of the lack of sufficient clinical evidence, the effect sizes of many pairwise comparisons among a series of interventions cannot be estimated directly, let alone pooled in a meta-analysis. In the absence of head-to-head comparisons from randomized controlled trials (RCTs), Heiner C. Bucher, as a pioneer, innovatively proposed in 1997 the methodology of indirect comparison to estimate the effect sizes of treatments that have not been compared in RCTs. Drug combinations and different dose regimens for different patient populations are commonly seen in clinical trials; strictly speaking, both situations should be regarded as distinct interventions.
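The core of Bucher's adjusted indirect comparison can be sketched in a few lines: the A-versus-B log odds ratio is the difference of the two log odds ratios against the common comparator C, and their variances add. A minimal illustration in Python (the numeric inputs are hypothetical, chosen only to make the arithmetic visible):

```python
import math

def bucher_indirect(log_or_ac, se_ac, log_or_bc, se_bc):
    """Bucher's adjusted indirect comparison of A vs B via common comparator C.

    log OR_AB = log OR_AC - log OR_BC
    Var(log OR_AB) = Var(log OR_AC) + Var(log OR_BC)
    """
    log_or_ab = log_or_ac - log_or_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    ci = (math.exp(log_or_ab - 1.96 * se_ab),
          math.exp(log_or_ab + 1.96 * se_ab))
    return math.exp(log_or_ab), ci

# Hypothetical meta-analytic inputs: A vs C gives OR 2.0 (SE of log OR 0.20),
# B vs C gives OR 1.25 (SE of log OR 0.25).
or_ab, (lo, hi) = bucher_indirect(math.log(2.0), 0.20, math.log(1.25), 0.25)
print(or_ab)  # 2.0 / 1.25 = 1.6
```

Note that the indirect confidence interval is wider than either direct one, reflecting the summed variances; this is the price paid for borrowing strength through the common comparator.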
As mentioned above, not all pairs of interventions for the same disease have been compared in clinical trials; that is, not all pairwise comparisons have direct clinical evidence. These problems render conventional meta-analysis inapplicable. Higgins, Whitehead, Lumley, Lu, Ades, and other investigators focused on this field, improving the confidence profile method (CPM) proposed by Eddy (1990) and gradually developing multiple treatments meta-analysis (MTM). MTM aims to compare and estimate the pairwise effect sizes of a variety of treatments for the same disease simultaneously, even when sufficient evidence from direct comparisons is lacking.

The first chapter of the thesis introduces the relevant background of multiple treatments meta-analysis (MTM), including the context in which it arose and the concepts of direct and indirect comparison. It then presents the basic ideas, formulae, and reliability of indirect comparison, and discusses whether results from indirect comparison can replace those from direct comparison. In section 2, the author introduces MTM more comprehensively, covering its definition, purposes, assumptions, and derived problems. Although frequentist statisticians have worked on MTM modelling, research and publications on the Bayesian approach to MTM far outnumber those on the frequentist approach. Finally, in section 3, the author introduces meta-regression, using it together with MTM to solve a real clinical problem. Heterogeneity and bias are two important issues that cannot be avoided in any meta-analysis. An MTM often involves many different studies with different methodological qualities and different target patient populations, so confounders should be taken into account in the modelling. All of these areas are introduced or discussed in the first chapter.
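The meta-regression mentioned above regresses study effect sizes on study-level covariates with inverse-variance weights, which is how confounders such as methodological quality or patient population enter the model. A minimal fixed-effect sketch in Python (the data are invented for illustration; a real analysis would also estimate a between-study variance component):

```python
def wls_meta_regression(y, se, x):
    """Fixed-effect meta-regression: weighted least squares of effect
    sizes y (e.g. log ORs) on one study-level covariate x, with
    inverse-variance weights w = 1 / se^2."""
    w = [1.0 / s ** 2 for s in se]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar)
              for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return intercept, slope

# Invented example: three studies whose log ORs rise with a covariate.
intercept, slope = wls_meta_regression(
    y=[0.1, 0.3, 0.5], se=[0.2, 0.2, 0.2], x=[0.0, 1.0, 2.0])
print(intercept, slope)  # data lie exactly on a line: 0.1 and 0.2
```

A slope estimate meaningfully different from zero indicates that the covariate explains part of the between-study heterogeneity.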
In chapter 2, the author introduces the existing estimation methods for indirect comparison and the test of evidence consistency (coherence), which checks the agreement between the results of direct and indirect comparison when direct evidence exists. The author then carried out simulation studies on the accuracy of estimates from indirect comparison relative to those from direct comparison.

Indirect comparison borrows strength from other clinical evidence in which the uncompared interventions share a treatment, known as the "common comparator". Whether indirect comparison can accurately estimate the real effect sizes of treatments that have not been compared in clinical trials, that is, its estimation accuracy, is the question of concern; this accuracy directly influences the application and development of the method. The simulation study considered three interventions forming a closed loop (triangle), with a dichotomous outcome from a homogeneous population. The factors assumed to influence the estimates from indirect comparison were the positive rates (incidences) of the treatments (πn), the sample size of each study (n), and the number of included studies (k); parameters for these factors were set at different levels.

For the incidences of the three treatments (πn) there were two situations (parameters set by the researcher and randomly generated parameters) and 11 parameter sets in total. The researcher set 6 groups of parameters covering different circumstances. In the first three, the interventions had different incidences separated by equal intervals: a large interval (πA = 0.90, πB = 0.50, πC = 0.10), a medium interval (πA = 0.75, πB = 0.50, πC = 0.25), and a small interval (πA = 0.55, πB = 0.50, πC = 0.45).
In two further cases, two interventions shared the same incidence but differed from the third by a large interval (πA = 0.85, πB = 0.15, πC = 0.15) or a medium interval (πA = 0.60, πB = 0.40, πC = 0.40). The last case considered equal incidences for all three treatments (πA = 0.65, πB = 0.65, πC = 0.65). In addition, 5 groups of random numbers, uniformly distributed between 0 and 1, were generated with SAS to represent the incidences of the three treatments: group 1 (πA = 0.464, πB = 0.296, πC = 0.078), group 2 (πA = 0.453, πB = 0.817, πC = 0.768), group 3 (πA = 0.525, πB = 0.915, πC = 0.538), group 4 (πA = 0.070, πB = 0.998, πC = 0.253), and group 5 (πA = 0.846, πB = 0.566, πC = 0.139). Six sample sizes (n) were considered, from small to large: 30, 50, 100, 200, 500, and 1000. Five numbers of included studies (k) were considered, from small to large: 5, 10, 15, 20, and 25.

The simulation results suggested that, as the sample size and the number of included studies increase, the results stabilize. If the differences between the incidences of the interventions were within a certain range (about 0.25), the effect sizes estimated by indirect comparison differed little from those obtained by Peto's direct comparison; indirect comparison could accurately and effectively estimate the pairwise effect sizes. Because Bucher's consistency test rests on failing to reject a null hypothesis, even when the differences between incidences were between 0.4 and 0.8 the null hypothesis was still not rejected, yet the indirect estimates showed greater discrepancies from Peto's direct estimates, with differences in the estimated OR that could exceed 1. If the difference in incidences was greater than 0.8, the null hypothesis was rejected.
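The simulation machinery described above can be sketched as follows: for a given incidence pair, generate k two-arm trials of size n per arm, pool them with Peto's one-step method, and check agreement with the indirect estimate via Bucher's consistency test. The sketch below shows the direct-comparison half for the small-interval scenario (πA = 0.55, πB = 0.50); the thesis generated its random numbers in SAS, so this Python version is only an assumed reconstruction:

```python
import math
import random

def peto_table(e_t, n_t, e_c, n_c):
    """O - E and hypergeometric variance V for one 2x2 table (Peto method)."""
    N = n_t + n_c
    m1 = e_t + e_c            # total events
    m2 = N - m1               # total non-events
    o_minus_e = e_t - n_t * m1 / N
    v = n_t * n_c * m1 * m2 / (N ** 2 * (N - 1))
    return o_minus_e, v

def pooled_peto_log_or(tables):
    """Fixed-effect Peto pooling: log OR = sum(O - E) / sum(V)."""
    s_oe = sum(t[0] for t in tables)
    s_v = sum(t[1] for t in tables)
    return s_oe / s_v, 1.0 / math.sqrt(s_v)

def bucher_consistency_z(log_or_dir, se_dir, log_or_ind, se_ind):
    """Bucher's consistency test: z statistic for the difference between
    the direct and indirect log odds ratios."""
    return (log_or_dir - log_or_ind) / math.sqrt(se_dir ** 2 + se_ind ** 2)

rng = random.Random(2012)
pi_a, pi_b, n, k = 0.55, 0.50, 200, 15   # small-interval scenario from the text

studies = []
for _ in range(k):
    e_a = sum(rng.random() < pi_a for _ in range(n))   # events in arm A
    e_b = sum(rng.random() < pi_b for _ in range(n))   # events in arm B
    studies.append(peto_table(e_a, n, e_b, n))

log_or, se = pooled_peto_log_or(studies)
print(math.exp(log_or))  # close to the true OR (0.55/0.45)/(0.50/0.50) ≈ 1.22

# Consistency check on hypothetical direct/indirect estimates:
z = bucher_consistency_z(0.69, 0.20, 0.41, 0.32)
# |z| < 1.96, so the null hypothesis of consistency is not rejected here
```

As the text notes, failing to reject this null hypothesis is weak evidence of agreement: the test has low power when the direct and indirect standard errors are large.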
The estimates from direct and indirect comparison were then clearly inconsistent; the differences in OR were huge and could exceed 20.

Conclusion: In general, the differences in efficacy between drugs or interventions lie within a certain range, and extreme differences rarely occur among drugs in clinical use. Under this assumption, in the absence of direct-comparison evidence, Bucher's indirect comparison can estimate the effect sizes of treatments relatively accurately and can be used to evaluate pharmaceutical interventions.

In chapter 3, the author carried out simulation studies on the estimation performance of the Bayesian MTM model. The Bayesian MTM model often combines evidence from both direct and indirect comparisons to obtain what are called mixed treatment comparisons, and the method is widely used to solve clinical problems; the effectiveness and accuracy of its estimates is the other question of concern. The simulation assumed two-arm clinical trials with a binary outcome and five treatments. The factors considered to influence the estimation performance of the Bayesian MTM model were the incidences (positive rates) of the five treatments, the sample size of each study, and the number of included studies. Monte Carlo simulation was used to generate random numbers representing the outcomes of real clinical trials; three models with different missingness mechanisms were built, and the estimates from the two approaches, Peto's direct comparison and the Bayesian MTM model, were compared to identify differences and discrepancies.

The researcher set 4 groups of incidences (positive rates) for the treatments, 3 of which were chosen to cover different scenarios: (1) all incidences different but equally spaced at an interval of 0.05: π1 = 0.6, π2 = 0.55, π3 = 0.5, π4 = 0.45, π5 = 0.4;
(2) two incidences the same and the rest different, with an interval of 0.125: π1 = 0.775, π2 = 0.65, π3 = 0.525, π4 = 0.525, π5 = 0.4; (3) two incidences the same and the rest different, but with a narrower interval (0.025) than the previous set: π1 = 0.525, π2 = 0.5, π3 = 0.475, π4 = 0.475, π5 = 0.45; (4) the last set of parameters was randomly generated by SAS, uniformly distributed from 0 to 1: π1 = 0.126, π2 = 0.246, π3 = 0.725, π4 = 0.333, π5 = 0.517. The sample sizes of the studies were integers determined by random numbers following a uniform distribution from 50 to 300.

The three models were: (1) Model 1 (all 10 pairwise combinations of the 5 interventions): the number of included studies for each pairwise combination was k = 15. (2) Model 2 (missingness model 1): direct-comparison evidence existed for 6 pairwise combinations of interventions, with k = 12 included studies per combination (80% of the number in Model 1); three of the treatment pairs formed a closed path (loop). (3) Model 3 (missingness model 2): direct-comparison evidence existed for the same 6 pairwise combinations as in Model 2, but with k = 6 included studies per combination (40% of the number in Model 1) and no closed path (loop).

WinBUGS was used to code the Bayesian MTM model, and a Markov chain Monte Carlo (MCMC) algorithm was used to estimate the results. Based on current findings by other biostatisticians and researchers, a random-effects model (REM) is better and more appropriate than a fixed-effect model (FEM) for Bayesian MTM estimation in this simulation study, so the FEM was not used here.
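The key structural assumption behind the Bayesian MTM model above is evidence consistency: with 5 treatments, only 4 "basic" contrasts against a reference treatment are estimated, and all 10 pairwise log odds ratios follow functionally from them. A small sketch (the numeric values are hypothetical posterior means, for illustration only):

```python
from itertools import combinations

# Hypothetical basic parameters: log OR of each treatment vs reference 1.
d = {1: 0.0, 2: 0.15, 3: -0.30, 4: 0.05, 5: 0.40}

# Consistency equation: d_jk = d_1k - d_1j for every pair (j, k).
pairwise = {(j, k): d[k] - d[j] for j, k in combinations(sorted(d), 2)}

print(len(pairwise))   # 10 pairwise contrasts from only 4 basic parameters
# e.g. pairwise[(2, 3)] equals d[3] - d[2], i.e. about -0.45
```

This is why Models 2 and 3 can still estimate all 10 contrasts despite having direct evidence for only 6 pairs: the missing contrasts are functions of the basic parameters, provided the consistency assumption holds across the network.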
Because there is currently no test of evidence consistency for this setting analogous to Bucher's consistency test, the estimates and the differences between models and approaches were interpreted only with descriptive statistics rather than formal statistical inference. Comparing the results for the 4 groups of parameters under the 3 models, obtained by Peto's head-to-head comparison and by the Bayesian MTM-REM model combining direct and indirect evidence, all results suggested that the MTM-REM model is robust. Although the results from Peto's direct comparison differed slightly from those of the Bayesian MTM-REM model, the discrepancies were acceptable. Moreover, the simulation studies of Bucher's indirect comparison showed that many factors can cause discrepancies; the relatively small sample sizes and numbers of included studies in this simulation study may have been important contributors to the differences observed.

Conclusion: The researcher successfully constructed 3 models to test the estimation performance of the Bayesian MTM-REM model. The Bayesian MTM-REM model is robust and can effectively estimate the effect sizes of multiple treatments simultaneously. The 95% intervals from the Bayesian MTM model (credible intervals) were narrower than the 95% confidence intervals from Peto's direct comparison.

In chapter 4, the author summarizes and assesses the whole methodological system of multiple treatments meta-analysis and indirect comparison and discusses its future prospects. The author points out the innovations and shortcomings of the thesis and research work, and hopes that the deficiencies can be addressed in future research.
Keywords/Search Tags: Multiple treatments meta-analysis, Indirect comparison, Efficacy of estimation, Bayesian modeling, Monte Carlo simulation, Markov chain Monte Carlo (MCMC), Mixed treatment comparison (meta-analysis), Network meta-analysis