
Dimension jumping and auxiliary variable techniques for Markov chain Monte Carlo algorithms

Posted on: 2008-10-05    Degree: Ph.D    Type: Thesis
University: University of Colorado at Boulder    Candidate: Mao, Wenjin    Full Text: PDF
GTID: 2448390005477309    Subject: Statistics
Abstract/Summary:
Markov chain Monte Carlo (MCMC) algorithms are powerful tools for the study of complex, analytically intractable stochastic processes. In this thesis, which is made up of three distinct parts, we develop variations of perfect and non-perfect MCMC algorithms that serve to broaden the class of applications that may be approached in this way.

In the first part of this thesis, we develop a new perfect simulation algorithm based on the Metropolis-Hastings (MH) algorithm. Usually, the density that generates potential transitions of the Markov chain produced through MH must propose a next state independently of the current state in order for perfect simulation to work. Our algorithm relaxes this restriction and proves to be especially useful for simulating certain mixture distributions. We apply our algorithm to Bayesian hypothesis testing problems and show that it is equivalent to a perfect "reversible jump" scheme.

Reversible jump Markov chain Monte Carlo (RJMCMC) schemes allow one to simulate from a target density by constructing a Markov chain that travels over a space whose dimension varies over time. They are often considered to be inaccessible to all but "expert" Monte Carlo practitioners. Indeed, they form a class of complicated, though extremely valuable, algorithms that are especially useful for making inferences regarding Gaussian mixture models. Several tutorials explain the application of RJMCMC to such models; however, even these tutorials leave details to the reader that we feel are non-trivial enough to cause confusion and lead to misuse of the RJMCMC algorithm. In the second part of this thesis, we break down the application of RJMCMC to Gaussian mixture models. We then discuss how one might use this algorithm to estimate the number of components in the mixture. Finally, we offer an alternative approach, using perfect simulation, that is significantly easier to apply and gives comparable results.

In the third part of this thesis, we modify the perfect version of the independent Metropolis-Hastings algorithm in order to simplify an optimization problem required for its execution. The idea is to replace a "worst case scenario" lower bound with a more forgiving stochastic lower bound. This work was motivated by an auxiliary variable approach to the problem of simulating values from a posterior density for model parameters, proportional to a likelihood function times a prior density, in the case when the likelihood is known only up to a "constant" involving unknown parameters. Typically, the Metropolis-Hastings algorithm is quite amenable to simulating unnormalized target densities, as they appear in the algorithm only as ratios. However, in this case the posterior target density is a function of the model parameters, and we do not have the desired cancellation in ratios that allows us to ignore the unknown "constant".
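To illustrate the ratio cancellation described in the final paragraph, here is a standard sketch in generic notation (the symbols f, q, Z, and theta are illustrative choices, not the thesis' own): if the target satisfies \(\pi(\theta) \propto f(\theta)\), the Metropolis-Hastings acceptance probability for a move from \(\theta\) to \(\theta'\) proposed from \(q\) is

\[
\alpha(\theta, \theta') \;=\; \min\!\left\{ 1,\; \frac{f(\theta')\, q(\theta \mid \theta')}{f(\theta)\, q(\theta' \mid \theta)} \right\},
\]

so any normalizing constant shared by \(f(\theta)\) and \(f(\theta')\) cancels in the ratio, which is why unnormalized targets pose no difficulty. If, however, the likelihood is known only up to a factor \(Z(\theta)\) that itself depends on the parameters, the posterior is \(\pi(\theta \mid y) \propto p(y \mid \theta)\, p(\theta) / Z(\theta)\), and the ratio \(Z(\theta)/Z(\theta')\) does not cancel; this is the obstacle that motivates the auxiliary variable approach of the third part.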
Keywords/Search Tags: Chain Monte Carlo, Algorithm, Markov chain, RJMCMC