
Comparative Analysis of the MCMC Method Based on Slice Sampling

Posted on: 2008-12-28
Degree: Master
Type: Thesis
Country: China
Candidate: X N Chen
Full Text: PDF
GTID: 2208360242463956
Subject: Financial mathematics and econometrics
Abstract/Summary:
In this paper, we first introduce the Monte Carlo (MC) method and the common Markov chain Monte Carlo (MCMC) methods. Motivated by the deficiencies of these methods, we then introduce the slice sampling method. To show that slice sampling is the better sampler, we compare the two sampling methods in a simulation study; the results show that the slice sampler reduces the computational cost considerably. At the end of the paper, we discuss how the slice sampler can be extended and the future direction of simulation methods.

With the development of Bayesian methods and advances in computer hardware, Markov chain Monte Carlo (MCMC) methods have emerged. MCMC is now a popular and powerful tool for high-dimensional numerical problems in statistical inference, including the evaluation of the high-dimensional integrals required by complex likelihood functions. MCMC algorithms are set in the framework of Bayesian inference, where all model parameters are treated as random variables following pre-specified prior distributions.

Bayesian inference is based on the posterior densities of the parameters, namely, for the j-th parameter θ_j, the conditional density π(θ_j | θ_{-j}, y). Consider, as an example, a Bayesian formulation of a hierarchical model with fixed effects β and random effects b_i with covariance matrix D. The posterior distribution of the parameters θ given the data y is

f(θ | y) = f(θ, y) / f(y) ∝ f(y | θ) π(θ).

The normalizing constant f(y) is independent of the parameter θ, so estimators of θ such as the posterior mode or posterior mean can be derived from f(θ, y) alone. If the prior for θ, π(θ), is a constant (a so-called flat or noninformative prior), then the posterior above is effectively proportional to the likelihood function, and the posterior mode is numerically identical to the maximum likelihood estimate. For this reason, flat or noninformative priors for β and D (as opposed to conjugate priors) are usually preferred.

For the model above, it is very difficult to calculate the integrals of the posterior distribution analytically. To solve this computational problem, we turn to the random simulation methods in Section 2 of this paper. Among them, the most popular MCMC method, Gibbs sampling, solves the integration problem (a minimal sketch of a Gibbs sampler follows this abstract). However, further research has revealed a number of shortcomings of Gibbs sampling. To overcome them, in Section 3 we introduce the slice sampler, which originates with the observation that one can sample from a univariate distribution by sampling points uniformly from the region under the curve of its density function and then looking only at the horizontal coordinates of the sample points. To sample from a multivariate distribution, such single-variable slice sampling updates can be applied to each variable in turn. The procedure is as follows:

1. Specify an auxiliary variable u and its conditional distribution π(u | x).
2. Form the joint distribution π(x, u) = π(x) π(u | x).
3. Define transition kernels P_x and P_u such that both kernels maintain π(x, u) as their invariant distribution.
4. Generate realizations via the systematic-scan transition kernel P_x P_u.

This approach is often easier to implement than Gibbs sampling and more efficient than simple Metropolis updates, because slice sampling adaptively chooses the magnitude of the changes it makes (a sketch of this single-variable update also follows this abstract).

Through the simulation study in Section 4, we conclude that the slice sampler is more effective than the Gibbs sampler. In the last section of the paper, we discuss the significance of the slice sampler and the prospects for its improvement.
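As a concrete illustration of the full-conditional updates that Gibbs sampling relies on, the following minimal Python sketch (not from the thesis, which uses a hierarchical model) runs a Gibbs sampler on a toy bivariate normal target with unit variances and correlation rho, a case where both full conditionals are available in closed form; the function name and settings are illustrative.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, rng=None):
    """Gibbs sampler for a bivariate normal target with unit variances
    and correlation rho.  Each full conditional is itself normal:
    x1 | x2 ~ N(rho * x2, 1 - rho**2), and symmetrically for x2."""
    rng = np.random.default_rng() if rng is None else rng
    x1 = x2 = 0.0
    chain = np.empty((n_samples, 2))
    for i in range(n_samples):
        # Update each coordinate in turn from its full conditional
        # (np.random normal takes a standard deviation, hence the sqrt).
        x1 = rng.normal(rho * x2, np.sqrt(1.0 - rho**2))
        x2 = rng.normal(rho * x1, np.sqrt(1.0 - rho**2))
        chain[i] = x1, x2
    return chain

# The empirical correlation of the chain should approach rho.
chain = gibbs_bivariate_normal(rho=0.9, n_samples=5_000)
print(np.corrcoef(chain.T))
```

Gibbs sampling requires these full conditionals in closed form; when they are not available, the slice update sketched next needs only pointwise evaluation of an unnormalized density.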
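The single-variable slice update described in steps 1–4 above can be sketched as follows. This is a minimal implementation of the stepping-out and shrinkage procedure of Neal (2003), not code from the thesis; the names `logf`, `w`, and `max_steps` are illustrative choices.

```python
import numpy as np

def slice_sample(logf, x0, n_samples, w=1.0, max_steps=50, rng=None):
    """Single-variable slice sampler with stepping-out and shrinkage.
    `logf` is the log of an unnormalized target density."""
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        # 1. Draw the auxiliary height u ~ Uniform(0, f(x)), handled on
        #    the log scale for numerical stability.
        log_u = logf(x) + np.log(rng.uniform())
        # 2. Stepping out: randomly position an interval of width w
        #    around x, then widen each end until it leaves the slice
        #    (capped at max_steps to guarantee termination).
        left = x - w * rng.uniform()
        right = left + w
        for _ in range(max_steps):
            if logf(left) < log_u:
                break
            left -= w
        for _ in range(max_steps):
            if logf(right) < log_u:
                break
            right += w
        # 3. Shrinkage: sample uniformly from [left, right], shrinking
        #    the interval toward x whenever a proposal falls off the slice.
        while True:
            x_new = rng.uniform(left, right)
            if logf(x_new) >= log_u:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples[i] = x
    return samples

# Example: sample from an unnormalized standard normal density.
draws = slice_sample(lambda x: -0.5 * x**2, x0=0.0, n_samples=10_000)
print(draws.mean(), draws.std())  # should be close to 0 and 1
```

Because the sampled interval adapts to the local scale of the density, the same code works without hand-tuned proposal scales or closed-form conditionals, which is the adaptive behaviour the abstract credits for the slice sampler's efficiency.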
Keywords/Search Tags: Bayesian Inference, Slice Sampling, MCMC, Gibbs Sampling