
Hierarchical Bayes Based Adaptive Sparsity In Gaussian Mixture Model

Posted on: 2016-07-12
Degree: Master
Type: Thesis
Country: China
Candidate: B H Wang
Full Text: PDF
GTID: 2308330461476599
Subject: Software engineering

Abstract/Summary:
The Gaussian Mixture Model (GMM) has been widely used in statistics for its great flexibility, simplicity, and mature theory, and has achieved good performance in many applications. However, for high-dimensional data, e.g., many thousands of dimensions, estimating the covariance matrices of a GMM, which can involve millions of parameters, is a great challenge: the number of observations is far smaller than the number of parameters to be estimated, so the model easily overfits. Recently, benefiting from the rapid development of LASSO-induced variable selection and sparse regression models, a series of works has focused on imposing sparsity constraints on the high-dimensional parameter space and thereby discovering its intrinsic sparse structure. For the Gaussian Graphical Model (GGM), one line of methods has moved from analyzing the covariance matrix to analyzing the precision matrix; another has moved from first-order to second-order optimization problems. Among these methods, GLASSO is the most important and representative.

In this paper, to address the limited descriptive power of the GGM and the biased, hyperparameter-dependent estimates of GLASSO, we propose an effective method, named Adaptive Sparsity in Gaussian Mixture Model (ASGMM), from the perspective of structure learning. Specifically, noting that the discriminative L1 norm is equivalent to a generative Laplace prior, which can be represented by a two-layer Bayesian model, we incorporate a noninformative Jeffreys prior into the GMM to obtain an adaptive sparsity prior. The prior is imposed on the precision matrices, which encourages sparsity and greatly reduces the number of samples required. More importantly, the prior involves no hyperparameters to tune, and the estimate of the sparse precision matrices is unbiased and adapts to the observed data. The proposed method proceeds in three steps. First, we formulate an adaptive hierarchical Bayes model of the precision matrices in the GMM with a Jeffreys noninformative hyperprior.
Second, we perform a Cholesky decomposition of the precision matrices to enforce positive definiteness. Finally, we construct an appropriate Q-function for the objective of ASGMM and estimate the precision matrices and sparsity of the GMM within the expectation-maximization (EM) framework.

Experimental results on synthetic data demonstrate that ASGMM can recover the sparse structure of the precision matrices with small estimation error. On real-world data, it achieves the best clustering performance compared with several classical methods, including the standard GMM.
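The two-layer Bayesian construction mentioned above can be sketched with the standard scale-mixture identity that underlies the Bayesian Lasso; the notation here is illustrative and not taken from the thesis. A Laplace prior on an entry $w$ is a Gaussian whose variance $\tau$ carries an exponential hyperprior, and replacing that exponential with the noninformative Jeffreys hyperprior removes the rate hyperparameter $\lambda$ entirely:

```latex
\frac{\lambda}{2}\, e^{-\lambda |w|}
  \;=\; \int_{0}^{\infty} \mathcal{N}\!\left(w \mid 0, \tau\right)
        \operatorname{Exp}\!\left(\tau \,\middle|\, \tfrac{\lambda^{2}}{2}\right) \mathrm{d}\tau,
\qquad
p(\tau) \propto \frac{1}{\tau}.
```

With the Jeffreys hyperprior in place of the exponential mixing density, no hyperparameter remains to be tuned, which is consistent with the abstract's claim that the sparsity level adapts to the observed data.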
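The role of the Cholesky step can be illustrated with a minimal sketch (the function name and the 3×3 example are ours, not from the thesis): any lower-triangular factor L with a strictly positive diagonal yields a symmetric positive definite matrix L Lᵀ, so parameterizing and optimizing over L keeps the precision matrices valid without an explicit positive-definiteness constraint.

```python
import numpy as np

def precision_from_cholesky(L):
    """Build a precision matrix Lambda = L @ L.T from a lower-triangular
    factor L. If the diagonal of L is strictly positive, L is nonsingular
    and Lambda is symmetric positive definite by construction."""
    return L @ L.T

# Hypothetical 3x3 example: a random lower-triangular factor whose
# diagonal is forced to be strictly positive.
rng = np.random.default_rng(0)
L = np.tril(rng.standard_normal((3, 3)))
np.fill_diagonal(L, np.abs(np.diag(L)) + 1e-3)

Lam = precision_from_cholesky(L)
print(np.allclose(Lam, Lam.T))          # symmetric
print(np.linalg.eigvalsh(Lam).min() > 0)  # all eigenvalues positive
```

In an EM setting, the M-step would update L rather than Lambda directly, so every iterate remains a legal precision matrix.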
Keywords/Search Tags:Gaussian Mixture Model, High-dimensional parameter estimation, Hierarchical Bayes, Adaptive sparsity prior, Graphical structure learning