
Discriminative Learning Approach For Gaussian Mixture Modeling Of Images

Posted on: 2010-06-21
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X F Chen
Full Text: PDF
GTID: 1118360308955597
Subject: Computer application technology
Abstract/Summary:
The Gaussian Mixture Model (GMM) is a widely used modeling tool in the statistical pattern recognition community. Because of its flexibility and robustness, the GMM is increasingly exploited as a convenient modeling tool in image classification. In the past decade, the extent and potential of GMM applications have widened considerably. Fields in which GMMs have been successfully applied include document analysis and recognition, image and video retrieval, biometric identification, object detection and tracking, biomedical image analysis and recognition, intelligent transportation, and intelligent surveillance.

Learning methods for GMMs can be classified into generative learning and discriminative learning. It is well known that discriminative learning can achieve better results than generative learning for pattern recognition, and the GMM has continued to receive increasing attention over the years. This dissertation focuses on the problems of discriminative learning for GMMs of images, including the objective function of discriminative learning for Bayesian classifiers, a discriminative model selection method, and an intelligent optimization method.

This thesis proposes a novel soft-target-centered learning method for posterior pseudo-probability based Bayesian classifiers, called SoftDS-MMP for short. Two adaptive soft targets of posterior pseudo-probabilities are defined for the positive and negative samples of each class. The empirical loss of a classifier on the training data is measured against these two soft targets. By minimizing the empirical loss while maximizing the difference between the two soft targets, we obtain the optimal parameters of the posterior pseudo-probability measure functions of the classes as well as the values of the soft targets. We further use the soft targets to dynamically select the training data during the training iterations, which reduces the risk of overfitting and improves training efficiency.
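As a rough illustration of the quantities involved, the Python sketch below evaluates a diagonal-covariance GMM density and measures an empirical loss against two soft targets. The function names are hypothetical, and the squared-error form of the loss is an assumption; the abstract does not give the exact formula used in SoftDS-MMP.

```python
import numpy as np

def gmm_density(x, weights, means, variances):
    """Evaluate a diagonal-covariance GMM density at a point x."""
    x = np.asarray(x, float)
    p = 0.0
    for w, mu, var in zip(weights, means, variances):
        mu, var = np.asarray(mu, float), np.asarray(var, float)
        d = mu.size
        norm = (2.0 * np.pi) ** (-d / 2.0) / np.sqrt(np.prod(var))
        p += w * norm * np.exp(-0.5 * np.sum((x - mu) ** 2 / var))
    return p

def soft_target_loss(pseudo_probs, labels, t_pos, t_neg):
    """Empirical squared loss against two soft targets: positive samples
    (label 1) are pulled toward t_pos, negative samples toward t_neg."""
    targets = np.where(np.asarray(labels) == 1, t_pos, t_neg)
    return float(np.mean((np.asarray(pseudo_probs) - targets) ** 2))
```

In SoftDS-MMP the targets themselves are optimized jointly with the classifier parameters, rather than being fixed constants as in this sketch.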
Samples whose posterior pseudo-probabilities are distinctly larger than the corresponding soft target are temporarily removed from the training set for a certain number of iterations. Compared with hard-target based learning methods, SoftDS-MMP shows greater effectiveness, higher efficiency, and better generalization.

This thesis presents a discriminative method of GMM selection under the SoftDS-MMP discriminative learning framework. Specifically, a marginalized Soft-MMP objective function is designed and approximated with the Laplace method. Using a line search algorithm to find the maximum of the approximate marginalized Soft-MMP objective function, the optimal structure and parameters of the GMM for a class are estimated simultaneously.

This thesis also describes a hybrid method combining the Covariance Matrix Adaptation Evolution Strategy based on Cholesky factorization (Cholesky-CMA-ES) and the gradient descent algorithm for the discriminative optimization of Bayesian classifiers. In the hybrid optimization method, the gradient information of the objective function is exploited to adjust three crucial factors of Cholesky-CMA-ES, namely the weighted mean of the parent population, the covariance matrix of the search distribution, and the global step size, to improve the effectiveness and efficiency of Cholesky-CMA-ES. In the first stage, the hybrid optimization method is dominated by Cholesky-CMA-ES, which searches promising solution regions globally. The influence of the gradient information is then enhanced gradually over the training iterations to obtain better local exploitation. We also apply the proposed hybrid optimization method to SoftDS-MMP. The hybrid method combines the advantages of Cholesky-CMA-ES and the gradient descent algorithm. On the one hand, the risk of getting stuck at a local optimum is decreased by multi-point stochastic search.
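The mean-update part of such a hybrid scheme can be sketched as follows. The linear blend between the evolution-strategy recombination mean and a gradient step, the log-rank recombination weights, and the learning rate `lr` are all assumptions; the abstract does not specify how the gradient adjusts the three CMA-ES factors.

```python
import numpy as np

def hybrid_mean_update(old_mean, parents, fitnesses, grad, alpha, lr=0.1):
    """Blend the ES weighted recombination mean with a gradient step.

    alpha in [0, 1] controls the gradient influence; in the dissertation
    this influence grows gradually over the iterations (the schedule is
    not given, so alpha is left as a caller-supplied parameter here)."""
    parents = np.asarray(parents, float)
    order = np.argsort(fitnesses)             # minimization: best first
    mu = max(1, len(parents) // 2)
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w = w / w.sum()                           # standard log-rank weights
    es_mean = w @ parents[order[:mu]]         # recombine the best half
    grad_mean = np.asarray(old_mean, float) - lr * np.asarray(grad, float)
    return (1.0 - alpha) * es_mean + alpha * grad_mean
```

Setting `alpha = 0` recovers a pure CMA-ES-style mean update, while `alpha = 1` reduces to plain gradient descent, matching the coarse-to-fine behavior described above.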
On the other hand, the convergence speed is accelerated by exploiting the gradient information of the objective function in the parameter evolution.

The proposed methods have been applied to handwritten digit recognition. Under the SoftDS-MMP discriminative selection and learning framework for GMMs, the hybrid optimization method is adopted to estimate the structure and parameters of the handwritten digit classifiers. We conduct experiments on the well-known CENPARMI and MNIST databases to evaluate our classifiers. The recognition rates reached on the CENPARMI and MNIST databases are reported, and the experimental results demonstrate the effectiveness of our methods.
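Once the per-class measure functions are trained, classification itself reduces to taking the maximum over the posterior pseudo-probabilities. A minimal sketch, with hypothetical names (the trained measure functions are represented here as plain callables):

```python
def classify(x, pseudo_prob_fns):
    """Assign x to the class whose posterior pseudo-probability is largest.

    pseudo_prob_fns maps a class label (e.g. a digit 0-9) to a callable
    returning that class's posterior pseudo-probability for x, standing in
    for the GMM-based measure functions learned by SoftDS-MMP."""
    return max(pseudo_prob_fns, key=lambda label: pseudo_prob_fns[label](x))
```

For a ten-class digit recognizer, `pseudo_prob_fns` would hold one trained measure function per digit.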
Keywords/Search Tags: Discriminative Learning, Gaussian Mixture Model, Image Recognition, Evolution Strategy, Model Selection