
Research On Manifold Regression And Nonnegative Matrix Factorization In Image Classification

Posted on: 2016-05-20
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y W Lu
Full Text: PDF
GTID: 1108330503969839
Subject: Computer Science and Technology
Abstract/Summary:
In recent years, image classification has been applied in many fields, such as information security and anti-terrorism. As is well known, feature extraction plays a particularly important role in data classification, especially in pattern recognition. Feature extraction not only reduces data complexity but also effectively improves classification accuracy. However, different data often require different feature extraction methods. How to extract effective features from data is one of the key problems in image processing and pattern recognition, and it is also the focus of this dissertation. In this dissertation, we propose several new methods for image classification. The main research work is summarized as follows.

(1) We propose a new manifold regression learning framework in which two algorithms for feature extraction and classification of images under different conditions are presented: manifold discriminant regression learning (MDRL) and robust manifold discriminant regression learning (RMDRL). MDRL introduces a within-class graph and a between-class graph into the regression model, deriving a projection that maps data points from the same class into a compact new space while keeping data points from different classes as far apart as possible. By using the nuclear norm as the metric on the regularization term, RMDRL learns a low-rank transformation matrix that effectively improves the robustness of discriminative least squares regression (DLSR), so it can be used for robust image classification.

(2) We propose a novel discriminant nonnegative matrix factorization method: nonnegative discriminant matrix factorization (NDMF). NDMF projects the low-dimensional representation onto the subspace of the base matrix to regularize NMF for discriminant subspace learning, so the information of the base matrix and the coefficient matrix in NMF can be combined. The discriminant and localized properties, as well as the orthogonality of the base matrix, are fully taken into consideration and incorporated into one model. That is, NDMF combines the nonnegativity constraint, orthogonality, and discriminant information in the objective function, in which both the base matrix and the coefficient matrix are combined to construct the regularization term. Two algorithms are given, based on the Euclidean distance measure and the Kullback-Leibler (KL) divergence, respectively.

(3) Traditional NMF methods are sensitive to outliers, and their classification performance needs improvement. To improve the robustness of NMF, a novel framework named projective robust nonnegative factorization (PRNF) is proposed for robust image feature extraction and recognition. A general robust NMF model is first proposed to unify the existing robust NMF algorithms. PRNF not only effectively weakens the influence of noise when learning the optimal projection for feature extraction, but also keeps the geometrical structure of the original data for a better parts-based representation. Three concrete algorithms that take different norms as sparsity constraints on the noise are proposed under the PRNF framework, based on the ℓ1, ℓ1/2, and ℓ2,1 norms, respectively. We also analyze their update rules and prove the convergence of the three algorithms.

(4) We propose kernel linear regression classification (KLRC) by effectively integrating the kernel trick and linear regression classification (LRC). KLRC implicitly maps the data into a high-dimensional kernel space via a nonlinear mapping associated with the kernel function, so it can make the data more separable. KLRC is a nonlinear extension of LRC and can offset the drawbacks of LRC.

(5) To effectively improve the robustness of preserving-projection-based methods, we propose using locality preserving projections together with the sparsity and low-rankness of high-dimensional data to build an informative graph. We propose a novel dimensionality reduction method named low rank preserving projections (LRPP) for image classification. First, we assume that the data are grossly corrupted and the noise matrix is sparse; the ℓ2,1 norm and the nuclear norm are added as regularization constraints on the noise matrix. Then a low-rank weight matrix, which projects the data onto a low-dimensional subspace, is learned by LRPP. LRPP keeps the global structure of the data during the dimensionality reduction procedure, and the learned low-rank weight matrix reduces the disturbance of noise in the data.

In summary, to improve classification performance or robustness for different applications, this dissertation puts forward the corresponding feature extraction algorithms. Numerous experiments show that the proposed methods achieve desirable performance and have good application prospects.
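The nuclear-norm regularization used in (1) and (5) is typically handled with singular value thresholding, the proximal operator of the nuclear norm. The sketch below shows only this standard building block, not the dissertation's full RMDRL or LRPP optimization; the function name is illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.

    This is the standard step for nuclear-norm-regularized problems such as
    learning a low-rank transformation (RMDRL) or weight matrix (LRPP).
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)  # shrink singular values toward zero
    return (U * s) @ Vt           # rebuild the (now lower-rank) matrix
```

Shrinking the singular values is what encourages low rank: any singular value below tau is zeroed out, so the returned matrix has reduced rank.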
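The NMF variants in (2) and (3) extend the classical factorization V ≈ WH with discriminant and robustness terms. As background, here is a minimal sketch of the standard Lee-Seung multiplicative updates for the Euclidean objective ||V - WH||_F^2; the discriminant and robust regularizers described above are not included.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=300, eps=1e-10, seed=0):
    """Plain NMF via multiplicative updates minimizing ||V - W H||_F^2.

    Baseline only: NDMF/PRNF add extra regularization terms on top of this.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H); eps avoids division by zero
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates are multiplicative and all factors start positive, W and H remain nonnegative throughout, which is what yields the parts-based representation mentioned above.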
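The KLRC idea in (4) can be sketched as follows: LRC regresses a test sample onto each class's training samples and picks the class with the smallest residual; the kernel version does the same regression in the kernel-induced feature space, where the residual can be computed purely from kernel evaluations. This is a simplified illustration with an RBF kernel and a small ridge term for numerical stability, not the dissertation's exact formulation.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Pairwise RBF kernel between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def klrc_predict(X_train, y_train, X_test, gamma=1.0, ridge=1e-6):
    """Kernel linear regression classification (sketch): for each class,
    solve the regression in feature space and assign the test sample to
    the class with the smallest reconstruction residual."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        x = x[None, :]
        best_c, best_r = None, np.inf
        for c in classes:
            Xc = X_train[y_train == c]
            Kcc = rbf(Xc, Xc, gamma)
            kc = rbf(Xc, x, gamma).ravel()
            # Regression coefficients in feature space (ridge-stabilized)
            beta = np.linalg.solve(Kcc + ridge * np.eye(len(Xc)), kc)
            # Feature-space residual ||phi(x) - Phi_c beta||^2,
            # expanded entirely in terms of kernel values
            r = rbf(x, x, gamma)[0, 0] - 2 * kc @ beta + beta @ Kcc @ beta
            if r < best_r:
                best_c, best_r = c, r
        preds.append(best_c)
    return np.array(preds)
```

The key point is that the feature map never appears explicitly: the residual expands into k(x,x) - 2 k_c^T beta + beta^T K_cc beta, so only kernel evaluations are needed.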
Keywords/Search Tags:image classification, machine learning, face recognition, nonnegative matrix factorization, regression analysis, locality preserving projections