
Research On Model Learning Based On Sparse And Low-rank Constraints

Posted on: 2017-04-19
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X Z Fang
Full Text: PDF
GTID: 1108330503469871
Subject: Computer application technology
Abstract/Summary:
Low-rank representation and sparse representation are techniques that have attracted much attention in recent years. They have been widely applied in machine learning and pattern recognition, for example in face recognition, video analysis, and image segmentation. Recent research shows that 1) low-rank and sparse constraints can boost the robustness of a model, and 2) low-rank and sparse constraints can improve a model's learning efficiency, for instance by making dimensionality reduction more efficient, extracting effective features, and improving the accuracy of subspace clustering. Many novel sparse and low-rank constraint-based models have been proposed to improve the robustness and learning efficiency of models, and they have achieved many satisfactory results. Meanwhile, building on low-rank representation and sparse representation, models that jointly integrate the two constraints have been proposed and have attracted much attention. The goal of this dissertation is to improve the robustness and learning efficiency of such models. To this end, we propose several novel low-rank and sparse constraint-based models and successfully apply them to supervised learning, semi-supervised learning, and unsupervised learning. Specifically, the contributions of the dissertation are as follows.

(1) We propose a novel low-rank and sparse embedding framework to improve the efficiency of subspace learning and the robustness of the model. The framework unifies existing subspace learning methods: conventional subspace learning methods are first rewritten as a single linear model, which is then integrated into the low-rank and sparse embedding framework, yielding several new subspace learning methods. The low-rank and sparse constraints effectively capture the global and local structures of the data, which improves the efficiency of subspace learning, and a sparse matrix is used to model noise, which makes the framework robust to different types of noise. Experiments show that the algorithms derived from the proposed framework effectively improve recognition rates in both supervised and unsupervised scenarios.

(2) We propose a non-negative sparse constraint-based graph learning model that exploits the non-negative sparse constraint to improve the efficiency of graph learning and the accuracy of semi-supervised clustering. The dissertation first analyzes a disadvantage of conventional graph-based semi-supervised learning methods: a pre-defined graph structure cannot accurately propagate label information. To address this, we propose a novel non-negative sparse graph learning model that introduces the non-negative sparse constraint into graph learning and then integrates graph learning and semi-supervised learning into a unified framework. Under this framework, the learned graph is optimal for the semi-supervised learning method, so label information can be propagated accurately. To handle novel samples, a linear regression model is incorporated into the framework. The proposed framework therefore not only improves the performance of semi-supervised learning but also classifies novel samples effectively; in other words, the "out-of-sample" problem is effectively solved.
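For concreteness, the low-rank and sparse constraints discussed above typically enter an objective of the following form; this is a minimal sketch assuming the standard low-rank representation setup with an explicit sparse noise term, not the dissertation's exact formulation:

\[
\min_{Z,E}\; \|Z\|_* + \lambda\|Z\|_1 + \gamma\|E\|_1
\quad \text{s.t.} \quad X = XZ + E,
\]

where \(X\) is the data matrix, the nuclear norm \(\|Z\|_*\) promotes a low-rank coefficient matrix that captures the global structure of the data, the \(\ell_1\) norm promotes sparse coefficients that capture local structure, \(E\) is a sparse matrix that absorbs noise, and \(\lambda, \gamma > 0\) balance the terms.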
(3) We propose a regularized label relaxation linear regression model and a sparse constraint-based regularized label relaxation linear regression model to address the problem that conventional linear regression projects all samples onto a binary label matrix in the same way, ignoring the dissimilarity among samples. The proposed models not only enlarge the distances between different classes as much as possible but also avoid overfitting. To enhance robustness, we further propose a model based on a robust ℓ2,1-norm loss function that can effectively handle different types of noise. To solve the resulting optimization problems, we propose two iterative optimization algorithms with fast convergence and higher classification accuracy.

(4) We propose a sparse constraint-based feature extraction model for extracting effective features. The model uses a simple k-nearest-neighbor graph to capture the local structure of the data, from which the similarity between reconstruction coefficients is derived. It then uses a row-sparse, i.e., ℓ2,1-norm-based, projection matrix to select the relevant features, which preserves locality and similarity well. Theoretical analysis and experimental results show that the method selects discriminative features without label information and that the selected features are stable.

(5) To improve the accuracy of semi-supervised subspace clustering, we propose a robust non-negative low-rank representation subspace clustering model. By using label information to guide the construction of the affinity graph, and by combining non-negative low-rank representation with semi-supervised subspace clustering in one model, the learned affinity graph clusters the data into their respective subspaces. To improve the robustness of the model, a sparse matrix is introduced to fit noise, so the model can effectively handle noisy data and accurately recover the original images from noisy or occluded ones.

To sum up, to improve the robustness and learning efficiency of models, the dissertation proposes five sparse and low-rank constraint-based models and successfully applies them to dimensionality reduction, feature extraction, classification, and clustering. Experiments on public databases show that introducing sparse and low-rank constraints further improves the robustness and learning efficiency of the proposed models.
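A note on the ℓ2,1-norm used in contributions (3) and (4): it sums the ℓ2 norms of a matrix's rows, so minimizing it drives whole rows to zero. As a hedged illustration (a generic formulation of this kind of robust regression, not the dissertation's exact model):

\[
\|W\|_{2,1} = \sum_{i=1}^{d}\Big(\sum_{j=1}^{c} W_{ij}^2\Big)^{1/2},
\qquad
\min_{W}\; \|XW - Y\|_{2,1} + \lambda\|W\|_{2,1},
\]

where the rows of \(X\) are samples and \(Y\) is the label matrix. The ℓ2,1 loss is robust because each sample contributes the ℓ2 norm of its residual rather than its square, downweighting outliers, while the ℓ2,1 regularizer zeros out entire rows of the projection \(W\) and thereby selects features.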
Keywords/Search Tags: low-rank representation, sparse representation, subspace clustering, feature extraction, linear regression, model learning