
Principal Component Analysis Methods Based On Sparse And Low-rank Constraints

Posted on: 2018-08-15
Degree: Doctor
Type: Dissertation
Country: China
Candidate: S Y Yi
Full Text: PDF
GTID: 1368330566498310
Subject: Computer application technology
Abstract/Summary:
Subspace learning is a basic and important research issue that is widely applied in computer vision, image processing, machine learning, and pattern recognition, for tasks such as face recognition, object tracking, and reconstruction. Many subspace learning methods have been proposed over the past few decades. In this thesis, we survey the representative methods and divide them into two families: mapping-based subspace learning and representation-based subspace learning. Mapping-based methods focus on dimensionality reduction, recognition, and robust reconstruction for a single subspace, and are often interpreted within the graph-embedding framework. Representation-based methods focus on robust clustering and reconstruction for multiple subspaces, and typically involve sparse representation and low-rank representation. Building on principal component analysis (PCA), we propose new PCA methods centered on the goal of extracting principal components. We study PCA from four aspects, namely the interpretability of principal components, the robustness of principal components, the discriminability of principal components, and the globally optimal solution for principal components, and obtain the following results.

First, we propose joint sparse principal component analysis with pixel weights. We analyze the differences between traditional subspace learning methods and sparse subspace learning methods, relate graph embedding to sparse graph embedding, and generalize the self-contained regression type of graph embedding. Inspired by this self-contained regression, the proposed method uses the l2,1-norm to introduce consistent feature selection, so that the selected features clearly reveal the physical meaning of the principal components. Experimental results show that the proposed method selects the most representative features for interpreting the principal components.

Second, we propose robust joint sparse principal component analysis. Aligned images usually share a consistent sparsity pattern, but noise breaks this property. Inspired by robust PCA, the proposed method decomposes an original image into a reconstructed image and a noise image, imposing the l2,1-norm on both the error term and the regularization term. Experimental results show that the method separates outliers from the original images and recovers the intrinsic consistency of the features.

Third, we propose low-rank principal component analysis with locality preserving. Representation-based subspace learning methods generally consider only the global structure of the data and ignore its local structure. We therefore explore the relationship between mapping-based and representation-based subspace learning, introduce the locality-preserving idea into the dictionary construction of low-rank representation, and propose a two-sided reconstruction method that retains the advantages of both locality preserving projections and low-rank representation, so that an original image is decomposed into a two-sided reconstructed image and a noise image. Experimental results show that the method produces more discriminative reconstructions than one-sided reconstruction methods.

Fourth, we propose joint sparse principal component analysis with sample weights. The robust joint sparse PCA above suffers from the mean-calculation problem, so we introduce an optimal-mean variable into the model and obtain a more general formulation. Although this model has no globally optimal solution in its original form, it can fortunately be converted into an equivalent convex model, from which a globally optimal solution is obtained. Experimental results show that the method selects effective features for robust reconstruction and unsupervised clustering.

In summary, the proposed methods fall into two groups: one introduces feature selection into PCA by means of the self-contained regression type, so that the resulting methods interpret the principal components well, are robust to outliers, and represent the original data well; the other integrates the locality-preserving idea into low-rank principal components, so that the resulting method obtains discriminative reconstructions for manifold data.
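The core idea shared by the first, second, and fourth methods above is the self-contained regression-type reformulation of PCA with an l2,1-norm penalty, whose row-wise sparsity performs joint feature selection. The following is a minimal sketch of that idea, not the thesis's actual algorithms: it regresses ordinary PCA scores back onto the data and solves the l2,1-penalized regression by iteratively reweighted least squares. All variable names, the synthetic data, and the IRLS solver are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples, 10 features; only the first 3 features
# carry the low-dimensional signal, the rest are pure noise.
n, d, k = 200, 10, 2
Z = rng.normal(size=(n, k))
B = np.zeros((k, d))
B[:, :3] = 3.0 * rng.normal(size=(k, 3))
X = Z @ B + 0.1 * rng.normal(size=(n, d))
X -= X.mean(axis=0)  # classical mean-centering (the thesis's optimal-mean variant is not modeled here)

# Step 1: ordinary PCA scores via SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Y = X @ Vt[:k].T  # n x k principal-component scores

# Step 2: regression-type reformulation with an l2,1 penalty on W,
#   min_W ||Y - X W||_F^2 + lam * sum_i ||w_i||_2,
# solved by iteratively reweighted least squares (IRLS): each pass
# ridge-penalizes row i with weight 1 / (2 ||w_i|| + eps), which
# drives uninformative rows of W jointly toward zero.
lam, eps = 5.0, 1e-8
W = np.linalg.lstsq(X, Y, rcond=None)[0]  # warm start
for _ in range(100):
    row_norms = np.linalg.norm(W, axis=1)
    D = np.diag(1.0 / (2.0 * row_norms + eps))
    W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)

# Rows of W with (near-)zero norm correspond to discarded features;
# the surviving rows jointly select the features that "interpret"
# the principal components.
selected = np.linalg.norm(W, axis=1) > 1e-3
print(selected)
```

On this toy data the noise-only features (indices 3-9) are driven out of the model, while signal-carrying features survive, illustrating the consistent feature selection that the l2,1-norm provides.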
Keywords/Search Tags:principal component, feature selection, robust reconstruction, self-contained regression, sparse representation, low-rank representation, mean calculation, global optimization, unsupervised clustering