
Feature Extraction And Selection Based On Subspace Learning And Graph Regularization

Posted on: 2022-09-07  Degree: Master  Type: Thesis
Country: China  Candidate: M Y Lu  Full Text: PDF
GTID: 2518306605471734  Subject: Circuits and Systems
Abstract/Summary:
In recent years, in order to express increasingly rich and complex information, the characteristics of data have changed dramatically, most notably the trend toward high dimensionality. In response to the "curse of dimensionality" problems (sparse data samples and unreliable distance computation in high-dimensional settings), scholars have proposed a series of effective dimensionality reduction methods. However, these methods may ignore the geometric structure of the data, its discriminant information, and its underlying clustering structure, and thus cannot cope with complex and highly nonlinear scenarios. It is therefore worthwhile to investigate how to exploit the effective features of the original high-dimensional data for feature selection or extraction. In order to retain the information contained in the original data to the greatest extent while reducing its dimensionality, this thesis studies low-rank representation, the preservation of local structure and discriminative information, and adaptive graph regularization from the perspective of dimensionality reduction. The main research content of this thesis is divided into the following aspects:

(1) A feature extraction algorithm based on nearest-neighbor graph regularization and low-rank projection learning for subspace reconstruction (SRLRPL) is proposed. First, the algorithm introduces a reconstruction coefficient matrix and a projection matrix into the subspace learning model to reconstruct the data, reducing the information gap between the original data space and the low-dimensional subspace. It then introduces a low-rank matrix into the reconstruction term to preserve the global structure of the data. In addition, row sparsity is guaranteed by an l2,1-norm constraint on the projection matrix, which improves interpretability. Finally, a regularization term based on the neighborhood graph is constructed on the feature manifold to capture the local structure of the data.

(2) A feature selection algorithm based on local discrimination and adaptive subspace learning (LDASL) is proposed. First, the algorithm minimizes the reconstruction error of subspace learning to preserve the global reconstruction information of the data, and optimizes the similarity matrix and the projection matrix simultaneously, overcoming the shortcoming of keeping manifold-structure preservation completely separate from the feature selection process. This not only avoids introducing redundant low-dimensional embedding terms, but also learns the similarity matrix adaptively during the iterations, so that the local manifold structure of the data can be captured more accurately. Moreover, the interpretability of the learned projection is improved by an l2,1-norm. Finally, a kernelized local discriminant model is introduced to preserve the local discriminant information on the nonlinear manifold of the data, providing effective guidance for learning the projection matrix.

(3) A feature selection algorithm based on non-negative spectral feature learning and maximum-entropy graph regularization (NSMEGR) is proposed. First, the algorithm introduces a feature graph on the sparse transformation matrix in the sparse regression model to reveal the manifold information of the feature space, embedding feature selection into manifold learning. In addition, the algorithm applies maximum-entropy theory to construct the similarity matrix adaptively and directly regularizes the learned clustering indicator matrix, which embeds the geometric relationships of the data into manifold learning. Finally, the algorithm imposes both an l1-norm constraint and a non-negativity constraint on the low-dimensional embedding matrix, providing more accurate discriminative information for learning the low-dimensional embedding matrix.
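The low-rank reconstruction term in (1) is described only at a high level above. As an illustrative sketch (not the thesis's actual SRLRPL solver), the idealized best rank-r reconstruction of a data matrix, which preserves global structure in the Eckart-Young sense, can be computed with a truncated SVD:

```python
import numpy as np

def low_rank_reconstruct(X, r):
    """Best rank-r approximation of X in Frobenius norm (Eckart-Young).
    This is the idealized form of a low-rank reconstruction term that
    preserves the global structure of the data."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only the r leading singular triplets and recombine.
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Noisy, nearly rank-1 data: the rank-1 reconstruction suppresses the noise.
rng = np.random.default_rng(0)
X_clean = np.outer(rng.standard_normal(6), rng.standard_normal(4))
X_noisy = X_clean + 0.01 * rng.standard_normal(X_clean.shape)
X_hat = low_rank_reconstruct(X_noisy, 1)
print(np.linalg.norm(X_noisy - X_hat) <= np.linalg.norm(X_noisy - X_clean))
```

In SRLRPL the low-rank factor is learned jointly with the projection rather than obtained from a single SVD, but the SVD view shows why a low-rank term retains the dominant global structure.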
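Both SRLRPL and LDASL rely on an l2,1-norm on the projection matrix to induce row sparsity, so that whole rows (i.e., original features) are driven toward zero. A minimal numerical illustration (the matrix and feature count here are hypothetical, not from the thesis):

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm of W: the sum of the l2 norms of its rows.
    Penalizing this drives entire rows of W toward zero, so the
    corresponding original features are effectively discarded."""
    return np.sum(np.linalg.norm(W, axis=1))

# Toy projection matrix: rows = original features, columns = subspace dims.
W = np.array([[0.9, -0.4],
              [0.0,  0.0],   # a zeroed-out row: feature 1 is dropped
              [0.2,  0.7]])

row_scores = np.linalg.norm(W, axis=1)      # per-feature importance
selected = np.argsort(row_scores)[::-1][:2]  # keep the 2 strongest features
print(sorted(selected.tolist()))             # features with nonzero rows
```

Ranking features by the row norms of the learned projection is what makes the l2,1-constrained projection interpretable as a feature selector.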
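The maximum-entropy construction of the similarity matrix in (3) is also stated only abstractly. A common closed form (assumed here for illustration; not necessarily the thesis's exact formulation) maximizes the entropy of each row of S subject to a constraint on the expected pairwise distance, which yields softmax-style weights:

```python
import numpy as np

def max_entropy_graph(X, gamma=1.0):
    """Row-stochastic similarity matrix from a maximum-entropy view:
    maximizing the entropy of each row of S under an expected squared
    distance constraint gives s_ij proportional to exp(-d_ij^2 / gamma)."""
    # Pairwise squared Euclidean distances between rows of X.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-similarity
    S = np.exp(-d2 / gamma)
    S /= S.sum(axis=1, keepdims=True)     # normalize each row to sum to 1
    return S

X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
S = max_entropy_graph(X)
print(np.allclose(S.sum(axis=1), 1.0))   # each row is a valid weight vector
```

The temperature gamma controls how sharply the weights concentrate on the nearest neighbors; in NSMEGR the similarity matrix is additionally updated adaptively during optimization rather than fixed once.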
Keywords/Search Tags: Subspace learning, Low-rank representation, Graph regularization, Adaptive structure preservation, Kernelized local discriminant model, Non-negative spectral learning, Maximum entropy