
Sparse Representation Learning Algorithms And Their Applications

Posted on: 2021-02-07
Degree: Master
Type: Thesis
Country: China
Candidate: W T Hu
Full Text: PDF
GTID: 2428330611973201
Subject: Software engineering
Abstract/Summary:
In recent years, with the rapid development of CPUs, computer memory, and other hardware, the fields of pattern recognition, image processing, and computer vision have had to cope with higher data dimensions and longer processing times. Sparse representation has attracted many researchers because of its advantages in handling very large data; moreover, samples processed by sparse representation are strongly robust to noise, which plays a key role in improving subsequent classification accuracy. In this thesis, sparse representation is applied to image-processing algorithms such as dimensionality reduction and feature extraction, and the effectiveness of the proposed algorithms is verified by extensive experiments. The main contributions are as follows:

(1) Feature extraction algorithms that use only the local structure or only the global structure of the data cannot obtain a globally optimal projection matrix, and the learned projection matrix is not well interpretable. This thesis proposes a low-rank projection learning algorithm based on a neighborhood graph. The algorithm imposes a graph constraint on the reconstruction error of the data to preserve its local structure and introduces a low-rank term to preserve its global structure; the row-sparsity property of the L2,1 norm is used to constrain the projection matrix, so that redundant features are eliminated and the interpretability of the projection matrix is improved; a sparse noise term is also introduced to weaken the interference of noise in the samples. The model is solved by an alternating iteration method, and experimental results on multiple datasets show that the algorithm effectively improves classification accuracy.

(2) As a classical dimensionality-reduction algorithm, locally linear embedding has been widely applied in image processing, but it ignores the differences between samples, which limits the performance of the model. This thesis therefore proposes a robust sparse embedding algorithm based on self-paced learning. The algorithm first centers the neighborhood of each data sample to preserve the neighborhood structure of the data; it then introduces self-paced learning, so that, guided by a self-paced regularization function, the model first fits the samples with small differences and only later the samples with large differences; in addition, the L2,1 norm is used to constrain the projection matrix, which makes the algorithm strongly robust to noise and outliers. The algorithm is verified on multiple datasets.

(3) Traditional subspace learning consists of two stages, projection learning and classification; because the two stages are separate and the algorithms are sensitive to outliers, a globally optimal solution may not be obtained. To address these problems, a robust sparse subspace learning method based on locality preserving projections is proposed. The method combines feature learning with the classification model so that the learned subspace features are more discriminative; the row-sparsity property of the L2,1 norm is used to eliminate redundant features, and the local relationships between data samples are taken into account, which improves robustness to outliers. Finally, the model is solved by an iterative method, and experiments on different datasets achieve good recognition results.
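The neighborhood-graph and locality-preserving terms in (1) and (3) are typically built from a k-nearest-neighbour affinity matrix and its graph Laplacian. The following is a minimal sketch of the standard heat-kernel construction, assumed here purely for illustration; the function name and parameters are ours and the thesis's exact graph may differ.

import numpy as np

def knn_heat_kernel_graph(X, k=5, sigma=1.0):
    # X: n x d matrix, one sample per row.
    # W holds heat-kernel weights between each sample and its k nearest
    # neighbours; L = D - W is the graph Laplacian that penalises
    # projections which separate neighbouring samples.
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                  # skip the sample itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)                                 # symmetrise
    L = np.diag(W.sum(axis=1)) - W
    return W, L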
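For the self-paced learning scheme in (2), a common choice is the hard self-paced regulariser: a sample enters the objective only when its current loss is below an "age" threshold that grows as training proceeds, so easy samples (small sample difference) are fitted before hard ones. The sketch below assumes this standard rule; the names, placeholder losses, and schedule are illustrative rather than taken from the thesis.

import numpy as np

def self_paced_weights(losses, age):
    # Hard self-paced regulariser: weight 1 for samples whose loss is
    # below the age parameter, weight 0 for the rest.
    return (losses < age).astype(float)

# Illustrative training loop: the age parameter grows each round, so
# harder samples gradually enter the objective.
losses = np.random.rand(100)        # placeholder per-sample losses
age, growth = 0.2, 1.3
for _ in range(5):
    v = self_paced_weights(losses, age)
    # ... update the projection matrix using only samples with v == 1 ...
    age *= growth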
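The L2,1-norm row-sparsity constraint used in all three contributions, together with the alternating iteration that solves the models in (1) and (3), usually relies on a diagonal reweighting matrix. The sketch below shows only this generic pattern, not the thesis's exact update.

import numpy as np

def l21_norm(P):
    # Sum of the l2 norms of the rows of P; driving whole rows to zero
    # discards the corresponding (redundant) features.
    return np.linalg.norm(P, axis=1).sum()

def row_reweighting_matrix(P, eps=1e-8):
    # Standard device for L2,1-regularised objectives solved by
    # alternating iteration: D_ii = 1 / (2 * ||p_i||_2), so that
    # tr(P^T D P) stands in for ||P||_{2,1} at each iteration.
    row_norms = np.linalg.norm(P, axis=1)
    return np.diag(1.0 / (2.0 * np.maximum(row_norms, eps)))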
Keywords/Search Tags: sparse representation, image recognition, subspace learning, feature extraction, self-paced learning