
Research On Multi-Layer Dictionary Learning And Discriminative Sparse Representation

Posted on: 2023-07-17
Degree: Master
Type: Thesis
Country: China
Candidate: R R Xu
Full Text: PDF
GTID: 2568306818495244
Subject: Computer Science and Technology
Abstract/Summary:
Image classification distinguishes different categories of images according to their semantic information. It is a basic research task in computer vision and pattern recognition and plays a critical role in other high-level tasks. Among the many approaches to image classification, sparse representation theory and dictionary learning are widely regarded as important tools in signal processing and machine learning. Sparse coding is a representation learning method that aims to express the input data as sparse linear combinations of basic elements, called atoms, which together make up a dictionary. In recent years, classification algorithms based on sparse representation have been studied extensively, but in the face of the many challenges of real-world image classification tasks, how to efficiently obtain discriminative data representations remains a topic worthy of attention. Starting from sparse representation and dictionary learning, this thesis analyzes the shortcomings of existing image classification models and designs new ones: discriminative representation terms are added during dictionary learning to constrain the data between classes; the single-layer dictionary learning algorithm is extended to a multi-layer dictionary learning model that learns cascaded features at different levels; and sparse representation is combined with deep learning, using both linear and nonlinear mappings for representation-based classification. The main work of this thesis is summarized as follows:

(1) A classification learning algorithm based on discriminative non-negative representation is presented. Non-negativity constraints on the coding coefficients strengthen the contribution of same-class samples to the reconstruction, reduce the contribution of other-class samples, and enhance the distinguishability of the representation coefficients. On this basis, a regularization term carrying discriminative information is introduced to reduce the correlation between categories. Moreover, because the method uses ℓ2-norm optimization, a closed-form solution can be obtained by differentiating the objective function, which speeds up classification. Experimental results on public datasets show that the proposed scheme is feasible and effective.
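To make the coding step in (1) concrete, the sketch below (Python/NumPy) implements closed-form ℓ2-regularized coding followed by class-wise residual classification, in the spirit of collaborative-representation classifiers. The non-negativity constraint and the discriminative between-class regularizer described above are additional ingredients of the thesis model that are not reproduced here; all function and variable names are illustrative.

```python
import numpy as np

def l2_coding_classify(D, labels, y, lam=1e-2):
    """Classify a test sample y by an l2-regularized (ridge) coding step.

    D      : (d, n) matrix whose columns are training samples (the dictionary)
    labels : (n,) class label of each column of D
    y      : (d,) test sample
    lam    : ridge regularization weight
    """
    n = D.shape[1]
    # Closed-form ridge solution: x = (D^T D + lam*I)^{-1} D^T y
    x = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)

    # Assign the class whose samples best reconstruct y.
    best_cls, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - D[:, mask] @ x[mask])
        if residual < best_res:
            best_cls, best_res = c, residual
    return best_cls
```

In the full thesis model the coefficients would additionally be constrained to be non-negative and penalized for cross-class correlation, which changes the coding update but not the classify-by-smallest-residual decision rule.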
(2) A unified and effective multi-layer dictionary learning model is introduced. Experiments show that multi-layer dictionary learning can learn more hidden features than single-layer dictionary learning. The residuals obtained in the dictionary learning stage are fully exploited: the concatenation of the residual representation features with the original data serves as the input to the next layer of dictionary learning. Two multi-layer models are constructed by embedding two classical dictionary learning algorithms: 1) LC-KSVD, a representative synthesis dictionary learning algorithm; 2) LC-PDL, a representative analysis dictionary learning algorithm. A soft voting strategy fuses the class probabilities output by each layer's classifier into an integrated prediction (a minimal sketch of this cascade is given after the summary of (3) below). To a certain extent, the classification accuracy continues to rise as the number of layers increases.

(3) A single-branch and a multi-branch deep sparse representation algorithm are proposed. The single-branch algorithm consists of an encoder responsible for learning the mapping and encoding, a decoder for recovering the signal, and a sparse coding layer between them that learns the representation of the encoded coefficients. While exploring nonlinear mappings of the data with deep neural networks, it obtains a sparsely coded representation suitable for classification. The multi-branch algorithm extends this into a multi-branch structure: it imposes self-representation constraints on the different branches, learns each branch's latent features, fuses the branches' sparse features with adaptive weights whose learning rate is controlled by different temperature coefficients, and finally uses the fused features to make an integrated prediction on test samples.
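As referenced in (2), the data flow of the multi-layer model (code the data, take the reconstruction residual, concatenate it with the original data as the next layer's input, then fuse per-layer class probabilities by soft voting) can be sketched as follows. For brevity, a plain ridge coding step over a randomly sampled dictionary and scikit-learn's LogisticRegression stand in for LC-KSVD/LC-PDL and the per-layer classifier; everything here is an illustrative assumption rather than the thesis's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ridge_code(D, X, lam=1e-2):
    """Closed-form l2 coding of the columns of X over dictionary D."""
    n_atoms = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ X)

def train_cascade(X, y, n_layers=3, n_atoms=64, lam=1e-2, seed=0):
    """X: (d, m) training samples as columns, y: (m,) labels."""
    rng = np.random.default_rng(seed)
    layers, X0, Xin = [], X, X
    for _ in range(n_layers):
        # Stand-in dictionary: a random subset of the current layer's inputs.
        D = Xin[:, rng.choice(Xin.shape[1], size=n_atoms, replace=False)]
        A = ridge_code(D, Xin, lam)                   # coding coefficients
        R = Xin - D @ A                               # reconstruction residual
        clf = LogisticRegression(max_iter=1000).fit(A.T, y)
        layers.append((D, clf))
        Xin = np.vstack([R, X0])                      # residual ++ original data -> next input
    return layers

def predict_cascade(layers, Xtest, lam=1e-2):
    """Soft voting: average the class probabilities produced at every layer."""
    X0, Xin, probs = Xtest, Xtest, []
    for D, clf in layers:
        A = ridge_code(D, Xin, lam)
        R = Xin - D @ A
        probs.append(clf.predict_proba(A.T))
        Xin = np.vstack([R, X0])
    return layers[0][1].classes_[np.mean(probs, axis=0).argmax(axis=1)]
```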
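For (3), the single-branch structure can be pictured as a small network with a sparsity-inducing layer between encoder and decoder. The PyTorch sketch below uses learnable soft-thresholding as the sparse coding layer and trains with a reconstruction loss plus an ℓ1 penalty on the code; the layer sizes, the thresholding choice, and the loss weights are assumptions for illustration, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class DeepSparseRepresentation(nn.Module):
    """Encoder -> sparse coding layer -> decoder (single-branch sketch)."""

    def __init__(self, in_dim, hidden_dim=256, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, code_dim))
        self.threshold = nn.Parameter(torch.tensor(0.1))  # learnable shrinkage level
        self.decoder = nn.Sequential(nn.Linear(code_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        # Sparse coding layer: soft-thresholding keeps only strong activations.
        code = torch.sign(z) * torch.relu(torch.abs(z) - torch.abs(self.threshold))
        recon = self.decoder(code)
        return code, recon

def training_step(model, x, optimizer, l1_weight=1e-3):
    """One step: reconstruction loss plus an l1 sparsity penalty on the code."""
    code, recon = model(x)
    loss = nn.functional.mse_loss(recon, x) + l1_weight * code.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A classifier on the sparse code, and the multi-branch extension that fuses several branches' codes with weighted, temperature-controlled adaptive weights, are omitted from this sketch.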
Keywords/Search Tags: Image Classification, Discriminative Sparse Representation Classification, Multilayer Dictionary Learning Model, Deep Sparse Representation Algorithms