
The Research On Robust Sparse Representation

Posted on: 2021-01-07
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X J Wang
Full Text: PDF
GTID: 1368330626955414
Subject: Computer application technology
Abstract/Summary:
With the explosive growth of data volume, how to process and analyse data quickly and efficiently has become increasingly important. The theory of compressed sensing was proposed in this context and has attracted considerable attention from both academia and industry. Sparse representation is an indispensable prerequisite for the success of compressed sensing. Given an overcomplete dictionary (one in which the number of basis elements spanning the data space exceeds the data dimension), the sparse representation model represents each datum with as few atoms as possible. Because this concise representation greatly reduces hardware requirements, sparse representation is widely used in image processing (image coding, super-resolution reconstruction, image inpainting), audio processing (blind source separation, audio enhancement, audio compression), pattern recognition (face recognition, gesture recognition, expression recognition), and other fields.
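For reference, the sparse representation model described above can be written in its canonical form. This is the standard textbook formulation, not an equation quoted from the thesis; the notation (y for an observed sample, D for the overcomplete dictionary, x for the coefficient vector) is assumed.

```latex
% Canonical sparse representation model (standard formulation; notation assumed:
% y = observed sample, D = overcomplete dictionary, x = sparse coefficient vector).
\begin{align}
  &\min_{x} \|x\|_{0} \quad \text{s.t.} \quad y = Dx
    && \text{(exact sparsest recovery, combinatorial and NP-hard)} \\
  &\min_{x} \|x\|_{1} \quad \text{s.t.} \quad \|y - Dx\|_{2} \le \varepsilon
    && \text{(convex $\ell_1$ relaxation with noise level $\varepsilon$)}
\end{align}
```

The ℓ1 relaxation is convex and tractable at scale, but, as discussed next, its robustness to noisy, scarce, or nonlinearly structured data is limited.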
Nevertheless, the performance of sparse representation does not yet satisfy the requirements of practical applications. There are two main challenges. One is limited applicability to large-scale problems, such as how to solve linear systems involving very large matrices. The other is the limited robustness of the model, for example its ability to handle data with different characteristics, such as noisy data. The former can be addressed by adding computing resources; improving the robustness of the model, however, is difficult. The main reasons include the instability of existing methods when reconstructing data, the limitations of the ℓ1 norm, the demanding requirements on the number and quality of training samples, the inability to represent nonlinearly structured data, and the underutilization of the characteristic information of the data. To solve these problems, this thesis studies sparse reconstruction algorithms and dictionary representation methods systematically and in depth. The main research work is summarized as follows.

(1) A weighted sparse representation method based on self-paced learning is proposed. This method effectively excludes from the representation dictionary those training samples that differ significantly from the test sample, and uses a weighting scheme to take the local information between samples into account. Sparse representation is treated as a gradual process, which avoids using highly dissimilar samples to represent the test sample. In this way, classification accuracy and stability are improved.

(2) An adaptive sparse-dense mixture representation method based on non-convex optimization is presented. This method uses non-convex optimization to decompose the training dictionary into a category dictionary and a non-category dictionary, which greatly increases the representation ability of the dictionary and overcomes the requirement that training samples be both plentiful and of high quality. More importantly, the trace norm is applied to the category dictionary, which avoids overemphasizing its sparse information while ignoring its correlation information. Representation samples can therefore be selected adaptively according to the structure of the category dictionary, improving the representation ability of the model.

(3) An adaptive kernel sparse representation method is proposed. In this method, data in the original feature space are mapped into a high-dimensional feature space, and the adaptability of the trace norm to the dictionary structure is exploited in that space to obtain more discriminative training samples for representing the test sample. By combining the advantages of the kernel method and the trace norm, this approach not only compensates for the shortcomings of sparse representation methods in processing nonlinearly structured data, but also handles more general pattern recognition problems.

(4) An elastic net method based on autoencoders is presented. To make full use of the characteristic information of the data, the elastic net model is combined with an autoencoder: an elastic net coding layer is inserted between the encoder and the decoder. In this layer the test sample is represented by the elastic net model, while the representation coefficients are encoded according to the autoencoder principle, drawing on the feature extraction strength of convolutional neural networks and on the advantage of the elastic net in representing coefficients. An end-to-end training framework is also designed. This greatly improves the classification ability of traditional sparse representation methods on images.

This thesis focuses on the robustness of sparse representation. Several algorithms for sparse reconstruction and dictionary construction are proposed, which enhance the ability of sparse representation to process data with different characteristics. The research results not only enrich the theory but also further improve the ability to solve practical problems.
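To make the representation-based classification framework underlying these contributions concrete, the sketch below implements the classical sparse-representation-based classification (SRC) pipeline with an elastic-net coder. It illustrates only this baseline framework under the stated assumptions, not the self-paced, trace-norm, kernel, or autoencoder-coupled models proposed in the thesis; all names (src_classify, train_X, train_y, test_x) and parameter values are illustrative.

```python
# Minimal SRC-style baseline: code the test sample over a dictionary of training
# samples with an elastic-net penalty, then classify by class-wise reconstruction
# residual. Illustrative only; not the models proposed in the thesis.
import numpy as np
from sklearn.linear_model import ElasticNet

def src_classify(train_X, train_y, test_x, alpha=0.01, l1_ratio=0.7):
    """Return the class whose training atoms best reconstruct test_x."""
    # Dictionary columns are L2-normalized training samples.
    D = train_X / np.linalg.norm(train_X, axis=0, keepdims=True)
    coder = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                       fit_intercept=False, max_iter=5000)
    coder.fit(D, test_x)                  # solve the elastic-net coding problem
    x = coder.coef_                       # representation coefficients
    residuals = {}
    for c in np.unique(train_y):
        mask = (train_y == c)
        # Reconstruct test_x using only the coefficients of class c.
        residuals[c] = np.linalg.norm(test_x - D[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)

# Toy usage: 20-dimensional samples, two classes of 15 training samples each.
rng = np.random.default_rng(0)
train_X = np.hstack([rng.normal(0.0, 1.0, (20, 15)),
                     rng.normal(3.0, 1.0, (20, 15))])
train_y = np.array([0] * 15 + [1] * 15)
test_x = rng.normal(3.0, 1.0, 20)         # drawn near class 1
print(src_classify(train_X, train_y, test_x))
```

Classifying by class-wise residual rather than by the largest single coefficient is the standard SRC design choice: it uses all of a class's atoms jointly, which is the behaviour that the weighted and adaptive variants above aim to make more robust.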
Keywords/Search Tags:Classification, Optimization Modeling, Sparse Representation, Self-Paced-Learning, Adaptability, Kernel Methods, Convolutional Neural Networks