
Research And Application Of Discriminative Dictionary Learning Algorithms Based On Data Representation

Posted on: 2020-09-29
Degree: Master
Type: Thesis
Country: China
Candidate: T Xu
Full Text: PDF
GTID: 2428330578483311
Subject: Computer Science and Technology
Abstract/Summary:
Data representation has attracted much attention from researchers in the fields of signal processing, image processing, computer vision, and pattern recognition, and dictionary learning (DL) has become one of the hot topics in current research. Discriminative dictionary learning (DDL) aims to learn a dictionary from training samples that enhances the discriminative capability of the model's coding vectors.

The support vector guided dictionary learning (SVGDL) algorithm applies a standard support vector machine (SVM) to the coding vectors as its discriminative term, combining dictionary learning with classifier training, and uses the quadratic hinge loss to freely distribute weights over the samples when updating the coding vectors. However, SVGDL does not further explore the generalization performance of the classifier constructed from the coding vectors, for two reasons. First, the traditional SVM is based only on the large-margin classification principle and builds its separating hyperplane from boundary samples, so it ignores the distribution information of the data, which weakens the classifier's robustness to noise. Second, during model optimization, the classification performance of the classifier built from the recovered coded data is neglected; this performance depends not only on the large-margin principle but also on the basic fact that the data are contained in a minimum enclosing ball, whose radius enters the generalization error bound. To further improve the discriminative ability of the dictionary, this thesis studies these two issues. The main contributions are as follows:

(1) Since the classifier trained by SVGDL does not consider the distribution information of the data, the minimum class variance support vector machine, which combines the Fisher linear discriminant criterion with the large-margin classification principle, is used as the discriminative term of the model. When updating the coding vectors, the intra-class scatter matrix
information is added to reduce the variation among samples of the same class while maintaining a large-margin classifier between different classes. This allows the model to better guide dictionary learning, yielding the minimum class variance guided dictionary learning (MCVGDL) algorithm.

(2) For SVGDL, the generalization performance of the classifier built from the recovered data is related not only to the large-margin principle but also to the basic fact that all the recovered data are contained in a minimum enclosing ball. The upper bound on the SVM's generalization error is therefore used to improve the model: minimizing this bound reduces the classifier's generalization error and yields a classifier with a larger effective margin. The resulting method is the generalization error bound guided discriminative dictionary learning (GEBGDL) algorithm.

To verify the classification performance of the two proposed algorithms, SVM, CRC, SRC, and several typical dictionary learning algorithms are compared with MCVGDL and GEBGDL under different numbers of training samples and dictionary atoms on seven datasets covering face recognition, object recognition, and handwritten digit recognition. The effects of the model parameters on the recognition rate and the convergence of the algorithms are also discussed. Experiments show that the proposed algorithms achieve better recognition rates and reduce the generalization error bound.
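As a rough illustration only, and not the thesis's actual formulation, the three building blocks named above can be sketched in NumPy: the quadratic hinge loss that SVGDL uses as its discriminative term, the within-class scatter matrix that MCVGDL adds to the coding-vector update, and an enclosing-ball radius of the kind that appears in the radius-margin generalization bound behind GEBGDL. All function names are illustrative, and the ball is simply centered at the data mean rather than being the true minimum enclosing ball.

```python
import numpy as np

def quadratic_hinge(scores, labels):
    """Quadratic hinge loss sum_i max(0, 1 - y_i * f(x_i))^2, the smooth
    discriminative term used in SVGDL-style formulations (sketch only)."""
    margins = 1.0 - labels * scores
    return np.sum(np.maximum(margins, 0.0) ** 2)

def within_class_scatter(X, y):
    """Within-class scatter S_w = sum_c sum_{x in class c} (x - m_c)(x - m_c)^T,
    the distribution term that MCVGDL-style updates incorporate to shrink
    variation inside each class."""
    d = X.shape[1]
    S_w = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        diff = Xc - Xc.mean(axis=0)      # deviations from the class mean
        S_w += diff.T @ diff
    return S_w

def enclosing_ball_radius(X):
    """Radius of a ball centered at the data mean that contains all points --
    a simple stand-in (an upper bound, not the exact minimum enclosing ball)
    for the radius term in radius-margin generalization bounds."""
    center = X.mean(axis=0)
    return np.max(np.linalg.norm(X - center, axis=1))
```

In a full dictionary learning objective these pieces would be combined with a reconstruction term and alternated with dictionary updates; the sketch only shows how each quantity is computed on its own.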
Keywords/Search Tags: dictionary learning, coding vector, support vector machine, minimum class variance support vector machine, minimum enclosing ball, digital image classification