
Research On Linear Reconstruction Based Feature Extraction And Classification

Posted on: 2015-01-16    Degree: Master    Type: Thesis
Country: China    Candidate: J J Cui    Full Text: PDF
GTID: 2298330431981028    Subject: Control theory and control engineering
Abstract/Summary:
In recent years, with the rapid development of computer and image processing technology, pattern recognition has attracted more and more attention. Improving recognition performance is the critical task, and the key is to extract high-quality features and to design better classifiers. There are two classic types of feature extraction methods: one is based on discrimination (classification), such as Fisher Linear Discriminant Analysis (FLDA); the other is based on reconstruction, such as Principal Component Analysis (PCA) and kernel-based learning approaches. However, these classic algorithms still suffer from many problems in practical applications, and face images are easily corrupted by external interference. How to improve the recognition rate effectively and make full use of the image information is therefore our main research task. In view of the shortcomings of feature extraction algorithms based on linear reconstruction and of the associated classifiers, this thesis studies them in depth. Experimental results on several face image databases show that the proposed methods are effective. The main work of this thesis can be summarized as follows:

1、Locally Linear Reconstruction based on Kernel Principal Component Analysis
Inspired by LLE, we assume that the data lie on a low-dimensional manifold that can be approximated linearly in a local area of the high-dimensional space, so we require that a sample point can be linearly reconstructed by its neighbors. We embed this manifold assumption into our approach so that the representative nodes we find become more reasonable. The optimal reconstruction coefficients are obtained by solving an objective function. Experiments on the AR and Yale face databases verify the effectiveness of the algorithm.

2、The l2-norm Representation Classifier Steered Discriminative Projection with Applications to Face Recognition
In view of the excellent reconstruction performance of the least-squares regression model with the l2-norm, we improve on previous work by replacing the sparse reconstruction with an l2-norm reconstruction. In the projection space, we use the l2-norm based reconstruction method to compute the representation coefficients of the samples. The algorithm enlarges the between-class scatter and reduces the within-class scatter, which leads to better classification: we maximize the ratio of the between-class reconstruction residual to the within-class reconstruction residual in the projected space. The algorithm also aims to reduce the computational cost; studies have shown that for face recognition the l2-norm based representation performs best in terms of computational efficiency, accuracy and robustness, so our method performs well. Experiments on the Yale-B and AR face databases prove the superiority of the algorithm.
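To make the l2-norm representation step in point 2 concrete, the following is a minimal sketch, not the thesis implementation: a test sample is reconstructed over all training samples with a ridge-regularized (l2-norm) least-squares fit, and classification is by class-wise reconstruction residual. The function name, the data layout (samples as columns) and the regularization parameter lam are illustrative assumptions.

```python
# Minimal sketch of l2-norm (ridge) representation plus class-wise
# residual classification; names and parameters are assumptions.
import numpy as np

def l2_representation_classify(X_train, y_train, x_test, lam=0.01):
    """X_train: (d, n) training samples as columns; y_train: (n,) integer labels;
    x_test: (d,) test sample. Returns the predicted class label."""
    n = X_train.shape[1]
    # Closed-form l2-norm (ridge) representation coefficients:
    #   a = (X^T X + lam * I)^(-1) X^T x
    coef = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n),
                           X_train.T @ x_test)
    # Assign the class whose training samples give the smallest
    # reconstruction residual with their share of the coefficients.
    best_label, best_residual = None, np.inf
    for label in np.unique(y_train):
        mask = (y_train == label)
        residual = np.linalg.norm(x_test - X_train[:, mask] @ coef[mask])
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

With X_train built from vectorized face images and x_test a probe image, l2_representation_classify(X_train, y_train, x_test) returns the predicted identity; the thesis uses this kind of residual inside the discriminative projection criterion above.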
3、Kernelized Laplacian Collaborative Representation based Classifier for Face Recognition
Since samples in the original space are often not linearly separable, we map them into a high-dimensional space to make them linearly separable. Studies have shown that the geometric relationships between samples can still be maintained after such a mapping. We therefore propose the Kernelized Laplacian Collaborative Representation based Classifier to improve recognition rates. The first step maps the samples into a high-dimensional space; we then constrain the objective function with a least-variance term and with the requirement that the geometric relationships remain unchanged, so the method not only considers locality but also maintains integrity. After a series of optimization steps we obtain the reconstruction coefficients. Experiments on the ORL and FERET face databases verify the effectiveness of the algorithm.

4、Two-Phase Kernelized Laplacian Test Sample Sparse Representation Method for Face Recognition
In this chapter we embed the algorithm of the previous chapter into a two-phase sparse reconstruction classification scheme and propose the Two-Phase Kernelized Laplacian Test Sample Sparse Representation Method for face recognition. In the first phase, we use all training samples as the dictionary to represent the test sample and then select the M nearest neighbors that contribute most to this representation to form a subset dictionary. In the second phase, we represent the test sample over this subset and perform the classification. Because the first phase identifies the class labels of the nearest neighbors and rebuilds the dictionary from this subset of training samples, it purifies the dictionary and helps the second phase. By mapping the samples into the kernel space and preserving the geometric structure between samples, the method achieves better experimental results than the previously proposed algorithms.
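As an illustration of the two-phase scheme described in point 4, here is a minimal sketch in the plain (non-kernel, non-Laplacian) setting, which is a simplification of the thesis method: phase one represents the test sample over all training samples and keeps the M columns with the largest contributions; phase two re-represents the sample over that subset and assigns the class with the smallest reconstruction residual. The parameters M and lam and the l2-regularized solver are illustrative assumptions.

```python
# Minimal two-phase representation-based classification sketch
# (linear case only; the kernel mapping and Laplacian constraint
# from the thesis are omitted). Names and parameters are assumptions.
import numpy as np

def ridge_coefficients(D, x, lam=0.01):
    """l2-regularized representation of x over the columns of D."""
    n = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ x)

def two_phase_classify(X_train, y_train, x_test, M=30, lam=0.01):
    # Phase 1: represent the test sample with the whole training set and
    # rank training samples by the size of their contribution |a_i|*||x_i||.
    coef = ridge_coefficients(X_train, x_test, lam)
    contrib = np.abs(coef) * np.linalg.norm(X_train, axis=0)
    top = np.argsort(contrib)[-M:]
    # Phase 2: re-represent the test sample over the M selected neighbors
    # (the "purified" subset dictionary) and classify by residual.
    D, labels = X_train[:, top], y_train[top]
    coef2 = ridge_coefficients(D, x_test, lam)
    best_label, best_res = None, np.inf
    for label in np.unique(labels):
        mask = (labels == label)
        res = np.linalg.norm(x_test - D[:, mask] @ coef2[mask])
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```

The subset dictionary in phase two plays the "purified dictionary" role described above; the thesis additionally performs both phases in kernel space with a Laplacian locality constraint, which this sketch does not include.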
Keywords/Search Tags: kernel method, manifold learning, sparse representation, Laplacian, local linear reconstruction, collaborative reconstruction, local reconstruction error