
Feature Extraction And Classification Based On Low-Rank Matrix Decomposition

Posted on: 2015-05-09    Degree: Master    Type: Thesis
Country: China    Candidate: S X Yi    Full Text: PDF
GTID: 2298330431981233    Subject: Computer applications
Abstract/Summary:
Nowadays, face recognition is a hot topic in research fields such as pattern recognition and computer vision. A face recognition system mainly consists of four parts: face image acquisition and preprocessing, face detection, feature extraction, and recognition. Among them, feature extraction is one of the most fundamental problems in pattern recognition: its essence is to project high-dimensional data onto a low-dimensional subspace that is more conducive to classification. Effective feature extraction is therefore the key to face recognition, fingerprint recognition, and character recognition. Researchers have proposed many related algorithms; principal component analysis (the K-L transform) and Fisher linear discriminant analysis are the most classic and widely used feature extraction methods, but they still have many disadvantages. In many real applications, the training data are largely corrupted by occlusion or noise (such as illumination changes and shadows), so the main problem to be solved is how to exploit the image information to extract more effective feature vectors and improve the recognition rate. The main idea of low-rank decomposition is that the original space can be divided into multiple subspaces; these subspaces not only separate the noise but also preserve the global structure of the data. Motivated by recent progress in low-rank matrix decomposition, this thesis carries out an in-depth study of several traditional feature extraction methods and improves them using the concepts of low-rank decomposition. Extensive experimental results demonstrate the effectiveness of the proposed methods.

1. Robust Laplacian Sparse Coding for Image Representation
Sparse coding is a popular technique for finding interpretable sparse representations of images. It has been successfully applied in a wide range of applications such as pattern recognition, signal processing, and computer vision. A major drawback of existing sparse coding methods is that the over-complete dictionary is assumed to be noise free. In many real applications, the training data are largely corrupted by occlusion or noise; because of this assumption, sparse coding methods degrade significantly in performance when gross outliers are present in the data. Despite its obvious importance, robust sparse coding has been relatively unexplored in computer vision. This thesis develops Robust Laplacian Sparse Coding (RLSC) and presents an effective convex approach that uses recent advances in rank minimization. Several synthetic and real-world examples illustrate the benefits of RLSC.

2. Low-Rank Constrained Linear Discriminant Analysis
Traditional linear discriminant analysis is very sensitive to largely corrupted data. To address this problem, and building on the recent success of low-rank matrix recovery, the thesis proposes a novel low-rank constrained linear discriminant analysis (LRLDA) algorithm for head pose estimation and face recognition. By adding a low-rank constraint, LRLDA achieves more robustness and discriminating power than traditional LDA algorithms. Extensive experimental results demonstrate the effectiveness of LRLDA.
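All of the methods summarized in this abstract build on the same primitive: splitting a corrupted data matrix D into a low-rank part L and a sparse error part S. The following is a minimal numerical sketch of that decomposition (principal component pursuit solved by alternating singular-value and entrywise soft-thresholding); the function names, the default lam = 1/sqrt(max(m, n)), and the stopping rule are common illustrative conventions, not the thesis's own implementation.

import numpy as np

def singular_value_threshold(M, tau):
    """Soft-threshold the singular values of M by tau (nuclear-norm proximal step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    """Entrywise soft-thresholding (l1-norm proximal step)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def robust_pca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Split D into a low-rank part L and a sparse error part S by solving
    min ||L||_* + lam * ||S||_1  s.t.  D = L + S
    with an augmented-Lagrangian / ADMM scheme (illustrative sketch only)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                      # Lagrange multipliers
    d_norm = np.linalg.norm(D, 'fro') + 1e-12
    for _ in range(max_iter):
        L = singular_value_threshold(D - S + Y / mu, 1.0 / mu)
        S = soft_threshold(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y = Y + mu * residual
        if np.linalg.norm(residual, 'fro') / d_norm < tol:
            break
    return L, S

Given a matrix of vectorized training images stacked as rows, the recovered L would then be passed to the downstream sparse coding or discriminant step in place of the raw, corrupted data.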
3. Robust Marginal Fisher Analysis
Nonlinear dimensionality reduction and classifier selection are two key issues in face recognition. The thesis proposes an efficient face recognition algorithm named Robust Marginal Fisher Analysis (RMFA), which uses recent advances in rank minimization. Marginal Fisher Analysis (MFA) is a supervised manifold learning method that preserves local manifold information; however, a major shortcoming of MFA is its brittleness with respect to grossly corrupted or outlying observations. The main idea of RMFA is as follows. First, the high-dimensional face images are mapped into a lower-dimensional discriminating feature space by low-rank matrix recovery (LR), which determines a low-rank data matrix from the corrupted input data. Then MFA is used to obtain a set of projection axes that maximize the ratio of the between-class scatter Sb to the within-class scatter Sw. Several experiments illustrate the benefit and robustness of RMFA.

4. Face Recognition Based on Non-negative and Low-Rank Matrix Factorization
The main idea of non-negative matrix factorization (NMF) is to decompose a large matrix into two small matrices whose product approximates the original matrix, with neither factor containing negative entries; this matches the nature of real-world data. Combining this with the idea of low-rank recovery, the thesis proposes a face recognition method based on non-negative low-rank matrix factorization. The key idea is to add a low-rank constraint to the non-negative factorization so that the decomposed factors are both non-negative and low-rank, which makes the algorithm more practical. Several experiments on the AR and Extended Yale B face databases prove the effectiveness and robustness of this method.
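The coupled non-negative low-rank factorization itself is not reproduced here, but the overall pipeline the abstract describes (recover a cleaner low-rank data matrix, learn non-negative basis features, classify in the reduced space) can be illustrated with off-the-shelf components. The sketch below uses synthetic placeholder arrays (X_train, y_train, X_test), a plain rank-r SVD truncation in place of the thesis's constrained factorization, and scikit-learn's NMF and nearest-neighbor classifier; all names and parameter values are illustrative assumptions.

import numpy as np
from sklearn.decomposition import NMF
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: rows are vectorized, non-negative face images; y_train are labels.
rng = np.random.default_rng(0)
X_train = rng.random((200, 32 * 32))
y_train = rng.integers(0, 10, size=200)
X_test = rng.random((50, 32 * 32))

# 1. Low-rank recovery of the training matrix. A plain rank-r SVD truncation stands in;
#    the thesis instead couples the low-rank constraint (or a robust recovery such as
#    the robust_pca sketch above) into the factorization itself.
U, s, Vt = np.linalg.svd(X_train, full_matrices=False)
r = 40
L = np.clip(U[:, :r] * s[:r] @ Vt[:r], 0.0, None)   # keep the NMF input non-negative

# 2. Non-negative factorization: learn basis images and encode both sets in that basis.
nmf = NMF(n_components=40, init='nndsvd', max_iter=500, random_state=0)
H_train = nmf.fit_transform(L)
H_test = nmf.transform(X_test)

# 3. Classify in the reduced non-negative feature space.
clf = KNeighborsClassifier(n_neighbors=1).fit(H_train, y_train)
print(clf.predict(H_test)[:10])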
Keywords/Search Tags: feature extraction, face recognition, low-rank decomposition, image classification, pose estimation