
Manifold Learning And Semi-supervised Learning With Applications To Feature Extraction

Posted on: 2012-06-27
Degree: Master
Type: Thesis
Country: China
Candidate: J Shi
Full Text: PDF
GTID: 2248330395964046
Subject: Computer software and theory
Abstract/Summary:
Face recognition has become a hot topic in research fields such as machine learning, pattern recognition, and computer vision, and it has been widely applied in business, police security, and other areas. Feature extraction is a fundamental problem in face recognition, whose key goal is to seek a meaningful low-dimensional representation of high-dimensional data. Recent studies have indicated that manifold learning methods, which aim to preserve the intrinsic local neighborhood information of the original data, are well suited to data with nonlinear structure such as face images; semi-supervised learning, which exploits plentiful unlabeled samples together with a small number of labeled samples to improve learning performance, has also received growing attention. However, these algorithms still have shortcomings in practical applications. Building on studies of manifold learning and semi-supervised learning, this paper presents several improved algorithms, and extensive experiments on widely used face image databases demonstrate their effectiveness. A prototype face recognition system is also designed and implemented, applying existing image processing algorithms and classical face recognition methods to practical use.

The main work of this paper covers five aspects:

1. Kernel Supervised Discriminant Projection Analysis

The kernel method makes nonlinearly separable data in the original space as linearly separable as possible in a feature space via a nonlinear mapping. However, it does not fully account for the locality of the dataset, and its computational complexity remains a problem. Unsupervised discriminant projection (UDP), a manifold learning algorithm, effectively uses the local and non-local properties of the dataset, yet it does not make full use of the class information of the samples.
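Before turning to the proposed improvements, UDP's local/non-local scatter criterion can be sketched in NumPy. This is an illustrative sketch under stated assumptions: the k-nearest-neighbor adjacency, variable names, and the pseudo-inverse eigensolver are choices of this sketch, not the thesis's implementation.

```python
import numpy as np

def udp_projection(X, k=5, n_components=2):
    """Illustrative UDP-style projection: maximize non-local scatter
    while minimizing local scatter (one common formulation)."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # k-nearest-neighbor adjacency (symmetric, no self-loops)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    H = np.zeros((n, n))
    for i, nbrs in enumerate(idx):
        H[i, nbrs] = 1.0
    H = np.maximum(H, H.T)
    diffs = X[:, None, :] - X[None, :, :]
    # local scatter: differences between neighboring pairs only
    S_L = np.einsum('ij,ijk,ijl->kl', H, diffs, diffs) / (2.0 * n * n)
    # total scatter minus local scatter gives the non-local scatter
    S_T = np.einsum('ijk,ijl->kl', diffs, diffs) / (2.0 * n * n)
    S_N = S_T - S_L
    # maximize w^T S_N w / w^T S_L w via a generalized eigenproblem
    evals, evecs = np.linalg.eig(np.linalg.pinv(S_L) @ S_N)
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_components]]
```

The ratio criterion makes the shortcomings discussed below concrete: nothing in it uses class labels, and the k-NN graph is sensitive to outliers.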
To address these problems of the kernel method and UDP, a novel method called kernel supervised discriminant projection analysis is presented. It first maps the training samples into a high-dimensional feature space via a nonlinear mapping determined by a kernel function, and then takes the local and non-local properties and the class information into account. It therefore not only preserves local neighborhood information but also extracts nonlinear discriminant features for effective classification. Experimental results on the Yale face image database show that the proposed method is effective.

2. Unsupervised Discriminant Analysis Based on the Local and Non-local Mean

Considering that UDP is easily influenced by outliers and that its computation is relatively slow, a feature extraction method called unsupervised discriminant analysis based on the local and non-local mean is developed. It uses the local and non-local means to construct the local and non-local scatter, which to some extent overcomes the discrimination difficulty caused by outliers. Moreover, it is computationally more efficient than UDP. Experimental results on the ORL, Yale, and AR face databases show that the proposed method is more effective than UDP.

3. Local Mean Based Generalized Scatter Difference Unsupervised Discriminant Analysis

Although UDP takes the local and non-local features of the dataset into consideration, the high-dimensional small-sample-size problem unavoidably arises in face recognition applications. Since the maximum scatter difference discriminant criterion is an improved form of the Fisher discriminant criterion that theoretically eliminates the high-dimensional small-sample-size problem, a new method called local mean based generalized scatter difference unsupervised discriminant analysis is proposed.
The local and non-local scatter are first constructed from the local and non-local means, and the difference between the non-local scatter and C times the local scatter is then taken as the discriminant criterion, so that the local information of the sample distribution is preserved while the high-dimensional small-sample-size problem is essentially overcome. Experiments on the Yale and FERET face image databases validate its effectiveness.

4. Mahalanobis Distance-based Semi-supervised Discriminant Analysis

In face recognition applications there is often insufficient class-label information, and correlations also exist among face sample features. To address this, a Mahalanobis distance-based semi-supervised discriminant analysis is presented. Within the graph embedding framework, the method uses the Mahalanobis distance to perform Marginal Fisher Analysis (MFA) on the labeled samples, so that it not only preserves intraclass compactness and interclass separability but also extracts discriminant features for effective classification; meanwhile, the unlabeled samples are used to characterize the geometric structure of the dataset, so the local neighborhood information among samples is well preserved. Compared with traditional feature extraction methods, the proposed method achieves better recognition performance, and experiments on the ORL, Yale, and AR face databases demonstrate its effectiveness.

5. Local Correlation Semi-supervised Discriminant Analysis

Linear discriminant analysis (LDA) is a supervised linear feature extraction method that uses the class information of the dataset and thus achieves a good classification effect.
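For reference, the Fisher criterion that LDA maximizes can be sketched as follows. This is a minimal illustrative implementation (no regularization, pseudo-inverse eigensolver), not the exact formulation used in the thesis.

```python
import numpy as np

def lda_fit(X, y, n_components=1):
    """Minimal Fisher LDA sketch: maximize between-class scatter
    over within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_w = np.zeros((d, d))  # within-class scatter
    S_b = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_w += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        S_b += len(Xc) * (diff @ diff.T)
    # maximize w^T S_b w / w^T S_w w via a generalized eigenproblem
    evals, evecs = np.linalg.eig(np.linalg.pinv(S_w) @ S_b)
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_components]]
```

Both scatter matrices depend entirely on the class labels, which is exactly why LDA degrades when labeled samples are scarce, as noted next.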
MFA is a supervised feature extraction method based on manifold learning. It constructs an intraclass neighborhood graph and an interclass neighborhood graph so that samples from the same class are as close to each other as possible while samples from different classes are as far from each other as possible. However, as the number of labeled samples in the dataset decreases, the performance of LDA and MFA degrades; moreover, the traditional Euclidean distance is incapable of characterizing the intrinsic similarities among samples. A novel method named local correlation semi-supervised discriminant analysis is therefore developed. It first constructs an intraclass similarity graph and an interclass similarity graph from the similarities between samples, and then develops a new discriminant criterion that, within each sample's neighborhood, separates the K1 same-class samples with small similarities from the K2 different-class samples with large similarities; the method is also extended to semi-supervised learning. Extensive experimental results on the ORL and AR face databases demonstrate its effectiveness.
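The similarity-graph construction described above can be sketched as follows. Cosine similarity is used here as an assumed correlation measure, and the K1/K2 selection is one plausible reading of the criterion, not the thesis's exact definition.

```python
import numpy as np

def similarity_graphs(X, y, k1=3, k2=3):
    """Illustrative intraclass/interclass similarity graphs built from
    normalized-correlation (cosine) similarity."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T                      # cosine similarity matrix
    n = len(y)
    W_intra = np.zeros((n, n))
    W_inter = np.zeros((n, n))
    for i in range(n):
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        diff = np.where(y != y[i])[0]
        # k1 least-similar same-class samples (to be pulled together)
        for j in same[np.argsort(S[i, same])[:k1]]:
            W_intra[i, j] = W_intra[j, i] = 1.0
        # k2 most-similar different-class samples (to be pushed apart)
        for j in diff[np.argsort(-S[i, diff])[:k2]]:
            W_inter[i, j] = W_inter[j, i] = 1.0
    return W_intra, W_inter
```

A discriminant criterion would then pull together the pairs linked by W_intra and push apart those linked by W_inter, with unlabeled samples entering only through the similarity matrix.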
Keywords/Search Tags: Feature extraction, face recognition, principal component analysis, linear discriminant analysis, kernel method, manifold learning, semi-supervised learning, unsupervised discriminant projection, marginal Fisher analysis, local feature, non-local feature