
Research On Dimensionality Reduction Algorithms And Its Applications

Posted on: 2009-05-22
Degree: Doctor
Type: Dissertation
Country: China
Candidate: T H Zhang
Full Text: PDF
GTID: 1118360305956608
Subject: Pattern Recognition and Intelligent Systems

Abstract/Summary:
Recently, datasets of high dimensionality have been emerging in many domains of science and industry, such as computer vision, pattern recognition, bioinformatics, and astronomy. When dealing with such datasets, the high dimensionality is often an obstacle to efficient processing: operations on the data are computationally expensive and the results may be suboptimal. Dimensionality reduction is the process of transforming data from a high-dimensional space to a low-dimensional space in order to reveal the intrinsic structure of the data distribution. It plays a crucial role as a way of dealing with the "curse of dimensionality". Over the past decades, a large number of dimensionality reduction algorithms have been proposed and studied. These include conventional algorithms, e.g., PCA and LDA, and recently proposed manifold learning algorithms, e.g., LLE, ISOMAP, LE, and LTSA. However, most existing dimensionality reduction algorithms still suffer from various open problems, e.g., the small sample size problem, the out-of-sample problem, nonlinearity of the sample distribution, and the classification problem. To overcome these problems, this dissertation proposes a collection of new algorithms, enhanced algorithms, and a unifying framework for existing algorithms, together with new algorithms derived from the proposed framework. The main contributions are:

1. We propose a new algorithm, called Linear Local Tangent Space Alignment (LLTSA). It uses the tangent space in the neighborhood of a data point to represent the local geometry, and then aligns those local tangent spaces in a low-dimensional space that is linearly mapped from the raw high-dimensional space. LLTSA can be viewed as a linear approximation of LTSA.

2. Inspired by the idea of locality preserving, we propose a novel subspace learning algorithm, termed Maximum Variance Projections (MVP), for face recognition. It is a linear discriminant algorithm that preserves local information by capturing the local geometry of the manifold, thereby combining the abilities of manifold learning and classification.

3. We propose a new system for multimodal biometric recognition. Within this system, Geometry Preserving Projections (GPP) is developed as a new subspace selection approach, designed especially for the multimodal problem. GPP is a linear discriminant algorithm that effectively preserves local information by capturing the intra-modal geometry.

4. Inspired by LTSA, we propose a new manifold learning algorithm, Local Coordinates Alignment (LCA). LCA obtains local coordinates as representations of a local neighborhood by preserving the proximity relations on the patch. The extracted local coordinates are then aligned by the alignment trick to yield the global embeddings. In LCA, local representation and global alignment are explicitly implemented as two steps for intrinsic structure discovery. In addition, to solve the out-of-sample problem, a linearization approximation is applied to LCA, called Linear LCA (LLCA).

5. We propose a framework, termed "patch alignment", to unify spectral-analysis-based dimensionality reduction algorithms. It consists of two stages: part optimization and whole alignment. In part optimization, different algorithms use different optimization criteria over patches, each of which is built from one measurement together with its related ones. In whole alignment, all part optimizations are integrated to form the final global coordinates for all independent patches based on the alignment trick. As an application of this framework, we develop a new dimensionality reduction algorithm, Discriminative Locality Alignment (DLA), by imposing discriminative information in the part optimization stage.

6. To improve the classification performance of ONPP, we propose a new algorithm, termed Discriminative Orthogonal Neighborhood Preserving Projections (DONPP), based on the patch alignment framework. Moreover, we extend DONPP to semi-supervised DONPP (SDONPP), which is more powerful since it can make use of unlabeled as well as labeled samples.
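The two-stage idea behind the patch alignment framework can be sketched in a few lines of numpy. This is a minimal illustration, not the dissertation's implementation: the per-patch criterion chosen here is the LTSA-style tangent-space fit (local PCA on each k-nearest-neighbor patch), and all function and variable names are illustrative assumptions.

```python
import numpy as np

def patch_alignment_embedding(X, d=2, k=8):
    """Two-stage spectral embedding in the spirit of "patch alignment":
    (1) part optimization builds a small objective matrix on each
        k-nearest-neighbor patch (here: an LTSA-style tangent-space fit);
    (2) whole alignment scatters every patch matrix into one global
        matrix (the "alignment trick") whose bottom eigenvectors give
        the low-dimensional coordinates."""
    n, _ = X.shape
    # k nearest neighbors of each point (the point itself included)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nbrs = np.argsort(dist, axis=1)[:, :k + 1]

    B = np.zeros((n, n))                         # global alignment matrix
    for i in range(n):
        idx = nbrs[i]
        Xi = X[idx] - X[idx].mean(axis=0)        # center the patch
        # part optimization: local tangent coordinates = top-d left
        # singular vectors of the centered patch (local PCA)
        U, _, _ = np.linalg.svd(Xi, full_matrices=False)
        G = np.hstack([np.ones((k + 1, 1)) / np.sqrt(k + 1), U[:, :d]])
        Li = np.eye(k + 1) - G @ G.T             # per-patch objective matrix
        # whole alignment: accumulate L_i into the global matrix
        B[np.ix_(idx, idx)] += Li

    # global coordinates: eigenvectors 2..d+1 of B (skip the constant one)
    _, vecs = np.linalg.eigh(B)
    return vecs[:, 1:d + 1]

# toy usage: unroll a noisy 1-D curve embedded in 3-D
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3, 60))
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.normal(size=(60, 3))
Y = patch_alignment_embedding(X, d=1, k=8)
print(Y.shape)  # (60, 1)
```

Swapping the body of the loop that builds `Li` changes the per-patch criterion, which is exactly how the framework is meant to specialize into different algorithms (e.g., adding label information in the patch stage yields a DLA-like discriminative variant).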
Keywords/Search Tags: Dimensionality reduction, Manifold learning, Machine learning, Pattern recognition