
Dimensionality Reduction Methods Based On Manifold Analysis And Their Applications To Computer Vision

Posted on: 2010-02-07
Degree: Doctor
Type: Dissertation
Country: China
Candidate: D Huang
GTID: 1118360275480091
Subject: Computer software and theory

Abstract:
Many kinds of high-dimensional data in real-world applications can be modelled as data points lying close to a low-dimensional linear or nonlinear manifold. The underlying variations in image data sets correspond to continuous physical changes such as pose, the illumination of objects, or the expressions of human faces. Discovering the manifold structure from a set of noisy data points sampled from it is a challenging problem in unsupervised learning, and manifold-related methods for human-computer interaction have recently become an active research area.

Human visual perception is also closely related to real-world manifolds. Artificial neural networks can be used as a powerful tool for modelling and interpreting the manifold structures in real-world data. By manipulating the low-dimensional free parameters of a learned manifold, one can also synthesize or estimate the expected real-world data. This "bi-directional" process of learning and synthesis is very much akin to typical human cognitive behaviour, in which one learns from unorganized observations and infers the unknown using the learned knowledge.

Traditional dimensionality reduction techniques such as Principal Component Analysis (PCA) are restricted to linear structure. Other methods, including Self-Organizing Maps (SOM), manifold learning, and kernel methods, have been developed to deal with the nonlinearity of low-dimensional manifolds, but they have their own limitations. Our approach draws inspiration from and improves upon this pioneering work. The main contributions of this thesis are as follows:

1. A new mean-shifting incremental PCA method is proposed, based on the autocorrelation matrix. The method applies two transformations to the representation of the training data, so that the updated eigen-subspace is re-centered without recomputing the autocorrelation matrix of the old data. Moreover, the storage required for the old information and the dimension of the autocorrelation matrix remain constant instead of growing with the total number of input samples, and the old data need not be stored after each update. Compared to existing algorithms, the proposed method is computationally efficient for applications such as visual subspace learning and recognition.
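To make the idea concrete, here is a minimal sketch of such an update. The abstract does not give the exact pair of transformations, so the class name and the running-average update rules below are illustrative assumptions; what the sketch does show is the key property claimed above: the covariance, and hence the re-centered eigen-subspace, is recovered at any time from fixed-size statistics, without a pass over the old data.

```python
import numpy as np

class MeanShiftingIPCA:
    """Illustrative sketch of mean-shifting incremental PCA based on the
    autocorrelation matrix. The concrete update rules here are the
    standard running-average form, assumed for illustration; the
    thesis's exact transformations may differ."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.autocorr = np.zeros((dim, dim))  # fixed dim x dim storage

    def update(self, x):
        # Running averages: storage stays constant and the old samples
        # are never needed again after this call.
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.autocorr += (np.outer(x, x) - self.autocorr) / self.n

    def eigen_subspace(self, k):
        # Re-center on demand: covariance = autocorrelation - mean mean^T,
        # so the eigen-subspace is re-centered without recomputing the
        # autocorrelation matrix of the old data.
        cov = self.autocorr - np.outer(self.mean, self.mean)
        eigvals, eigvecs = np.linalg.eigh(cov)
        return eigvecs[:, -k:], eigvals[-k:]  # top-k principal directions
```

Because only the fixed-size mean vector and autocorrelation matrix are kept, the eigen-subspace can be re-centered after every update at no extra storage cost.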
2. A new computationally efficient local PCA algorithm is proposed to combine the advantages of NGAS-PCA and PCA-SOM. Each unit is associated with a mean vector and a covariance matrix, and the new competition measure implicitly incorporates both the reconstruction error and the distance between the input data and the unit center. In the proposed algorithm, the extra step of updating the principal subspaces is eliminated from the data-distribution learning process. One potential application of the model is nonlinear pattern learning and recall: after training, the data distribution is represented by a collection of local linear units, and no prior information about the optimal principal subspaces is needed for the pattern representation.

3. A deformable model, the generalized Topology Preserving SOM (gTP-SOM), is proposed to incorporate topology-preserving self-organizing mapping into the neuron competition. It is inspired by the Visualization-induced Self-Organizing Map (ViSOM), in which the mapping preserves the inter-point distances of the input data on the neuron map as well as the topology. The gTP-SOM is driven in parallel by an adaptive force field that imposes constraints on the local boundary variation. Region-aided active contours and level sets are employed to implement the proposed model, which is suitable both for precise edge detection and for recovering complex shapes with varying boundary strength.

4. A new manifold-based method is proposed to construct a nonlinear mapping between the input space and the feature space, instead of treating manifold learning and synthesis in isolation. The nonlinear mapping is realized by modelling local generative units in the input space and a global affine transformation in the feature space. These formulations yield simple solutions for traversing between the input space and the feature space for out-of-sample data points. The proposed method avoids the alternating least squares problem and its local minima, for both the manifold learning process and the bidirectional out-of-sample extension; moreover, it can estimate the underlying dimensionality and is robust to the choice of the number of neighbors.
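As a rough illustration of how such a bidirectional mapping could work, the sketch below fits local generative units by k-means plus per-unit PCA and aligns them with externally supplied embedding coordinates through affine maps solved in closed form by ordinary least squares. All names and modelling choices here are assumptions for illustration, not the thesis's actual algorithm; in particular, the thesis uses a single global affine transformation in the feature space, whereas for simplicity this sketch fits one affine map per unit.

```python
import numpy as np

def fit_bidirectional_map(X, Y, n_units=10, local_dim=2, seed=0):
    """Hypothetical sketch: local generative units in the input space,
    aligned to given embedding coordinates Y by per-unit affine maps.
    Every fit is a closed-form least-squares solve, so no alternating
    optimization is involved."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_units, replace=False)]
    for _ in range(20):                      # plain k-means for the units
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(n_units):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    units = []
    for j in range(n_units):
        Xj, Yj = X[labels == j], Y[labels == j]
        if len(Xj) <= local_dim:             # skip degenerate units
            continue
        # local PCA basis: the generative model of this patch
        U, _, _ = np.linalg.svd((Xj - centers[j]).T, full_matrices=False)
        P = U[:, :local_dim]
        T = (Xj - centers[j]) @ P            # local coordinates
        # affine map from local coordinates to the feature space
        A, *_ = np.linalg.lstsq(np.c_[T, np.ones(len(T))], Yj, rcond=None)
        units.append((centers[j], P, A))
    return units

def forward(units, x):
    """Out-of-sample point: input space -> feature space."""
    j = np.argmin([((x - c) ** 2).sum() for c, _, _ in units])
    c, P, A = units[j]
    t = (x - c) @ P
    return np.r_[t, 1.0] @ A

def backward(units, y):
    """Out-of-sample point: feature space -> input space."""
    # pick the unit whose mapped center (the affine bias row) is nearest
    j = np.argmin([((y - A[-1]) ** 2).sum() for _, _, A in units])
    c, P, A = units[j]
    t, *_ = np.linalg.lstsq(A[:-1].T, y - A[-1], rcond=None)
    return c + P @ t
```

Because each direction reduces to a nearest-unit lookup followed by a linear solve, out-of-sample points can be mapped in either direction without iterative optimization, which mirrors the property claimed for the proposed method.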
Keywords: dimensionality reduction, manifold learning, self-organizing mapping, incremental principal component analysis