
Linear Manifold Discriminant Analysis Models And Algorithms

Posted on: 2022-01-07
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L C Hu
GTID: 1488306755459564
Subject: Computer Science and Technology

Abstract/Summary:
Excessive dimensionality leads to high storage overhead, heavy computation, and long training times in machine learning. Moreover, because the ambient space expands exponentially with the dimensionality, the proportion of the data space covered by the training data drops sharply, which degrades the generalization of the trained model. An important way to address these issues is dimensionality reduction, which maps the original high-dimensional data into a low-dimensional subspace by some effective transformation. Manifold learning assumes that the data in the high-dimensional ambient space lie on or near a low-dimensional embedded manifold, so the dimensionality reduction problem can be recast as a manifold recovery problem. Because nonlinear manifold learning algorithms suffer from high time cost and poor out-of-sample extension, manifold discriminant analysis under the graph-embedding framework has been developed instead. This dissertation makes a series of attempts at, and discussions of, widely existing problems in manifold discriminant analysis.
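Throughout, "graph-embedding framework" refers to linear dimensionality reduction driven by an affinity graph over the samples: a projection matrix is found by solving a generalized eigenproblem built from graph Laplacians. The following is a minimal generic sketch of that view, in the spirit of locality preserving projections; it is not one of the dissertation's models, and the function name, heat-kernel weighting, neighborhood size, and regularization constant are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def graph_embedding_projection(X, n_neighbors=5, n_components=2, sigma=1.0):
    """Generic graph-embedding sketch (LPP-style): learn a linear projection
    that keeps neighboring samples close in the low-dimensional subspace.

    X : (n_samples, n_features) data matrix.
    Returns W : (n_features, n_components) projection matrix.
    """
    n = X.shape[0]
    D2 = cdist(X, X, metric="sqeuclidean")

    # Symmetric k-NN affinity graph with heat-kernel weights.
    S = np.zeros((n, n))
    idx = np.argsort(D2, axis=1)[:, 1:n_neighbors + 1]   # skip self-distance
    for i in range(n):
        S[i, idx[i]] = np.exp(-D2[i, idx[i]] / (2.0 * sigma ** 2))
    S = np.maximum(S, S.T)                                # symmetrize

    # Graph Laplacian L = D - S.
    Dg = np.diag(S.sum(axis=1))
    L = Dg - S

    # Generalized eigenproblem  X^T L X w = lam * X^T D X w ;
    # the smallest eigenvalues give the locality-preserving directions.
    A = X.T @ L @ X
    B = X.T @ Dg @ X + 1e-6 * np.eye(X.shape[1])          # regularize for stability
    eigvals, eigvecs = eigh(A, B)
    return eigvecs[:, :n_components]

# Usage: Z = X @ graph_embedding_projection(X) gives the low-dimensional embedding.
```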
The specific contributions are summarized as follows:

· We propose a patch-based multi-manifold orthogonal neighborhood-preserving discriminant analysis algorithm. From the perspective of patch alignment, we consider intra-class compactness, intra-class structure, and inter-class separability simultaneously. Moreover, we infuse the intra-class structure information described by sample reconstruction into the intra-class compactness loss, considering the compactness of two reconstruction groups instead of sample pairs within the same class. By analyzing the projection direction and the maximum inter-class margin, we select the samples that should participate in the inter-class separability on each patch. Meanwhile, a fast orthogonalization method is used to obtain an orthogonal projection matrix. Besides, we also carry out our method in a reproducing kernel Hilbert space, which gives rise to nonlinear maps. Experimental results on a toy dataset and several benchmark face image databases, compared with some state-of-the-art methods, demonstrate the effectiveness of our algorithm.

· We put forward a novel objective function for linear discriminant analysis that combines the maximum margin criterion with an L1,2-norm weighting strategy to resist outliers. Meanwhile, we adaptively compute the weighted intra-class and global centroids to further reduce the influence of outliers, and employ the L2,1-norm to enforce row sparsity so that subspace learning and feature selection are performed jointly (see the sketch after this list). Besides, an effective alternating iterative algorithm is derived and its convergence is verified. The complexity analysis shows that the proposed algorithm can handle large-scale data. Experiments on several benchmark databases demonstrate that the proposed algorithm is more effective than some other state-of-the-art methods and generalizes better.

· We propose a more powerful discriminant feature extraction framework. In our model, we formulate a new strategy induced by the non-squared L2-norm to enhance the local intra-class compactness of the data manifold, which achieves joint learning of a locality-aware graph structure and a desirable projection matrix. Besides, we formulate a weighted retargeted regression to perform marginal representation learning adaptively instead of using the general average inter-class margin. To alleviate the disturbance of outliers and prevent overfitting, we measure the regression term, the locality-aware term, and the regularization term with the joint L2,1-norm, which enforces row sparsity. Then, we derive an effective iterative algorithm to solve the proposed model. Experimental results over a range of benchmark databases demonstrate that the proposed algorithm outperforms some state-of-the-art approaches.

· We propose a learning framework for category-oriented self-learning graph embedding, in which a flexible low-dimensional compact representation is obtained by imposing an adaptive graph learning process over the entire data, while the inter-class separability of the low-dimensional embedding is examined by jointly learning a linear classifier. Besides, the framework can easily be extended to the semi-supervised setting. Extensive experiments on several widely used benchmark databases demonstrate the effectiveness of the proposed method compared with some state-of-the-art approaches.
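Several of the contributions above combine L2,1-norm row sparsity with an alternating iterative solver. The sketch below shows the standard reweighting scheme for an L2,1-regularized regression, shown only to illustrate the kind of alternating update the abstract refers to; it is a generic illustration rather than the dissertation's exact objective, and the function name, ridge initialization, and parameter lam are assumptions.

```python
import numpy as np

def l21_row_sparse_regression(X, Y, lam=0.1, n_iter=50, eps=1e-8):
    """Sketch: min_W ||X W - Y||_F^2 + lam * ||W||_{2,1} via iterative reweighting.

    The L2,1-norm sums the L2-norms of W's rows, so entire rows (features) are
    driven toward zero, coupling subspace learning with feature selection.
    X : (n_samples, n_features); Y : (n_samples, n_targets), e.g. one-hot labels.
    """
    n_features = X.shape[1]
    XtX, XtY = X.T @ X, X.T @ Y
    # Ridge solution as a simple initialization (an assumption for this sketch).
    W = np.linalg.solve(XtX + lam * np.eye(n_features), XtY)
    for _ in range(n_iter):
        # Reweighting: d_i = 1 / (2 * ||w_i||_2) for each row w_i of W.
        row_norms = np.linalg.norm(W, axis=1)
        D = np.diag(1.0 / (2.0 * row_norms + eps))
        # Closed-form update of W with the current row weights held fixed.
        W = np.linalg.solve(XtX + lam * D, XtY)
    return W

# Rows of W whose norm is (near) zero indicate features that can be discarded.
```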
Keywords/Search Tags:Manifold Discriminant Analysis, Graph Embedding Framework, Structured Graph Learning, Margin Representation Learning, Sparsity and Robustness