
Research On Graph-Motivated Unsupervised Dimensionality Reduction & Discriminant Subspace Learning And Their Applications

Posted on: 2011-11-05    Degree: Doctor    Type: Dissertation
Country: China    Candidate: B Yang    Full Text: PDF
GTID: 1118330338495796    Subject: Computer application technology
Abstract/Summary:
With the rapid emergence of high-dimensional data, the so-called 'curse of dimensionality' has become a severe challenge in pattern recognition and machine learning. Dimensionality reduction techniques are therefore indispensable, and among them graph-motivated approaches have recently become a hot topic. As an important branch of dimensionality reduction, discriminant subspace learning, which depends on class labels or pairwise-constraint prior knowledge, can often improve classification performance. This thesis therefore proposes a graph construction approach, applies it to unsupervised dimensionality reduction, and presents several discriminant subspace learning algorithms. The main contributions are as follows:

1. To address the difficulty of selecting the neighborhood-size parameter in traditional neighborhood graphs, a construction approach for the sample-dependent graph (SG) is proposed, in which each sample itself and the similarities between sample pairs determine its neighbors, so no neighborhood-size parameter needs to be predefined. By avoiding this parameter selection, SG can often fit the intrinsic structure of the data more effectively. Based on these properties of SG and the popularity of graph-motivated dimensionality reduction, we incorporate it into the off-the-shelf unsupervised Locality Preserving Projection (LPP) to develop the sample-dependent Locality Preserving Projection (SLPP). On real tasks such as face recognition, SG and SLPP prove effective and feasible compared with dimensionality reduction algorithms based on traditional neighborhood graph constructions.

2. In graph-motivated unsupervised dimensionality reduction algorithms, constructing the graph from locality leads to so-called disguised discrimination, which carries no true discriminant information.
The consistency between this disguised discrimination and the true class information plays a key role in the performance of such algorithms. Taking Unsupervised Discriminant Projection (UDP) as an example, we examine the nature of disguised discrimination in graph-motivated unsupervised dimensionality reduction, an underlying reason why the corresponding algorithms are sensitive to locality parameters, and how the inconsistency between disguised discrimination and true class information affects graph construction and prediction performance. The proposed theory is expected to offer insight into graph construction for unsupervised learning, and it is verified experimentally on face recognition.

3. From the viewpoint of constructing scatter matrices, we establish a Structurally Motivated (SM) framework for discriminant analysis (DA). SM not only accommodates many existing discriminant analysis algorithms, categorizing and characterizing them by the structure granularity used in their scatter matrices, but also provides insight for developing new ones. In the structure-granularity spectrum from class to cluster to locality, the cluster granularity has not yet been sufficiently exploited in the scatter matrices of existing DA algorithms. To fill this gap, we develop three cluster-granularity discriminant analysis algorithms: SWDA, SBDA, and SWBDA. Experiments on tasks such as face recognition demonstrate the effectiveness of the proposed algorithms and of SM.

4. In Margin Maximizing Discriminant Analysis (MMDA), the Support Vector Machine (SVM) is used only as a 'black box' for feature extraction, while the extracted features are classified by other classifiers such as neural networks. Clearly, the capability of SVM is not fully exploited.
To overcome this limitation, we present an SVM-induced embedded discriminant subspace learning algorithm that unifies classification and dimensionality reduction in a single framework: the SVM is trained within the dimensionality reduction process, so the two boost and improve each other. The performance of the proposed algorithm is verified on benchmark datasets.

5. We propose a discriminant subspace learning algorithm based on SVM and linear discriminant analysis (LDA), solved iteratively. The algorithm takes into account not only between-class information but also within-class information. Compared with MMDA, it exploits the information in the data more effectively when learning the projection matrix and thus boosts the performance of subsequent classifiers, achieving relatively good classification performance after dimensionality reduction of high-dimensional data.
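The parameter-free neighbor rule of contribution 1 can be illustrated with a minimal sketch. This is not the dissertation's implementation; it assumes one plausible reading of "sample-dependent": x_j counts as a neighbor of x_i when their Gaussian similarity exceeds x_i's mean similarity to all other samples, so the neighborhood size adapts per sample instead of being a global k. The resulting graph is then plugged into a standard LPP-style generalized eigenproblem, min_a a'XᵀLXa subject to a'XᵀDXa = 1; the kernel width sigma and the small ridge term are illustrative choices, not values from the thesis.

```python
import numpy as np
from scipy.linalg import eigh

def sample_dependent_graph(X, sigma=1.0):
    """Build an adjacency matrix with no neighbor-count parameter:
    x_j is a neighbor of x_i when similarity(x_i, x_j) exceeds
    x_i's mean similarity to all other samples (assumed rule)."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    S = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian similarity
    np.fill_diagonal(S, 0.0)                       # no self-loops
    thresh = S.sum(axis=1) / (n - 1)               # per-sample mean similarity
    W = np.where(S > thresh[:, None], S, 0.0)      # sample-dependent neighbors
    return np.maximum(W, W.T)                      # symmetrize the graph

def slpp(X, W, dim=2):
    """LPP on a given graph: minimize a' X'LX a s.t. a' X'DX a = 1,
    solved as a symmetric generalized eigenproblem (smallest eigenvalues)."""
    D = np.diag(W.sum(axis=1))                     # degree matrix
    L = D - W                                      # graph Laplacian
    XLX = X.T @ L @ X
    XDX = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # ridge for stability
    _, vecs = eigh(XLX, XDX)                       # ascending eigenvalues
    return vecs[:, :dim]                           # projection matrix A

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                       # 40 samples, 5 features
W = sample_dependent_graph(X)
A = slpp(X, W, dim=2)
Y = X @ A                                          # embedded data
print(Y.shape)  # → (40, 2)
```

Note how the threshold row `thresh` replaces the usual k-nearest-neighbor or epsilon-ball choice: each row of W keeps only the edges whose similarity is above that sample's own average, which is the sense in which the graph is "sample-dependent."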
Keywords/Search Tags: Dimensionality reduction, graph construction, subspace learning, discriminant analysis, support vector machines, face recognition, classification