
Research On Graph Learning For Dimensionality Reduction And Its Applications

Posted on: 2013-01-16
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L M Zhang
Full Text: PDF
GTID: 1228330362466695
Subject: Computer application technology
Abstract/Summary:
Machine learning is one of the main research areas of artificial intelligence. Many machine learning algorithms are closely related to graphs, such as spectral clustering, semi-supervised learning, and dimensionality reduction (DR), where a graph is generally used to model the similarity between data points. Because the graph plays a central role in these graph-based algorithms, how to build a high-quality graph has become a hot research topic in machine learning. This dissertation proposes a novel graph construction idea and studies it systematically through several proposed DR algorithms. The main contributions are as follows:

(1) The idea of simultaneous DR and graph learning. Classic graph-building methods are independent of the subsequent learning task: the graph is constructed first and then applied to learning problems such as DR. As a result, if the constructed graph is "bad", the algorithmic performance suffers, because the graph remains unchanged while it is applied to the subsequent problems. To address this issue, we put forward the idea of simultaneous DR and graph learning, which carries out graph construction and the subsequent learning problem (here, DR) together in one unified framework. The DR algorithms in (2)-(5) below illustrate this idea concretely.

(2) Simultaneous DR and graph learning with entropy regularization. Based on the proposed graph construction strategy, we give a DR algorithm called Graph-optimized Locality Preserving Projection (GoLPP). To make the edge weights of the graph as smooth as possible, GoLPP incorporates an entropy regularization term into the objective function, which leads to a graph-updating formula of the same form as conventional heat kernel weights (an objective of this kind is sketched after the contribution list). The automatically optimized graph potentially alleviates LPP's dependence on the k-nearest-neighbor criterion in the original data space, and experiments on public data sets show that GoLPP outperforms LPP.

(3) Semi-supervised simultaneous DR and graph learning. Because of its strong sum-to-one constraint, GoLPP cannot incorporate supervised information as naturally as most classical semi-supervised algorithms do. To address this problem, we first improve GoLPP by relaxing this constraint, and then incorporate pairwise constraints into the graph-optimization process, which leads to a semi-supervised graph-updating formulation with a natural probabilistic interpretation. The effectiveness of the proposed algorithm is verified on several UCI and face data sets.

(4) Simultaneous DR and graph learning with a pre-specified graph constraint. The graph-optimization process of GoLPP depends heavily on the transformed data, which is not always reliable according to some experimental results. We therefore constrain the graph optimization to the neighborhood of a pre-specified graph that models the original data structure. As a result, the graph-update formula becomes a weighted sum of the pre-defined graph and a new graph determined by the transformed space, which makes full use of both the original data and the transformed data. Experiments verify that this new model is more flexible.
(5) Simultaneous DR and graph learning with a sparsity constraint. The maximum entropy principle in GoLPP destroys the sparsity of its graph. We address this problem with a sparsity constraint and propose a new DR algorithm that simultaneously performs graph construction by sparse representation and the pursuit of projection directions in a unified framework, yielding an automatically optimized and sparse graph. Furthermore, this algorithm potentially provides a link between Sparsity Preserving Projection (SPP) and GoLPP. Experimental results on public data sets indicate that the proposed algorithm outperforms both SPP and GoLPP.

(6) A regularization framework for simultaneous DR and graph learning. We unify the different DR algorithms proposed above under a common framework in which the graph is optimized during the DR process, and different priors, motivations, and assumptions can be incorporated into the objective function through different regularization terms. In this way, we obtain different optimized graphs closely tied to the specific learning problem. Moreover, the framework is general and can serve as a platform for designing new graph-based DR algorithms (a schematic alternating-optimization sketch is also given after the contribution list).
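A plausible form of the entropy-regularized objective described in (2) is sketched below; this is an illustration only, and the dissertation's exact formulation may differ. Here $x_i$ are the data points, $W$ is the projection matrix, $S = (s_{ij})$ is the graph weight matrix, and $\eta$ is an assumed regularization parameter:

$$
\min_{W,\,S}\; \sum_{i,j} \bigl\| W^{\top} x_i - W^{\top} x_j \bigr\|^2 s_{ij} \;+\; \eta \sum_{i,j} s_{ij} \ln s_{ij}
\qquad \text{s.t.}\quad \sum_{j} s_{ij} = 1,\; s_{ij} \ge 0 .
$$

For a fixed $W$, minimizing each row of $S$ under the sum-to-one constraint gives $s_{ij} \propto \exp\bigl(-\|W^{\top} x_i - W^{\top} x_j\|^2 / \eta\bigr)$, i.e., weights of the same form as the conventional heat kernel weights mentioned in (2).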
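To make the alternating structure of the framework in (6) concrete, the following Python sketch alternates an LPP-style projection step with the entropy-regularized graph update sketched above. The function name simultaneous_dr_graph, the parameter eta, the fixed iteration count, and the small ridge term are illustrative assumptions, not the dissertation's exact algorithm.

import numpy as np
from scipy.linalg import eigh

def simultaneous_dr_graph(X, dim, eta=1.0, n_iter=10):
    """Schematic alternating optimization for simultaneous DR and graph
    learning: update an LPP-style projection W for the current graph S,
    then re-estimate S from projected distances with entropy-regularized
    (heat-kernel-like) weights. X has shape (n_features, n_samples)."""
    n = X.shape[1]
    S = np.full((n, n), 1.0 / n)              # start from a uniform graph
    for _ in range(n_iter):
        # Projection step: LPP-style generalized eigenproblem for the
        # Laplacian of the current (symmetrized) graph.
        A = (S + S.T) / 2
        D = np.diag(A.sum(axis=1))
        L = D - A
        M_num = X @ L @ X.T
        M_den = X @ D @ X.T + 1e-6 * np.eye(X.shape[0])   # ridge for stability
        _, vecs = eigh(M_num, M_den)
        W = vecs[:, :dim]                     # smallest generalized eigenvectors
        # Graph step: entropy-regularized weights from projected distances,
        # normalized row-wise to satisfy the sum-to-one constraint.
        Y = W.T @ X
        d2 = ((Y[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
        S = np.exp(-d2 / eta)
        np.fill_diagonal(S, 0.0)
        S /= S.sum(axis=1, keepdims=True)
    return W, S

For example, calling simultaneous_dr_graph(np.random.rand(20, 100), dim=2) learns two projection directions for 100 twenty-dimensional points while optimizing the graph, rather than fixing the graph before DR as in classic pipelines.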
Keywords/Search Tags: Machine learning, pattern recognition, dimensionality reduction, graph construction, locality preserving projection, sparse representation