
Riemannian Manifolds Optimization For Dimensionality Reduction Representation And Its Applications

Posted on: 2020-10-05
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H R Chen
Full Text: PDF
GTID: 1360330623456750
Subject: Computer Science and Technology
Abstract/Summary:
In the era of big data, computer and multimedia technology are developing rapidly, generating massive amounts of image and video data every moment. Faced with such volumes, recognizing the data effectively has become a major challenge, and even simple storage and retrieval are increasingly difficult. Data dimensionality reduction is an important way to address the problems of data storage, retrieval, and recognition, and it has therefore attracted wide attention and yielded fruitful results.

Traditional feature extraction methods are mainly modeled and optimized in Euclidean space, where Lagrangian or greedy algorithms are commonly used to handle constrained optimization problems. However, these algorithms often produce suboptimal solutions. Riemannian manifold optimization opens a new direction for obtaining more accurate numerical solutions, and it has two significant advantages. First, some problems that are non-convex in Euclidean space can be transformed into convex problems on a Riemannian manifold by introducing an appropriate metric, so that numerical methods can be improved to find the global optimum. Second, for problems whose constraint sets carry Riemannian geometric structure, optimizing directly on the manifold exploits the geometry of the constraint space and greatly reduces the cost of solving. In view of these advantages, this dissertation studies the formulation and solution of data dimensionality reduction models on Riemannian manifolds. Its main innovations are as follows.

First, for a class of composite functions, Riemannian manifold optimization algorithms converge slowly because they use only first-order information about the function. To address this, this dissertation proposes a Fast Optimization Algorithm (FOA) on Riemannian manifolds and proves theoretically that the generated sequence of function values converges at a guaranteed rate. In addition, for the low-rank representation problem, it proposes an augmented Lagrangian method on Riemannian manifolds that is solved with the fast optimization algorithm. Experiments show that FOA converges faster than first-order alternatives and that low-rank representation on Riemannian manifolds achieves higher clustering accuracy.

Second, existing partial least squares (PLS) regression methods are modeled and optimized in Euclidean space. Under orthogonality or generalized orthogonality constraints on the regression factors, greedy algorithms are commonly used to solve for the factors column by column, which often leads to suboptimal solutions. To overcome this shortcoming, this dissertation proposes a PLS regression model and its optimization algorithm on Riemannian manifolds, which optimizes all PLS factors jointly and thereby avoids suboptimal solutions. To prevent overfitting, it further proposes a sparse PLS regression model on Riemannian manifolds and applies it to image classification. Experiments show that the proposed model and algorithm achieve a lower classification error rate than comparable methods.

Third, for feature representation of images corrupted by Laplacian noise, existing Euclidean methods build models by maximizing the L1 norm of the covariance matrix of the low-dimensional features and then optimize with greedy algorithms, which again yields suboptimal solutions. By treating the constraints on the projection matrices as a product manifold, this dissertation proposes an L1-norm two-dimensional principal component analysis model in the product manifold space and optimizes the projection matrices as a whole to obtain the global optimum. Experiments show that, compared with similar methods, the product manifold optimization method has better denoising and feature extraction ability.

Fourth, most PCA-based algorithms consider only linear correlations among data features, whereas high-dimensional features are often nonlinearly related. To address this, this dissertation proposes a maximally correlated principal component analysis model based on deep parametric learning. The model maps nonlinearly correlated features to linearly correlated features through a deep parametric framework and then applies PCA for dimensionality reduction; a learning algorithm for the model is also derived. Experiments show that, compared with other commonly used linear and nonlinear dimensionality reduction algorithms, the proposed algorithm extracts features better on both simulated data sets and several real data sets.
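To make the central idea concrete, here is a minimal sketch of Riemannian gradient ascent on the unit sphere, maximizing the Rayleigh quotient f(x) = xᵀAx whose maximizer is the leading eigenvector of A. This is a generic illustration of handling a constraint (here, unit norm) through the manifold's geometry via tangent-space projection and retraction, not the dissertation's FOA; the function name and step-size choice are assumptions for the example.

```python
import numpy as np

def riemannian_gradient_ascent(A, x0, step=0.1, iters=500):
    """Maximize f(x) = x^T A x over the unit sphere S^{n-1}.

    Illustrative sketch: the unit-norm constraint is handled by the
    manifold geometry (tangent projection + retraction) rather than
    a Lagrange multiplier.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2.0 * A @ x                 # Euclidean gradient of f
        rgrad = egrad - (x @ egrad) * x     # project onto tangent space at x
        x = x + step * rgrad                # step along the tangent direction
        x = x / np.linalg.norm(x)           # retract back onto the sphere
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T                                 # symmetric test matrix
x = riemannian_gradient_ascent(A, rng.standard_normal(5))

# x should align (up to sign) with the leading eigenvector of A.
w, V = np.linalg.eigh(A)
alignment = abs(x @ V[:, -1])
```

The same projection-and-retraction pattern generalizes to the Stiefel, Grassmann, and product manifolds used throughout the dissertation, where the constraint is orthogonality of a whole projection matrix rather than unit norm of a single vector.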
Keywords/Search Tags: Riemannian manifolds, Grassmann manifolds, Product manifolds, Fast optimization algorithm, Partial least squares regression, L1-2DPCA, Maximally correlated principal component analysis, Dimensionality reduction