
Research On Models And Algorithms For Low-tubal-rank Tensor Recovery

Posted on: 2020-05-18    Degree: Doctor    Type: Dissertation
Country: China    Candidate: A D Wang    Full Text: PDF
GTID: 1488306512982029    Subject: Control Science and Engineering
Abstract/Summary:
In many real applications, data tensors may suffer from missing values, noise, and outliers due to causes such as sensor failures, communication loss, user mistakes, and object occlusion. A problem then naturally arises: how can the true tensor be recovered from observations corrupted by noise, missing values, and outliers? This problem is significant in both theory and practice. Low-rank tensor learning has been widely applied in computer vision, data mining, machine learning, signal processing, and beyond. Unlike the matrix case, a tensor admits multiple definitions of the rank function, such as the classical CP rank and Tucker rank, leading to many low-rank tensor models. In many tensor learning tasks, such as tensor completion and tensor robust principal component analysis (TRPCA), the recently proposed low-tubal-rank models have achieved better performance than traditional low-rank tensor models and have become an active research topic in tensor learning. Based on low-tubal-rank models, this dissertation studies the problems of noisy tensor completion and robust tensor recovery. The main contributions are as follows:

1. For tensor data with missing values and noise, an algorithm named iterative singular value thresholding of tensors is proposed. Its convergence is first proved; a non-asymptotic upper bound on the estimation error is then established and shown to be minimax optimal. Experiments on synthetic data verify that the proposed upper bound predicts the scaling behavior of the estimation error, and experiments on real datasets show the effectiveness of the proposed algorithm.

2. For noisy tensor completion, a tubal nuclear norm (TNN) regularized least squares estimator is proposed. From the statistical viewpoint, a non-asymptotic upper bound on the estimation error and a lower bound in the minimax sense are established. From the optimization viewpoint, two algorithms are developed: an ADMM-based algorithm and a CGD algorithm; the latter is further proved to enjoy a linear convergence rate to the statistical precision. Experiments on synthetic and real datasets confirm the correctness of the proposed upper bound and the effectiveness of the proposed algorithms, respectively.

3. To recover a tensor from sparse corruptions, a triple-factorization-based TRPCA model is first proposed. A non-convex ADMM algorithm is then designed to solve the model, and the convergence and sub-optimality of the algorithm are established. The efficiency and effectiveness of the proposed algorithm are evaluated in experiments on both synthetic and real datasets.

4. Outliers and noise may coexist in real tensor data, so robust tensor decomposition (RTD) is needed. A TNN-based RTD model is first proposed, and both deterministic and non-asymptotic bounds on the estimation error are established. Two algorithms are designed: an ADMM-based algorithm (ADMM) and a modified Frank-Wolfe algorithm (FW). FW avoids the full SVDs required by ADMM and thus reduces the computational complexity. Simulation studies show that the proposed upper bound predicts the scaling behavior of the estimation error, and experiments on real datasets illustrate the superiority and efficiency of the proposed algorithms.

5. The classical TNN is permutation sensitive and defined only for 3-way tensors, which limits its ability to mine the rich correlations in higher-order tensors. To alleviate this issue, two permutation-invariant tubal nuclear norms are defined, and RTD models based on them are proposed. The statistical performance of these models is analyzed, and two algorithms are developed to solve them. The correctness of the theory is verified by simulation studies, and the effectiveness of the proposed models is shown through experiments on real datasets.
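The tubal nuclear norm and its proximal operator, tensor singular value thresholding, underlie several of the algorithms above: TNN is computed from the singular values of the frontal slices in the Fourier domain along the third mode, and its prox soft-thresholds those singular values. A minimal NumPy sketch under the standard t-SVD definitions (function names and the 3-way array layout are illustrative assumptions, not the dissertation's code):

```python
import numpy as np

def tubal_nuclear_norm(X):
    """Tubal nuclear norm of a 3-way tensor (illustrative sketch).

    Sum of singular values of the frontal slices of the tensor in the
    Fourier domain along the third mode, divided by n3.
    """
    Xf = np.fft.fft(X, axis=2)               # FFT along the tube dimension
    n3 = X.shape[2]
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3

def tensor_svt(X, tau):
    """Tensor singular value thresholding: prox of TNN at threshold tau."""
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):              # per frontal slice in Fourier domain
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Yf[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh   # soft-threshold
    return np.real(np.fft.ifft(Yf, axis=2))  # back to the original domain
```

In an iterative singular value thresholding scheme for completion, for instance, each iteration would re-impose the observed entries on the current estimate and then apply `tensor_svt` to it.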
Keywords/Search Tags: Tensor singular value decomposition, low-tubal-rank structure, tensor completion, robust tensor decomposition, statistical performance, scalable algorithms