
Robust Low-Rank Tensor Completion Via Double Factor Norm Minimization

Posted on: 2022-12-10
Degree: Master
Type: Thesis
Country: China
Candidate: J Zhang
Full Text: PDF
GTID: 2480306782971529
Subject: Insurance
Abstract/Summary:
With the rapid development of technologies such as computer communication, the data that people collect, process, and analyze have higher dimensions and more complex structure. Owing to factors such as acquisition-equipment failure, object occlusion, and communication interference, tensor data often suffer from noise and missing entries. How to effectively recover the original data from partial observations has therefore become a hot issue in machine learning and related fields.

Low-rank tensor completion is widely used in machine learning. Different notions of tensor rank, such as the traditional CP rank and Tucker rank, lead to different low-rank tensor completion models. The recently proposed tubal-rank model outperforms traditional models and has received great attention in image and video inpainting tasks. However, optimization models based on the tensor tubal rank suffer from two problems: (1) the tensor nuclear norm is usually used to relax the rank function, but it penalizes large singular values too heavily, and the resulting over-shrinkage prevents it from capturing the low-rank structure of the tensor well; (2) during optimization, singular value decompositions must be repeatedly performed on large tensors, which incurs high computational cost.

To address these two problems, a double factor norm regularized tensor completion model is proposed. We give the definitions of the tensor double nuclear norm and the tensor Frobenius/nuclear hybrid norm. Using the tensor-tensor product and the tensor singular value decomposition based on an arbitrary invertible linear transform, we prove that these norms are equivalent to the tensor Schatten-p quasi-norm for p = 1/2 and p = 2/3, respectively, thereby avoiding the heavy, uniform shrinkage of the larger singular values. In addition, the large tensor is factorized into two much smaller factor tensors, which reduces the computational complexity. The resulting non-convex problem is split into convex subproblems and solved by the alternating direction method of multipliers (ADMM), so that each subproblem has a closed-form solution. Experimental results on synthetic and real data sets verify the efficiency of the double factor norm regularized tensor completion model.
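The tubal-rank machinery underlying the model can be illustrated numerically. The following is a minimal NumPy sketch, assuming the discrete Fourier transform as the invertible linear transform (the model itself allows any invertible transform): the t-product is computed slice-wise in the transform domain, and the tensor nuclear norm is the average, over frontal slices in that domain, of the matrix nuclear norms. The function names here are illustrative, not the thesis's own code.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3) under the DFT:
    transform along the third mode, multiply frontal slices, transform back."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    # slice-wise matrix product: C_f[:, :, k] = A_f[:, :, k] @ B_f[:, :, k]
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.fft.ifft(Cf, axis=2).real

def tubal_nuclear_norm(X):
    """Tensor nuclear norm induced by the t-SVD under the DFT:
    mean over transformed frontal slices of their matrix nuclear norms."""
    Xf = np.fft.fft(X, axis=2)
    n3 = X.shape[2]
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3
```

A double-factor scheme as described in the abstract would then keep the variable as a product `t_product(A, B)` of two small factor tensors and penalize norms of `A` and `B` instead of decomposing the full tensor, so each SVD involves only the small factors.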
Keywords/Search Tags:Tensor Completion, Tensor Double Nuclear Norm, Tensor Frobenius/Nuclear Hybrid Norm, Tensor Factorization