
Tensor Completion and Restoration Based on Low-Rank Learning

Posted on: 2022-05-08
Degree: Master
Type: Thesis
Country: China
Candidate: X Wang
GTID: 2518306539952809
Subject: Control Science and Engineering

Abstract/Summary:
With the advent of the era of big data, large volumes of high-dimensional data such as images and videos are being acquired and stored. The collected data are frequently incomplete and contaminated by noise, which severely hampers data analysis. How to complete the missing entries, or to recover clean data from noisy observations, has therefore become an important problem in data processing. In recent years, low-rank models have attracted extensive attention in the academic community: because high-dimensional data such as images and videos have strong internal structure, and consecutive frames are strongly correlated, they can be represented in a low-dimensional space. This thesis therefore studies tensor completion and recovery models based on low-rank learning, with particular attention to matrix completion and recovery. The main work is summarized as follows:

1) Since high-dimensional data are often not low rank, we design a convolutional robust principal component analysis algorithm based on the convolutional nuclear norm. Low-rank algorithms assume that the data are low rank or approximately low rank, so they cannot cope effectively with non-low-rank data in practical applications. Although images, videos, and similar data are themselves not low rank, their convolution matrices usually are (a small numerical illustration of this property appears below). Following this principle, the thesis exploits the low-rank property of the convolution matrix to constrain the original data, thereby transforming a non-low-rank structure into a low-rank one. The alternating direction method of multipliers is used to solve the convex optimization problem in the model, which finally realizes the recovery of non-low-rank tensors. Experiments on synthetic and real data demonstrate the effectiveness of the algorithm.

2) Tensor completion and recovery models now emerge one after another, and a large number of different completed matrices can be obtained from them. To verify the accuracy of matrix completion and to select the best completion model for a given dataset, a selection method based on self-validation is proposed. The traditional data-validation method first splits the observed entries into two subsets, a training set and a validation set, then obtains completed matrices from the training set and computes the error on the validation set; the smaller the error, the better the completion is judged to be, and the model that performs best on the validation set is chosen to produce the final result. Although straightforward, this approach may settle on a non-optimal model that overfits the validation set. This thesis therefore suggests a different approach, called self-validation, which relies on a special metric based on the isomeric condition to evaluate the "goodness" of a completion. The metric outputs a score between 0 and 1 that measures how identifiable the completed matrix is, and the completion with the highest score determines the selected model (both strategies are sketched below). Experiments on synthetic data and four real-world datasets show that self-validation performs better than data-validation.
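To make the key property behind contribution 1) concrete, the following minimal numpy sketch (not the thesis code; the signal, kernel size, and function names are illustrative assumptions) builds the convolution matrix of a 1-D signal and shows that, although the signal itself is not low rank as a vector of samples, its convolution matrix has very low effective rank. The convolutional nuclear norm penalizes exactly this quantity, and the thesis minimizes it with the alternating direction method of multipliers.

```python
import numpy as np

def conv_matrix(x, kernel_size):
    """Build the (circulant) convolution matrix of a 1-D signal x.

    Each row is a circularly shifted window of length `kernel_size`,
    so convolving x with any kernel of that size is a matrix-vector
    product with this matrix.
    """
    n = len(x)
    rows = [np.roll(x, -i)[:kernel_size] for i in range(n)]
    return np.stack(rows)          # shape (n, kernel_size)

def nuclear_norm(A):
    """Sum of singular values, the convex surrogate for rank."""
    return np.linalg.svd(A, compute_uv=False).sum()

# A sum of two sinusoids: generic and full rank as raw data,
# yet every window lives in a 4-dimensional space, so the
# convolution matrix has effective rank about 4.
t = np.arange(256) / 256
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)
A = conv_matrix(x, kernel_size=32)
s = np.linalg.svd(A, compute_uv=False)
print("effective rank:", int((s > 1e-6 * s[0]).sum()),
      "| nuclear norm:", nuclear_norm(A).round(2))
```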
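The two model-selection strategies compared in contribution 2) can be outlined as follows. This is an illustrative sketch, not the thesis implementation: each candidate model is assumed to be a completion function `(M_obs, mask) -> full matrix`, and `score_fn` is a hypothetical placeholder for the isomeric-condition metric, whose exact definition is not given in this abstract.

```python
import numpy as np

def data_validation_select(M_obs, mask, models, holdout=0.2, seed=0):
    """Baseline from the abstract: hold out a fraction of the observed
    entries, complete the rest with each candidate model, and pick the
    model with the smallest error on the held-out entries."""
    rng = np.random.default_rng(seed)
    obs_idx = np.argwhere(mask)
    rng.shuffle(obs_idx)
    n_val = int(holdout * len(obs_idx))
    val_idx, train_idx = obs_idx[:n_val], obs_idx[n_val:]

    train_mask = np.zeros_like(mask, dtype=bool)
    train_mask[tuple(train_idx.T)] = True

    best_model, best_err = None, np.inf
    for name, complete in models.items():   # complete: (M, mask) -> full matrix
        M_hat = complete(M_obs, train_mask)
        err = np.linalg.norm((M_hat - M_obs)[tuple(val_idx.T)])
        if err < best_err:
            best_model, best_err = name, err
    return best_model

def self_validation_select(M_obs, mask, models, score_fn):
    """Self-validation: complete with ALL observed entries and rank the
    candidates by a score in [0, 1] (here `score_fn`, a stand-in for the
    isomeric-condition metric); the highest score wins."""
    best_model, best_score = None, -np.inf
    for name, complete in models.items():
        M_hat = complete(M_obs, mask)
        s = score_fn(M_hat, M_obs, mask)
        if s > best_score:
            best_model, best_score = name, s
    return best_model
```

Note the design difference: self-validation never holds out observed entries, so every candidate is fitted on all available data, avoiding the risk of overfitting a small validation set that the abstract attributes to data-validation.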
Keywords/Search Tags: low rank, tensor completion and recovery, convolutional nuclear norm, convolutional robust principal component analysis, self-validation