Large-scale data with tensor structure arise in a wide variety of applications, such as computer vision, scientific simulation, sensor networks, and data mining. In most cases these tensors are so large that they are inconvenient to compute with, transfer, or store directly. Fortunately, such tensors usually exhibit a low-rank structure, so they can be approximated by low-rank tensor factorizations. However, computing these large-scale factorizations may still require significant computational resources. Sketching is an effective dimensionality-reduction tool: many matrix sketching algorithms have been proposed for low-rank matrix approximation, compressing the original data while retaining as much of its important information as possible. To better solve tensor low-rank approximation problems, this paper builds on the t-product factorization and proposes several types of sketching algorithms in the transformed domain:

(1) A two-sided sketching algorithm in the transformed domain is proposed, together with a rigorous theoretical analysis of its approximation error. By combining it with the power iteration technique, a tensor low-rank approximation Subspace-Sketch algorithm is developed. In addition, adaptive blocking of the original data is considered, decomposing one large low-rank tensor approximation problem into several smaller ones, which further reduces the storage complexity of the algorithm and facilitates parallel computing. Experiments on low-rank approximation of color images and grayscale video demonstrate the effectiveness of the algorithm in terms of time and storage.

(2) A trilateral (three-sided) sketching algorithm in the transformed domain is proposed. By combining three linear sketching operators, a DCT TGaussian Sketch algorithm is obtained, and a rigorous theoretical analysis of its approximation error is provided. Experiments on low-rank approximation of color images and grayscale video demonstrate the effectiveness of the algorithm in terms of time and the quality of the low-rank approximation.
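To make the transform-domain two-sided sketching idea concrete, the following minimal Python sketch applies an orthonormal DCT along the third mode and a generic two-sided (range/co-range) matrix sketch to each frontal slice. This is an illustrative assumption of how such a scheme can be organized, not the algorithm analyzed in the paper; the function name `two_sided_dct_sketch`, the Gaussian choice of sketching operators, and the sketch sizes `k` and `s` are all hypothetical.

```python
import numpy as np
from scipy.fft import dct, idct

def two_sided_dct_sketch(A, k, s, rng=None):
    """Illustrative two-sided sketch of a 3-way tensor in the DCT transform
    domain (t-product style).  A has shape (n1, n2, n3); k is the target
    rank per frontal slice and s >= k is the co-range sketch size.
    This is a generic slice-wise two-sided sketch, not the paper's method."""
    rng = np.random.default_rng(rng)
    n1, n2, n3 = A.shape

    # Move to the transform domain: orthonormal DCT along the third mode.
    A_hat = dct(A, axis=2, norm="ortho")

    A_approx_hat = np.empty_like(A_hat)
    for i in range(n3):                       # sketch each frontal slice
        Ai = A_hat[:, :, i]
        Omega = rng.standard_normal((n2, k))  # right (range) sketch operator
        Psi = rng.standard_normal((s, n1))    # left (co-range) sketch operator

        Y = Ai @ Omega                        # range sketch,    n1 x k
        W = Psi @ Ai                          # co-range sketch, s  x n2

        Q, _ = np.linalg.qr(Y)                # orthonormal basis of the range
        # Rank-k reconstruction: Ai ~= Q * pinv(Psi @ Q) * W
        A_approx_hat[:, :, i] = Q @ (np.linalg.pinv(Psi @ Q) @ W)

    # Return to the original domain.
    return idct(A_approx_hat, axis=2, norm="ortho")

# Usage: build a tensor whose DCT-domain slices have rank r, then recover it.
rng = np.random.default_rng(0)
n1, n2, n3, r = 120, 100, 3, 10
hat = np.stack([rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
                for _ in range(n3)], axis=2)
A = idct(hat, axis=2, norm="ortho")
A_lr = two_sided_dct_sketch(A, k=15, s=30)
print(np.linalg.norm(A - A_lr) / np.linalg.norm(A))  # near machine precision
```

Because the sketch sizes exceed the true slice rank, the two small sketches `Y` and `W` suffice to reconstruct each slice exactly; in practice `k` and `s` trade storage and computation against approximation error, which is the trade-off the proposed algorithms analyze.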