A Large-Scale Tucker Decomposition Algorithm Based On Non-negative Orthogonal Constraints

Posted on: 2022-10-16
Degree: Master
Type: Thesis
Country: China
Candidate: M H Chen
Full Text: PDF
GTID: 2518306569475894
Subject: Software engineering
Abstract/Summary:
Tucker Decomposition (TD) is a widely used tensor decomposition algorithm. With orthogonal constraints, TD better captures a low-rank approximation of the raw data, while non-negative constraints typically make the model more interpretable. Existing non-negative Tucker decomposition algorithms suffer from the following problems: (1) they lack generality because hyperparameters must be tuned manually; (2) they perform a large amount of redundant computation; (3) in large-scale settings, the whole tensor must be loaded into memory, and intermediate operations of high computational complexity, such as multi-dimensional mode-n products, SVD, matrix inversion, and QR decomposition, may remain, leading to memory overflow and long running times.

To solve these problems, this paper proposes a Large-Scale Tucker Decomposition algorithm based on Non-Negative Orthogonal constraints (NNO-LSTD). First, closed-form solutions for the orthogonal parameter and the columns of the factor matrices are derived from the orthogonality condition, so that the parameters can be adjusted adaptively. Second, to reduce redundant computation, the algorithm exploits the sparsity of the original tensor and solves only for the observed elements; a caching scheme further reduces the amount of calculation. Finally, to address memory overflow and slow computation at large scale, the algorithm adapts itself to the available computing resources to avoid memory overflow, and a parallel scheme is proposed to accelerate the high-complexity operations.

To verify the effectiveness of the proposed algorithm, experiments are conducted on small- and medium-scale data, large-scale data, and data of varying sparsity, and the algorithm is applied to interpretable compression of convolutional neural networks and to tensor completion. The experiments show that, compared with existing algorithms, the proposed algorithm solves the above problems effectively. In particular, it achieves higher decomposition accuracy on small- and medium-scale data, and it still computes effectively at large scale where existing methods cannot. Moreover, because the algorithm exploits the sparsity of the tensor data, its computation speed on sparse data is improved significantly.

In the application to interpretable compression of deep convolutional neural networks, non-negative constraints are imposed on the convolutional layers, and NNO-LSTD is used to reduce the dimension of the convolutional weight kernels. Experiments demonstrate that, compared with existing network compression methods, the proposed method effectively compresses the network while greatly improving the interpretability of the feature maps. For tensor completion, the experimental results show that the algorithm achieves better completion performance than existing algorithms on small- and medium-scale data, and provides a feasible approach to large-scale tensor completion.
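For readers unfamiliar with the model, the following is a minimal NumPy sketch of plain non-negative Tucker decomposition fitted with the classical multiplicative-update scheme. It illustrates only the underlying factorization, not NNO-LSTD itself: it omits the orthogonality constraint, the closed-form adaptive parameters, the sparsity-aware updates restricted to observed elements, the caching, and the parallel scheme described above, and all names (ntd, mode_dot) are hypothetical.

    import numpy as np

    def unfold(T, mode):
        # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def fold(M, mode, shape):
        # Inverse of unfold for a tensor of the given target shape.
        rest = [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

    def mode_dot(T, M, mode):
        # Mode-n product T x_n M, where M has shape (new_dim, T.shape[mode]).
        shape = list(T.shape)
        shape[mode] = M.shape[0]
        return fold(M @ unfold(T, mode), mode, shape)

    def ntd(X, ranks, n_iter=300, eps=1e-9, seed=0):
        # Non-negative Tucker decomposition X ~= G x_1 A[0] ... x_N A[N-1]
        # with all entries of the core G and the factors A[n] kept >= 0
        # by multiplicative updates.
        rng = np.random.default_rng(seed)
        N = X.ndim
        A = [rng.random((X.shape[n], ranks[n])) for n in range(N)]
        G = rng.random(ranks)
        for _ in range(n_iter):
            for n in range(N):
                # W = core multiplied by every factor except the n-th,
                # so that X_(n) ~= A[n] @ unfold(W, n).
                W = G
                for m in range(N):
                    if m != n:
                        W = mode_dot(W, A[m], m)
                Wn = unfold(W, n)
                Xn = unfold(X, n)
                A[n] *= (Xn @ Wn.T) / (A[n] @ (Wn @ Wn.T) + eps)
            # Core update: contract X with factor transposes (numerator)
            # and G with the Gram matrices A[n].T @ A[n] (denominator).
            num, den = X, G
            for n in range(N):
                num = mode_dot(num, A[n].T, n)
                den = mode_dot(den, A[n].T @ A[n], n)
            G *= num / (den + eps)
        return G, A

    # Tiny usage example on a random non-negative tensor.
    X = np.random.default_rng(1).random((20, 30, 40))
    G, A = ntd(X, ranks=(5, 5, 5))
    Xhat = G
    for n, An in enumerate(A):
        Xhat = mode_dot(Xhat, An, n)
    print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))

Note that this dense baseline touches every tensor entry on every iteration; the thesis's contribution is precisely to avoid this by computing only over the observed (non-zero) elements and by caching and parallelizing the expensive contractions.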
Keywords/Search Tags: Non-negative Tucker decomposition, large scale, tensor completion, convolutional neural network compression, interpretability