
Research On Time Series Data Analysis And Network Compression Based On Tensor Calculation

Posted on: 2023-09-29    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y W Ji    Full Text: PDF
GTID: 1528306914458604    Subject: Information and Communication Engineering
Abstract/Summary:
The emerging artificial intelligence industry is developing rapidly, and the amount of data generated and required is growing with it. These data are often distributed in high-dimensional feature spaces and are frequently missing or redundant during acquisition and application, which complicates data analysis. Current techniques usually reduce high-dimensional data to a vector or a matrix, but this transformation destroys the inherent structure of the original data and loses the relationships between dimensions. Targeting the problems of missing and redundant data in high-dimensional spaces, this dissertation studies missing time series data completion and network model compression based on tensor computation. The research topics and contributions are as follows:

(1) To address the frequent loss of values in multivariate time series data, this dissertation proposes LBIATC, a tensor completion method with two additional regularization terms. The first is a bidirectional local temporal regularization term that models local temporal correlation through two learnable variables: one represents the linear relationship between the missing values at the current moment and the observations in the past local time window, and the other represents the linear relationship between the missing values at the current moment and the observations in the future local time window. The second is a sparsity regularization term that maps the original tensor data to the frequency domain with the discrete cosine transform and imposes the l1 norm on the frequency-domain data, constraining the inherent sparsity of the time series (a schematic sketch of the two terms is given after the abstract). The experimental results show that the missing value imputation performance of LBIATC is superior to current state-of-the-art imputation methods on most time series datasets.

(2) To address the need for repeated training in traditional tensor decomposition-based model compression algorithms, this dissertation proposes a fast tensor CP decomposition layer based on the tensor CP decomposition representation. Unlike traditional compression algorithms based on tensor decomposition, this method does not require pre-training an initial large network; a single low-cost training run compresses the weight matrices of fully connected layers, the convolution kernels of convolutional layers, and the vector fully connected layer of the Capsule Network. The fast CP decomposition layer updates the factors directly in CP format without performing an expensive tensor decomposition operation, which avoids multiple training sessions (an illustrative CP-format layer appears after the abstract). The experimental results show that the proposed fast CP decomposition layer compresses the model parameters by up to 60 times while keeping the accuracy loss within 2%, and on some datasets the compressed model even exceeds the accuracy of the initial network.

(3) To address the large number of tensor ranks that must be selected in traditional tensor decomposition-based model compression algorithms, this dissertation proposes a TP layer compression algorithm that does not require tensor rank selection. The method uses the tensor-matrix product and the tensor-vector product to replace the original matrix multiplication, the tensor-matrix product and the tensor outer product to replace the element-wise product, and the tensor outer product to replace the convolution operation (a simplified sketch follows the abstract). Compared with other traditional compression algorithms, the TP layer only needs to be trained once, directly on small devices with limited resources, without pre-training on large devices, and it can be combined with various convolutional neural networks and Capsule Networks as a novel layer structure. The experimental results show that, compared with tensor decomposition algorithms, the proposed TP layer compresses the model parameters by up to 40 times without fine-tuning or tensor rank selection, while keeping the accuracy loss within 3%.
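
The following Python sketch illustrates the two regularization terms described in (1). It is a minimal illustration only: the window length L, the coefficient arrays w_past and w_future, and the use of SciPy's DCT are assumptions made for the example, not the dissertation's exact formulation.

    import numpy as np
    from scipy.fft import dct

    def bidirectional_local_temporal_reg(X, w_past, w_future, L):
        """Penalize deviation of each time step from linear combinations of its
        L past and L future neighbours along the time axis (axis 0)."""
        T = X.shape[0]
        loss = 0.0
        for t in range(L, T - L):
            past = sum(w_past[k] * X[t - 1 - k] for k in range(L))      # learnable weights on the past window
            future = sum(w_future[k] * X[t + 1 + k] for k in range(L))  # learnable weights on the future window
            loss += np.sum((X[t] - past) ** 2) + np.sum((X[t] - future) ** 2)
        return loss

    def dct_sparsity_reg(X):
        """l1 norm of the DCT coefficients along the time axis, encouraging
        sparsity of the series in the frequency domain."""
        return np.abs(dct(X, axis=0, norm='ortho')).sum()

In a completion model, both terms would be added, with trade-off weights, to the low-rank reconstruction objective that fills in the missing entries.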
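
The sketch below shows, under assumptions, how a fully connected layer can be kept in CP format and trained directly, which is the core idea behind the fast CP decomposition layer in (2): the factor matrices themselves are the trainable parameters, so no decomposition of a pre-trained weight is ever performed. The two-mode tensorization and the class name CPLinear are illustrative choices, not the dissertation's exact design.

    import torch
    import torch.nn as nn

    class CPLinear(nn.Module):
        """Fully connected layer whose (out1*out2) x (in1*in2) weight is kept in
        CP format; the CP factors are trained directly."""
        def __init__(self, in1, in2, out1, out2, rank):
            super().__init__()
            self.U1 = nn.Parameter(torch.randn(out1, rank) * 0.1)  # output-mode factor 1
            self.U2 = nn.Parameter(torch.randn(out2, rank) * 0.1)  # output-mode factor 2
            self.V1 = nn.Parameter(torch.randn(in1, rank) * 0.1)   # input-mode factor 1
            self.V2 = nn.Parameter(torch.randn(in2, rank) * 0.1)   # input-mode factor 2
            self.in1, self.in2 = in1, in2

        def forward(self, x):                       # x: (batch, in1*in2)
            x = x.view(-1, self.in1, self.in2)
            # contract the input modes with the input-side factors over the shared rank index
            z = torch.einsum('bij,ir,jr->br', x, self.V1, self.V2)
            # expand along the output-side factors
            y = torch.einsum('br,or,pr->bop', z, self.U1, self.U2)
            return y.reshape(y.shape[0], -1)        # (batch, out1*out2)

For example, CPLinear(in1=28, in2=28, out1=16, out2=16, rank=8) stores (28+28+16+16)*8 = 704 parameters in place of the 784*256 = 200,704 of a dense layer.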
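
A simplified stand-in for the TP layer idea in (3) is sketched below: the dense weight of a fully connected layer is replaced by two tensor-matrix (mode) products whose shapes are fixed by the input and output mode sizes, so no rank hyperparameter has to be chosen. Reducing the layer to exactly two mode products is an assumption for this sketch; the dissertation's TP layer also covers element-wise products and convolutions via tensor outer products, which are not shown here.

    import torch
    import torch.nn as nn

    class TPLinear(nn.Module):
        """Replace y = W x by two small mode-product matrices; the only shape
        choices are the input/output mode sizes, with no tensor rank to select."""
        def __init__(self, in1, in2, out1, out2):
            super().__init__()
            self.U1 = nn.Parameter(torch.randn(out1, in1) * 0.1)  # mode-1 factor
            self.U2 = nn.Parameter(torch.randn(out2, in2) * 0.1)  # mode-2 factor
            self.in1, self.in2 = in1, in2

        def forward(self, x):                      # x: (batch, in1*in2)
            X = x.view(-1, self.in1, self.in2)
            # mode-1 and mode-2 tensor-matrix products instead of one large matmul
            Y = torch.einsum('oi,bij,pj->bop', self.U1, X, self.U2)
            return Y.reshape(Y.shape[0], -1)       # (batch, out1*out2)

With these illustrative sizes, a 784-to-256 mapping needs 16*28 + 16*28 = 896 parameters instead of 200,704, and the layer can be trained once from scratch on the target device.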
Keywords/Search Tags: Tensor Computation, Multivariate Time Series Data Completion, Truncated Canonical Polyadic Decomposition, Network Compression, Tensor Canonical Polyadic Decomposition, Tensor Rank, Tensor Product