
Research On Tensor Decomposition Of Deep Convolutional Networks

Posted on: 2022-07-31
Degree: Master
Type: Thesis
Country: China
Candidate: F Yang
Full Text: PDF
GTID: 2518306515464044
Subject: Control theory and control engineering
Abstract/Summary:
With the dual support of high-performance hardware and big data, deep convolutional networks have made significant breakthroughs in autonomous driving, intelligent manufacturing, and medical imaging. However, as performance improves, the architectural complexity and parameter counts of deep convolutional networks grow, which greatly limits their deployment on resource-constrained edge devices. Deep convolutional network compression can effectively reduce redundant parameters and computation, and has become a research hotspot in the field of deep learning. As one of the important compression methods, tensor decomposition eliminates redundant parameters in the network. In order, the tensor decomposition of a deep convolutional network consists of four key steps: tensor representation of the parameters, estimation of the tensor rank, tensor decomposition, and performance recovery of the compressed model. Within this process, the deficiencies of the third step (tensor decomposition) and the fourth step (performance recovery of the compressed model) are the focus of this thesis. In addition, given the narrow application scope of current deep convolutional network compression methods, this research also broadens their field of application. The main research contents and contributions are as follows:

1. The basic theories of the tensor decomposition process for deep convolutional networks, of deep convolutional networks themselves, and of tensor decomposition are systematically surveyed. Analysis of each step in the process shows that, in the fourth step (performance recovery of the compressed model), the performance of the compressed model suffers because the different redundancy characteristics of each network layer are ignored. Meanwhile, in the third step (tensor decomposition), the generalization performance of Tucker decomposition and CP decomposition is limited, and advanced tensor decomposition methods have not been fully exploited in the field of deep convolutional network compression. This analysis provides a clear direction for the subsequent research in this thesis.

2. An iterative tensor decomposition method based on network-layer redundancy (ILRL) is proposed to address the performance loss caused by ignoring the differing redundancy of network layers. The method replaces the original compression mechanism, which depends only on the structural characteristics of each network layer, with a new mechanism based on each layer's redundancy characteristics. First, the tensor decomposition rank is estimated in advance so that the network layers can be sorted by the redundancy of their parameters. Then, the layers with more redundant parameters are compressed by an iterative strategy that alternates compression and fine-tuning. Experimental results show that the proposed ILRL eliminates redundant parameters more accurately and achieves a better balance between performance and compression rate on image super-resolution tasks.

3. To solve the difficulty of rank allocation in Block Term Decomposition (BTD), a block term decomposition of deep convolutional networks by variational Bayesian inference (BTD-VB) is proposed. The BTD-VB algorithm automatically estimates the rank of the network parameters along the input- and output-channel dimensions through the global analytic solution of empirical variational Bayesian matrix factorization. Experimental results show that, on image super-resolution and image classification tasks, the proposed method eliminates redundant parameters of deep convolutional networks to a greater extent and shortens the parameter-tuning period of rank allocation in BTD.
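As a concrete illustration of the third step, the following minimal NumPy sketch (a hypothetical example for this abstract, not the thesis's actual implementation) applies a Tucker-2 decomposition, computed via truncated HOSVD, to a convolutional kernel. Only the input- and output-channel modes are compressed, which is why rank allocation along those two dimensions matters; the `tucker2` and `unfold` helpers and the example ranks are assumptions for illustration:

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def tucker2(W, r_out, r_in):
    """Tucker-2 decomposition (truncated HOSVD) of a conv kernel W with
    shape (out_c, in_c, kh, kw); only the two channel modes are
    compressed, as is common for convolutional layers."""
    # Factor matrices: leading left singular vectors of each channel unfolding.
    U_out = np.linalg.svd(unfold(W, 0), full_matrices=False)[0][:, :r_out]
    U_in = np.linalg.svd(unfold(W, 1), full_matrices=False)[0][:, :r_in]
    # Core tensor: project W onto the two factor subspaces.
    core = np.einsum('oihw,oa,ib->abhw', W, U_out, U_in)
    return core, U_out, U_in

def reconstruct(core, U_out, U_in):
    """Contract the core with the channel factors to rebuild the kernel."""
    return np.einsum('abhw,oa,ib->oihw', core, U_out, U_in)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32, 3, 3))          # out_c=64, in_c=32, 3x3
core, U_out, U_in = tucker2(W, r_out=16, r_in=8)

original = W.size
compressed = core.size + U_out.size + U_in.size
print(f"params: {original} -> {compressed}")     # params: 18432 -> 2432
```

The compressed parameter count is r_out*r_in*kh*kw + out_c*r_out + in_c*r_in, so the savings grow with the channel counts; choosing r_out and r_in per layer is exactly the rank-allocation problem that BTD-VB automates.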
Keywords/Search Tags: Deep Convolution Network, Tensor Decomposition, Network Compression, Variational Bayesian, Image Super-resolution