
The Research And Implementation Of SVD-Based Pruning For Deep Neural Network

Posted on: 2020-03-09    Degree: Master    Type: Thesis
Country: China    Candidate: J Wang    Full Text: PDF
GTID: 2428330572972164    Subject: Electronic and communication engineering
Abstract/Summary:
In recent years, deep learning algorithms have demonstrated powerful modeling capabilities on abstract cognition problems and have significantly improved performance in Audio Event Detection (AED) and Acoustic Scene Classification (ASC) tasks; as a result, they have been widely favored by the academic community. The modeling power of deep learning comes from deeper network structures with more neurons and more layers. Consequently, the number of parameters in a neural network reaches millions or even billions, so neural networks consume large amounts of computing resources and place strict requirements on processor computing power. To reduce the number of parameters in deep neural networks, this thesis proposes network compression methods based on Singular Value Decomposition (SVD).

First, an SVD-based method is proposed to compress fully connected neural networks (FNNs). In an FNN, matrix multiplication dominates the computation, and the parameters are concentrated in the weight matrices. For FNN pruning, this thesis uses SVD to decompose a large weight matrix into the product of two small matrices, which are then used to reconstruct the original network structure. After reconstruction, the number of parameters is reduced and the FNN is simplified. Applied to the FNN-based DCASE2016 rare audio event detection task, SVD pruning preserves only 4.35% of the parameters with a 3% accuracy loss.

For convolutional neural networks, this thesis proposes an SVD-based pruning method for convolutional-layer channels. Matrix decomposition reduces the number of feature maps in a convolutional layer and thereby shrinks the network: it compresses the input and output channels of the original convolutional layer and reconstructs a new convolutional layer, decomposing the original convolutional layer into
three smaller convolutional layers whose combination has fewer parameters than the original layer. Applied to the GCRNN-based DCASE2018 acoustic scene classification task, SVD pruning preserves 10.67% of the original convolutional layer's parameters with only a 0.34% accuracy loss.

For recurrent neural networks, this thesis mainly studies an SVD pruning method for the GRU (Gated Recurrent Unit). The main idea is to decompose the weight matrices of the update gate and the reset gate in the GRU, reducing the overall parameter count of the network. After matrix decomposition, weight sharing is used to reduce the number of weight matrices in the pruned update and reset gates, further reducing the number of parameters in the GRU. After compression, a new GRU network is reconstructed from the pruned weight matrices and the shared matrix. Applied to the GCRNN-based acoustic scene classification task, SVD pruning preserves 23.00% of the GRU layer's parameters with only a 0.55% accuracy loss.

To analyze why SVD can effectively compress a neural network, this thesis defines the weight activity: the proportion of weight parameters whose absolute value is greater than a set threshold. By analyzing SVD compression of fully connected, convolutional, and recurrent neural networks, this thesis finds that SVD compression maintains model performance by increasing the weight activity of the network.
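The low-rank factorization idea that runs through all three network types can be sketched in NumPy: a dense weight matrix W is factored by truncated SVD into two smaller matrices whose product approximates W, so the layer y = Wx becomes y = A(Bx) with far fewer parameters. This is a minimal illustration of the technique, not the thesis's implementation; the layer sizes and retained rank below are arbitrary examples.

```python
import numpy as np

# Hypothetical layer dimensions and retained rank (not taken from the thesis).
m, n, k = 512, 1024, 64          # original weight is m x n; k singular values kept

rng = np.random.default_rng(0)
W = rng.standard_normal((m, n))  # stand-in for a trained dense weight matrix

# Truncated SVD: W ~= U[:, :k] @ diag(s[:k]) @ Vt[:k, :]
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]             # first small matrix, m x k (absorbs singular values)
B = Vt[:k, :]                    # second small matrix, k x n

# The original layer y = W @ x is replaced by two thin layers y = A @ (B @ x).
x = rng.standard_normal(n)
y_low = A @ (B @ x)

orig_params = m * n              # 524288
pruned_params = m * k + k * n    # 98304
print(f"kept {pruned_params / orig_params:.2%} of the parameters")
```

The compression ratio depends only on the retained rank k: the factorized layer stores k(m + n) values instead of mn, so small ranks give large savings at the cost of approximation error in W.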
Keywords/Search Tags:Deep Neural Network, Fully-connected Neural Network, Convolutional Neural Network, Recurrent Neural Network, Singular Value Decomposition