
Study On Acceleration Of Deep Convolutional Neural Network With Pruning

Posted on: 2021-11-02    Degree: Master    Type: Thesis
Country: China    Candidate: Y F Zhou    Full Text: PDF
GTID: 2518306503480394    Subject: Electronics and Communications Engineering
Abstract/Summary:
Channel pruning, widely used for accelerating convolutional neural networks, has encountered a bottleneck because of two challenges: 1) an accurate and intuitive measurement of redundancy; 2) modeling the inter-layer dependency that makes redundancy dynamic. Given that, we first introduce a dropout technique whose dropout rate is interpreted as the probability of dropping a channel. Considering the difficulty of optimization, we derive a Gaussian dropout so that the dropout rates can be updated as parameters under a Bayesian framework and thus, as a measurement of redundancy, can be learned by the model itself. Second, we model the dropout noise across layers as a Markov chain and target its posterior to reflect the inter-layer dependency. Since a closed-form solution for the posterior is intractable, we derive a sparsity-inducing Dirac-like prior that regularizes the distribution of the designed noise to automatically approximate the posterior. Compared with existing methods, no additional overhead is required when the inter-layer dependency is assumed. The redundant channels can simply be identified by their tiny dropout noise and directly pruned layer by layer. Experiments on popular CNN architectures have shown that the proposed method, Recursive Bayesian Pruning (RBP), outperforms several state-of-the-art methods.

Despite the effectiveness of RBP, we find that for ResNets pruning can only accelerate the convolutional layers inside the side branches, due to the requirement that the output channel numbers of the main and side branches correspond. Fortunately, low-rank approximation is not affected by this constraint. We are therefore inspired to apply channel pruning within a low-rank approximation scheme, and introduce the Rank Pruning Framework (RPF). In detail, each convolutional layer is decomposed into two factors under a full-rank hypothesis, which makes the rank equal to the number of channels shared between the two factors. Pruning the input channels of the latter factor then automatically lowers the rank and thus achieves acceleration. Experiments on ResNets show that RPF combines the advantages of channel pruning and low-rank approximation, reaching a better balance between speed and performance.
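To make the learned redundancy measurement concrete, below is a minimal sketch, assuming a PyTorch-style implementation; the module name, noise parameterization, and threshold are illustrative assumptions rather than the thesis code. Each channel is scaled by multiplicative Gaussian noise whose parameters are learned alongside the network, and channels whose learned noise stays tiny are read off as redundant.

```python
import torch
import torch.nn as nn

class ChannelGaussianDropout(nn.Module):
    """Scale each channel by multiplicative noise ~ N(mu_c, sigma_c^2),
    with mu_c and sigma_c learned per channel alongside the network."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.mu = nn.Parameter(torch.ones(num_channels))                   # noise mean
        self.log_sigma = nn.Parameter(torch.full((num_channels,), -3.0))   # log of noise std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W); the reparameterization trick keeps sampling differentiable
        if self.training:
            eps = torch.randn_like(self.mu)
            noise = self.mu + torch.exp(self.log_sigma) * eps
        else:
            noise = self.mu
        return x * noise.view(1, -1, 1, 1)

    def redundant_channels(self, threshold: float = 1e-2) -> torch.Tensor:
        # Channels whose learned noise stays near zero are pruning candidates.
        return (self.mu.abs() < threshold).nonzero(as_tuple=False).flatten()

# Hypothetical usage: place the gate after a convolution, train with a
# sparsity-inducing regularizer on (mu, sigma), then prune flagged channels.
gate = ChannelGaussianDropout(num_channels=64)
features = torch.randn(8, 64, 32, 32)
scaled = gate(features)
print(gate.redundant_channels())
```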
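For the rank pruning idea, the following is a similarly hedged sketch (class and method names are my own, assuming the full-rank factorization described above): a k×k convolution is written as a k×k factor followed by a 1×1 factor, the shared intermediate channel count plays the role of the rank, and pruning those intermediate channels lowers the rank of the approximation.

```python
import torch
import torch.nn as nn

class FactorizedConv(nn.Module):
    """Conv2d(c_in -> c_out, k x k) written as Conv2d(c_in -> rank, k x k)
    followed by Conv2d(rank -> c_out, 1 x 1); the intermediate channel
    count is the rank of the approximation."""
    def __init__(self, c_in: int, c_out: int, kernel_size: int, rank: int):
        super().__init__()
        self.first = nn.Conv2d(c_in, rank, kernel_size,
                               padding=kernel_size // 2, bias=False)
        self.second = nn.Conv2d(rank, c_out, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.second(self.first(x))

    @torch.no_grad()
    def prune_rank(self, keep: torch.Tensor) -> None:
        # Keeping only the intermediate channels in `keep` shrinks the output
        # channels of `first` and the input channels of `second` together,
        # i.e. it lowers the rank and accelerates both factors.
        w1 = self.first.weight[keep].clone()      # (rank', c_in, k, k)
        w2 = self.second.weight[:, keep].clone()  # (c_out, rank', 1, 1)
        k = w1.shape[2]
        self.first = nn.Conv2d(w1.shape[1], w1.shape[0], k, padding=k // 2, bias=False)
        self.second = nn.Conv2d(w2.shape[1], w2.shape[0], kernel_size=1, bias=False)
        self.first.weight.copy_(w1)
        self.second.weight.copy_(w2)

# Hypothetical usage: start at full rank (rank == c_out), then drop half of
# the intermediate channels, e.g. those flagged by a dropout-style gate.
layer = FactorizedConv(c_in=64, c_out=128, kernel_size=3, rank=128)
x = torch.randn(2, 64, 16, 16)
layer.prune_rank(keep=torch.arange(64))
print(layer(x).shape)  # torch.Size([2, 128, 16, 16])
```

Because the two factors are ordinary convolutions, no special inference-time support is needed: after pruning, the layer is simply a narrower pair of convolutions.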
Keywords/Search Tags: deep convolutional neural networks, model acceleration, pruning