
Research On Active Stepwise Pruning Method Of Deep Convolution Network Model

Posted on: 2020-06-23
Degree: Master
Type: Thesis
Country: China
Candidate: T T Yan
Full Text: PDF
GTID: 2428330602455784
Subject: Computer technology
Abstract/Summary:
In recent years, convolutional neural networks (CNNs) have been widely used in many fields. However, their complicated structure makes them difficult to deploy on devices such as mobile terminals. To apply deep models to mobile devices without loss of accuracy and to improve the user experience of intelligent software, deep convolutional models need to be compressed.

Several model pruning methods already exist. For example, Han et al. prune network parameters by setting corresponding thresholds, but this method requires many iterations of fine-tuning on the pruned model and therefore wastes a great deal of time. Moreover, in the later stages of pruning, the pruning granularity is difficult to control, so many parameters are pruned incorrectly; since pruned parameters cannot be restored, the prediction accuracy of the model drops. Guo et al. proposed a dynamic pruning method, but it requires many hyper-parameters to keep the pruning strategy effective.

In this paper, we propose an active stepwise pruning method (ASP) to address the problems of these existing pruning methods. In our method, a logarithmic function controls the whole pruning process: the network model is actively pruned and fine-tuned step by step, which fundamentally reduces the number of iterative fine-tuning passes. In the later stages of pruning, the growth of sparsity gradually slows, ensuring that the remaining parameters are pruned carefully and ultimately preserving the prediction accuracy of the model. Unlike other methods, which require many hyper-parameters, our method needs only three to prune the model effectively. In addition, to recover important parameters lost to erroneous pruning, we propose a model parameter repair method. In our approach, the pruning strategy and the parameter repair strategy form an alternating cycle: by continuously updating the parameters in different layers of the network model, the prediction accuracy of the pruned deep network model is improved.

Experiments show that our method can effectively prune deep network models. We conducted experiments on MobileNet, AlexNet, VGG-16, and ZFNet, achieving compression ratios of 5.6X, 19.4X, 20.0X, and 15.2X on these models, respectively. The inference speed of the networks is also greatly improved, with almost no loss in prediction accuracy. Compared with existing network model compression methods, the compression ratios on these models are improved by about 0.7%, 3.2%, 2.5%, and 4.7%, respectively.
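The logarithm-controlled pruning schedule described above can be sketched as follows. This is a minimal illustration, not the thesis's actual formula: the function shape, the steepness constant `k`, and all names are assumptions. The key property it demonstrates is that sparsity rises quickly in early steps and slows in later steps, so the remaining parameters are pruned carefully.

```python
import math

def log_sparsity_schedule(step, total_steps, target_sparsity):
    """Hypothetical log-based schedule for active stepwise pruning.

    Returns the fraction of weights to mask at `step`. The logarithm
    grows fast at first and flattens later, so early steps prune
    aggressively while late steps change sparsity only slightly.
    """
    k = 9.0  # assumed steepness hyper-parameter
    frac = math.log(1 + k * step / total_steps) / math.log(1 + k)
    return target_sparsity * frac
```

For example, with a 90% sparsity target over 10 steps, the first step already removes a large share of weights, while the last step removes only a small remainder.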
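One alternating prune/repair cycle could look like the sketch below. Since the abstract does not give the exact criteria, the magnitude-based rule, the `repair_margin` factor, and all names here are assumptions for illustration only: pruning masks small active weights, and repair restores previously pruned weights whose magnitude has grown back above a threshold after fine-tuning, undoing erroneous pruning.

```python
def prune_and_repair(weights, mask, threshold, repair_margin=1.5):
    """One hypothetical prune/repair cycle on a flat weight list.

    mask[i] is True when weight i is active. Pruning disables active
    weights below `threshold`; repair re-enables pruned weights whose
    magnitude (after fine-tuning) exceeds repair_margin * threshold.
    """
    new_mask = list(mask)
    for i, w in enumerate(weights):
        if mask[i] and abs(w) < threshold:
            new_mask[i] = False   # prune: weight became unimportant
        elif not mask[i] and abs(w) > repair_margin * threshold:
            new_mask[i] = True    # repair: weight became important again
    return new_mask
```

Alternating this cycle with fine-tuning lets the mask evolve in both directions, which is the property that distinguishes this family of methods from one-shot threshold pruning.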
Keywords/Search Tags: Deep convolutional model, Model compression, Active stepwise pruning, Parameter repairing, Pruning intensity, Logarithmic function