
Research on the Lightweighting of Deep Networks Based on Channel Pruning

Posted on: 2021-04-07    Degree: Master    Type: Thesis
Country: China    Candidate: G Y Liu    Full Text: PDF
GTID: 2428330614453848    Subject: Control engineering
Abstract/Summary:
Deep convolutional neural networks have achieved remarkable results across many fields of machine vision; in tasks such as image recognition, semantic segmentation, behavior detection, and image tracking, they have reached performance beyond human capability. However, high-performance network models place heavy demands on computing power, memory, and energy consumption, so deploying such resource-hungry models on resource-constrained mobile and embedded devices remains an open engineering challenge. Model lightweighting addresses this problem: it replaces a deep network with a slimmed-down network at the cost of only a small additional loss in accuracy. Such methods usually follow specific rules to sparsify the network and then derive a compact model that still completes the given machine learning task.

This thesis applies two channel pruning strategies to the lightweight design of deep neural networks: a selection criterion based on mixed statistical features and an allocation criterion based on reinforcement learning. For a pre-trained network model, the mixed-statistical-feature criterion replaces fixed-proportion pruning with a threshold used as a sensitivity indicator. The statistical features of each feature layer serve as the observation objects, an evaluation function scores the feature maps of every layer, channels with negative scores are filtered out, and masks block the updates of the associated weights.

The reinforcement-learning-based allocation criterion is built around the channel pruning task: it redefines the action and state spaces, uses an improved Q-learning strategy to decide the pruning actions, and relies on an accuracy-guaranteed reward function to help the environment select a sparse network model with a better trade-off between resource occupation and performance. The reward signal adjusts the allocation of pruning quotas across layers, identifies the channels that no longer need to be updated, and masks the updates of the related weights. To preserve generalization performance, the sparse network model is retrained with appropriate fine-tuning so that its accuracy remains essentially lossless. Once the pruning schedule is finished, the surviving sparse connections of the original model are transferred into a newly created compact model.

Experiments are carried out on the CIFAR-10 and ILSVRC-12 datasets, with the PyTorch framework used to train the models and implement the compression strategies. To verify the effectiveness and generality of the proposed methods, mainstream deep architectures, VGG and ResNet, are pruned for lightweighting. The results confirm the effectiveness of the two proposed methods: they greatly improve the inference efficiency of the network model and reduce its memory footprint, while the recognition ability of the network remains essentially unchanged.
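As an illustration of the first strategy, the sketch below shows one plausible way to score channels with simple mixed statistics, threshold the scores, and mask the weight updates of the filtered channels in PyTorch. It is a minimal sketch, not the thesis implementation: the particular statistics, the evaluation function, the threshold value, and the function names are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

def channel_scores(feature_map: torch.Tensor) -> torch.Tensor:
    # feature_map: (N, C, H, W) activations of one convolutional layer.
    # Mix two simple per-channel statistics: mean absolute response and variance.
    mean_abs = feature_map.abs().mean(dim=(0, 2, 3))
    var = feature_map.var(dim=(0, 2, 3))
    return mean_abs + var  # illustrative "mixed statistical feature" score

def build_channel_mask(feature_map: torch.Tensor, threshold: float) -> torch.Tensor:
    # Keep a channel only if its score reaches the sensitivity threshold.
    scores = channel_scores(feature_map)
    return (scores >= threshold).float()  # 1 = keep channel, 0 = prune channel

def apply_channel_mask(conv: nn.Conv2d, mask: torch.Tensor) -> None:
    # Zero the pruned output channels and block their future weight updates
    # by zeroing the corresponding gradients after every backward pass.
    with torch.no_grad():
        conv.weight.mul_(mask.view(-1, 1, 1, 1))
        if conv.bias is not None:
            conv.bias.mul_(mask)
    conv.weight.register_hook(lambda grad: grad * mask.view(-1, 1, 1, 1))

# Example usage on dummy data:
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
features = conv(torch.randn(8, 16, 28, 28))
mask = build_channel_mask(features, threshold=0.05)
apply_channel_mask(conv, mask)
```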
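As an illustration of the second strategy, the following tabular Q-learning sketch allocates a pruning ratio to each layer under an accuracy-aware reward. The state and action definitions, the reward shaping, and the helper hooks prune_layer and evaluate_accuracy are hypothetical placeholders for the corresponding components of a real pruning pipeline, not the improved Q-learning scheme described in the thesis.

```python
import random

ACTIONS = [0.0, 0.1, 0.3, 0.5]  # candidate per-layer pruning ratios (illustrative)

def q_learning_allocation(num_layers, prune_layer, evaluate_accuracy,
                          episodes=50, alpha=0.1, gamma=0.9, eps=0.2):
    # Q[layer][action]: expected return of pruning `layer` with a given ratio.
    Q = [[0.0] * len(ACTIONS) for _ in range(num_layers)]
    for _ in range(episodes):
        for layer in range(num_layers):
            # Epsilon-greedy choice of a pruning ratio for the current layer.
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[layer][i])
            prune_layer(layer, ACTIONS[a])   # mask channels in that layer
            acc = evaluate_accuracy()        # accuracy on a validation split
            # Accuracy-first reward with a small bonus for saving resources.
            reward = acc + 0.05 * ACTIONS[a]
            next_best = max(Q[layer + 1]) if layer + 1 < num_layers else 0.0
            Q[layer][a] += alpha * (reward + gamma * next_best - Q[layer][a])
    # Greedy policy: the learned per-layer allocation of pruning ratios.
    return [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])]
            for s in range(num_layers)]
```

In a real pipeline each episode would start from a fresh copy of the pre-trained model before the layers are pruned again, and the resulting sparse model would then be fine-tuned as the abstract describes.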
Keywords/Search Tags:convolutional neural network, lightweighting, channel pruning, mixed features, reinforcement learning