With the development of deep learning, deep convolutional neural networks have been widely used in many computer vision tasks, such as image classification, object detection, object tracking, and image super-resolution. However, as the depth of a convolutional neural network is increased to improve its feature representation ability, the model gradually becomes over-parameterized, making it difficult to deploy on embedded devices with limited computing power and memory. It is therefore of great significance to study how to effectively reduce the parameters and floating-point operations of a network model without significantly degrading its performance. As one of the common model compression techniques, channel pruning offers strong ease of use and a high compression rate. This paper conducts in-depth research on channel pruning algorithms. The main research contents and contributions are as follows:

(1) This paper proposes a channel importance measurement method based on the correlation independence of feature maps. Correlation independence can be intuitively understood as "substitutability": if the feature map output by a channel can be replaced by the feature maps output by other channels in the same layer, its correlation independence is low, and channels whose output feature maps have low correlation independence can be considered redundant. By introducing a two-dimensional auxiliary matrix, the method uses information entropy to quantify the correlation independence of the feature map output by each channel. This metric is then embedded in a global pruning method: the quantified correlation independence is treated as the local importance of the corresponding channel, the local importance is converted into global importance through a genetic evolutionary algorithm, and the model is pruned in a structured manner from a global perspective. Experiments show that this method can greatly reduce the
number of parameters and floating-point operations of a convolutional neural network model without affecting its accuracy.

(2) This paper proposes an adaptive global pruning algorithm based on double DDPG. The mutation process in a genetic evolutionary algorithm is inherently random, and the result of one mutation provides no guidance for the next, which ultimately leads to a long and unstable evolutionary process. Drawing on reinforcement learning, this paper uses two DDPG agents to learn, in a continuously changing space, the global scale coefficient and the bias coefficient of each layer, respectively, and designs an LSTM-based simulation space to simulate the next state. Experiments show that, compared with current mainstream channel pruning algorithms, this algorithm removes redundant channels from the network model more accurately, and the pruned model retains a strong ability to extract feature information.

(3) This paper proposes a model performance recovery strategy based on multi-network coordinated training. Inspired by mutual learning and knowledge distillation, and building on the concept of grafting, this paper proposes a multi-network joint parallel training strategy based on adaptive weighting to improve the feature extraction capability of channels in convolutional layers. The strategy trains multiple identical networks with different training parameters and adaptively weights the parameters across the networks during training. To address the slow convergence of multi-network training, this paper also adopts a stochastic gradient descent optimization algorithm based on gradient sets. Compared with simple fine-tuning, retraining, and knowledge distillation, the network model obtained by this training strategy is more accurate.
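As a concrete illustration of contribution (1), the entropy-based quantification of correlation independence can be sketched as follows. This is a minimal sketch of one plausible interpretation: the function name, the use of absolute cosine similarity as the two-dimensional auxiliary matrix, and the row-wise entropy reading are all assumptions, not the thesis's exact construction.

```python
import numpy as np

def correlation_independence(feature_maps, eps=1e-8):
    """Score each channel's 'substitutability' with an entropy-style measure.

    feature_maps: array of shape (C, H, W), one layer's output for a sample.
    Hypothetical reading: a channel whose similarity mass concentrates on a
    few peer channels is easy to substitute, so its entropy score is low.
    """
    C = feature_maps.shape[0]
    flat = feature_maps.reshape(C, -1)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + eps)
    # Auxiliary matrix of pairwise |cosine| similarities between channels.
    sim = np.abs(flat @ flat.T)
    np.fill_diagonal(sim, 0.0)
    # Treat each row as a distribution over peer channels and take its entropy.
    p = sim / (sim.sum(axis=1, keepdims=True) + eps)
    return -np.sum(p * np.log(p + eps), axis=1)
```

Under this reading, a channel that duplicates another gets a sharply peaked similarity row and hence a lower entropy score, marking it as a redundancy candidate.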
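The local-to-global importance conversion underlying contributions (1) and (2) can be sketched in a few lines. Here `scales` and `biases` stand in for the per-layer coefficients that, in contribution (2), the two DDPG agents would learn; the reinforcement-learning loop itself is omitted, and the affine mapping plus global thresholding rule are assumptions for illustration.

```python
import numpy as np

def global_prune_mask(local_scores, scales, biases, prune_ratio):
    """Map per-layer local importance to a global ranking and prune.

    local_scores: list of 1-D arrays, one per conv layer.
    scales, biases: per-layer scale and bias coefficients (hypothetical
    stand-ins for the values a DDPG agent pair would output).
    prune_ratio: fraction of all channels to remove globally.
    Returns a boolean keep-mask per layer.
    """
    global_scores = [s * local + b
                     for local, s, b in zip(local_scores, scales, biases)]
    all_scores = np.concatenate(global_scores)
    k = int(len(all_scores) * prune_ratio)
    threshold = np.partition(all_scores, k)[k]  # k-th smallest global score
    # Keep channels whose global importance reaches the threshold.
    return [g >= threshold for g in global_scores]
```

The point of the per-layer coefficients is that raw local scores from different layers are not directly comparable; the affine map rescales them onto one global axis before a single threshold is applied.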