
Compression And Acceleration Of Vision Algorithm Model Based On Convolutional Neural Networks

Posted on: 2022-03-26
Degree: Master
Type: Thesis
Country: China
Candidate: T X Sun
Full Text: PDF
GTID: 2518306536987619
Subject: Master of Engineering
Abstract/Summary:
In recent years, convolutional neural network (CNN) technology has developed rapidly, and its excellent performance has been widely studied and verified across many fields of computer vision. However, CNNs also suffer from shortcomings such as large numbers of parameters, heavy computation, and high energy consumption, which limit their popularization and application in real industrial settings. Model compression and acceleration have therefore become an important research direction. This thesis addresses both a high-level vision task and a low-level vision task in computer vision. For the classification problem, we propose two structured pruning methods (ACO and ALI) to compress the model. For the super-resolution problem, we propose an incremental linear network quantization method to accelerate the model. We conduct experimental comparisons on multiple datasets and network structures to demonstrate the effectiveness of our methods. Specifically, the innovations and contributions of the three proposed methods are as follows:

1. When evaluating the importance of convolution kernels, current mainstream pruning methods usually rely on a single feature of the kernel, such as its L1-norm, Euclidean distance, or Taylor-expansion terms. These single-dimensional criteria tend to give an incomplete evaluation and cause large pruning error. To address these problems, we propose a pruning method based on the ant colony optimization (ACO) algorithm, which simultaneously considers the correlation and similarity between different nodes and the relative magnitude of the kernels' absolute values, so that each convolution kernel is evaluated comprehensively. We also improve the state-transition rule of the ACO algorithm so that, during the search, it exploits the best candidate with a certain probability and randomly explores other possible solutions with the remaining probability, preventing the final result from falling into a local optimum. Experiments confirm the effectiveness of our method.

2. We systematically analyze how information is transferred between adjacent layers of a convolutional neural network during the convolution operation. Building on the single-layer kernel importance scoring method proposed in Chapter 2, combined with this analysis, we propose a scoring method based on adjacent layer information (ALI). The method uses the adjacent layer's score to correct the single-layer scoring result and obtain the final kernel importance score, further reducing the pruning loss. In the end, our method achieves better results than most existing published methods on multiple classification datasets and models.

3. Compression and acceleration for the super-resolution task is still a relatively small field of research. Taking the classic SRResNet model as an example, with Set5 and Set14 as experimental datasets, we quantize the model using the most common linear quantization scheme. To reduce the performance loss caused by quantization, we introduce an incremental network quantization method on top of direct quantization: the weights are grouped, then quantized and fixed group by group, with retraining in between, so that the quantized model achieves performance similar to the original model. We also explore the influence of the grouping strategy and of quantizing the activation values of the feature maps. Our work provides experience and guidance for the compression and acceleration of super-resolution networks.
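To make the pruning setting concrete, the single-dimensional L1-norm criterion that the thesis cites as the mainstream baseline can be sketched as follows. This is a minimal illustration in NumPy, not the thesis's ACO method; the function names, the toy layer shape, and the pruning ratio are our own assumptions.

```python
import numpy as np

def l1_filter_scores(conv_weight):
    """Score each output filter of a conv layer by its L1-norm.

    conv_weight: array of shape (out_channels, in_channels, kH, kW).
    Low-scoring filters are candidates for structured pruning.
    """
    return np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)

def prune_mask(scores, prune_ratio):
    """Boolean mask keeping the top (1 - prune_ratio) filters by score."""
    n_prune = int(len(scores) * prune_ratio)
    order = np.argsort(scores)            # ascending: weakest filters first
    mask = np.ones(len(scores), dtype=bool)
    mask[order[:n_prune]] = False
    return mask

# Toy layer: 4 filters, 3 input channels, 3x3 kernels.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))
scores = l1_filter_scores(w)
mask = prune_mask(scores, prune_ratio=0.5)   # drop the 2 weakest filters
```

Because this criterion looks at each filter in isolation, it exhibits exactly the "incomplete consideration" the thesis criticizes: two nearly identical filters both receive high scores even though one of them is redundant, which is what motivates the multi-criterion ACO search.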
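The ALI idea of correcting a filter's single-layer score with information from the following layer could be sketched as below. The blending rule, the `alpha` parameter, and the normalization are our illustrative assumptions, not the thesis's exact formula; only the general principle (a filter matters less if the next layer barely reads its output channel) is taken from the text.

```python
import numpy as np

def ali_scores(w_cur, w_next, alpha=0.5):
    """Hypothetical adjacent-layer-informed importance score (sketch).

    w_cur:  (C_out, C_in, kH, kW) weights of the layer being pruned.
    w_next: (C_next, C_out, kH, kW) weights of the following conv layer.
    A filter's own L1-norm is corrected by how strongly the next layer
    uses its output channel; alpha blends the two normalized scores.
    """
    own = np.abs(w_cur).reshape(w_cur.shape[0], -1).sum(axis=1)
    # For filter i, sum |w_next[:, i, :, :]|: how much layer n+1 reads channel i.
    downstream = np.abs(w_next).sum(axis=(0, 2, 3))
    own = own / own.max()
    downstream = downstream / downstream.max()
    return alpha * own + (1 - alpha) * downstream

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 3, 3, 3))   # layer n: 4 output filters
w2 = rng.normal(size=(8, 4, 3, 3))   # layer n+1 reads those 4 channels
scores = ali_scores(w1, w2)
```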
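The incremental quantization procedure described for the super-resolution model (group the weights, quantize and fix one group, retrain the rest, repeat) can be sketched as follows. This is a simplified illustration combining linear quantization with magnitude-based grouping; the group fractions, bit width, and omission of the retraining step are our assumptions, not the thesis's exact configuration.

```python
import numpy as np

def incremental_quantize(w, steps=(0.5, 0.75, 1.0), n_bits=8):
    """Sketch of incremental linear quantization.

    At each step, the largest-magnitude weights up to the given fraction
    are linearly quantized and frozen; in a real pipeline the still-float
    remainder would be retrained between steps to recover accuracy.
    """
    w = w.copy()
    flat = w.ravel()
    # One global linear (uniform) scale for n_bits signed levels.
    scale = np.abs(flat).max() / (2 ** (n_bits - 1) - 1)
    order = np.argsort(-np.abs(flat))          # largest magnitude first
    frozen = np.zeros(flat.size, dtype=bool)
    for p in steps:
        k = int(flat.size * p)
        sel = np.zeros(flat.size, dtype=bool)
        sel[order[:k]] = True
        sel &= ~frozen                          # only newly selected weights
        flat[sel] = np.round(flat[sel] / scale) * scale
        frozen |= sel
        # (retraining of the still-unfrozen weights would happen here)
    return flat.reshape(w.shape)

rng = np.random.default_rng(1)
w = rng.normal(size=(64,))
wq = incremental_quantize(w)
```

Quantizing the large-magnitude weights first and retraining the rest is what lets the quantized model approach the original model's performance, since the remaining free weights can compensate for the rounding error introduced at each step.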
Keywords/Search Tags:Structured Pruning, Ant Colony Optimization, Adjacent Layer Information, Super Resolution, Model Quantization