
Research On Deep Neural Network Model Compression And Its Application

Posted on: 2021-02-08    Degree: Master    Type: Thesis
Country: China    Candidate: Y Wang    Full Text: PDF
GTID: 2428330614963817    Subject: Electronic and communication engineering
Abstract/Summary:
Research in image recognition has shown that the large number of parameters and the heavy computation of deep neural networks make them difficult to deploy on hardware and in practical applications. It is therefore important to find reasonable model compression methods that turn a complex neural network into a lightweight one. The main research work and results are as follows:

(1) Model compression based on depthwise separable convolution. The classification accuracy and convolutional parameter counts of a standard-convolution VGG-16 and a depthwise-separable-convolution VGG-16 are compared and analyzed on the CIFAR-10 image dataset. This convolution scheme effectively reduces the convolutional parameters of VGG-16 from 90 M to 12 M and the model size from 552 MB to 240 MB.

(2) Model compression based on channel pruning. The importance of each convolutional layer and convolution kernel of the network is evaluated, and pruning factors are set for layers and kernels. According to these pruning factors, the convolutional layers and kernels of the model are trimmed, reducing the VGG-16 model size from 552 MB to 260 MB.

(3) Model compression based on weight-sharing quantization. A clustering algorithm groups the convolution-kernel weights according to their distribution. Each group is assigned a category index, and every weight in a group shares the group's mean value, which reduces redundant computation to a certain extent and effectively lowers memory use, cutting the convolutional parameters of VGG-16 from 90 M to 75 M.

(4) Application of the deep neural network model compression algorithms. Building on the results of (1), (2), and (3), the three compression methods are combined, and each combination is compared and analyzed on the VGG-16 network with the CIFAR-10 dataset in terms of classification accuracy and convolutional parameter count. Experiments show that, while preserving classification performance as much as possible, combining all three methods maximizes the compression of the network model: the image classification recognition rate drops from 92.3% to 91.9%, while the convolutional parameters shrink from 90 M to about 8 M. Applying the combined methods to YOLO-V3 newborn face detection and ResNet-34 newborn facial expression classification, experiments show that the YOLO-V3 model can be compressed from 249 MB to 45 MB and the ResNet-34 model from 274 MB to 50 MB; the detection rate of the compressed YOLO-V3 drops from 93.9% to 93.3%, and the average expression recognition rate of the compressed ResNet-34 drops from 86.2% to 85.7%. A good trade-off is thus achieved among face detection rate, expression recognition rate, and model size.
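The parameter saving behind (1) follows directly from the filter shapes: a standard convolution needs a k×k×C_in filter for every output channel, while a depthwise separable convolution needs one k×k filter per input channel plus a 1×1 pointwise convolution. A minimal counting sketch (hypothetical helper names; bias terms omitted):

```python
def standard_conv_params(c_in, c_out, k):
    # One k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    # Depthwise: one k x k filter per input channel.
    # Pointwise: a 1 x 1 x c_in filter per output channel.
    return k * k * c_in + c_in * c_out

# For a typical VGG-16 layer (256 -> 256 channels, 3x3 kernels),
# the separable form uses roughly 1/c_out + 1/k^2 of the parameters.
std = standard_conv_params(256, 256, 3)   # 589,824
sep = separable_conv_params(256, 256, 3)  # 67,840
print(std, sep, std / sep)
```

The ~8.7x per-layer reduction is of the same order as the 90 M → 12 M figure reported for the full network.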
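The channel pruning in (2) can be sketched as follows. The thesis does not spell out its exact importance measure or clipping factors, so the L1-norm ranking and the `keep_ratio` parameter below are illustrative assumptions:

```python
import numpy as np

def prune_channels(weights, keep_ratio):
    """Keep the most important output channels of one conv layer.

    weights: array of shape (c_out, c_in, k, k).
    keep_ratio: fraction of output channels to retain (the pruning factor).
    Importance here is the L1 norm of each filter (an assumption).
    """
    importance = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Indices of the n_keep most important filters, in original order.
    kept = np.sort(np.argsort(importance)[::-1][:n_keep])
    return weights[kept], kept

# Toy layer: 4 output channels with distinct filter magnitudes.
w = np.zeros((4, 2, 3, 3))
w[0] += 1.0; w[1] += 3.0; w[2] += 0.5; w[3] += 2.0
pruned, kept = prune_channels(w, keep_ratio=0.5)
print(pruned.shape, kept)  # half the channels survive
```

In a real network, pruning one layer's output channels also requires removing the matching input channels of the next layer, followed by fine-tuning to recover accuracy.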
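The weight-sharing quantization in (3) can be illustrated with a small pure-NumPy sketch. The abstract says only that a clustering algorithm groups the weights, so the 1-D k-means below is an assumed choice; after clustering, each weight is replaced by its cluster's mean, and only the cluster index per weight plus one centroid per cluster need to be stored:

```python
import numpy as np

def cluster_weights(w, n_clusters=4, n_iter=20):
    """Group the weights of one layer into shared values via 1-D k-means."""
    flat = w.ravel()
    # Initialize centroids evenly over the weight range.
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid.
        idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its members.
        for c in range(n_clusters):
            members = flat[idx == c]
            if members.size:
                centroids[c] = members.mean()
    idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
    return idx.reshape(w.shape), centroids

# Toy layer with two well-separated weight groups.
w = np.array([[0.0, 0.1], [1.0, 1.1]])
idx, centroids = cluster_weights(w, n_clusters=2)
shared = centroids[idx]  # every weight now shares its cluster mean
print(shared)
```

Storing a small integer index per weight instead of a full-precision value is what yields the memory reduction the abstract reports (90 M to 75 M for VGG-16).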
Keywords/Search Tags:Deep Convolutional Neural Network, Expression Recognition, Face Detection, Model Compression