Research On Computational Optimization Technology Based On Deep Learning

Posted on: 2020-06-26
Degree: Master
Type: Thesis
Country: China
Candidate: Z N Ma
Full Text: PDF
GTID: 2428330596473174
Subject: Information and Communication Engineering
Abstract/Summary:
As one of the most popular technologies of recent years, deep learning has achieved many breakthroughs: the technology is gradually maturing and its theory is steadily being enriched. Deep learning is now widely applied in many fields. However, most deep neural networks have large architectures: the networks are deep and wide, contain a large number of parameters, and require a large amount of computation. They therefore place high demands on computer hardware, generally requiring high-end graphics cards to run, and are particularly difficult to deploy on mobile or embedded devices with limited resources. This dissertation therefore studies computational optimization technology for deep learning, which mainly includes the following aspects:

Firstly, the fundamentals of network training in deep learning are studied, including the convolution operation, pooling operations (max pooling and average pooling), the selection and application of optimization algorithms in back propagation, and the choice of activation functions; the impact of these factors on the model's classification accuracy throughout training is analyzed. In addition, common network architectures such as VGGNet and ResNet are studied, and the widely used deep learning frameworks Caffe, TensorFlow, and PyTorch are briefly introduced.

Secondly, this dissertation analyzes the traditional depthwise separable convolution and points out its shortcomings in running speed. On this basis, a new separable convolution method is proposed. Experiments show that changing the number of groups in the separable convolution can speed up the model; compared with the depthwise separable convolution used in lightweight networks such as MobileNetV2, the proposed convolution not only improves the accuracy of the network model but also greatly improves its running speed. Drawing on the advantages of each network, this dissertation designs MNet, a lightweight convolutional neural network based on the improved separable convolution and oriented toward faster inference, while keeping the number of parameters as small as possible. MNet greatly reduces the space complexity and time complexity of the model while incurring only a small loss in accuracy.

Then, to address the heavy computation and long running time of complex models, the techniques of knowledge distillation, model pruning, and model quantization are studied, and corresponding optimization methods are proposed on that basis.

Finally, based on MNet, a computational optimization system combining knowledge distillation, model pruning, and model quantization is constructed to maximize the computational optimization. Several network models were then compared with MNet on an embedded platform; the experiments show that MNet runs much faster than the other networks, while its accuracy is only slightly lower than that of the large models.
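The parameter savings that motivate depthwise separable and grouped convolutions can be illustrated with a short parameter-count calculation. This is a minimal sketch in plain Python, not code from the thesis; the function names and the example channel sizes (64 in, 128 out, 3x3 kernel) are illustrative assumptions:

```python
def conv_params(c_in, c_out, k=3, groups=1):
    # Parameters of a (grouped) k x k convolution: each of the
    # `groups` groups maps c_in/groups input channels to
    # c_out/groups output channels, so the count shrinks by 1/groups.
    assert c_in % groups == 0 and c_out % groups == 0
    return k * k * (c_in // groups) * (c_out // groups) * groups

def separable_params(c_in, c_out, k=3):
    # Depthwise separable convolution = depthwise k x k convolution
    # (groups = c_in) followed by a 1 x 1 pointwise convolution.
    depthwise = conv_params(c_in, c_in, k, groups=c_in)
    pointwise = conv_params(c_in, c_out, k=1)
    return depthwise + pointwise

standard = conv_params(64, 128)        # 9 * 64 * 128      = 73728
grouped = conv_params(64, 128, groups=4)   # 73728 / 4     = 18432
separable = separable_params(64, 128)  # 9 * 64 + 64 * 128 = 8768
```

As the counts show, the group count is a dial between the standard convolution (groups = 1) and the depthwise extreme (groups = c_in), which is the design space the proposed separable convolution explores to trade parameters and speed against accuracy.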
Keywords/Search Tags: Deep Learning, Convolutional Neural Network, Model Compression, Model Pruning, Model Quantization