
A Quantitative Performance/Scale Analysis Of Deep Neural Networks

Posted on: 2021-05-20
Degree: Master
Type: Thesis
Country: China
Candidate: W J Wen
Full Text: PDF
GTID: 2428330647460168
Subject: Computer technology
Abstract/Summary:
Deep neural networks have been widely used in computer vision, natural language processing, speech recognition, and other fields. As deep neural networks grow deeper, their parameter scale grows larger, training and inference take longer, and the demands on hardware increase. Given the trend toward deploying such networks on mobile devices such as smartphones, reducing their computational and storage requirements is increasingly important. Deep neural network pruning aims to reduce parameter redundancy and network scale. This thesis starts from the weight values of a deep neural network, analyzes their magnitudes and how they change over the course of training, and applies different pruning methods to shrink the network, lower its hardware and energy requirements, and make it easier to deploy on mobile platforms. The main contributions are as follows:
(1) A probe is developed to collect the weight parameters of a deep neural network during training. By quantifying the weights' curve trend, slope, and degree of dispersion, we characterize how the weights of a generative network behave at different stages of training and propose a practical criterion for deciding when training can be terminated early.
(2) A fine-grained, single-weight pruning method is applied to a deep convolutional neural network. The experimental results show that it effectively reduces parameter redundancy: the resulting network is sparser and matches the accuracy of the original network.
(3) A structured filter pruning method is applied to a generative adversarial network. The experimental results show that it reduces storage space and accelerates inference. (A sketch of both pruning styles follows this abstract.)
(4) To demonstrate the impact of pruning in practice, a pruned deep network is deployed on two different Android phones for animation generation, showing that the network can be shrunk enough to run on low-end platforms.
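The following is a minimal sketch of the two pruning styles named in items (2) and (3), assuming PyTorch and its torch.nn.utils.prune utilities; the thesis does not state which framework or pruning criterion was used, and the layer shapes and pruning ratios below are illustrative, not the thesis's settings.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# (2) Fine-grained single-weight pruning: zero out the 50% of weights with the
#     smallest absolute value, leaving a sparse layer of the same shape.
conv_a = nn.Conv2d(16, 32, kernel_size=3)
prune.l1_unstructured(conv_a, name="weight", amount=0.5)

# (3) Structured filter pruning: zero out 25% of entire output filters (dim=0),
#     ranked by their L2 norm, which reduces the layer's effective width and
#     thus both storage and inference time.
conv_b = nn.Conv2d(16, 32, kernel_size=3)
prune.ln_structured(conv_b, name="weight", amount=0.25, n=2, dim=0)

# Fold the pruning masks into the weight tensors before exporting the model,
# e.g. for deployment on an Android device.
for m in (conv_a, conv_b):
    prune.remove(m, "weight")
    sparsity = float((m.weight == 0).sum()) / m.weight.numel()
    print(f"sparsity = {sparsity:.2%}")

In the structured case, the zeroed filters can subsequently be removed from the layer altogether, yielding a physically smaller model rather than merely a sparse one, which is what matters for storage and speed on a mobile platform.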
Keywords/Search Tags: deep learning, quantitative analysis, network pruning, convolutional neural network, generative adversarial network