
On The Learning And Compression Of Deep Neural Network Structure

Posted on: 2022-10-07    Degree: Master    Type: Thesis
Country: China    Candidate: S B Shen    Full Text: PDF
GTID: 2518306536487854    Subject: Information and Communication Engineering
Abstract/Summary:
In recent years, artificial intelligence, and deep learning in particular, has made outstanding progress in areas such as pattern recognition, scene perception, and task decision-making. At the same time, the development of edge computing and devices such as embedded terminals has brought higher data-processing and computing requirements. The excellent data feature-extraction and analysis capabilities of Deep Neural Networks (DNNs) give them broad application prospects in edge-computing scenarios. However, the huge complexity of DNNs severely limits their application in edge computing, where resources are constrained. The key issue of this research is therefore how to effectively reduce the complexity of DNNs.

To begin with, this thesis surveys mainstream deep-learning compression schemes, especially pruning. Building on previous research, we propose dynamic feature-map propagation, which dynamically adjusts the forward-propagation substructure of the neural network during the training phase. The aim is to reduce the complexity of deep Convolutional Neural Networks (CNNs), streamline the traditional deep-learning pruning procedure, and realize end-to-end neural-network training and compression.

Furthermore, this research proposes a Deep Structural Learning (DSL) strategy. By effectively evaluating the importance of the different layers of a neural network, DSL assigns more neurons to the layers with higher importance. Before the network parameters are trained, the priority is thus to learn a compact yet efficient DNN structure, which helps reduce the complexity of deep learning in the training phase and improves the convergence speed of the parameters.

In addition, this research incorporates DSL into a common DNN deployment framework inspired by "cloud-edge collaboration", aiming to simplify the process complexity of traditional model deployment and to fully satisfy the privacy and personalization needs of edge applications. Finally, the research carries out experimental verification on mainstream DNN architectures and standard datasets, and the experimental results fully demonstrate the effectiveness of the proposed approaches.
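To make the idea of dynamic feature-map propagation concrete, the following is a minimal illustrative sketch, not the thesis's actual implementation: each layer keeps a per-channel importance score, and the forward pass propagates only the feature maps whose score exceeds a threshold, so the effective substructure is adjusted during training and pruning becomes part of the training loop. The names `ChannelGate`, `threshold`, and `scores` are assumptions made for illustration.

```python
import numpy as np

class ChannelGate:
    """Illustrative per-channel gate for dynamic feature-map propagation."""

    def __init__(self, num_channels, threshold=0.5, seed=0):
        rng = np.random.default_rng(seed)
        # Per-channel importance scores in [0, 1]; in a real system these
        # would be learned jointly with the network weights.
        self.scores = rng.uniform(0.0, 1.0, size=num_channels)
        self.threshold = threshold

    def mask(self):
        # Binary mask: 1 keeps the channel, 0 suppresses it for this pass.
        return (self.scores >= self.threshold).astype(np.float32)

    def forward(self, feature_maps):
        # feature_maps: (batch, channels, H, W). Gated channels become zero,
        # so downstream layers see a smaller effective substructure.
        return feature_maps * self.mask()[None, :, None, None]

gate = ChannelGate(num_channels=8, threshold=0.5, seed=0)
x = np.ones((2, 8, 4, 4), dtype=np.float32)
y = gate.forward(x)
print(y.shape, int(gate.mask().sum()), "channels kept")
```

Because the gate acts inside the forward pass, channels whose scores stay below the threshold at the end of training can simply be removed, yielding a compressed network without a separate post-training pruning step.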
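The layer-wise allocation idea behind DSL can likewise be sketched as a budget-splitting rule: given an importance estimate per layer, distribute a fixed total neuron budget so that more important layers receive more neurons, producing a compact structure before full parameter training. The importance values and the proportional allocation rule below are illustrative assumptions, not the thesis's actual importance criterion.

```python
import numpy as np

def allocate_neurons(layer_importance, total_budget, min_per_layer=1):
    """Split a total neuron budget across layers by normalized importance."""
    importance = np.asarray(layer_importance, dtype=np.float64)
    n_layers = len(importance)
    # Guarantee every layer a minimum width, then split the remainder
    # proportionally to normalized importance.
    remaining = total_budget - min_per_layer * n_layers
    weights = importance / importance.sum()
    alloc = min_per_layer + np.floor(weights * remaining).astype(int)
    # Hand leftover neurons (lost to flooring) to the most important layers.
    leftover = int(total_budget - alloc.sum())
    order = np.argsort(-importance)
    for i in range(leftover):
        alloc[order[i % n_layers]] += 1
    return alloc

sizes = allocate_neurons([0.1, 0.5, 0.4], total_budget=100)
print(sizes, sizes.sum())
```

Under this rule the most important layer always ends up widest, and the allocation exactly exhausts the budget, so the resulting structure can be fixed before parameter training begins.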
Keywords/Search Tags: deep neural network pruning, dynamic feature-map propagation, end-to-end neural network training and compression, deep structural learning, cloud-edge collaboration, deep neural network deployment