
Analysis Of Network Redundancy In Deep Learning

Posted on: 2019-04-12
Degree: Master
Type: Thesis
Country: China
Candidate: J X Hu
Full Text: PDF
GTID: 2348330542956386
Subject: Computer application technology
Abstract/Summary:
Deep learning, as an important research direction in the field of artificial intelligence, has received extensive attention for its outstanding performance. Compared with traditional machine learning techniques, deep learning achieves complex function approximation through deep nonlinear network structures, demonstrating a powerful ability to learn the essential features of data sets. However, deep neural network models capture both the features and the noise in the sample data, which leads to over-fitting. In addition, complex models place heavy demands on storage space and computing resources. Meanwhile, with the popularization of smart mobile devices, neural networks, as the dominant method in deep learning, increasingly need to be deployed on small-scale consumer-grade equipment. Finding a balance between performance on the one hand and memory and computing resources on the other is therefore particularly important.

This thesis studies the redundancy in deep neural networks and proposes a training optimization method for them. The main contents are as follows. First, we compare the factors that affect the final performance of a network model. Second, the importance of the different connections in the network model is quantified by removing them and observing the effect. Third, an SR optimization method for deep neural networks is proposed. Comparative experiments were carried out with two convolutional neural network architectures, LeNet and AlexNet, on the MNIST, CIFAR-10, and CIFAR-100 datasets. The experimental results show that the SR training method can effectively improve the training performance of CNNs: under the same experimental conditions, Top-1 accuracy increases by 1.8%-5.0%, and the network scale is effectively reduced.

This work verifies that extensive redundancy generally exists in deep neural networks, and that different weights have different importance within a network. The study provides guidance for further research on the training process of deep neural networks and is of significance for both academic research and engineering applications.
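The abstract does not spell out how connection importance is measured, so the following is a minimal, self-contained sketch of the general probe it describes: remove connections from a trained model and watch how accuracy responds. The model here (a logistic regression on synthetic data), the magnitude-based pruning criterion, and all names in the code are illustrative assumptions, not the thesis's SR method.

import numpy as np

# Hypothetical sketch: probe connection importance by zeroing weights
# and measuring the accuracy drop. Uses a tiny logistic-regression model
# on synthetic data so the example runs on its own.

rng = np.random.default_rng(0)

# Synthetic two-class data: two Gaussian blobs in 20 dimensions.
n, d = 2000, 20
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(+1.0, 1.0, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Train logistic regression by plain gradient descent on the log-loss.
w = np.zeros(d)
b = 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / n)           # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

def accuracy(w, b):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

print(f"dense accuracy: {accuracy(w, b):.3f}")

# Remove connections in order of increasing |weight| (magnitude pruning,
# a common importance proxy) and watch how accuracy degrades.
order = np.argsort(np.abs(w))
for frac in (0.25, 0.5, 0.75, 0.9):
    w_pruned = w.copy()
    w_pruned[order[: int(frac * d)]] = 0.0   # zero the smallest weights
    print(f"pruned {frac:.0%} of weights -> accuracy {accuracy(w_pruned, b):.3f}")

On heavily over-parameterized networks such as LeNet or AlexNet, the same probe typically shows accuracy holding almost flat until a large fraction of the small-magnitude weights has been removed, which is exactly the redundancy the thesis investigates.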
Keywords/Search Tags: deep learning, convolutional neural network, weights, redundancy, artificial intelligence