
Research On Batch Normalization Of Deep Neural Network Based On Covariate Shift

Posted on: 2019-09-16    Degree: Master    Type: Thesis
Country: China    Candidate: B Huang    Full Text: PDF
GTID: 2428330548961227    Subject: Engineering
Abstract/Summary:
With the arrival of the "big data" era and the development of high-performance computing hardware, deep learning has become a central topic in artificial intelligence. Training deep neural networks quickly and effectively, however, remains difficult, and because of the complexity of deep architectures this problem has attracted growing attention from researchers. The difficulty of training grows with network depth; an important reason is the back-propagation algorithm itself, which, as the network deepens, inevitably suffers from the vanishing gradient problem. Researchers have proposed various remedies, such as different activation functions, adaptive learning methods, and new regularization methods. The batch normalization algorithm, which re-normalizes the inputs of each layer, stabilizes the training of deep networks, and most deep neural network architectures now rely on inserting a batch normalization layer into the feed-forward network. Although this makes it possible to train deeper networks, it also increases the amount of computation and the time overhead, and it conflicts to some extent with existing regularization algorithms, which degrades the overall performance of the model.

This thesis explains the inconsistency of the input distributions between hidden layers of a deep neural network from the perspective of covariate shift, and on this basis introduces the role of the batch normalization layer. To address the redundancy of the batch normalization algorithm itself and its conflict with other regularization algorithms, a simplified batch normalization layer algorithm, Fast-Dropout, is proposed, together with a new framework that allows the algorithm to cooperate effectively with other regularization methods. The time cost of the proposed algorithm is compared with that of the original batch normalization algorithm to study its acceleration effect, and its classification accuracy when combined with other regularization algorithms is verified on classification problems over two different data sets. Experimental results show that the proposed Fast-Dropout algorithm has a smaller time overhead than the original batch normalization algorithm and can effectively improve classification accuracy when combined with other regularization algorithms.
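For reference, the following is a minimal sketch of the standard batch normalization forward pass that the abstract refers to (the usual formulation with per-feature normalization plus a learned scale and shift). The function name, shapes, and example values are illustrative assumptions; this is not the thesis's Fast-Dropout variant, whose details are given in the full text.

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Standard batch normalization over a mini-batch.

    x     : (batch_size, features) activations of a hidden layer
    gamma : (features,) learned scale
    beta  : (features,) learned shift
    """
    # Normalize each feature to zero mean and unit variance over the batch,
    # counteracting the shift in the layer's input distribution.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Scale and shift so the layer can still represent the identity transform.
    return gamma * x_hat + beta

# Example: normalize a mini-batch of 32 samples with 64 hidden units.
x = np.random.randn(32, 64)
out = batch_norm_forward(x, gamma=np.ones(64), beta=np.zeros(64))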
Keywords/Search Tags:Deep Learning, Neural Network, Covariate Shift, Batch Normalization, Regularization