
Research of Dropout Methods Based on Convolutional Neural Networks

Posted on: 2022-04-16    Degree: Master    Type: Thesis
Country: China    Candidate: T S Xie    Full Text: PDF
GTID: 2518306524990029    Subject: Master of Engineering
Abstract/Summary:
In recent years, with the rapid development of computer hardware and software, deep learning has risen to prominence in computer vision, natural language processing, autonomous driving, and other fields. The convolutional neural network, one of the most important models in deep learning, has achieved excellent results in image classification, object detection, and related tasks, but it still faces problems such as overfitting and large time and memory overhead.

As a representative regularization method, Dropout effectively suppresses overfitting by masking some neurons during training. In convolutional neural networks, however, standard Dropout does not improve performance: adjacent neurons in a feature map carry similar semantics, so randomly discarding individual neurons amounts to an ineffective drop, since neighbouring activations can substitute for the dropped ones. Building on this observation, researchers have adapted Dropout to convolutional networks; typical representatives include Spatial Dropout and DropBlock. We note, however, that this family of methods shares a common problem: masking neurons during training inevitably loses information. Improving the efficiency of feature learning while avoiding this information loss is a challenging problem.

Motivated by this shortcoming, and by network-visualization experiments showing that different regions of a feature map carry different image information, we propose a new Dropout method for convolutional neural networks: DropReuse. During training, we discard neurons by splitting the feature map, then reuse the features that would otherwise be discarded through additional fully connected layers. Finally, we update the network parameters with multiple loss functions, strengthening the learning of selected features while avoiding information loss.

In this thesis, DropReuse is evaluated on cross-species image classification, fine-grained image classification, and object detection. Experiments show that our method improves network performance on CIFAR-100, ImageNet, Tiny ImageNet, CUB-200-2011, Pascal VOC, and other datasets, and across architectures including ResNet, PyramidNet, DenseNet, and WideResNet. We also show that DropReuse outperforms existing Dropout and self-distillation methods, and that it can be stacked with data augmentation and self-distillation, making it highly competitive among existing regularization methods. This thesis further explores the interpretability of DropReuse through a series of ablation experiments, using t-SNE and quantitative statistics to visualize the network and understand why our algorithm improves performance.
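To make the contrast between element-wise and structured Dropout concrete, the following minimal PyTorch sketch (our illustration, not code from the thesis) compares standard Dropout with Spatial Dropout: the former zeroes individual activations that neighbouring activations can compensate for, while the latter zeroes entire feature-map channels.

```python
import torch
import torch.nn as nn

# Element-wise dropout zeroes individual activations; on convolutional
# feature maps this is largely ineffective because neighbouring activations
# carry similar semantics and can stand in for the dropped ones.
x = torch.randn(8, 64, 32, 32)       # (batch, channels, H, W)
elementwise = nn.Dropout(p=0.2)      # standard Dropout: drops single activations
structured = nn.Dropout2d(p=0.2)     # Spatial Dropout: drops whole channels

# Roughly 20% of individual activations are zeroed.
print(elementwise(x).eq(0).float().mean())
# Roughly 20% of the 64 channels are zeroed in their entirety.
print(structured(x)[0].sum(dim=(1, 2)).eq(0).sum())
```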
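The abstract does not give implementation details of DropReuse, so the sketch below is only a hedged reconstruction of the stated idea: split the features, classify from the kept part, and route the part that would be discarded through an auxiliary fully connected head trained with its own loss. The class name DropReuseSketch, the channel-wise split on a pooled feature vector, the drop_ratio parameter, and the auxiliary head are all illustrative assumptions, not the thesis's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropReuseSketch(nn.Module):
    """Illustrative sketch of the Drop Reuse idea: rather than discarding
    part of the features outright, reuse the 'dropped' part through an
    auxiliary fully connected layer with its own loss term."""

    def __init__(self, channels=64, num_classes=100, drop_ratio=0.25):
        super().__init__()
        self.split = int(channels * drop_ratio)           # size of the "dropped" slice
        self.main_head = nn.Linear(channels - self.split, num_classes)
        self.reuse_head = nn.Linear(self.split, num_classes)  # reuses dropped features

    def forward(self, feat, target=None):
        # feat: (N, C, H, W) feature map from a convolutional backbone
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)    # (N, C)
        dropped, kept = pooled[:, :self.split], pooled[:, self.split:]
        main_logits = self.main_head(kept)
        if self.training:
            reuse_logits = self.reuse_head(dropped)
            # Multiple losses update the network, so the split-off features
            # still contribute gradient instead of being lost.
            loss = F.cross_entropy(main_logits, target) \
                 + F.cross_entropy(reuse_logits, target)
            return main_logits, loss
        return main_logits

# Usage with hypothetical backbone features:
backbone_feat = torch.randn(4, 64, 8, 8)
labels = torch.randint(0, 100, (4,))
head = DropReuseSketch()
logits, loss = head(backbone_feat, labels)
```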
Keywords/Search Tags: Deep learning, Convolutional neural network, Regularization method, Dropout, Self-distillation method