As one of the fundamental problems in computer vision, image classification has made substantial progress with the introduction of deep learning. However, overfitting inevitably arises when training a classification model, degrading its performance. Overfitting means that, because the model is too complex or the data are insufficient, the network fits only the training set closely and performs poorly on the test set; that is, the network fails to generalize. Dropout is a regularization strategy for neural networks: by randomly discarding neurons during training, it equalizes the "importance" of individual neurons and thereby reduces the network's tendency to overfit. However, Dropout performs poorly in convolutional neural networks, because the features extracted by convolutional layers are spatially correlated, so the information of the neurons discarded by Dropout cannot be fully blocked. Many methods attempt to improve Dropout by randomly dropping regions or channels.

In this thesis, common Dropout algorithms are studied and compared. On this basis, a new regularization strategy named FocusedDropout is proposed, which is the first non-random Dropout strategy for CNNs. By discarding units related to the background and noise, a network trained with FocusedDropout pays more attention to the foreground information and the classification target, thereby achieving better classification results. During the research, it was observed that the highly activated units of a convolutional neural network often correspond to the classification target. Based on this observation, FocusedDropout selects the channel with the highest average activation during training, discards the units on that channel whose activation is below a threshold, and treats the units above the threshold as the preferred region. Supported by the spatial invariance of CNN features, FocusedDropout retains only the units within the preferred region on the remaining channels. FocusedDropout therefore makes the network focus on features related to the classification target, which better regularizes the network and prevents overfitting.

This thesis evaluates FocusedDropout on multiple common models and multiple common datasets to demonstrate its effectiveness and generality. During training, FocusedDropout adopts a new scheme: instead of being applied to all training samples, it is applied only to a randomly selected 10% of batches, and the model is kept unchanged at test time. This scheme makes FocusedDropout incur only a very small cost in training time and memory, and it is easy to implement. Experimental results show that FocusedDropout yields clear performance improvements on the CIFAR-10 and CIFAR-100 datasets and generalizes well across convolutional classification models including VGG, ResNet, and DenseNet.
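The channel-selection and masking procedure described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the author's implementation: the abstract does not specify how the threshold is computed, so here it is taken as a fraction (`threshold_ratio`) of the reference channel's activation range, and the function name, parameter names, and per-sample loop are illustrative.

```python
import numpy as np

def focused_dropout(x, threshold_ratio=0.5, participation_rate=0.1,
                    training=True, rng=None):
    """Sketch of FocusedDropout on a batch of feature maps x of shape (N, C, H, W).

    For each sample: pick the channel with the highest average activation,
    threshold it to obtain the 'preferred region', and zero out units outside
    that region on ALL channels. Applied only to a fraction of batches
    (participation_rate, 10% in the thesis) and skipped at test time.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Identity at test time and for the ~90% of batches not selected.
    if not training or rng.random() > participation_rate:
        return x
    out = np.empty_like(x)
    for n in range(x.shape[0]):
        # Channel with the highest average activation (the reference channel).
        c_star = x[n].mean(axis=(1, 2)).argmax()
        ref = x[n, c_star]
        # Assumed threshold rule: a fraction of the reference channel's range.
        thr = ref.min() + threshold_ratio * (ref.max() - ref.min())
        mask = (ref > thr).astype(x.dtype)  # binary preferred-region mask
        # Retain only preferred-region units on every channel of this sample.
        out[n] = x[n] * mask
    return out
```

A usage note: with `training=False` the function is a no-op, matching the abstract's statement that the model is kept unchanged during testing, and with the default `participation_rate=0.1` roughly one batch in ten is masked.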