
The Research Of VGGNet Based On Inception And Feature Reuse

Posted on: 2021-11-01
Degree: Master
Type: Thesis
Country: China
Candidate: H Chen
Full Text: PDF
GTID: 2518306122964189
Subject: Computer technology

Abstract/Summary:
In recent years, thanks to the availability of large-scale public datasets and high-performance computing systems, machine learning (especially deep learning) has developed rapidly, and network models are constantly being updated. Machine learning technology represented by deep learning is widely used in many fields, such as image classification, object detection, and autonomous driving. Among these, image classification is one of the hot directions in computer vision. VGGNet is one of the most important models in the development of image classification: it not only took second place in the ILSVRC 2014 competition with a top-5 error rate of 7.3%, but also demonstrated the importance of depth for improving the performance of convolutional networks, and it played an important role in the structural design and development of later learning models. On the other hand, VGGNet has a large number of parameters, requires heavy computation, is prone to over-fitting, and must be trained sequentially, model by model. Therefore, reducing the number of parameters, shortening training time, and preventing over-fitting are three important challenges in optimizing the VGGNet model.

This paper proposes an improved VGGNet model, Inception-VGGNet. The network replaces some of the 3×3 convolutions with Inception-1, Inception-2, and Inception-3 modules, which increases both network width and depth. Second, Inception-VGGNet applies batch normalization after each convolution and before the activation function, which reduces internal covariate shift and makes the input distribution of each layer more consistent, accelerating network training. Furthermore, the improved network replaces the fully connected layers with global average pooling, which greatly reduces the number of parameters and also helps prevent over-fitting. Finally, to facilitate fine-tuning of the model on different datasets, a linear layer is added in front of the classifier. Classification experiments are performed on the ILSVRC2012 and CIFAR datasets. The results show that Inception-VGGNet surpasses the VGGNet family of networks with 7.3M parameters, 17.3 GFLOPs of computation, and a classification error rate of 6.92%. In addition, the experimental results show that Inception-VGGNet converges more easily when batch normalization is used.

Building on Inception-VGGNet, this paper proposes Dense Inception Res-VGGNet, which uses two feature-reuse methods: dense connections and residual learning. The network introduces residual learning into the Inception module and proposes three Inception Res modules based on residual connections, enhancing feature reuse within each module. It also uses dense connections between stacks of Inception Res modules, further exploiting feature reuse. Compared with Inception-VGGNet, the feature-reuse-based network increases the number of network paths, mitigates vanishing and exploding gradients, reduces network singularities, and improves both information flow and the utilization of low-level features, all without increasing the number of parameters or the amount of computation. Dense Inception Res-VGGNet is again evaluated on the ILSVRC2012 and CIFAR datasets, and the results show that it outperforms Inception-VGGNet with the same parameter count and computational cost. Robustness and depth-scalability tests further show that the network's performance is stable and its depth is scalable.
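The abstract does not give the exact module definitions, but the building blocks it names (convolution followed by batch normalization before the activation, an Inception-style multi-branch module with a residual shortcut, and global average pooling in place of the fully connected layers, with a linear layer in front of the classifier) can be sketched in PyTorch. This is a minimal illustrative toy, not the thesis architecture: the module names, branch layout, and channel counts here are assumptions.

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Sequential):
    """Convolution -> batch normalization -> activation (BN before the ReLU)."""
    def __init__(self, in_ch, out_ch, k, **kw):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, k, bias=False, **kw),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

class InceptionRes(nn.Module):
    """Hypothetical Inception-style block with a residual shortcut
    (feature reuse inside the module)."""
    def __init__(self, ch):
        super().__init__()
        self.b1 = ConvBNReLU(ch, ch // 2, 1)                 # 1x1 branch
        self.b3 = nn.Sequential(                             # 1x1 -> 3x3 branch
            ConvBNReLU(ch, ch // 2, 1),
            ConvBNReLU(ch // 2, ch // 2, 3, padding=1),
        )
        self.project = ConvBNReLU(ch, ch, 1)  # merge branches back to ch channels

    def forward(self, x):
        out = torch.cat([self.b1(x), self.b3(x)], dim=1)  # widen the network
        return x + self.project(out)                      # residual connection

class TinyInceptionVGG(nn.Module):
    """Toy VGG-style stack with Inception blocks, GAP head, and a linear classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            ConvBNReLU(3, 64, 3, padding=1),
            InceptionRes(64),
            nn.MaxPool2d(2),
            ConvBNReLU(64, 128, 3, padding=1),
            InceptionRes(128),
        )
        # Global average pooling replaces the heavy fully connected layers,
        # and a single linear layer sits in front of the classifier output.
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.linear = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.gap(self.features(x)).flatten(1)
        return self.linear(x)

model = TinyInceptionVGG()
logits = model(torch.randn(2, 3, 32, 32))  # two 32x32 RGB inputs, CIFAR-sized
print(logits.shape)
```

The dense connections described for Dense Inception Res-VGGNet would additionally concatenate the outputs of earlier module stacks into the inputs of later ones, DenseNet-style, so that low-level features are reused across stacks rather than only within a module.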
Keywords/Search Tags: Image classification, Inception-VGGNet, Dense Inception Res-VGGNet, Feature reuse