
Research On The Mechanism Of Basic Elements Of Convolutional Neural Network

Posted on: 2020-07-11
Degree: Master
Type: Thesis
Country: China
Candidate: Y Ke
GTID: 2428330602962026
Subject: Control engineering

Abstract/Summary:
In recent years, deep learning has attracted wide attention as a major research direction and has become an important cornerstone of the development of artificial intelligence. The essence of deep learning is to use artificial neural networks with multiple hidden layers to combine low-level features into more abstract high-level attributes or features, thereby discovering distributed feature representations of the data and achieving good generalization. The convolutional neural network is a widely used classical deep learning framework: its weight sharing, local connectivity, and pooling operations greatly reduce model complexity and the number of weight parameters, and give the extracted features a certain degree of translation and scaling invariance. Convolutional neural networks therefore perform excellently in image feature extraction, classification, and semantic segmentation.

The architecture of a convolutional neural network mainly consists of convolutional layers, pooling layers, and fully connected layers, each of which plays an important role in the feature learning process. This paper studies the convolution and pooling operations, the core operations of convolutional neural networks, as follows:

Firstly, the role of depth in convolutional neural networks is explored experimentally, and the advantages of networks with multiple hidden layers are demonstrated through comparative experiments. The standard odd-sized convolution kernels are decomposed to construct a network architecture composed of even-sized convolution kernels, and the advantages and practical feasibility of this design are demonstrated. Based on the nature of the target features, selection rules for the initial convolution kernels are proposed, showing that a pre-training step is not necessary.

Secondly, different up-sampling methods are compared. The experimental results show the influence of positional information and the deconvolution step size on the overall up-sampling effect, and further demonstrate the advantages of the nearest-neighbor up-sampling method.
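The even-sized-kernel idea can be illustrated with a short sketch. The block below is a minimal PyTorch illustration, not the exact architecture used in the thesis: it shows one way to build a spatial-size-preserving layer from 2x2 kernels by adding asymmetric zero padding to compensate for the missing kernel center. The class name EvenConvBlock and the left/top padding choice are assumptions made for this example.

```python
import torch
import torch.nn as nn

class EvenConvBlock(nn.Module):
    """Illustrative 2x2 convolution block that preserves spatial size.

    A 2x2 kernel shrinks an HxW map to (H-1)x(W-1), so one row and one
    column of asymmetric zero padding (left/top here) are added first.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.pad = nn.ZeroPad2d((1, 0, 1, 0))            # (left, right, top, bottom)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(self.pad(x)))

x = torch.randn(1, 3, 32, 32)
y = EvenConvBlock(3, 16)(x)
print(y.shape)  # torch.Size([1, 16, 32, 32]) -- spatial size is preserved
```

Stacking such blocks gives a network built entirely from even-sized kernels, analogous in spirit to the architecture the thesis constructs from decomposed odd-sized kernels.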
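The two up-sampling routes compared in the second part can also be sketched generically. The snippet below contrasts deconvolution (transposed convolution) with nearest-neighbor up-sampling in PyTorch; it is not the experimental setup of the thesis, and the channel counts and kernel sizes are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 8, 8)

# Route 1: transposed convolution ("deconvolution") -- a learned 2x upsampler.
# kernel_size=2 with stride=2 exactly doubles the spatial size (8x8 -> 16x16).
deconv = nn.ConvTranspose2d(16, 16, kernel_size=2, stride=2)

# Route 2: parameter-free nearest-neighbor upsampling, followed by a regular
# 3x3 convolution that refines the duplicated pixels.
nn_up = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
)

print(deconv(x).shape)  # torch.Size([1, 16, 16, 16])
print(nn_up(x).shape)   # torch.Size([1, 16, 16, 16])
```

Both routes produce feature maps of the same size; the comparison in the thesis concerns which one preserves positional information better and yields the better overall up-sampling effect.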
Keywords/Search Tags: Deep Learning, Convolutional Neural Network, Convolution Kernel, Pooling, Up-Sampling, Deconvolution