
An Improved Learning Algorithm For Convolutional Neural Networks

Posted on: 2021-04-20    Degree: Master    Type: Thesis
Country: China    Candidate: L Lu    Full Text: PDF
GTID: 2428330626460401    Subject: Computational Mathematics

Abstract/Summary:
With the continuous development of computing, applications of artificial neural networks have gradually expanded from binary linear classification to the broader field of artificial intelligence. In particular, the wide application of convolutional neural networks has made people's lives more intelligent and convenient. Application research on convolutional neural networks is a hotspot: researchers have improved both network structures and learning algorithms to raise the network's performance in various fields. Theoretical research on neural networks is also crucial; only by fully understanding their nature can we improve neural networks in a targeted way and continuously perfect the neural network framework.

The main work of this thesis covers two aspects. On the theoretical side, most existing work on interpreting neural networks visually explains the features learned by the network. This thesis instead explores the relationship between the input neurons and the output neurons of a neural network from the perspective of probability density. For classification problems, the theoretical derivation shows that, under the assumption that the input neurons are mutually independent and Gaussian-distributed, the probability density function of an output neuron can be expressed as a mixture of three Gaussian density functions whose means and variances are determined by the input statistics and the network parameters. Experimental verification is then performed on three datasets; the results show that the theoretical distribution of the output neuron largely agrees with the empirical distribution, which confirms the correctness of the derivation.

On the application side, the classification performance of a network is closely tied to its learning algorithm. The error back-propagation algorithm is usually used to train
convolutional neural networks. However, in this algorithm different samples contribute almost equally to the weight updates, so the adjustment of the network parameters is barely influenced by difficult-to-classify samples, which reduces the classification accuracy of the network. To enlarge this contribution gap, this thesis first defines the classification confidence as the degree to which a sample belongs to its correct category, and divides the samples of each category into danger and safe groups according to a dynamic threshold. Second, an improved learning algorithm based on classification difficulty is presented: the loss of danger samples is penalized, making the convolutional neural network pay more attention to danger samples and learn more effective information. Finally, experiments carried out on the MNIST dataset and three sub-datasets of CIFAR-10 show that on MNIST the accuracy of the traditional CNN reached 99.246%, while that of the improved network (PCNN) reached 99.3%; on the three CIFAR-10 sub-datasets, the accuracies of the traditional CNN were 96.15%, 88.93%, and 94.92%, while those of the PCNN were 96.44%, 89.37%, and 95.22%, respectively.
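The theoretical result above can be illustrated in its simplest case. The thesis's full derivation (a three-component Gaussian mixture for the output of a classification network) is not reproduced here; the sketch below only shows the underlying building block, a fact of probability theory: a linear neuron fed independent Gaussian inputs produces an exactly Gaussian output whose mean and variance depend on the input statistics and the weights. All numeric values (means, variances, weights, bias) are illustrative assumptions, and the Monte Carlo check is a stand-in for the thesis's experimental verification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption from the thesis: input neurons x_i are independent, x_i ~ N(mu_i, sigma_i^2).
mu = np.array([0.5, -1.0, 2.0])      # illustrative input means
sigma = np.array([1.0, 0.5, 1.5])    # illustrative input std deviations

# Hypothetical parameters of a single linear output neuron y = w . x + b.
w = np.array([0.8, -0.3, 0.1])
b = 0.2

# Theoretical output distribution: y is Gaussian with
#   mean = w . mu + b,   var = sum_i w_i^2 * sigma_i^2
mean_theory = w @ mu + b
var_theory = np.sum(w**2 * sigma**2)

# Monte Carlo check: sample inputs, push them through the neuron,
# and compare empirical moments with the theoretical ones.
x = rng.normal(mu, sigma, size=(200_000, 3))
y = x @ w + b

print(f"mean: theory {mean_theory:.4f}, empirical {y.mean():.4f}")
print(f"var:  theory {var_theory:.4f}, empirical {y.var():.4f}")
```

With a nonlinear activation the output density is no longer a single Gaussian, which is where the thesis's mixture-of-three-Gaussians expression takes over.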
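The penalized loss described above can be sketched as follows. This is not the thesis's exact algorithm: the choice of the dynamic threshold (here, the batch-mean confidence), the penalty factor, and the function name are all illustrative assumptions. It only shows the mechanism: compute the classification confidence of each sample, mark samples below the threshold as danger, and scale their cross-entropy loss up so they influence the weight updates more.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def penalized_ce_loss(logits, labels, penalty=2.0):
    """Cross-entropy loss where 'danger' samples are up-weighted.

    Classification confidence = softmax probability of the correct class.
    Dynamic threshold = batch-mean confidence (an assumption; the thesis
    uses its own per-category dynamic threshold). Samples below the
    threshold are 'danger' and their loss is scaled by `penalty`.
    """
    p = softmax(logits)
    conf = p[np.arange(len(labels)), labels]   # confidence of each sample
    threshold = conf.mean()                    # dynamic, batch-dependent
    danger = conf < threshold                  # danger vs safe split
    ce = -np.log(conf + 1e-12)                 # per-sample cross-entropy
    weights = np.where(danger, penalty, 1.0)   # penalize danger samples
    return (weights * ce).mean(), danger
```

For a batch with one confidently classified sample and one ambiguous one, only the ambiguous sample is flagged as danger, and the penalized loss exceeds the plain cross-entropy, so gradients from hard samples dominate the update.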
Keywords/Search Tags:Artificial Neural Networks, Convolutional Neural Networks, Probability Density Function, Gaussian Distribution, Learning Algorithm