
Convergence Analysis Of Neural Network Learning Algorithm With Adaptive Learning Rate

Posted on: 2017-01-09
Degree: Master
Type: Thesis
Country: China
Candidate: H J Wang
Full Text: PDF
GTID: 2348330488958869
Subject: Computational Mathematics

Abstract/Summary:
The development of artificial neural networks has gone through four main stages: an initial rise, a climax, a trough, and a renewed rise. If the first rise was driven mainly by curiosity, the second rise came about because many fields ran into difficult problems that artificial neural networks could solve, which demonstrated their practical value. The BP neural network, the most widely used model, still suffers from several problems, such as excessively large weights, slow convergence, and frequent convergence to local minima. To address these problems, a penalty term is usually added to the error function; however, proving the boundedness of the network weights and the convergence of the algorithm then typically requires strict conditions. In this thesis we apply the Armijo-Wolfe rule to BP algorithms with different penalty terms and obtain the boundedness of the weights and the convergence of the algorithm. The thesis is organized as follows:

1. Chapter 1 gives a brief introduction to neural networks, including the development stages of neural networks, the different network models, and so on.

2. Chapter 2 applies the Armijo-Wolfe rule to a neural network with a general penalty term. Under more relaxed conditions, we prove the boundedness of the weights and the convergence of the algorithm.

3. Chapter 3 considers a more general penalty term and applies the Armijo-Wolfe rule to a neural network with an L1/2 penalty term. The boundedness of the network weights and the convergence of the algorithm are also proved.
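The idea studied in Chapters 2 and 3 can be illustrated with a minimal sketch (not taken from the thesis): minimize a penalized error function by steepest descent, where the learning rate at each iteration is chosen by an Armijo-Wolfe line search. The Python example below uses a toy least-squares loss in place of a BP network, and a smoothed surrogate (w_i^2 + eps)^(1/4) for the L1/2 term |w_i|^(1/2); the penalty weight lam, the smoothing constant eps, and the line-search constants c1, c2 are illustrative assumptions, not the thesis's settings.

import numpy as np

# Toy objective: squared error on a linear model plus a smoothed L1/2 penalty.
# lam, eps, c1, c2 are illustrative values, not taken from the thesis.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=50)
lam, eps = 0.01, 1e-8

def error(w):
    # E(w) = 1/2 ||Xw - y||^2 + lam * sum_i (w_i^2 + eps)^(1/4)
    return 0.5 * np.sum((X @ w - y) ** 2) + lam * np.sum((w ** 2 + eps) ** 0.25)

def grad(w):
    # Gradient of the penalized error function.
    return X.T @ (X @ w - y) + lam * 0.5 * w * (w ** 2 + eps) ** (-0.75)

def armijo_wolfe_step(w, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bracketing search for a step size satisfying the Armijo (sufficient
    decrease) and Wolfe (curvature) conditions along descent direction d."""
    lo, hi, eta = 0.0, np.inf, 1.0
    f0, g0 = error(w), grad(w) @ d                     # g0 < 0 for a descent direction
    for _ in range(max_iter):
        if error(w + eta * d) > f0 + c1 * eta * g0:    # Armijo fails: step too long
            hi = eta
        elif grad(w + eta * d) @ d < c2 * g0:          # curvature fails: step too short
            lo = eta
        else:
            return eta
        eta = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
    return eta

w = np.zeros(10)
for k in range(200):
    d = -grad(w)                      # steepest-descent (BP-style) direction
    eta = armijo_wolfe_step(w, d)     # adaptive learning rate for this iteration
    w += eta * d
print("final penalized error:", error(w))

The bracketing loop shrinks the step when the sufficient-decrease (Armijo) condition fails and enlarges it when the curvature (Wolfe) condition fails, so the learning rate adapts to the local shape of the penalized error rather than being fixed in advance.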
Keywords/Search Tags: BP Algorithm, Penalty Term, Convergence, Armijo-Wolfe rule, L1/2 Regularization