
Convergence Analysis Of Gradient Algorithm For Training Some High-Order Neural Networks

Posted on: 2013-02-24    Degree: Master    Type: Thesis
Country: China    Candidate: F Deng    Full Text: PDF
GTID: 2248330374997714    Subject: Computer software and theory
Abstract/Summary:
The gradient algorithm is one of the most popular training algorithms for neural networks. The main work of this thesis is to study gradient algorithms for the Pi-Sigma neural network, the recurrent Pi-Sigma neural network, and the ridge polynomial neural network, with particular attention to their convergence and monotonicity. The thesis is organized as follows.

Chapter 2 takes the Pi-Sigma neural network as the model. Building on the original stochastic single-sample online gradient algorithm and combining it with the multiplier method, a new algorithm is proposed. Using techniques from optimization theory, the constrained problem is transformed into an unconstrained one, and a multiplier penalty function is used to avoid the slow convergence caused by poorly chosen initial weights. The linear convergence rate of the algorithm is proved theoretically, and simulation results indicate that the algorithm is efficient.

In Chapter 3, a new gradient training algorithm is presented for recurrent Pi-Sigma neural networks, in which a penalty term is added to the conventional error function. The algorithm not only improves the generalization ability of the network, but also avoids the slow convergence that arises when the initial weights are chosen too small, achieving better convergence than the traditional gradient algorithm without the penalty term. The convergence of the algorithm is also analyzed, and simulation results indicate that the algorithm is efficient.

In Chapter 4, the main work is to study the gradient method for ridge polynomial neural networks, covering both the error function and the weight sequence. A monotonicity theorem and two convergence theorems are proved. A supporting experiment is given to illustrate these theoretical findings.
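The abstract does not reproduce the algorithms themselves. Purely as an illustration of the general idea, the following is a minimal Python sketch of online gradient training for a single-output Pi-Sigma network (a product of linear summing units followed by a sigmoid) with a simple L2 penalty term added to the weight update; the network structure, the penalty form, and all parameter names here are assumptions for the sketch, not details taken from the thesis.

```python
import math
import random

def pisigma_forward(ws, x):
    """Pi-Sigma forward pass: product of the linear summing units, then sigmoid."""
    prod = 1.0
    for w in ws:
        prod *= sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-prod))

def sse(ws, samples):
    """Sum of squared errors over a data set of (input, target) pairs."""
    return sum((pisigma_forward(ws, x) - t) ** 2 for x, t in samples)

def train_pisigma(samples, n_units=2, lr=0.1, lam=1e-4, epochs=3000, seed=0):
    """Online gradient descent with an illustrative L2 penalty (weight decay)."""
    rng = random.Random(seed)
    dim = len(samples[0][0])
    # Random initial weights for each summing unit (bias folded into the input).
    ws = [[rng.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x, t in samples:
            sums = [sum(wi * xi for wi, xi in zip(w, x)) for w in ws]
            prod = 1.0
            for s in sums:
                prod *= s
            y = 1.0 / (1.0 + math.exp(-prod))
            delta = (y - t) * y * (1.0 - y)  # dE/d(prod) for squared error
            for j, w in enumerate(ws):
                # d(prod)/d(sum_j) is the product of the other units' outputs.
                others = 1.0
                for k, s in enumerate(sums):
                    if k != j:
                        others *= s
                for i in range(dim):
                    # Error gradient plus the penalty gradient lam * w_i.
                    w[i] -= lr * (delta * others * x[i] + lam * w[i])
    return ws
```

For example, training on the logical AND function with a constant bias input (`[x1, x2, 1]`) should reduce the sum of squared errors relative to the random initial weights; the penalty term keeps the weights bounded during training.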
Keywords/Search Tags:High-order neural network, Gradient algorithm, Multiplier method, Penalty, Convergence, Monotonicity