
Convergence Of Some Gradient Learning Algorithms For Recurrent Neural Networks

Posted on: 2008-02-28
Degree: Master
Type: Thesis
Country: China
Candidate: X S Ding
Full Text: PDF
GTID: 2178360218455233
Subject: Computational Mathematics
Abstract/Summary:
Artificial neural networks have been widely applied in various areas owing to their virtues of self-organization, self-adaptation, and self-learning. For some practical problems, we expect the model to reflect the dynamical properties of a system. Traditional feedforward neural networks (FNN), however, are static and therefore ill-suited to dynamic systems. Although a delay term can be introduced into the network for this purpose, too many neurons are needed to produce the dynamic response, and the order of the system has to be known beforehand. Recently there has been great progress in the research of recurrent neural networks (RNN). Compared with FNN, an RNN is a dynamical network: it feeds the inner state back into the network to describe the nonlinear dynamical features of the system.

An RNN is a neural network that contains one or more feedback loops. Feedback can be applied to a neural network in different ways, leading to different RNN structures. Among the learning algorithms, the gradient descent method has been widely used. In 1989, Williams and Zipser introduced the real-time recurrent learning (RTRL) algorithm [21]. The literature [22] gives a deterministic convergence result for the gradient descent algorithm for an RNN with one neuron. This thesis continues the theoretical research on gradient learning algorithms for RNN.

The first chapter is a literature review. In Chapter 2, some popular penalty terms are briefly introduced, and the unboundedness of the weights of an RNN trained without a penalty term is discussed. To solve this problem, we introduce a corrected error function with a penalty term. Through this strategy, both the convergence of the gradient training of the RNN with penalty and the boundedness of the weights are guaranteed. The RTRL algorithm introduced in [21] is the generalization of the online gradient method from FNN to RNN, but it still lacks theoretical assurance. The third chapter mainly verifies weak and strong convergence theorems for RTRL.
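To make the penalty strategy concrete, the sketch below shows, in Python, online gradient (RTRL-style) training of a single-neuron recurrent network with an L2 penalty added to the error function, roughly E_lambda(w) = E(w) + (lambda/2)||w||^2. This is only an illustrative sketch under assumed choices (logistic activation, quadratic penalty, learning rate eta, penalty coefficient lam); it is not the exact formulation analyzed in the thesis.

```python
import numpy as np

# Illustrative sketch only: single-neuron recurrent network trained by an
# online gradient (RTRL-style) rule with an L2 weight penalty. The activation,
# penalty form, and hyperparameters are assumptions, not the thesis's setup.

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_single_neuron_rnn(u, d, eta=0.05, lam=1e-3, epochs=50, seed=0):
    """u: input sequence, d: target sequence (1-D arrays of equal length)."""
    rng = np.random.default_rng(seed)
    w_in, w_rec = rng.normal(scale=0.1, size=2)   # input and recurrent weights
    for _ in range(epochs):
        y = 0.0                 # network state / output
        p_in = p_rec = 0.0      # sensitivities dy/dw_in, dy/dw_rec (RTRL recursion)
        for t in range(len(u)):
            s = w_in * u[t] + w_rec * y
            y_new = sigmoid(s)
            fp = y_new * (1.0 - y_new)            # derivative of the sigmoid
            # RTRL sensitivity recursion for the single neuron
            p_in_new = fp * (u[t] + w_rec * p_in)
            p_rec_new = fp * (y + w_rec * p_rec)
            err = y_new - d[t]
            # gradient of the instantaneous error plus the L2 penalty term
            w_in -= eta * (err * p_in_new + lam * w_in)
            w_rec -= eta * (err * p_rec_new + lam * w_rec)
            y, p_in, p_rec = y_new, p_in_new, p_rec_new
    return w_in, w_rec
```

In this sketch the penalty contributes the term lam * w to every update, pulling the weights toward zero at each step; this is the mechanism by which a penalized error function keeps the weight sequence bounded during training.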
Keywords/Search Tags: Recurrent neural networks, Penalty, Gradient descent, Boundedness, Convergence