
Structure Design, Learning Algorithms And Applications Of Recurrent Neural Networks

Posted on: 2006-05-20    Degree: Doctor    Type: Dissertation
Country: China    Candidate: S D Qiao    Full Text: PDF
GTID: 1118360155472175    Subject: Information and Communication Engineering
Abstract/Summary:
Recurrent neural networks are closely tied to human intelligence, signal processing, statistical physics, and nonlinear control, yet their theory has so far received far less attention than their applications. This thesis addresses several theoretical aspects of recurrent neural networks, including structure design, structure selection, and learning algorithms, taking the recurrent multilayer perceptron (RMLP), recurrent TSK networks, and network ensembles as its objects of study. The central contributions of the thesis are as follows:
(1) A concise recurrent TSK network is designed. The new network can be interpreted as a set of relatively simple fuzzy rules and thus retains good comprehensibility, which makes it superior to the networks presented by Chia-Feng Juang and P. A. Mastorocostas.
(2) A second recurrent TSK network, based on second-order operations, is designed; it also retains good comprehensibility. Its identification ability is demonstrated by our simulations, which shows that second-order operations are feasible for structure design.
(3) A new genetic algorithm (GA) is proposed to optimize the structure of TSK networks. The algorithm exploits the structural properties of TSK networks and notably improves evolutionary efficiency; in our simulations it greatly simplified the topology of the networks.
(4) A group of formulas is proposed for computing the dynamic derivatives of RMLP networks. Compared with those presented by G. V. Puskorius, our formulas saved forty to seventy percent of computing time and almost half of the storage space in our simulations.
(5) A method is proposed to reduce the computational complexity of the unscented Kalman filter (UKF) when it is used to train RMLP networks. In our simulations, the method saved almost 80% of computing time for UKF(2Nw+1) and UKF(Nw+2), two faster versions of the UKF, and saved 90% of storage space for UKF(2Nw+1).
(6) A proposition is proved concerning the negative correlation algorithm for network ensembles. The proposition reveals that the algorithm is a form of multi-objective optimization, which simplifies the negative correlation algorithm.
(7) The decoupled extended Kalman filter (DEKF) is used to train the fuzzy multilayer perceptron, meeting the urgent need for an effective learning method for that network.
(8) Some modifications are made to the adaptive state-filtering algorithm presented by A. G. Parlos. The modified filtering system is easier to implement and more robust than the original.
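For readers unfamiliar with TSK inference, which underlies contributions (1) and (2), the following is a minimal sketch of a standard first-order (non-recurrent) TSK system with Gaussian antecedents. The function name and parameter layout are illustrative only and do not come from the thesis; the recurrent variants designed here add feedback terms on top of this basic rule structure.

```python
import numpy as np

def tsk_infer(x, centers, widths, coeffs, biases):
    """First-order TSK fuzzy inference for a scalar input x.

    Each rule r has a Gaussian membership function mu_r(x) (its firing
    strength) and a linear consequent y_r = coeffs[r] * x + biases[r].
    The crisp output is the firing-strength-weighted average of the
    rule consequents, which is what makes each rule individually
    readable as "IF x is near centers[r] THEN y = coeffs[r]*x + biases[r]".
    """
    mu = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))  # rule firing strengths
    y = coeffs * x + biases                                   # linear rule consequents
    return float(np.sum(mu * y) / np.sum(mu))                 # weighted-average defuzzification
```

With a single rule the weights cancel and the output is just that rule's consequent, e.g. `tsk_infer(3.0, np.array([0.0]), np.array([1.0]), np.array([2.0]), np.array([1.0]))` returns 2*3 + 1 = 7.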
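The UKF variants named in contribution (5) are built on the unscented transform, which propagates a mean and covariance through 2N+1 deterministically chosen sigma points (hence the "2Nw+1" in the variant's name, with Nw the number of weights). A minimal sketch of the standard sigma-point construction follows; the scaling parameter `kappa` and the function name are conventional, not taken from the thesis, and the thesis's complexity reductions are not reproduced here.

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Generate the 2N+1 sigma points and weights of the unscented transform.

    The points are the mean plus/minus the columns of a matrix square
    root of (N + kappa) * cov, together with the mean itself. Their
    weighted sample mean and covariance reproduce `mean` and `cov`
    exactly, which is the property the UKF exploits.
    """
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)  # one valid matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))  # symmetric point weights
    w[0] = kappa / (n + kappa)                          # centre-point weight
    return np.array(pts), w
```

The cost of the transform is dominated by the Cholesky factorization and the 2N+1 function evaluations per step, which is why reducing the effective point count, as the faster UKF(Nw+2) variant does, matters when N is the number of network weights.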
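The multi-objective reading of negative correlation learning in contribution (6) can be made concrete with the standard per-member loss from the negative correlation literature. The sketch below is a generic formulation, not the thesis's proposition itself; the function name and the diversity weight `lam` are illustrative.

```python
import numpy as np

def ncl_losses(preds, target, lam):
    """Per-member negative correlation losses for one training sample.

    preds: array of ensemble-member outputs f_i; target: scalar y.
    Each member's error in negative correlation learning is
        E_i = (f_i - y)^2 + lam * (f_i - fbar) * sum_{j != i} (f_j - fbar),
    where fbar is the ensemble mean. Because sum_j (f_j - fbar) = 0,
    the penalty simplifies to -lam * (f_i - fbar)^2: an accuracy
    objective minus a diversity objective, i.e. a two-objective
    trade-off controlled by lam.
    """
    fbar = preds.mean()
    return (preds - target) ** 2 - lam * (preds - fbar) ** 2
```

For example, two members predicting 1.0 and 3.0 against a target of 2.0 with `lam = 0.5` each incur a squared error of 1.0 reduced by a diversity credit of 0.5, for a loss of 0.5 apiece.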
Keywords/Search Tags:Recurrent TSK, Recurrent MLP, Fuzzy MLP, Network Ensemble, Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF), Dynamic Derivatives, Adaptive State Filtering, Negative Correlation Training Method