
The Improvement On Growing And Pruning RBF (GAP-RBF) Sequential Learning Algorithm

Posted on: 2007-02-01
Degree: Master
Type: Thesis
Country: China
Candidate: X Deng
Full Text: PDF
GTID: 2178360182477834
Subject: Computer software and theory
Abstract/Summary:
This thesis presents a systematic analysis of sequential learning methods for the Radial Basis Function Neural Network (RBFNN). Building on the basic theory of RBFNN and the characteristics of sequential learning algorithms, it introduces the Decoupled Extended Kalman Filter (DEKF) and online learning into RBFNN training, substantially improving performance.

The key factor influencing RBFNN performance is the choice of hidden-layer centers. First, several well-known sequential learning methods for determining the number of hidden-layer nodes are analyzed, and their respective advantages and disadvantages are identified. Second, building on the GAP-RBF (Growing and Pruning RBF) algorithm, which offers superior online learning performance, a modified sequential algorithm for training the Direct Link RBF (DRBF) network is proposed by replacing the EKF in GAP-RBF with the DEKF. Like GAP-RBF, the new algorithm does not need to retrain on past learning samples; moreover, because the computational complexity of the DEKF is lower than that of the EKF, the new algorithm learns faster than GAP-RBF. In addition, DRBF networks trained by the new algorithm are better at approximating functions that contain both linear and nonlinear terms.

Finally, benchmark function-approximation experiments show that the modified GAP-RBF achieves the same accuracy and network size as GAP-RBF while learning faster. When the target function contains both linear and nonlinear terms, the DRBF trained by the new algorithm outperforms an RBF network trained with the GAP-RBF algorithm.
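To make the DRBF architecture concrete, the following is a minimal sketch of its forward pass: the output combines Gaussian hidden-layer activations with a direct linear link from the inputs, which is what lets the network capture a linear component without extra hidden units. All function names, dimensions, and parameter values below are hypothetical illustrations, not the thesis's actual implementation.

```python
import numpy as np

def gaussian_rbf(x, centers, widths):
    # Gaussian hidden-layer activations: phi_k = exp(-||x - c_k||^2 / sigma_k^2)
    sq_dist = np.sum((x - centers) ** 2, axis=1)
    return np.exp(-sq_dist / widths ** 2)

def drbf_output(x, centers, widths, rbf_weights, linear_weights, bias):
    # DRBF output = weighted RBF activations + direct linear term + bias.
    # The direct link (x @ linear_weights) models the linear part of the
    # target function; the RBF sum models the nonlinear part.
    phi = gaussian_rbf(x, centers, widths)
    return phi @ rbf_weights + x @ linear_weights + bias

# Toy example: 2 inputs, 3 hidden centers (all values hypothetical)
x = np.array([0.5, -0.2])
centers = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 0.5]])
widths = np.array([1.0, 0.8, 1.2])
rbf_w = np.array([0.3, -0.1, 0.2])
lin_w = np.array([0.5, 0.4])
y = drbf_output(x, centers, widths, rbf_w, lin_w, bias=0.1)
```

In a GAP-RBF-style sequential setting, the centers, widths, and weights above would be grown, pruned, and updated online as each sample arrives, with the Kalman filter (EKF in the original, DEKF in the modified algorithm) adjusting the parameters of the nearest hidden unit.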
Keywords/Search Tags:Growing and Pruning RBF(GAP-RBF), DRBF, Function simulation