
Research On Learning Algorithms Of Radial Basis Function Neural Networks

Posted on: 2006-01-18
Degree: Master
Type: Thesis
Country: China
Candidate: B Li
Full Text: PDF
GTID: 2168360155965993
Subject: Operational Research and Cybernetics

Abstract/Summary:
Because of their global generalization capability and simple structure, RBF neural networks have attracted considerable research interest and found successful applications in many areas, such as function approximation and nonlinear time series prediction. This dissertation studies learning algorithms for radial basis function (RBF) neural networks. After surveying the strengths and shortcomings of previous work, it presents two improved learning algorithms that are more efficient and produce more compact networks than existing algorithms.

Existing learning algorithms fall into two classes: off-line and on-line (sequential). Because sequential algorithms better track the time-variant features of the presented samples, they are more suitable for real-time systems built on RBF neural networks. In a sequential learning algorithm, training on one sample must be completed before the next sample enters the network, so the computational time of each iteration must be less than the sampling interval of the system. Reducing the algorithm's complexity is therefore one important concern. Another is the generalization performance of the resulting network: in general, an RBF network with fewer hidden neurons generalizes better when the approximation accuracy on the training samples is the same.

Chapter 3 gives a brief introduction to off-line training and details the orthogonal least squares algorithm. Chapter 4 describes several sequential learning algorithms for RBF neural networks, including RAN [1], RANEKF [5], MRAN [22], and GGAP-RBF (GAP-RBF) [23][24], and summarizes their advantages and disadvantages.

Based on the MRAN algorithm, Chapter 5 presents an improved learning algorithm referred to as IRAN. Instead of the EKF used in MRAN, IRAN adjusts the output-layer weights with a recursive least-squares algorithm based on Givens QR decomposition, which yields faster learning and lower computational complexity. The algorithm also uses a new pruning strategy to remove redundant neurons, leading to more compact networks.

Chapter 6 builds on the GGAP-RBF algorithm of G.-B. Huang and presents another improved learning algorithm that performs much better than GGAP-RBF and the other algorithms. The improvements are twofold: (1) the dynamic regulation of the overlap threshold [31] is introduced into GGAP-RBF, so that the distance threshold used in the novelty criterion is obtained automatically; (2) the response widths of the hidden neurons are updated by a self-adjustment algorithm. Together these changes greatly reduce the number of adjustable parameters. The improved algorithm is referred to as generalized IRAN (GIRAN).

In Chapter 7, the IRAN and GIRAN algorithms are compared with RAN, RANEKF, MRAN, and GGAP-RBF (GAP-RBF) in simulations on four benchmark problems in function approximation. The results indicate that IRAN and GIRAN provide comparable generalization performance with considerably smaller network size and shorter training time.
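To make the network structure and the growth test of the RAN family concrete, the following is a minimal sketch assuming Gaussian hidden units. The function names and the thresholds eps_dist and eps_err are illustrative placeholders, not the dissertation's notation; the width self-adjustment, Givens-QR weight update, and pruning of IRAN/GIRAN are omitted.

```python
import numpy as np

def rbf_output(x, centers, widths, weights, bias=0.0):
    """Forward pass of a Gaussian RBF network:
    f(x) = bias + sum_k w_k * exp(-||x - c_k||^2 / sigma_k^2)."""
    d = np.linalg.norm(centers - x, axis=1)       # distance to each center
    activations = np.exp(-(d / widths) ** 2)      # Gaussian hidden responses
    return bias + activations @ weights

def is_novel(x, y, centers, widths, weights, eps_dist=0.5, eps_err=0.1):
    """RAN-style novelty criterion (a sketch): allocate a new hidden neuron
    only when the sample is far from every existing center AND the current
    prediction error is large; otherwise the existing parameters are updated
    (by EKF in RANEKF/MRAN, or by Givens-QR RLS in IRAN)."""
    if centers.shape[0] == 0:
        return True
    err = abs(y - rbf_output(x, centers, widths, weights))
    nearest = np.min(np.linalg.norm(centers - x, axis=1))
    return nearest > eps_dist and err > eps_err

# Example: a 3-neuron network on a 1-D input
centers = np.array([[0.0], [0.5], [1.0]])
widths = np.array([0.3, 0.3, 0.3])
weights = np.array([1.0, -0.5, 2.0])
print(rbf_output(np.array([0.4]), centers, widths, weights))
print(is_novel(np.array([2.0]), 1.0, centers, widths, weights))
```

In GIRAN, the distance threshold corresponding to eps_dist above is no longer a fixed user-supplied parameter but is derived automatically from the dynamically regulated overlap threshold.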
Keywords/Search Tags: Sequential learning algorithm, RBF neural networks, Hidden neurons, Benchmark problems, Pruning strategy