
Second-order training algorithms for radial basis function neural networks

Posted on: 2012-09-16
Degree: M.S.
Type: Thesis
University: The University of Texas at Arlington
Candidate: Tyagi, Kanishka
Full Text: PDF
GTID: 2468390011963746
Subject: Engineering
Abstract/Summary:
A systematic two-step batch approach for constructing and training radial basis function (RBF) neural networks is presented. Unlike other RBF learning algorithms, the proposed paradigm uses optimal learning factors (OLFs) to train the network parameters, i.e., the spread parameters, the mean vector parameters, and the weighted distance measure (DM) coefficients. Newton's method is used to obtain multiple optimal learning factors (MOLF) for the network parameters. The weights connected to the output layer are trained by a supervised learning algorithm based on orthogonal least squares (OLS). The resulting error is then back-propagated to tune the RBF parameters. The proposed hybrid training algorithm has been compared with the Levenberg-Marquardt and recursive least squares based RLS-RBF training algorithms. Simulation results show that, regardless of the input data dimension, the proposed algorithm significantly improves convergence speed, network size, and generalization over conventional RBF training algorithms that use a single optimal learning factor (SOLF). Analyses of the proposed training algorithms on noisy input data have also been carried out, and the robustness of the proposed algorithm is further substantiated by k-fold cross-validation. Initialization of network parameters using a Self-Organizing Map (SOM), efficient calculation of the Hessian matrix for the network parameters, Newton's method for optimization, optimal learning factors, and orthogonal least squares constitute the subject matter of the present work.
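The two-step structure described in the abstract, in which the hidden-layer RBF parameters are held fixed while the linear output weights are solved exactly, can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's implementation: the function names are invented, an unweighted Euclidean distance replaces the learned weighted distance measure, `np.linalg.lstsq` stands in for the orthogonal least squares solver, and the Newton/MOLF updates of the spreads and mean vectors are omitted.

```python
import numpy as np

def rbf_activations(X, centers, spreads):
    """Gaussian hidden-layer outputs. Plain Euclidean distance is used
    here; the thesis instead learns weighted distance measure (DM)
    coefficients for each input dimension."""
    # squared distances between every input row and every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spreads ** 2))

def fit_output_weights(X, T, centers, spreads):
    """Second step of the two-step batch approach: with centers and
    spreads fixed, the output layer is linear in its weights, so it
    can be solved in closed form. lstsq stands in for the OLS solver
    described in the abstract."""
    Phi = rbf_activations(X, centers, spreads)
    Phi = np.hstack([Phi, np.ones((Phi.shape[0], 1))])  # bias column
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return W

# Toy usage: fit y = sin(x) with 5 hand-placed centers (the thesis
# would initialize centers with a Self-Organizing Map instead).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
T = np.sin(X)
centers = np.linspace(-3.0, 3.0, 5).reshape(-1, 1)
spreads = np.full(5, 1.0)
W = fit_output_weights(X, T, centers, spreads)
```

In the full algorithm this linear solve alternates with Newton-based updates of the hidden-layer parameters, where the multiple optimal learning factors scale each parameter group's step size.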
Keywords/Search Tags:Network, Training, Optimal learning factors, RBF