
Optimal Design And Applications Of RBF Neural Networks

Posted on: 2018-12-20
Degree: Master
Type: Thesis
Country: China
Candidate: X S Qian
Full Text: PDF
GTID: 2348330542967132
Subject: Information and Communication Engineering
Abstract/Summary:
Radial basis function (RBF) neural networks have been widely used in many fields because of their fast learning process and universal approximation capability. Learning an RBF neural network involves two tasks: determining the network structure and optimizing the network parameters. Usually, the network structure is fixed in advance and the network parameters are then optimized. This strategy yields a fast learning procedure, but it often leads to poor generalization in practical applications because an appropriate network structure is very difficult to determine in advance. Therefore, a good RBF neural network learning algorithm should optimize the network structure and the adjustable parameters simultaneously.

Based on a study of the maximum spread (MS) and maximum data coverage (MDC) algorithms, a modified MDC (MMDC) algorithm is proposed in this thesis. Like the MS and MDC algorithms, the MMDC algorithm automatically determines the number of hidden neurons and the RBF centers and widths; in addition, it further reduces the network size. Moreover, an effective implementation scheme is provided for all three algorithms that reduces memory requirements when dealing with large-scale problems.

To produce a compact network with good generalization capability, an efficient generalized hybrid constructive (GHC) learning algorithm is proposed for multi-output RBF neural networks, which optimizes the network structure and the adjustable parameters simultaneously. The GHC learning algorithm first employs an initialization method based on the MS algorithm to select the important initial hidden neurons and the candidate ones. Then, a structured parameter optimization (SPO) algorithm is presented that optimizes the RBF centers, widths, and output weights simultaneously; this leads to a significant reduction in problem dimension and therefore greatly reduces the computational complexity of training a fixed-size network. By incorporating an improved incremental constructive (IIC) scheme, training after adding a candidate neuron builds on the previously optimized results, so the network size can be adjusted without extra computational burden. Finally, a method based on Akaike's information criterion (AIC) is used to determine the optimal network structure, avoiding a time-consuming trial-and-error procedure.
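As a rough illustration of the constructive idea described above, the sketch below (hypothetical code, not from the thesis) grows a Gaussian-RBF network one candidate neuron at a time and keeps the size with the lowest AIC score. All names are invented; for simplicity the output weights are re-fitted from scratch by least squares at each size (the IIC scheme instead reuses the previous optimum), and only the output weights are counted as parameters in the AIC term.

    import numpy as np

    def rbf_design_matrix(X, centers, widths):
        # Gaussian basis: phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2))
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

    def aic_score(y, y_hat, n_params):
        # AIC under a Gaussian error model: n * log(RSS / n) + 2k
        n = len(y)
        rss = np.sum((y - y_hat) ** 2)
        return n * np.log(rss / n + 1e-12) + 2 * n_params

    def constructive_rbf(X, y, cand_centers, cand_widths):
        # Try networks of size 1..M and keep the one with the lowest AIC.
        best_score, best_model = np.inf, None
        for m in range(1, len(cand_centers) + 1):
            Phi = rbf_design_matrix(X, cand_centers[:m], cand_widths[:m])
            w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights
            score = aic_score(y, Phi @ w, n_params=m)    # counts output weights only
            if score < best_score:
                best_score = score
                best_model = (cand_centers[:m], cand_widths[:m], w)
        return best_model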
In addition, L1 regularization is introduced into RBF neural networks to prevent overfitting, and a sparse constructive (SC) learning algorithm is proposed to efficiently train the L1-regularized RBF neural network. The SC learning algorithm first employs the MMDC algorithm to determine an appropriate initial network structure. Then, a specialized orthant-wise limited-memory quasi-Newton algorithm is presented to optimize the adjustable parameters of the L1-regularized network. During the optimization procedure, the SC learning algorithm produces a sparse RBF neural network with good generalization capability by automatically pruning redundant hidden neurons and output weights.

Finally, the three proposed algorithms are applied to pattern classification, and their performance is compared with that of several existing algorithms on a large number of data sets. Experimental results demonstrate that the MMDC algorithm effectively reduces the network size, and that both the GHC and SC learning algorithms produce compact networks with good generalization capability. In addition, all three proposed algorithms can handle large-scale data sets with low memory requirements.
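For intuition about how the L1 penalty in the SC learning algorithm prunes the network, the minimal sketch below fits L1-regularized output weights by proximal gradient descent (ISTA); the soft-threshold step drives redundant weights exactly to zero. This is an illustrative stand-in with invented names, not the thesis's method: the thesis optimizes all adjustable parameters with a specialized orthant-wise limited-memory quasi-Newton algorithm.

    import numpy as np

    def ista_l1_weights(Phi, y, lam, n_iter=500):
        # Minimize 0.5 * ||Phi @ w - y||^2 + lam * ||w||_1 over w.
        lr = 1.0 / (np.linalg.norm(Phi, 2) ** 2)  # step size <= 1 / Lipschitz constant
        w = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            z = w - lr * (Phi.T @ (Phi @ w - y))                    # gradient step
            w = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # soft-threshold
        return w

    # Hidden neurons whose output weights end up exactly zero can be removed:
    # keep = np.nonzero(w)[0]; centers, widths = centers[keep], widths[keep]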
Keywords/Search Tags:RBF neural networks, Modified maximum data coverage algorithm, Generalized hybrid constructive learning algorithm, Sparse constructive learning algorithm, Pattern classification