
Several Results Of Approximation Capability Of Radial Basis Function And Multilayer Perceptron Neural Networks

Posted on: 2008-05-11
Degree: Doctor
Type: Dissertation
Country: China
Candidate: D Nan
Full Text: PDF
GTID: 1100360218453557
Subject: Computational Mathematics
Abstract/Summary:
Neural network theory and methods have developed rapidly over the past two decades and have been applied in diverse areas such as engineering, computer science, physics, biology, economics and management. Many problems in this field can be converted into problems of approximating multivariate functions by superpositions of the neuron activation function of the network. In mathematical terms, the question is under what conditions multivariate functions can be represented by superpositions of univariate functions, which is closely related to Hilbert's thirteenth problem.

In this thesis, the nonlinear approximation property of neural networks with one hidden layer is investigated, and the approximation capability of radial basis function (RBF) neural networks is analyzed theoretically, including the approximation of any given function, the approximation of a compact set of functions, and the system identification capability of RBF neural networks. In other words, we ask under what conditions the family of networks F_N(x) = ∑_{i=1}^{N} c_i g(λ_i(x − y_i)) can approximate any given function in L^p(K), a compact set of functions, and any given operator T : L^{p1}(K1) → L^{p2}(K2), where c_i, λ_i ∈ R, x, y_i ∈ R^n, i = 1, 2, ..., N, K, K1, K2 ⊂ R^n are compact sets, 1 ≤ p, p1, p2 < ∞, the activation function g is typically the Gaussian function, and such a network is called a radial basis function neural network.

Moreover, the approximation capability of feedforward neural networks to a compact set of functions is also considered in this thesis. We use F_N(x) = ∑_{j=1}^{N} λ_j g(τ_j(x)) to denote a family of neural networks, where F_N(x) is the output of the network for the input x, λ_j is the weight between the output neuron and the j-th hidden neuron, and g is the activation function. τ_j(x) is the input value to the j-th hidden neuron, which is determined by the weights between the j-th hidden neuron and the input neurons. To elaborate, we shall prove the following: if a family of feedforward neural networks with one hidden layer is dense in H, a metric linear space of functions, then given a compact set V ⊂ H and an error bound ε, one can choose and fix the number of hidden neurons and the weights between the input and hidden layers, such that in order to approximate any function f ∈ V with accuracy ε, one only has to further choose suitable weights between the hidden and output layers.

This thesis is organized as follows. Chapter 1 reviews some background on feedforward neural networks (FNNs) and introduces some well-known results. Chapter 2 introduces some elementary notions and fundamental properties of distributions, including the relationship between fundamental spaces and distributions, supports of distributions, distributions as derivatives, convolutions, and so on. Chapter 3 mainly deals with the approximation capability of RBF neural networks, including the approximation of any given function, of a compact set of functions, and of any given operator; these results improve some recent results such as [1-5]. Chapter 4 investigates the approximation capability of feedforward neural networks to a compact set of functions. We follow a general approach that covers the existing results and yields some new results in this respect. A few examples of straightforward applications of this result to RBF, MLP and other neural networks in metric linear spaces such as L^p(K) and C(K) are then presented. Some of these results have been proved (cf. [1, 2, 6, 7]) in the particular settings of those problems, while the others are, to our knowledge, new.
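The result of Chapter 4 has a concrete reading: once a hidden layer of a network family that is dense in H has been fixed (in the RBF case, the number of neurons, the centers y_i and the widths λ_i), approximating any function from the compact set V only requires choosing the output weights. The sketch below is not part of the thesis; it is a minimal numerical illustration under assumed choices (Gaussian activation, uniformly spaced centers on K = [0, 1], a single assumed width parameter, and least-squares fitting of the output weights), showing one fixed hidden layer reused for several target functions.

```python
import numpy as np

# Fixed hidden layer: N Gaussian units g(lam*(x - y_i)) with assumed
# uniformly spaced centers y_i on the compact set K = [0, 1] and one
# assumed width parameter lam. Only the output weights c_i are refit per target.
N = 25
centers = np.linspace(0.0, 1.0, N)   # y_i, chosen once and then fixed
lam = 12.0                           # width parameter, chosen once (assumption)

def hidden(x):
    """Hidden-layer outputs g(lam*(x - y_i)) with Gaussian g(t) = exp(-t^2)."""
    x = np.atleast_1d(x)
    return np.exp(-(lam * (x[:, None] - centers[None, :])) ** 2)

def fit_output_weights(f, x_train):
    """Least-squares choice of the output weights c_i for a given target f."""
    G = hidden(x_train)              # design matrix, shape (m, N)
    c, *_ = np.linalg.lstsq(G, f(x_train), rcond=None)
    return c

def rbf_net(x, c):
    """Network output F_N(x) = sum_i c_i * g(lam*(x - y_i))."""
    return hidden(x) @ c

# A few target functions standing in for elements of a compact set V.
targets = [np.sin, lambda x: np.exp(-x) * np.cos(6 * x), lambda x: np.abs(x - 0.5)]

x_train = np.linspace(0.0, 1.0, 400)
x_test = np.linspace(0.0, 1.0, 1000)
for f in targets:
    c = fit_output_weights(f, x_train)   # only hidden-to-output weights change
    err = np.max(np.abs(rbf_net(x_test, c) - f(x_test)))
    print(f"sup-norm error on [0, 1]: {err:.2e}")
```

The same design matrix (the fixed hidden layer) is reused for every target; only the linear system for the coefficients c changes, mirroring the statement that the weights between the hidden and output layers are the only quantities that need to be adapted to each f ∈ V.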
Keywords/Search Tags: RBF neural networks, Multilayer perceptron, L^p approximation, Continuous functionals and operators, System identification