
Convergence Of Batch Gradient Descent Algorithm For High Order Double Parallel Neural Networks

Posted on: 2010-02-25
Degree: Master
Type: Thesis
Country: China
Candidate: J P Guo
Full Text: PDF
GTID: 2178360275457977
Subject: Computational Mathematics

Abstract/Summary:
The BP neural network is a kind of feedforward neural network that is widely used in many applications. Although the BP network has many advantages, it uses only additive neurons and therefore has limited capability for solving complex nonlinear problems. To address this limitation, product neurons are introduced into feedforward neural networks to enhance their nonlinear mapping capability; feedforward networks built on such high order neurons are called high order feedforward neural networks. On the other hand, many engineering problems are nonlinear but contain a linear part, and such problems cannot be solved well by a simple perceptron or by an ordinary feedforward neural network. Double parallel feedforward neural networks (DPFNN) were proposed for this case. In this thesis, the high order double parallel neural network (HODPNN) is proposed, combining the structures and features of the two networks above, to handle complex nonlinear problems with a linear part. Numerical experiments indicate that the proposed network achieves better learning performance on this kind of problem than the Pi-Sigma neural network and the DPFNN. The gradient algorithm is a simple optimization method that is often used for neural network training. There are two ways to present the samples during training: the online version and the batch version. The main work of this thesis is to study the convergence of the batch gradient descent algorithm for the HODPNN. The thesis is organized as follows: Chapter 1 gives a brief introduction to artificial neural networks; Chapter 2 proves the weak convergence and strong convergence of the batch gradient descent algorithm for the HODPNN; finally, two numerical experiments on function approximation are carried out in Chapter 3, where the learning performance of the network is also discussed.
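To make the idea concrete, the following is a minimal sketch, not the thesis' exact formulation, of a network with a high order (product) hidden path running in parallel with a direct linear path from input to output, trained by batch gradient descent, where the gradient is accumulated over the whole sample set before each weight update. The Pi-Sigma-style product of summing units, the squared-error loss, and all names such as K, lr, and the toy target function are assumptions made only for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W, b, a):
    """Nonlinear path: product of K summing units passed through a sigmoid;
    linear path: direct connection from the input to the output."""
    H = X @ W.T + b            # (N, K) summing-unit outputs
    P = np.prod(H, axis=1)     # (N,)   product (high order) term
    S = sigmoid(P)             # nonlinear path output
    return S + X @ a, H, P, S  # double parallel output = nonlinear + linear

def batch_gradient_step(X, Y, W, b, a, lr=0.05):
    """One batch update: gradients are summed over ALL samples before the
    weights change (batch version, as opposed to the online version)."""
    N, K = X.shape[0], W.shape[0]
    y, H, P, S = forward(X, W, b, a)
    d_y = (y - Y) / N                      # derivative of (1/2N)*sum (y-Y)^2
    d_P = d_y * S * (1.0 - S)              # back through the sigmoid
    gW = np.zeros_like(W)
    gb = np.zeros_like(b)
    for k in range(K):                     # back through the product term
        prod_except_k = np.prod(np.delete(H, k, axis=1), axis=1)
        d_Hk = d_P * prod_except_k
        gW[k] = d_Hk @ X
        gb[k] = d_Hk.sum()
    ga = X.T @ d_y                         # gradient of the linear path
    return W - lr * gW, b - lr * gb, a - lr * ga

# Toy usage: a target with both a product term and a linear part.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
Y = X[:, 0] * X[:, 1] + 0.5 * X[:, 0]
W = rng.normal(scale=0.5, size=(2, 2)); b = np.zeros(2); a = np.zeros(2)
for epoch in range(2000):
    W, b, a = batch_gradient_step(X, Y, W, b, a)
train_error = np.mean((forward(X, W, b, a)[0] - Y) ** 2)
```

The product term supplies the nonlinear mapping capability that additive neurons lack, while the parallel linear connection handles the linear part of the target directly; the batch update is the setting whose weak and strong convergence the thesis analyzes.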
Keywords/Search Tags: High order neural network, Double parallel feedforward neural network, High order double parallel neural network, Batch gradient algorithm