
Multi-layer Feedforward Neural Networks With Multioutput Neuron Model And Its Application

Posted on: 2005-07-13
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y J Shen
Full Text: PDF
GTID: 1118360152969055
Subject: Control theory and control engineering

Abstract/Summary:
Neural networks are highly nonlinear, dynamical, adaptive, and self-organizing systems. They can describe intelligent activities such as cognition, decision making, and control, which makes the modeling and simulation of intelligence a major facet of neural network research. At the same time, neural networks form the foundation of parallel distributed processing (PDP) of information, which became a new research hotspot in the mid-to-late 1980s. PDP further extends the notion of computation, establishing neural computation and evolutionary computation as new research fields, and it has aroused enormous enthusiasm and broad interest among scientists in computer science, artificial intelligence, cognitive science, information science, micro-electronics, automatic control, robotics, and brain neurology.

However, the traditional M-P neuron model uses connection weights and a nonlinear activation function to simulate the operation of a neuron's synapses and soma, respectively. During training the weights are tunable while the activation function is fixed beforehand. This model is clearly a simplification of a biological neuron, so its capability is limited. To address this, a tunable-activation-function neuron model (TAF) and a multilayer feedforward neural network built from it (TAF-MFNN) are presented. Compared with the traditional multilayer feedforward neural network (MFNN), the TAF-MFNN handles more difficult problems easily; it can also simplify the network architecture while achieving excellent performance and generalization capability. However, the BP algorithm used to train the network converges slowly, and it is also prone to getting stuck in a local optimum.
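To make the contrast with the fixed-activation M-P neuron concrete, the following is a minimal sketch of a TAF-style neuron in which the activation coefficients are trained alongside the synaptic weights. The specific basis (powers of tanh) and all variable names here are illustrative assumptions, not necessarily the form used in the dissertation.

```python
import numpy as np

def taf_neuron(x, w, b, c):
    """Tunable-activation-function neuron (illustrative sketch).

    x: input vector; w: synaptic weights; b: bias;
    c: tunable activation coefficients (trained jointly with w).
    The activation is assumed to be a weighted sum of basis
    functions tanh(s)^(k+1); the real TAF basis may differ.
    """
    s = np.dot(w, x) + b                                   # synapse: weighted summation
    basis = np.array([np.tanh(s) ** (k + 1) for k in range(len(c))])
    return np.dot(c, basis)                                # soma: tunable activation

# A fixed-activation M-P neuron corresponds to freezing c = [1, 0, 0, ...].
y = taf_neuron(np.array([0.5, -1.0]), np.array([1.0, 0.3]), 0.1,
               np.array([1.0, 0.5, 0.2]))
```

Because `c` adds only a handful of trainable parameters per neuron, a TAF network can shape its nonlinearity per unit instead of adding extra hidden neurons, which is consistent with the architecture-simplification claim above.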
As is well known, the RLS algorithm uses a modified BP scheme to minimize the mean squared error between the desired and actual outputs with respect to the summation. Therefore, in our research we transform the architecture of the TAF-MFNN and equip it with a faster learning algorithm; the modified network is equivalent to the original one. Simulations show that the modified algorithm improves both convergence speed and accuracy. Building on this, we refine the TAF model and present a new neural network with a multi-output neuron model (MO-MFNN). The RLS, LM, LMAM, and OLMAM algorithms are used to train the MO-MFNN, leading to the following conclusion: when the training set is small, the LM, LMAM, or OLMAM algorithm should be selected to train the network; when the training set is very large, the RLS algorithm should be used.

The mean squared (MS) error function is used extensively in training backpropagation neural networks, and until now most fast learning algorithms have been derived from it. Despite its popularity, MS-error-based algorithms have two main shortcomings in general applications. On the one hand, the MS error surface contains many sub-optimal solutions, and training may easily stall by getting stuck in one of them. On the other hand, the MS error function is a universal objective intended to cater to the differing criteria of many applications, whereas there is a common view that different applications emphasize different aspects. To obtain optimal performance, such as a low training error and high generalization capability, additional assumptions and heuristic information about the particular application have to be included. One technique for absorbing such a priori knowledge is regularization.
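The LM family of algorithms mentioned above shares one core step: a Gauss-Newton update damped by a factor mu. A minimal sketch of that step, assuming a residual vector e and its Jacobian J with respect to the weights (LMAM/OLMAM add a momentum term on top of this, which is omitted here):

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt weight update.

    Solves (J^T J + mu I) dw = -J^T e for the weight increment dw.
    J: Jacobian of residuals w.r.t. weights (n_samples x n_weights);
    e: residual vector (output minus target); mu: damping factor.
    Large mu behaves like small-step gradient descent; small mu
    approaches the Gauss-Newton step.
    """
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)

# Toy linear model y = w*x on data x = [1, 2], targets t = [2, 4], start w = 0:
w = 0.0
x = np.array([1.0, 2.0])
t = np.array([2.0, 4.0])
e = w * x - t            # residuals; for a linear model J = x reshaped
J = x.reshape(-1, 1)
w = w + lm_step(J, e, mu=1e-8)[0]   # one near-Gauss-Newton step lands at w = 2
```

The cost of each step is dominated by solving the n-by-n normal equations, which is why the abstract's conclusion (LM-type methods for small training sets, RLS for very large ones) matches the usual practice.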
Namely, a regularized index function is constructed. This paper studies learning algorithms for the MO-MFNN with regularization. Simulations show that using the MO-MFNN reduces computational complexity and storage. Neural networ...
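A common form of such a regularized index function, given here only as an illustrative sketch (the abstract does not specify which regularizer the dissertation uses), adds a weight-decay penalty to the MS error:

```python
import numpy as np

def regularized_index(y, t, w, lam):
    """Illustrative regularized training index: E = MSE + lam * ||w||^2.

    y: network outputs; t: targets; w: flattened weight vector;
    lam: regularization coefficient trading training error against
    weight magnitude (and hence generalization).
    """
    mse = np.mean((y - t) ** 2)
    return mse + lam * np.sum(w ** 2)

E = regularized_index(np.array([1.0, 2.0]), np.array([1.0, 1.0]),
                      np.array([1.0, -1.0]), lam=0.1)
```

With lam = 0 this reduces to the plain MS error criticized above; a positive lam encodes the a priori preference for small weights, which is one way the "harsh criteria of different applications" can be absorbed into training.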
Keywords/Search Tags:multi-output model, RLS algorithm, LM algorithm, LMAM algorithm, OLMAM algorithm, nonlinear system, adaptive control